| paper_name | text | summary | paper_id |
|---|---|---|---|
Gradient Broadcast Adaptation: Defending against the backdoor attack in pre-trained models | 1 INTRODUCTION. Pre-train-then-fine-tune has been developed as the general paradigm for building models for various downstream tasks. The major advantage is that a model pre-trained on expansive datasets can be easily adapted to a specific domain and further tuned under continual learning. For example, Devlin et al. (2019) and Brown et al. (2020) proposed the standard pipeline with large-scale concrete models, and their variants have contributed widely to the NLP field. There are even modern platforms where individual researchers and companies upload their licensed/unlicensed pre-trained models, such as TensorFlow Hub, PyTorch Hub, etc. (Wolf et al. (2020)). The wide impact of pre-trained models poses a key challenge to downstream learners: shall we trust these public pre-trained models? Recent studies by Gu et al. (2017); Kurita et al. (2020); Zhang et al. (2021); Schuster et al. (2021); Bagdasaryan & Shmatikov (2020) have revealed partial facts of this problem, i.e., the over-parameterized weights of pre-trained models can be manipulated, which creates an underlying threat of embedded malicious triggers. A concrete example of a trigger is a patch of pixels in an image or a specific token or phrase in text, which can easily be mixed into a one-time pre-training or fine-tuning procedure. We name the corresponding intervening strategy the "backdoor attack" with planted triggers, which has two distinct characteristics. 1) Concealment: a conceptual difference that may have prevented earlier investigation of this attack is that the victim model is spoofed in a trigger-lock manner, which makes the model fail on the trigger-targeted class but behave normally on others. Unlike the adversarial attack (Ribeiro et al. (2018); Iyyer et al. (2018); Zhao et al.
(2017); Jin et al. (2020); Ren et al. (2019); Alzantot et al. (2018); Zang et al. (2019); Li et al. (2020); Garg & Ramakrishnan (2020); Papernot et al. (2016)), the backdoor attack does not seek a general attack method with impact minimization; the anonymity of the trigger and its objective are the priority. 2) Inheritance: coupled with the fine-tuning pipeline, the backdoor attack can achieve virus-like behavior. Zhang et al. (2021) find that such a backdoor still exists after the adaptation stage, threatening various downstream tasks built on pre-trained models. To some degree, the infection of a trigger can be reduced to the anonymity property, which is permeable in data-independent downstream tasks. However, few works have focused on defending against backdoor attacks in pre-trained models. Several defense papers, such as Azizi et al. (2021); Chen et al. (2018; 2019); Gao et al. (2019); Tran et al. (2018); Wang et al. (2019), focus on defense for end-to-end models, which is unsuitable for fine-tune adaptation of pre-trained models in open-domain tasks. In over-parameterized models, the concealment of backdoor attacks, especially the anonymity of triggers, can hardly be purged without knowing the overwhelming distribution of the datasets used throughout the pre-training or fine-tuning stage. Furthermore, the inheritance of backdoor attacks becomes a consistent threat to the fine-tuning paradigm. In real-world applications, attackers with these strategies can cause service-level breakdowns, e.g., making advertisements pass the spam filter or fooling the input-sensitive ranking system of a search engine. In this work, we address the backdoor attack problem in the NLP field by proposing a Gradient Broadcast Adaptation (GBA) method for pre-trained models.
First, popular backdoor attack techniques can be regarded as manipulating rare tokens in the word embedding. We focus on the adaptation of rare tokens, which are always candidates for malicious triggers. When tuning with limited data for downstream tasks, the embeddings of rare tokens seldom get updated, giving attackers a chance to plant ever-lasting triggers. We reverse this by sharing the gradient direction as a global update for all tokens in each step while preserving the standard fine-tuning gradient for the input sequence. Plugged in as such an optimization step, GBA can be applied to any standardized pipeline of adaptation on downstream tasks. In addition, since the attackers may have access to some knowledge about downstream tasks (e.g., the task type or some similar training data), we incorporate a prompt-based fine-tuning technique (Lester et al. (2021); Han et al. (2021); Hu et al. (2021); Le Scao & Rush (2021); Liu et al. (2021)) to enable flexible adaptation. It weakens the effect of prior knowledge in exchange for better protection. Different from former defense techniques (Wang et al. (2019); Tran et al. (2018); Chen et al. (2018; 2019); Gao et al. (2019)), we focus on eliminating trigger-based threats during adaptation rather than detecting specific backdoor triggers. This allows our proposed approach to become an essential step in the pre-train-then-fine-tune pipeline and to break the inheritance character of backdoor attacks in the life cycle of pre-trained models, which have been widely used in production scenarios. Our main contributions can be summarized as follows: 1. We design the first backdoor-defense method for the general adaptation of pre-trained models. 2. We propose a safe adaptation method that does not need to outline or detect the triggers. 3.
Experiments on five real-world datasets show that our gradient broadcast method suppresses the trigger while maintaining comparable performance. 2 RELATED WORK. Backdoor Attack. The backdoor attack is a covert attack method that can broadly damage neural network models. Usually, this method plants triggers during model training: when the inputs are legitimate, the models perform normally, but inputs containing the triggers lead to misclassifications. Compared with adversarial sample attacks, Liu et al. (2017) find that the design of trigger patterns makes backdoor attacks harder for humans to detect and for defense models to eliminate. Most research on backdoor attacks focuses on end-to-end models in the image or natural language domain. Gu et al. (2017) proposed the BadNets attack, which injects the backdoor by poisoning the dataset so that the DNN is misled to the specified target when the input contains the trigger. With the success of pre-trained models, Zhang et al. (2021) introduced the Neuron-level Backdoor Attack (NeuBA). In NeuBA, the attacker designs the trigger patterns and the corresponding outputs during the pre-training phase; because the backdoor cannot be eliminated during fine-tuning, trigger inputs can mislead the model's outputs in downstream tasks. Now that pre-trained models are widely used, e.g., as foundation models (Bommasani et al. (2021)), NeuBA sounds a red alarm. Backdoor Defense. Existing defense methods are mainly aimed at end-to-end models in a specific domain; their limitations are discussed below. Neural Cleanse: Wang et al. (2019) proposed a defense method that takes effect in the image domain. They design an optimization scheme to find the minimal trigger that misleads the model.
Repeating this step for each label, they detect the trigger whose modification is significantly smaller than that of the other candidates. [Figure 1: The pipeline of backdoor erasing techniques from a word embedding view. (a) The standard fine-tuning process; (b) a distill-based teacher-student (NAD) framework proposed by Wang et al. (2019); (c) our GBA framework. GBA erases triggers by calculating the global gradient direction in the current batch and updating rare word embeddings along that direction.] Unlike the continuous input of the image domain, input in the text domain is discrete. The optimizer of this method cannot be effective there, so the method can only be applied to models in the image domain. T-Miner: In the text domain, Azizi et al. (2021) proposed a defense framework for DNN-based text classifiers that uses a sequence-to-sequence generative model to detect the backdoor trigger. The Backdoor Identifier component analyzes whether the model is infected from two aspects. First, inputs generated by the generative model that contain the backdoor trigger can mislead the model from source label s to target label t. Second, compared with other auxiliary phrases, the trigger behaves abnormally in the representation space of the classifier. However, this framework is mainly aimed at end-to-end models and does not perform well on pre-trained models.
Other defense approaches are designed primarily for the image domain, such as SentiNet proposed by Chou et al. (2018) and DeepInspect proposed by Chen et al. (2019). None of these approaches performs well in the face of discrete text input. Therefore, an effective method is needed to defend against backdoor attacks on pre-trained models. 3 PROPOSED APPROACH. In this section, we first describe the defense settings and then introduce the proposed GBA defense approach. We focus on a typical application setting of pre-trained models: the defender downloads a backdoored pre-trained model from an unverified community, develops the model on their own clean training data, and then deploys a public service. The goal of backdoor defense is to prohibit the side effect of the backdoor trigger during inference while maintaining the model's performance on clean data. Three particular settings are considered in our paper: • Full Data Knowledge (FDK). The attacker has access to the entire training data of the target downstream task. This often happens when users train their model on a public dataset. • Limited Data Knowledge (LDK). The attacker has access to part of the training data of the target downstream task or knows the modeling method of the task type. With such limited knowledge, the attacker can build a similar dataset as a proxy from their own sources. • Data Free (DF). In the most common scenario, the attacker knows neither the training data nor the modeling method of the downstream tasks; the only access is the public pre-trained model and unrelated public datasets. In the experiment section, we introduce several state-of-the-art backdoor methods under each defense scenario and perform an extensive comparison on disabling the triggers.
| This paper identifies an emerging threat for the prevailing pre-trained models -- the inheritance of backdoor attack, and proposes a simple yet effective defense approach: gradient broadcast adaptation (GBA). Instead of the traditional “erasing triggers”, GBA utilizes the “prompt-tuning” as a tool to guide the “perturbed weights” back to the normal state, which helps avoid the degradation of generalization ability. It provides an exciting and novel analysis of why backdoor attacks could be inherited during the pretraining and tuning procedure. Meanwhile, the authors perform an empirical evaluation of the proposed method against four state-of-the-art backdoor attacks. | SP:212e480fbeb43ffb00707628f48058a8d8517e96 |
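The gradient-broadcast step described in the GBA row above can be illustrated with a minimal NumPy sketch. The aggregation used for the global direction (here, the mean gradient over tokens that appear in the batch) and the function name `gba_update` are assumptions made for illustration; the paper's exact update rule may differ.

```python
import numpy as np

def gba_update(emb, grad, input_token_ids, lr=0.1):
    """One sketch of a Gradient Broadcast Adaptation step.

    emb:  (V, d) embedding matrix
    grad: (V, d) gradient; rows for tokens absent from the batch are zero
    input_token_ids: indices of tokens present in the current batch
    """
    emb = emb.copy()
    # Global update direction: mean gradient over tokens seen in the batch.
    # (Assumption: the paper may aggregate differently.)
    global_dir = grad[input_token_ids].mean(axis=0)
    # Broadcast the shared direction to every token, including rare tokens
    # that would otherwise never move (where triggers can hide).
    emb -= lr * global_dir
    # Correct the input tokens so they still receive the standard
    # fine-tuning gradient overall.
    emb[input_token_ids] -= lr * (grad[input_token_ids] - global_dir)
    return emb
```

With this decomposition, tokens in the batch end up with the ordinary update `-lr * grad`, while every other token (including rare trigger candidates) moves by `-lr * global_dir`, so no embedding row is left frozen.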
Learning to Act with Affordance-Aware Multimodal Neural SLAM | 1 INTRODUCTION. There has been significant recent progress in learning simulated embodied agents Pashevich et al. (2021); Zhang & Chai (2021); Blukis et al. (2021); Nagarajan & Grauman (2020); Singh et al. (2020); Suglia et al. (2021) that follow human language instructions, process multi-sensory inputs, and act to complete complex tasks Anderson et al. (2018); Das et al. (2018); Chen et al. (2019); Shridhar et al. (2020). Despite this, challenges remain before agent performance approaches satisfactory levels, including long-horizon planning and reasoning Blukis et al. (2021), effective language grounding in visually rich environments, efficient exploration Chen et al. (2018), and, importantly, generalization to unseen environments. Most prior work Singh et al. (2020); Pashevich et al. (2021); Nguyen et al. (2021); Suglia et al. (2021) adopted end-to-end deep learning models that map visual and language inputs into action sequences. Besides being difficult to interpret, these models show limited generalization, suffering significant performance drops when tested on new tasks and scenes. In contrast, hierarchical approaches Zhang & Chai (2021); Blukis et al. (2021) achieve better generalization performance and interpretability. Although the hierarchical structure is helpful for long-horizon planning, its key impact is an expressive semantic representation of the environment acquired via Neural SLAM-based approaches Chaplot et al. (2020a; c); Blukis et al. (2021). However, a missing component in these methods is fine-grained affordance Kim & Sukhatme (2015); Qi et al. (2019). To build a robotic assistant that can follow human instructions to complete a task (e.g.
, Open the fridge and grab me a soda), it is essential that the agent can perform affordance-aware navigation: it must navigate to a reasonable position and pose near the fridge that enables the follow-on actions open and pick-up. Operationally, the agent has to move to a location where the fridge is within reach yet without blocking the fridge door from being opened. Ideally, it should also position itself so that the soda is in its first-person field of view to allow the follow-on pick-up action. This is challenging compared with pure navigation (where navigating to any location close to the fridge is acceptable). To achieve this, we propose a sophisticated affordance-aware semantic representation that leads to accurate navigation planning, setting up subsequent object interactions for success. Efficient exploration of the environment Ramakrishnan et al. (2021); Chen et al. (2018) must be addressed to establish this semantic representation; it is unacceptable for a robot to wander around for an extended period of time to complete a single task in a real-world setting. To resolve this issue, we propose the first multimodal exploration module that takes language instructions as guidance and keeps track of visited regions to explore the area of interest effectively and efficiently. This lays a foundation for map construction, which is critical to long-horizon planning. Here, we introduce Affordance-aware Multimodal Neural SLAM (AMSLAM), which implements two key insights to address the challenges of robust long-horizon planning, namely efficient exploration and generalization: 1. An affordance-aware semantic representation that estimates where the agent can interact with objects, supporting sophisticated affordance-aware navigation; and 2.
Task-driven multimodal exploration that takes guidance from language instructions, visual input, and previously explored regions to improve the effectiveness and efficiency of exploration. AMSLAM is the first Neural SLAM-based approach for embodied AI tasks to utilize several modalities for effective exploration and an affordance-aware semantic representation for robust long-horizon planning. We conduct comprehensive empirical studies on the ALFRED benchmark Shridhar et al. (2020) to demonstrate the key components of AMSLAM, setting a new state-of-the-art generalization performance of 23.48%, a >40% improvement over prior published state-of-the-art approaches. 2 RELATED WORK. Recent progress in embodied artificial intelligence spans both simulation environments Kolve et al. (2017); Li et al. (2021); Savva et al. (2019); Gan et al. (2020); Puig et al. (2018) and sophisticated tasks Das et al. (2018); Anderson et al. (2018); Shridhar et al. (2020). Our work is most closely related to research in language-guided task completion, Neural SLAM, and exploration. Language-Guided Task Completion. ALFRED Shridhar et al. (2020) is a benchmark in which a learning agent follows natural language descriptions to complete complex household tasks. The agent's goal is to learn a mapping from natural language instructions to a sequence of actions for task completion in a simulated 3D environment. The proposed modeling approaches fall roughly into two families. The first focuses on learning large end-to-end models that directly translate instructions to low-level agent actions Singh et al. (2020); Suglia et al. (2021); Pashevich et al. (2021). However, these agents typically suffer from poor generalization and are difficult to interpret. Recently, hierarchical approaches Zhang & Chai (2021); Blukis et al.
(2021) have attracted attention due to their better generalization and interpretability. We also adopt a hierarchical structure, focusing on affordance-aware navigation and thereby achieving significantly better generalization than all existing approaches. Neural SLAM and Affordance-aware Semantic Representation. Neural SLAM Chaplot et al. (2020a; b; c) constructs a semantic representation of the environment, enabling map-based long-horizon planning Chaplot et al. (2021). However, these methods are tested on pure navigation tasks rather than complex household tasks, and they do not consider affordance Qi et al. (2019); Nagarajan & Grauman (2020); Xu et al. (2020), which is required for tasks involving both navigation and manipulation. In Blukis et al. (2021), the authors utilize SLAM for 3D environment reconstruction in language-guided task completion. Their approach relies heavily on accurate depth prediction, which is less robust in unseen environments. Instead, we propose a waypoint-oriented representation that associates each object with the locations on the floor from which the agent can interact with it. Furthermore, different from the 2D affordance map in Blukis et al. (2021), which directly predicts the affordance type, our semantic representation supports more fine-grained control of the robot's position and pose, which facilitates significantly better generalization. The approach in Qi et al. (2019) assumes direct access to ground-truth depth information (not available in our setup), and the method in Nagarajan & Grauman (2020) focuses only on pure navigation problems. Learning to Explore for Navigation. An essential step in Neural SLAM-based approaches is learning to explore the environment for map building Ramakrishnan et al. (2021); Chen et al. (2018); Jayaraman & Grauman (2018); Chaplot et al. (2020a).
Multiple approaches have been proposed to tackle aspects of exploration in the reinforcement learning Schmidhuber (1991); Pathak et al. (2017); Burda et al. (2018); Chen et al. (2018); Jayaraman & Grauman (2018), computer vision Ramakrishnan et al. (2021); Nagarajan & Grauman (2020), and robotics Blukis et al. (2021); Harrison et al. (2018) communities. The central principle of prior methods is learning to reduce environment uncertainty; different definitions of uncertainty lead to the following types of methods Ramakrishnan et al. (2021). Curiosity-driven approaches Schmidhuber (1991); Pathak et al. (2017); Burda et al. (2018) learn forward dynamics and reward visiting areas that are poorly predicted by the model. Count-based exploration Tang et al. (2017); Bellemare et al. (2016); Ostrovski et al. (2017); Rashid et al. (2020) encourages visiting states that are less frequently visited. Coverage-based approaches Chen et al. (2018); Jayaraman & Grauman (2018) reward visiting all navigable areas by searching in a task-agnostic manner. In contrast, we propose a multimodal exploration approach utilizing egocentric visual input, language instructions, and memory of explored areas to reduce task-specific uncertainty about points of interest (areas important for completing the task). We show this to be more efficient, leading to more effective map prediction and robust planning. 3 PROBLEM FORMULATION. We focus on the ALFRED challenge Shridhar et al. (2020), in which an agent is asked to follow human instructions to complete long-horizon household tasks in indoor scenes (simulated in AI2-THOR Kolve et al. (2017)). Each task in ALFRED consists of several subgoals, each for either navigation (moving in the environment) or object interaction (interacting with at least one object).
Language inputs contain a high-level task description and a sequence of low-level step-by-step instructions (each corresponding to a subgoal). The agent is a simulated robot with access to the state of the environment only through a front-view RGB camera with a relatively small field of view. The agent's own state is a 5-tuple (x, y, r, h, o), where x, y is its 2D position, r the horizontal rotation angle, h the vertical camera angle (also called the "horizon"), and o the type of object held in its hand. The state space of the agent is discrete, with navigation actions MoveAhead (moving forward by 0.25 m), RotateLeft & RotateRight (rotating in the horizontal plane by 90°), and LookUp & LookDown (adjusting the horizon by 15°). Formally, r ∈ {0°, 90°, 180°, 270°} and h ∈ {60°, 45°, ..., −15°, −30°}, where positive h indicates facing downward. With these discrete actions, the agent has full knowledge of the relative changes ∆x, ∆y, ∆r, and ∆h. Each of the 7 object interaction actions (PickUp, Open, Slice, etc.) is parametrized by a binary mask for the target object, which is usually predicted with a pre-trained instance segmentation module. Featuring long-horizon tasks with a range of interactions, the ALFRED challenge evaluates an agent's ability to perform tasks in unseen test scenes, while allowing only ≤1000 steps and ≤10 action failures per task at inference time. 4 AFFORDANCE-AWARE MULTIMODAL NEURAL SLAM. Affordance-aware navigation is a major challenge in solving complex, long-horizon indoor tasks such as ALFRED, which involve both navigation and object interactions. Specifically, for each object of interest in the scene, the agent is required not only to find and approach it but also to end up at a pose (x, y, r, h) that is feasible for subsequent interactions with the object.
For instance, to open a fridge, the robot should approach the fridge closely enough (so the door is within reach), look at it (so the fridge is in the field of view), and leave enough room to open the door. To solve a long-horizon task involving multiple navigation and object-interaction subgoals, it is natural to use an explicit semantic map of the environment, either 2D or 3D (similar to Neural Active SLAM Chaplot et al. (2020a)), together with model-based planning (e.g., as in HLSM Blukis et al. (2021)). This line of work tends to generalize better than models that directly learn mappings from human instructions to navigation and interaction actions (e.g., E.T. Pashevich et al. (2021)). With perfect knowledge of the environment, it is possible to achieve (nearly) perfect performance. In practice, however, the semantic map acquired at inference time is usually far from ideal, primarily due to incompleteness (missing information from insufficient exploration of the scene) and inaccuracy (erroneous object location predictions on the map, especially for small objects). To improve exploration performance, we propose a multimodal module that, at each step, predicts an exploration action a ∈ {MoveAhead, RotateLeft, RotateRight} from past visual observations and actions, step-by-step language instructions, and an explored-area map that indicates where the agent has visited. We show that, compared with existing model-based approaches on ALFRED (e.g., HLSM Blukis et al. (2021), which applies random exploration), our use of low-level language instructions leads to more efficient exploration. The proposed exploration module operates at the subgoal level and predicts only exploration actions (in contrast to E.T., which directly predicts actions for the entire task). The extra modality (the explored area) facilitates exploration by providing the agent with explicit spatial information.
We illustrate the exploration module in Figure 3, elaborate its details in Section 4.3, and empirically demonstrate its advantages in Section 5. To deal with inaccuracy in map prediction, we carefully design an affordance-aware semantic representation of the environment. On one hand, knowing the precise spatial coordinates of objects requires precise depth information, which is difficult to acquire due to 3D sensor noise and/or inaccuracy in predicting depth from 2D images. On the other hand, affordance-aware navigation essentially asks for agent poses (x, y, r, h) suitable for interactions with the target objects, and thus requires only coarse-grained spatial information. Given an object type o, we define such poses as waypoints Wo and then treat navigation as a path-planning problem among different waypoints. To generate these waypoints, we handle large objects (fridges, cabinets, etc.) and small objects (apples, mugs, etc.) differently. The waypoints for large objects are computed using 2D grid maps predicted and aggregated from front-view camera images by a CNN-based network; for small objects, we directly search over all observations acquired during the exploration phase with the help of a pre-trained Mask R-CNN He et al. (2017) (detailed in Section 4.2). | The paper proposes a method for solving the ALFRED task (following language instructions to perform a set of household tasks). The model has two main components: (1) an affordance-aware semantic representation and (2) multimodal exploration. The former estimates the locations of the objects in the scene; the latter provides an exploration strategy using instructions, images, previous actions, and previously explored areas as input. | SP:ea0eec7c040a79e4d108d42578313eefe54efbee |
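The discrete agent dynamics quoted in the problem formulation of this row can be sketched as a tiny state-transition function. The convention that rotation r = 0 faces the +y axis is an assumption for illustration; the simulator's actual coordinate convention may differ, and the real AI2-THOR controller additionally checks for collisions and action failures.

```python
import math

# Discrete navigation dynamics from the problem formulation:
# MoveAhead steps 0.25 m, rotations are 90 degrees, look up/down adjusts
# the horizon by 15 degrees, clamped to [-30, 60] (positive = facing down).
def step(state, action):
    x, y, r, h = state
    if action == "MoveAhead":
        # Assumed convention: r = 0 faces +y, r = 90 faces +x.
        x += 0.25 * math.sin(math.radians(r))
        y += 0.25 * math.cos(math.radians(r))
    elif action == "RotateLeft":
        r = (r - 90) % 360
    elif action == "RotateRight":
        r = (r + 90) % 360
    elif action == "LookUp":
        h = max(h - 15, -30)
    elif action == "LookDown":
        h = min(h + 15, 60)
    else:
        raise ValueError(f"unknown navigation action: {action}")
    return (x, y, r, h)
```

Because every action changes the state by a known discrete amount, the agent can dead-reckon its pose exactly, which is what makes the grid-map and waypoint representation above workable without odometry noise.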
Language inputs contain a high-level task description and a sequence of low-level step-by-step instructions ( each corresponding to a subgoal ) . The agent is a simulated robot with access to the states of the environment only through a front-view RGB camera with a relatively small field of view . The agent ’ s own state is a 5-tuple ( x , y , r , h , o ) , where x , y are its 2D position , r the horizontal rotation angle , h the vertical camera angles ( also called “ horizon ” ) and o the type of object held in its hand . The state space of the agent is discrete , with navigation actions : MoveAhead ( moving forward by 0.25m ) , RotateLeft & RotateRight ( rotating in the horizontal plane by 90◦ ) and LookUp & LookDown ( adjusting the horizon by 15◦ ) . Formally , r ∈ { 0◦ , 90◦ , 180◦ , 270◦ } and h ∈ { 60◦ , 45◦ , ... , −15◦ , −30◦ , } where positive h indicates facing downward . With these discrete actions , the agent has full knowledge of the relative changes ∆x , ∆y , ∆r and ∆h . Each of the 7 object interaction actions ( PickUp , Open , Slice , etc . ) is parametrized by an binary mask for the target object , which is usually predicted with a pre-trained instance segmentation module . Featuring long-horizon tasks with a range of interactions , the ALFRED challenge evaluates an agent ’ s ability to perform tasks over unseen test scenes , while only allowing ≤1000 steps and ≤10 action failures for each task at inference time . 4 AFFORDANCE-AWARE MULTIMODAL NEURAL SLAM . Affordance-aware navigation is a major challenge in solving complex and long-horizon indoor tasks such as ALFRED with both navigation and object interactions . Specifically , given each object of interest in the scene , the agent is required to not only find and approach it but also end up at a pose ( x , y , r , h ) , that is feasible for subsequent interactions with the object . 
For instance , to open a fridge , the robot should approach the fridge closely enough ( so the door is within reach ) , look at it ( so that the fridge is in the field of view ) , and leave enough room to open the door . To solve a long-horizon task involving multiple navigation and object interaction subgoals , it is natural to use an explicit semantic map , either 2D or 3D , of the environment ( similar to Neural Active SLAM Chaplot et al . ( 2020a ) ) , together with model-based planning ( e.g . as in HLSM Blukis et al . ( 2021 ) ) . This line of work tends to generalize better than models that directly learn mappings from human instructions to navigation & interaction actions ( e.g. , E.T . Pashevich et al . ( 2021 ) ) . With perfect knowledge of the environment , it is possible to achieve ( nearly ) perfect performance . In practice , however , the semantic map acquired at inference time is usually far from ideal , primarily due to Incompleteness ( missing information due to insufficient exploration of the scene ) and Inaccuracy ( erroneous object location prediction on the map , especially for small objects ) . To improve exploration performance , we propose a multimodal module that , at each step , predicts an exploration action a ∈ { MoveAhead , RotateLeft , RotateRight } by taking visual observations & actions in the past , step-by-step language instructions , and the explored area map which indicates where the agent has visited . We show that , compared to existing model-based approaches on ALFRED ( e.g. , HLSM Blukis et al . ( 2021 ) which applies random exploration ) , our use of lowlevel language instructions leads to more efficient exploration . The proposed exploration module operates at the subgoal level and only predicts exploration actions ( in contrast to E.T . which directly predicts actions for the entire task ) . The extra modality ( the explored area ) facilitates exploration by providing the agent with explicit spatial information . 
We illustrate the exploration module in Figure 3 , elaborate its details in Section 4.3 , and empirically demonstrate its advantages in Section 5 . To deal with the inaccuracy in map prediction , we carefully design an affordance-aware semantic representation for the environments . On one hand , knowing the precise spatial coordinates of objects requires precise depth information , which is difficult to acquire due to 3D sensor noise and/or inaccuracy in predicting depth from 2D images . On the other hand , affordance-aware navigation essentially asks for poses ( x , y , r , h ) of the agent suitable for interactions with the target objects , thus requiring only coarse-grained spatial information . Given an object type o , we define such corresponding poses as waypoints Wo and then treat navigation as a path planning problem among different waypoints . To generate such waypoints , we handle large objects ( fridges , cabinets , etc . ) and small objects ( apples , mug , etc . ) differently . The waypoints for large objects are computed using 2D grid maps predicted and aggregated from front-view camera images by a CNN-based network ; for small objects , we directly search over all observations acquired during the exploration phase with the help of a pre-trained Mask RCNN He et al . ( 2017 ) ( detailed below in Section 4.2 ) . | This paper presents a Neural SLAM-based approach for tackling embodied multimodal tasks in ALFRED benchmark. The approach, called Affordance-aware Multimodal Neural SLAM (AMSLAM), utilizes several modalities for exploration, predicts an affordance-aware semantic map, and plans over it at the same time. The approach achieves 40% improvement over prior published work. | SP:ea0eec7c040a79e4d108d42578313eefe54efbee |
Learning to Act with Affordance-Aware Multimodal Neural SLAM | 1 INTRODUCTION . There is significant recent progress in learning simulated embodied agents Pashevich et al . ( 2021 ) ; Zhang & Chai ( 2021 ) ; Blukis et al . ( 2021 ) ; Nagarajan & Grauman ( 2020 ) ; Singh et al . ( 2020 ) ; Suglia et al . ( 2021 ) that follow human language instructions , process multi-sensory inputs and act to complete complex tasks Anderson et al . ( 2018 ) ; Das et al . ( 2018 ) ; Chen et al . ( 2019 ) ; Shridhar et al . ( 2020 ) . Despite this , challenges remain before agent performance approaches satisfactory levels , including long-horizon planning and reasoning Blukis et al . ( 2021 ) , effective language grounding in visually rich environments , efficient exploration Chen et al . ( 2018 ) , and , importantly , generalization to unseen environments . Most prior work Singh et al . ( 2020 ) ; Pashevich et al . ( 2021 ) ; Nguyen et al . ( 2021 ) ; Suglia et al . ( 2021 ) adopted end-to-end deep learning models that map visual and language inputs into action sequences . Besides being difficult to interpret , these models show limited generalization , suffering from a significant performance drop when tested on new tasks and scenes . In contrast , hierarchical approaches Zhang & Chai ( 2021 ) ; Blukis et al . ( 2021 ) achieve better generalization performance and interpretability . Although the hierarchical structure is helpful for long-horizon planning , its key impact is an expressive semantic representation of the environment acquired via Neural SLAM-based approaches Chaplot et al . ( 2020a ; c ) ; Blukis et al . ( 2021 ) . However , a missing component in these methods is fine-grained affordance Kim & Sukhatme ( 2015 ) ; Qi et al . ( 2019 ) . To build a robotic assistant that can follow human instructions to complete a task ( e.g.
, Open the fridge and grab me a soda ) , it is essential that the agent can perform affordance-aware navigation : it must navigate to a reasonable position and pose near the fridge that enables the follow-on actions open and pick-up . Operationally , the agent has to move to a location where the fridge is within reach yet without preventing the fridge door from being opened . Ideally , it should also position itself so that the soda is in its first-person field of view to allow the follow-on pick-up action . This is challenging compared to pure navigation ( where navigating to any location close to the fridge is acceptable ) . To achieve this , we propose a sophisticated affordance-aware semantic representation that leads to accurate planning for navigation , setting up subsequent object interactions for success . Efficient exploration of the environment Ramakrishnan et al . ( 2021 ) ; Chen et al . ( 2018 ) needs to be addressed to establish this semantic representation - it is unacceptable for a robot to wander around for an extended period of time to complete a single task in a real-world setting . To resolve this issue , we propose the first multimodal exploration module that takes language instructions as guidance and keeps track of visited regions to explore the area of interest effectively and efficiently . This lays a foundation for map construction , which is critical to long-horizon planning . Here , we introduce Affordance-aware Multimodal Neural SLAM ( AMSLAM ) , which implements two key insights to address the challenges of robust long-horizon planning , namely , efficient exploration and generalization : 1 . Affordance-aware semantic representation that estimates object information in terms of where the agent can interact with them to support sophisticated affordance-aware navigation , and 2 .
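As a concrete illustration of the affordance check underlying this navigation goal, the sketch below tests whether a candidate pose puts an object both within reach and inside the agent's horizontal field of view. The function name, thresholds (`reach`, `fov_deg`), and the heading convention (rotation 0 faces the +y axis) are hypothetical choices for illustration, not the paper's implementation.

```python
import math

def pose_affords_interaction(agent_xy, agent_rot_deg, obj_xy,
                             reach=1.5, fov_deg=90.0):
    """Return True if an object is within reach AND inside the agent's
    horizontal field of view at the given pose.

    Hypothetical helper: rotation 0 is assumed to face the +y axis, and
    feasibility is judged purely from distance and bearing.
    """
    dx, dy = obj_xy[0] - agent_xy[0], obj_xy[1] - agent_xy[1]
    if math.hypot(dx, dy) > reach:
        return False
    bearing = math.degrees(math.atan2(dx, dy))        # 0 deg = +y
    # Smallest signed angle between heading and bearing, in [-180, 180].
    off = (bearing - agent_rot_deg + 180.0) % 360.0 - 180.0
    return abs(off) <= fov_deg / 2.0

print(pose_affords_interaction((0.0, 0.0), 0, (0.0, 1.0)))    # True
print(pose_affords_interaction((0.0, 0.0), 180, (0.0, 1.0)))  # False
```

A real agent would run such a check against every candidate pose produced by the semantic map, keeping only those from which the follow-on interaction is feasible.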
Task-driven multimodal exploration that takes guidance from language instruction , visual input , and previously explored regions to improve the effectiveness and efficiency of exploration . AMSLAM is the first Neural SLAM-based approach for Embodied AI tasks to utilize several modalities for effective exploration and an affordance-aware semantic representation for robust long-horizon planning . We conduct comprehensive empirical studies on the ALFRED benchmark Shridhar et al . ( 2020 ) to demonstrate the key components of AMSLAM , setting a new state-of-the-art generalization performance of 23.48 % , a > 40 % improvement over prior published state-of-the-art approaches . 2 RELATED WORK . Recent progress in Embodied Artificial Intelligence spans both simulation environments Kolve et al . ( 2017 ) ; Li et al . ( 2021 ) ; Savva et al . ( 2019 ) ; Gan et al . ( 2020 ) ; Puig et al . ( 2018 ) and sophisticated tasks Das et al . ( 2018 ) ; Anderson et al . ( 2018 ) ; Shridhar et al . ( 2020 ) . Our work is most closely related to research in language-guided task completion , Neural SLAM , and exploration . Language-Guided Task Completion . ALFRED Shridhar et al . ( 2020 ) is a benchmark that enables a learning agent to follow natural language descriptions to complete complex household tasks . The agent ’ s goal is to learn a mapping from natural language instructions to a sequence of actions for task completion in a simulated 3D environment . Various modeling approaches have been proposed , falling roughly into two families of methods . The first focuses on learning large end-to-end models that directly translate instructions to low-level agent actions Singh et al . ( 2020 ) ; Suglia et al . ( 2021 ) ; Pashevich et al . ( 2021 ) . However , these agents typically suffer from poor generalization performance and are difficult to interpret . Recently , hierarchical approaches Zhang & Chai ( 2021 ) ; Blukis et al .
( 2021 ) have attracted attention due to their better generalization and interpretability . We also adopt a hierarchical structure , focusing on affordance-aware navigation , thereby achieving significantly better generalization than all existing approaches . Neural SLAM and Affordance-aware Semantic Representation . Neural SLAM Chaplot et al . ( 2020a ; b ; c ) constructs an environment semantic representation enabling map-based long-horizon planning Chaplot et al . ( 2021 ) . However , these methods are tested on pure navigation tasks rather than complex household tasks , and do not consider affordance Qi et al . ( 2019 ) ; Nagarajan & Grauman ( 2020 ) ; Xu et al . ( 2020 ) , which is required for tasks involving both navigation and manipulation . In Blukis et al . ( 2021 ) , the authors utilize SLAM for 3D environment reconstruction in language-guided task completion . Their approach relies heavily on accurate depth prediction ( less robust in unseen environments ) . Instead , we propose a waypoint-oriented representation which associates each object with the locations on the floor from where the agent can interact with the object . Furthermore , different from the 2D affordance map in Blukis et al . ( 2021 ) that directly predicts affordance type , our semantic representation supports more fine-grained control of the robot ’ s position and pose , which facilitates significantly better generalization . The approach in Qi et al . ( 2019 ) assumes direct access to ground truth depth information ( not available in our setup ) and the method in Nagarajan & Grauman ( 2020 ) only focuses on pure navigation problems . Learning to Explore for Navigation . An essential step in Neural SLAM-based approaches is learning to explore the environment for map building Ramakrishnan et al . ( 2021 ) ; Chen et al . ( 2018 ) ; Jayaraman & Grauman ( 2018 ) ; Chaplot et al . ( 2020a ) .
Multiple approaches have been proposed to tackle aspects of exploration in the reinforcement learning Schmidhuber ( 1991 ) ; Pathak et al . ( 2017 ) ; Burda et al . ( 2018 ) ; Chen et al . ( 2018 ) ; Jayaraman & Grauman ( 2018 ) , computer vision Ramakrishnan et al . ( 2021 ) ; Nagarajan & Grauman ( 2020 ) , and robotics Blukis et al . ( 2021 ) ; Harrison et al . ( 2018 ) communities . The central principle of prior methods is learning to reduce environment uncertainty ; different definitions of uncertainty lead to the following types of methods Ramakrishnan et al . ( 2021 ) . Curiosity-driven Schmidhuber ( 1991 ) ; Pathak et al . ( 2017 ) ; Burda et al . ( 2018 ) approaches learn forward dynamics and reward visiting areas that are poorly predicted by the model . Count-based exploration Tang et al . ( 2017 ) ; Bellemare et al . ( 2016 ) ; Ostrovski et al . ( 2017 ) ; Rashid et al . ( 2020 ) encourages visiting states that are less frequently visited . Coverage-based Chen et al . ( 2018 ) ; Jayaraman & Grauman ( 2018 ) approaches reward visiting all navigable areas by searching in a task-agnostic manner . In contrast , we propose a multimodal exploration approach utilizing egocentric visual input , language instructions , and memory of explored areas to reduce task-specific uncertainty of points of interest ( areas important to complete the task ) . We show this to be more efficient , leading to more effective map prediction and robust planning . 3 PROBLEM FORMULATION . We focus on the ALFRED challenge Shridhar et al . ( 2020 ) , where an agent is asked to follow human instructions to complete long-horizon household tasks in indoor scenes ( simulated in AI2Thor Kolve et al . ( 2017 ) ) . Each task in ALFRED consists of several subgoals for either navigation ( moving in the environment ) or object interactions ( interacting with at least one object ) . 
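The count-based family above can be sketched in a few lines: each visit to a state decays an intrinsic bonus proportional to 1/√N(s). This is a generic illustration of the idea behind Tang et al. (2017) and related work, with a hashable grid cell standing in for the state; the class name and β value are made up for the example.

```python
import math
from collections import defaultdict

class CountBonus:
    """Intrinsic reward r_bonus(s) = beta / sqrt(N(s)) for count-based
    exploration; rarely visited states yield larger bonuses."""

    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = defaultdict(int)

    def bonus(self, state):
        self.counts[state] += 1
        return self.beta / math.sqrt(self.counts[state])

bonus = CountBonus(beta=1.0)
first = bonus.bonus((3, 7))    # 1.0 on the first visit
second = bonus.bonus((3, 7))   # ~0.707 on the second visit
```

The task-driven exploration proposed here differs precisely in that the bonus-like signal is not uniform over states but concentrated on regions the instruction marks as relevant.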
Language inputs contain a high-level task description and a sequence of low-level step-by-step instructions ( each corresponding to a subgoal ) . The agent is a simulated robot with access to the states of the environment only through a front-view RGB camera with a relatively small field of view . The agent ’ s own state is a 5-tuple ( x , y , r , h , o ) , where x , y are its 2D position , r the horizontal rotation angle , h the vertical camera angle ( also called “ horizon ” ) and o the type of object held in its hand . The state space of the agent is discrete , with navigation actions : MoveAhead ( moving forward by 0.25m ) , RotateLeft & RotateRight ( rotating in the horizontal plane by 90◦ ) and LookUp & LookDown ( adjusting the horizon by 15◦ ) . Formally , r ∈ { 0◦ , 90◦ , 180◦ , 270◦ } and h ∈ { 60◦ , 45◦ , ... , −15◦ , −30◦ } , where positive h indicates facing downward . With these discrete actions , the agent has full knowledge of the relative changes ∆x , ∆y , ∆r and ∆h . Each of the 7 object interaction actions ( PickUp , Open , Slice , etc . ) is parametrized by a binary mask for the target object , which is usually predicted with a pre-trained instance segmentation module . Featuring long-horizon tasks with a range of interactions , the ALFRED challenge evaluates an agent ’ s ability to perform tasks over unseen test scenes , while only allowing ≤1000 steps and ≤10 action failures for each task at inference time . 4 AFFORDANCE-AWARE MULTIMODAL NEURAL SLAM . Affordance-aware navigation is a major challenge in solving complex and long-horizon indoor tasks such as ALFRED with both navigation and object interactions . Specifically , given each object of interest in the scene , the agent is required to not only find and approach it but also end up at a pose ( x , y , r , h ) that is feasible for subsequent interactions with the object .
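Because the action space is discrete and the relative pose changes are known exactly, the navigation dynamics above can be written as a small deterministic update. Only the step sizes and the horizon clipping come from the text; the heading convention (r = 0 facing +y) is an assumption made for this sketch.

```python
import math

STEP, ROT, TILT = 0.25, 90, 15
H_MIN, H_MAX = -30, 60   # positive h looks downward

def nav_step(state, action):
    """Deterministic pose update for the discrete navigation actions.

    `state` is (x, y, r, h); r = 0 facing +y is an assumed convention.
    """
    x, y, r, h = state
    if action == "MoveAhead":
        x += STEP * math.sin(math.radians(r))
        y += STEP * math.cos(math.radians(r))
    elif action == "RotateRight":
        r = (r + ROT) % 360
    elif action == "RotateLeft":
        r = (r - ROT) % 360
    elif action == "LookDown":
        h = min(H_MAX, h + TILT)   # clip to the allowed horizon range
    elif action == "LookUp":
        h = max(H_MIN, h - TILT)
    return (round(x, 4), round(y, 4), r, h)

pose = (0.0, 0.0, 0, 0)
pose = nav_step(pose, "MoveAhead")     # (0.0, 0.25, 0, 0)
pose = nav_step(pose, "RotateRight")   # heading becomes 90
pose = nav_step(pose, "MoveAhead")     # (0.25, 0.25, 90, 0)
```

Since every ∆x, ∆y, ∆r, ∆h is known in closed form, dead-reckoning a pose like this is exact, which is what makes an explicit 2D map of visited cells cheap to maintain.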
For instance , to open a fridge , the robot should approach the fridge closely enough ( so the door is within reach ) , look at it ( so that the fridge is in the field of view ) , and leave enough room to open the door . To solve a long-horizon task involving multiple navigation and object interaction subgoals , it is natural to use an explicit semantic map , either 2D or 3D , of the environment ( similar to Neural Active SLAM Chaplot et al . ( 2020a ) ) , together with model-based planning ( e.g . as in HLSM Blukis et al . ( 2021 ) ) . This line of work tends to generalize better than models that directly learn mappings from human instructions to navigation & interaction actions ( e.g. , E.T . Pashevich et al . ( 2021 ) ) . With perfect knowledge of the environment , it is possible to achieve ( nearly ) perfect performance . In practice , however , the semantic map acquired at inference time is usually far from ideal , primarily due to Incompleteness ( missing information due to insufficient exploration of the scene ) and Inaccuracy ( erroneous object location prediction on the map , especially for small objects ) . To improve exploration performance , we propose a multimodal module that , at each step , predicts an exploration action a ∈ { MoveAhead , RotateLeft , RotateRight } by taking visual observations & actions in the past , step-by-step language instructions , and the explored area map , which indicates the regions the agent has visited . We show that , compared to existing model-based approaches on ALFRED ( e.g. , HLSM Blukis et al . ( 2021 ) , which applies random exploration ) , our use of low-level language instructions leads to more efficient exploration . The proposed exploration module operates at the subgoal level and only predicts exploration actions ( in contrast to E.T . , which directly predicts actions for the entire task ) . The extra modality ( the explored area ) facilitates exploration by providing the agent with explicit spatial information .
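The interface of such an exploration module can be sketched with a toy stand-in: each modality is assumed to be pre-encoded into a feature vector, fusion is plain concatenation, and the policy head is a single linear layer with a softmax over the three exploration actions. The dimensions and random weights are fabricated for illustration; the paper's module is a learned network.

```python
import numpy as np

ACTIONS = ["MoveAhead", "RotateLeft", "RotateRight"]

def explore_action(visual, language, explored_map, W, b):
    """Score the three exploration actions from fused modality features.

    Toy stand-in for a multimodal exploration policy: concatenate the
    per-modality feature vectors and apply a linear layer + softmax.
    """
    fused = np.concatenate([visual, language, explored_map.ravel()])
    logits = W @ fused + b
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return ACTIONS[int(np.argmax(probs))], probs

rng = np.random.default_rng(0)
visual = rng.standard_normal(8)      # e.g., CNN features of the frame
language = rng.standard_normal(8)    # e.g., encoded step instruction
explored = np.zeros((4, 4))          # binary map of visited cells
W, b = rng.standard_normal((3, 32)), np.zeros(3)
action, probs = explore_action(visual, language, explored, W, b)
```

The point of the sketch is the input signature: unlike random or coverage-only exploration, the action distribution is conditioned jointly on vision, the instruction, and the explored-area map.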
We illustrate the exploration module in Figure 3 , elaborate on its details in Section 4.3 , and empirically demonstrate its advantages in Section 5 . To deal with the inaccuracy in map prediction , we carefully design an affordance-aware semantic representation for the environments . On one hand , knowing the precise spatial coordinates of objects requires precise depth information , which is difficult to acquire due to 3D sensor noise and/or inaccuracy in predicting depth from 2D images . On the other hand , affordance-aware navigation essentially asks for poses ( x , y , r , h ) of the agent suitable for interactions with the target objects , thus requiring only coarse-grained spatial information . Given an object type o , we define such corresponding poses as waypoints Wo and then treat navigation as a path planning problem among different waypoints . To generate such waypoints , we handle large objects ( fridges , cabinets , etc . ) and small objects ( apples , mugs , etc . ) differently . The waypoints for large objects are computed using 2D grid maps predicted and aggregated from front-view camera images by a CNN-based network ; for small objects , we directly search over all observations acquired during the exploration phase with the help of a pre-trained Mask R-CNN He et al . ( 2017 ) ( detailed below in Section 4.2 ) . | This paper proposes a new framework that improves the state-of-the-art performance of the ALFRED benchmark (accomplishing navigation and interaction tasks given language instructions in AI2THOR environments) by 40% relatively. Two claimed main technical contributions are: 1) leveraging multimodal inputs, specifically the newly incorporated language instructions and visited area maps, during exploration, 2) proposing an affordance-aware semantic representation that marks the object locations, heights, and possible agent interaction spots. 
Results that are 40% relatively better than the state-of-the-art are observed and several ablation studies over key network modules are also presented. | SP:ea0eec7c040a79e4d108d42578313eefe54efbee |
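The waypoint representation described in the paper text above turns navigation into path planning: given the set W_o of interaction poses for a target object, the agent plans a shortest path to the nearest waypoint. A minimal sketch on a 2D occupancy grid follows; the grid layout, helper name, and 4-connected BFS are illustrative choices, not the paper's planner.

```python
from collections import deque

def bfs_to_waypoint(free, start, waypoints):
    """Shortest path on a 2D occupancy grid to the nearest waypoint.

    `free[r][c]` is True if the cell is navigable; `waypoints` is the
    set of cells from which the target object can be interacted with.
    Returns the list of cells from start to the reached waypoint, or
    None if no waypoint is reachable.
    """
    goals = set(waypoints)
    q, prev = deque([start]), {start: None}
    while q:
        cell = q.popleft()
        if cell in goals:                 # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(free) and 0 <= nc < len(free[0])
                    and free[nr][nc] and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None

grid = [[True] * 4 for _ in range(4)]
grid[1][1] = grid[1][2] = False            # obstacle, e.g., a table
path = bfs_to_waypoint(grid, (0, 0), {(2, 2)})
```

Because BFS expands cells in order of distance, the first waypoint dequeued is the nearest one, which matches the intuition of navigating to the closest feasible interaction pose.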
Reinforcement Learning with Ex-Post Max-Min Fairness | We consider reinforcement learning with vectorial rewards , where the agent receives a vector of K ≥ 2 different types of rewards at each time step . The agent aims to maximize the minimum total reward among the K reward types . Different from existing works that focus on maximizing the minimum expected total reward , i.e . ex-ante max-min fairness , we maximize the expected minimum total reward , i.e . ex-post max-min fairness . Through an example and numerical experiments , we show that the optimal policy for the former objective generally does not converge to optimality under the latter , even as the number of time steps T grows . Our main contribution is a novel algorithm , Online-ReOpt , that achieves near-optimality under our objective , assuming an optimization oracle that returns a near-optimal policy given any scalar reward . The expected objective value under Online-ReOpt is shown to converge to the asymptotic optimum as T increases . Finally , we propose offline variants to ease the burden of online computation in Online-ReOpt , and we propose generalizations from the max-min objective to concave utility maximization . 1 INTRODUCTION . The prevailing paradigm in reinforcement learning ( RL ) concerns the maximization of a single scalar reward . On one hand , optimizing a single scalar reward is sufficient for modeling simple tasks . On the other hand , in many complex tasks there are often multiple , potentially competing , rewards to be maximized . Expressing the objective function as a single linear combination of the rewards can be constraining and insufficiently expressive for the nature of these complex tasks . In addition , a suitable choice of the linear combination is often not clear a priori . In this work , we consider the reinforcement learning with max-min fairness ( RL-MMF ) problem . 
The agent accumulates a vector of $K \ge 1$ time-average rewards $\bar{V}_{1:T} = (\bar{V}_{1:T,k})_{k=1}^{K} \in \mathbb{R}^{K}$ in $T$ time steps , and aims to maximize $\mathbb{E}[\min_{k \in \{1,\ldots,K\}} \bar{V}_{1:T,k}]$ . The maximization objective represents ex-post max-min fairness , in contrast to the objective of ex-ante max-min fairness obtained by maximizing $\min_{k \in \{1,\ldots,K\}} \mathbb{E}[\bar{V}_{1:T,k}]$ . Our main contributions are the design and analysis of the Online-ReOpt algorithm , which achieves near-optimality for the ex-post max-min fairness objective . More specifically , the objective under Online-ReOpt converges to the optimum as $T$ increases . Our algorithm design involves a novel adaptation of the multiplicative weight update method ( Arora et al. , 2012 ) , in conjunction with a judiciously designed re-optimization schedule . The schedule ensures that the agent adapts his decision to the total vectorial reward collected at the current time point , while allowing enough time for the currently adopted policy to converge before switching to another policy . En route , we highlight crucial differences between the ex-ante and ex-post max-min fairness objectives , by showing that an optimal algorithm for the former need not converge to optimality even as $T$ increases . Finally , our results are extended to the case of maximizing $\mathbb{E}[g(\bar{V}_{1:T})]$ , where $g$ is a Lipschitz continuous and concave reward function . 2 RELATED WORKS . The Reinforcement Learning with Max-Min Fairness ( RL-MMF ) problem described is related to an emerging body of research on RL with ex-ante concave reward maximization . The class of ex-ante concave reward maximization problems includes the maximization of $g(\mathbb{E}[\bar{V}_{1:T}])$ , as well as its ex-ante variants , including the long-term average variant $g(\mathbb{E}[\lim_{T\to\infty} \bar{V}_{1:T}])$ and its infinite-horizon discounted reward variant . The function $g : \mathbb{R}^{K} \to \mathbb{R}$ is assumed to be concave . The class of ex-ante concave reward maximization problems is studied by the following research works .
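The multiplicative weight update underlying the adaptation can be illustrated generically: maintain a weight per reward type and multiply each weight by exp(−η · reward), so that lagging reward types gain weight in the scalarization handed to the optimization oracle. This is the textbook MWU rule of Arora et al. (2012), not the paper's exact Online-ReOpt schedule; η and the inputs are arbitrary.

```python
import numpy as np

def mwu_weights(reward_history, eta=0.5):
    """Run the multiplicative weight update over K reward types.

    Each round, weight k is multiplied by exp(-eta * reward_k) and the
    weights are renormalized, so reward types that have collected less
    reward so far receive larger weight.
    """
    w = np.ones(len(reward_history[0]))
    for v in reward_history:
        w *= np.exp(-eta * np.asarray(v, dtype=float))
        w /= w.sum()
    return w

# Type 0 has lagged behind type 1, so it ends up with the larger weight.
w = mwu_weights([[0.1, 0.9], [0.2, 0.8]], eta=0.5)
```

Feeding such weights to a scalar-reward oracle focuses the next policy on the currently worst-off reward type, which is the intuition behind chasing the max-min optimum.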
Chow et al . ( 2017 ) study the case where $g$ is specialized to the Conditional Value-at-Risk objective . Hazan et al . ( 2019 ) study the case where $g$ models the entropy function over the probability distribution over the state space , in order to construct a policy which induces a distribution over the state space that is as close to the uniform distribution as possible . Miryoosefi et al . ( 2019 ) study the case of minimizing the distance between $\mathbb{E}[\bar{V}_{1:T}]$ and a target set in $\mathbb{R}^{K}$ . Lee et al . ( 2019 ) study the objective of state marginal matching , which aims to make the state marginal distribution match a given target state distribution . Pareto optimality of $\mathbb{E}[\bar{V}_{1:T}]$ and its ex-ante variants is studied in ( Mannor & Shimkin , 2004 ; Gábor et al. , 1998 ; Barrett & Narayanan , 2008 ; Van Moffaert & Nowé , 2014 ) . Lastly , a recent work Zahavy et al . ( 2021 ) provides a unifying framework that encompasses many of the previously mentioned works , by studying the problem of maximizing $g(\mathbb{E}[\bar{V}_{1:T}])$ and its ex-ante variants , where $g$ is concave and Lipschitz continuous . Our contributions , which concern the ex-post max-min fairness objective $\mathbb{E}[\min_{k \in \{1,\ldots,K\}} \bar{V}_{1:T,k}]$ and its generalization to the ex-post concave case , are crucially different from the body of works on the ex-ante case . The difference is further highlighted in the forthcoming Section 3.2 . Additionally , a body of works Altman ( 1999 ) ; Tessler et al . ( 2019 ) ; Le et al . ( 2019 ) ; Liu et al . ( 2020 ) study the setting where $g$ is a linear function , subject to the constraint that $\mathbb{E}[\bar{V}_{1:T}]$ ( or its ex-ante variants ) is contained in a convex feasible region , such as a polytope . There is another line of research works Tarbouriech & Lazaric ( 2019 ) ; Cheung ( 2019 ) ; Brantley et al . ( 2020 ) focusing on various online settings .
The works Tarbouriech & Lazaric ( 2019 ) ; Cheung ( 2019 ) focus on the ex-post setting like ours , but they crucially assume that the underlying $g$ is smooth , which is not the case for our max-min objective nor for Lipschitz continuous concave functions . In addition , the optimality gap ( quantified by the notion of regret ) degrades linearly with the number of states , which makes their application to large-scale problems challenging . Brantley et al . ( 2020 ) focus on the ex-ante setting , different from our ex-post setting , and their optimality gap also degrades linearly with the number of states . 3 MODEL . Set up . An instance of the Reinforcement Learning with Max-Min Fairness ( RL-MMF ) problem is specified by the tuple $(S , s_1 , A , T , O)$ . The set $S$ is a finite state space , and $s_1 \in S$ is the initial state . In the collection $A = \{A_s\}_{s \in S}$ , the set $A_s$ contains the actions that the agent can take when he is at state $s$ . Each set $A_s$ is finite . The quantity $T \in \mathbb{N}$ is the number of time steps . When the agent takes action $a \in A_s$ at state $s$ , he receives the array of stochastic outcomes $(s' , U(s , a))$ , governed by the outcome distribution $O(s , a)$ . For brevity , we abbreviate the relationship as $(s' , U(s , a)) \sim O(s , a)$ . The outcome $s' \in S$ is the subsequent state he transits to . The outcome $U(s , a) = (U_k(s , a))_{k=1}^{K}$ is a random vector lying in $[-1 , 1]^{K}$ almost surely . The random variable $U_k(s , a)$ is the amount of type-$k$ stochastic reward the agent receives . We allow the random variables $s' , U_1(s , a) , \ldots , U_K(s , a)$ to be arbitrarily correlated . Dynamics . At time $t \in \{1 , \ldots , T\}$ , the agent observes his current state $s_t$ . Then , he selects an action $a_t \in A_{s_t}$ . After that , he receives the stochastic feedback $(s_{t+1} , V_t(s_t , a_t)) \sim O(s_t , a_t)$ . We denote $V_t(s_t , a_t) = (V_{t,k}(s_t , a_t))_{k=1}^{K}$ , where $V_{t,k}(s_t , a_t)$ is the type-$k$ stochastic reward received at time $t$ .
The agent select the actions { at } Tt=1 with a policy π = { πt } Tt=1 , which is a collection of functions . For each t , the function πt inputs the history Ht−1 = ∪t−1q=1 { sq , aq , Vq ( sq , aq ) } and the current state { st } , and outputs at ∈ Ast . We use the notation aπt to highlight that the action is chosen under policy π . A policy π is stationary if for all t , Ht−1 , st it holds that πt ( Ht−1 , st ) = π̄ ( st ) for some function π̄ , where π̄ ( s ) ∈ As for all s. With a slight abuse of notation , we identify a stationary policy with the function π̄ . Objective . We denote V̄ π1 : t = 1t ∑t q=1 Vq ( sq , a π q ) as the time average vectorial reward during time 1 to t under policy π . The agent ’ s over-arching goal is to design a policy π that maximizes E [ gmin ( V̄ π1 : T ) ] , where gmin : RK → R is defined as gmin ( v ) = mink∈ { 1 , ... , K } vk . Denoting V̄ π1 : T , k as the k-th component of the vector V̄ π1 : T , the value gmin ( V̄ π 1 : T ) = mink V̄ π 1 : T , k is the minimum time average reward , among the reward types 1 , . . . , K. The function gmin is concave , and is 1-Lipschitz w.r.t . ‖ · ‖∞ over the domain RK . When K = 1 , the RL-MMF problem reduces to the conventional RL problem with scalar reward maximization . The case of K > 1 is more subtle . Generally , the optimizing agent needs to focus on different reward types in different time steps , contingent upon the amounts of the different reward types at the current time step . Since the max-min fairness objective could lead to an intractable optimization problem , we aim to design a near-optimal policy for the RL-MMF problem . 3.1 REGRET . We quantify the near-optimality of a policy π by the notion of regret , which is the difference between a benchmark opt ( P ( gmin ) ) and the expected reward E [ gmin ( V̄ π1 : T ) ] . Formally , the regret of a policy π in a T time step horizon is Reg ( π , T ) = opt ( P ( gmin ) ) − E [ gmin ( V̄ π1 : T ) ] . 
( 1 ) The benchmark opt ( P ( gmin ) ) is a fluid approximation to the expected optimum . To define opt ( P ( gmin ) ) , we introduce the notation p = { p ( s′|s , a ) } s∈S , a∈As , where p ( s′|s , a ) is the probability of transiting to s′ from s , a . In addition , we introduce v = { v ( s , a ) } s∈S , a∈As , where v ( s , a ) = E [ U ( s , a ) ] is the vector of the K expected rewards . The benchmark opt ( P ( gmin ) ) is the optimal value of the maximization problem P ( gmin ) . For any g : RK → R , we define P ( g ) : max x g ∑ s∈S , a∈As v ( s , a ) x ( s , a ) s.t . ∑ a∈As x ( s , a ) = ∑ s′∈S , a′∈As′ p ( s|s′ , a′ ) x ( s′ , a′ ) ∀s ∈ S ( 2a ) ∑ s∈S , a∈As x ( s , a ) = 1 ( 2b ) x ( s , a ) ≥ 0 ∀s ∈ S , a ∈ As . ( 2c ) The concave maximization problem P ( gmin ) serves as a fluid relaxation to RL-MMF . For each s ∈ S , a ∈ As , the variable x ( s , a ) can be interpreted as the frequency of the agent visiting state s and taking action a . The set of constraints ( 2a ) stipulates that the rate of transiting out of a state s is equal to the rate of transiting into the state s for each s ∈ S , while the sets of constraints ( 2b , 2c ) require that { x ( s , a ) } s∈S , a∈As forms a probability distribution over the state-action pairs . Consequently , opt ( P ( gmin ) ) is an asymptotic ( in T ) upper bound to the expected optimum . Our goal is to design a policy π such that its regret1 Reg ( T ) satisfies Reg ( T ) = opt ( P ( gmin ) ) − E [ gmin ( V̄ π1 : T ) ] ≤ D T γ ( 3 ) holds for all initial state s1 ∈ S and all T ∈ N , with parameters D , γ > 0 independent of T . We assume the access to an optimization oracle Λ , which returns a near-optimal policy given any scalar reward . For ϑ ∈ RK , define the linear function gϑ : RK → R as gϑ ( w ) = ϑ > w = ∑K k=1 ϑkwk . 
The oracle Λ inputs ϑ ∈ RK , and outputs a policy π satisfying opt ( P ( gϑ ) ) − E [ gϑ ( V̄ π1 : T ) ] = opt ( P ( gϑ ) ) − E [ ϑ > V̄ π1 : T ] ≤ Dlin T β ( 4 ) for all initial state s1 ∈ S and all T ∈ N , with parameters Dlin , β > 0 independent of T . By assuming β > 0 , we are assuming that the output policy π is near-optimal , in the sense that the difference opt ( P ( gϑ ) ) − E [ ϑ > V̄ π1 : T ] converges to 0 as T tends to the infinity . A higher β signifies a 1We omit the notation with π for brevity sake faster convergence , representing a higher degree of near-optimality . We refer to ϑ as a scalarization of v , with the resulting scalarized reward being ϑ > v ( s , a ) for each s , a . Our algorithmic frameworks involve invoking Λ as a sub-routine on different ϑ ’ s . In other words , we assume an algorithmic sub-routine that solves the underlying RL problem with scalar reward ( the case of K = 1 ) , and delivers an algorithm that ensures max-min fairness ( the case of K ≥ 1 ) . Finally , while the main text focuses on gmin , our algorithm design and analysis can be generalized to the case of concave g , as detailed in Appendix C. 3.2 COMPARISON BETWEEN MAXIMIZING E [ gMIN ( V̄ π1 : T ) ] AND gMIN ( E [ V̄ π1 : T ] ) Before introducing our algorithms , we illustrate the difference between the objectives of maximizing E [ gmin ( V̄ π1 : T ) ] and gmin ( E [ V̄ π1 : T ] ) by the deterministic instance in Figure 1 , with initial state s1 = so . ) ( 00 ) ( ( ) The figure depicts an instance with K = 2 . An arc represents an action that leads to a transition from its tail to its head . For example , the arc from so to s ` represents the action ao ` , with p ( s ` | so , ao ` ) = 1 . Likewise , the loop at s ` represents the action a `` with p ( s ` | s ` , a `` ) = 1 . Each arc is labeled with its vectorial reward , which is deterministic . For example , with certainty we have V ( so , ao ` ) = ( 0 0 ) and V ( s ` , a `` ) = ( 0 1 ) . 
Consider two stationary policies π ` , πr , defined as π ` ( sr ) = aro , π ` ( so ) = ao ` , π ` ( s ` ) = a `` and πr ( sr ) = arr , πr ( so ) = aor , πr ( s ` ) = a ` o . The policy π ` always seeks to transit to s ` , and then loop at s ` indefinitely , likewise for πr . With certainty , V̄ π ` 1 : T = ( 0 1−1/T ) , V̄ π r 1 : T = ( 1−1/T 0 ) . The objective gmin ( E [ V̄ π1 : T ] ) is maximized by choosing πran uniformly at random from the collection { π ` , πr } . We have E [ V̄ πran1 : T ] = ( 1/2−1/ ( 2T ) 1/2−1/ ( 2T ) ) , leading to the optimal value of 1/2 − 1/ ( 2T ) . More generally , existing research focuses on maximizing g ( E [ V̄ π1 : T ] ) for certain concave g , and the related objectives of maximizing g ( limT→∞ E [ V̄ π1 : T ] ) or g ( E [ ∑∞ t=1 α tVt ( st , a π t ) ] ) , where α ∈ ( 0 , 1 ) is the discount factor . In these research works , a near-optimal policy π is constructed by first generating a collection Π of stationary policies , then sampling π uniformly at random from Π. Interestingly , πran is sub-optimal for maximizing E [ gmin ( V̄ π1 : T ) ] . Indeed , Pr ( V̄ πran 1 : T = ( 0 1−1/T ) ) = Pr ( V̄ πran1 : T = ( 1−1/T 0 ) ) = 1/2 , so we have E [ gmin ( V̄ πran1 : T ) ] = 0 for all T . Now , consider the deterministic policy πsw , which first follows π ` for the first bT/2c time steps , then follows πr in the remaining dT/2e time steps . We have V̄ πsw1 : T , k ≥ 1/2 − 2/T for each k ∈ { 1 , 2 } , meaning that gmin ( V̄ πsw 1 : T ) ≥ 1/2− 2/T . Note that gmin ( E [ V̄ πsw 1 : T ] ) ≥ gmin ( E [ V̄ πran 1 : T ] ) − 2/T , so the policy πsw is also near-optimal for maximizing gmin ( E [ V̄ π1 : T ] ) . Altogether , an optimal policy for maximizing gmin ( E [ V̄ π1 : T ] ) can be far from optimal for maximizing E [ gmin ( V̄ π1 : T ) ] . In addition , for the latter objective , it is intuitive to imitate πsw , which is to partition the horizon into episodes and run a suitable stationary policy during each episode . 
A weakness to πsw is that its partitioning requires the knowledge on T . While our algorithm follows the intuition to imitate πsw , we propose an alternate partitioning that allows does not require T as an input . | The paper studies an RL problem, where the rewards are given as a vector at each time step. The goal is to find a policy that balances the total rewards on different dimensions of the reward vector, i.e., one that maximizes the total reward of the worst dimension. More specifically, the paper takes an ex-post perspective and presents an online algorithm that achieves near optimality. An offline variant of this online algorithm is also proposed to alleviate the heavy cost of online computation in some applications. The authors also conducted experiments to evaluate the proposed algorithms. | SP:8952c1e4e27c758cb98871fa4db31b71d9ee607f |
Reinforcement Learning with Ex-Post Max-Min Fairness

We consider reinforcement learning with vectorial rewards, where the agent receives a vector of $K \ge 2$ different types of rewards at each time step. The agent aims to maximize the minimum total reward among the $K$ reward types. Different from existing works that focus on maximizing the minimum expected total reward, i.e., ex-ante max-min fairness, we maximize the expected minimum total reward, i.e., ex-post max-min fairness. Through an example and numerical experiments, we show that the optimal policy for the former objective generally does not converge to optimality under the latter, even as the number of time steps $T$ grows. Our main contribution is a novel algorithm, Online-ReOpt, that achieves near-optimality under our objective, assuming an optimization oracle that returns a near-optimal policy given any scalar reward. The expected objective value under Online-ReOpt is shown to converge to the asymptotic optimum as $T$ increases. Finally, we propose offline variants to ease the burden of online computation in Online-ReOpt, and we propose generalizations from the max-min objective to concave utility maximization.

1 INTRODUCTION

The prevailing paradigm in reinforcement learning (RL) concerns the maximization of a single scalar reward. On one hand, optimizing a single scalar reward is sufficient for modeling simple tasks. On the other hand, in many complex tasks there are often multiple, potentially competing, rewards to be maximized. Expressing the objective function as a single linear combination of the rewards can be constraining and insufficiently expressive for the nature of these complex tasks. In addition, a suitable choice of the linear combination is often not clear a priori. In this work, we consider the reinforcement learning with max-min fairness (RL-MMF) problem.
The agent accumulates a vector of $K \ge 1$ time-average rewards $\bar V_{1:T} = (\bar V_{1:T,k})_{k=1}^K \in \mathbb{R}^K$ over $T$ time steps, and aims to maximize $\mathbb{E}[\min_{k \in \{1,\dots,K\}} \bar V_{1:T,k}]$. This maximization objective represents ex-post max-min fairness, in contrast to the ex-ante max-min fairness objective of maximizing $\min_{k \in \{1,\dots,K\}} \mathbb{E}[\bar V_{1:T,k}]$. Our main contributions are the design and analysis of the Online-ReOpt algorithm, which achieves near-optimality for the ex-post max-min fairness objective. More specifically, the objective under Online-ReOpt converges to the optimum as $T$ increases. Our algorithm design involves a novel adaptation of the multiplicative weight update method (Arora et al., 2012), in conjunction with a judiciously designed re-optimization schedule. The schedule ensures that the agent adapts his decision to the total vectorial reward collected at the current time point, while allowing enough time for the currently adopted policy to converge before switching to another policy. En route, we highlight crucial differences between the ex-ante and ex-post max-min fairness objectives, by showing that an optimal algorithm for the former need not converge to optimality even as $T$ increases. Finally, our results are extended to the case of maximizing $\mathbb{E}[g(\bar V_{1:T})]$, where $g$ is a Lipschitz continuous and concave reward function.

2 RELATED WORKS

The Reinforcement Learning with Max-Min Fairness (RL-MMF) problem described above is related to an emerging body of research on RL with ex-ante concave reward maximization. The class of ex-ante concave reward maximization problems includes the maximization of $g(\mathbb{E}[\bar V_{1:T}])$, as well as its ex-ante variants, including the long-term average variant $g(\mathbb{E}[\lim_{T\to\infty} \bar V_{1:T}])$ and its infinite-horizon discounted-reward variant. The function $g : \mathbb{R}^K \to \mathbb{R}$ is assumed to be concave. This class of problems is studied in the following research works.
Chow et al. (2017) study the case where $g$ is specialized to the Conditional Value-at-Risk objective. Hazan et al. (2019) study the case where $g$ models the entropy of the induced probability distribution over the state space, in order to construct a policy whose induced state distribution is as close to uniform as possible. Miryoosefi et al. (2019) study the case of minimizing the distance between $\mathbb{E}[\bar V_{1:T}]$ and a target set in $\mathbb{R}^K$. Lee et al. (2019) study the objective of state marginal matching, which aims to make the state marginal distribution match a given target state distribution. Pareto optimality of $\mathbb{E}[\bar V_{1:T}]$ and its ex-ante variants is studied in (Mannor & Shimkin, 2004; Gábor et al., 1998; Barrett & Narayanan, 2008; Van Moffaert & Nowé, 2014). Lastly, a recent work by Zahavy et al. (2021) provides a unifying framework that encompasses many of the previously mentioned works, by studying the problem of maximizing $g(\mathbb{E}[\bar V_{1:T}])$ and its ex-ante variants, where $g$ is concave and Lipschitz continuous. Our contributions, which concern the ex-post max-min fairness objective $\mathbb{E}[\min_{k \in \{1,\dots,K\}} \bar V_{1:T,k}]$ and its generalization to the ex-post concave case, are crucially different from this body of work on the ex-ante case. The difference is further highlighted in the forthcoming Section 3.2. Additionally, a body of works (Altman, 1999; Tessler et al., 2019; Le et al., 2019; Liu et al., 2020) studies the setting where $g$ is a linear function, subject to the constraint that $\mathbb{E}[\bar V_{1:T}]$ (or its ex-ante variants) is contained in a convex feasible region, such as a polytope. Another line of research (Tarbouriech & Lazaric, 2019; Cheung, 2019; Brantley et al., 2020) focuses on various online settings.
The works of Tarbouriech & Lazaric (2019) and Cheung (2019) focus on the ex-post setting like ours, but they crucially assume that the underlying $g$ is smooth, which is the case neither for our max-min objective nor for general Lipschitz continuous concave functions. In addition, their optimality gap (quantified by the notion of regret) degrades linearly with the number of states, which makes applications to large-scale problems challenging. Brantley et al. (2020) focus on the ex-ante setting, different from our ex-post setting, and their optimality gap also degrades linearly with the number of states.

3 MODEL

Set-up. An instance of the Reinforcement Learning with Max-Min Fairness (RL-MMF) problem is specified by the tuple $(\mathcal{S}, s_1, \mathcal{A}, T, \mathcal{O})$. The set $\mathcal{S}$ is a finite state space, and $s_1 \in \mathcal{S}$ is the initial state. In the collection $\mathcal{A} = \{\mathcal{A}_s\}_{s\in\mathcal{S}}$, the set $\mathcal{A}_s$ contains the actions that the agent can take when he is at state $s$. Each set $\mathcal{A}_s$ is finite. The quantity $T \in \mathbb{N}$ is the number of time steps. When the agent takes action $a \in \mathcal{A}_s$ at state $s$, he receives the array of stochastic outcomes $(s', U(s,a))$, governed by the outcome distribution $\mathcal{O}(s,a)$. For brevity, we abbreviate this relationship as $(s', U(s,a)) \sim \mathcal{O}(s,a)$. The outcome $s' \in \mathcal{S}$ is the subsequent state he transits to. The outcome $U(s,a) = (U_k(s,a))_{k=1}^K$ is a random vector lying in $[-1,1]^K$ almost surely. The random variable $U_k(s,a)$ is the amount of type-$k$ stochastic reward the agent receives. We allow the random variables $s', U_1(s,a), \dots, U_K(s,a)$ to be arbitrarily correlated.

Dynamics. At time $t \in \{1, \dots, T\}$, the agent observes his current state $s_t$. Then, he selects an action $a_t \in \mathcal{A}_{s_t}$. After that, he receives the stochastic feedback $(s_{t+1}, V_t(s_t,a_t)) \sim \mathcal{O}(s_t,a_t)$. We denote $V_t(s_t,a_t) = (V_{t,k}(s_t,a_t))_{k=1}^K$, where $V_{t,k}(s_t,a_t)$ is the type-$k$ stochastic reward received at time $t$.
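The interaction protocol above can be sketched as a small simulation loop. The instance below (state count, transition matrix, reward means, and the round-robin policy) is entirely hypothetical, chosen only to make the protocol concrete; it is not an instance from the paper.

```python
import numpy as np

class VectorRewardMDP:
    """Minimal MDP with vectorial rewards: step(s, a) samples (s', U(s, a))."""

    def __init__(self, rng=None):
        self.rng = rng if rng is not None else np.random.default_rng(0)
        # Hypothetical instance: 2 states, 2 actions, K = 2 reward types.
        # P[s, a, s'] is the transition probability p(s' | s, a).
        self.P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                           [[0.5, 0.5], [0.1, 0.9]]])
        # mean_U[s, a] is E[U(s, a)], a vector in [-1, 1]^2.
        self.mean_U = np.array([[[1.0, 0.0], [0.0, 1.0]],
                                [[0.5, 0.5], [0.0, 0.0]]])

    def step(self, s, a):
        s_next = self.rng.choice(2, p=self.P[s, a])
        # Stochastic reward vector, clipped so it lies in [-1, 1]^K almost surely.
        u = np.clip(self.mean_U[s, a] + self.rng.uniform(-0.1, 0.1, 2), -1, 1)
        return s_next, u

env = VectorRewardMDP()
s, total = 0, np.zeros(2)
T = 1000
for t in range(T):
    a = t % 2              # a trivial round-robin "policy", for illustration only
    s, u = env.step(s, a)
    total += u
v_bar = total / T          # the time-average vectorial reward, \bar V_{1:T}
```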
The agent selects the actions $\{a_t\}_{t=1}^T$ with a policy $\pi = \{\pi_t\}_{t=1}^T$, which is a collection of functions. For each $t$, the function $\pi_t$ takes as input the history $H_{t-1} = \cup_{q=1}^{t-1} \{s_q, a_q, V_q(s_q,a_q)\}$ and the current state $s_t$, and outputs $a_t \in \mathcal{A}_{s_t}$. We use the notation $a^\pi_t$ to highlight that the action is chosen under policy $\pi$. A policy $\pi$ is stationary if for all $t, H_{t-1}, s_t$ it holds that $\pi_t(H_{t-1}, s_t) = \bar\pi(s_t)$ for some function $\bar\pi$, where $\bar\pi(s) \in \mathcal{A}_s$ for all $s$. With a slight abuse of notation, we identify a stationary policy with the function $\bar\pi$.

Objective. We denote $\bar V^\pi_{1:t} = \frac{1}{t}\sum_{q=1}^t V_q(s_q, a^\pi_q)$ as the time-average vectorial reward during times $1$ to $t$ under policy $\pi$. The agent's over-arching goal is to design a policy $\pi$ that maximizes $\mathbb{E}[g_{\min}(\bar V^\pi_{1:T})]$, where $g_{\min} : \mathbb{R}^K \to \mathbb{R}$ is defined as $g_{\min}(v) = \min_{k \in \{1,\dots,K\}} v_k$. Denoting $\bar V^\pi_{1:T,k}$ as the $k$-th component of the vector $\bar V^\pi_{1:T}$, the value $g_{\min}(\bar V^\pi_{1:T}) = \min_k \bar V^\pi_{1:T,k}$ is the minimum time-average reward among the reward types $1, \dots, K$. The function $g_{\min}$ is concave, and is 1-Lipschitz w.r.t. $\|\cdot\|_\infty$ over the domain $\mathbb{R}^K$. When $K = 1$, the RL-MMF problem reduces to the conventional RL problem with scalar reward maximization. The case of $K > 1$ is more subtle. Generally, the optimizing agent needs to focus on different reward types at different time steps, contingent upon the amounts of the different reward types at the current time step. Since the max-min fairness objective could lead to an intractable optimization problem, we aim to design a near-optimal policy for the RL-MMF problem.

3.1 REGRET

We quantify the near-optimality of a policy $\pi$ by the notion of regret, which is the difference between a benchmark $\mathrm{opt}(P(g_{\min}))$ and the expected reward $\mathbb{E}[g_{\min}(\bar V^\pi_{1:T})]$. Formally, the regret of a policy $\pi$ over a horizon of $T$ time steps is

$$\mathrm{Reg}(\pi, T) = \mathrm{opt}(P(g_{\min})) - \mathbb{E}[g_{\min}(\bar V^\pi_{1:T})].$$
(1)

The benchmark $\mathrm{opt}(P(g_{\min}))$ is a fluid approximation to the expected optimum. To define $\mathrm{opt}(P(g_{\min}))$, we introduce the notation $p = \{p(s' \mid s,a)\}_{s\in\mathcal{S},\, a\in\mathcal{A}_s}$, where $p(s' \mid s,a)$ is the probability of transiting to $s'$ from $s, a$. In addition, we introduce $v = \{v(s,a)\}_{s\in\mathcal{S},\, a\in\mathcal{A}_s}$, where $v(s,a) = \mathbb{E}[U(s,a)]$ is the vector of the $K$ expected rewards. The benchmark $\mathrm{opt}(P(g_{\min}))$ is the optimal value of the maximization problem $P(g_{\min})$. For any $g : \mathbb{R}^K \to \mathbb{R}$, we define

$$P(g): \quad \max_x \; g\Big(\sum_{s\in\mathcal{S},\, a\in\mathcal{A}_s} v(s,a)\, x(s,a)\Big)$$

$$\text{s.t.} \quad \sum_{a\in\mathcal{A}_s} x(s,a) = \sum_{s'\in\mathcal{S},\, a'\in\mathcal{A}_{s'}} p(s \mid s',a')\, x(s',a') \quad \forall s\in\mathcal{S} \qquad (2a)$$

$$\sum_{s\in\mathcal{S},\, a\in\mathcal{A}_s} x(s,a) = 1 \qquad (2b)$$

$$x(s,a) \ge 0 \quad \forall s\in\mathcal{S},\, a\in\mathcal{A}_s. \qquad (2c)$$

The concave maximization problem $P(g_{\min})$ serves as a fluid relaxation to RL-MMF. For each $s\in\mathcal{S}, a\in\mathcal{A}_s$, the variable $x(s,a)$ can be interpreted as the frequency of the agent visiting state $s$ and taking action $a$. The set of constraints (2a) stipulates that the rate of transiting out of each state $s$ equals the rate of transiting into $s$, while the constraints (2b, 2c) require that $\{x(s,a)\}_{s\in\mathcal{S},\, a\in\mathcal{A}_s}$ forms a probability distribution over the state-action pairs. Consequently, $\mathrm{opt}(P(g_{\min}))$ is an asymptotic (in $T$) upper bound on the expected optimum. Our goal is to design a policy $\pi$ such that its regret¹ $\mathrm{Reg}(T)$ satisfies

$$\mathrm{Reg}(T) = \mathrm{opt}(P(g_{\min})) - \mathbb{E}[g_{\min}(\bar V^\pi_{1:T})] \le \frac{D}{T^\gamma} \qquad (3)$$

for all initial states $s_1 \in \mathcal{S}$ and all $T \in \mathbb{N}$, with parameters $D, \gamma > 0$ independent of $T$. We assume access to an optimization oracle $\Lambda$, which returns a near-optimal policy given any scalar reward. For $\vartheta \in \mathbb{R}^K$, define the linear function $g_\vartheta : \mathbb{R}^K \to \mathbb{R}$ as $g_\vartheta(w) = \vartheta^\top w = \sum_{k=1}^K \vartheta_k w_k$.
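For $g = g_{\min}$, the fluid problem $P(g_{\min})$ becomes a linear program via the standard epigraph reformulation: maximize $z$ subject to $z \le \sum_{s,a} v_k(s,a)\, x(s,a)$ for every reward type $k$, together with constraints (2a)-(2c). The sketch below (not from the paper) solves a hypothetical one-state instance with two actions and expected rewards $v(\cdot, a_1) = (1,0)$ and $v(\cdot, a_2) = (0,1)$, where the flow constraint (2a) is vacuous.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: [x(a1), x(a2), z]; we minimize -z, i.e. maximize z.
c = np.array([0.0, 0.0, -1.0])

# Epigraph constraints z <= v_k . x for each reward type k = 1, 2:
#   z - 1*x(a1) <= 0   and   z - 1*x(a2) <= 0.
A_ub = np.array([[-1.0, 0.0, 1.0],
                 [0.0, -1.0, 1.0]])
b_ub = np.zeros(2)

# Constraint (2b): x(a1) + x(a2) = 1.
A_eq = np.array([[1.0, 1.0, 0.0]])
b_eq = np.array([1.0])

# Constraint (2c): x >= 0; the epigraph variable z is free.
bounds = [(0, None), (0, None), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x1, x2, z = res.x
```

The optimal occupation measure splits time evenly between the two actions, $x = (1/2, 1/2)$ with value $\mathrm{opt}(P(g_{\min})) = 1/2$, matching the intuition that max-min fairness balances the two reward types.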
The oracle $\Lambda$ takes as input $\vartheta \in \mathbb{R}^K$, and outputs a policy $\pi$ satisfying

$$\mathrm{opt}(P(g_\vartheta)) - \mathbb{E}[g_\vartheta(\bar V^\pi_{1:T})] = \mathrm{opt}(P(g_\vartheta)) - \mathbb{E}[\vartheta^\top \bar V^\pi_{1:T}] \le \frac{D_{\mathrm{lin}}}{T^\beta} \qquad (4)$$

for all initial states $s_1 \in \mathcal{S}$ and all $T \in \mathbb{N}$, with parameters $D_{\mathrm{lin}}, \beta > 0$ independent of $T$. By assuming $\beta > 0$, we are assuming that the output policy $\pi$ is near-optimal, in the sense that the difference $\mathrm{opt}(P(g_\vartheta)) - \mathbb{E}[\vartheta^\top \bar V^\pi_{1:T}]$ converges to $0$ as $T$ tends to infinity. A higher $\beta$ signifies a faster convergence, representing a higher degree of near-optimality. We refer to $\vartheta$ as a scalarization of $v$, with the resulting scalarized reward being $\vartheta^\top v(s,a)$ for each $s, a$. Our algorithmic frameworks involve invoking $\Lambda$ as a sub-routine on different $\vartheta$'s. In other words, assuming an algorithmic sub-routine that solves the underlying RL problem with scalar reward (the case of $K = 1$), we deliver an algorithm that ensures max-min fairness (the case of $K \ge 1$). Finally, while the main text focuses on $g_{\min}$, our algorithm design and analysis can be generalized to the case of concave $g$, as detailed in Appendix C.

¹We omit the dependence on $\pi$ in the notation for brevity's sake.

3.2 COMPARISON BETWEEN MAXIMIZING $\mathbb{E}[g_{\min}(\bar V^\pi_{1:T})]$ AND $g_{\min}(\mathbb{E}[\bar V^\pi_{1:T}])$

Before introducing our algorithms, we illustrate the difference between the objectives of maximizing $\mathbb{E}[g_{\min}(\bar V^\pi_{1:T})]$ and $g_{\min}(\mathbb{E}[\bar V^\pi_{1:T}])$ with the deterministic instance in Figure 1, with initial state $s_1 = s_o$.

[Figure 1: a three-state instance with states $s_\ell$, $s_o$, $s_r$; each arc is an action, labeled with its deterministic vectorial reward.]

The figure depicts an instance with $K = 2$. An arc represents an action that leads to a transition from its tail to its head. For example, the arc from $s_o$ to $s_\ell$ represents the action $a_{o\ell}$, with $p(s_\ell \mid s_o, a_{o\ell}) = 1$. Likewise, the loop at $s_\ell$ represents the action $a_{\ell\ell}$, with $p(s_\ell \mid s_\ell, a_{\ell\ell}) = 1$. Each arc is labeled with its vectorial reward, which is deterministic. For example, with certainty we have $V(s_o, a_{o\ell}) = (0, 0)$ and $V(s_\ell, a_{\ell\ell}) = (0, 1)$.
Consider two stationary policies $\pi_\ell, \pi_r$, defined as $\pi_\ell(s_r) = a_{ro}$, $\pi_\ell(s_o) = a_{o\ell}$, $\pi_\ell(s_\ell) = a_{\ell\ell}$ and $\pi_r(s_r) = a_{rr}$, $\pi_r(s_o) = a_{or}$, $\pi_r(s_\ell) = a_{\ell o}$. The policy $\pi_\ell$ always seeks to transit to $s_\ell$, and then loops at $s_\ell$ indefinitely; likewise for $\pi_r$. With certainty, $\bar V^{\pi_\ell}_{1:T} = (0,\, 1 - 1/T)$ and $\bar V^{\pi_r}_{1:T} = (1 - 1/T,\, 0)$. The objective $g_{\min}(\mathbb{E}[\bar V^\pi_{1:T}])$ is maximized by the policy $\pi_{\mathrm{ran}}$ chosen uniformly at random from the collection $\{\pi_\ell, \pi_r\}$. We have $\mathbb{E}[\bar V^{\pi_{\mathrm{ran}}}_{1:T}] = (1/2 - 1/(2T),\, 1/2 - 1/(2T))$, leading to the optimal value of $1/2 - 1/(2T)$. More generally, existing research focuses on maximizing $g(\mathbb{E}[\bar V^\pi_{1:T}])$ for certain concave $g$, and the related objectives of maximizing $g(\lim_{T\to\infty} \mathbb{E}[\bar V^\pi_{1:T}])$ or $g(\mathbb{E}[\sum_{t=1}^\infty \alpha^t V_t(s_t, a^\pi_t)])$, where $\alpha \in (0,1)$ is the discount factor. In these research works, a near-optimal policy $\pi$ is constructed by first generating a collection $\Pi$ of stationary policies, then sampling $\pi$ uniformly at random from $\Pi$.

Interestingly, $\pi_{\mathrm{ran}}$ is sub-optimal for maximizing $\mathbb{E}[g_{\min}(\bar V^\pi_{1:T})]$. Indeed, $\Pr(\bar V^{\pi_{\mathrm{ran}}}_{1:T} = (0,\, 1-1/T)) = \Pr(\bar V^{\pi_{\mathrm{ran}}}_{1:T} = (1-1/T,\, 0)) = 1/2$, so we have $\mathbb{E}[g_{\min}(\bar V^{\pi_{\mathrm{ran}}}_{1:T})] = 0$ for all $T$. Now, consider the deterministic policy $\pi_{\mathrm{sw}}$, which follows $\pi_\ell$ for the first $\lfloor T/2 \rfloor$ time steps, then follows $\pi_r$ for the remaining $\lceil T/2 \rceil$ time steps. We have $\bar V^{\pi_{\mathrm{sw}}}_{1:T,k} \ge 1/2 - 2/T$ for each $k \in \{1, 2\}$, meaning that $g_{\min}(\bar V^{\pi_{\mathrm{sw}}}_{1:T}) \ge 1/2 - 2/T$. Note that $g_{\min}(\mathbb{E}[\bar V^{\pi_{\mathrm{sw}}}_{1:T}]) \ge g_{\min}(\mathbb{E}[\bar V^{\pi_{\mathrm{ran}}}_{1:T}]) - 2/T$, so the policy $\pi_{\mathrm{sw}}$ is also near-optimal for maximizing $g_{\min}(\mathbb{E}[\bar V^\pi_{1:T}])$. Altogether, an optimal policy for maximizing $g_{\min}(\mathbb{E}[\bar V^\pi_{1:T}])$ can be far from optimal for maximizing $\mathbb{E}[g_{\min}(\bar V^\pi_{1:T})]$. In addition, for the latter objective, it is intuitive to imitate $\pi_{\mathrm{sw}}$, which is to partition the horizon into episodes and run a suitable stationary policy during each episode.
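The claims above are easy to check numerically. The sketch below hard-codes the trajectories of $\pi_\ell$, $\pi_{\mathrm{ran}}$, and $\pi_{\mathrm{sw}}$ on the instance of Figure 1; the transit arcs $a_{\ell o}$ and $a_{or}$ are assumed to carry reward $(0, 0)$, consistent with the labeling of the non-loop arcs in the figure.

```python
import numpy as np

def run_pi_ell(T):
    """pi_ell: one step s_o -> s_ell with reward (0, 0), then loop with (0, 1)."""
    total = (T - 1) * np.array([0.0, 1.0])
    return total / T

def run_pi_sw(T):
    """pi_sw: follow pi_ell for floor(T/2) steps, then pi_r for the rest."""
    half = T // 2
    total = np.zeros(2)
    # Phase 1: s_o -> s_ell (reward (0, 0)), then half - 1 loops at s_ell.
    total += (half - 1) * np.array([0.0, 1.0])
    # Phase 2: transit s_ell -> s_o -> s_r (two steps with reward (0, 0),
    # an assumption about the transit arcs), then loop at s_r with (1, 0).
    total += (T - half - 2) * np.array([1.0, 0.0])
    return total / T

T = 1000
v_ell = run_pi_ell(T)              # equals (0, 1 - 1/T)
# pi_ran picks pi_ell or pi_r with probability 1/2 each; either way one
# reward type stays at 0, so E[g_min] = 0 for every T, even though
# g_min(E[V_bar]) = 1/2 - 1/(2T).
e_gmin_ran = 0.5 * min(v_ell) + 0.5 * min(v_ell[::-1])
gmin_sw = min(run_pi_sw(T))        # at least 1/2 - 2/T
```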
A weakness of $\pi_{\mathrm{sw}}$ is that its partitioning requires knowledge of $T$. While our algorithm follows the intuition of imitating $\pi_{\mathrm{sw}}$, we propose an alternate partitioning that does not require $T$ as an input.
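As described in Section 3.1, our frameworks invoke the scalar-reward oracle $\Lambda$ on different scalarizations $\vartheta$, with $\vartheta$ adapted by a multiplicative weight update. The toy loop below sketches only this ingredient: the "oracle" is a hypothetical one-state best response rather than a full RL solver, and the actual episode schedule and update rule of Online-ReOpt are as specified in the paper's algorithm sections, not here.

```python
import numpy as np

# Hypothetical one-state instance: two actions with deterministic
# reward vectors (1, 0) and (0, 1); K = 2 reward types.
V = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def oracle(theta):
    """Toy stand-in for Lambda: best action for the scalarized reward theta^T v."""
    return int(np.argmax(V @ theta))

eta = 0.1
w = np.ones(2)                 # multiplicative weights over the reward types
total = np.zeros(2)
T = 1000
for t in range(T):
    theta = w / w.sum()        # scalarization theta in the probability simplex
    a = oracle(theta)
    total += V[a]
    # Down-weight reward types that were just served, so that the
    # lagging type receives more attention in subsequent calls.
    w *= np.exp(-eta * V[a])

v_bar = total / T
# The time-average reward approaches (1/2, 1/2), so min_k v_bar_k is near 1/2,
# whereas committing to either single action would leave the minimum at 0.
```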
Reinforcement Learning with Ex-Post Max-Min Fairness | We consider reinforcement learning with vectorial rewards , where the agent receives a vector of K ≥ 2 different types of rewards at each time step . The agent aims to maximize the minimum total reward among the K reward types . Different from existing works that focus on maximizing the minimum expected total reward , i.e . ex-ante max-min fairness , we maximize the expected minimum total reward , i.e . ex-post max-min fairness . Through an example and numerical experiments , we show that the optimal policy for the former objective generally does not converge to optimality under the latter , even as the number of time steps T grows . Our main contribution is a novel algorithm , Online-ReOpt , that achieves near-optimality under our objective , assuming an optimization oracle that returns a near-optimal policy given any scalar reward . The expected objective value under Online-ReOpt is shown to converge to the asymptotic optimum as T increases . Finally , we propose offline variants to ease the burden of online computation in Online-ReOpt , and we propose generalizations from the max-min objective to concave utility maximization . 1 INTRODUCTION . The prevailing paradigm in reinforcement learning ( RL ) concerns the maximization of a single scalar reward . On one hand , optimizing a single scalar reward is sufficient for modeling simple tasks . On the other hand , in many complex tasks there are often multiple , potentially competing , rewards to be maximized . Expressing the objective function as a single linear combination of the rewards can be constraining and insufficiently expressive for the nature of these complex tasks . In addition , a suitable choice of the linear combination is often not clear a priori . In this work , we consider the reinforcement learning with max-min fairness ( RL-MMF ) problem . 
The agent accumulates a vector of K ≥ 1 time-average rewards V̄1 : T = ( V̄1 : T , k ) Kk=1 ∈ RK in T time steps , and aims to maximize E [ mink∈ { 1 , ... , K } V̄1 : T , k ] . The maximization objective represents ex-post max-min fairness , in contrast to the objective of ex-ante max-min fairness by maximizing mink∈ { 1 , ... , K } E [ V̄1 : T , k ] . Our main contributions are the design and analysis of the Online-ReOpt algorithm , which achieves near-optimality for the ex-post max-min fairness objective . More specifically , the objective under Online-ReOpt converges to the optimum as T increases . Our algorithm design involves a novel adaptation of the multiplicative weight update method ( Arora et al. , 2012 ) , in conjunction with a judiciously designed re-optimization schedule . The schedule ensures that the agent adapts his decision to the total vectorial reward collected at a current time point , while allowing enough time for the currently adopted policy to converge before switching to another policy . En route , we highlight crucial differences between the ex-ante and ex-post max-min fairness objectives , by showing that an optimal algorithm for the former needs not converge to the optimality even when T increases . Finally , our results are extended to the case of maximizing E [ g ( V̄1 : T ) ] , where g is a Lipschitz continuous and concave reward function . 2 RELATED WORKS . The Reinforcement Learning with Max-Min Fairness ( RL-MMF ) problem described is related to an emerging body of research on RL with ex-ante concave reward maximization . The class of ex-ante concave reward maximization problems include the maximization of g ( E [ V̄1 : T ] ) , as well as its exante variants , including the long term average variant g ( E [ limT→∞ V̄1 : T ] ) and its infinite horizon discounted reward variant . The function g : RK → R is assumed to be concave . The class of ex-ante concave reward maximization problems is studied by the following research works . 
Chow et al . ( 2017 ) study the case where g is specialized to the Conditional Value-at-Risk objective . Hazan et al . ( 2019 ) study the case when g models the entropy function over the probability distribution over the state space , in order to construct a policy which induces a distribution over the state space that is as close to the uniform distribution as possible . Miryoosefi et al . ( 2019 ) study the case of minimizing the distance between E [ V̄1 : T ] and a target set in RK . Lee et al . ( 2019 ) study the objective of state marginal matching , which aims to make the state marginal distribution match a given target state distribution . Pareto optimality of E [ V̄1 : T ] and its ex-ante variants are studied in ( Mannor & Shimkin , 2004 ; Gábor et al. , 1998 ; Barrett & Narayanan , 2008 ; Van Moffaert & Nowé , 2014 ) . Lastly , a recent work Zahavy et al . ( 2021 ) provides a unifying framework that encompasses many of the previously mentioned works , by studying the problem of maximizing g ( E [ V̄1 : T ] ) and its ex-ante variants , where g is concave and Lipschitz continuous . Our contributions , which concern the ex-post max-min fairness E [ mink∈ { 1 , ... , K } V̄1 : T , k ] and its generalization to the ex-post concave case , are crucially different from the body of works on the ex-ante case . The difference is further highlighted in the forthcoming Section 3.2 . Additionally , a body of works Altman ( 1999 ) ; Tessler et al . ( 2019 ) ; Le et al . ( 2019 ) ; Liu et al . ( 2020 ) study the setting where g is a linear function , subject to the constraint that E [ V̄1 : T ] ( or its ex-ante variants ) is contained in a convex feasible region , such as a polytope . There is another line of research works Tarbouriech & Lazaric ( 2019 ) ; Cheung ( 2019 ) ; Brantley et al . ( 2020 ) focusing on various online settings . 
The works Tarbouriech & Lazaric ( 2019 ) ; Cheung ( 2019 ) focus on the ex-post setting like ours , but they crucially assume that the underlying g is smooth , which is not the case for our max-min objective nor the case of Lipschitz continuous concave functions . In addition , the optimality gap ( quantified by the notion of regret ) degrades linearly with the number of states , which makes their applications to large scale problems challenging . Brantley et al . ( 2020 ) focus on the ex-ante setting , different from our ex-post setting , and their optimality gap also degrades linearly with the number of states . 3 MODEL . Set up . An instance of the Reinforcement Learning with Max-Min Fairness ( RL-MMF ) problem is specified by the tuple ( S , s1 , A , T , O ) . The set S is a finite state space , and s1 ∈ S is the initial state . In the collection A = { As } s∈S , the set As contains the actions that the agent can take when he is at state s. Each set As is finite . The quantity T ∈ N is the number of time steps . When the agent takes action a ∈ As at state s , he receives the array of stochastic outcomes ( s′ , U ( s , a ) ) , governed by the outcome distribution O ( s , a ) . For brevity , we abbreviate the relationship as ( s′ , U ( s , a ) ) ∼ O ( s , a ) . The outcome s′ ∈ S is the subsequent state he transits to . The outcome U ( s , a ) = ( Uk ( s , a ) ) Kk=1 is a random vector lying in [ −1 , 1 ] K almost surely . The random variable Uk ( s , a ) is the amount of type-k stochastic reward the agent receives . We allow the random variables s′ , U1 ( s , a ) , . . . UK ( s , a ) to be arbitrarily correlated . Dynamics . At time t ∈ { 1 , . . . T } , the agent observes his current state st. Then , he selects an action at ∈ Ast . After that , he receives the stochastic feedback ( st+1 , Vt ( st , at ) ) ∼ O ( st , at ) . We denote Vt ( st , at ) = ( Vt , k ( st , at ) ) K k=1 , where Vt , k ( st , at ) is the type-k stochastic reward received at time t. 
The agent selects the actions $\{a_t\}_{t=1}^T$ with a policy $\pi = \{\pi_t\}_{t=1}^T$, which is a collection of functions. For each $t$, the function $\pi_t$ inputs the history $H_{t-1} = \cup_{q=1}^{t-1}\{s_q, a_q, V_q(s_q, a_q)\}$ and the current state $s_t$, and outputs $a_t \in A_{s_t}$. We use the notation $a^\pi_t$ to highlight that the action is chosen under policy $\pi$. A policy $\pi$ is stationary if for all $t, H_{t-1}, s_t$ it holds that $\pi_t(H_{t-1}, s_t) = \bar\pi(s_t)$ for some function $\bar\pi$, where $\bar\pi(s) \in A_s$ for all $s$. With a slight abuse of notation, we identify a stationary policy with the function $\bar\pi$. Objective. We denote $\bar V^\pi_{1:t} = \frac{1}{t}\sum_{q=1}^{t} V_q(s_q, a^\pi_q)$ as the time-average vectorial reward during time 1 to $t$ under policy $\pi$. The agent's over-arching goal is to design a policy $\pi$ that maximizes $\mathbb{E}[g_{\min}(\bar V^\pi_{1:T})]$, where $g_{\min} : \mathbb{R}^K \to \mathbb{R}$ is defined as $g_{\min}(v) = \min_{k\in\{1,\dots,K\}} v_k$. Denoting $\bar V^\pi_{1:T,k}$ as the $k$-th component of the vector $\bar V^\pi_{1:T}$, the value $g_{\min}(\bar V^\pi_{1:T}) = \min_k \bar V^\pi_{1:T,k}$ is the minimum time-average reward among the reward types $1, \dots, K$. The function $g_{\min}$ is concave, and is 1-Lipschitz w.r.t. $\|\cdot\|_\infty$ over the domain $\mathbb{R}^K$. When $K = 1$, the RL-MMF problem reduces to the conventional RL problem with scalar reward maximization. The case of $K > 1$ is more subtle. Generally, the optimizing agent needs to focus on different reward types in different time steps, contingent upon the amounts of the different reward types at the current time step. Since the max-min fairness objective could lead to an intractable optimization problem, we aim to design a near-optimal policy for the RL-MMF problem. 3.1 REGRET. We quantify the near-optimality of a policy $\pi$ by the notion of regret, which is the difference between a benchmark $\mathrm{opt}(P(g_{\min}))$ and the expected reward $\mathbb{E}[g_{\min}(\bar V^\pi_{1:T})]$. Formally, the regret of a policy $\pi$ in a $T$ time step horizon is $$\mathrm{Reg}(\pi, T) = \mathrm{opt}(P(g_{\min})) - \mathbb{E}[g_{\min}(\bar V^\pi_{1:T})].$$
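As a minimal numerical illustration (our own sketch, not the paper's code), the ex-post quantity $g_{\min}(\bar V^\pi_{1:T})$ is simply the smallest component of the realized time-average vectorial reward:

```python
import numpy as np

# Minimal sketch (our own, not from the paper): the ex-post objective
# g_min applied to the realized time-average vectorial reward V̄_{1:T}.
def g_min(v):
    """Max-min fairness objective g_min(v) = min_k v_k."""
    return float(np.min(v))

def time_average_reward(rewards):
    """rewards: (T, K) array whose t-th row is the vectorial reward V_t."""
    return rewards.mean(axis=0)

# Toy trajectory with T = 4 steps and K = 2 reward types.
rewards = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 0.0],
                    [0.0, 1.0]])
v_bar = time_average_reward(rewards)   # -> [0.5, 0.5]
print(g_min(v_bar))                    # -> 0.5
```

Note that the expectation in the objective is taken over entire trajectories, so $g_{\min}$ is evaluated on each realized trajectory before averaging.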
The benchmark $\mathrm{opt}(P(g_{\min}))$ is a fluid approximation to the expected optimum. To define $\mathrm{opt}(P(g_{\min}))$, we introduce the notation $p = \{p(s'|s,a)\}_{s\in S, a\in A_s}$, where $p(s'|s,a)$ is the probability of transiting to $s'$ from $s, a$. In addition, we introduce $v = \{v(s,a)\}_{s\in S, a\in A_s}$, where $v(s,a) = \mathbb{E}[U(s,a)]$ is the vector of the $K$ expected rewards. The benchmark $\mathrm{opt}(P(g_{\min}))$ is the optimal value of the maximization problem $P(g_{\min})$. For any $g : \mathbb{R}^K \to \mathbb{R}$, we define $P(g)$ as:
$$\max_{x}\ g\Big(\sum_{s\in S,\,a\in A_s} v(s,a)\,x(s,a)\Big)$$
$$\text{s.t.}\quad \sum_{a\in A_s} x(s,a) = \sum_{s'\in S,\,a'\in A_{s'}} p(s\,|\,s',a')\,x(s',a') \quad \forall s\in S \qquad (2a)$$
$$\sum_{s\in S,\,a\in A_s} x(s,a) = 1 \qquad (2b)$$
$$x(s,a) \ge 0 \quad \forall s\in S,\ a\in A_s. \qquad (2c)$$
The concave maximization problem $P(g_{\min})$ serves as a fluid relaxation to RL-MMF. For each $s\in S, a\in A_s$, the variable $x(s,a)$ can be interpreted as the frequency of the agent visiting state $s$ and taking action $a$. The set of constraints (2a) stipulates that the rate of transiting out of a state $s$ is equal to the rate of transiting into the state $s$ for each $s\in S$, while the constraints (2b, 2c) require that $\{x(s,a)\}_{s\in S, a\in A_s}$ forms a probability distribution over the state-action pairs. Consequently, $\mathrm{opt}(P(g_{\min}))$ is an asymptotic (in $T$) upper bound to the expected optimum. Our goal is to design a policy $\pi$ such that its regret $\mathrm{Reg}(T)$ satisfies
$$\mathrm{Reg}(T) = \mathrm{opt}(P(g_{\min})) - \mathbb{E}[g_{\min}(\bar V^\pi_{1:T})] \le \frac{D}{T^{\gamma}} \qquad (3)$$
for all initial states $s_1 \in S$ and all $T \in \mathbb{N}$, with parameters $D, \gamma > 0$ independent of $T$. We assume access to an optimization oracle $\Lambda$, which returns a near-optimal policy given any scalar reward. For $\vartheta \in \mathbb{R}^K$, define the linear function $g_\vartheta : \mathbb{R}^K \to \mathbb{R}$ as $g_\vartheta(w) = \vartheta^\top w = \sum_{k=1}^K \vartheta_k w_k$.
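To make the fluid relaxation concrete, the following sketch (our own toy instance, not from the paper) solves $P(g_{\min})$ for a two-state MDP via the standard epigraph reformulation: maximize $t$ subject to $t \le \sum_{s,a} v_k(s,a)\,x(s,a)$ for each reward type $k$, together with constraints (2a)-(2c). In each state the agent may "stay" (state 0 earns type-1 reward 1, state 1 earns type-2 reward 1) or "move" to the other state with zero reward.

```python
from scipy.optimize import linprog

# Variable order: x = [x(0,stay), x(0,move), x(1,stay), x(1,move)], then t.
c = [0, 0, 0, 0, -1]                      # maximize t  <=>  minimize -t
A_ub = [[-1, 0,  0, 0, 1],                # t <= x(0,stay): type-1 reward rate
        [ 0, 0, -1, 0, 1]]                # t <= x(1,stay): type-2 reward rate
b_ub = [0, 0]
A_eq = [[0, 1, 0, -1, 0],                 # (2a) flow balance at state 0
                                          # (balance at state 1 is redundant)
        [1, 1, 1,  1, 0]]                 # (2b) x is a probability distribution
b_eq = [0, 1]
bounds = [(0, None)] * 4 + [(None, None)] # (2c) x >= 0; epigraph variable t free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun)   # opt(P(g_min)) = 0.5: split time evenly between the two loops
```

The optimum places mass $1/2$ on each "stay" loop and zero on the transit actions, which is exactly the fluid view of alternating between the two reward sources.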
The oracle $\Lambda$ inputs $\vartheta \in \mathbb{R}^K$, and outputs a policy $\pi$ satisfying
$$\mathrm{opt}(P(g_\vartheta)) - \mathbb{E}[g_\vartheta(\bar V^\pi_{1:T})] = \mathrm{opt}(P(g_\vartheta)) - \mathbb{E}[\vartheta^\top \bar V^\pi_{1:T}] \le \frac{D_{\mathrm{lin}}}{T^{\beta}} \qquad (4)$$
for all initial states $s_1 \in S$ and all $T \in \mathbb{N}$, with parameters $D_{\mathrm{lin}}, \beta > 0$ independent of $T$. By assuming $\beta > 0$, we are assuming that the output policy $\pi$ is near-optimal, in the sense that the difference $\mathrm{opt}(P(g_\vartheta)) - \mathbb{E}[\vartheta^\top \bar V^\pi_{1:T}]$ converges to 0 as $T$ tends to infinity. (We omit the dependence on $\pi$ in the notation $\mathrm{Reg}(T)$, for brevity's sake.) A higher $\beta$ signifies faster convergence, representing a higher degree of near-optimality. We refer to $\vartheta$ as a scalarization of $v$, with the resulting scalarized reward being $\vartheta^\top v(s,a)$ for each $s, a$. Our algorithmic frameworks involve invoking $\Lambda$ as a sub-routine on different $\vartheta$'s. In other words, we assume an algorithmic sub-routine that solves the underlying RL problem with scalar reward (the case of $K = 1$), and deliver an algorithm that ensures max-min fairness (the case of $K \ge 1$). Finally, while the main text focuses on $g_{\min}$, our algorithm design and analysis can be generalized to the case of concave $g$, as detailed in Appendix C. 3.2 COMPARISON BETWEEN MAXIMIZING $\mathbb{E}[g_{\min}(\bar V^\pi_{1:T})]$ AND $g_{\min}(\mathbb{E}[\bar V^\pi_{1:T}])$. Before introducing our algorithms, we illustrate the difference between the objectives of maximizing $\mathbb{E}[g_{\min}(\bar V^\pi_{1:T})]$ and $g_{\min}(\mathbb{E}[\bar V^\pi_{1:T}])$ by the deterministic instance in Figure 1, with initial state $s_1 = s_o$. [Figure 1] The figure depicts an instance with $K = 2$. An arc represents an action that leads to a transition from its tail to its head. For example, the arc from $s_o$ to $s_\ell$ represents the action $a_{o\ell}$, with $p(s_\ell \mid s_o, a_{o\ell}) = 1$. Likewise, the loop at $s_\ell$ represents the action $a_{\ell\ell}$ with $p(s_\ell \mid s_\ell, a_{\ell\ell}) = 1$. Each arc is labeled with its vectorial reward, which is deterministic. For example, with certainty we have $V(s_o, a_{o\ell}) = (0, 0)$ and $V(s_\ell, a_{\ell\ell}) = (0, 1)$.
Consider two stationary policies $\pi^\ell, \pi^r$, defined as $\pi^\ell(s_r) = a_{ro}$, $\pi^\ell(s_o) = a_{o\ell}$, $\pi^\ell(s_\ell) = a_{\ell\ell}$ and $\pi^r(s_r) = a_{rr}$, $\pi^r(s_o) = a_{or}$, $\pi^r(s_\ell) = a_{\ell o}$. The policy $\pi^\ell$ always seeks to transit to $s_\ell$, and then loops at $s_\ell$ indefinitely; likewise for $\pi^r$. With certainty, $\bar V^{\pi^\ell}_{1:T} = (0,\ 1 - 1/T)$ and $\bar V^{\pi^r}_{1:T} = (1 - 1/T,\ 0)$. The objective $g_{\min}(\mathbb{E}[\bar V^\pi_{1:T}])$ is maximized by choosing $\pi^{\mathrm{ran}}$ uniformly at random from the collection $\{\pi^\ell, \pi^r\}$. We have $\mathbb{E}[\bar V^{\pi^{\mathrm{ran}}}_{1:T}] = (1/2 - 1/(2T),\ 1/2 - 1/(2T))$, leading to the optimal value of $1/2 - 1/(2T)$. More generally, existing research focuses on maximizing $g(\mathbb{E}[\bar V^\pi_{1:T}])$ for certain concave $g$, and the related objectives of maximizing $g(\lim_{T\to\infty} \mathbb{E}[\bar V^\pi_{1:T}])$ or $g(\mathbb{E}[\sum_{t=1}^\infty \alpha^t V_t(s_t, a^\pi_t)])$, where $\alpha \in (0,1)$ is the discount factor. In these research works, a near-optimal policy $\pi$ is constructed by first generating a collection $\Pi$ of stationary policies, then sampling $\pi$ uniformly at random from $\Pi$. Interestingly, $\pi^{\mathrm{ran}}$ is sub-optimal for maximizing $\mathbb{E}[g_{\min}(\bar V^\pi_{1:T})]$. Indeed, $\Pr(\bar V^{\pi^{\mathrm{ran}}}_{1:T} = (0,\ 1-1/T)) = \Pr(\bar V^{\pi^{\mathrm{ran}}}_{1:T} = (1-1/T,\ 0)) = 1/2$, so we have $\mathbb{E}[g_{\min}(\bar V^{\pi^{\mathrm{ran}}}_{1:T})] = 0$ for all $T$. Now, consider the deterministic policy $\pi^{\mathrm{sw}}$, which first follows $\pi^\ell$ for the first $\lfloor T/2 \rfloor$ time steps, then follows $\pi^r$ in the remaining $\lceil T/2 \rceil$ time steps. We have $\bar V^{\pi^{\mathrm{sw}}}_{1:T,k} \ge 1/2 - 2/T$ for each $k \in \{1, 2\}$, meaning that $g_{\min}(\bar V^{\pi^{\mathrm{sw}}}_{1:T}) \ge 1/2 - 2/T$. Note that $g_{\min}(\mathbb{E}[\bar V^{\pi^{\mathrm{sw}}}_{1:T}]) \ge g_{\min}(\mathbb{E}[\bar V^{\pi^{\mathrm{ran}}}_{1:T}]) - 2/T$, so the policy $\pi^{\mathrm{sw}}$ is also near-optimal for maximizing $g_{\min}(\mathbb{E}[\bar V^\pi_{1:T}])$. Altogether, an optimal policy for maximizing $g_{\min}(\mathbb{E}[\bar V^\pi_{1:T}])$ can be far from optimal for maximizing $\mathbb{E}[g_{\min}(\bar V^\pi_{1:T})]$. In addition, for the latter objective, it is intuitive to imitate $\pi^{\mathrm{sw}}$, which is to partition the horizon into episodes and run a suitable stationary policy during each episode.
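The gap between the two objectives can be checked numerically. Below is our own encoding of the deterministic instance (the zero rewards on the transit arcs are assumed from the figure description), simulating $\pi^\ell$, $\pi^r$, and the switching policy $\pi^{\mathrm{sw}}$:

```python
import numpy as np

# States: 'l', 'o', 'r'.  Each (state, action) maps to (next state, reward).
MDP = {
    ('o', 'to_l'): ('l', np.array([0., 0.])),
    ('o', 'to_r'): ('r', np.array([0., 0.])),
    ('l', 'loop'): ('l', np.array([0., 1.])),
    ('r', 'loop'): ('r', np.array([1., 0.])),
    ('l', 'to_o'): ('o', np.array([0., 0.])),
    ('r', 'to_o'): ('o', np.array([0., 0.])),
}

def run(policy, T, s='o'):
    """Return the time-average vectorial reward of a deterministic policy."""
    total = np.zeros(2)
    for t in range(1, T + 1):
        s, v = MDP[(s, policy(s, t, T))]
        total += v
    return total / T

pi_l = lambda s, t, T: 'to_l' if s == 'o' else ('loop' if s == 'l' else 'to_o')
pi_r = lambda s, t, T: 'to_r' if s == 'o' else ('loop' if s == 'r' else 'to_o')
# pi_sw: follow pi_l for the first floor(T/2) steps, then pi_r.
pi_sw = lambda s, t, T: pi_l(s, t, T) if t <= T // 2 else pi_r(s, t, T)

T = 100
v_l, v_r, v_sw = run(pi_l, T), run(pi_r, T), run(pi_sw, T)
# pi_ran picks pi_l or pi_r once, each w.p. 1/2, so E[g_min] = (min(v_l)+min(v_r))/2.
print(min(v_l), min(v_r))   # both 0.0, hence E[g_min] = 0 under pi_ran
print(min(v_sw))            # 0.48 = 1/2 - 2/T under pi_sw
```

The simulation reproduces the claim: the randomized policy attains ex-post value 0, while the switching policy attains $1/2 - 2/T$.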
A weakness of $\pi^{\mathrm{sw}}$ is that its partitioning requires knowledge of $T$. While our algorithm follows the intuition of imitating $\pi^{\mathrm{sw}}$, we propose an alternate partitioning that does not require $T$ as an input. | The paper considers the ex-post max-min objective for the vector reward RL problem. The paper provides a simple example to illustrate the difference between ex-ante and ex-post objectives. An MWU-based algorithm is proposed to solve the problem with a provable regret guarantee, where the algorithm resorts to an approximately optimal policy oracle episodically. The algorithm also features a variant that can fully rely on offline solutions. Numerical experiments on a classic queue control problem demonstrate the performance of the algorithm against two existing benchmarks. | SP:8952c1e4e27c758cb98871fa4db31b71d9ee607f |
Few-shot Learning via Dirichlet Tessellation Ensemble | 1 INTRODUCTION. Recent years have witnessed a tremendous success of deep learning in a number of data-intensive applications; one critical reason is the vast collection of hand-annotated high-quality data, such as the millions of natural images for visual object recognition (Deng et al., 2009). However, in many real-world applications, such large-scale data acquisition might be difficult and comes at a premium, such as in rare disease diagnosis (Yoo et al., 2021) and drug discovery (Ma et al., 2021b; 2018). As a consequence, Few-shot Learning (FSL) has recently drawn growing interest (Wang et al., 2020). Generally, few-shot learning algorithms can be categorized into two types, namely inductive and transductive, depending on whether estimating the distribution of query samples is allowed. A typical transductive FSL algorithm learns to propagate labels among a larger pool of query samples in a semi-supervised manner (Liu et al., 2019); notwithstanding its normally higher performance, in many real-world scenarios a query sample (e.g., a patient) also comes individually and is unique, for instance, in personalized pharmacogenomics (Sharifi-Noghabi et al., 2020). Thus, in this paper we adhere to the inductive setting and make on-the-fly predictions for each newly seen sample. Few-shot learning is challenging and substantially different from conventional deep learning, and has been tackled by many researchers from a wide variety of angles. (All four authors are corresponding authors.) Despite the extensive research on the algorithmic aspects of FSL (see Sec. 2), two challenges still pose an obstacle to successful FSL: (1) how to sufficiently compensate for the data deficiency in FSL? and (2) how to make the most use of the base samples and the pre-trained model?
For the first question, data augmentation has been a successful approach to expand the size of data, either by Generative Adversarial Networks (GANs) (Goodfellow et al., 2014; Li et al., 2020b; Zhang et al., 2018) or by variational autoencoders (VAEs) (Kingma & Welling, 2014; Zhang et al., 2019; Chen et al., 2019b). However, in either way, the authenticity of the augmented data or features is not guaranteed, and the out-of-distribution hallucinated samples (Ma et al., 2019) may hinder the subsequent FSL. Recently, Liu et al. (2020b) and Ni et al. (2021) investigate support-level, query-level, task-level, and shot-level augmentation for meta-learning, but the diversity of FSL models has not been taken into consideration. For the second question, Yang et al. (2021) borrows the top-2 nearest base classes for each novel sample to calibrate its distribution and to generate more novel samples. However, when there is no proximal base class, this calibration may utterly alter the distribution. Another line of work (Sbai et al., 2020; Zhou et al., 2020) learns to select and design base classes for better discrimination on novel classes, which introduces extra training burden. As a matter of fact, we still lack a method that makes full use of the base classes and the pre-trained model effectively. In this paper, we study the FSL problem from a geometric point of view. In metric-based FSL, despite being surprisingly simple, the nearest-neighbor-like approaches, e.g., ProtoNet (Snell et al., 2017) and SimpleShot (Wang et al., 2019), have achieved remarkable performance that is even better than many sophisticatedly designed methods. Geometrically, what a nearest-neighbor-based method does, under the hood, is to partition the feature space into a Voronoi Diagram (VD) that is induced by the feature centroids of the novel classes.
Although it is highly efficient and simple, a Voronoi Diagram coarsely draws the decision boundary by linear bisectors separating two centers, and may lack the ability to subtly delineate the geometric structure that arises in FSL. To resolve this issue, we adopt a novel technique called Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2013; 2017; Huang & Xu, 2020; Huang et al., 2021), which is a recent breakthrough in computational geometry. CIVD generalizes the VD from a point-to-point distance-based diagram to a cluster-to-point influence-based structure. It enables us to determine the dominating region (or Voronoi cell) not only for a point (e.g., a class prototype) but also for a cluster of points, guaranteed to have a $(1+\epsilon)$-approximation with a nearly linear size of diagram for a wide range of locally dominating influence functions. CIVD provides us a mathematically elegant framework to depict the feature space and draw the decision boundary more precisely than the VD without losing the resistance to overfitting. Accordingly, in this paper, we show how CIVD is used to improve multiple stages of FSL and make several contributions as follows. 1. We first categorize different types of few-shot classifiers as different variants of the Voronoi Diagram: the nearest neighbor model as a Voronoi Diagram, the linear classifier as a Power Diagram, and the cosine classifier as a spherical Voronoi Diagram (Table 1). We then unify them via CIVD, which enjoys the advantages of multiple models, either parametric or nonparametric (denoted as DeepVoro--). 2. Going from cluster-to-point to cluster-to-cluster influence, we further propose the Cluster-to-cluster Voronoi Diagram (CCVD), as a natural extension of CIVD. Based on CCVD, we present DeepVoro, which enables fast geometric ensemble of a large pool of thousands of configurations for FSL. 3. Instead of using base classes for distribution calibration and data augmentation (Yang et al.
, 2021), we propose a novel surrogate representation, the collection of similarities to base classes, and thus promote DeepVoro to DeepVoro++, which integrates feature-level, transformation-level, and geometry-level heterogeneities in FSL. Extensive experiments have shown that, although a fixed feature extractor is used without independently pretrained or epoch-wise models, our method achieves new state-of-the-art results on all three benchmark datasets including mini-ImageNet, CUB, and tiered-ImageNet, and improves by up to 2.18% on 5-shot classification, 2.53% on 1-shot classification, and up to 5.55% with different network architectures. 2 RELATED WORK. Few-Shot Learning. There are a number of different lines of research dedicated to FSL. (1) Metric-based methods employ a certain distance function (cosine distance (Mangla et al., 2020; Xu et al., 2021), Euclidean distance (Wang et al., 2019; Snell et al., 2017), or Earth Mover's Distance (Zhang et al., 2020a; b)) to bypass the optimization and avoid possible overfitting. (2) Optimization-based approaches (Finn et al., 2017) manage to learn a good model initialization that accelerates the optimization in the meta-testing stage. (3) Self-supervised-based methods (Zhang et al., 2021b; Mangla et al., 2020) incorporate supervision from the data itself to learn a more robust feature extractor. (4) The ensemble method is another powerful technique that boosts performance by integrating multiple models (Ma et al., 2021a). For example, Dvornik et al. (2019) train several networks simultaneously and encourage robustness and cooperation among them. However, due to the high computational load of training deep models, this ensemble is restricted by the number of networks, which is typically < 20. In Liu et al. (2020c), instead, the ensemble consists of models learned at each epoch, which may potentially limit the diversity of ensemble members.
Geometric Understanding of Deep Learning. The geometric structure of deep neural networks is first hinted at by Raghu et al. (2017), who reveal that piecewise linear activations subdivide the input space into convex polytopes. Then, Balestriero et al. (2019) point out that the exact structure is a Power Diagram (Aurenhammer, 1987), which is subsequently applied to recurrent neural networks (Wang et al., 2018) and generative models (Balestriero et al., 2020). The Power/Voronoi Diagram subdivision, however, is not necessarily the optimal model for describing the feature space. Recently, Chen et al. (2013; 2017); Huang et al. (2021) use an influence function $F(C, z)$ to measure the joint influence of all objects in $C$ on a query $z$ to build a Cluster-induced Voronoi Diagram (CIVD). In this paper, we utilize CIVD to magnify the expressivity of geometric modeling for FSL. 3 METHODOLOGY. 3.1 PRELIMINARIES. Few-shot learning aims at discriminating between novel classes $C_{\mathrm{novel}}$ with the aid of a larger amount of samples from base classes $C_{\mathrm{base}}$, where $C_{\mathrm{novel}} \cap C_{\mathrm{base}} = \emptyset$. The whole learning process usually follows the meta-learning scheme. Formally, given a dataset of base classes $D = \{(x_i, y_i)\}$, $x_i \in \mathcal{D}$, $y_i \in C_{\mathrm{base}}$, with $\mathcal{D}$ being an arbitrary domain, e.g., natural images, a deep neural network $z = \phi(x)$, $z \in \mathbb{R}^n$, which maps from the image domain $\mathcal{D}$ to the feature domain $\mathbb{R}^n$, is trained using a standard gradient descent algorithm, after which $\phi$ is fixed as a feature extractor. This process is referred to as the meta-training stage, which squeezes out the commonsense knowledge from $D$. For a fair evaluation of the learning performance on a few samples, the meta-testing stage is typically formulated as a series of $K$-way $N$-shot tasks (episodes) $\{\mathcal{T}\}$.
Each such episode is further decomposed into a support set $S = \{(x_i, y_i)\}_{i=1}^{K\times N}$, $y_i \in C_{\mathcal{T}}$, and a query set $Q = \{(x_i, y_i)\}_{i=1}^{K\times Q}$, $y_i \in C_{\mathcal{T}}$, in which the episode classes $C_{\mathcal{T}}$ are a randomly sampled subset of $C_{\mathrm{novel}}$ with cardinality $K$, and each class contains only $N$ and $Q$ random samples in the support set and query set, respectively. For few-shot classification, we introduce here two widely used schemes as follows. For simplicity, all samples here are from $S$ and $Q$, without data augmentation applied. Nearest Neighbor Classifier (Nonparametric). In Snell et al. (2017); Wang et al. (2019), etc., a prototype $c_k$ is acquired by averaging over all supporting features for a class $k \in C_{\mathcal{T}}$:
$$c_k = \frac{1}{N} \sum_{x \in S,\, y = k} \phi(x) \qquad (1)$$
Then each query sample $x \in Q$ is classified by finding the nearest prototype: $\hat y = \arg\min_k d(z, c_k)$ with $d(z, c_k) = \|z - c_k\|_2^2$, in which we use the Euclidean distance as the distance metric $d$. Linear Classifier (Parametric). Another scheme uses a linear classifier with cross-entropy loss optimized on the supporting samples:
$$L(W, b) = \sum_{(x,y)\in S} -\log p(y \mid \phi(x); W, b) = \sum_{(x,y)\in S} -\log \frac{\exp(W_y^\top \phi(x) + b_y)}{\sum_k \exp(W_k^\top \phi(x) + b_k)} \qquad (2)$$
in which $W_k, b_k$ are the linear weight and bias for class $k$, and the predicted class for a query $x \in Q$ is $\hat y = \arg\max_k p(y \mid z; W_k, b_k)$. | The paper proposes a CIVD-based approach to few-shot learning. CIVD, cluster-induced Voronoi diagrams, are a known technique that is used to categorize / describe different types of few-shot classifiers. In the experiment section DeepVoro(++) is shown to outperform other methods on three datasets. | SP:db254200b00eef9cbe43879df29db3720cd18762 |
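The nearest-prototype scheme of Eq. (1) can be sketched in a few lines (our own toy code; the feature extractor $\phi$ is assumed to be given, so we work directly on extracted features):

```python
import numpy as np

# Minimal sketch of the nonparametric nearest-prototype classifier:
# prototypes are means of support features; queries go to the prototype
# with the smallest squared Euclidean distance.
def prototypes(support_feats, support_labels, classes):
    return np.stack([support_feats[support_labels == k].mean(axis=0)
                     for k in classes])

def predict(query_feats, protos, classes):
    # (Q, K) matrix of squared distances ||z - c_k||^2
    d2 = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return np.asarray(classes)[d2.argmin(axis=1)]

# 2-way 2-shot toy episode in a 2-d feature space.
S = np.array([[0., 0.], [0., 2.], [4., 0.], [4., 2.]])
y = np.array([0, 0, 1, 1])
Q = np.array([[1., 1.], [3., 1.]])
protos = prototypes(S, y, [0, 1])     # -> [[0, 1], [4, 1]]
print(predict(Q, protos, [0, 1]))     # -> [0 1]
```

The induced decision regions are exactly the Voronoi cells of the prototypes, which is the geometric picture the paper builds on.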
Few-shot Learning via Dirichlet Tessellation Ensemble | 1 INTRODUCTION. Recent years have witnessed a tremendous success of deep learning in a number of data-intensive applications; one critical reason is the vast collection of hand-annotated high-quality data, such as the millions of natural images for visual object recognition (Deng et al., 2009). However, in many real-world applications, such large-scale data acquisition might be difficult and comes at a premium, such as in rare disease diagnosis (Yoo et al., 2021) and drug discovery (Ma et al., 2021b; 2018). As a consequence, Few-shot Learning (FSL) has recently drawn growing interest (Wang et al., 2020). Generally, few-shot learning algorithms can be categorized into two types, namely inductive and transductive, depending on whether estimating the distribution of query samples is allowed. A typical transductive FSL algorithm learns to propagate labels among a larger pool of query samples in a semi-supervised manner (Liu et al., 2019); notwithstanding its normally higher performance, in many real-world scenarios a query sample (e.g., a patient) also comes individually and is unique, for instance, in personalized pharmacogenomics (Sharifi-Noghabi et al., 2020). Thus, in this paper we adhere to the inductive setting and make on-the-fly predictions for each newly seen sample. Few-shot learning is challenging and substantially different from conventional deep learning, and has been tackled by many researchers from a wide variety of angles. (All four authors are corresponding authors.) Despite the extensive research on the algorithmic aspects of FSL (see Sec. 2), two challenges still pose an obstacle to successful FSL: (1) how to sufficiently compensate for the data deficiency in FSL? and (2) how to make the most use of the base samples and the pre-trained model?
For the first question, data augmentation has been a successful approach to expand the size of data, either by Generative Adversarial Networks (GANs) (Goodfellow et al., 2014; Li et al., 2020b; Zhang et al., 2018) or by variational autoencoders (VAEs) (Kingma & Welling, 2014; Zhang et al., 2019; Chen et al., 2019b). However, in either way, the authenticity of the augmented data or features is not guaranteed, and the out-of-distribution hallucinated samples (Ma et al., 2019) may hinder the subsequent FSL. Recently, Liu et al. (2020b) and Ni et al. (2021) investigate support-level, query-level, task-level, and shot-level augmentation for meta-learning, but the diversity of FSL models has not been taken into consideration. For the second question, Yang et al. (2021) borrows the top-2 nearest base classes for each novel sample to calibrate its distribution and to generate more novel samples. However, when there is no proximal base class, this calibration may utterly alter the distribution. Another line of work (Sbai et al., 2020; Zhou et al., 2020) learns to select and design base classes for better discrimination on novel classes, which introduces extra training burden. As a matter of fact, we still lack a method that makes full use of the base classes and the pre-trained model effectively. In this paper, we study the FSL problem from a geometric point of view. In metric-based FSL, despite being surprisingly simple, the nearest-neighbor-like approaches, e.g., ProtoNet (Snell et al., 2017) and SimpleShot (Wang et al., 2019), have achieved remarkable performance that is even better than many sophisticatedly designed methods. Geometrically, what a nearest-neighbor-based method does, under the hood, is to partition the feature space into a Voronoi Diagram (VD) that is induced by the feature centroids of the novel classes.
Although it is highly efficient and simple, a Voronoi Diagram coarsely draws the decision boundary by linear bisectors separating two centers, and may lack the ability to subtly delineate the geometric structure that arises in FSL. To resolve this issue, we adopt a novel technique called Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2013; 2017; Huang & Xu, 2020; Huang et al., 2021), which is a recent breakthrough in computational geometry. CIVD generalizes the VD from a point-to-point distance-based diagram to a cluster-to-point influence-based structure. It enables us to determine the dominating region (or Voronoi cell) not only for a point (e.g., a class prototype) but also for a cluster of points, guaranteed to have a $(1+\epsilon)$-approximation with a nearly linear size of diagram for a wide range of locally dominating influence functions. CIVD provides us a mathematically elegant framework to depict the feature space and draw the decision boundary more precisely than the VD without losing the resistance to overfitting. Accordingly, in this paper, we show how CIVD is used to improve multiple stages of FSL and make several contributions as follows. 1. We first categorize different types of few-shot classifiers as different variants of the Voronoi Diagram: the nearest neighbor model as a Voronoi Diagram, the linear classifier as a Power Diagram, and the cosine classifier as a spherical Voronoi Diagram (Table 1). We then unify them via CIVD, which enjoys the advantages of multiple models, either parametric or nonparametric (denoted as DeepVoro--). 2. Going from cluster-to-point to cluster-to-cluster influence, we further propose the Cluster-to-cluster Voronoi Diagram (CCVD), as a natural extension of CIVD. Based on CCVD, we present DeepVoro, which enables fast geometric ensemble of a large pool of thousands of configurations for FSL. 3. Instead of using base classes for distribution calibration and data augmentation (Yang et al.
, 2021), we propose a novel surrogate representation, the collection of similarities to base classes, and thus promote DeepVoro to DeepVoro++, which integrates feature-level, transformation-level, and geometry-level heterogeneities in FSL. Extensive experiments have shown that, although a fixed feature extractor is used without independently pretrained or epoch-wise models, our method achieves new state-of-the-art results on all three benchmark datasets including mini-ImageNet, CUB, and tiered-ImageNet, and improves by up to 2.18% on 5-shot classification, 2.53% on 1-shot classification, and up to 5.55% with different network architectures. 2 RELATED WORK. Few-Shot Learning. There are a number of different lines of research dedicated to FSL. (1) Metric-based methods employ a certain distance function (cosine distance (Mangla et al., 2020; Xu et al., 2021), Euclidean distance (Wang et al., 2019; Snell et al., 2017), or Earth Mover's Distance (Zhang et al., 2020a; b)) to bypass the optimization and avoid possible overfitting. (2) Optimization-based approaches (Finn et al., 2017) manage to learn a good model initialization that accelerates the optimization in the meta-testing stage. (3) Self-supervised-based methods (Zhang et al., 2021b; Mangla et al., 2020) incorporate supervision from the data itself to learn a more robust feature extractor. (4) The ensemble method is another powerful technique that boosts performance by integrating multiple models (Ma et al., 2021a). For example, Dvornik et al. (2019) train several networks simultaneously and encourage robustness and cooperation among them. However, due to the high computational load of training deep models, this ensemble is restricted by the number of networks, which is typically < 20. In Liu et al. (2020c), instead, the ensemble consists of models learned at each epoch, which may potentially limit the diversity of ensemble members.
Geometric Understanding of Deep Learning. The geometric structure of deep neural networks is first hinted at by Raghu et al. (2017), who reveal that piecewise linear activations subdivide the input space into convex polytopes. Then, Balestriero et al. (2019) point out that the exact structure is a Power Diagram (Aurenhammer, 1987), which is subsequently applied to recurrent neural networks (Wang et al., 2018) and generative models (Balestriero et al., 2020). The Power/Voronoi Diagram subdivision, however, is not necessarily the optimal model for describing the feature space. Recently, Chen et al. (2013; 2017); Huang et al. (2021) use an influence function $F(C, z)$ to measure the joint influence of all objects in $C$ on a query $z$ to build a Cluster-induced Voronoi Diagram (CIVD). In this paper, we utilize CIVD to magnify the expressivity of geometric modeling for FSL. 3 METHODOLOGY. 3.1 PRELIMINARIES. Few-shot learning aims at discriminating between novel classes $C_{\mathrm{novel}}$ with the aid of a larger amount of samples from base classes $C_{\mathrm{base}}$, where $C_{\mathrm{novel}} \cap C_{\mathrm{base}} = \emptyset$. The whole learning process usually follows the meta-learning scheme. Formally, given a dataset of base classes $D = \{(x_i, y_i)\}$, $x_i \in \mathcal{D}$, $y_i \in C_{\mathrm{base}}$, with $\mathcal{D}$ being an arbitrary domain, e.g., natural images, a deep neural network $z = \phi(x)$, $z \in \mathbb{R}^n$, which maps from the image domain $\mathcal{D}$ to the feature domain $\mathbb{R}^n$, is trained using a standard gradient descent algorithm, after which $\phi$ is fixed as a feature extractor. This process is referred to as the meta-training stage, which squeezes out the commonsense knowledge from $D$. For a fair evaluation of the learning performance on a few samples, the meta-testing stage is typically formulated as a series of $K$-way $N$-shot tasks (episodes) $\{\mathcal{T}\}$.
Each such episode is further decomposed into a support set $S = \{(x_i, y_i)\}_{i=1}^{K\times N}$, $y_i \in C_{\mathcal{T}}$, and a query set $Q = \{(x_i, y_i)\}_{i=1}^{K\times Q}$, $y_i \in C_{\mathcal{T}}$, in which the episode classes $C_{\mathcal{T}}$ are a randomly sampled subset of $C_{\mathrm{novel}}$ with cardinality $K$, and each class contains only $N$ and $Q$ random samples in the support set and query set, respectively. For few-shot classification, we introduce here two widely used schemes as follows. For simplicity, all samples here are from $S$ and $Q$, without data augmentation applied. Nearest Neighbor Classifier (Nonparametric). In Snell et al. (2017); Wang et al. (2019), etc., a prototype $c_k$ is acquired by averaging over all supporting features for a class $k \in C_{\mathcal{T}}$:
$$c_k = \frac{1}{N} \sum_{x \in S,\, y = k} \phi(x) \qquad (1)$$
Then each query sample $x \in Q$ is classified by finding the nearest prototype: $\hat y = \arg\min_k d(z, c_k)$ with $d(z, c_k) = \|z - c_k\|_2^2$, in which we use the Euclidean distance as the distance metric $d$. Linear Classifier (Parametric). Another scheme uses a linear classifier with cross-entropy loss optimized on the supporting samples:
$$L(W, b) = \sum_{(x,y)\in S} -\log p(y \mid \phi(x); W, b) = \sum_{(x,y)\in S} -\log \frac{\exp(W_y^\top \phi(x) + b_y)}{\sum_k \exp(W_k^\top \phi(x) + b_k)} \qquad (2)$$
in which $W_k, b_k$ are the linear weight and bias for class $k$, and the predicted class for a query $x \in Q$ is $\hat y = \arg\max_k p(y \mid z; W_k, b_k)$. | The paper presents an approach for few-shot (FS) learning using Voronoi Diagrams (VD). In particular, it relates the objectives of existing FS approaches to VD, and shows how Cluster-induced Voronoi Diagrams (CIVD), a variant of VD that allows multiple centers in a cell, can be used for an FS learning ensemble method (DeepVoro). Extensive quantitative evaluations show improvements over prior work on three public datasets. | SP:db254200b00eef9cbe43879df29db3720cd18762 |
Few-shot Learning via Dirichlet Tessellation Ensemble | 1 INTRODUCTION. Recent years have witnessed a tremendous success of deep learning in a number of data-intensive applications; one critical reason is the vast collection of hand-annotated high-quality data, such as the millions of natural images for visual object recognition (Deng et al., 2009). However, in many real-world applications, such large-scale data acquisition might be difficult and comes at a premium, such as in rare disease diagnosis (Yoo et al., 2021) and drug discovery (Ma et al., 2021b; 2018). As a consequence, Few-shot Learning (FSL) has recently drawn growing interest (Wang et al., 2020). Generally, few-shot learning algorithms can be categorized into two types, namely inductive and transductive, depending on whether estimating the distribution of query samples is allowed. A typical transductive FSL algorithm learns to propagate labels among a larger pool of query samples in a semi-supervised manner (Liu et al., 2019); notwithstanding its normally higher performance, in many real-world scenarios a query sample (e.g., a patient) also comes individually and is unique, for instance, in personalized pharmacogenomics (Sharifi-Noghabi et al., 2020). Thus, in this paper we adhere to the inductive setting and make on-the-fly predictions for each newly seen sample. Few-shot learning is challenging and substantially different from conventional deep learning, and has been tackled by many researchers from a wide variety of angles. (All four authors are corresponding authors.) Despite the extensive research on the algorithmic aspects of FSL (see Sec. 2), two challenges still pose an obstacle to successful FSL: (1) how to sufficiently compensate for the data deficiency in FSL? and (2) how to make the most use of the base samples and the pre-trained model?
For the first question, data augmentation has been a successful approach to expanding the size of data, either by Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) (Li et al., 2020b; Zhang et al., 2018) or by variational autoencoders (VAEs) (Kingma & Welling, 2014) (Zhang et al., 2019; Chen et al., 2019b). However, in either way, the authenticity of the augmented data or features is not guaranteed, and the out-of-distribution hallucinated samples (Ma et al., 2019) may hinder the subsequent FSL. Recently, Liu et al. (2020b) and Ni et al. (2021) investigated support-level, query-level, task-level, and shot-level augmentation for meta-learning, but the diversity of FSL models has not been taken into consideration. For the second question, Yang et al. (2021) borrow the top-2 nearest base classes for each novel sample to calibrate its distribution and to generate more novel samples. However, when there is no proximal base class, this calibration may utterly alter the distribution. Another line of work (Sbai et al., 2020; Zhou et al., 2020) learns to select and design base classes for better discrimination on novel classes, which introduces an extra training burden. As a matter of fact, we still lack a method that makes full and effective use of the base classes and the pre-trained model. In this paper, we study the FSL problem from a geometric point of view. In metric-based FSL, despite being surprisingly simple, the nearest neighbor-like approaches, e.g., ProtoNet (Snell et al., 2017) and SimpleShot (Wang et al., 2019), have achieved remarkable performance that is even better than that of many sophisticated methods. Geometrically, what a nearest neighbor-based method does, under the hood, is partition the feature space into a Voronoi Diagram (VD) induced by the feature centroids of the novel classes.
Although highly efficient and simple, a Voronoi Diagram coarsely draws the decision boundary with linear bisectors separating two centers, and may lack the ability to subtly delineate the geometric structure that arises in FSL. To resolve this issue, we adopt a novel technique called the Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2013; 2017; Huang & Xu, 2020; Huang et al., 2021), a recent breakthrough in computational geometry. CIVD generalizes VD from a point-to-point distance-based diagram to a cluster-to-point influence-based structure. It enables us to determine the dominating region (or Voronoi cell) not only for a point (e.g., a class prototype) but also for a cluster of points, guaranteed to have a $(1 + \epsilon)$-approximation with a nearly linear-size diagram for a wide range of locally dominating influence functions. CIVD provides us with a mathematically elegant framework to depict the feature space and draw the decision boundary more precisely than VD, without losing the resistance to overfitting. Accordingly, in this paper, we show how CIVD is used to improve multiple stages of FSL and make several contributions as follows. 1. We first categorize different types of few-shot classifiers as different variants of the Voronoi Diagram: the nearest neighbor model as a Voronoi Diagram, the linear classifier as a Power Diagram, and the cosine classifier as a spherical Voronoi Diagram (Table 1). We then unify them via CIVD, which enjoys the advantages of multiple models, either parametric or nonparametric (denoted as DeepVoro--). 2. Going from cluster-to-point to cluster-to-cluster influence, we further propose the Cluster-to-cluster Voronoi Diagram (CCVD) as a natural extension of CIVD. Based on CCVD, we present DeepVoro, which enables fast geometric ensemble of a large pool of thousands of configurations for FSL. 3. Instead of using base classes for distribution calibration and data augmentation (Yang et al.
, 2021), we propose a novel surrogate representation, the collection of similarities to base classes, and thus promote DeepVoro to DeepVoro++, which integrates feature-level, transformation-level, and geometry-level heterogeneities in FSL. Extensive experiments have shown that, although a fixed feature extractor is used without independently pretrained or epoch-wise models, our method achieves new state-of-the-art results on all three benchmark datasets, including mini-ImageNet, CUB, and tiered-ImageNet, and improves by up to 2.18% on 5-shot classification, 2.53% on 1-shot classification, and up to 5.55% with different network architectures. 2 RELATED WORK. Few-Shot Learning. There are a number of different lines of research dedicated to FSL. (1) Metric-based methods employ a certain distance function (cosine distance (Mangla et al., 2020; Xu et al., 2021), Euclidean distance (Wang et al., 2019; Snell et al., 2017), or Earth Mover's Distance (Zhang et al., 2020a; b)) to bypass the optimization and avoid possible overfitting. (2) Optimization-based approaches (Finn et al., 2017) manage to learn a good model initialization that accelerates the optimization in the meta-testing stage. (3) Self-supervised-based methods (Zhang et al., 2021b; Mangla et al., 2020) incorporate supervision from the data itself to learn a more robust feature extractor. (4) Ensembling is another powerful technique that boosts performance by integrating multiple models (Ma et al., 2021a). For example, Dvornik et al. (2019) train several networks simultaneously and encourage robustness and cooperation among them. However, due to the high computation load of training deep models, this ensemble is restricted by the number of networks, which is typically < 20. In Liu et al. (2020c), instead, the ensemble consists of models learned at each epoch, which may potentially limit the diversity of ensemble members.
Geometric Understanding of Deep Learning. The geometric structure of deep neural networks was first hinted at by Raghu et al. (2017), who reveal that piecewise linear activations subdivide the input space into convex polytopes. Then, Balestriero et al. (2019) point out that the exact structure is a Power Diagram (Aurenhammer, 1987), which is subsequently applied to recurrent neural networks (Wang et al., 2018) and generative models (Balestriero et al., 2020). The Power/Voronoi Diagram subdivision, however, is not necessarily the optimal model for describing the feature space. Recently, Chen et al. (2013; 2017); Huang et al. (2021) use an influence function $F(C, z)$ to measure the joint influence of all objects in $C$ on a query $z$ to build a Cluster-induced Voronoi Diagram (CIVD). In this paper, we utilize CIVD to magnify the expressivity of geometric modeling for FSL. 3 METHODOLOGY. 3.1 PRELIMINARIES. Few-shot learning aims at discriminating between novel classes $C_{novel}$ with the aid of a larger amount of samples from base classes $C_{base}$, $C_{novel} \cap C_{base} = \emptyset$. The whole learning process usually follows the meta-learning scheme. Formally, given a dataset of base classes $D = \{(x_i, y_i)\}$, $x_i \in \mathbb{D}$, $y_i \in C_{base}$, with $\mathbb{D}$ being an arbitrary domain, e.g., natural images, a deep neural network $z = \phi(x)$, $z \in \mathbb{R}^n$, which maps from the image domain $\mathbb{D}$ to the feature domain $\mathbb{R}^n$, is trained using the standard gradient descent algorithm, after which $\phi$ is fixed as a feature extractor. This process is referred to as the meta-training stage, which squeezes out the commonsense knowledge from $D$. For a fair evaluation of the learning performance on a few samples, the meta-testing stage is typically formulated as a series of $K$-way $N$-shot tasks (episodes) $\{T\}$.
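The cluster-to-point influence function F(C, z) mentioned above admits a compact toy illustration. The particular influence used here, a sum of inverse-power distances from cluster members to the query, is only an assumed stand-in for the locally dominating influence functions studied by Chen et al.; the point is the assignment rule, where a query falls into the cell of the cluster with maximal joint influence:

```python
import numpy as np

def influence(C, z, alpha=2.0):
    """Toy joint influence F(C, z): sum of inverse-power distances from the
    members of cluster C to query z. The true family of locally dominating
    influence functions is defined in Chen et al.; this form is an assumption."""
    d = np.linalg.norm(C - z, axis=1)
    return float(np.sum(d ** -alpha))

def civd_assign(clusters, z):
    # a query falls into the cell of the cluster exerting maximal joint influence
    return int(np.argmax([influence(C, z) for C in clusters]))

C0 = np.array([[0.0, 0.0], [0.0, 1.0]])  # a cluster with two centers
C1 = np.array([[5.0, 5.0], [6.0, 5.0]])
cell_a = civd_assign([C0, C1], np.array([0.2, 0.5]))  # near C0's members
cell_b = civd_assign([C0, C1], np.array([5.5, 5.0]))  # near C1's members
```

Unlike a plain Voronoi cell, the dominating region here is shaped by an entire cluster of points at once, which is the cluster-to-point generalization the text describes.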
Each such episode is further decomposed into a support set $S = \{(x_i, y_i)\}_{i=1}^{K\times N}$, $y_i \in C_T$, and a query set $Q = \{(x_i, y_i)\}_{i=1}^{K\times Q}$, $y_i \in C_T$, in which the episode classes $C_T$ are a randomly sampled subset of $C_{novel}$ with cardinality $K$, and each class contains only $N$ and $Q$ random samples in the support set and query set, respectively. For few-shot classification, we introduce here two widely used schemes. For simplicity, all samples here are from $S$ and $Q$, without data augmentation applied. Nearest Neighbor Classifier (Nonparametric). In Snell et al. (2017); Wang et al. (2019), etc., a prototype $c_k$ is acquired by averaging over all supporting features for a class $k \in C_T$: $c_k = \frac{1}{N} \sum_{x \in S, y = k} \phi(x)$ (1). Then each query sample $x \in Q$ is classified by finding the nearest prototype: $\hat{y} = \arg\min_k d(z, c_k)$ with $d(z, c_k) = \|z - c_k\|_2^2$, in which we use the squared Euclidean distance as the distance metric $d$. Linear Classifier (Parametric). Another scheme uses a linear classifier with cross-entropy loss optimized on the supporting samples: $L(W, b) = \sum_{(x, y) \in S} -\log p(y \mid \phi(x); W, b) = \sum_{(x, y) \in S} -\log \frac{\exp(W_y^T \phi(x) + b_y)}{\sum_k \exp(W_k^T \phi(x) + b_k)}$ (2), in which $W_k$, $b_k$ are the linear weight and bias for class $k$, and the predicted class for a query $x \in Q$ is $\hat{y} = \arg\max_k p(y \mid z; W_k, b_k)$. | This paper provides a new geometric point of view for few-shot learning (FSL). In this view, the widely used ProtoNet can be regarded as a Dirichlet Tessellation (Voronoi Diagram) in the feature space. Furthermore, the authors use the recent Cluster-induced Voronoi Diagram (CIVD) for FSL and propose an ensemble approach to achieve a stronger FSL model. Extensive experimental results on three standard benchmarks demonstrate the effectiveness of the proposed method. | SP:db254200b00eef9cbe43879df29db3720cd18762 |
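The parametric scheme in Eq. (2) amounts to softmax regression on the support features; the following NumPy sketch fits it with plain gradient descent (the learning rate, step count, and toy episode are illustrative choices, not the paper's settings):

```python
import numpy as np

def fit_linear(feats, labels, n_cls, lr=0.5, steps=200):
    """Minimize L(W, b) = sum_{(x, y) in S} -log softmax(W phi(x) + b)_y  (Eq. 2)."""
    n, d = feats.shape
    W, b = np.zeros((n_cls, d)), np.zeros(n_cls)
    Y = np.eye(n_cls)[labels]                  # one-hot support labels
    for _ in range(steps):
        logits = feats @ W.T + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)      # softmax probabilities
        g = (p - Y) / n                        # gradient of the mean cross-entropy
        W -= lr * (g.T @ feats)
        b -= lr * g.sum(axis=0)
    return W, b

def predict(feats, W, b):
    # y_hat = argmax_k p(y | z; W_k, b_k)
    return (feats @ W.T + b).argmax(axis=1)

S = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])  # toy support features
y = np.array([0, 0, 1, 1])
W, b = fit_linear(S, y, n_cls=2)
pred = predict(np.array([[0.8, 0.2]]), W, b)
```

Because only $W$ and $b$ are optimized while φ stays frozen, this is exactly the "linear classifier on fixed features" setting the section describes.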
Exploring Non-Contrastive Representation Learning for Deep Clustering | 1 INTRODUCTION. Deep clustering is gaining considerable attention as it can learn representations of images and perform clustering in an end-to-end fashion. Remarkably, contrastive learning-based methods (Wang et al., 2021; Van Gansbeke et al., 2020; Li et al., 2021a; b; Tao et al., 2021; Tsai et al., 2021; Niu et al., 2021) have become the main thrust to advance the representation of images on several complex benchmark datasets, significantly contributing to the clustering performance. In addition, contrastive learning methods such as MoCo (He et al., 2020) and SimCLR (Chen et al., 2020) usually require specially designed losses (Wang et al., 2021; Li et al., 2021a; b; Tao et al., 2021; Tsai et al., 2021) or an extra pre-training stage for more discriminative representations (Van Gansbeke et al., 2020; Niu et al., 2021). Although achieving promising clustering results, contrastive learning requires a large number of negative examples to achieve instance-wise discrimination in an embedding space where all instances are well-separated. The constructed negative pairs usually require a large batch size (Chen et al., 2020), memory queue (He et al., 2020), or memory bank (Wu et al., 2018), which not only brings extra computational cost but also gives rise to the class collision issue (Saunshi et al., 2019). Here, the class collision issue refers to the situation where different instances from the same semantic class are regarded as negative pairs, hurting the representation learning for clustering. A question naturally arises: are negative examples necessary for deep clustering? Another kind of self-supervised learning is the non-contrastive methods such as BYOL (Grill et al., 2020) and SimSiam (Chen & He, 2021), which use the representations of one augmented view to predict another view.
Their success demonstrates that negative examples are not the key to avoiding representation collapse. However, to the best of our knowledge, almost all recent successful literature on deep clustering is built upon contrastive learning-based methods such as MoCo (He et al., 2020) and SimCLR (Chen et al., 2020). There is a general consensus that negative examples help stabilize the training of representation learning for deep clustering. As discussed in (Wang & Isola, 2020), the typical contrastive loss can be decomposed into two properties: 1) an alignment term to improve the closeness of positive pairs; and 2) a uniformity term to encourage instances to be uniformly distributed on a unit hypersphere. In contrast, non-contrastive methods such as BYOL only optimize the alignment term, leading to unstable training and suffering from representation collapse, which may be worsened when adding extra losses. To tackle the class collision issue, we explore non-contrastive representation learning for deep clustering, termed non-contrastive clustering or NCC, which is based on BYOL, a representative method without negative examples. First, instead of negative sampling, which is a double-edged sword, i.e., causing the class collision issue but improving the training stability, we propose to align one augmented view of the instance with the neighbors of another view in the embedding space, called the positive sampling strategy, which can avoid the class collision issue and hence improve the within-cluster compactness. Second, for the clustering task, different clusters are truly negative pairs for a contrastive loss. To this end, we propose to encourage the alignment between two augmented views of prototypes and the uniformity among all prototypes, named prototypical contrastive loss or ProtoCL, which can maximize the inter-cluster distance.
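One plausible reading of ProtoCL, alignment between the two views of each prototype plus uniformity across prototypes, is an InfoNCE-style loss in which the other K−1 prototypes act as negatives. The sketch below implements only that reading and should not be taken as the paper's exact loss; all shapes and the perturbation scale are made up:

```python
import numpy as np

def l2norm(X):
    return X / np.linalg.norm(X, axis=-1, keepdims=True)

def proto_contrastive(p_view1, p_view2, tau=0.5):
    """InfoNCE over K prototypes: prototype k of view 1 should match prototype k
    of view 2 (alignment) while repelling the other K-1 prototypes (uniformity)."""
    sims = p_view1 @ p_view2.T / tau                 # (K, K) scaled cosine similarities
    logits = sims - sims.max(axis=1, keepdims=True)  # numerical stabilization
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(logp)))            # cross-entropy, identity targets

K, d = 5, 32
rng = np.random.default_rng(3)
p1 = l2norm(rng.normal(size=(K, d)))
p2 = l2norm(p1 + 0.05 * rng.normal(size=(K, d)))     # slightly perturbed second view
loss_close = proto_contrastive(p1, p2)
loss_rand = proto_contrastive(p1, l2norm(rng.normal(size=(K, d))))
# matched prototype views yield a lower loss than unrelated prototypes
```

Because the "batch" here is the set of K prototypes rather than individual instances, the repelled pairs are genuinely different clusters, which is why no class collision arises from this term.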
Moreover, we formulate our method into an EM framework, in which we iteratively perform the E-step, estimating the pseudo-labels of instances and the distribution of prototypes via spherical k-means based on the target network, and the M-step, optimizing the online network via the proposed losses. As a result, NCC is able to form an embedding space where all clusters are well-separated and within-cluster examples are compact. The contributions of this paper are summarized as follows: • We explore non-contrastive representation learning for deep clustering by proposing non-contrastive clustering or NCC, which is based on Bootstrap Your Own Latent (BYOL), a representative method without negative examples. • We propose a positive sampling strategy to augment instance alignment by taking into account neighboring positive examples in the embedding space, which can avoid the class collision issue and hence improve the within-cluster compactness. • We propose a novel prototypical contrastive loss, or ProtoCL, which can align one augmented view of prototypes with another view and encourage uniformity among all prototypes on a unit hypersphere, hence maximizing the inter-cluster distance. • We formulate our method into an EM framework, in which we can iteratively estimate the pseudo-labels and distribution of prototypes via spherical k-means based on the target network and optimize the online network via the proposed losses. • Extensive experimental results on several benchmark datasets as well as ImageNet-1K demonstrate that NCC outperforms the existing state-of-the-art methods by a significant margin. 2 RELATED WORK. Deep clustering can be significantly advanced by discriminative representations. Examples of traditional deep clustering methods include: Xie et al. (2016); Yang et al. (2017) use autoencoders to simultaneously perform representation learning and clustering; Chang et al. (2017); Haeusser et al. (2018); Wu et al.
(2019); Ji et al. (2019) learn pair-wise relationships between original and augmented instances. However, they often suffer from inferior performance on some complex datasets such as CIFAR-20. Inspired by the success of contrastive learning, recent studies turn to exploiting the discriminative representations learned from contrastive learning to assist the downstream clustering tasks (Van Gansbeke et al., 2020; Niu et al., 2021) or to simultaneously optimize representation learning and clustering (Tao et al., 2021; Tsai et al., 2021; Li et al., 2021a; Shen et al., 2021). SCAN (Van Gansbeke et al., 2020) uses the model pre-trained by SimCLR to yield confident pseudo-labels. IDFD (Tao et al., 2021) proposes to perform both instance discrimination and feature decorrelation. GCC (Zhong et al., 2021) and WCL (Zheng et al., 2021) build a graph to label the neighbor samples as pseudo-positive examples; however, they still suffer from the class collision issue, due to the contrastive loss involved and because these pseudo-positive examples may not be truly positive. All of them are built upon the contrastive learning framework, which means that they require a large number of negative examples for training stability, inevitably giving rise to the class collision issue. Different from prior work, this paper explores the non-contrastive self-supervised methods, i.e., BYOL, to achieve both representation learning and clustering. We note that Regatti et al. (2021); Lee et al. (2020) have tried to build a clustering framework based on BYOL; however, their methods do not consider improving within-cluster compactness and maximizing inter-cluster distance like ours. Therefore, to the best of our knowledge, this is the first successful attempt to introduce non-contrastive representation learning into deep clustering that yields a substantial performance improvement over previous state-of-the-art methods.
In Appendix A, we present related work on self-supervised learning and the differences from existing methods, including CC (Li et al., 2021b), GCC (Zhong et al., 2021), WCL (Zheng et al., 2021), and PCL (Li et al., 2021a). 3 PRELIMINARY. The most successful self-supervised learning methods in recent years can be roughly divided into contrastive (Chen et al., 2020; He et al., 2020) and non-contrastive (Grill et al., 2020; Chen & He, 2021). Here, we briefly summarize their formulas and discuss their differences. Contrastive learning. Contrastive learning methods perform instance-wise discrimination (Wu et al., 2018) using the InfoNCE loss (Oord et al., 2018). Formally, assume that we have one instance $x$, its augmented version $x^+$ obtained by random data augmentation, and a set of $M$ negative examples drawn from the dataset, $\{x_1^-, x_2^-, \ldots, x_M^-\}$. Contrastive learning aims to learn an embedding function $f$ that maps $x$ onto a unit hypersphere, on which the InfoNCE loss can be defined as: $L_{contr} = -\log \frac{\exp(f(x)^T f(x^+)/\tau)}{\exp(f(x)^T f(x^+)/\tau) + \sum_{i=1}^{M} \exp(f(x)^T f(x_i^-)/\tau)}$ (1) $\approx -f(x)^T f(x^+)/\tau + \log \sum_{i=1}^{M} \exp(f(x)^T f(x_i^-)/\tau)$, (2) where the first and second terms in Eq. (2) are referred to as instance alignment and instance uniformity, respectively. Here, we assume that the output of $f(\cdot)$ is $\ell_2$-normalized; that is, the representation lies on a unit hypersphere. The temperature $\tau$ controls the concentration level of representations; please refer to (Wang & Liu, 2021) for detailed behaviors of $\tau$ in the contrastive loss. Intuitively, the InfoNCE loss aims to pull together the positive pair $(x, x^+)$ from two different data augmentations of the same instance, and push $x$ away from the $M$ negative examples of other instances. As discussed in (Wang & Isola, 2020), when $M \to \infty$, the InfoNCE loss in Eq.
(1) can be approximately decoupled into two terms, alignment and uniformity, as shown in Eq. (2). Although the alignment term pulls the positive pair together, the key to avoiding representation collapse is the uniformity term, which makes the negative examples uniformly distributed on the hypersphere. Although beneficial, the negative examples inevitably lead to the class collision issue, hurting the representation learning for clustering. Non-contrastive learning. Non-contrastive learning-based methods have shown more promising results than contrastive learning for representation learning and downstream tasks (Ericsson et al., 2021). Non-contrastive methods only optimize the alignment term in Eq. (2) to match the representations between two augmented views. Without negative examples, they leverage an online and a target network for the two views, and use a predictor network to bridge the gap between these two views. They also stop the gradient of the target network to avoid representation collapse. In particular, if $\tau = 0.5$, the loss used in (Grill et al., 2020; Chen & He, 2021) can be written as: $L_{non\text{-}contr} = -2\, g(f(x))^T f'(x^+) = \|g(f(x)) - f'(x^+)\|_2^2 + \text{const}$, (3) where $g$ is the predictor, and $f$ and $f'$ are the online and target networks, respectively; the outputs of $g(f(\cdot))$ and $f'(\cdot)$ are $\ell_2$-normalized. However, as mentioned in (Fetterman & Albrecht, 2020), non-contrastive learning methods often suffer from unstable training and rely heavily on batch statistics and hyper-parameter tuning to avoid representation collapse. Even though Grill et al. (2020); Richemond et al. (2020) have proposed to use tricks such as SyncBN (Ioffe & Szegedy, 2015) and weight normalization (Qiao et al., 2019) to alleviate this issue, the additional computation cost is significant.
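To make Eqs. (1)–(2) concrete, the sketch below evaluates the InfoNCE loss and its alignment-plus-uniformity approximation on random ℓ2-normalized toy vectors (the dimension and M = 64 are arbitrary illustrative choices):

```python
import numpy as np

def l2norm(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def info_nce(f_x, f_pos, f_negs, tau=0.1):
    """Eq. (1): -log of the positive's softmax weight among 1 positive and M negatives."""
    pos = f_x @ f_pos / tau
    negs = f_negs @ f_x / tau
    return float(-pos + np.log(np.exp(pos) + np.exp(negs).sum()))

def align_plus_uniform(f_x, f_pos, f_negs, tau=0.1):
    """Eq. (2): alignment term plus uniformity term (positive dropped inside the log)."""
    return float(-(f_x @ f_pos) / tau + np.log(np.exp(f_negs @ f_x / tau).sum()))

rng = np.random.default_rng(0)
f_x = l2norm(rng.normal(size=8))
f_pos = l2norm(f_x + 0.1 * rng.normal(size=8))   # a nearby "augmented view"
f_neg = l2norm(rng.normal(size=(64, 8)))         # M = 64 negative examples
exact = info_nce(f_x, f_pos, f_neg)
approx = align_plus_uniform(f_x, f_pos, f_neg)   # never exceeds `exact`
```

Dropping the positive inside the log makes Eq. (2) a lower bound on Eq. (1), and the gap vanishes as M grows, matching the M → ∞ discussion above.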
Without negative examples, the collapse issue could be worsened when adding additional clustering losses for the clustering task; see Fig. A1 for an analysis of applying PCL (Li et al., 2021a) to BYOL. In a nutshell, most existing successful deep clustering methods are based on contrastive learning for representation learning, giving rise to the class collision issue, while non-contrastive learning, due to its unstable training with additional losses, is not yet ready for deep clustering. To that end, we explore non-contrastive learning, i.e., BYOL, for deep clustering with a positive sampling strategy and a prototypical contrastive loss to avoid the class collision issue, improve the within-cluster compactness, and maximize the inter-cluster distance. 4 NON-CONTRASTIVE CLUSTERING. Fig. 1 presents the overall framework of the proposed NCC. Based on BYOL, NCC is comprised of three networks: an online network, a target network, and a predictor. In Sec. 4.1, we propose a positive sampling strategy to augment instance alignment to improve the within-cluster compactness. In Sec. 4.2, a prototypical contrastive loss is introduced to maximize the inter-cluster distance using the pseudo-labels from k-means clustering, which can encourage uniform representations. Finally, we formulate NCC into an EM framework to facilitate the understanding of the training procedure in Sec. 4.3. | For deep clustering, this paper explores non-contrastive representation learning based on BYOL to handle the issue of class collision caused by inaccurate negative samples. ProtoCL is proposed to encourage prototypical alignment between two augmented views and prototypical uniformity, hence maximizing the inter-cluster distance. Experiments on various datasets demonstrate the superiority of the proposed method. | SP:23ee2576623f6988dd755b76b3d152a4bf28f43e |
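The E-step described above relies on spherical k-means over ℓ2-normalized target-network embeddings to produce pseudo-labels and prototypes; a generic sketch follows (the deterministic spread-out initialization and toy 2-D data are illustrative simplifications, not the authors' implementation):

```python
import numpy as np

def l2norm(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def spherical_kmeans(X, k, iters=50):
    """Cluster the l2-normalized rows of X by cosine similarity.
    Returns (pseudo-labels, unit-norm prototypes)."""
    X = l2norm(X)
    protos = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()  # spread-out init
    for _ in range(iters):
        labels = (X @ protos.T).argmax(axis=1)    # assign to most similar prototype
        for j in range(k):
            members = X[labels == j]
            if len(members):
                protos[j] = members.mean(axis=0)  # mean direction of the cluster
        protos = l2norm(protos)                   # project back onto the hypersphere
    return labels, protos

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([5.0, 0.0], 0.3, size=(20, 2)),  # two directional clusters
               rng.normal([0.0, 5.0], 0.3, size=(20, 2))])
labels, protos = spherical_kmeans(X, k=2)
```

In the full method these pseudo-labels and prototypes would be recomputed from the target network's embeddings at each E-step, then held fixed while the online network is optimized in the M-step.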
Exploring Non-Contrastive Representation Learning for Deep Clustering | 1 INTRODUCTION. Deep clustering is gaining considerable attention as it can learn representations of images and perform clustering in an end-to-end fashion. Remarkably, contrastive learning-based methods (Wang et al., 2021; Van Gansbeke et al., 2020; Li et al., 2021a; b; Tao et al., 2021; Tsai et al., 2021; Niu et al., 2021) have become the main thrust to advance the representation of images on several complex benchmark datasets, significantly contributing to the clustering performance. In addition, contrastive learning methods such as MoCo (He et al., 2020) and SimCLR (Chen et al., 2020) usually require specially designed losses (Wang et al., 2021; Li et al., 2021a; b; Tao et al., 2021; Tsai et al., 2021) or an extra pre-training stage for more discriminative representations (Van Gansbeke et al., 2020; Niu et al., 2021). Although achieving promising clustering results, contrastive learning requires a large number of negative examples to achieve instance-wise discrimination in an embedding space where all instances are well-separated. The constructed negative pairs usually require a large batch size (Chen et al., 2020), memory queue (He et al., 2020), or memory bank (Wu et al., 2018), which not only brings extra computational cost but also gives rise to the class collision issue (Saunshi et al., 2019). Here, the class collision issue refers to the situation where different instances from the same semantic class are regarded as negative pairs, hurting the representation learning for clustering. A question naturally arises: are negative examples necessary for deep clustering? Another kind of self-supervised learning is the non-contrastive methods such as BYOL (Grill et al., 2020) and SimSiam (Chen & He, 2021), which use the representations of one augmented view to predict another view.
Their success demonstrates that negative examples are not the key to avoiding representation collapse. However, to the best of our knowledge, almost all recent successful literature on deep clustering is built upon contrastive learning-based methods such as MoCo (He et al., 2020) and SimCLR (Chen et al., 2020). There is a general consensus that negative examples help stabilize the training of representation learning for deep clustering. As discussed in (Wang & Isola, 2020), the typical contrastive loss can be decomposed into two properties: 1) an alignment term to improve the closeness of positive pairs; and 2) a uniformity term to encourage instances to be uniformly distributed on a unit hypersphere. In contrast, non-contrastive methods such as BYOL only optimize the alignment term, leading to unstable training and suffering from representation collapse, which may be worsened when adding extra losses. To tackle the class collision issue, we explore non-contrastive representation learning for deep clustering, termed non-contrastive clustering or NCC, which is based on BYOL, a representative method without negative examples. First, instead of negative sampling, which is a double-edged sword, i.e., causing the class collision issue but improving the training stability, we propose to align one augmented view of the instance with the neighbors of another view in the embedding space, called the positive sampling strategy, which can avoid the class collision issue and hence improve the within-cluster compactness. Second, for the clustering task, different clusters are truly negative pairs for a contrastive loss. To this end, we propose to encourage the alignment between two augmented views of prototypes and the uniformity among all prototypes, named prototypical contrastive loss or ProtoCL, which can maximize the inter-cluster distance.
Moreover, we formulate our method into an EM framework, in which we iteratively perform the E-step, estimating the pseudo-labels of instances and the distribution of prototypes via spherical k-means based on the target network, and the M-step, optimizing the online network via the proposed losses. As a result, NCC is able to form an embedding space where all clusters are well-separated and within-cluster examples are compact. The contributions of this paper are summarized as follows: • We explore non-contrastive representation learning for deep clustering by proposing non-contrastive clustering or NCC, which is based on Bootstrap Your Own Latent (BYOL), a representative method without negative examples. • We propose a positive sampling strategy to augment instance alignment by taking into account neighboring positive examples in the embedding space, which can avoid the class collision issue and hence improve the within-cluster compactness. • We propose a novel prototypical contrastive loss, or ProtoCL, which can align one augmented view of prototypes with another view and encourage uniformity among all prototypes on a unit hypersphere, hence maximizing the inter-cluster distance. • We formulate our method into an EM framework, in which we can iteratively estimate the pseudo-labels and distribution of prototypes via spherical k-means based on the target network and optimize the online network via the proposed losses. • Extensive experimental results on several benchmark datasets as well as ImageNet-1K demonstrate that NCC outperforms the existing state-of-the-art methods by a significant margin. 2 RELATED WORK. Deep clustering can be significantly advanced by discriminative representations. Examples of traditional deep clustering methods include: Xie et al. (2016); Yang et al. (2017) use autoencoders to simultaneously perform representation learning and clustering; Chang et al. (2017); Haeusser et al. (2018); Wu et al.
(2019); Ji et al. (2019) learn pair-wise relationships between original and augmented instances. However, they often suffer from inferior performance on some complex datasets such as CIFAR-20. Inspired by the success of contrastive learning, recent studies turn to exploiting the discriminative representations learned from contrastive learning to assist the downstream clustering tasks (Van Gansbeke et al., 2020; Niu et al., 2021) or to simultaneously optimize representation learning and clustering (Tao et al., 2021; Tsai et al., 2021; Li et al., 2021a; Shen et al., 2021). SCAN (Van Gansbeke et al., 2020) uses the model pre-trained by SimCLR to yield confident pseudo-labels. IDFD (Tao et al., 2021) proposes to perform both instance discrimination and feature decorrelation. GCC (Zhong et al., 2021) and WCL (Zheng et al., 2021) build a graph to label the neighbor samples as pseudo-positive examples; however, they still suffer from the class collision issue, due to the contrastive loss involved and because these pseudo-positive examples may not be truly positive. All of them are built upon the contrastive learning framework, which means that they require a large number of negative examples for training stability, inevitably giving rise to the class collision issue. Different from prior work, this paper explores the non-contrastive self-supervised methods, i.e., BYOL, to achieve both representation learning and clustering. We note that Regatti et al. (2021); Lee et al. (2020) have tried to build a clustering framework based on BYOL; however, their methods do not consider improving within-cluster compactness and maximizing inter-cluster distance like ours. Therefore, to the best of our knowledge, this is the first successful attempt to introduce non-contrastive representation learning into deep clustering that yields a substantial performance improvement over previous state-of-the-art methods.
In Appendix A, we present related work on self-supervised learning and the differences from existing methods, including CC (Li et al., 2021b), GCC (Zhong et al., 2021), WCL (Zheng et al., 2021), and PCL (Li et al., 2021a). 3 PRELIMINARY. The most successful self-supervised learning methods in recent years can be roughly divided into contrastive (Chen et al., 2020; He et al., 2020) and non-contrastive (Grill et al., 2020; Chen & He, 2021). Here, we briefly summarize their formulas and discuss their differences. Contrastive learning. Contrastive learning methods perform instance-wise discrimination (Wu et al., 2018) using the InfoNCE loss (Oord et al., 2018). Formally, assume that we have one instance $x$, its augmented version $x^+$ obtained by random data augmentation, and a set of $M$ negative examples drawn from the dataset, $\{x_1^-, x_2^-, \ldots, x_M^-\}$. Contrastive learning aims to learn an embedding function $f$ that maps $x$ onto a unit hypersphere, on which the InfoNCE loss can be defined as: $L_{contr} = -\log \frac{\exp(f(x)^T f(x^+)/\tau)}{\exp(f(x)^T f(x^+)/\tau) + \sum_{i=1}^{M} \exp(f(x)^T f(x_i^-)/\tau)}$ (1) $\approx -f(x)^T f(x^+)/\tau + \log \sum_{i=1}^{M} \exp(f(x)^T f(x_i^-)/\tau)$, (2) where the first and second terms in Eq. (2) are referred to as instance alignment and instance uniformity, respectively. Here, we assume that the output of $f(\cdot)$ is $\ell_2$-normalized; that is, the representation lies on a unit hypersphere. The temperature $\tau$ controls the concentration level of representations; please refer to (Wang & Liu, 2021) for detailed behaviors of $\tau$ in the contrastive loss. Intuitively, the InfoNCE loss aims to pull together the positive pair $(x, x^+)$ from two different data augmentations of the same instance, and push $x$ away from the $M$ negative examples of other instances. As discussed in (Wang & Isola, 2020), when $M \to \infty$, the InfoNCE loss in Eq.
(1) can be approximately decoupled into two terms, alignment and uniformity, as shown in Eq. (2). Although the alignment term pulls the positive pair together, the key to avoiding representation collapse is the uniformity term, which makes the negative examples uniformly distributed on the hypersphere. Although beneficial in this respect, the negative examples inevitably lead to the class collision issue, hurting representation learning for clustering. Non-contrastive learning. Non-contrastive learning-based methods have shown more promising results than contrastive learning for representation learning and downstream tasks (Ericsson et al., 2021). Non-contrastive methods only optimize the alignment term in Eq. (2) to match the representations between two augmented views. Without negative examples, they leverage an online and a target network for the two views, and use a predictor network to bridge the gap between these two views. They also stop the gradient of the target network to avoid representation collapse. In particular, if $\tau = 0.5$, the loss used in (Grill et al., 2020; Chen & He, 2021) can be written as:

$$\mathcal{L}_{\mathrm{non\text{-}contr}} = -2\, g(f(x))^\top f'(x^+) = \big\| g(f(x)) - f'(x^+) \big\|_2^2 + \mathrm{const}, \quad (3)$$

where $g$ is the predictor; $f$ and $f'$ are the online and target networks, respectively; and the outputs of $g(f(\cdot))$ and $f'(\cdot)$ are $\ell_2$-normalized. However, as mentioned in (Fetterman & Albrecht, 2020), non-contrastive learning methods often suffer from unstable training and rely heavily on batch statistics and hyper-parameter tuning to avoid representation collapse. Even though Grill et al. (2020); Richemond et al. (2020) have proposed tricks such as SyncBN (Ioffe & Szegedy, 2015) and weight normalization (Qiao et al., 2019) to alleviate this issue, the additional computation cost is significant.
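Eqs. (1)–(2) above can be made concrete with a small NumPy sketch. This is a minimal illustration with our own function names, not a training implementation; it computes the single-instance InfoNCE loss and its alignment/uniformity decomposition.

```python
import numpy as np

def info_nce(z, z_pos, z_neg, tau=0.5):
    """InfoNCE loss of Eq. (1) for a single instance.
    z: (d,) embedding f(x); z_pos: (d,) embedding f(x+);
    z_neg: (M, d) embeddings of the M negatives. All assumed l2-normalized."""
    pos = np.exp(z @ z_pos / tau)
    neg = np.exp(z_neg @ z / tau).sum()
    return -np.log(pos / (pos + neg))

def info_nce_decoupled(z, z_pos, z_neg, tau=0.5):
    """Approximation of Eq. (2): alignment term plus uniformity term."""
    alignment = -(z @ z_pos) / tau
    uniformity = np.log(np.exp(z_neg @ z / tau).sum())
    return alignment + uniformity
```

As M grows, the positive term becomes negligible in the denominator of Eq. (1), so the decoupled form approaches the exact loss from below.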
Without negative examples, the collapse issue can be worsened by adding additional clustering losses for the clustering task; see Fig. A1 for an analysis of applying PCL (Li et al., 2021a) to BYOL. In a nutshell, most existing successful deep clustering methods are based on contrastive learning for representation learning—giving rise to the class collision issue—while non-contrastive learning, due to unstable training with additional losses, is not yet ready for deep clustering. To that end, we explore non-contrastive learning, i.e., BYOL, for deep clustering with a positive sampling strategy and a prototypical contrastive loss to avoid the class collision issue, improve the within-cluster compactness, and maximize the inter-cluster distance. 4 NON-CONTRASTIVE CLUSTERING Fig. 1 presents the overall framework of the proposed NCC. Based on BYOL, NCC is comprised of three networks: an online, a target, and a predictor network. In Sec. 4.1, we propose a positive sampling strategy that augments instance alignment to improve the within-cluster compactness. In Sec. 4.2, a prototypical contrastive loss is introduced to maximize the inter-cluster distance using the pseudo-labels from k-means clustering, which encourages uniform representations. Finally, we formulate NCC into an EM framework to facilitate understanding of the training procedure in Sec. 4.3. | This paper proposes a novel method of self-supervised representation learning. To circumvent the class collision issue arising from building a large set of negative samples in contrastive-SSL-based methods, it is built on non-contrastive SSL methods, such as BYOL. The goal is to handle the weaknesses of non-contrastive SSL methods: training instability and representation collapse. The two proposed methods for alleviating these two factors are i) augmented positive sampling and ii) optimizing the uniformity of the representation space via prototypical cluster features computed from k-means clustering.
| SP:23ee2576623f6988dd755b76b3d152a4bf28f43e |
Exploring Non-Contrastive Representation Learning for Deep Clustering | 1 INTRODUCTION. Deep clustering is gaining considerable attention as it can learn representations of images and perform clustering in an end-to-end fashion. Remarkably, contrastive learning-based methods (Wang et al., 2021; Van Gansbeke et al., 2020; Li et al., 2021a;b; Tao et al., 2021; Tsai et al., 2021; Niu et al., 2021) have become the main thrust to advance the representation of images on several complex benchmark datasets, significantly contributing to clustering performance. In addition, deep clustering methods based on contrastive learning such as MoCo (He et al., 2020) and SimCLR (Chen et al., 2020) usually require specially designed losses (Wang et al., 2021; Li et al., 2021a;b; Tao et al., 2021; Tsai et al., 2021) or an extra pre-training stage for more discriminative representations (Van Gansbeke et al., 2020; Niu et al., 2021). Although achieving promising clustering results, contrastive learning requires a large number of negative examples to achieve instance-wise discrimination in an embedding space where all instances are well-separated. The constructed negative pairs usually require a large batch size (Chen et al., 2020), memory queue (He et al., 2020), or memory bank (Wu et al., 2018), which not only brings extra computational cost but also gives rise to the class collision issue (Saunshi et al., 2019). Here, the class collision issue refers to the situation where different instances from the same semantic class are regarded as negative pairs, hurting representation learning for clustering. A question naturally arises: are negative examples necessary for deep clustering? Another kind of self-supervised learning is the non-contrastive methods such as BYOL (Grill et al., 2020) and SimSiam (Chen & He, 2021), which use the representations of one augmented view to predict another view.
Their success demonstrates that negative examples are not the key to avoiding representation collapse. However, to the best of our knowledge, almost all recent successful deep clustering literature is built upon contrastive learning-based methods such as MoCo (He et al., 2020) and SimCLR (Chen et al., 2020). There is a general consensus that negative examples help stabilize the training of representation learning for deep clustering. As discussed in (Wang & Isola, 2020), the typical contrastive loss can be decomposed into two properties: 1) an alignment term that improves the closeness of positive pairs; and 2) a uniformity term that encourages instances to be uniformly distributed on a unit hypersphere. In contrast, non-contrastive methods such as BYOL only optimize the alignment term, leading to unstable training and suffering from representation collapse—which may be worsened when adding extra losses. To tackle the class collision issue, we explore non-contrastive representation learning for deep clustering, termed non-contrastive clustering or NCC, which is based on BYOL, a representative method without negative examples. First, instead of negative sampling, which is a double-edged sword, i.e., causing the class collision issue but improving training stability, we propose to align one augmented view of the instance with the neighbors of another view in the embedding space, called the positive sampling strategy, which can avoid the class collision issue and hence improve the within-cluster compactness. Second, for the clustering task, the different clusters are truly negative pairs for the contrastive loss. To this end, we propose to encourage the alignment between two augmented views of prototypes and the uniformity among all prototypes, named the prototypical contrastive loss or ProtoCL, which can maximize the inter-cluster distance.
Moreover, we formulate our method into an EM framework, in which we iteratively perform the E-step, estimating the pseudo-labels of instances and the distribution of prototypes via spherical k-means based on the target network, and the M-step, optimizing the online network via the proposed losses. As a result, NCC is able to form an embedding space where all clusters are well-separated and within-cluster examples are compact. The contributions of this paper are summarized as follows: • We explore non-contrastive representation learning for deep clustering by proposing non-contrastive clustering or NCC, which is based on Bootstrap Your Own Latent (BYOL), a representative method without negative examples. • We propose a positive sampling strategy to augment instance alignment by taking into account neighboring positive examples in the embedding space, which can avoid the class collision issue and hence improve the within-cluster compactness. • We propose a novel prototypical contrastive loss or ProtoCL, which aligns one augmented view of prototypes with another view and encourages uniformity among all prototypes on a unit hypersphere, hence maximizing the inter-cluster distance. • We formulate our method into an EM framework, in which we iteratively estimate the pseudo-labels and distribution of prototypes via spherical k-means based on the target network, and optimize the online network via the proposed losses. • Extensive experimental results on several benchmark datasets as well as ImageNet-1K demonstrate that NCC outperforms the existing state-of-the-art methods by a significant margin. 2 RELATED WORK. Deep clustering can be significantly advanced by discriminative representations. Examples of traditional deep clustering methods include: Xie et al. (2016); Yang et al. (2017) use autoencoders to simultaneously perform representation learning and clustering; Chang et al. (2017); Haeusser et al. (2018); Wu et al.
( 2019 ) ; Ji et al. (2019) learn pair-wise relationships between original and augmented instances. However, these methods often suffer from inferior performance on complex datasets such as CIFAR-20. Inspired by the success of contrastive learning, recent studies turn to exploiting the discriminative representations learned by contrastive learning to assist downstream clustering tasks (Van Gansbeke et al., 2020; Niu et al., 2021) or to simultaneously optimize representation learning and clustering (Tao et al., 2021; Tsai et al., 2021; Li et al., 2021a; Shen et al., 2021). SCAN (Van Gansbeke et al., 2020) uses a model pre-trained by SimCLR to yield confident pseudo-labels. IDFD (Tao et al., 2021) proposes to perform both instance discrimination and feature decorrelation. GCC (Zhong et al., 2021) and WCL (Zheng et al., 2021) build a graph to label neighboring samples as pseudo-positive examples; however, they still suffer from the class collision issue, due to the contrastive loss involved and to pseudo-positive examples that may not be truly positive. All of these methods are built upon the contrastive learning framework, which means that they require a large number of negative examples for training stability, inevitably giving rise to the class collision issue. Different from prior work, this paper explores non-contrastive self-supervised methods, i.e., BYOL, to achieve both representation learning and clustering. We note that Regatti et al. (2021); Lee et al. (2020) have tried to build clustering frameworks based on BYOL; however, their methods do not consider improving within-cluster compactness and maximizing inter-cluster distance as ours does. Therefore, to the best of our knowledge, this is the first successful attempt to introduce non-contrastive representation learning into deep clustering that yields a substantial performance improvement over previous state-of-the-art methods.
In Appendix A, we present related work on self-supervised learning and differences from existing methods including CC (Li et al., 2021b), GCC (Zhong et al., 2021), WCL (Zheng et al., 2021), and PCL (Li et al., 2021a). 3 PRELIMINARY. The most successful self-supervised learning methods of recent years can be roughly divided into contrastive (Chen et al., 2020; He et al., 2020) and non-contrastive (Grill et al., 2020; Chen & He, 2021) approaches. Here, we briefly summarize their formulations and discuss their differences. Contrastive learning. Contrastive learning methods perform instance-wise discrimination (Wu et al., 2018) using the InfoNCE loss (Oord et al., 2018). Formally, assume that we have one instance $x$, its augmented version $x^+$ obtained by random data augmentation, and a set of $M$ negative examples drawn from the dataset, $\{x_1^-, x_2^-, \ldots, x_M^-\}$. Contrastive learning aims to learn an embedding function $f$ that maps $x$ onto a unit hypersphere, on which the InfoNCE loss is defined as:

$$\mathcal{L}_{\mathrm{contr}} = -\log \frac{\exp(f(x)^\top f(x^+)/\tau)}{\exp(f(x)^\top f(x^+)/\tau) + \sum_{i=1}^{M} \exp(f(x)^\top f(x_i^-)/\tau)} \quad (1)$$

$$\approx -f(x)^\top f(x^+)/\tau + \log \sum_{i=1}^{M} \exp(f(x)^\top f(x_i^-)/\tau), \quad (2)$$

where the first and second terms in Eq. (2) are referred to as instance alignment and instance uniformity, respectively. Here, we assume that the output of $f(\cdot)$ is $\ell_2$-normalized, i.e., the representation lies on a unit hypersphere. The temperature $\tau$ controls the concentration level of representations; please refer to (Wang & Liu, 2021) for the detailed behavior of $\tau$ in the contrastive loss. Intuitively, the InfoNCE loss pulls together the positive pair $(x, x^+)$ formed from two different data augmentations of the same instance, and pushes $x$ away from the $M$ negative examples of other instances. As discussed in (Wang & Isola, 2020), when $M \to \infty$, the InfoNCE loss in Eq.
(1) can be approximately decoupled into two terms, alignment and uniformity, as shown in Eq. (2). Although the alignment term pulls the positive pair together, the key to avoiding representation collapse is the uniformity term, which makes the negative examples uniformly distributed on the hypersphere. Although beneficial in this respect, the negative examples inevitably lead to the class collision issue, hurting representation learning for clustering. Non-contrastive learning. Non-contrastive learning-based methods have shown more promising results than contrastive learning for representation learning and downstream tasks (Ericsson et al., 2021). Non-contrastive methods only optimize the alignment term in Eq. (2) to match the representations between two augmented views. Without negative examples, they leverage an online and a target network for the two views, and use a predictor network to bridge the gap between these two views. They also stop the gradient of the target network to avoid representation collapse. In particular, if $\tau = 0.5$, the loss used in (Grill et al., 2020; Chen & He, 2021) can be written as:

$$\mathcal{L}_{\mathrm{non\text{-}contr}} = -2\, g(f(x))^\top f'(x^+) = \big\| g(f(x)) - f'(x^+) \big\|_2^2 + \mathrm{const}, \quad (3)$$

where $g$ is the predictor; $f$ and $f'$ are the online and target networks, respectively; and the outputs of $g(f(\cdot))$ and $f'(\cdot)$ are $\ell_2$-normalized. However, as mentioned in (Fetterman & Albrecht, 2020), non-contrastive learning methods often suffer from unstable training and rely heavily on batch statistics and hyper-parameter tuning to avoid representation collapse. Even though Grill et al. (2020); Richemond et al. (2020) have proposed tricks such as SyncBN (Ioffe & Szegedy, 2015) and weight normalization (Qiao et al., 2019) to alleviate this issue, the additional computation cost is significant.
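Since both outputs in Eq. (3) are ℓ2-normalized, the equality is just the identity ‖u − v‖₂² = 2 − 2uᵀv for unit vectors (so the constant is −2). A quick numerical check with our own variable names, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=16); u /= np.linalg.norm(u)   # stands in for g(f(x)), l2-normalized
v = rng.normal(size=16); v /= np.linalg.norm(v)   # stands in for f'(x+), l2-normalized

lhs = -2.0 * (u @ v)                  # -2 g(f(x))^T f'(x+)
rhs = np.sum((u - v) ** 2) - 2.0      # ||g(f(x)) - f'(x+)||_2^2 + const, with const = -2
assert np.isclose(lhs, rhs)
```

Note that in training only the online branch receives gradients; the target network is held fixed by stop-gradient (and typically updated as an exponential moving average).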
Without negative examples, the collapse issue can be worsened by adding additional clustering losses for the clustering task; see Fig. A1 for an analysis of applying PCL (Li et al., 2021a) to BYOL. In a nutshell, most existing successful deep clustering methods are based on contrastive learning for representation learning—giving rise to the class collision issue—while non-contrastive learning, due to unstable training with additional losses, is not yet ready for deep clustering. To that end, we explore non-contrastive learning, i.e., BYOL, for deep clustering with a positive sampling strategy and a prototypical contrastive loss to avoid the class collision issue, improve the within-cluster compactness, and maximize the inter-cluster distance. 4 NON-CONTRASTIVE CLUSTERING Fig. 1 presents the overall framework of the proposed NCC. Based on BYOL, NCC is comprised of three networks: an online, a target, and a predictor network. In Sec. 4.1, we propose a positive sampling strategy that augments instance alignment to improve the within-cluster compactness. In Sec. 4.2, a prototypical contrastive loss is introduced to maximize the inter-cluster distance using the pseudo-labels from k-means clustering, which encourages uniform representations. Finally, we formulate NCC into an EM framework to facilitate understanding of the training procedure in Sec. 4.3. | The paper proposes that negative samples are risky; it therefore clusters samples in the latent space to form prototypical clusters and applies a contrastive loss only on the cluster centres rather than on the features themselves, unlike ProtoNCE, where all samples are optimized. This, however, risks clusters becoming meaningless, hence the cluster grouping loss, which maximizes the likelihood of each sample belonging to a nearby cluster under a Gaussian assumption. The method is tested on multiple datasets, outperforming all compared methods.
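The E-step described in the paper text above relies on spherical k-means over ℓ2-normalized embeddings to produce pseudo-labels and prototypes. A minimal sketch of that subroutine follows; the function name and initialization scheme are ours, not the authors' implementation:

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=20, seed=0, init_idx=None):
    """Spherical k-means: rows of X and all prototypes are kept on the
    unit hypersphere, and assignment uses cosine similarity."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    if init_idx is None:
        init_idx = rng.choice(len(X), size=k, replace=False)
    C = X[np.asarray(init_idx)].copy()
    for _ in range(n_iter):
        labels = (X @ C.T).argmax(axis=1)          # assignment: nearest prototype by cosine
        for j in range(k):                         # update: mean of members, re-normalized
            members = X[labels == j]
            if len(members):
                c = members.mean(axis=0)
                C[j] = c / np.linalg.norm(c)
    return labels, C
```

In the NCC pipeline the returned pseudo-labels would feed the prototypical contrastive loss, and the rows of `C` would serve as the prototypes.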
| SP:23ee2576623f6988dd755b76b3d152a4bf28f43e |
Variance Reduced Domain Randomization for Policy Gradient | 1 INTRODUCTION. Deep reinforcement learning (DRL) has achieved impressive success on complex sequential decision-making tasks, such as playing video games at top human level (Ye et al., 2020), continuous robot manipulation (Agarwal et al., 2021), and traffic signal control (Egeaa et al., 2021). For extensive exploration, DRL requires a massive amount of samples to train the policy, which is usually done in a simulator constructed for the task. Due to the lack of diversity, however, the policy trained by DRL tends to overfit to a specific training environment (Cobbe et al., 2019), possibly causing a severe performance degradation (i.e., reality gap) when the policy learned in simulation is transferred to the real deployment (Peng et al., 2018; Kang et al., 2019). To close this gap, domain randomization (DR) has been proposed to randomize the environment parameters that may vary the dynamics and observations in the simulator, imposing diversity on the trained policy (Tobin et al., 2017; Jiang et al., 2021). Recent success in robot control has shown that DR enables the trained policy to generalize in real deployment, even in the presence of fundamental modeling errors in the simulator (Muratore et al., 2021; Xie et al., 2020). DR has also demonstrated robustness in contending with extremely rare environments (Muratore et al., 2021). Direct application of DR to policy gradient methods can be thought of as policy optimization over multiple randomly generated environments, which incurs additional variability in the observed data. Therefore, a critical challenge is the low sample efficiency due to the extremely high variance of the gradient estimator, where the variability not only stems from the gradient approximation of the expected return from a batch of sampled trajectories (Greensmith et al.
, 2004), but is also imposed by the randomization of environment parameters. In standard DRL, a commonly used method to reduce variance is to subtract from the expected return a bias-free, action-independent baseline (Sutton & Barto, 2018); a typical choice of baseline is the state value function of the policy. When directly applied to DR, as proposed by Andrychowicz et al. (2020), this amounts to learning a state value function that predicts the expected return over all possible environments. Though it remains unbiased, such a state-dependent baseline may be a poor choice in DR, since the additional variability of randomized environments is not taken into account. In this paper, we aim to address the high variance of policy gradient methods under domain randomization, with a particular focus on reducing the additional variance imposed by the randomization of environments. Our key insight is that the additional information on varying environments can be incorporated into the baseline to further reduce this variance. Theoretically, we derive the optimal state/environment-dependent baseline, and demonstrate that it consistently improves variance reduction over baselines that are constant or use state information only. For the practical implementation to strike a tradeoff between variance reduction performance and the computational complexity of maintaining state/environment-dependent baselines, we propose a variance reduced domain randomization (VRDR) approach for policy gradient methods, which can improve sample efficiency while maintaining a reasonable number of baselines associated with a set of specifically designed environment subspaces. Our main contributions can be summarized as follows. • Theoretically optimal state/environment-dependent baseline for DR.
For the policy gradient in a variety of environments that differ in dynamics, we theoretically derive an optimal baseline that depends on both the state and the environment. We further quantify the variance reduction achieved by the proposed state/environment-dependent baseline over two common choices of baselines for DR, i.e., the constant and state-dependent baselines. • Criterion for constructing a practical state/subspace-dependent baseline. Since accurately estimating a state/environment-dependent baseline for each possible environment is infeasible in practical implementations of RL, we propose to divide the entire space of environment parameters into a limited number of subspaces, and to estimate instead the optimal baseline for every pair of state and environment subspace. We further show that the clustering of environments into subspaces should follow the policy's expected returns on these environments, which guarantees an improvement in variance reduction over the state-dependent baseline. • Variance reduced domain randomization (VRDR) with empirical evaluation. To strike a tradeoff between variance reduction performance and the computational complexity of maintaining optimal baselines, we develop a variance reduced domain randomization (VRDR) approach for policy gradient methods. Specifically, VRDR learns an acceptable number of baselines, one for each pair of state and environment subspace, where the environment subspaces are determined based on the above criterion. We then conduct experiments on six robot control tasks with their fundamental physical parameters randomized, demonstrating that VRDR can accelerate the convergence of policy training in DR settings compared to standard state-dependent baselines. In some specific tasks, VRDR can even achieve a higher reward. 2 BACKGROUND. Notation.
Under the standard reinforcement learning (RL) setting, the environment is modeled as a Markov decision process (MDP) defined by a tuple $\langle \mathcal{S}, \mathcal{A}, T_p, \Phi \rangle$, where $\mathcal{S}$ and $\mathcal{A}$ denote the state and action spaces, respectively. For the convenience of derivation, we assume that they are finite. $T_p : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ is the environment transition model, which is essentially determined by the environment parameter $p \in \mathcal{P}$, with $\mathcal{P}$ denoting the space of environment parameters. In robot control, for example, the environment parameter $p$ can be a vector containing the rolling friction of each joint and the mass of torso and arms. Throughout the rest of this paper, by environment $p$ we mean that the environment has dynamics determined by parameter $p$. $\Phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the reward function. At each time step $t$, the agent observes its state $s_t \in \mathcal{S}$ and takes an action $a_t \in \mathcal{A}$ under the guidance of policy $\pi_\theta(a_t|s_t)$ parameterized by $\theta$. It then receives a reward $r_t = \Phi(s_t, a_t)$, while the environment shifts to the next state $s_{t+1}$ with probability $T(s_{t+1}|s_t, a_t, p)$. The goal of standard RL is to search for a policy $\pi$ that maximizes the expected discounted return $\eta(\pi, p) = \mathbb{E}_\tau[R(\tau)]$ over all possible trajectories $\tau = \{s_t, a_t, r_t, s_{t+1}\}_{t=0}^{\infty}$ of states and actions, where $R(\tau) = \sum_{t=0}^{\infty} \gamma^t r_t$ and $\gamma \in [0, 1]$ is the discount factor. We can then define the state value function as $V_\pi(s, p) = \mathbb{E}[\sum_{k=0}^{\infty} \gamma^k r_{t+k} \mid s_t = s]$, the action value function as $Q_\pi(s, a, p) = \mathbb{E}[\sum_{k=0}^{\infty} \gamma^k r_{t+k} \mid s_t = s, a_t = a]$, and the advantage function as $A_\pi(s, a, p) = Q_\pi(s, a, p) - V_\pi(s, p)$. Policy gradient methods for DR. In DR, the environment parameter $p$ is a random variable that follows the probability distribution $P$ over $\mathcal{P}$. For the convenience of derivation, we assume a finite environment parameter space $\mathcal{P}$ with cardinality $|\mathcal{P}|$.
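As a small sanity check of the quantities just defined, the discounted return R(τ) = Σ_t γ^t r_t can be computed with a tiny helper (the function name is ours, for illustration only):

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """R(tau) = sum_{t>=0} gamma^t r_t for a finite-horizon trajectory."""
    gammas = gamma ** np.arange(len(rewards))
    return float(np.dot(gammas, rewards))

# gamma = 0.5 over three unit rewards: 1 + 0.5 + 0.25 = 1.75
assert np.isclose(discounted_return([1.0, 1.0, 1.0], gamma=0.5), 1.75)

# With a constant reward r at every step, the value approaches r / (1 - gamma):
assert np.isclose(discounted_return([2.0] * 1000, gamma=0.9), 2.0 / (1 - 0.9), atol=1e-4)
```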
By introducing DR, the goal of policy optimization is to maximize the expected return over all possible environment parameters: $\mathbb{E}_{p \sim P}[\eta(\pi, p)]$. The policy gradient with an action-independent baseline (Sutton & Barto, 2018) can be formulated as:

$$\nabla_\theta \mathbb{E}_{p \sim P}[\eta(\pi, p)] = \mathbb{E}_{P}\Big[\mathbb{E}_{\mu_\pi^p, \pi}\big[\nabla_\theta \log \pi_\theta(a|s)\,[Q_\pi(s, a, p) - b]\big]\Big], \quad (1)$$

where we define $\mu_\pi^p(s) = \sum_{t=0}^{\infty} \gamma^t P_\pi(s_t = s \mid p)$ as the discounted state visitation frequency, with $P_\pi(s_t = s \mid p)$ denoting the probability of shifting from the initial state $s_0$ to state $s$ after $t$ steps under policy $\pi$ in environment $p$. For convenience, we further denote $g(\theta, s, a, p) \triangleq \nabla_\theta \log \pi_\theta(a|s)\,[Q_\pi(s, a, p) - b]$, which is the gradient estimator for the state-action pair under environment $p$. As long as the baseline $b$ is action-independent, we have $\mathbb{E}_a[\nabla_\theta \log \pi_\theta(a|s)\, b] = \nabla_\theta \mathbb{E}_a[b] = 0$ and thus $\mathbb{E}_{P, \mu_\pi^p, \pi}[g] = \mathbb{E}_{P, \mu_\pi^p, \pi}[\nabla_\theta \log \pi_\theta(a|s)\, Q_\pi(s, a, p)] \triangleq \mathbb{E}[g]$, where the subscript is dropped for convenience. Therefore, by subtracting this action-independent baseline $b$, the variance of the gradient estimator $\mathrm{Var}(g) = \mathbb{E}[g^\top g] - \mathbb{E}[g]^\top \mathbb{E}[g]$ can be reduced without introducing any bias (Greensmith et al., 2004). 3 OPTIMAL BASELINES FOR DOMAIN RANDOMIZATION. To derive the optimal baselines for DR, we formulate the following optimization problem that minimizes the variance of the gradient estimate w.r.t. the baseline $b$ (Greensmith et al., 2004):

$$\min_b\; \mathbb{E}\big[\underbrace{G(a, s)\,[Q_\pi(s, a, p) - b]^2}_{g^\top g}\big] - \mathbb{E}[g]^\top \mathbb{E}[g] \;\Leftrightarrow\; \min_b\; \mathbb{E}_{P, \mu_\pi^p, \pi}\big[G(a, s)\,[Q_\pi(s, a, p) - b]^2\big], \quad (2)$$

where we denote $G(a, s) \triangleq \nabla_\theta \log \pi_\theta(a|s)^\top \nabla_\theta \log \pi_\theta(a|s)$. Since the second term $\mathbb{E}[g]^\top \mathbb{E}[g]$ is independent of $b$, it does not affect the minimizer and is thus omitted on the RHS of Eq. (2). In the following, we first derive for DR two common choices of baselines that are constant or depend on the state only.
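Two facts used above can be checked numerically on a toy single-state softmax policy: subtracting an action-independent baseline leaves the expected gradient unchanged, and the objective in Eq. (2) is a quadratic in b minimized at b* = E[GQ]/E[G]. A hedged sketch, with all names our own:

```python
import numpy as np

rng = np.random.default_rng(3)
theta = rng.normal(size=4)                     # logits of a softmax policy at one state
pi = np.exp(theta) / np.exp(theta).sum()

def score(a):
    """grad_theta log pi(a|s) for a softmax policy equals e_a - pi."""
    e = np.zeros(4); e[a] = 1.0
    return e - pi

Q = 5.0 * rng.normal(size=4)                   # arbitrary action values Q(s, a, p)

def grad(b):
    """Exact expectation over actions of grad log pi(a|s) * (Q(s,a) - b)."""
    return sum(pi[a] * score(a) * (Q[a] - b) for a in range(4))

# 1) Any action-independent baseline leaves the gradient unbiased:
assert np.allclose(grad(0.0), grad(123.0))

# 2) The variance objective E[G(a,s)(Q - b)^2] is a quadratic in b,
#    minimized at b* = E[G Q] / E[G] (the structure of Eq. (3)):
G = np.array([score(a) @ score(a) for a in range(4)])   # G(a,s) = ||grad log pi||^2
b_star = (pi * G * Q).sum() / (pi * G).sum()

def var_term(b):
    return (pi * G * (Q - b) ** 2).sum()

assert var_term(b_star) <= var_term(0.0) + 1e-12
assert var_term(b_star) <= var_term(b_star + 1.0) + 1e-12
```

The same structure extends to DR by additionally averaging the numerator and denominator over environments p, which is exactly how the paper's optimal constant baseline is formed.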
We then propose the optimal state/environment-dependent baseline, and show its ability to further reduce the variance incurred by the randomization of environment parameters. 3.1 TWO COMMON CHOICES OF ACTION-INDEPENDENT BASELINES. Optimal constant baseline. We first consider a constant baseline $b_c$ that depends neither on the action nor on the state. The optimization problem in Eq. (2) then becomes minimizing the expectation of a quadratic function, which can be proved to be convex. Referring to the detailed derivation in Appendix A.1, the optimal constant baseline $b_c^*$ for DR-based policy gradient methods is:

$$b_c^* = \frac{\mathbb{E}_{P, \mu_\pi^p, \pi}\big[G(a, s)\, Q_\pi(s, a, p)\big]}{\mathbb{E}_{P, \mu_\pi^p, \pi}\big[G(a, s)\big]} = \mathbb{E}_{P, \mu_\pi^p}\big[V'(s, p)\big]. \quad (3)$$

This optimal baseline $b_c^*$ can be understood as the expected state value function $V'(s, p)$ over all states and possible environments, where the state value function $V'(s, p)$ is computed as a weighted average of the action value function: $V'(s, p) = \mathbb{E}_\pi\Big[\frac{G(a, s)}{\mathbb{E}_{P, \mu_\pi^p, \pi}[G(a, s)]}\, Q_\pi(s, a, p)\Big]$. Optimal state-dependent baseline. As in standard RL, a state-dependent baseline in the DR setting can be considered an expected value function $b(s) = \mathbb{E}_P[\mathbb{E}_\pi[Q_\pi(s, a, p)]]$ for each state over all possible environments, which predicts the expected return over the distribution of dynamics caused by the variation of environment parameters. Still referring to Appendix A.1, the optimal state-dependent baseline for DR is given as follows:

$$b^*(s) = \frac{\mathbb{E}_{P(p|s)}\,\mathbb{E}_\pi\big[G(a, s)\, Q_\pi(s, a, p)\big]}{\mathbb{E}_\pi\big[G(a, s)\big]}. \quad (4)$$

| This paper tackles the variance of policy gradient due to the domain randomization used in RL in simulations. The authors prove that the policy gradient variance can be further reduced by learning a state-dependent baseline for each environment parameter, compared to a single shared state-dependent baseline.
The authors then develop a practical algorithm based on this analysis and study its properties. The algorithm is implemented and tested on six robot control tasks, where it consistently accelerates policy training. | SP:a559d17fc68caba16f78b1d3c749e9e351415f6e
Variance Reduced Domain Randomization for Policy Gradient | 1 INTRODUCTION. Deep reinforcement learning (DRL) has achieved impressive success on complex sequential decision-making tasks, such as playing video games at top human level (Ye et al., 2020), continuous robot manipulation (Agarwal et al., 2021), and traffic signal control (Egeaa et al., 2021). For extensive exploration, DRL requires a massive amount of samples to train the policy, which is usually done in a simulator constructed for the task. Due to the lack of diversity, however, the policy trained by DRL tends to overfit to a specific training environment (Cobbe et al., 2019), possibly causing a severe performance degradation (i.e., reality gap) when the policy learned in simulation is transferred to the real deployment (Peng et al., 2018; Kang et al., 2019). To close this gap, domain randomization (DR) has been proposed to randomize the environment parameters that may vary the dynamics and observations in the simulator, imposing diversity on the trained policy (Tobin et al., 2017; Jiang et al., 2021). Recent success in robot control has shown that DR enables the trained policy to generalize in real deployment, even in the presence of fundamental modeling errors in the simulator (Muratore et al., 2021; Xie et al., 2020). DR has also demonstrated robustness in contending with extremely rare environments (Muratore et al., 2021). Direct application of DR to policy gradient methods can be thought of as policy optimization over multiple randomly generated environments, which incurs additional variability in the observed data. Therefore, a critical challenge is the low sample efficiency due to the extremely high variance of the gradient estimator, where the variability not only stems from the gradient approximation of the expected return from a batch of sampled trajectories (Greensmith et al.
, 2004), but is also imposed by the randomization of environment parameters. In standard DRL, a commonly used method to reduce variance is to subtract from the expected return a bias-free, action-independent baseline (Sutton & Barto, 2018); a typical choice of baseline is the state value function of the policy. When directly applied to DR, as proposed by Andrychowicz et al. (2020), this amounts to learning a state value function that predicts the expected return over all possible environments. Though it remains unbiased, such a state-dependent baseline may be a poor choice in DR, since the additional variability of randomized environments is not taken into account. In this paper, we aim to address the high variance of policy gradient methods under domain randomization, with a particular focus on reducing the additional variance imposed by the randomization of environments. Our key insight is that the additional information on varying environments can be incorporated into the baseline to further reduce this variance. Theoretically, we derive the optimal state/environment-dependent baseline, and demonstrate that it consistently improves variance reduction over baselines that are constant or use state information only. For the practical implementation to strike a tradeoff between variance reduction performance and the computational complexity of maintaining state/environment-dependent baselines, we propose a variance reduced domain randomization (VRDR) approach for policy gradient methods, which can improve sample efficiency while maintaining a reasonable number of baselines associated with a set of specifically designed environment subspaces. Our main contributions can be summarized as follows. • Theoretically optimal state/environment-dependent baseline for DR.
For the policy gradient in a variety of environments that differ in dynamics , we theoretically derive an optimal baseline that depends on both the state and the environment . We further quantify the variance reduction improvement achieved by the proposed state/environment-dependent baseline over two common choices of baselines for DR , i.e. , the constant and state-dependent baselines . • Criterion for constructing a practical state/subspace-dependent baseline . Since accurate estimation of the state/environment-dependent baseline for each possible environment is infeasible in practical implementations of RL , we propose to divide the entire space of environment parameters into a limited number of subspaces , and instead estimate the optimal baseline for every pair of state and environment subspace . We further show that the clustering of environments into subspaces should follow the policy 's expected returns on these environments , which guarantees an improvement in variance reduction over the state-dependent baseline . • Variance reduced domain randomization ( VRDR ) with empirical evaluation . To strike a tradeoff between variance reduction performance and the computational complexity of maintaining optimal baselines , we develop a variance reduced domain randomization ( VRDR ) approach for policy gradient methods . Specifically , VRDR learns an acceptable number of baselines , one for each pair of state and environment subspace , where the environment subspaces are determined based on the above criterion . We then conduct experiments on six robot control tasks with their fundamental physical parameters randomized , demonstrating that VRDR can accelerate the convergence of policy training in DR settings compared to the standard state-dependent baseline . In some specific tasks , VRDR even achieves a higher reward . 2 BACKGROUND . Notation .
Under the standard reinforcement learning ( RL ) setting , the environment is modeled as a Markov decision process ( MDP ) defined by a tuple $\langle \mathcal{S} , \mathcal{A} , T_p , \Phi \rangle$ , where $\mathcal{S}$ and $\mathcal{A}$ denote the state and action spaces , respectively . For the convenience of derivation , we assume that they are finite . $T_p : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [ 0 , 1 ]$ is the environment transition model , which is essentially determined by the environment parameter $p \in \mathcal{P}$ , with $\mathcal{P}$ denoting the space of environment parameters . In robot control , for example , the environment parameter $p$ can be a vector containing the rolling friction of each joint and the mass of the torso and arms . Throughout the rest of this paper , by environment $p$ we mean the environment whose dynamics are determined by parameter $p$ . $\Phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the reward function . At each time step $t$ , the agent observes its state $s_t \in \mathcal{S}$ and takes an action $a_t \in \mathcal{A}$ under the guidance of policy $\pi_\theta ( a_t | s_t )$ parameterized by $\theta$ . It then receives a reward $r_t = \Phi ( s_t , a_t )$ , while the environment shifts to the next state $s_{t+1}$ with probability $T ( s_{t+1} | s_t , a_t , p )$ . The goal of standard RL is to search for a policy $\pi$ that maximizes the expected discounted return $\eta ( \pi , p ) = \mathbb{E}_\tau [ R ( \tau ) ]$ over all possible trajectories $\tau = \{ s_t , a_t , r_t , s_{t+1} \}_{t=0}^{\infty}$ of states and actions , where $R ( \tau ) = \sum_{t=0}^{\infty} \gamma^t r_t$ and $\gamma \in [ 0 , 1 ]$ is the discount factor . We can then define the state value function as $V_\pi ( s , p ) = \mathbb{E} [ \sum_{k=0}^{\infty} \gamma^k r_{t+k} \,|\, s_t = s ]$ , the action value function as $Q_\pi ( s , a , p ) = \mathbb{E} [ \sum_{k=0}^{\infty} \gamma^k r_{t+k} \,|\, s_t = s , a_t = a ]$ , and the advantage function as $A_\pi ( s , a , p ) = Q_\pi ( s , a , p ) - V_\pi ( s , p )$ . Policy gradient methods for DR . In DR , the environment parameter $p$ is a random variable that follows the probability distribution $P$ over $\mathcal{P}$ . For the convenience of derivation , we assume a finite environment parameter space $\mathcal{P}$ with cardinality $|\mathcal{P}|$ .
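The discounted return defined above can be sketched in a few lines (an illustrative helper of our own, not code from the paper; the sum is truncated to a finite trajectory):

```python
def discounted_return(rewards, gamma):
    # R(tau) = sum_t gamma^t * r_t, truncated to the finite trajectory given
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# e.g. rewards [1, 1, 1] with gamma = 0.5 give 1 + 0.5 + 0.25 = 1.75
R = discounted_return([1.0, 1.0, 1.0], 0.5)
```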
By introducing DR , the goal of policy optimization is to maximize the expected return over all possible environment parameters : $\mathbb{E}_{p \sim P} [ \eta ( \pi , p ) ]$ . The policy gradient with an action-independent baseline ( Sutton & Barto , 2018 ) can be formulated as : $\nabla_\theta \mathbb{E}_{p \sim P} [ \eta ( \pi , p ) ] = \mathbb{E}_{P} [ \mathbb{E}_{\mu_\pi^p , \pi} [ \nabla_\theta \log \pi_\theta ( a | s ) [ Q_\pi ( s , a , p ) - b ] ] ]$ , ( 1 ) where we define $\mu_\pi^p ( s ) = \sum_{t=0}^{\infty} \gamma^t P_\pi ( s_t = s | p )$ as the discounted state visitation frequency , with $P_\pi ( s_t = s | p )$ denoting the probability of shifting from the initial state $s_0$ to state $s$ after $t$ steps under policy $\pi$ in environment $p$ . For convenience , we further denote $g ( \theta , s , a , p ) \triangleq \nabla_\theta \log \pi_\theta ( a | s ) [ Q_\pi ( s , a , p ) - b ]$ , which is the gradient estimator for the state-action pair under environment $p$ . As long as the baseline $b$ is action-independent , we have $\mathbb{E}_a [ \nabla_\theta \log \pi_\theta ( a | s ) \, b ] = \nabla_\theta \mathbb{E}_a [ b ] = 0$ and thus $\mathbb{E}_{P , \mu_\pi^p , \pi} [ g ] = \mathbb{E}_{P , \mu_\pi^p , \pi} [ \nabla_\theta \log \pi_\theta ( a | s ) Q_\pi ( s , a , p ) ] \triangleq \mathbb{E} [ g ]$ , where the subscript is dropped for convenience . Therefore , by subtracting this action-independent baseline $b$ , the variance of the gradient estimator $\mathrm{Var} ( g ) = \mathbb{E} [ g^\mathsf{T} g ] - \mathbb{E} [ g ]^\mathsf{T} \mathbb{E} [ g ]$ can be reduced without introducing any bias ( Greensmith et al. , 2004 ) . 3 OPTIMAL BASELINES FOR DOMAIN RANDOMIZATION . To derive the optimal baselines for DR , we formulate the following optimization problem , which minimizes the variance of the gradient estimate w.r.t . the baseline $b$ ( Greensmith et al. , 2004 ) : $\min_b \mathbb{E} [ \underbrace{ G ( a , s ) [ Q_\pi ( s , a , p ) - b ]^2 }_{ g^\mathsf{T} g } ] - \mathbb{E} [ g ]^\mathsf{T} \mathbb{E} [ g ] \Leftrightarrow \min_b \mathbb{E}_{P , \mu_\pi^p , \pi} [ G ( a , s ) [ Q_\pi ( s , a , p ) - b ]^2 ]$ ( 2 ) where we denote $G ( a , s ) \triangleq \nabla_\theta \log \pi_\theta ( a | s )^\mathsf{T} \nabla_\theta \log \pi_\theta ( a | s )$ . Since it is independent of $b$ , the second term $\mathbb{E} [ g ]^\mathsf{T} \mathbb{E} [ g ]$ does not affect the minimizer , and is thus omitted on the RHS of Eq . ( 2 ) . In the following , we first derive for DR two common choices of baselines that are constant or depend on the state only .
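The unbiasedness claim can be checked numerically on a toy two-action policy (our own illustration, not the paper's setup): the exact expectation of the estimator over actions is the same for any action-independent baseline, because the expected score is zero.

```python
# Toy policy over two actions with pi(a=0|s) = p, parameterized directly by p.
p = 0.3
probs = [p, 1.0 - p]
scores = [1.0 / p, -1.0 / (1.0 - p)]   # d log pi(a|s) / dp for a = 0 and a = 1
Q = [2.0, -1.0]                        # assumed action values Q(s, a, p)

def expected_gradient(b):
    # exact E_a[ d log pi(a|s)/dp * (Q(s,a,p) - b) ]
    return sum(pi * sc * (q - b) for pi, sc, q in zip(probs, scores, Q))

# E_a[score] = p*(1/p) + (1-p)*(-1/(1-p)) = 0, so the b-term vanishes:
g0, g5 = expected_gradient(0.0), expected_gradient(5.0)
```

Here `g0` and `g5` agree, confirming that the baseline shifts individual samples but not the expectation.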
We then propose the optimal state/environment-dependent baseline , and show its ability to further reduce the variance incurred by the randomization of environment parameters . 3.1 TWO COMMON CHOICES OF ACTION-INDEPENDENT BASELINES . Optimal constant baseline . We first consider a constant baseline $b_c$ that depends neither on the action nor on the state . The optimization problem in Eq . ( 2 ) then becomes minimizing the expectation of a quadratic function , which can be proved to be convex . Referring to the detailed derivation in Appendix A.1 , the optimal constant baseline $b_c^*$ for DR-based policy gradient methods is : $b_c^* = \frac{ \mathbb{E}_{P , \mu_\pi^p , \pi} [ G ( a , s ) Q_\pi ( s , a , p ) ] }{ \mathbb{E}_{P , \mu_\pi^p , \pi} [ G ( a , s ) ] } = \mathbb{E}_{P , \mu_\pi^p} [ V' ( s , p ) ]$ . ( 3 ) This optimal baseline $b_c^*$ can be understood as the expectation of the state value function $V' ( s , p )$ over all states and possible environments , where $V' ( s , p )$ is computed as a weighted average of the action value function : $V' ( s , p ) = \mathbb{E}_\pi [ \frac{ G ( a , s ) }{ \mathbb{E}_{P , \mu_\pi^p , \pi} [ G ( a , s ) ] } Q_\pi ( s , a , p ) ]$ . Optimal state-dependent baseline . As in standard RL , a state-dependent baseline in the DR setting can be viewed as an expected value function $b ( s ) = \mathbb{E}_P [ \mathbb{E}_\pi [ Q_\pi ( s , a , p ) ] ]$ for each state over all possible environments , which predicts the expected return over the distribution of dynamics caused by the variation of environment parameters . Still referring to Appendix A.1 , the optimal state-dependent baseline for DR is given as follows : $b^* ( s ) = \mathbb{E}_{P ( p | s )} \frac{ \mathbb{E}_\pi [ G ( a , s ) Q_\pi ( s , a , p ) ] }{ \mathbb{E}_\pi [ G ( a , s ) ] }$ . ( 4 ) | This work studies policy gradient methods for domain randomization (DR). In particular, it investigates baselines for policy gradient under the DR setting such that the gradient estimate has lower variance, ensuring better policy updates and learning.
This paper derives the optimal state/environment-dependent baseline theoretically, gives general recipes for building the baseline, and proposes an algorithm called variance reduced domain randomization (VRDR). VRDR is evaluated on several continuous control tasks and performs better compared with two baselines. | SP:a559d17fc68caba16f78b1d3c749e9e351415f6e
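The form of the optimal constant baseline in Eq. ( 3 ), $\mathbb{E}[G Q] / \mathbb{E}[G]$, can be checked on a toy two-action policy (our own illustration, not the paper's code); the exact variance of the single-sample estimator drops relative to using no baseline.

```python
# Toy two-action policy pi(a=0|s) = p, parameterized directly by p.
p = 0.3
probs = [p, 1.0 - p]
scores = [1.0 / p, -1.0 / (1.0 - p)]      # d log pi(a|s)/dp
Q = [2.0, -1.0]                           # assumed action values
G = [sc * sc for sc in scores]            # G(a, s) = (d log pi / dp)^2

# Eq.(3)-style optimal constant baseline: E[G * Q] / E[G]
b_opt = sum(pi * g * q for pi, g, q in zip(probs, G, Q)) \
      / sum(pi * g for pi, g in zip(probs, G))

def variance(b):
    # exact variance over actions of the single-sample estimate score*(Q - b)
    est = [sc * (q - b) for sc, q in zip(scores, Q)]
    mean = sum(pi * e for pi, e in zip(probs, est))
    return sum(pi * (e - mean) ** 2 for pi, e in zip(probs, est))
```

With two actions the score-weighted optimum actually drives this toy variance to zero, while the naive choice `b = 0` leaves it large.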
Variance Reduced Domain Randomization for Policy Gradient | 1 INTRODUCTION . Deep reinforcement learning ( DRL ) has achieved impressive success on complex sequential decision-making tasks , such as playing video games at a top human level ( Ye et al. , 2020 ) , continuous robot manipulation ( Agarwal et al. , 2021 ) and traffic signal control ( Egeaa et al. , 2021 ) . For extensive exploration , DRL requires a massive number of samples to train the policy , which is usually done in a simulator constructed for the task . Due to the lack of diversity , however , a policy trained by DRL tends to overfit to a specific training environment ( Cobbe et al. , 2019 ) , possibly causing a severe performance degradation ( i.e. , the reality gap ) when the policy learned in the simulator is transferred to the real deployment ( Peng et al. , 2018 ; Kang et al. , 2019 ) . To close this gap , domain randomization ( DR ) has been proposed to randomize the environment parameters that may vary the dynamics and observations in the simulator , imposing diversity on the trained policy ( Tobin et al. , 2017 ; Jiang et al. , 2021 ) . Recent success in robot control has shown that DR enables the trained policy to generalize in real deployment , even in the presence of fundamental modeling errors in the simulator ( Muratore et al. , 2021 ; Xie et al. , 2020 ) . DR has also demonstrated its robustness in contending with extremely rare environments ( Muratore et al. , 2021 ) . Direct application of DR to policy gradient methods can be thought of as policy optimization over multiple randomly generated environments , which incurs additional variability in the observed data . Therefore , a critical challenge is the low sample efficiency due to the extremely high variance of the gradient estimator , where the variability not only stems from the gradient approximation of the expected return from a batch of sampled trajectories ( Greensmith et al.
, 2004 ) , but is also imposed by the randomization of environment parameters . In standard DRL , a commonly used method to reduce variance is to construct a bias-free , action-independent baseline that is subtracted from the expected return ( Sutton & Barto , 2018 ) . For example , a typical choice of baseline is the state value function of the policy . When directly applied to DR , as proposed by Andrychowicz et al . ( 2020 ) , this is equivalent to learning a state value function that predicts the expected return over all possible environments . Though it remains unbiased , such a state-dependent baseline may be a poor choice in DR , since the additional variability of randomized environments is not taken into account . In this paper , we aim to address the high variance issue of policy gradient methods for domain randomization , with a particular focus on reducing the additional variance imposed by the randomization of environments . Our key insight is that the additional information on varying environments can be incorporated into the baseline to further reduce this variance . Theoretically , we derive the optimal state/environment-dependent baseline , and demonstrate that it consistently improves variance reduction over baselines that are constant or use state information only . To strike a tradeoff between variance reduction performance and the computational complexity of maintaining state/environment-dependent baselines in a practical implementation , we propose a variance reduced domain randomization ( VRDR ) approach for policy gradient methods , which improves sample efficiency while maintaining a reasonable number of baselines associated with a set of specifically designed environment subspaces . Our main contributions can be summarized as follows . • Theoretically optimal state/environment-dependent baseline for DR.
For the policy gradient in a variety of environments that differ in dynamics , we theoretically derive an optimal baseline that depends on both the state and the environment . We further quantify the variance reduction improvement achieved by the proposed state/environment-dependent baseline over two common choices of baselines for DR , i.e. , the constant and state-dependent baselines . • Criterion for constructing a practical state/subspace-dependent baseline . Since accurate estimation of the state/environment-dependent baseline for each possible environment is infeasible in practical implementations of RL , we propose to divide the entire space of environment parameters into a limited number of subspaces , and instead estimate the optimal baseline for every pair of state and environment subspace . We further show that the clustering of environments into subspaces should follow the policy 's expected returns on these environments , which guarantees an improvement in variance reduction over the state-dependent baseline . • Variance reduced domain randomization ( VRDR ) with empirical evaluation . To strike a tradeoff between variance reduction performance and the computational complexity of maintaining optimal baselines , we develop a variance reduced domain randomization ( VRDR ) approach for policy gradient methods . Specifically , VRDR learns an acceptable number of baselines , one for each pair of state and environment subspace , where the environment subspaces are determined based on the above criterion . We then conduct experiments on six robot control tasks with their fundamental physical parameters randomized , demonstrating that VRDR can accelerate the convergence of policy training in DR settings compared to the standard state-dependent baseline . In some specific tasks , VRDR even achieves a higher reward . 2 BACKGROUND . Notation .
Under the standard reinforcement learning ( RL ) setting , the environment is modeled as a Markov decision process ( MDP ) defined by a tuple $\langle \mathcal{S} , \mathcal{A} , T_p , \Phi \rangle$ , where $\mathcal{S}$ and $\mathcal{A}$ denote the state and action spaces , respectively . For the convenience of derivation , we assume that they are finite . $T_p : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [ 0 , 1 ]$ is the environment transition model , which is essentially determined by the environment parameter $p \in \mathcal{P}$ , with $\mathcal{P}$ denoting the space of environment parameters . In robot control , for example , the environment parameter $p$ can be a vector containing the rolling friction of each joint and the mass of the torso and arms . Throughout the rest of this paper , by environment $p$ we mean the environment whose dynamics are determined by parameter $p$ . $\Phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the reward function . At each time step $t$ , the agent observes its state $s_t \in \mathcal{S}$ and takes an action $a_t \in \mathcal{A}$ under the guidance of policy $\pi_\theta ( a_t | s_t )$ parameterized by $\theta$ . It then receives a reward $r_t = \Phi ( s_t , a_t )$ , while the environment shifts to the next state $s_{t+1}$ with probability $T ( s_{t+1} | s_t , a_t , p )$ . The goal of standard RL is to search for a policy $\pi$ that maximizes the expected discounted return $\eta ( \pi , p ) = \mathbb{E}_\tau [ R ( \tau ) ]$ over all possible trajectories $\tau = \{ s_t , a_t , r_t , s_{t+1} \}_{t=0}^{\infty}$ of states and actions , where $R ( \tau ) = \sum_{t=0}^{\infty} \gamma^t r_t$ and $\gamma \in [ 0 , 1 ]$ is the discount factor . We can then define the state value function as $V_\pi ( s , p ) = \mathbb{E} [ \sum_{k=0}^{\infty} \gamma^k r_{t+k} \,|\, s_t = s ]$ , the action value function as $Q_\pi ( s , a , p ) = \mathbb{E} [ \sum_{k=0}^{\infty} \gamma^k r_{t+k} \,|\, s_t = s , a_t = a ]$ , and the advantage function as $A_\pi ( s , a , p ) = Q_\pi ( s , a , p ) - V_\pi ( s , p )$ . Policy gradient methods for DR . In DR , the environment parameter $p$ is a random variable that follows the probability distribution $P$ over $\mathcal{P}$ . For the convenience of derivation , we assume a finite environment parameter space $\mathcal{P}$ with cardinality $|\mathcal{P}|$ .
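In practice this setup amounts to drawing a fresh parameter vector $p \sim P$ before each rollout. A minimal sketch of that sampling loop (parameter names and ranges are illustrative assumptions, not from the paper):

```python
import random

def sample_env_params(rng, ranges):
    # draw one environment parameter vector p, uniform per dimension
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

rng = random.Random(0)
ranges = {"joint_friction": (0.5, 1.5), "torso_mass": (0.8, 1.2)}
# one parameter vector per training episode
episode_params = [sample_env_params(rng, ranges) for _ in range(100)]
```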
By introducing DR , the goal of policy optimization is to maximize the expected return over all possible environment parameters : $\mathbb{E}_{p \sim P} [ \eta ( \pi , p ) ]$ . The policy gradient with an action-independent baseline ( Sutton & Barto , 2018 ) can be formulated as : $\nabla_\theta \mathbb{E}_{p \sim P} [ \eta ( \pi , p ) ] = \mathbb{E}_{P} [ \mathbb{E}_{\mu_\pi^p , \pi} [ \nabla_\theta \log \pi_\theta ( a | s ) [ Q_\pi ( s , a , p ) - b ] ] ]$ , ( 1 ) where we define $\mu_\pi^p ( s ) = \sum_{t=0}^{\infty} \gamma^t P_\pi ( s_t = s | p )$ as the discounted state visitation frequency , with $P_\pi ( s_t = s | p )$ denoting the probability of shifting from the initial state $s_0$ to state $s$ after $t$ steps under policy $\pi$ in environment $p$ . For convenience , we further denote $g ( \theta , s , a , p ) \triangleq \nabla_\theta \log \pi_\theta ( a | s ) [ Q_\pi ( s , a , p ) - b ]$ , which is the gradient estimator for the state-action pair under environment $p$ . As long as the baseline $b$ is action-independent , we have $\mathbb{E}_a [ \nabla_\theta \log \pi_\theta ( a | s ) \, b ] = \nabla_\theta \mathbb{E}_a [ b ] = 0$ and thus $\mathbb{E}_{P , \mu_\pi^p , \pi} [ g ] = \mathbb{E}_{P , \mu_\pi^p , \pi} [ \nabla_\theta \log \pi_\theta ( a | s ) Q_\pi ( s , a , p ) ] \triangleq \mathbb{E} [ g ]$ , where the subscript is dropped for convenience . Therefore , by subtracting this action-independent baseline $b$ , the variance of the gradient estimator $\mathrm{Var} ( g ) = \mathbb{E} [ g^\mathsf{T} g ] - \mathbb{E} [ g ]^\mathsf{T} \mathbb{E} [ g ]$ can be reduced without introducing any bias ( Greensmith et al. , 2004 ) . 3 OPTIMAL BASELINES FOR DOMAIN RANDOMIZATION . To derive the optimal baselines for DR , we formulate the following optimization problem , which minimizes the variance of the gradient estimate w.r.t . the baseline $b$ ( Greensmith et al. , 2004 ) : $\min_b \mathbb{E} [ \underbrace{ G ( a , s ) [ Q_\pi ( s , a , p ) - b ]^2 }_{ g^\mathsf{T} g } ] - \mathbb{E} [ g ]^\mathsf{T} \mathbb{E} [ g ] \Leftrightarrow \min_b \mathbb{E}_{P , \mu_\pi^p , \pi} [ G ( a , s ) [ Q_\pi ( s , a , p ) - b ]^2 ]$ ( 2 ) where we denote $G ( a , s ) \triangleq \nabla_\theta \log \pi_\theta ( a | s )^\mathsf{T} \nabla_\theta \log \pi_\theta ( a | s )$ . Since it is independent of $b$ , the second term $\mathbb{E} [ g ]^\mathsf{T} \mathbb{E} [ g ]$ does not affect the minimizer , and is thus omitted on the RHS of Eq . ( 2 ) . In the following , we first derive for DR two common choices of baselines that are constant or depend on the state only .
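The paper's central point, that a baseline which also depends on the environment can cut variance beyond any shared baseline, can be seen in a toy example (our own illustration with two equally likely environments, not the paper's code):

```python
# Toy two-action policy pi(a=0|s) = p, parameterized directly by p.
p = 0.3
probs = [p, 1.0 - p]
scores = [1.0 / p, -1.0 / (1.0 - p)]             # d log pi(a|s)/dp
G = [sc * sc for sc in scores]                   # G(a, s)
Q_envs = {"p1": [2.0, -1.0], "p2": [6.0, 3.0]}   # assumed Q in two equally likely envs

def opt_baseline(Q):
    # score-weighted optimal baseline E[G*Q]/E[G] within one environment
    return sum(pi * g * q for pi, g, q in zip(probs, G, Q)) \
         / sum(pi * g for pi, g in zip(probs, G))

def variance(baseline_of_env):
    # exact variance of score*(Q - b) over the joint (environment, action) law
    est, w = [], []
    for env, Q in Q_envs.items():
        b = baseline_of_env(env)
        for pi, sc, q in zip(probs, scores, Q):
            est.append(sc * (q - b))
            w.append(0.5 * pi)
    mean = sum(wi * e for wi, e in zip(w, est))
    return sum(wi * (e - mean) ** 2 for wi, e in zip(w, est))

shared = sum(opt_baseline(Q) for Q in Q_envs.values()) / 2   # one baseline for all envs
var_shared = variance(lambda env: shared)
var_per_env = variance(lambda env: opt_baseline(Q_envs[env]))
```

In this toy case the per-environment baseline removes the variance entirely, while the single averaged baseline leaves a large residual.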
We then propose the optimal state/environment-dependent baseline , and show its ability to further reduce the variance incurred by the randomization of environment parameters . 3.1 TWO COMMON CHOICES OF ACTION-INDEPENDENT BASELINES . Optimal constant baseline . We first consider a constant baseline $b_c$ that depends neither on the action nor on the state . The optimization problem in Eq . ( 2 ) then becomes minimizing the expectation of a quadratic function , which can be proved to be convex . Referring to the detailed derivation in Appendix A.1 , the optimal constant baseline $b_c^*$ for DR-based policy gradient methods is : $b_c^* = \frac{ \mathbb{E}_{P , \mu_\pi^p , \pi} [ G ( a , s ) Q_\pi ( s , a , p ) ] }{ \mathbb{E}_{P , \mu_\pi^p , \pi} [ G ( a , s ) ] } = \mathbb{E}_{P , \mu_\pi^p} [ V' ( s , p ) ]$ . ( 3 ) This optimal baseline $b_c^*$ can be understood as the expectation of the state value function $V' ( s , p )$ over all states and possible environments , where $V' ( s , p )$ is computed as a weighted average of the action value function : $V' ( s , p ) = \mathbb{E}_\pi [ \frac{ G ( a , s ) }{ \mathbb{E}_{P , \mu_\pi^p , \pi} [ G ( a , s ) ] } Q_\pi ( s , a , p ) ]$ . Optimal state-dependent baseline . As in standard RL , a state-dependent baseline in the DR setting can be viewed as an expected value function $b ( s ) = \mathbb{E}_P [ \mathbb{E}_\pi [ Q_\pi ( s , a , p ) ] ]$ for each state over all possible environments , which predicts the expected return over the distribution of dynamics caused by the variation of environment parameters . Still referring to Appendix A.1 , the optimal state-dependent baseline for DR is given as follows : $b^* ( s ) = \mathbb{E}_{P ( p | s )} \frac{ \mathbb{E}_\pi [ G ( a , s ) Q_\pi ( s , a , p ) ] }{ \mathbb{E}_\pi [ G ( a , s ) ] }$ . ( 4 ) | This paper tackles the high variance problem caused by the randomization of environments when estimating policy gradients. The idea is to derive a bias-free, state/environment-dependent optimal baseline for domain randomization.
The authors further develop a VRDR method by dividing the entire environment space into subspaces and estimating the state/subspace-dependent baseline. | SP:a559d17fc68caba16f78b1d3c749e9e351415f6e |
Towards Understanding the Condensation of Neural Networks at Initial Training | 1 INTRODUCTION . Over-parameterized neural networks often show good generalization performance on real-world problems by minimizing loss functions without explicit regularization ( Breiman , 1995 ; Zhang et al. , 2017 ) . For over-parameterized NNs , there are infinitely many sets of training parameters that can reach a satisfying training loss . However , their generalization performance can be very different . It is therefore important to study what implicit regularization is imposed alongside the loss function during training that leads the NN to a specific type of solution . Empirical works suggest that NNs may learn the data from simple to complex patterns ( Arpit et al. , 2017 ; Xu et al. , 2019 ; Rahaman et al. , 2019 ; Xu et al. , 2020 ; Jin et al. , 2020 ; Kalimeris et al. , 2019 ) . For example , a widely observed implicit bias is the frequency principle , namely that NNs often learn the target function from low to high frequency ( Xu et al. , 2019 ; Rahaman et al. , 2019 ; Xu et al. , 2020 ) , which has been utilized to understand various phenomena ( Ma et al. , 2020 ; Xu & Zhou , 2021 ) and to inspire algorithm design ( Liu et al. , 2020 ) . The NN output , either simple or complex , is a collective result of all neurons . The study of how neuron weights evolve during training is central to understanding the collective behavior , including the complexity , of the NN output . Luo et al . ( 2021 ) establish a phase diagram to study the effect of initialization on weight evolution for two-layer ReLU NNs at the infinite-width limit and find three distinct regimes in the phase diagram , i.e. , the linear regime , the critical regime and the condensed regime .
The condensed regime , a largely unexplored non-linear regime , is so named because the input weights of hidden neurons ( the input weight , or feature , of a hidden neuron consists of the weight from the input layer to the hidden neuron and its bias term ) condense on isolated orientations during training ( Luo et al. , 2021 ) . The three regimes are identified based on the relative change of input weights as the width approaches infinity , which tends to 0 , O ( 1 ) and +∞ , respectively . Condensation is a feature learning process , which is important to the learning of DNNs . Note that in the following , condensation is accompanied by a default assumption of small initialization , or equivalently a large relative change of input weights during training . For practical networks , such as a resnet18-like network ( He et al. , 2016 ) learning CIFAR10 , as shown in Fig . 1 ( a ) and Table 1 , we find that the performance of networks with initialization in the condensed regime is very similar to that of common initialization methods . However , the condensation phenomenon provides an intuitive explanation of this good performance , which may lead to a quantitative theoretical explanation in future work : condensation transforms a large network into a network of only a few effective neurons , leading to an output function with low complexity . Since complexity bounds the generalization error ( Bartlett & Mendelson , 2002 ) , the study of condensation could provide insight into how NNs are implicitly regularized to achieve good generalization performance in practice . For two-layer ReLU NNs , Maennel et al .
( 2018 ) prove that , as the initialization of parameters goes to zero , the features of hidden neurons condense at a finite number of orientations depending on the input data ; for a linearly separable classification task with infinite data , Pellegrini & Biroli ( 2020 ) show that at the mean-field limit , a two-layer infinite-width ReLU NN is effectively equal to a NN with one hidden neuron , i.e. , condensation on a single orientation . Both works ( Maennel et al. , 2018 ; Pellegrini & Biroli , 2020 ) study the condensation behavior of ReLU NNs at an initial training stage in which the magnitudes of the NN parameters are far too small to well fit an O ( 1 ) target function . However , it remains unclear how condensation emerges at the initial training stage for NNs with more general activation functions . In this work , we show that condensation at the initial stage is closely related to the multiplicity $p$ of the activation function at $x = 0$ , meaning that the derivatives of the activation at $x = 0$ vanish up to the $( p - 1 )$ -th order and the $p$-th order derivative is non-zero . To verify this relation , we use the common activation functions $\mathrm{sigmoid} ( x )$ , $\mathrm{softplus} ( x )$ and $\tanh ( x )$ , which have multiplicity one , and variants of $\tanh ( x )$ , i.e. , $x \tanh ( x )$ and $x^2 \tanh ( x )$ with multiplicity two and three , in our experiments . For comparison , we also show the initial condensation of $\mathrm{ReLU} ( x )$ , which was studied previously ( Maennel et al. , 2018 ) and has totally different properties at the origin compared with $\tanh ( x )$ . Our experiments suggest that the maximal number of condensed orientations is twice the multiplicity of the activation function used in general NNs . For finite-width two-layer NNs with small initialization , at the initial training stage each hidden neuron 's output in a finite domain around 0 can be approximated by a $p$-th order polynomial , and so can the NN output function .
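The multiplicities quoted above can be sanity-checked numerically (our own sketch, not the paper's code) by estimating derivatives at zero with central finite differences and finding the first non-vanishing order:

```python
import math

def kth_derivative_at_zero(f, k, h=1e-2):
    # k-th order central difference: sum_i (-1)^i C(k,i) f((k/2 - i) h) / h^k
    return sum((-1) ** i * math.comb(k, i) * f((k / 2 - i) * h)
               for i in range(k + 1)) / h ** k

def multiplicity(f, max_p=4, tol=1e-3):
    # smallest p >= 1 with a p-th derivative at 0 that is clearly non-zero
    for q in range(1, max_p + 1):
        if abs(kth_derivative_at_zero(f, q)) > tol:
            return q
    return None

m1 = multiplicity(math.tanh)                       # tanh(x): multiplicity 1
m2 = multiplicity(lambda x: x * math.tanh(x))      # x*tanh(x): multiplicity 2
m3 = multiplicity(lambda x: x * x * math.tanh(x))  # x^2*tanh(x): multiplicity 3
```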
Based on the $p$-th order approximation , we provide preliminary theoretical support for condensation through a theoretical analysis of two cases : one for activation functions of multiplicity one with arbitrary input dimension , which covers many common activation functions , and the other for a layer with one-dimensional input and arbitrary multiplicity . Therefore , small initialization imposes an implicit regularization that restricts the NN to be effectively a much narrower neural network at the initial training stage . As commonly used activation functions , such as $\tanh ( x )$ , $\mathrm{sigmoid} ( x )$ , $\mathrm{softplus} ( x )$ , etc. , all have multiplicity one , our study of the initial training behavior lays an important basis for the further study of implicit regularization throughout training . 2 RELATED WORKS . A line of research studies how initialization affects the weight evolution of NNs with a sufficiently large or infinite width . For example , with an initialization in the neural tangent kernel ( NTK ) regime or lazy training regime ( weights change slightly during training ) , the gradient flow of an infinite-width NN can be approximated by the linear dynamics of a random feature model ( Jacot et al. , 2018 ; Arora et al. , 2019 ; Zhang et al. , 2020 ; E et al. , 2020 ; Chizat & Bach , 2019 ) , whereas for an initialization in the mean-field regime ( weights change significantly during training ) , the gradient flow of an infinite-width NN exhibits highly nonlinear dynamics ( Mei et al. , 2019 ; Rotskoff & Vanden-Eijnden , 2018 ; Chizat & Bach , 2018 ; Sirignano & Spiliopoulos , 2020 ) . Pellegrini & Biroli ( 2020 ) analyze how the dynamics of each parameter transforms from a lazy regime ( NTK initialization ) to a rich regime ( mean-field initialization ) for a two-layer infinite-width ReLU NN performing a linearly separable classification task with infinite data . Luo et al .
( 2021 ) systematically study the effect of initialization for two-layer ReLU NNs with infinite width by establishing a phase diagram , which shows three distinct regimes , i.e. , the linear regime ( similar to the lazy regime ) , the critical regime and the condensed regime ( similar to the rich regime ) , based on the relative change of input weights as the width approaches infinity , which tends to 0 , O ( 1 ) and +∞ , respectively . Luo et al . ( 2021 ) also empirically find that , in the condensed regime , the features of hidden neurons ( the orientations of the input weights ) condense in several isolated orientations , which is a strong feature learning behavior and an important characteristic of deep learning ; however , Luo et al . ( 2021 ) leave unclear how general the condensation is when other activation functions are used and why condensation occurs . 3 PRELIMINARY : NEURAL NETWORKS AND INITIAL STAGE . A two-layer NN is $f_\theta ( x ) = \sum_{j=1}^{m} a_j \sigma ( w_j \cdot x )$ , ( 1 ) where $\sigma ( \cdot )$ is the activation function , $w_j = ( \bar{w}_j , b_j ) \in \mathbb{R}^{d+1}$ is the neuron feature including the input weight and bias terms , $x = ( \bar{x} , 1 ) \in \mathbb{R}^{d+1}$ is the combination of the input sample and the scalar 1 , and $\theta$ is the set of all parameters , i.e. , $\{ a_j , w_j \}_{j=1}^{m}$ . For simplicity , we call $w_j$ the input weight or weight and $x$ the input sample . An $L$-layer NN can be recursively defined by feeding the output of the previous layer as the input to the current hidden layer , i.e. , $x^{[0]} = ( x , 1 )$ , $x^{[1]} = ( \sigma ( W^{[1]} x^{[0]} ) , 1 )$ , $x^{[l]} = ( \sigma ( W^{[l]} x^{[l-1]} ) , 1 )$ for $l \in \{ 2 , 3 , \ldots , L \}$ , $f ( \theta , x ) = \frac{1}{\alpha} a^\mathsf{T} x^{[L]} \triangleq f_\theta ( x )$ , ( 2 ) where $W^{[l]} = ( \bar{W}^{[l]} , b^{[l]} ) \in \mathbb{R}^{m_l \times ( m_{l-1} + 1 )}$ , and $m_l$ represents the dimension of the $l$-th hidden layer . For simplicity , we also call each row of $W^{[l]}$ an input weight or weight and $x^{[l-1]}$ the input neurons . The target function is denoted as $f^* ( x )$ .
The training loss function is the mean squared error $R_S ( \theta ) = \frac{1}{2n} \sum_{i=1}^{n} ( f_\theta ( x_i ) - f^* ( x_i ) )^2$ . ( 3 ) Without loss of generality , we assume that the output is one-dimensional for the theoretical analysis ; for high-dimensional cases , we only need to sum over the components directly , and this summation does not affect the results of our theory . We consider the gradient flow training $\dot{\theta} = - \nabla_\theta R_S ( \theta )$ . ( 4 ) For convenience , we characterize the activation function by the following definition . Definition 1 ( multiplicity $p$ ) . Suppose that $\sigma ( x )$ satisfies the following condition : there exists a $p \in \mathbb{N}$ with $p \geq 1$ such that the $k$-th order derivative $\sigma^{(k)} ( 0 ) = 0$ for $k = 1 , 2 , \cdots , p - 1$ , and $\sigma^{(p)} ( 0 ) \neq 0$ ; then we say $\sigma$ has multiplicity $p$ . In the experiments , we study condensation at the initial stage of training . For a fixed loss value , the number of steps needed to reach it is highly related to the size of the learning rate . Therefore , we define the initial stage of training in this article by the size of the loss : it is the stage before the value of the loss function decays to 70 % of its initial value . Such a definition is reasonable , for generally a loss could decay to 1 % of its initial value or even lower . The losses of all experiments in this article can be found in Appendix A.3 , and they do meet this definition of the initial stage . | This paper studies the role of activation functions (via their multiplicity at the origin) in the condensation of neural networks at the initial training stage. Condensation can be viewed as a feature learning process, where the wide network can be described effectively as a narrower network and the input weights condense on isolated orientations during training. This mechanism may provide a plausible explanation for the performance of the wide network.
In particular, the paper shows empirically that the maximal number of condensed orientations is twice the multiplicity of the activation function used. Moreover, using polynomial approximations, the paper provides theoretical support in two cases: when the activation function is of multiplicity one and when the input is one-dimensional. | SP:593fb6284ae338ae1536f86715e3b09f62b09e9f |
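Why small initialization starts condensation can be seen in a toy numerical sketch (ours, not the paper's code): for a multiplicity-one activation, $\sigma(z) \approx \sigma'(0) z$ near zero, so at a small random initialization every hidden neuron's weight gradient points, up to the sign of its output weight, along one shared direction.

```python
import math, random

rng = random.Random(0)
d_in, m, n = 3, 8, 20
eps = 1e-3                                      # small initialization scale
X = [[rng.gauss(0, 1) for _ in range(d_in)] for _ in range(n)]
y = [math.sin(x[0]) for x in X]                 # an arbitrary target
W = [[eps * rng.gauss(0, 1) for _ in range(d_in)] for _ in range(m)]
a = [eps * rng.gauss(0, 1) for _ in range(m)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def grad_w(j):
    # gradient of (1/2n) sum_i (f(x_i) - y_i)^2 w.r.t. w_j,
    # for the two-layer net f(x) = sum_k a_k tanh(w_k . x)
    g = [0.0] * d_in
    for x, t in zip(X, y):
        f = sum(a[k] * math.tanh(dot(W[k], x)) for k in range(m))
        e = (f - t) / n
        s = 1.0 - math.tanh(dot(W[j], x)) ** 2  # tanh'(w_j . x)
        for c in range(d_in):
            g[c] += e * a[j] * s * x[c]
    return g

def cosine(u, v):
    return dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))

grads = [grad_w(j) for j in range(m)]
# all neurons' gradients are nearly parallel (up to sign) to one direction
alignments = [abs(cosine(grads[0], grads[j])) for j in range(1, m)]
```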
Towards Understanding the Condensation of Neural Networks at Initial Training | 1 INTRODUCTION . Over-parameterized neural networks often show good generalization performance on real-world problems by minimizing loss functions without explicit regularization ( Breiman , 1995 ; Zhang et al. , 2017 ) . For over-parameterized NNs , there are infinitely many sets of training parameters that can reach a satisfying training loss . However , their generalization performance can be very different . It is therefore important to study what implicit regularization is imposed alongside the loss function during training that leads the NN to a specific type of solution . Empirical works suggest that NNs may learn the data from simple to complex patterns ( Arpit et al. , 2017 ; Xu et al. , 2019 ; Rahaman et al. , 2019 ; Xu et al. , 2020 ; Jin et al. , 2020 ; Kalimeris et al. , 2019 ) . For example , a widely observed implicit bias is the frequency principle , namely that NNs often learn the target function from low to high frequency ( Xu et al. , 2019 ; Rahaman et al. , 2019 ; Xu et al. , 2020 ) , which has been utilized to understand various phenomena ( Ma et al. , 2020 ; Xu & Zhou , 2021 ) and to inspire algorithm design ( Liu et al. , 2020 ) . The NN output , either simple or complex , is a collective result of all neurons . The study of how neuron weights evolve during training is central to understanding the collective behavior , including the complexity , of the NN output . Luo et al . ( 2021 ) establish a phase diagram to study the effect of initialization on weight evolution for two-layer ReLU NNs at the infinite-width limit and find three distinct regimes in the phase diagram , i.e. , the linear regime , the critical regime and the condensed regime .
The condensed regime, a largely unexplored non-linear regime, is so named because the input weights of hidden neurons (the input weight, or feature, of a hidden neuron consists of the weight from the input layer to that neuron together with its bias term) condense on isolated orientations during training (Luo et al., 2021). The three regimes are identified based on the relative change of input weights as the width approaches infinity, which tends to 0, O(1) and +∞, respectively. Condensation is a feature learning process, which is important to the learning of DNNs. Note that in the following, condensation carries a default assumption of small initialization, or equivalently a large relative change of input weights during training. For practical networks, such as a resnet18-like network (He et al., 2016) learning CIFAR10, as shown in Fig. 1(a) and Table 1, we find that the performance of networks initialized in the condensed regime is very similar to that of common initialization methods. Moreover, the condensation phenomenon provides an intuitive explanation of this good performance as follows, which may lead to a quantitative theoretical explanation in future work. Condensation transforms a large network into a network of only a few effective neurons, leading to an output function with low complexity. Since complexity bounds the generalization error (Bartlett & Mendelson, 2002), the study of condensation could provide insight into how NNs are implicitly regularized to achieve good generalization performance in practice. For two-layer ReLU NNs, Maennel et al.
(2018) prove that, as the initialization of parameters goes to zero, the features of hidden neurons condense at a finite number of orientations depending on the input data; for a linearly separable classification task with infinite data, Pellegrini & Biroli (2020) show that in the mean-field limit, a two-layer infinite-width ReLU NN is effectively equal to a NN with one hidden neuron, i.e., condensation on a single orientation. Both works (Maennel et al., 2018; Pellegrini & Biroli, 2020) study the condensation behavior of ReLU NNs at an initial training stage in which the magnitudes of the NN parameters are far too small to well fit an O(1) target function. However, it remains unclear how condensation emerges at the initial training stage for NNs with more general activation functions. In this work, we show that the condensation at the initial stage is closely related to the multiplicity p at x = 0, meaning the derivatives of the activation at x = 0 vanish up to the (p − 1)-th order while the p-th order derivative is non-zero. To verify this relation, we use the common activation functions sigmoid(x), softplus(x) and tanh(x), which have multiplicity one, and variants of tanh(x), i.e., x tanh(x) and x² tanh(x) with multiplicity two and three, in our experiments. For comparison, we also show the initial condensation of ReLU(x), which was studied previously (Maennel et al., 2018) and has totally different properties at the origin compared with tanh(x). Our experiments suggest that, in general NNs, the maximal number of condensed orientations is twice the multiplicity of the activation function used. For finite-width two-layer NNs with small initialization at the initial training stage, each hidden neuron's output in a finite domain around 0 can be approximated by a p-th order polynomial, and so can the NN output function.
Based on the p-th order approximation, we provide preliminary theoretical support for condensation through an analysis of two cases: one is the activation function of multiplicity one with arbitrary-dimensional input, which covers many common activation functions, and the other is a layer with one-dimensional input and arbitrary multiplicity. Therefore, small initialization imposes an implicit regularization that restricts the NN to an effectively much narrower neural network at the initial training stage. As commonly used activation functions, such as tanh(x), sigmoid(x), softplus(x), etc., all have multiplicity one, our study of initial training behavior lays an important basis for the further study of implicit regularization throughout training. 2 RELATED WORKS. One line of research studies how initialization affects the weight evolution of NNs with a sufficiently large or infinite width. For example, with an initialization in the neural tangent kernel (NTK) regime, or lazy training regime (weights change slightly during training), the gradient flow of an infinite-width NN can be approximated by the linear dynamics of a random feature model (Jacot et al., 2018; Arora et al., 2019; Zhang et al., 2020; E et al., 2020; Chizat & Bach, 2019), whereas for initialization in the mean-field regime (weights change significantly during training), the gradient flow of an infinite-width NN exhibits highly nonlinear dynamics (Mei et al., 2019; Rotskoff & Vanden-Eijnden, 2018; Chizat & Bach, 2018; Sirignano & Spiliopoulos, 2020). Pellegrini & Biroli (2020) analyze how the dynamics of each parameter transforms from a lazy regime (NTK initialization) to a rich regime (mean-field initialization) for a two-layer infinite-width ReLU NN performing a linearly separable classification task with infinite data. Luo et al.
(2021) systematically study the effect of initialization for two-layer ReLU NNs with infinite width by establishing a phase diagram, which shows three distinct regimes, i.e., the linear regime (similar to the lazy regime), the critical regime and the condensed regime (similar to the rich regime), based on the relative change of input weights as the width approaches infinity, which tends to 0, O(1) and +∞, respectively. Luo et al. (2021) also empirically find that, in the condensed regime, the features of hidden neurons (orientations of the input weights) condense in several isolated orientations, which is a strong feature learning behavior, an important characteristic of deep learning; however, Luo et al. (2021) do not clarify how general the condensation is when other activation functions are used, nor why condensation occurs. 3 PRELIMINARY: NEURAL NETWORKS AND INITIAL STAGE. A two-layer NN is $f_\theta(x) = \sum_{j=1}^{m} a_j \sigma(w_j \cdot x)$, (1) where σ(·) is the activation function, $w_j = (\bar{w}_j, b_j) \in \mathbb{R}^{d+1}$ is the neuron feature including the input weight and bias terms, $x = (\bar{x}, 1) \in \mathbb{R}^{d+1}$ is the concatenation of the input sample and the scalar 1, and θ is the set of all parameters, i.e., $\{a_j, w_j\}_{j=1}^{m}$. For simplicity, we call $w_j$ the input weight (or weight) and x the input sample. An L-layer NN can be defined recursively by feeding the output of the previous layer as the input to the current hidden layer, i.e., $x^{[0]} = (x, 1)$, $x^{[1]} = (\sigma(W^{[1]} x^{[0]}), 1)$, $x^{[l]} = (\sigma(W^{[l]} x^{[l-1]}), 1)$ for $l \in \{2, 3, \ldots, L\}$, and $f(\theta, x) = \frac{1}{\alpha} a^\intercal x^{[L]} \triangleq f_\theta(x)$, (2) where $W^{[l]} = (\bar{W}^{[l]}, b^{[l]}) \in \mathbb{R}^{m_l \times (m_{l-1}+1)}$ and $m_l$ is the dimension of the l-th hidden layer. For simplicity, we also call each row of $W^{[l]}$ an input weight (or weight) and $x^{[l-1]}$ the input neurons. The target function is denoted by $f^*(x)$.
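The two-layer network of Eq. (1) can be written out directly; below is a minimal numpy sketch, with the bias absorbed by augmenting the input with a constant 1 as in the text (the array shapes and test values are illustrative assumptions, not from the paper):

```python
import numpy as np

def two_layer_nn(x_bar, a, w, sigma=np.tanh):
    """f_theta(x) = sum_j a_j * sigma(w_j . x), with x = (x_bar, 1).

    x_bar : (d,)   input sample
    a     : (m,)   output weights
    w     : (m, d+1) neuron features (input weight and bias per row)
    """
    x = np.append(x_bar, 1.0)     # augment the input with the scalar 1
    return a @ sigma(w @ x)       # sum over the m hidden neurons

# tiny sanity check: zero weights give sigma(0) = tanh(0) = 0
a = np.array([2.0])
w = np.zeros((1, 4))              # d = 3, plus the bias column
print(two_layer_nn(np.zeros(3), a, w))  # prints 0.0
```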
The training loss function is the mean squared error $R_S(\theta) = \frac{1}{2n} \sum_{i=1}^{n} (f_\theta(x_i) - f^*(x_i))^2$. (3) Without loss of generality, we assume that the output is one-dimensional for the theoretical analysis, because for high-dimensional outputs we only need to sum over the components directly, and this summation does not affect the results of our theory. We consider gradient flow training $\dot{\theta} = -\nabla_\theta R_S(\theta)$. (4) For convenience, we characterize the activation function by the following definition. Definition 1 (multiplicity p). Suppose that σ(x) satisfies the following condition: there exists a $p \in \mathbb{N}$, $p \geq 1$, such that the k-th order derivative $\sigma^{(k)}(0) = 0$ for $k = 1, 2, \cdots, p-1$, and $\sigma^{(p)}(0) \neq 0$; then we say σ has multiplicity p. In the experiments, we study the condensation at the initial stage of training. For a fixed loss value, the number of steps needed to reach it depends strongly on the learning rate. Therefore, in this article we define the initial stage of training by the size of the loss: it is the stage before the loss function decays to 70% of its initial value. Such a definition is reasonable, for generally a loss can decay to 1% of its initial value or even lower. The loss curves of all experiments in this article can be found in Appendix A.3, and they do meet this definition of the initial stage. | This well-written paper takes a step forward in understanding the implicit regularization in neural net optimization. The authors offer empirical evidence that the complexity of the function initially learned by nets is related to the multiplicity of the activation function at zero (i.e., the order of the lowest nonzero derivative when evaluated at zero).
Then, analytically, they show that all input weights converge towards the same or opposite direction for a multiplicity of 1 (which is the case for common activation functions like tanh, sigmoid, softplus), and that this holds for any multiplicity in the special case of 1-dimensional input. Broadly, this work is intriguing, but could stand to benefit from a few improvements, suggested below. | SP:593fb6284ae338ae1536f86715e3b09f62b09e9f |
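The multiplicity of Definition 1 can be probed numerically: fit a low-degree polynomial to σ around 0 and report the lowest-order nonzero coefficient beyond the constant term. A minimal sketch (the fit interval, degree, and tolerance are illustrative assumptions):

```python
import numpy as np

def multiplicity(sigma, max_p=5, tol=1e-4):
    """Estimate the multiplicity p of sigma at 0: the smallest p >= 1
    with sigma^(k)(0) = 0 for k < p and sigma^(p)(0) != 0."""
    xs = np.linspace(-0.1, 0.1, 401)
    coeffs = np.polyfit(xs, sigma(xs), max_p)[::-1]  # c_0, c_1, ..., c_max_p
    for p in range(1, max_p + 1):
        if abs(coeffs[p]) > tol:
            return p
    raise ValueError("no nonzero derivative found up to order max_p")

print(multiplicity(np.tanh))                      # 1 (likewise sigmoid, softplus)
print(multiplicity(lambda x: x * np.tanh(x)))     # 2
print(multiplicity(lambda x: x**2 * np.tanh(x)))  # 3
```

This matches the activation functions used in the paper's experiments: tanh(x) has multiplicity one, while x tanh(x) and x² tanh(x) have multiplicity two and three.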
Towards Understanding the Condensation of Neural Networks at Initial Training | 1 INTRODUCTION. Over-parameterized neural networks often show good generalization performance on real-world problems by minimizing loss functions without explicit regularization (Breiman, 1995; Zhang et al., 2017). For over-parameterized NNs, there are infinitely many sets of training parameters that can reach a satisfying training loss. However, their generalization performance can be very different. It is important to study what implicit regularization is imposed alongside the loss function during training that leads the NN to a specific type of solution. Empirical works suggest that NNs may learn the data from simple to complex patterns (Arpit et al., 2017; Xu et al., 2019; Rahaman et al., 2019; Xu et al., 2020; Jin et al., 2020; Kalimeris et al., 2019). For example, an implicit bias known as the frequency principle is widely observed: NNs often learn the target function from low to high frequency (Xu et al., 2019; Rahaman et al., 2019; Xu et al., 2020), which has been utilized to understand various phenomena (Ma et al., 2020; Xu & Zhou, 2021) and has inspired algorithm design (Liu et al., 2020). The NN output, either simple or complex, is a collective result of all neurons. The study of how neuron weights evolve during training is central to understanding the collective behavior, including the complexity, of the NN output. Luo et al. (2021) establish a phase diagram to study the effect of initialization on weight evolution for two-layer ReLU NNs at the infinite-width limit and find three distinct regimes in the phase diagram, i.e., the linear regime, the critical regime and the condensed regime.
The condensed regime, a largely unexplored non-linear regime, is so named because the input weights of hidden neurons (the input weight, or feature, of a hidden neuron consists of the weight from the input layer to that neuron together with its bias term) condense on isolated orientations during training (Luo et al., 2021). The three regimes are identified based on the relative change of input weights as the width approaches infinity, which tends to 0, O(1) and +∞, respectively. Condensation is a feature learning process, which is important to the learning of DNNs. Note that in the following, condensation carries a default assumption of small initialization, or equivalently a large relative change of input weights during training. For practical networks, such as a resnet18-like network (He et al., 2016) learning CIFAR10, as shown in Fig. 1(a) and Table 1, we find that the performance of networks initialized in the condensed regime is very similar to that of common initialization methods. Moreover, the condensation phenomenon provides an intuitive explanation of this good performance as follows, which may lead to a quantitative theoretical explanation in future work. Condensation transforms a large network into a network of only a few effective neurons, leading to an output function with low complexity. Since complexity bounds the generalization error (Bartlett & Mendelson, 2002), the study of condensation could provide insight into how NNs are implicitly regularized to achieve good generalization performance in practice. For two-layer ReLU NNs, Maennel et al.
(2018) prove that, as the initialization of parameters goes to zero, the features of hidden neurons condense at a finite number of orientations depending on the input data; for a linearly separable classification task with infinite data, Pellegrini & Biroli (2020) show that in the mean-field limit, a two-layer infinite-width ReLU NN is effectively equal to a NN with one hidden neuron, i.e., condensation on a single orientation. Both works (Maennel et al., 2018; Pellegrini & Biroli, 2020) study the condensation behavior of ReLU NNs at an initial training stage in which the magnitudes of the NN parameters are far too small to well fit an O(1) target function. However, it remains unclear how condensation emerges at the initial training stage for NNs with more general activation functions. In this work, we show that the condensation at the initial stage is closely related to the multiplicity p at x = 0, meaning the derivatives of the activation at x = 0 vanish up to the (p − 1)-th order while the p-th order derivative is non-zero. To verify this relation, we use the common activation functions sigmoid(x), softplus(x) and tanh(x), which have multiplicity one, and variants of tanh(x), i.e., x tanh(x) and x² tanh(x) with multiplicity two and three, in our experiments. For comparison, we also show the initial condensation of ReLU(x), which was studied previously (Maennel et al., 2018) and has totally different properties at the origin compared with tanh(x). Our experiments suggest that, in general NNs, the maximal number of condensed orientations is twice the multiplicity of the activation function used. For finite-width two-layer NNs with small initialization at the initial training stage, each hidden neuron's output in a finite domain around 0 can be approximated by a p-th order polynomial, and so can the NN output function.
Based on the p-th order approximation, we provide preliminary theoretical support for condensation through an analysis of two cases: one is the activation function of multiplicity one with arbitrary-dimensional input, which covers many common activation functions, and the other is a layer with one-dimensional input and arbitrary multiplicity. Therefore, small initialization imposes an implicit regularization that restricts the NN to an effectively much narrower neural network at the initial training stage. As commonly used activation functions, such as tanh(x), sigmoid(x), softplus(x), etc., all have multiplicity one, our study of initial training behavior lays an important basis for the further study of implicit regularization throughout training. 2 RELATED WORKS. One line of research studies how initialization affects the weight evolution of NNs with a sufficiently large or infinite width. For example, with an initialization in the neural tangent kernel (NTK) regime, or lazy training regime (weights change slightly during training), the gradient flow of an infinite-width NN can be approximated by the linear dynamics of a random feature model (Jacot et al., 2018; Arora et al., 2019; Zhang et al., 2020; E et al., 2020; Chizat & Bach, 2019), whereas for initialization in the mean-field regime (weights change significantly during training), the gradient flow of an infinite-width NN exhibits highly nonlinear dynamics (Mei et al., 2019; Rotskoff & Vanden-Eijnden, 2018; Chizat & Bach, 2018; Sirignano & Spiliopoulos, 2020). Pellegrini & Biroli (2020) analyze how the dynamics of each parameter transforms from a lazy regime (NTK initialization) to a rich regime (mean-field initialization) for a two-layer infinite-width ReLU NN performing a linearly separable classification task with infinite data. Luo et al.
(2021) systematically study the effect of initialization for two-layer ReLU NNs with infinite width by establishing a phase diagram, which shows three distinct regimes, i.e., the linear regime (similar to the lazy regime), the critical regime and the condensed regime (similar to the rich regime), based on the relative change of input weights as the width approaches infinity, which tends to 0, O(1) and +∞, respectively. Luo et al. (2021) also empirically find that, in the condensed regime, the features of hidden neurons (orientations of the input weights) condense in several isolated orientations, which is a strong feature learning behavior, an important characteristic of deep learning; however, Luo et al. (2021) do not clarify how general the condensation is when other activation functions are used, nor why condensation occurs. 3 PRELIMINARY: NEURAL NETWORKS AND INITIAL STAGE. A two-layer NN is $f_\theta(x) = \sum_{j=1}^{m} a_j \sigma(w_j \cdot x)$, (1) where σ(·) is the activation function, $w_j = (\bar{w}_j, b_j) \in \mathbb{R}^{d+1}$ is the neuron feature including the input weight and bias terms, $x = (\bar{x}, 1) \in \mathbb{R}^{d+1}$ is the concatenation of the input sample and the scalar 1, and θ is the set of all parameters, i.e., $\{a_j, w_j\}_{j=1}^{m}$. For simplicity, we call $w_j$ the input weight (or weight) and x the input sample. An L-layer NN can be defined recursively by feeding the output of the previous layer as the input to the current hidden layer, i.e., $x^{[0]} = (x, 1)$, $x^{[1]} = (\sigma(W^{[1]} x^{[0]}), 1)$, $x^{[l]} = (\sigma(W^{[l]} x^{[l-1]}), 1)$ for $l \in \{2, 3, \ldots, L\}$, and $f(\theta, x) = \frac{1}{\alpha} a^\intercal x^{[L]} \triangleq f_\theta(x)$, (2) where $W^{[l]} = (\bar{W}^{[l]}, b^{[l]}) \in \mathbb{R}^{m_l \times (m_{l-1}+1)}$ and $m_l$ is the dimension of the l-th hidden layer. For simplicity, we also call each row of $W^{[l]}$ an input weight (or weight) and $x^{[l-1]}$ the input neurons. The target function is denoted by $f^*(x)$.
The training loss function is the mean squared error $R_S(\theta) = \frac{1}{2n} \sum_{i=1}^{n} (f_\theta(x_i) - f^*(x_i))^2$. (3) Without loss of generality, we assume that the output is one-dimensional for the theoretical analysis, because for high-dimensional outputs we only need to sum over the components directly, and this summation does not affect the results of our theory. We consider gradient flow training $\dot{\theta} = -\nabla_\theta R_S(\theta)$. (4) For convenience, we characterize the activation function by the following definition. Definition 1 (multiplicity p). Suppose that σ(x) satisfies the following condition: there exists a $p \in \mathbb{N}$, $p \geq 1$, such that the k-th order derivative $\sigma^{(k)}(0) = 0$ for $k = 1, 2, \cdots, p-1$, and $\sigma^{(p)}(0) \neq 0$; then we say σ has multiplicity p. In the experiments, we study the condensation at the initial stage of training. For a fixed loss value, the number of steps needed to reach it depends strongly on the learning rate. Therefore, in this article we define the initial stage of training by the size of the loss: it is the stage before the loss function decays to 70% of its initial value. Such a definition is reasonable, for generally a loss can decay to 1% of its initial value or even lower. The loss curves of all experiments in this article can be found in Appendix A.3, and they do meet this definition of the initial stage. | This paper investigates the condensation of weights of neural networks during the initial training stage. It shows theoretically and empirically that the maximal number of condensed orientations in the initial training stage is twice the multiplicity of the activation function under small initialization of the weights. This condensation restricts the capacity of NNs at the beginning, working as implicit regularization. | SP:593fb6284ae338ae1536f86715e3b09f62b09e9f |
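The 70% criterion for the initial stage is easy to instrument in a training loop. Below is a minimal sketch: a two-layer tanh network trained by full-batch gradient descent (an Euler discretization of Eq. (4)) on the loss of Eq. (3), stopping when the loss first drops below 70% of its initial value. The target function, width, initialization scale, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 1-d regression task; inputs augmented with a constant 1 as in Eq. (1)
X = np.stack([np.linspace(-1, 1, 64), np.ones(64)], axis=1)
Y = np.sin(np.pi * X[:, 0])

m, n, lr = 50, len(Y), 0.05
w = rng.normal(scale=0.3, size=(m, 2))   # input weights and biases
a = rng.normal(scale=0.3, size=m)        # output weights

def forward(a, w):
    H = np.tanh(X @ w.T)                 # (n, m) hidden activations
    return H, H @ a

loss0 = 0.5 * np.mean((forward(a, w)[1] - Y) ** 2)   # R_S(theta), Eq. (3)

initial_stage_end = None
for step in range(1, 2001):              # Euler steps of the gradient flow, Eq. (4)
    H, pred = forward(a, w)
    err = (pred - Y) / n
    grad_a = H.T @ err
    grad_w = a[:, None] * (((1 - H ** 2).T * err) @ X)
    a -= lr * grad_a
    w -= lr * grad_w
    loss = 0.5 * np.mean((forward(a, w)[1] - Y) ** 2)
    if loss < 0.7 * loss0:               # the paper's "initial stage" ends here
        initial_stage_end = step
        break
print(loss0, initial_stage_end)
```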
Information Prioritization through Empowerment in Visual Model-based RL | 1 INTRODUCTION. Model-based reinforcement learning (RL) provides a promising approach to accelerating skill learning: by acquiring a predictive model that represents how the world works, an agent can quickly derive effective strategies, either by planning or by simulating synthetic experience under the model. However, in complex environments with high-dimensional observations (e.g., images), modeling the full observation space can present major challenges. While large neural network models have made progress on this problem (Finn & Levine, 2017; Ha & Schmidhuber, 2018; Hafner et al., 2019a; Watter et al., 2015; Babaeizadeh et al., 2017), sample-efficient learning necessitates some mechanism to prioritize, in the latent representations learned from observations, the functionally relevant factors for the task. This needs to be done without wasting effort and capacity on irrelevant distractors, and without detailed reconstruction. Several recent works have proposed contrastive objectives that maximize mutual information between observations and latent states (Hjelm et al., 2018; Ma et al., 2020; Oord et al., 2018; Srinivas et al., 2020). While such objectives avoid reconstruction, they still do not distinguish between relevant and irrelevant factors of variation. We thus pose the question: can we devise non-reconstructive representation learning methods that explicitly prioritize information that is most likely to be functionally relevant to the agent? In this work, we derive a model-based RL algorithm from a combination of representation learning via mutual information maximization (Poole et al., 2019) and empowerment (Mohamed & Rezende, 2015). The latter serves to drive both the representation and the policy toward exploring and representing functionally relevant factors of variation.
By integrating an empowerment-based term into a mutual information framework for learning state representations, we effectively prioritize information that is most likely to have functional relevance, which mitigates distractions due to irrelevant factors of variation in the observations. By integrating this same term into policy learning, we further improve exploration, particularly in the early stages of learning in sparse-reward environments, where the reward signal provides comparatively little guidance. Our main contribution is InfoPower, a model-based RL algorithm for high-dimensional systems with image observations that integrates empowerment into a mutual-information-based, non-reconstructive framework for learning state space models. Our approach explicitly prioritizes information that is most likely to be functionally relevant, which significantly improves performance in the presence of time-correlated distractors (e.g., background videos), and also accelerates exploration in environments where the reward signal is weak. We evaluate the proposed objectives on a suite of simulated robotic control tasks with explicit video distractors, and demonstrate up to 20% better performance in terms of cumulative reward at 1M environment interactions, with 30% higher sample efficiency at 100k interactions. (∗Work done during Homanga's research internship at Google. hbharadh@cs.cmu.edu) 2 PROBLEM STATEMENT AND NOTATION. A partially observed Markov decision process (POMDP) is a tuple (S, A, T, R, O) that consists of states s ∈ S, actions a ∈ A, rewards r ∈ R, observations o ∈ O, and a state-transition distribution T(s′|s, a). In most practical settings, the agent interacting with the environment does not have access to the actual states in S, but only to partial information in the form of observations O. The underlying state-transition distribution T and reward distribution R are also unknown to the agent.
In this paper, we consider the observations o ∈ O to be high-dimensional images, and so the agent should learn a compact representation space Z for the latent state-space model. The problem is to learn effective representations from observations O when there are visual distractors present in the scene, and to plan using the learned representations to maximize the expected cumulative sum of discounted rewards, $J = \mathbb{E}[\sum_t \gamma^{t-1} r_t]$. The value $V(Z_t)$ of a state is defined as the expected cumulative sum of discounted rewards starting at state $Z_t$. We use q(·) to denote parameterized variational approximations of learned distributions. We denote random variables with capital letters and use lowercase letters to denote particular realizations (e.g., $z_t$ denotes the value of $Z_t$). Since the underlying distributions are unknown, we evaluate all expectations through Monte-Carlo sampling with observed state-transition tuples $(o_t, a_{t-1}, o_{t-1}, z_t, z_{t-1}, r_t)$. 3 INFORMATION PRIORITIZATION FOR THE LATENT STATE-SPACE MODEL. Our goal is to learn a latent state-space model with a representation Z that prioritizes capturing the functionally relevant parts of observations O, and to devise a planning objective that explores with the learned representation. To achieve this, our key insight is the integration of empowerment into the visual model-based RL pipeline. For representation learning, we maximize the mutual information $\max_Z I(O; Z)$ subject to a prioritization of the empowerment objective $\max_Z I(A_{t-1}; Z_t | Z_{t-1})$. For planning, we maximize the empowerment objective along with the reward-based value with respect to the policy, $\max_A I(A_{t-1}; Z_t | Z_{t-1}) + I(R_t; Z_t)$. In the subsequent sections, we elaborate on our approach, InfoPower, and describe lower bounds on MI that yield a tractable algorithm. 3.1 LEARNING CONTROLLABLE FACTORS AND PLANNING THROUGH EMPOWERMENT.
Controllable representations are features of the observation that correspond to entities the agent can influence through its actions. For example, in quadrupedal locomotion, this could include the joint positions, velocities, motor torques, and the configurations of any object in the environment that the robot can interact with. For robotic manipulation, it could include the joint actuators of the robot arm and the configurations of objects in the scene that it can interact with. Such representations are denoted by S+ in Fig. 2, which we can formally define through conditional independence as the smallest subspace S+ of S such that $I(A_{t-1}; S_t | S^+_t) = 0$. This conditional independence relation can be seen in Fig. 2. We explicitly prioritize the learning of such representations in the latent space by drawing inspiration from variational empowerment (Mohamed & Rezende, 2015). The empowerment objective can be cast as maximizing a conditional information term $I(A_{t-1}; Z_t | Z_{t-1}) = H(A_{t-1} | Z_{t-1}) - H(A_{t-1} | Z_t, Z_{t-1})$. The first term $H(A_{t-1} | Z_{t-1})$ encourages the chosen actions to be as diverse as possible, while the second term $-H(A_{t-1} | Z_t, Z_{t-1})$ encourages the representations $Z_{t-1}$ and $Z_t$ to be such that the transition action $A_{t-1}$ is predictable. While prior approaches have used empowerment in the model-free setting to learn policies by exploration through intrinsic motivation (Mohamed & Rezende, 2015), we specifically use this objective in combination with MI maximization to prioritize the learning of controllable representations from distracting images in the latent state-space model. We include the same empowerment objective in both representation learning and policy learning. For the latter, we augment the maximization of the latent value function that is standard for policy learning in visual model-based RL (Sutton, 1991) with $\max_A I(A_{t-1}; Z_t | Z_{t-1})$.
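The decomposition $I(A_{t-1}; Z_t | Z_{t-1}) = H(A_{t-1}|Z_{t-1}) - H(A_{t-1}|Z_t, Z_{t-1})$ can be computed exactly on a toy discrete system; a minimal sketch with fully controllable bit-flip dynamics and plug-in entropy estimates (the environment and the uniform policy are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# toy dynamics: z' = z XOR a, so the action is fully recoverable from (z, z')
transitions = []
for _ in range(10000):
    z = int(rng.integers(2))
    a = int(rng.integers(2))          # uniform policy -> diverse actions
    transitions.append((z, a, z ^ a))

def cond_entropy(pairs):
    """H(A | C) from samples of (context, action), via plug-in estimates."""
    joint = Counter(pairs)
    ctx = Counter(c for c, _ in pairs)
    n = len(pairs)
    return -sum(cnt / n * np.log(cnt / ctx[c]) for (c, _a), cnt in joint.items())

h_a_given_z = cond_entropy([(z, a) for z, a, _ in transitions])
h_a_given_zz = cond_entropy([((z, zn), a) for z, a, zn in transitions])
empowerment = h_a_given_z - h_a_given_zz
print(empowerment)  # close to ln 2: actions are diverse yet predictable from (z, z')
```

Here the first entropy is near ln 2 (the policy is uniform over two actions) and the second is exactly zero (the action is determined by the transition), so the empowerment term is maximal for this system.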
This objective complements value-based learning and further improves exploration by seeking controllable states. We empirically analyze the benefits of this in Sections 4.3 and 4.5. In Appendix A.1, we describe two theorems regarding learning controllable representations. We observe that the $\max \sum_t I(A_{t-1}; Z_t | Z_{t-1})$ objective alone for learning latent representations Z, along with the planning objective, provably recovers the controllable parts of the observation O, namely S+. This result in Theorem 1 is important because in practice, we may not be able to represent every possible factor of variation in a complex environment. In this situation, we would expect that when |Z| ≪ |O|, learning Z under the objective $\max \sum_t I(A_{t-1}; Z_t | Z_{t-1})$ would encode S+. We further show through Theorem 2 that the inverse information objective alone can be used to train a latent state-space model and a policy through an alternating optimization algorithm that converges to a local minimum of the objective $\max \sum_t I(A_{t-1}; Z_t | Z_{t-1})$ at a rate inversely proportional to the number of iterations. In Section 4.3 we empirically show how this objective helps achieve higher sample efficiency compared to pure value-based policy learning. 3.2 MUTUAL INFORMATION MAXIMIZATION FOR REPRESENTATION LEARNING. Algorithm 1: Information Prioritization in Visual Model-based RL (InfoPower). Initialize dataset D with random episodes; initialize model parameters φ, χ, ψ, η; initialize dual variable λ. While not converged: for update step c = 1..C (model learning): sample data $\{(a_t, o_t, r_t)\}_{t=k}^{k+L} \sim D$; compute latents $z_t \sim p_\phi(z_t | z_{t-1}, a_{t-1}, o_t)$; calculate L based on Section 3.4; update $(\phi, \chi, \psi, \eta) \leftarrow (\phi, \chi, \psi, \eta) + \nabla_{\phi, \chi, \psi, \eta} L$ and $\lambda \leftarrow \lambda - \nabla_\lambda L$; end. Behavior learning: roll out a latent plan, $S \leftarrow S \cup \{z_t, a_t, r_t\}$; estimate $V(z_t) \approx \mathbb{E}_\pi[\ln q_\eta(r_t | z_t) + \ln q_\psi(a_{t-1} | z_t, z_{t-1})]$; update the policy π and value model; end. Environment interaction: for time step t = 0..
T − 1: $z_t \sim p_\phi(z_t | z_{t-1}, a_{t-1}, o_t)$; $a_t \sim \pi(a_t | z_t)$; $r_t, o_{t+1} \leftarrow$ env.step($a_t$); end. Add data $D \leftarrow D \cup \{(o_t, a_t, r_t)\}_{t=1}^{T}$; end. For visual model-based RL, we need to learn a representation space Z such that a forward dynamics model, defining the probability of the next state in terms of the current state and the current action, can be learned. The objective for this is $\sum_t -I(i_t; Z_t | Z_{t-1}, A_{t-1})$. Here, $i_t$ denotes the dataset indices that determine the observations, $p(o_t | i_t) = \delta(o_t - o_{t'})$. In addition to the forward dynamics model, we need to learn a reward predictor by maximizing $\sum_t I(R_t; Z_t)$, such that the agent can plan ahead in the future by rolling forward latent states, without having to execute actions and observe rewards in the real environment. Finally, we need to learn an encoder for encoding observations O into latents Z. Most successful prior works have used a reconstruction loss as a natural objective for learning this encoder (Babaeizadeh et al., 2017; Hafner et al., 2019b;a). A reconstruction loss can be motivated by considering the objective I(O; Z) and computing its BA lower bound (Agakov, 2004): $I(o_t; z_t) \geq \mathbb{E}_{p(o_t, z_t)}[\log q_{\phi'}(o_t | z_t)] + H(p(o_t))$. The first term here is the reconstruction objective, with $q_{\phi'}(o_t | z_t)$ being the decoder, and the second term can be ignored as it does not depend on Z. However, this reconstruction objective explicitly encourages encoding the information from every pixel in the latent space (such that reconstructing the image is possible), and hence is prone to not ignoring distractors. In contrast, if we consider other lower bounds on I(O; Z), we can obtain tractable objectives that do not involve reconstructing high-dimensional images. We can obtain an NCE-based lower bound (Hjelm et al.
, 2018 ) : I ( ot ; zt ) ≥ Eqφ ( zt|ot ) p ( ot ) [ log fθ ( zt , ot ) − log ∑ t′ ≠ t fθ ( zt , ot′ ) ] , where qφ ( zt|ot ) is the learned encoder , ot is the observation at timestep t ( positive sample ) , and all observations in the replay buffer ot′ are negative samples . The critic is fθ ( zt , ot′ ) = exp ( zt⊤ Wθ zt′ ) . The lower bound is a form of contrastive learning as it maximizes compatibility of zt with the corresponding observation ot while minimizing compatibility with all other observations across time and batch . Although prior work has explored NCE-based bounds for contrastive learning in RL ( Srinivas et al. , 2020 ) , to the best of our knowledge , prior work has not used this in conjunction with empowerment for prioritizing information in visual model-based RL . Similarly , the Nguyen-Wainwright-Jordan ( NWJ ) bound ( Nguyen et al. , 2010 ) , which to the best of our knowledge has not been used by prior works in visual model-based RL , can be obtained as I ( ot ; zt ) ≥ Eqφ ( zt|ot ) p ( ot ) [ fθ ( zt , ot ) ] − e−1 Eqφ ( zt|ot ) p ( ot ) [ exp ( fθ ( zt , ot ) ) ] , where fθ is a critic . There exists an optimal critic function for which the bound is tightest and equality holds . We refer to the InfoNCE and NWJ lower bound based objectives as contrastive learning , in order to distinguish them from a reconstruction-loss based objective , though both are bounds on mutual information . We denote a lower bound to MI by I ( ot , zt ) . We empirically find the NWJ-bound to perform slightly better than the NCE-bound for our approach , explained in section 4.5 . | This paper tackles the problem of prioritizing functionally relevant information from complex observations for model-based RL. To this end, previous work has proposed to replace the reconstruction loss with contrastive loss. Building on that, this paper introduces an additional empowerment objective that is used for both representation learning and policy learning.
Experiments show that the proposed model outperforms baselines on a set of DeepMind Control tasks with custom background distractions, which include other visually similar but uncontrollable agents. It is also shown that the similarity between learned states can match well to the similarity between groundtruth simulator states, according to the proposed metric. | SP:b94e9808269fce8bb46330fc76d5a7afec946fa5 |
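The NCE-based contrastive lower bound discussed in the paper text above can be estimated from a minibatch of latent codes and observation features. The following is a minimal numpy sketch, not the paper's implementation; the function name, the bilinear critic parameterization, and the use of within-batch negatives are illustrative assumptions.

```python
import numpy as np

def info_nce_lower_bound(z, o_feats, W):
    """Monte-Carlo estimate of the NCE-based lower bound on I(o_t; z_t),
    with a bilinear critic f(z_t, o_t') = exp(z_t^T W e(o_t')).

    z       : (B, d) latent codes z_t for a batch of observations
    o_feats : (B, d) encoder features for the same batch; row i is the
              positive for z[i], every other row serves as a negative
    W       : (d, d) critic weight matrix (illustrative parameterization)
    """
    logits = z @ W @ o_feats.T              # (B, B) log-critic scores
    pos = np.diag(logits)                   # log f(z_t, o_t) for matched pairs
    B = logits.shape[0]
    off = ~np.eye(B, dtype=bool)
    # log of the sum over negatives t' != t, per row
    neg = np.log(np.where(off, np.exp(logits), 0.0).sum(axis=1))
    return float(np.mean(pos - neg))
```

When the positive pairs are strongly aligned the estimate is large and positive; when the scores carry no signal it sits near −log ( B − 1 ), which is why larger batches tighten contrastive bounds.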
Information Prioritization through Empowerment in Visual Model-based RL | 1 INTRODUCTION . Model-based reinforcement learning ( RL ) provides a promising approach to accelerating skill learning : by acquiring a predictive model that represents how the world works , an agent can quickly derive effective strategies , either by planning or by simulating synthetic experience under the model . However , in complex environments with high-dimensional observations ( e.g. , images ) , modeling the full observation space can present major challenges . While large neural network models have made progress on this problem ( Finn & Levine , 2017 ; Ha & Schmidhuber , 2018 ; Hafner et al. , 2019a ; Watter et al. , 2015 ; Babaeizadeh et al. , 2017 ) , sample-efficient learning necessitates some mechanism to prioritize modeling latent representations from observations such that functionally-relevant factors for the task can be captured . This needs to be done without wasting effort and capacity on irrelevant distractors , and without detailed reconstruction . Several recent works have proposed contrastive objectives that maximize mutual information between observations and latent states ( Hjelm et al. , 2018 ; Ma et al. , 2020 ; Oord et al. , 2018 ; Srinivas et al. , 2020 ) . While such objectives avoid reconstruction , they still do not distinguish between relevant and irrelevant factors of variation . We thus pose the question : can we devise non-reconstructive representation learning methods that explicitly prioritize information that is most likely to be functionally relevant to the agent ? In this work , we derive a model-based RL algorithm from a combination of representation learning via mutual information maximization ( Poole et al. , 2019 ) and empowerment ( Mohamed & Rezende , 2015 ) . The latter serves to drive both the representation and the policy toward exploring and representing functionally relevant factors of variation .
By integrating an empowerment-based term into a mutual information framework for learning state representations ( ∗Work done during Homanga ’ s research internship at Google . hbharadh @ cs.cmu.edu ) , we effectively prioritize information that is most likely to have functional relevance , which mitigates distractions due to irrelevant factors of variation in the observations . By integrating this same term into policy learning , we further improve exploration , particularly in the early stages of learning in sparse-reward environments , where the reward signal provides comparatively little guidance . Our main contribution is InfoPower , a model-based RL algorithm for high-dimensional systems with image observations that integrates empowerment into a mutual information based , non-reconstructive framework for learning state-space models . Our approach explicitly prioritizes information that is most likely to be functionally relevant , which significantly improves performance in the presence of time-correlated distractors ( e.g. , background videos ) , and also accelerates exploration in environments where the reward signal is weak . We evaluate the proposed objectives on a suite of simulated robotic control tasks with explicit video distractors , and demonstrate up to 20 % better performance in terms of cumulative rewards at 1M environment interactions with 30 % higher sample efficiency at 100k interactions . 2 PROBLEM STATEMENT AND NOTATION . A partially observed Markov decision process ( POMDP ) is a tuple ( S , A , T , R , O ) that consists of states s ∈ S , actions a ∈ A , rewards r ∈ R , observations o ∈ O , and a state-transition distribution T ( s′|s , a ) . In most practical settings , the agent interacting with the environment doesn ’ t have access to the actual states in S , but only to some partial information in the form of observations O . The underlying state-transition distribution T and reward distribution R are also unknown to the agent .
In this paper , we consider the observations o ∈ O to be high-dimensional images , and so , the agent should learn a compact representation space Z for the latent state-space model . The problem statement is to learn effective representations from observations O when there are visual distractors present in the scene , and plan using the learned representations to maximize the cumulative sum of discounted rewards , J = E [ ∑ t γ t−1rt ] . The value of a state V ( Zt ) is defined as the expected cumulative sum of discounted rewards starting at state Zt . We use q ( · ) to denote parameterized variational approximations to learned distributions . We denote random variables with capital letters and use small letters to denote particular realizations ( e.g. , zt denotes the value of Zt ) . Since the underlying distributions are unknown , we evaluate all expectations through Monte-Carlo sampling with observed state-transition tuples ( ot , at−1 , ot−1 , zt , zt−1 , rt ) . 3 INFORMATION PRIORITIZATION FOR THE LATENT STATE-SPACE MODEL . Our goal is to learn a latent state-space model with a representation Z that prioritizes capturing functionally relevant parts of observations O , and devise a planning objective that explores with the learned representation . To achieve this , our key insight is integration of empowerment in the visual model-based RL pipeline . For representation learning we maximize MI maxZ I ( O , Z ) subject to a prioritization of the empowerment objective maxZ I ( At−1 ; Zt|Zt−1 ) . For planning , we maximize the empowerment objective along with reward-based value with respect to the policy maxA I ( At−1 ; Zt|Zt−1 ) + I ( Rt ; Zt ) . In the subsequent sections , we elaborate on our approach , InfoPower , and describe lower bounds to MI that yield a tractable algorithm . 3.1 LEARNING CONTROLLABLE FACTORS AND PLANNING THROUGH EMPOWERMENT . 
Controllable representations are features of the observation that correspond to entities which the agent can influence through its actions . For example , in quadrupedal locomotion , this could include the joint positions , velocities , motor torques , and the configurations of any object in the environment that the robot can interact with . For robotic manipulation , it could include the joint actuators of the robot arm , and the configurations of objects in the scene that it can interact with . Such representations are denoted by S+ in Fig . 2 , which we can formally define through conditional independence as the smallest subspace of S , S+ ≤ S , such that I ( At−1 ; St|S+t ) = 0 . This conditional independence relation can be seen in Fig . 2 . We explicitly prioritize the learning of such representations in the latent space by drawing inspiration from variational empowerment ( Mohamed & Rezende , 2015 ) . The empowerment objective can be cast as maximizing a conditional information term I ( At−1 ; Zt|Zt−1 ) = H ( At−1|Zt−1 ) − H ( At−1|Zt , Zt−1 ) . The first term H ( At−1|Zt−1 ) encourages the chosen actions to be as diverse as possible , while the second term − H ( At−1|Zt , Zt−1 ) encourages the representations Zt−1 and Zt to be such that the action At−1 for the transition is predictable . While prior approaches have used empowerment in the model-free setting to learn policies by exploration through intrinsic motivation ( Mohamed & Rezende , 2015 ) , we specifically use this objective in combination with MI maximization for prioritizing the learning of controllable representations from distracting images in the latent state-space model . We include the same empowerment objective in both representation learning and policy learning . For this , we augment the maximization of the latent value function that is standard for policy learning in visual model-based RL ( Sutton , 1991 ) , with maxA I ( At−1 ; Zt|Zt−1 ) .
This objective complements value-based learning and further improves exploration by seeking controllable states . We empirically analyze the benefits of this in sections 4.3 and 4.5 . In Appendix A.1 we describe two theorems regarding learning controllable representations . We observe that the max ∑ t I ( At−1 ; Zt|Zt−1 ) objective alone for learning latent representations Z , along with the planning objective , provably recovers controllable parts of the observation O , namely S+ . This result in Theorem 1 is important because in practice , we may not be able to represent every possible factor of variation in a complex environment . In this situation , we would expect that when |Z| ≪ |O| , learning Z under the objective max ∑ t I ( At−1 ; Zt|Zt−1 ) would encode S+ . We further show through Theorem 2 that the inverse information objective alone can be used to train a latent state-space model and a policy through an alternating optimization algorithm that converges to a local minimum of the objective max ∑ t I ( At−1 ; Zt|Zt−1 ) at a rate inversely proportional to the number of iterations . In Section 4.3 we empirically show how this objective helps achieve higher sample efficiency compared to pure value-based policy learning . 3.2 MUTUAL INFORMATION MAXIMIZATION FOR REPRESENTATION LEARNING . Algorithm 1 : Information Prioritization in Visual Model-based RL ( InfoPower ) Initialize dataset D with random episodes . Initialize model parameters φ , χ , ψ , η. Initialize dual variable λ. while not converged do for update step c = 1 .. C do // Model learning Sample data { ( at , ot , rt ) } t = k .. k+L ∼ D. Compute latents zt ∼ pφ ( zt|zt−1 , at−1 , ot ) . Calculate L based on section 3.4 . ( φ , χ , ψ , η ) ← ( φ , χ , ψ , η ) +∇φ , χ , ψ , ηL λ← λ−∇λL // Behavior learning Rollout latent plan , S ← S ∪ { zt , at , rt } V ( zt ) ≈ Eπ [ ln qη ( rt|zt ) + ln qψ ( at−1|zt , zt−1 ) ] Update policy π and value model end // Environment interaction for time step t = 0 ..
T − 1 do zt ∼ pφ ( zt|zt−1 , at−1 , ot ) ; at ∼ π ( at|zt ) rt , ot+1 ← env.step ( at ) . end Add data D ← D ∪ { ( ot , at , rt ) } t = 1 .. T . end For visual model-based RL , we need to learn a representation space Z , such that a forward dynamics model defining the probability of the next state in terms of the current state and the current action can be learned . The objective for this is ∑ t −I ( it ; Zt|Zt−1 , At−1 ) . Here , it denotes the dataset indices that determine the observations p ( ot|it ) = δ ( ot − ot′ ) . In addition to the forward dynamics model , we need to learn a reward predictor by maximizing ∑ t I ( Rt ; Zt ) , such that the agent can plan ahead in the future by rolling forward latent states , without having to execute actions and observe rewards in the real environment . Finally , we need to learn an encoder for encoding observations O to latents Z . Most successful prior works have used a reconstruction loss as a natural objective for learning this encoder ( Babaeizadeh et al. , 2017 ; Hafner et al. , 2019b ; a ) . A reconstruction loss can be motivated by considering the objective I ( O , Z ) and computing its BA lower bound ( Agakov , 2004 ) : I ( ot ; zt ) ≥ Ep ( ot , zt ) [ log qφ′ ( ot|zt ) ] + H ( p ( ot ) ) . The first term here is the reconstruction objective , with qφ′ ( ot|zt ) being the decoder , and the second term can be ignored as it doesn ’ t depend on Z . However , this reconstruction objective explicitly encourages encoding the information from every pixel in the latent space ( such that reconstructing the image is possible ) and hence is prone to encoding distractors rather than ignoring them . In contrast , if we consider other lower bounds to I ( O , Z ) , we can obtain tractable objectives that do not involve reconstructing high-dimensional images . We can obtain an NCE-based lower bound ( Hjelm et al.
, 2018 ) : I ( ot ; zt ) ≥ Eqφ ( zt|ot ) p ( ot ) [ log fθ ( zt , ot ) − log ∑ t′ ≠ t fθ ( zt , ot′ ) ] , where qφ ( zt|ot ) is the learned encoder , ot is the observation at timestep t ( positive sample ) , and all observations in the replay buffer ot′ are negative samples . The critic is fθ ( zt , ot′ ) = exp ( zt⊤ Wθ zt′ ) . The lower bound is a form of contrastive learning as it maximizes compatibility of zt with the corresponding observation ot while minimizing compatibility with all other observations across time and batch . Although prior work has explored NCE-based bounds for contrastive learning in RL ( Srinivas et al. , 2020 ) , to the best of our knowledge , prior work has not used this in conjunction with empowerment for prioritizing information in visual model-based RL . Similarly , the Nguyen-Wainwright-Jordan ( NWJ ) bound ( Nguyen et al. , 2010 ) , which to the best of our knowledge has not been used by prior works in visual model-based RL , can be obtained as I ( ot ; zt ) ≥ Eqφ ( zt|ot ) p ( ot ) [ fθ ( zt , ot ) ] − e−1 Eqφ ( zt|ot ) p ( ot ) [ exp ( fθ ( zt , ot ) ) ] , where fθ is a critic . There exists an optimal critic function for which the bound is tightest and equality holds . We refer to the InfoNCE and NWJ lower bound based objectives as contrastive learning , in order to distinguish them from a reconstruction-loss based objective , though both are bounds on mutual information . We denote a lower bound to MI by I ( ot , zt ) . We empirically find the NWJ-bound to perform slightly better than the NCE-bound for our approach , explained in section 4.5 . | The paper proposes a new non-reconstruction method for model-based RL from high-dimensional observations. The main idea is to make use of an information empowerment objective that prioritizes encoding parts of the environment that are influenceable by the actions. This allows the model to focus on functionally relevant information and filter out distractors.
The same empowerment term can also be used to promote faster exploration when the reward signal is sparse. The proposed method outperformed the existing baselines in difficult Deepmind control tasks with natural video backgrounds. | SP:b94e9808269fce8bb46330fc76d5a7afec946fa5 |
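The NWJ bound mentioned in the paper text above admits a simple Monte-Carlo estimator once critic scores are arranged in a batch matrix. Below is a hedged numpy sketch; the function name and the batch construction are assumptions, not the paper's code.

```python
import numpy as np

def nwj_lower_bound(scores):
    """NWJ lower bound on I(o; z) from a (B, B) critic-score matrix
    f_theta(z_i, o_j): the diagonal holds joint samples (z_t, o_t), while
    off-diagonal entries approximate samples from the product of marginals.

        I(o; z) >= E_joint[f] - e^{-1} * E_marginals[exp(f)]
    """
    B = scores.shape[0]
    joint_term = np.mean(np.diag(scores))            # E over joint samples
    off = ~np.eye(B, dtype=bool)
    marginal_term = np.exp(-1.0) * np.mean(np.exp(scores[off]))
    return float(joint_term - marginal_term)
```

A useful sanity check: for independent o and z the optimal critic is the constant 1, and plugging constant scores of 1 into this estimator returns 0, matching the true mutual information.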
Information Prioritization through Empowerment in Visual Model-based RL | 1 INTRODUCTION . Model-based reinforcement learning ( RL ) provides a promising approach to accelerating skill learning : by acquiring a predictive model that represents how the world works , an agent can quickly derive effective strategies , either by planning or by simulating synthetic experience under the model . However , in complex environments with high-dimensional observations ( e.g. , images ) , modeling the full observation space can present major challenges . While large neural network models have made progress on this problem ( Finn & Levine , 2017 ; Ha & Schmidhuber , 2018 ; Hafner et al. , 2019a ; Watter et al. , 2015 ; Babaeizadeh et al. , 2017 ) , sample-efficient learning necessitates some mechanism to prioritize modeling latent representations from observations such that functionally-relevant factors for the task can be captured . This needs to be done without wasting effort and capacity on irrelevant distractors , and without detailed reconstruction . Several recent works have proposed contrastive objectives that maximize mutual information between observations and latent states ( Hjelm et al. , 2018 ; Ma et al. , 2020 ; Oord et al. , 2018 ; Srinivas et al. , 2020 ) . While such objectives avoid reconstruction , they still do not distinguish between relevant and irrelevant factors of variation . We thus pose the question : can we devise non-reconstructive representation learning methods that explicitly prioritize information that is most likely to be functionally relevant to the agent ? In this work , we derive a model-based RL algorithm from a combination of representation learning via mutual information maximization ( Poole et al. , 2019 ) and empowerment ( Mohamed & Rezende , 2015 ) . The latter serves to drive both the representation and the policy toward exploring and representing functionally relevant factors of variation .
By integrating an empowerment-based term into a mutual information framework for learning state representations ( ∗Work done during Homanga ’ s research internship at Google . hbharadh @ cs.cmu.edu ) , we effectively prioritize information that is most likely to have functional relevance , which mitigates distractions due to irrelevant factors of variation in the observations . By integrating this same term into policy learning , we further improve exploration , particularly in the early stages of learning in sparse-reward environments , where the reward signal provides comparatively little guidance . Our main contribution is InfoPower , a model-based RL algorithm for high-dimensional systems with image observations that integrates empowerment into a mutual information based , non-reconstructive framework for learning state-space models . Our approach explicitly prioritizes information that is most likely to be functionally relevant , which significantly improves performance in the presence of time-correlated distractors ( e.g. , background videos ) , and also accelerates exploration in environments where the reward signal is weak . We evaluate the proposed objectives on a suite of simulated robotic control tasks with explicit video distractors , and demonstrate up to 20 % better performance in terms of cumulative rewards at 1M environment interactions with 30 % higher sample efficiency at 100k interactions . 2 PROBLEM STATEMENT AND NOTATION . A partially observed Markov decision process ( POMDP ) is a tuple ( S , A , T , R , O ) that consists of states s ∈ S , actions a ∈ A , rewards r ∈ R , observations o ∈ O , and a state-transition distribution T ( s′|s , a ) . In most practical settings , the agent interacting with the environment doesn ’ t have access to the actual states in S , but only to some partial information in the form of observations O . The underlying state-transition distribution T and reward distribution R are also unknown to the agent .
In this paper , we consider the observations o ∈ O to be high-dimensional images , and so , the agent should learn a compact representation space Z for the latent state-space model . The problem statement is to learn effective representations from observations O when there are visual distractors present in the scene , and plan using the learned representations to maximize the cumulative sum of discounted rewards , J = E [ ∑ t γ t−1rt ] . The value of a state V ( Zt ) is defined as the expected cumulative sum of discounted rewards starting at state Zt . We use q ( · ) to denote parameterized variational approximations to learned distributions . We denote random variables with capital letters and use small letters to denote particular realizations ( e.g. , zt denotes the value of Zt ) . Since the underlying distributions are unknown , we evaluate all expectations through Monte-Carlo sampling with observed state-transition tuples ( ot , at−1 , ot−1 , zt , zt−1 , rt ) . 3 INFORMATION PRIORITIZATION FOR THE LATENT STATE-SPACE MODEL . Our goal is to learn a latent state-space model with a representation Z that prioritizes capturing functionally relevant parts of observations O , and devise a planning objective that explores with the learned representation . To achieve this , our key insight is integration of empowerment in the visual model-based RL pipeline . For representation learning we maximize MI maxZ I ( O , Z ) subject to a prioritization of the empowerment objective maxZ I ( At−1 ; Zt|Zt−1 ) . For planning , we maximize the empowerment objective along with reward-based value with respect to the policy maxA I ( At−1 ; Zt|Zt−1 ) + I ( Rt ; Zt ) . In the subsequent sections , we elaborate on our approach , InfoPower , and describe lower bounds to MI that yield a tractable algorithm . 3.1 LEARNING CONTROLLABLE FACTORS AND PLANNING THROUGH EMPOWERMENT . 
Controllable representations are features of the observation that correspond to entities which the agent can influence through its actions . For example , in quadrupedal locomotion , this could include the joint positions , velocities , motor torques , and the configurations of any object in the environment that the robot can interact with . For robotic manipulation , it could include the joint actuators of the robot arm , and the configurations of objects in the scene that it can interact with . Such representations are denoted by S+ in Fig . 2 , which we can formally define through conditional independence as the smallest subspace of S , S+ ≤ S , such that I ( At−1 ; St|S+t ) = 0 . This conditional independence relation can be seen in Fig . 2 . We explicitly prioritize the learning of such representations in the latent space by drawing inspiration from variational empowerment ( Mohamed & Rezende , 2015 ) . The empowerment objective can be cast as maximizing a conditional information term I ( At−1 ; Zt|Zt−1 ) = H ( At−1|Zt−1 ) − H ( At−1|Zt , Zt−1 ) . The first term H ( At−1|Zt−1 ) encourages the chosen actions to be as diverse as possible , while the second term − H ( At−1|Zt , Zt−1 ) encourages the representations Zt−1 and Zt to be such that the action At−1 for the transition is predictable . While prior approaches have used empowerment in the model-free setting to learn policies by exploration through intrinsic motivation ( Mohamed & Rezende , 2015 ) , we specifically use this objective in combination with MI maximization for prioritizing the learning of controllable representations from distracting images in the latent state-space model . We include the same empowerment objective in both representation learning and policy learning . For this , we augment the maximization of the latent value function that is standard for policy learning in visual model-based RL ( Sutton , 1991 ) , with maxA I ( At−1 ; Zt|Zt−1 ) .
This objective complements value-based learning and further improves exploration by seeking controllable states . We empirically analyze the benefits of this in sections 4.3 and 4.5 . In Appendix A.1 we describe two theorems regarding learning controllable representations . We observe that the max ∑ t I ( At−1 ; Zt|Zt−1 ) objective alone for learning latent representations Z , along with the planning objective , provably recovers controllable parts of the observation O , namely S+ . This result in Theorem 1 is important because in practice , we may not be able to represent every possible factor of variation in a complex environment . In this situation , we would expect that when |Z| ≪ |O| , learning Z under the objective max ∑ t I ( At−1 ; Zt|Zt−1 ) would encode S+ . We further show through Theorem 2 that the inverse information objective alone can be used to train a latent state-space model and a policy through an alternating optimization algorithm that converges to a local minimum of the objective max ∑ t I ( At−1 ; Zt|Zt−1 ) at a rate inversely proportional to the number of iterations . In Section 4.3 we empirically show how this objective helps achieve higher sample efficiency compared to pure value-based policy learning . 3.2 MUTUAL INFORMATION MAXIMIZATION FOR REPRESENTATION LEARNING . Algorithm 1 : Information Prioritization in Visual Model-based RL ( InfoPower ) Initialize dataset D with random episodes . Initialize model parameters φ , χ , ψ , η. Initialize dual variable λ. while not converged do for update step c = 1 .. C do // Model learning Sample data { ( at , ot , rt ) } t = k .. k+L ∼ D. Compute latents zt ∼ pφ ( zt|zt−1 , at−1 , ot ) . Calculate L based on section 3.4 . ( φ , χ , ψ , η ) ← ( φ , χ , ψ , η ) +∇φ , χ , ψ , ηL λ← λ−∇λL // Behavior learning Rollout latent plan , S ← S ∪ { zt , at , rt } V ( zt ) ≈ Eπ [ ln qη ( rt|zt ) + ln qψ ( at−1|zt , zt−1 ) ] Update policy π and value model end // Environment interaction for time step t = 0 ..
T − 1 do zt ∼ pφ ( zt|zt−1 , at−1 , ot ) ; at ∼ π ( at|zt ) rt , ot+1 ← env.step ( at ) . end Add data D ← D ∪ { ( ot , at , rt ) } t = 1 .. T . end For visual model-based RL , we need to learn a representation space Z , such that a forward dynamics model defining the probability of the next state in terms of the current state and the current action can be learned . The objective for this is ∑ t −I ( it ; Zt|Zt−1 , At−1 ) . Here , it denotes the dataset indices that determine the observations p ( ot|it ) = δ ( ot − ot′ ) . In addition to the forward dynamics model , we need to learn a reward predictor by maximizing ∑ t I ( Rt ; Zt ) , such that the agent can plan ahead in the future by rolling forward latent states , without having to execute actions and observe rewards in the real environment . Finally , we need to learn an encoder for encoding observations O to latents Z . Most successful prior works have used a reconstruction loss as a natural objective for learning this encoder ( Babaeizadeh et al. , 2017 ; Hafner et al. , 2019b ; a ) . A reconstruction loss can be motivated by considering the objective I ( O , Z ) and computing its BA lower bound ( Agakov , 2004 ) : I ( ot ; zt ) ≥ Ep ( ot , zt ) [ log qφ′ ( ot|zt ) ] + H ( p ( ot ) ) . The first term here is the reconstruction objective , with qφ′ ( ot|zt ) being the decoder , and the second term can be ignored as it doesn ’ t depend on Z . However , this reconstruction objective explicitly encourages encoding the information from every pixel in the latent space ( such that reconstructing the image is possible ) and hence is prone to encoding distractors rather than ignoring them . In contrast , if we consider other lower bounds to I ( O , Z ) , we can obtain tractable objectives that do not involve reconstructing high-dimensional images . We can obtain an NCE-based lower bound ( Hjelm et al.
, 2018 ) : I ( ot ; zt ) ≥ Eqφ ( zt|ot ) p ( ot ) [ log fθ ( zt , ot ) − log ∑ t′ ≠ t fθ ( zt , ot′ ) ] , where qφ ( zt|ot ) is the learned encoder , ot is the observation at timestep t ( positive sample ) , and all observations in the replay buffer ot′ are negative samples . The critic is fθ ( zt , ot′ ) = exp ( zt⊤ Wθ zt′ ) . The lower bound is a form of contrastive learning as it maximizes compatibility of zt with the corresponding observation ot while minimizing compatibility with all other observations across time and batch . Although prior work has explored NCE-based bounds for contrastive learning in RL ( Srinivas et al. , 2020 ) , to the best of our knowledge , prior work has not used this in conjunction with empowerment for prioritizing information in visual model-based RL . Similarly , the Nguyen-Wainwright-Jordan ( NWJ ) bound ( Nguyen et al. , 2010 ) , which to the best of our knowledge has not been used by prior works in visual model-based RL , can be obtained as I ( ot ; zt ) ≥ Eqφ ( zt|ot ) p ( ot ) [ fθ ( zt , ot ) ] − e−1 Eqφ ( zt|ot ) p ( ot ) [ exp ( fθ ( zt , ot ) ) ] , where fθ is a critic . There exists an optimal critic function for which the bound is tightest and equality holds . We refer to the InfoNCE and NWJ lower bound based objectives as contrastive learning , in order to distinguish them from a reconstruction-loss based objective , though both are bounds on mutual information . We denote a lower bound to MI by I ( ot , zt ) . We empirically find the NWJ-bound to perform slightly better than the NCE-bound for our approach , explained in section 4.5 . | The paper introduces a method to learn representations for model-based RL. The key idea is to find a representation that maximises information between the action at the previous step and the current latent representation, given the representation at the previous step, thereby maximising the information that the representation encodes about effects of an action.
The work contributes a method and evaluation on tasks where high-dimensional visual inputs are used to control an agent in an environment with complex backgrounds as distractors. | SP:b94e9808269fce8bb46330fc76d5a7afec946fa5 |
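The empowerment term I ( At−1 ; Zt|Zt−1 ) that the paper and reviews above center on is typically lower-bounded variationally as H ( At−1|Zt−1 ) + E [ log qψ ( at−1|zt , zt−1 ) ] , with qψ an inverse model. The toy numpy sketch below estimates such a bound; the linear-Gaussian inverse model, the fixed sigma, and the Gaussian policy-entropy term are all illustrative assumptions rather than the paper's learned components.

```python
import numpy as np

def empowerment_lower_bound(z_prev, z_next, actions, sigma=1.0):
    """Toy estimate of the variational empowerment bound
        I(A_{t-1}; Z_t | Z_{t-1}) >= H(A_{t-1}|Z_{t-1}) + E[log q(a_{t-1}|z_t, z_{t-1})].
    E[log q] uses a least-squares linear-Gaussian inverse model with fixed
    std `sigma`; H(A|Z_{t-1}) is taken as the entropy of an isotropic
    Gaussian policy with the same std (both illustrative stand-ins)."""
    n, d = actions.shape
    X = np.hstack([z_prev, z_next, np.ones((n, 1))])   # inverse-model inputs
    W, *_ = np.linalg.lstsq(X, actions, rcond=None)    # fit the mean of q
    resid = actions - X @ W
    logq = (-0.5 * np.sum(resid**2, axis=1) / sigma**2
            - 0.5 * d * np.log(2.0 * np.pi * sigma**2))
    h_policy = 0.5 * d * np.log(2.0 * np.pi * np.e * sigma**2)
    return float(h_policy + np.mean(logq))
```

Under these choices, fully controllable dynamics zt = zt−1 + at−1 let the inverse model recover actions exactly, so the estimate reaches d/2 nats, while transitions that are independent of the action drive it lower, mirroring how the objective favors controllable state factors.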
Distribution-Driven Disjoint Prediction Intervals for Deep Learning | This paper redefines prediction intervals ( PIs ) as a union of disjoint intervals . PIs represent predictive uncertainty in the regression problem . Since previous PI methods assumed a single continuous PI ( one lower and one upper bound ) , they suffer from performance degradation in uncertainty estimation when the conditional density function has multiple modes . This paper demonstrates that multimodality should be considered in regression uncertainty estimation . To address the issue , we propose a novel method that generates a union of disjoint PIs . In UCI benchmark experiments , our method improves over current state-of-the-art uncertainty quantification methods , reducing average PI width by over 27 % . Through qualitative experiments , we visualize that multimodality often exists in real-world datasets and show why our method produces higher-quality PIs than previous single-interval methods . 1 INTRODUCTION . Deep neural networks ( NNs ) show remarkable performance in predicting a target for regression problems . However , the prediction alone is not enough to make it trustworthy : minimizing the objective function of the NN leads to network outputs which approximate the conditional averages of the target data , with no information about sampling errors and prediction accuracy . Moreover , if the target is multivalued , the NN output can be far from the actual target . Incorporating predictive uncertainty into the deterministic approximation generated by NNs improves the reliability and credibility of the predictions . This issue is being discussed in various domains such as autonomous driving ( Feng et al. , 2018 ) , object detection ( He et al. , 2019 ) , solar energy forecasting ( Galván et al. , 2017 ) , electricity demand and price estimation ( Shrivastava & Panigrahi , 2015 ) , and sensor anomaly detection ( Pang et al. , 2017 ) .
A prediction interval ( PI ) represents and quantifies predictive uncertainty in the regression problem . Pearce et al . ( 2018 ) ; Tagasovska & Lopez-Paz ( 2019 ) ; Salem et al . ( 2020 ) have recently provided competitive performance by generating a PI to estimate predictive uncertainty . A PI describes predictive uncertainty for each sample in the form of two values ( a lower and an upper bound ) between which a potential observation falls with a certain probability ( e.g. , 95 % or 99 % ) . A PI conveys the amount of uncertainty for each sample through its width , and it provides the possible range of the prediction through its bounds . It is a self-evident principle that a high-quality PI should be as narrow as possible while containing some specified proportion of data points ( hereafter referred to as the HQ principle ) . The quality of a PI is often evaluated by metrics derived from the HQ principle ( Khosravi et al. , 2010 ; Galván et al. , 2017 ; Pearce et al. , 2018 ; Tagasovska & Lopez-Paz , 2019 ; Salem et al. , 2020 ) . Previous methods estimate regression uncertainty with a single continuous PI , but they may suffer from performance degradation in regression problems exhibiting multimodality . The toy example in Figure 1 is a one-dimensional regression problem with two modes . We observe that a single continuous PI ( gray shade ) provides unnecessarily large PIs to fill in the gap between the two modes compared to disjoint PIs ( blue shade ) . This means that a single continuous PI provides low-quality PIs in terms of the HQ principle . Including intervals that are unlikely to contain future observations makes PIs less reliable . Note that this issue becomes more severe as the distance between modes increases . We qualitatively confirmed that multimodality often exists in real-world regression datasets by approximating the conditional probability density function .
We also confirmed that state-of-the-art methods generate low-quality PIs on real-world samples with multimodality; this is covered in more detail in Section 5.4. Considering multimodality has been successful at capturing the underlying stochastic structure in various fields (Ameijeiras-Alonso et al., 2019; Lerch et al., 2020). Concerning multimodality, various lines of work such as clustering, multi-object detection (Yoo et al., 2019), missing data reconstruction (Smieja et al., 2018), multiple-choice learning (Lee et al., 2017), and multi-output prediction (Guzman-Rivera et al., 2014) have been conducted. However, recent regression uncertainty estimation studies do not consider multimodality in depth. In this work, we redefine the PI as a union of disjoint PIs because of the limitation of a single continuous PI under multimodality (Section 3). Since prior PI methods and loss functions do not apply to a union of disjoint PIs, we propose a new differentiable objective function and NN architecture that produce the union of disjoint PIs (Section 4). Additionally, we use an ensemble method to boost performance in both in- and out-of-distribution regions (Section 5.2). As a result, our method improves over current state-of-the-art methods, reducing the average PI width by 27% across eleven real-world datasets (Section 5.3). In addition, our method can provide the coverage probability of each disjoint PI (e.g., a 20% chance of being between 1 and 3 and a 75% chance of being between 5 and 9), giving information about how reliable each interval is (Section 5.5). 2 RELATED WORK. There are two approaches to estimating predictive uncertainty in regression problems: Bayesian and non-Bayesian. In the Bayesian approach, NN parameters are treated as a distribution, and the uncertainty is calculated by marginalizing over the parameters (Graves, 2011; Blundell et al.
, 2015; Hernández-Lobato & Adams, 2015; Gal et al., 2017; Khan et al., 2018; Wu et al., 2018; Yao et al., 2019; Izmailov et al., 2020). Though theoretically grounded, an approximation is needed since calculating the posterior distribution of NN parameters is computationally intractable, and inference carries a high computational cost. The non-Bayesian approach, on the other hand, defines the outputs of the NN as parameters that describe the predictive uncertainty. It is usually less computationally expensive than the Bayesian approach. However, since the NN parameters are fixed, non-Bayesian methods are limited in expressing model uncertainty; a deep ensemble with random initialization is therefore used in addition to deal with model uncertainty. Several papers in the non-Bayesian branch have recently provided competitive performance (Lakshminarayanan et al., 2017; Pearce et al., 2018; Tagasovska & Lopez-Paz, 2019; Salem et al., 2020). Our paper focuses on the non-Bayesian approach, especially for the regression problem, so we take a closer look at non-Bayesian methods by dividing them into PI and non-PI methods. Among PI methods, Khosravi et al. (2010) propose the Lower Upper Bound Estimation (LUBE) method, the first to produce PIs. Following that, Pearce et al. (2018) propose a quality-driven (QD) loss function that is compatible with gradient-descent optimization; they also propose an ensemble method for PIs with multiple predicted lower and upper bounds to estimate model uncertainty. Salem et al. (2020) retrofit the QD loss function and propose a new ensemble method by fitting a split normal mixture distribution (Wallis, 2014) to the PI and averaging the distributions, which they name SNM-QD+. It increases the robustness of the training process compared to the QD method.
However, SNM-QD+ is difficult to tune because its loss function contains several hyperparameters that must be searched to realize these advantages. Tagasovska & Lopez-Paz (2019) propose simultaneous quantile regression (SQR) and orthonormal certificates (OC) to estimate data noise and model uncertainty, respectively. However, this strategy generates the PI from SQR alone, without an ensemble, and the model uncertainty from OC is not included in the PI; the PI of SQR therefore does not account for model uncertainty. The aforementioned loss functions and methods can only generate a single continuous PI, not a union of disjoint PIs. As a non-PI method, Mean-Variance Estimation (MVE) (Nix & Weigend, 1994) uses an NN with two output nodes interpreted as the mean and standard deviation of the conditional probability distribution. Since the NN parameters are fixed, it cannot deal with model uncertainty. Lakshminarayanan et al. (2017) demonstrate that a deep ensemble of multiple MVE networks with random initialization (so-called MVEens) improves performance, especially in out-of-distribution regions. Fort et al. (2019) show that an ensemble with random initialization may sample different modes in function space and therefore performs well in exploring model uncertainty. 3 UNION OF DISJOINT PREDICTION INTERVALS. 3.1 PROBLEM SETUP. Consider a dataset $\{x_i, y_i\}_{i=1}^{N}$ where $x_i$ is an input and $y_i$ is a target. For each data point $(x_i, y_i)$, the disjoint set of PIs that covers a desired proportion $\gamma \in [0, 1]$ is defined as

$$\mathrm{PI}_i = \bigcup_{j=1}^{J(i)} [L_{ij}, U_{ij}) \quad (1)$$

where $\Pr(y_i \in \mathrm{PI}_i) \ge \gamma$ and $L_{ij} \le U_{ij} < L_{i(j+1)}$ for all $j$. Here $L_{ij}$ and $U_{ij}$ are the lower and upper bounds of the $j$th interval associated with the $i$th data point, and $J(i)$ is the number of disjoint intervals when $\mathrm{PI}_i$ is expressed with the smallest number of disjoint intervals; that is, $J(i)$ is unique for a given union of intervals.
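The set representation in Eq. (1) is straightforward to operate on in code. A minimal sketch (with hypothetical helper names of our own) of the membership test and of the merging step that expresses a union with the smallest number of disjoint intervals, making J(i) unique:

```python
def in_union(y, intervals):
    """Is y in the union of half-open intervals [L, U)? Intervals are
    assumed sorted and disjoint, L_j <= U_j < L_{j+1}, as in Eq. (1)."""
    return any(L <= y < U for L, U in intervals)

def merge(intervals):
    """Express a union of half-open intervals with the smallest number
    of disjoint intervals, so that J(i) is unique."""
    out = []
    for L, U in sorted(intervals):
        if out and L <= out[-1][1]:      # overlaps or touches the previous one
            out[-1][1] = max(out[-1][1], U)
        else:
            out.append([L, U])
    return [tuple(iv) for iv in out]

pi = merge([(-5.0, -3.0), (3.0, 4.0), (3.5, 5.0)])   # J(i) = 2
print(pi)                     # [(-5.0, -3.0), (3.0, 5.0)]
print(in_union(-4.2, pi))     # True
print(in_union(0.0, pi))      # False
```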
Note that $J(i)$ may differ across data points. Previous methods assume a single continuous PI, i.e., $J(i) = 1$ for all $i$. 3.2 PERFORMANCE METRICS: PICP AND MPIW. To measure the quality of PI methods under the HQ principle, the Prediction Interval Coverage Probability (PICP) and Mean Prediction Interval Width (MPIW) are defined as

$$\mathrm{PICP} = \frac{c}{N}, \quad \text{where } c = \sum_{i=1}^{N} c_i \text{ and } c_i = \begin{cases} 1, & \text{if } y_i \in \mathrm{PI}_i \\ 0, & \text{otherwise} \end{cases} \quad (2)$$

$$\mathrm{MPIW} = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{J(i)} (U_{ij} - L_{ij}) \quad (3)$$

PICP measures the fraction of targets captured within the PIs, while MPIW measures the average total length of the PIs over all samples. According to the HQ principle, PIs should minimize MPIW subject to PICP ≥ γ (e.g., γ = 0.95 or 0.99). These metrics are widely used to compare the performance of PI-related methods (Khosravi et al., 2010; Pearce et al., 2018; Tagasovska & Lopez-Paz, 2019; Salem et al., 2020). 4 DISTRIBUTION-DRIVEN-DISJOINT METHOD. We propose a learning-based method that generates distribution-driven disjoint (DDD) PIs, which we call the DDD method. The DDD method produces high-quality PIs without the assumption that $J(i) = 1$. To generate multiple disjoint PIs with a learning-based method, we need a differentiable loss function that reflects the HQ principle. This is challenging because $c$ in Eq. (2) is non-differentiable. Pearce et al. (2018) and Salem et al. (2020) proposed the QD and QD+ loss functions as constrained optimization problems by approximating $c$ in a differentiable way. Tagasovska & Lopez-Paz (2019) employed the pinball loss, which reflects the HQ principle and is differentiable. However, these loss functions only work for a single continuous PI with $J(i) = 1$.
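The PICP and MPIW metrics of Eqs. (2)-(3) extend directly to per-sample unions of disjoint intervals. A small sketch (hypothetical function name, not the authors' evaluation code), where each sample's PI is a list of half-open (L, U) pairs:

```python
import numpy as np

def picp_mpiw(y, pis):
    """PICP and MPIW of Eqs. (2)-(3) for per-sample unions of
    disjoint half-open intervals [L, U)."""
    # c_i = 1 iff y_i falls in any of its intervals (Eq. 2).
    covered = [any(L <= yi < U for L, U in pi) for yi, pi in zip(y, pis)]
    picp = float(np.mean(covered))
    # Per-sample width is the summed length of the disjoint intervals (Eq. 3).
    mpiw = float(np.mean([sum(U - L for L, U in pi) for pi in pis]))
    return picp, mpiw

y = np.array([-4.0, 0.0, 4.2])
pis = [[(-5.0, -3.0), (3.0, 5.0)]] * 3    # same union for every sample
picp, mpiw = picp_mpiw(y, pis)
print(picp, mpiw)   # 0.666..., 4.0
```

Here the sample at 0.0 falls in the gap between the two intervals, so only 2 of 3 targets are covered, while each union contributes a total width of 4.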
To derive a new differentiable loss function (the DDD loss) for multiple disjoint intervals, we first approximate the conditional distribution given the input, $\hat{p}(y_i \mid x_i)$, with a Gaussian mixture model. We then derive the DDD loss from the cumulative distribution function of the Gaussian mixture (hence the name distribution-driven disjoint). Our DDD method trains the NN by minimizing the DDD loss. Another major problem is that the optimal number of disjoint prediction intervals, $J(i)_{\mathrm{opt}}$, may differ for each $\hat{p}(y_i \mid x_i)$, making it hard to implement $\{[\hat{L}_{ij}, \hat{U}_{ij})\}_{j=1}^{J(i)}$ as the output of an NN. To deal with this, the network first produces $K$ intervals regardless of the conditional density, and a union process then removes the overlapping parts. We propose a novel architecture that implements this in a differentiable way. Additionally, we employ a simple ensemble method to improve performance for both in- and out-of-distribution observations. | This paper provides an algorithm for the construction of prediction intervals composed of disjoint intervals. The authors motivate disjoint intervals well, and the motivating example is impressive. The algorithms also accommodate statistical and learning-based prediction intervals, contributing to the assessment of prediction uncertainty in general. | SP:077e63b28356b13b5309d4dae177b22d04636903 |
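The distribution-driven idea can be illustrated end to end on a fixed density. The sketch below is our own illustration, not the paper's architecture: the mixture parameters stand in for a learned $\hat{p}(y \mid x)$, and grid-based density thresholding replaces the differentiable interval-producing network. It extracts a γ-level union of disjoint intervals from a bimodal Gaussian mixture and checks its coverage by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a learned conditional density: 2-component Gaussian mixture.
w, mu, sd = np.array([0.5, 0.5]), np.array([-4.0, 4.0]), np.array([0.5, 0.5])

def mixture_pdf(y):
    y = np.asarray(y, float)[..., None]
    comp = np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return np.sum(w * comp, axis=-1)

gamma = 0.95
grid = np.linspace(-10.0, 10.0, 20001)
dy = grid[1] - grid[0]
dens = mixture_pdf(grid)

# Lower a density threshold t until {y : p(y) >= t} holds probability mass
# >= gamma (mass approximated on the grid); for a well-separated bimodal
# density this region is a union of two disjoint intervals.
order = np.argsort(dens)[::-1]
mass = np.cumsum(dens[order]) * dy
t = dens[order[np.searchsorted(mass, gamma)]]

# Group contiguous above-threshold grid runs into (L, U) intervals.
m = np.concatenate(([0], (dens >= t).astype(int), [0]))
idx = np.flatnonzero(np.diff(m))
intervals = [(grid[a], grid[b - 1]) for a, b in zip(idx[::2], idx[1::2])]

# Monte-Carlo check: the union covers roughly gamma of mixture samples.
comp = rng.choice(2, size=200_000, p=w)
samples = rng.normal(mu[comp], sd[comp])
covered = np.zeros(samples.shape, bool)
for L, U in intervals:
    covered |= (L <= samples) & (samples < U)
print(len(intervals), covered.mean())
```

The highest-density region of a multimodal density is naturally a union of disjoint intervals, which is the structure the DDD loss and architecture are designed to produce in a trainable way.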
Distribution-Driven Disjoint Prediction Intervals for Deep Learning | The paper addresses the problem of determining prediction intervals (PI) in regression tasks. The prediction interval problem can be summarized as predicting a lower and upper bound between which the potential observation falls with a certain probability. The paper proposes a method to report the prediction interval as the union of disjoint intervals, in contrast with previous methods, which report a unified continuous interval. The motivation is that if the conditional density function has multiple modes, a single prediction interval may not be well descriptive of the uncertainty of the predictive model. To achieve this goal, they propose a differentiable objective function together with a neural network architecture that produces the union of disjoint prediction intervals. Through experiments, they show that multimodality often exists in real-world datasets and that their method manages to produce prediction intervals of higher quality (in terms of the commonly used metrics to assess the quality of the prediction intervals such as coverage probability and interval width) compared to the previous work. | SP:077e63b28356b13b5309d4dae177b22d04636903 |
Distribution-Driven Disjoint Prediction Intervals for Deep Learning | This paper redefines prediction intervals ( PIs ) as the form of a union of disjoint intervals . PIs represent predictive uncertainty in the regression problem . Since previous PI methods assumed a single continuous PI ( one lower and upper bound ) , it suffers from performance degradation in the uncertainty estimation when the conditional density function has multiple modes . This paper demonstrates that multimodality should be considered in regression uncertainty estimation . To address the issue , we propose a novel method that generates a union of disjoint PIs . Throughout UCI benchmark experiments , our method improves over current state-of-the-art uncertainty quantification methods , reducing an average PI width by over 27 % . Through qualitative experiments , we visualized that the multi-mode often exists in real-world datasets and why our method produces high-quality PIs compared to the previous PI . 1 INTRODUCTION . Deep neural networks ( NNs ) show remarkable performance in predicting a target for regression problems . However , the prediction is not enough to make it trustworthy : minimization of objective functions the NN leads to network outputs which approximate the conditional averages of the target data with no information about sampling errors and prediction accuracy . Moreover , if the target is multivalue , NN output can be far from the actual target in the regression problems . Incorporating the predictive uncertainty into the deterministic approximation generated by NNs improves the reliability and credibility of the predictions . This issue is being discussed in various domains such as autonomous driving ( Feng et al. , 2018 ) , object detection ( He et al. , 2019 ) , solar energy forecasting ( Galván et al. , 2017 ) , electricity demands and price estimation ( Shrivastava & Panigrahi , 2015 ) , and sensor anomaly detection ( Pang et al. , 2017 ) . 
Prediction interval ( PI ) represents and quantifies predictive uncertainty in the regression problem . Pearce et al . ( 2018 ) ; Tagasovska & Lopez-Paz ( 2019 ) ; Salem et al . ( 2020 ) have recently provided competitive performance by generating a PI to estimate predictive uncertainty . PI describes predictive uncertainty for each sample in the form of two values ( lower and upper bound ) between which a potential observation falls with a certain probability ( e.g. , 95 % or 99 % ) . PI can provide the amount of uncertainty for each sample by the width of PI . It also provides the possible range of prediction by bounds . It is a self-evident principle that high-quality PI should be as narrow as possible while containing some specified proportion of data points ( hereafter referred to as the HQ principle ) . The quality of a PI is often evaluated by the metric derived from the HQ principle ( Khosravi et al. , 2010 ; Galván et al. , 2017 ; Pearce et al. , 2018 ; Tagasovska & Lopez-Paz , 2019 ; Salem et al. , 2020 ) . Previous methods estimate the regression uncertainty with a single continuous PI , but it may suffer from performance degradation in the regression having multimodality . A toy example in Figure 1 is a one-dimensional regression example that has two modes . We observe that a single continuous PI ( gray shade ) provides unnecessarily large PIs to fill in the gap between the two modes compared to disjoint PIs ( blue shade ) . This means that a single continuous PI provides low-quality PIs in terms of the HQ principle . Including intervals that are unlikely to contain future observations makes PIs less reliable . Note that this issue becomes severe as the distance between modes increases . We qualitatively confirmed that multimodality often exists in real-world regression datasets through approximating conditional probability density function . 
We also confirmed that state-of-the-art methods generate low-quality PIs on real-world samples with multimodality . This is covered in more detail in Section 5.4 . Considering multimodality has been successful at handling the underlying stochastic structure in various fields ( Ameijeiras-Alonso et al. , 2019 ; Lerch et al. , 2020 ) . Concerning multimodality , various works such as clustering , multi-object detection ( Yoo et al. , 2019 ) , missing data reconstruction ( Smieja et al. , 2018 ) , multiple-choice learning ( Lee et al. , 2017 ) , and multi-output prediction ( Guzman-Rivera et al. , 2014 ) have been conducted . However , recent regression uncertainty estimation studies do not consider multimodality in depth . In this work , we redefine PI as a union of disjoint PIs due to the limitation of a single continuous PI in multimodality ( Section 3 ) . Since prior PI methods and loss functions do not apply to the union of disjoint PIs , we propose a new differentiable objective function and NN architecture that produce the union of disjoint PIs ( Section 4 ) . Additionally , we use the ensemble method to boost the performance for both in- and out-of-distribution regions ( Section 5.2 ) . As a result , our method improves over current state-of-the-art methods , reducing an average PI width by 27 % throughout eleven real-world datasets ( Section 5.3 ) . In addition , our method can provide the coverage probability of each disjoint PI ( e.g. , 20 % chance of being between 1 and 3 , 75 % chance of being between 5 and 9 ) . This means that our method gives information about how reliable each interval is ( Section 5.5 ) . 2 RELATED WORK . There are two approaches for estimating the predictive uncertainty for regression problems : Bayesian and non-Bayesian . In the Bayesian approach , NN parameters are considered as a distribution , and the uncertainty is calculated by marginalizing the parameters ( Graves , 2011 ; Blundell et al. 
, 2015 ; Hernández-Lobato & Adams , 2015 ; Gal et al. , 2017 ; Khan et al. , 2018 ; Wu et al. , 2018 ; Yao et al. , 2019 ; Izmailov et al. , 2020 ) . Though theoretically grounded , an approximation is needed since calculating the posterior distribution of NN parameters is computationally intractable . It also requires high computational demand in the inference time . The non-Bayesian approach , on the other hand , defines the output of NN as parameters to describe the predictive uncertainty . It is usually less computational than the Bayesian approach . However , since the NN parameters are fixed , non-Bayesian methods have a limitation in expressing the model uncertainty . Therefore , the deep ensemble with random initialization is additionally used to deal with model uncertainty . Several papers in the non-Bayesian branch have recently provided competitive performance ( Lakshminarayanan et al. , 2017 ; Pearce et al. , 2018 ; Tagasovska & Lopez-Paz , 2019 ; Salem et al. , 2020 ) . Our paper focuses on the Non-Bayesian approach , especially for the regression problem . Therefore , we would take a closer look at the non-Bayesian methods by dividing them into PI and non-PI methods . As PI methods for non-Bayesian methods , Khosravi et al . ( 2010 ) propose the Lower Upper Bound Estimation ( LUBE ) method that produces PI for the first time . Followed by that , Pearce et al . ( 2018 ) propose a quality-driven ( QD ) loss function that is compatible with gradient descent optimization . They also propose an ensemble method for PI with multiple predicted lower and upper bounds to estimate the model uncertainty . Salem et al . ( 2020 ) retrofit the QD loss function and propose a new ensemble method by fitting the split normal mixture distribution ( Wallis , 2014 ) to the PI and averaging the distribution , where they name it as SNM-QD+ . It increases the robustness of the training process compared to the QD method . 
However , SNM-QD+ has difficulties searching hyperparameters because the loss function contains various hyperparameters to achieve the advantages . Tagasovska & Lopez-Paz ( 2019 ) propose the simultaneous quantile regression ( SQR ) and the orthonormal certificates ( OC ) to estimate data noise and model uncertainty , respectively . However , this strategy generates PI only by the SQR without an ensemble method , and model uncertainty from OC is not included in the PI . Therefore , the PI of SQR does not consider the model uncertainty . Aforementioned loss functions and methods can only generate a single continuous PI but not a union of disjoint PIs . As non-PI methods , Mean-Variance Estimation ( MVE ) ( Nix & Weigend , 1994 ) uses a NN with two output nodes that are considered as a mean and a standard deviation of the conditional probability distribution . Since NN parameters are fixed , it can not deal with the model uncertainty . Lakshminarayanan et al . ( 2017 ) demonstrate the deep ensemble of multiple MVE with random initialization improves the performance , especially in out-of-distribution regions . ( so-called MVEens ) . Fort et al . ( 2019 ) shows that ensemble with random initialization may sample different modes in function space and therefore perform well in exploring model uncertainty . 3 UNION OF DISJOINT PREDICTION INTERVALS . 3.1 PROBLEM SETUP . Consider a dataset { xi , yi } Ni=1 where xi is an input and yi is a target . For each data point { xi , yi } , the disjoint set of PIs that covers the desired given proportion γ ∈ [ 0 , 1 ] is defined as follows : PIi = J ( i ) ⋃ j=1 [ Lij , Uij ) ( 1 ) where Pr ( yi ∈ PIi ) ≥ γ and Lij ≤ Uij < Li ( j+1 ) for all j Lij and Uij is a lower and upper bound of jth PI related with ith data point . J ( i ) is the number of disjoint intervals when PI is expressed with the smallest number of disjoint intervals . That is , J ( i ) is unique for a given interval . 
Note that $J(i)$ may take a different value for each data point. Previous methods assume a single continuous PI, i.e., $J(i) = 1$ for all $i$. 3.2 PERFORMANCE METRIC : PICP AND MPIW . To measure the quality of PI methods based on the HQ principle, the Prediction Interval Coverage Probability (PICP) and Mean Prediction Interval Width (MPIW) are defined as
$$\mathrm{PICP} = \frac{c}{N}, \quad \text{where } c = \sum_{i=1}^{N} c_i \text{ and } c_i = \begin{cases} 1 & \text{if } y_i \in \mathrm{PI}_i \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$
$$\mathrm{MPIW} = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{J(i)} (U_{ij} - L_{ij}) \qquad (3)$$
PICP measures the ratio of targets captured within the PIs, while MPIW measures the total length of the PIs averaged over all samples. According to the HQ principle, PIs should minimize MPIW subject to PICP ≥ γ (e.g., γ = 0.95 or 0.99). These metrics are widely used to compare the performance of PI-related methods (Khosravi et al., 2010; Pearce et al., 2018; Tagasovska & Lopez-Paz, 2019; Salem et al., 2020). 4 DISTRIBUTION-DRIVEN-DISJOINT METHOD . We propose a learning-based method that generates distribution-driven disjoint (DDD) PIs, and we call it the DDD method. The DDD method produces high-quality PIs without the assumption that $J(i) = 1$. To generate multiple disjoint PIs with a learning-based method, we need to formulate a differentiable loss function that reflects the HQ principle. This is challenging, however, because $c$ in (2) is non-differentiable. Pearce et al. (2018) and Salem et al. (2020) proposed the QD and QD+ loss functions in the form of constrained optimization by approximating $c$ in a differentiable way. Tagasovska & Lopez-Paz (2019) employed a pinball loss, which reflects the HQ principle and is differentiable. However, these loss functions are limited to a single continuous PI with $J(i) = 1$.
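To make the two metrics concrete for disjoint PIs, here is a minimal Python sketch (not the authors' code; the interval bounds below are hypothetical toy values):

```python
# Hedged sketch: PICP and MPIW for prediction intervals that may be
# unions of disjoint half-open intervals [L, U), following Eqs. (2)-(3).

def picp(y, pis):
    """Fraction of targets captured by their (possibly disjoint) PIs."""
    c = sum(1 for yi, pi in zip(y, pis)
            if any(L <= yi < U for (L, U) in pi))
    return c / len(y)

def mpiw(pis):
    """Total PI length, averaged over all samples."""
    return sum(U - L for pi in pis for (L, U) in pi) / len(pis)

# Toy example: J(1) = 2 disjoint intervals, J(2) = 1.
y = [0.5, 3.0]
pis = [[(0.0, 1.0), (2.0, 2.5)],  # y = 0.5 lands in the first interval
       [(2.8, 3.4)]]              # y = 3.0 is covered
print(picp(y, pis))  # 1.0
```

Under the HQ principle one would then minimize `mpiw(pis)` subject to `picp(y, pis) >= gamma`.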
To derive a new differentiable loss function (the DDD loss) for multiple disjoint intervals, we first approximate the conditional distribution given the input, $\hat p(y_i | x_i)$, with a Gaussian mixture model. Then, we derive the DDD loss using the cumulative distribution function of the Gaussian mixture (hence the name distribution-driven disjoint). Our DDD method trains the NN by minimizing the DDD loss. Another major problem is that the optimal number of disjoint prediction intervals $J(i)_{\mathrm{opt}}$ may differ for each $\hat p(y_i | x_i)$, making it hard to implement $\{[\hat L_{ij}, \hat U_{ij})\}_{j=1}^{J(i)}$ as an output of the NN. To deal with this problem, after producing $K$ intervals regardless of the conditional density, a union process removes the overlapping parts. We propose a novel architecture that implements this in a differentiable way. Additionally, we employ a simple ensemble method to improve performance for both in- and out-of-distribution observations. | This paper notes that existing approaches to predictive interval (PI) generation and evaluation assume unimodal predictive distributions. This results in unnecessarily loose PIs in the presence of multimodality in the predictive distribution or multimodality in the targets. The authors first propose to extend the definition of PI to be a union of disjoint intervals, allowing for more fidelity in evaluation. They then propose a method to explicitly generate multimodal PIs. This method is based on feeding the output of a mixture density network (conditional GMM) through a secondary NN that outputs a set of lower and upper bounds for PIs. The authors compare their method against alternative heteroscedastic noise models in terms of PICP and MPIW on 11 UCI datasets. | SP:077e63b28356b13b5309d4dae177b22d04636903 |
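The union process described in the DDD method can be illustrated with a short, non-differentiable Python sketch (the paper's actual architecture implements this step differentiably; this is only an assumed plain-Python analogue):

```python
# Hedged sketch of the union process: K candidate intervals are produced
# regardless of the conditional density, then overlapping ones are merged
# into the smallest set of disjoint [L, U) intervals.

def union_process(intervals):
    """Merge overlapping or touching [L, U) intervals into disjoint ones."""
    merged = []
    for L, U in sorted(intervals):
        if merged and L <= merged[-1][1]:   # overlaps/touches the previous one
            merged[-1][1] = max(merged[-1][1], U)
        else:
            merged.append([L, U])
    return [tuple(iv) for iv in merged]

# K = 3 candidates, two of which overlap:
print(union_process([(0.0, 1.0), (0.8, 1.5), (3.0, 4.0)]))
# → [(0.0, 1.5), (3.0, 4.0)]
```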
ADAVI: Automatic Dual Amortized Variational Inference Applied To Pyramidal Bayesian Models | 1 INTRODUCTION . Inference aims at obtaining the posterior distribution p(θ|X) of latent model parameters θ given the observed data X. In the context of Hierarchical Bayesian Models (HBMs), p(θ|X) usually has no known analytical form, and can have a complex shape, different from the prior's (Gelman et al., 2004). Modern normalizing-flow-based techniques, which are universal density estimators, can overcome this difficulty (Papamakarios et al., 2019a; Ambrogioni et al., 2021). Yet, in setups such as neuroimaging, featuring HBMs representing large population studies (Kong et al., 2018; Bonkhoff et al., 2021), the dimensionality of θ can exceed a million. This high dimensionality hinders the usage of normalizing flows, since their parameterization usually scales quadratically with the size of the parameter space (e.g., Dinh et al., 2017; Papamakarios et al., 2018; Grathwohl et al., 2018). Population studies with high-dimensional features are therefore inaccessible to off-the-shelf flow-based techniques and their superior expressivity. This can in turn lead to complex, problem-specific derivations: for instance, Kong et al. (2018) rely on a manually derived Expectation Maximization (EM) technique. Such analytical complexity constitutes a strong barrier to entry, and limits the wide and fruitful usage of Bayesian modelling in fields such as neuroimaging. Our main aim is to meet that experimental need: how can we derive a technique both automatic and efficient in the context of very large, hierarchically organised data? Approximate inference features a large corpus of methods, including Monte Carlo methods (Koller & Friedman, 2009) and Variational Auto-Encoders (Zhang et al., 2019). We take particular inspiration from the field of Variational Inference (VI) (Blei et al.
, 2017), which is deemed the most suitable for large parameter spaces. In VI, the experimenter posits a variational family Q so as to approximate q(θ) ≈ p(θ|X). In practice, deriving an expressive yet computationally attractive variational family can be challenging (Blei et al., 2017). This triggered a trend towards the derivation of automatic VI techniques (Kucukelbir et al., 2016; Ranganath et al., 2013; Ambrogioni et al., 2021). We follow that logic and present a methodology that automatically derives a variational family Q. In Fig. 1, from the HBM on the left we automatically derive a neural network architecture on the right. We aim at deriving our variational family Q in the context of amortized inference (Rezende & Mohamed, 2016; Cranmer et al., 2020). Amortization is usually obtained at the cost of an amortization gap from the true posterior, which accumulates on top of an approximation gap dependent on the expressivity of the variational family Q (Cremer et al., 2018). However, once an initial training overhead has been "paid for", amortization means that our technique can be applied to any number of data points to perform inference in a few seconds. Due to the very large parameter spaces presented above, our target applications are not amenable to the generic flow-based techniques described in Cranmer et al. (2020) or Ambrogioni et al. (2021). We therefore differentiate ourselves by exploiting the invariance of the problem not only through the design of an adapted encoder, but down to the very architecture of our density estimator. Specifically, we focus on the inference problem for Hierarchical Bayesian Models (HBMs) (Gelman et al., 2004; Rodrigues et al., 2021). The idea of conditioning the architecture of a density estimator on an analysis of the dependency structure of an HBM has been studied in (Wehenkel & Louppe, 2020; Weilbach et al.
), in the form of masking a single normalizing flow. With Ambrogioni et al. (2021), we instead share the idea of combining multiple separate flows. More generally, our static analysis of a generative model can be associated with structured VI (Hoffman & Blei, 2014; Ambrogioni et al., 2021). Yet our working principles are rather orthogonal: structured VI usually aims at exploiting model structure to augment the expressivity of a variational family, whereas we aim at reducing its parameterization. Our objective is therefore to derive an automatic methodology that takes as input a generative HBM and generates a dual variational family able to perform amortized parameter inference. This variational family exploits the exchangeability in the HBM to reduce its parameterization by orders of magnitude compared to generic methods (Papamakarios et al., 2019b; Greenberg et al., 2019; Ambrogioni et al., 2021). Consequently, our method can be applied in the context of large, pyramidally structured data, a challenging setup inaccessible to existing flow-based methods and their superior expressivity. We apply our method to such a large pyramidal setup in the context of neuroimaging (section 3.5), but demonstrate the benefit of our method beyond that scope. Our general scheme is visible in Fig. 1, a figure that we will explain throughout the course of the next section. 2 METHODS . 2.1 PYRAMIDAL BAYESIAN MODELS . We are interested in experimental setups modelled using plate-enriched Hierarchical Bayesian Models (HBMs) (Kong et al., 2018; Bonkhoff et al., 2021). These models feature independent sampling from a common conditional distribution at multiple levels, translating the graphical notion of plates (Gilks et al., 1994). This nested structure, combined with large measurements, such as the ones in fMRI, can result in massive latent parameter spaces. For instance, the population study in Kong et al.
(2018) features multiple subjects, with multiple measures per subject and multiple brain vertices per measure, for a latent space of around 0.4 million parameters. Our method aims at performing inference in the context of those large plate-enriched HBMs. Such HBMs can be represented with Directed Acyclic Graph (DAG) templates (Koller & Friedman, 2009) with vertices, corresponding to RVs, $\{\theta_i\}_{i=0 \ldots L}$ and plates $\{P_p\}_{p=0 \ldots P}$. We denote as Card(P) the (fixed) cardinality of the plate P, i.e., the number of independent draws from a common conditional distribution it corresponds to. In a template DAG, a given RV θ can belong to multiple plates $P_h, \ldots, P_P$. When grounding the template DAG into a ground graph, instantiating the repeated structure symbolized by the plates P, θ corresponds to multiple RVs of similar parametric form $\{\theta_{i_h, \ldots, i_P}\}$, with $i_h = 1 \ldots \mathrm{Card}(P_h), \ldots, i_P = 1 \ldots \mathrm{Card}(P_P)$. This equivalence is visible on the left of Fig. 1, where the template RV Γ corresponds to the ground RVs [γ1, γ2]. We wish to exploit this plate-induced exchangeability. We define the sub-class of models we specialize upon as pyramidal models, which are plate-enriched DAG templates with the following two differentiating properties. First, we consider a single stack of plates $P_0, \ldots, P_P$. This means that any RV θ belonging to plate $P_p$ also belongs to plates $\{P_q\}_{q > p}$. We thus do not treat in this work the case of colliding plates (Koller & Friedman, 2009). Second, we consider a single observed RV $\theta_0$, with observed value X, belonging to the plate $P_0$ (with no other, latent, RV belonging to $P_0$). The obtained graph follows a typical pyramidal structure, with the observed RV at the base of the pyramid, as seen in Fig. 1. This figure features 2 plates $P_0$ and $P_1$; the observed RV is X, at the base of the pyramid, and the latent RVs are Γ, λ and κ at upper levels of the pyramid.
Pyramidal HBMs delineate models that typically arise as part of population studies, for instance in neuroimaging, featuring a nested group structure and data observed at the subject level only (Kong et al., 2018; Bonkhoff et al., 2021). The fact that we consider a single pyramid of plates allows us to define the hierarchy of an RV $\theta_i$, denoted $\mathrm{Hier}(\theta_i)$. An RV's hierarchy is the level of the pyramid it is placed at. Due to our pyramidal structure, the observed RV is systematically at hierarchy 0 and latent RVs at hierarchies > 0. For instance, in the example in Fig. 1 the observed RV X is at hierarchy 0, Γ is at hierarchy 1, and both λ and κ are at hierarchy 2. Our methodology is designed to process generative models whose dependency structure follows a pyramidal graph, and to scale favorably as the plate cardinalities in such models grow. Given the observed data X, we wish to obtain the posterior density of the latent parameters $\theta_1, \ldots, \theta_L$, exploiting the exchangeability induced by the plates $P_0, \ldots, P_P$. 2.2 AUTOMATIC DERIVATION OF A DUAL AMORTIZED VARIATIONAL FAMILY . In this section, we derive our main methodological contribution. We aim at obtaining posterior distributions for a generative model of pyramidal structure. For this purpose, we construct a family of variational distributions Q dual to the model. This architecture is the combination of two components. First, a Hierarchical Encoder (HE) that aggregates summary statistics from the data. Second, a set of conditional density estimators. Tensor functions We first introduce notations for tensor functions, which we define in the spirit of Magnus & Neudecker (1999). We leverage tensor functions throughout our entire architecture to reduce its parameterization. Consider a function $f : F \to G$, and a tensor $T_F \in F^B$ of shape B.
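As an illustration of how hierarchies follow from plate membership in a pyramidal model, here is a hypothetical helper (not from the paper), applied to the Fig. 1 example with P = 1. The `min`-over-plate-indices rule is our own reading of the nested-plate convention above:

```python
# Hedged sketch: in a single stack of plates P0, ..., PP, an RV belonging
# to plates {P_h, ..., P_P} sits at hierarchy h; an RV belonging to no
# plate sits at the top of the pyramid, hierarchy P + 1.

def hierarchy(plate_indices, P):
    """plate_indices: set of plate indices the RV belongs to."""
    return min(plate_indices) if plate_indices else P + 1

P = 1  # Fig. 1 has two plates, P0 and P1
print(hierarchy({0, 1}, P))  # X (observed): 0
print(hierarchy({1}, P))     # Γ: 1
print(hierarchy(set(), P))   # λ and κ: 2
```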
We denote the tensor $T_G \in G^B$ resulting from the element-wise application of f over $T_F$ as $T_G = \overrightarrow{f}^{(B)}(T_F)$ (in reference to the programming notion of vectorization in Harris et al. (2020)). In Fig. 1, $\overrightarrow{ST}_0^{(B_1)}$ and $\overrightarrow{l_\gamma \circ F_\gamma}^{(B_1)}$ are examples of tensor functions. At multiple points in our architecture, we translate the repeated structure in the HBM induced by plates into the repeated usage of functions across plates. Hierarchical Encoder For our encoder, our goal is to learn a function HE that takes as input the observed data X and successively exploits the permutation invariance across the plates $P_0, \ldots, P_P$. In doing so, HE produces encodings E at different hierarchy levels. Through those encodings, our goal is to learn summary statistics from the observed data that will condition our amortized inference. For instance, in Fig. 1, the application of HE over X produces the encodings $E_1$ and $E_2$. To build HE, we need at multiple hierarchies to collect summary statistics across i.i.d. samples from a common distribution. To this end we leverage SetTransformers (Lee et al., 2019): an attention-based, permutation-invariant architecture. We use SetTransformers to derive encodings across a given plate, repeating their usage for all larger-rank plates. We cast the observed data X as the encoding $E_0$. Then, recursively for every hierarchy $h = 1 \ldots P + 1$, we define the encoding $E_h$ as the application to the encoding $E_{h-1}$ of the tensor function corresponding to the set transformer $ST_{h-1}$. $HE(X)$ then corresponds to the set of encodings $\{E_1, \ldots, E_{P+1}\}$ obtained from the successive application of $\{ST_h\}_{h=0, \ldots, P}$. If we denote the batch shape $B_h = \mathrm{Card}(P_h) \times \ldots \times \mathrm{Card}(P_P)$:
$$E_h = \overrightarrow{ST}_{h-1}^{(B_h)}(E_{h-1}), \qquad HE(X) = \{E_1, \ldots, E_{P+1}\} \qquad (1)$$
In collecting summary statistics across the i.i.d. samples in plate $P_{h-1}$, we decrease the order of the encoding tensor $E_{h-1}$.
We repeat this operation in parallel on every plate of larger rank than the contracted plate. We consequently produce an encoding tensor $E_h$ with the batch shape $B_h$, which is the batch shape of every RV of hierarchy h. In that line, successively summarizing the plates $P_0, \ldots, P_P$ of increasing rank results in encoding tensors $E_1, \ldots, E_{P+1}$ of decreasing order. In Fig. 1, there are 2 plates $P_0$ and $P_1$, hence 2 encodings $E_1 = \overrightarrow{ST}_0^{(B_1)}(X)$ and $E_2 = ST_1(E_1)$. $E_1$ is an order-2 tensor: it has a batch shape of $B_1 = \mathrm{Card}(P_1)$ (similar to Γ), whereas $E_2$ is an order-1 tensor. We can decompose $E_1 = [e_{11}, e_{21}] = [ST_0([X_{1,1}, X_{1,2}]), ST_0([X_{2,1}, X_{2,2}])]$. Conditional density estimators We now use the encodings E, which gather hierarchical summary statistics on the data X, to condition the inference on the parameters θ. The encodings $\{E_h\}_{h=1 \ldots P+1}$ respectively condition the density estimators for the posterior distribution of the parameters sharing their hierarchy, $\{\{\theta_i : \mathrm{Hier}(\theta_i) = h\}\}_{h=1 \ldots P+1}$. Consider a latent RV $\theta_i$ of hierarchy $h_i = \mathrm{Hier}(\theta_i)$. Due to the plate structure of the graph, $\theta_i$ can be decomposed into a batch of shape $B_{h_i} = \mathrm{Card}(P_{h_i}) \times \ldots \times \mathrm{Card}(P_P)$ of multiple similar, conditionally independent RVs of individual size $S_{\theta_i}$. This decomposition is akin to the grounding of the considered graph template (Koller & Friedman, 2009). A conditional density estimator is a 2-step diffeomorphism from a latent space onto the event space in which the RV $\theta_i$ lives. We initially parameterize every variational density as a standard normal distribution in the latent space $\mathbb{R}^{S_{\theta_i}}$. First, this latent distribution is reparameterized by a conditional normalizing flow $F_i$ (Rezende & Mohamed, 2016; Papamakarios et al., 2019a) into a distribution of more complex density in the space $\mathbb{R}^{S_{\theta_i}}$. The flow $F_i$ is a diffeomorphism in the space $\mathbb{R}^{S_{\theta_i}}$ conditioned on the encoding $E_{h_i}$.
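A minimal numerical sketch of the encoder recursion in Eq. (1), with a plain mean over the contracted plate axis standing in for the learned SetTransformer (the shapes mirror Fig. 1; the feature size d = 3 is a hypothetical choice, not from the paper):

```python
import numpy as np

# Hedged sketch of the Hierarchical Encoder: E0 = X has batch shape
# Card(P1) x Card(P0); each step contracts one plate axis with a
# permutation-invariant function (here a mean, standing in for ST_{h-1}).

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 2, 3))   # Card(P1)=2 groups, Card(P0)=2 samples, d=3

def st(e):
    return e.mean(axis=-2)       # contracts the innermost remaining plate axis

E1 = st(X)    # shape (2, 3): one encoding per group, like [e11, e21]
E2 = st(E1)   # shape (3,):   one encoding for the whole population
print(E1.shape, E2.shape)  # (2, 3) (3,)

# Permutation invariance across the contracted plate:
assert np.allclose(st(X[:, ::-1, :]), E1)
```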
Second, the obtained latent distribution is projected onto the event space in which $\theta_i$ lives by the application of a link-function diffeomorphism $l_i$. For instance, if $\theta_i$ is a variance parameter, the link function would map $\mathbb{R}$ onto $\mathbb{R}^{+*}$ (e.g., $l_i = \exp$). The usage of $F_i$ and the link function $l_i$ is repeated on plates of larger rank than the hierarchy $h_i$ of $\theta_i$. The resulting conditional density estimator $q_i$ for the posterior distribution $p(\theta_i | X)$ is given by:
$$u_i \sim \mathcal{N}\left(\vec{0}_{B_{h_i} \times S_{\theta_i}}, I_{B_{h_i} \times S_{\theta_i}}\right), \qquad \tilde{\theta}_i = \overrightarrow{l_i \circ F_i}^{(B_{h_i})}(u_i ; E_{h_i}) \sim q_i(\theta_i ; E_{h_i}) \qquad (2)$$
In Fig. 1, Γ = [γ1, γ2] is associated with the diffeomorphism $\overrightarrow{l_\gamma \circ F_\gamma}^{(B_1)}$. This diffeomorphism is conditioned by the encoding $E_1$. Both Γ and $E_1$ share the batch shape $B_1 = \mathrm{Card}(P_1)$. Decomposing the encoding $E_1 = [e_{11}, e_{21}]$, $e_{11}$ is used to condition the inference on γ1, and $e_{21}$ for γ2. λ is associated with the diffeomorphism $l_\lambda \circ F_\lambda$, and κ with $l_\kappa \circ F_\kappa$, both conditioned by $E_2$. Parsimonious parameterization Our approach produces a parameterization effectively independent of plate cardinalities. Consider the latent RVs $\theta_1, \ldots, \theta_L$. Normalizing-flow-based density estimators have a parameterization quadratic with respect to the size of the space they are applied to (e.g., Papamakarios et al., 2018). Applying a single normalizing flow to the total event space of $\theta_1, \ldots, \theta_L$ would thus result in $O\left(\left[\sum_{i=1}^{L} S_{\theta_i} \prod_{p=h_i}^{P} \mathrm{Card}(P_p)\right]^2\right)$ weights. But since we instead apply multiple flows on the spaces of size $S_{\theta_i}$ and repeat their usage across all plates $P_{h_i}, \ldots, P_P$, we effectively reduce this parameterization to:
$$\#\mathrm{weights}_{\mathrm{ADAVI}} = O\left(\sum_{i=1}^{L} S_{\theta_i}^2\right) \qquad (3)$$
As a consequence, our method can be applied to HBMs featuring large plate cardinalities without scaling up its parameterization to impractical ranges, preventing a computer-memory blow-up. | This work introduces ADAVI, an approximate inference algorithm for hierarchical Bayesian models (HBMs).
The approach is similar to NPE from simulation-based inference but exploits the hierarchical structure of the forward model to generate an efficient variational family automatically. Experiments demonstrate the applicability of the method on HBMs of increasing complexity, including a challenging neuroimaging model. Results indicate good performance against other amortized methods. | SP:ac514d656bc5dfacab04f803ffa6ce4224921eb5 |
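A back-of-the-envelope check of the parameterization argument in Eq. (3), comparing one flow over the full ground event space against per-RV flows reused across plates. The sizes below are hypothetical, loosely inspired by a large population study:

```python
import math

# Hedged sketch: (event size S_i, plate cardinalities Card(P_hi)..Card(P_P))
# for each latent RV; all numbers are made up for illustration.
latents = [(50, [100, 10]),  # e.g. 100 subjects x 10 measures
           (50, [100]),      # one RV per subject
           (50, [])]         # a single population-level RV

# One flow over the total ground event space: quadratic in its size.
single_flow = sum(S * math.prod(cards) for S, cards in latents) ** 2

# ADAVI-style: one flow per template RV, quadratic in S_i only,
# reused across plates.
adavi = sum(S ** 2 for S, cards in latents)

print(f"{single_flow:,} vs {adavi:,}")  # 3,030,502,500 vs 7,500
```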
ADAVI: Automatic Dual Amortized Variational Inference Applied To Pyramidal Bayesian Models | 1 INTRODUCTION . Inference aims at obtaining the posterior distribution p(θ|X) of latent model parameters θ given the observed data X. In the context of Hierarchical Bayesian Models (HBMs), p(θ|X) usually has no known analytical form, and can have a complex shape, different from the prior's (Gelman et al., 2004). Modern normalizing-flow-based techniques, which are universal density estimators, can overcome this difficulty (Papamakarios et al., 2019a; Ambrogioni et al., 2021). Yet, in setups such as neuroimaging, featuring HBMs representing large population studies (Kong et al., 2018; Bonkhoff et al., 2021), the dimensionality of θ can exceed a million. This high dimensionality hinders the usage of normalizing flows, since their parameterization usually scales quadratically with the size of the parameter space (e.g., Dinh et al., 2017; Papamakarios et al., 2018; Grathwohl et al., 2018). Population studies with high-dimensional features are therefore inaccessible to off-the-shelf flow-based techniques and their superior expressivity. This can in turn lead to complex, problem-specific derivations: for instance, Kong et al. (2018) rely on a manually derived Expectation Maximization (EM) technique. Such analytical complexity constitutes a strong barrier to entry, and limits the wide and fruitful usage of Bayesian modelling in fields such as neuroimaging. Our main aim is to meet that experimental need: how can we derive a technique both automatic and efficient in the context of very large, hierarchically organised data? Approximate inference features a large corpus of methods, including Monte Carlo methods (Koller & Friedman, 2009) and Variational Auto-Encoders (Zhang et al., 2019). We take particular inspiration from the field of Variational Inference (VI) (Blei et al.
, 2017), which is deemed the most suitable for large parameter spaces. In VI, the experimenter posits a variational family Q so as to approximate q(θ) ≈ p(θ|X). In practice, deriving an expressive yet computationally attractive variational family can be challenging (Blei et al., 2017). This triggered a trend towards the derivation of automatic VI techniques (Kucukelbir et al., 2016; Ranganath et al., 2013; Ambrogioni et al., 2021). We follow that logic and present a methodology that automatically derives a variational family Q. In Fig. 1, from the HBM on the left we automatically derive a neural network architecture on the right. We aim at deriving our variational family Q in the context of amortized inference (Rezende & Mohamed, 2016; Cranmer et al., 2020). Amortization is usually obtained at the cost of an amortization gap from the true posterior, which accumulates on top of an approximation gap dependent on the expressivity of the variational family Q (Cremer et al., 2018). However, once an initial training overhead has been "paid for", amortization means that our technique can be applied to any number of data points to perform inference in a few seconds. Due to the very large parameter spaces presented above, our target applications are not amenable to the generic flow-based techniques described in Cranmer et al. (2020) or Ambrogioni et al. (2021). We therefore differentiate ourselves by exploiting the invariance of the problem not only through the design of an adapted encoder, but down to the very architecture of our density estimator. Specifically, we focus on the inference problem for Hierarchical Bayesian Models (HBMs) (Gelman et al., 2004; Rodrigues et al., 2021). The idea of conditioning the architecture of a density estimator on an analysis of the dependency structure of an HBM has been studied in (Wehenkel & Louppe, 2020; Weilbach et al.
), in the form of masking a single normalizing flow. With Ambrogioni et al. (2021), we instead share the idea of combining multiple separate flows. More generally, our static analysis of a generative model can be associated with structured VI (Hoffman & Blei, 2014; Ambrogioni et al., 2021). Yet our working principles are rather orthogonal: structured VI usually aims at exploiting model structure to augment the expressivity of a variational family, whereas we aim at reducing its parameterization. Our objective is therefore to derive an automatic methodology that takes as input a generative HBM and generates a dual variational family able to perform amortized parameter inference. This variational family exploits the exchangeability in the HBM to reduce its parameterization by orders of magnitude compared to generic methods (Papamakarios et al., 2019b; Greenberg et al., 2019; Ambrogioni et al., 2021). Consequently, our method can be applied in the context of large, pyramidally structured data, a challenging setup inaccessible to existing flow-based methods and their superior expressivity. We apply our method to such a large pyramidal setup in the context of neuroimaging (section 3.5), but demonstrate the benefit of our method beyond that scope. Our general scheme is visible in Fig. 1, a figure that we will explain throughout the course of the next section. 2 METHODS . 2.1 PYRAMIDAL BAYESIAN MODELS . We are interested in experimental setups modelled using plate-enriched Hierarchical Bayesian Models (HBMs) (Kong et al., 2018; Bonkhoff et al., 2021). These models feature independent sampling from a common conditional distribution at multiple levels, translating the graphical notion of plates (Gilks et al., 1994). This nested structure, combined with large measurements, such as the ones in fMRI, can result in massive latent parameter spaces. For instance, the population study in Kong et al.
(2018) features multiple subjects, with multiple measures per subject and multiple brain vertices per measure, for a latent space of around 0.4 million parameters. Our method aims at performing inference in the context of those large plate-enriched HBMs. Such HBMs can be represented with Directed Acyclic Graph (DAG) templates (Koller & Friedman, 2009) with vertices, corresponding to RVs, $\{\theta_i\}_{i=0 \ldots L}$ and plates $\{P_p\}_{p=0 \ldots P}$. We denote as Card(P) the (fixed) cardinality of the plate P, i.e., the number of independent draws from a common conditional distribution it corresponds to. In a template DAG, a given RV θ can belong to multiple plates $P_h, \ldots, P_P$. When grounding the template DAG into a ground graph, instantiating the repeated structure symbolized by the plates P, θ corresponds to multiple RVs of similar parametric form $\{\theta_{i_h, \ldots, i_P}\}$, with $i_h = 1 \ldots \mathrm{Card}(P_h), \ldots, i_P = 1 \ldots \mathrm{Card}(P_P)$. This equivalence is visible on the left of Fig. 1, where the template RV Γ corresponds to the ground RVs [γ1, γ2]. We wish to exploit this plate-induced exchangeability. We define the sub-class of models we specialize upon as pyramidal models, which are plate-enriched DAG templates with the following two differentiating properties. First, we consider a single stack of plates $P_0, \ldots, P_P$. This means that any RV θ belonging to plate $P_p$ also belongs to plates $\{P_q\}_{q > p}$. We thus do not treat in this work the case of colliding plates (Koller & Friedman, 2009). Second, we consider a single observed RV $\theta_0$, with observed value X, belonging to the plate $P_0$ (with no other, latent, RV belonging to $P_0$). The obtained graph follows a typical pyramidal structure, with the observed RV at the base of the pyramid, as seen in Fig. 1. This figure features 2 plates $P_0$ and $P_1$; the observed RV is X, at the base of the pyramid, and the latent RVs are Γ, λ and κ at upper levels of the pyramid.
Pyramidal HBMs delineate models that typically arise as part of population studies, for instance in neuroimaging, featuring a nested group structure and data observed at the subject level only (Kong et al., 2018; Bonkhoff et al., 2021). The fact that we consider a single pyramid of plates allows us to define the hierarchy of an RV $\theta_i$, denoted $\mathrm{Hier}(\theta_i)$. An RV's hierarchy is the level of the pyramid it is placed at. Due to our pyramidal structure, the observed RV is systematically at hierarchy 0 and latent RVs at hierarchies > 0. For instance, in the example in Fig. 1 the observed RV X is at hierarchy 0, Γ is at hierarchy 1, and both λ and κ are at hierarchy 2. Our methodology is designed to process generative models whose dependency structure follows a pyramidal graph, and to scale favorably as the plate cardinalities in such models grow. Given the observed data X, we wish to obtain the posterior density of the latent parameters $\theta_1, \ldots, \theta_L$, exploiting the exchangeability induced by the plates $P_0, \ldots, P_P$. 2.2 AUTOMATIC DERIVATION OF A DUAL AMORTIZED VARIATIONAL FAMILY . In this section, we derive our main methodological contribution. We aim at obtaining posterior distributions for a generative model of pyramidal structure. For this purpose, we construct a family of variational distributions Q dual to the model. This architecture is the combination of two components. First, a Hierarchical Encoder (HE) that aggregates summary statistics from the data. Second, a set of conditional density estimators. Tensor functions We first introduce notations for tensor functions, which we define in the spirit of Magnus & Neudecker (1999). We leverage tensor functions throughout our entire architecture to reduce its parameterization. Consider a function $f : F \to G$, and a tensor $T_F \in F^B$ of shape B.
We denote the tensor $T_G \in G^B$ resulting from the element-wise application of f over $T_F$ as $T_G = \overrightarrow{f}^{(B)}(T_F)$ (in reference to the programming notion of vectorization in Harris et al. (2020)). In Fig. 1, $\overrightarrow{ST}_0^{(B_1)}$ and $\overrightarrow{l_\gamma \circ F_\gamma}^{(B_1)}$ are examples of tensor functions. At multiple points in our architecture, we translate the repeated structure in the HBM induced by plates into the repeated usage of functions across plates. Hierarchical Encoder For our encoder, our goal is to learn a function HE that takes as input the observed data X and successively exploits the permutation invariance across the plates $P_0, \ldots, P_P$. In doing so, HE produces encodings E at different hierarchy levels. Through those encodings, our goal is to learn summary statistics from the observed data that will condition our amortized inference. For instance, in Fig. 1, the application of HE over X produces the encodings $E_1$ and $E_2$. To build HE, we need at multiple hierarchies to collect summary statistics across i.i.d. samples from a common distribution. To this end we leverage SetTransformers (Lee et al., 2019): an attention-based, permutation-invariant architecture. We use SetTransformers to derive encodings across a given plate, repeating their usage for all larger-rank plates. We cast the observed data X as the encoding $E_0$. Then, recursively for every hierarchy $h = 1 \ldots P + 1$, we define the encoding $E_h$ as the application to the encoding $E_{h-1}$ of the tensor function corresponding to the set transformer $ST_{h-1}$. $HE(X)$ then corresponds to the set of encodings $\{E_1, \ldots, E_{P+1}\}$ obtained from the successive application of $\{ST_h\}_{h=0, \ldots, P}$. If we denote the batch shape $B_h = \mathrm{Card}(P_h) \times \ldots \times \mathrm{Card}(P_P)$:
$$E_h = \overrightarrow{ST}_{h-1}^{(B_h)}(E_{h-1}), \qquad HE(X) = \{E_1, \ldots, E_{P+1}\} \qquad (1)$$
In collecting summary statistics across the i.i.d. samples in plate $P_{h-1}$, we decrease the order of the encoding tensor $E_{h-1}$.
We repeat this operation in parallel on every plate of larger rank than the contracted plate. We consequently produce an encoding tensor $E_h$ with the batch shape $B_h$, which is the batch shape of every RV of hierarchy h. In that line, successively summarizing the plates $P_0, \ldots, P_P$ of increasing rank results in encoding tensors $E_1, \ldots, E_{P+1}$ of decreasing order. In Fig. 1, there are 2 plates $P_0$ and $P_1$, hence 2 encodings $E_1 = \overrightarrow{ST}_0^{(B_1)}(X)$ and $E_2 = ST_1(E_1)$. $E_1$ is an order-2 tensor: it has a batch shape of $B_1 = \mathrm{Card}(P_1)$ (similar to Γ), whereas $E_2$ is an order-1 tensor. We can decompose $E_1 = [e_{11}, e_{21}] = [ST_0([X_{1,1}, X_{1,2}]), ST_0([X_{2,1}, X_{2,2}])]$. Conditional density estimators We now use the encodings E, which gather hierarchical summary statistics on the data X, to condition the inference on the parameters θ. The encodings $\{E_h\}_{h=1 \ldots P+1}$ respectively condition the density estimators for the posterior distribution of the parameters sharing their hierarchy, $\{\{\theta_i : \mathrm{Hier}(\theta_i) = h\}\}_{h=1 \ldots P+1}$. Consider a latent RV $\theta_i$ of hierarchy $h_i = \mathrm{Hier}(\theta_i)$. Due to the plate structure of the graph, $\theta_i$ can be decomposed into a batch of shape $B_{h_i} = \mathrm{Card}(P_{h_i}) \times \ldots \times \mathrm{Card}(P_P)$ of multiple similar, conditionally independent RVs of individual size $S_{\theta_i}$. This decomposition is akin to the grounding of the considered graph template (Koller & Friedman, 2009). A conditional density estimator is a 2-step diffeomorphism from a latent space onto the event space in which the RV $\theta_i$ lives. We initially parameterize every variational density as a standard normal distribution in the latent space $\mathbb{R}^{S_{\theta_i}}$. First, this latent distribution is reparameterized by a conditional normalizing flow $F_i$ (Rezende & Mohamed, 2016; Papamakarios et al., 2019a) into a distribution of more complex density in the space $\mathbb{R}^{S_{\theta_i}}$. The flow $F_i$ is a diffeomorphism in the space $\mathbb{R}^{S_{\theta_i}}$ conditioned on the encoding $E_{h_i}$.
Second , the obtained latent distribution is projected onto the event space in which θi lives by the application of a link function diffeomorphism li . For instance , if θi is a variance parameter , the link function would map R onto R+∗ ( li = Exp as an example ) . The usage of Fi and the link function li is repeated on plates of larger rank than the hierarchy hi of θi . The resulting conditional density estimator qi for the posterior distribution p ( θi|X ) is given by : $u_i \sim \mathcal{N}( \vec{0}_{B_{h_i} \times S_{\theta_i}} , I_{B_{h_i} \times S_{\theta_i}} )$ , $\tilde{\theta}_i = \vec{l_i \circ F_i}^{(B_{h_i})}( u_i ; E_{h_i} ) \sim q_i( \theta_i ; E_{h_i} )$ ( 2 ) In Fig . 1 , Γ = [ γ1 , γ2 ] is associated to the diffeomorphism $\vec{l_\gamma \circ F_\gamma}^{(B_1)}$ . This diffeomorphism is conditioned by the encoding E1 . Both Γ and E1 share the batch shape B1 = Card ( P1 ) . Decomposing the encoding E1 = [ e11 , e21 ] , e11 is used to condition the inference on γ1 , and e21 for γ2 . λ is associated to the diffeomorphism lλ ◦ Fλ , and κ to lκ ◦ Fκ , both conditioned by E2 . Parsimonious parameterization Our approach produces a parameterization effectively independent from plate cardinalities . Consider the latent RVs θ1 , . . . , θL . Normalizing flow-based density estimators have a parameterization quadratic with respect to the size of the space they are applied to ( e.g . Papamakarios et al. , 2018 ) . Applying a single normalizing flow to the total event space of θ1 , . . . , θL would thus result in $O\big( \big[ \sum_{i=1}^{L} S_{\theta_i} \prod_{p=h_i}^{P} \mathrm{Card}( P_p ) \big]^2 \big)$ weights . But since we instead apply multiple flows on the spaces of size Sθi and repeat their usage across all plates Phi , . . . , PP , we effectively reduce this parameterization to : $\#\mathrm{weights}_{\mathrm{ADAVI}} = O\big( \sum_{i=1}^{L} S_{\theta_i}^2 \big)$ ( 3 ) As a consequence , our method can be applied to HBMs featuring large plate cardinalities without scaling up its parameterization to impractical ranges , preventing a computer memory blow-up . | The paper tackles approximate inference for hierarchical Bayesian models with fully nested structure.
The specific approach taken is variational inference with a q-distribution that iteratively applies conditional normalizing flows to derive a hierarchical representation, and factorizes in a manner parallel to the generative model by reusing flow parameters, thus having a number of parameters that does not grow with cardinality of each plate. The benefits of the model are illustrated in a few synthetic experiments as well as in application to a human neuroimaging dataset. | SP:ac514d656bc5dfacab04f803ffa6ce4224921eb5 |
ADAVI: Automatic Dual Amortized Variational Inference Applied To Pyramidal Bayesian Models | 1 INTRODUCTION . Inference aims at obtaining the posterior distribution p ( θ|X ) of latent model parameters θ given the observed data X . In the context of Hierarchical Bayesian Models ( HBM ) , p ( θ|X ) usually has no known analytical form , and can be of a complex shape -different from the prior ’ s ( Gelman et al. , 2004 ) . Modern normalizing-flows based techniques -universal density estimators- can overcome this difficulty ( Papamakarios et al. , 2019a ; Ambrogioni et al. , 2021 ) . Yet , in setups such as neuroimaging , featuring HBMs representing large population studies ( Kong et al. , 2018 ; Bonkhoff et al. , 2021 ) , the dimensionality of θ can go over the million . This high dimensionality hinders the usage of normalizing flows , since their parameterization usually scales quadratically with the size of the parameter space ( e.g . Dinh et al. , 2017 ; Papamakarios et al. , 2018 ; Grathwohl et al. , 2018 ) . Population studies with large dimensional features are therefore inaccessible to off-the-shelf flow-based techniques and their superior expressivity . This can in turn lead to complex , problem-specific derivations : for instance Kong et al . ( 2018 ) rely on a manually-derived Expectation Maximization ( EM ) technique . Such an analytical complexity constitutes a strong barrier to entry , and limits the wide and fruitful usage of Bayesian modelling in fields such as neuroimaging . Our main aim is to meet that experimental need : how can we derive a technique both automatic and efficient in the context of very large , hierarchically-organised data ? Approximate inference features a large corpus of methods including Monte Carlo methods ( Koller & Friedman , 2009 ) and Variational Auto Encoders ( Zhang et al. , 2019 ) . We take particular inspiration from the field of Variational Inference ( VI ) ( Blei et al. 
, 2017 ) , deemed to be most adapted to large parameter spaces . In VI , the experimenter posits a variational family Q so as to approximate q ( θ ) ≈ p ( θ|X ) . In practice , deriving an expressive , yet computationally attractive variational family can be challenging ( Blei et al. , 2017 ) . This triggered a trend towards the derivation of automatic VI techniques ( Kucukelbir et al. , 2016 ; Ranganath et al. , 2013 ; Ambrogioni et al. , 2021 ) . We follow that logic and present a methodology that automatically derives a variational family Q . In Fig . 1 , from the HBM on the left we derive automatically a neural network architecture on the right . We aim at deriving our variational family Q in the context of amortized inference ( Rezende & Mohamed , 2016 ; Cranmer et al. , 2020 ) . Amortization is usually obtained at the cost of an amortization gap from the true posterior , that accumulates on top of an approximation gap dependent on the expressivity of the variational family Q ( Cremer et al. , 2018 ) . However , once an initial training overhead has been “ paid for ” , amortization means that our technique can be applied to any number of data points to perform inference in a few seconds . Due to the very large parameter spaces presented above , our target applications aren ’ t amenable to the generic flow-based techniques described in Cranmer et al . ( 2020 ) or Ambrogioni et al . ( 2021 ) . We therefore differentiate ourselves by exploiting the invariance of the problem not only through the design of an adapted encoder , but down to the very architecture of our density estimator . Specifically , we focus on the inference problem for Hierarchical Bayesian Models ( HBMs ) ( Gelman et al. , 2004 ; Rodrigues et al. , 2021 ) . The idea to condition the architecture of a density estimator by an analysis of the dependency structure of an HBM has been studied in ( Wehenkel & Louppe , 2020 ; Weilbach et al .
) , in the form of the masking of a single normalizing flow . With Ambrogioni et al . ( 2021 ) , we instead share the idea to combine multiple separate flows . More generally , our static analysis of a generative model can be associated with structured VI ( Hoffman & Blei , 2014 ; Ambrogioni et al. , 2021 ) . Yet our working principles are rather orthogonal : structured VI usually aims at exploiting model structure to augment the expressivity of a variational family , whereas we aim at reducing its parameterization . Our objective is therefore to derive an automatic methodology that takes as input a generative HBM and generates a dual variational family able to perform amortized parameter inference . This variational family exploits the exchangeability in the HBM to reduce its parameterization by orders of magnitude compared to generic methods ( Papamakarios et al. , 2019b ; Greenberg et al. , 2019 ; Ambrogioni et al. , 2021 ) . Consequently , our method can be applied in the context of large , pyramidally-structured data , a challenging setup inaccessible to existing flow-based methods and their superior expressivity . We apply our method to such a large pyramidal setup in the context of neuroimaging ( section 3.5 ) , but demonstrate the benefit of our method beyond that scope . Our general scheme is visible in Fig . 1 , a figure that we will explain throughout the course of the next section . 2 METHODS . 2.1 PYRAMIDAL BAYESIAN MODELS . We are interested in experimental setups modelled using plate-enriched Hierarchical Bayesian Models ( HBMs ) ( Kong et al. , 2018 ; Bonkhoff et al. , 2021 ) . These models feature independent sampling from a common conditional distribution at multiple levels , translating the graphical notion of plates ( Gilks et al. , 1994 ) . This nested structure , combined with large measurements -such as the ones in fMRI- can result in massive latent parameter spaces . For instance the population study in Kong et al .
( 2018 ) features multiple subjects , with multiple measures per subject , and multiple brain vertices per measure , for a latent space of around 0.4 million parameters . Our method aims at performing inference in the context of those large plate-enriched HBMs . Such HBMs can be represented with Directed Acyclic Graphs ( DAG ) templates ( Koller & Friedman , 2009 ) with vertices -corresponding to RVs- { θi } i=0 ... L and plates { Pp } p=0 ... P . We denote as Card ( P ) the -fixed- cardinality of the plate P , i.e . the number of independent draws from a common conditional distribution it corresponds to . In a template DAG , a given RV θ can belong to multiple plates Ph , . . .PP . When grounding the template DAG into a ground graph -instantiating the repeated structure symbolized by the plates P- θ would correspond to multiple RVs of similar parametric form { θih , ... , iP } , with ih = 1 . . .Card ( Ph ) , . . . , iP = 1 . . .Card ( PP ) . This equivalence visible on the left on Fig . 1 , where the template RV Γ corresponds to the ground RVs [ γ1 , γ2 ] . We wish to exploit this plate-induced exchangeability . We define the sub-class of models we specialize upon as pyramidal models , which are plate-enriched DAG templates with the 2 following differentiating properties . First , we consider a single stack of the plates P0 , . . . , PP . This means that any RV θ belonging to plate Pp also belongs to plates { Pq } q > p . We thus don ’ t treat in this work the case of colliding plates ( Koller & Friedman , 2009 ) . Second , we consider a single observed RV θ0 , with observed value X , belonging to the plate P0 ( with no other -latent- RV belonging to P0 ) . The obtained graph follows a typical pyramidal structure , with the observed RV at the basis of the pyramid , as seen in Fig . 1 . This figure features 2 plates P0 and P1 , the observed RV is X , at the basis of the pyramid , and latent RVs are Γ , λ and κ at upper levels of the pyramid . 
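A pyramidal HBM with the structure of Fig . 1 can be simulated in a few lines; all names and numbers below (two plates of cardinality 2, event size 3, Gaussian conditionals) are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy pyramidal HBM with plates P1 (groups) and P0 (samples per group):
# lambda, kappa -> Gamma (hierarchy 1) -> X (hierarchy 0, observed).
card_P1, card_P0, D = 2, 2, 3           # plate cardinalities and event size

lam = rng.normal(0.0, 1.0, size=D)      # hierarchy-2 population mean
kappa = 1.0                             # hierarchy-2 scale
gamma = rng.normal(lam, kappa, size=(card_P1, D))                   # hierarchy-1, batch shape (2,)
X = rng.normal(gamma[:, None, :], 0.1, size=(card_P1, card_P0, D))  # base of the pyramid

print(X.shape)  # (2, 2, 3): one order-3 observed tensor
```

The single observed RV X sits at the basis of the pyramid, and each latent RV carries a batch shape given by the plates above its hierarchy.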
Pyramidal HBMs delineate models that typically arise as part of population studies -for instance in neuroimaging- featuring a nested group structure and data observed at the subject level only ( Kong et al. , 2018 ; Bonkhoff et al. , 2021 ) . The fact that we consider a single pyramid of plates allows us to define the hierarchy of an RV θi denoted Hier ( θi ) . An RV ’ s hierarchy is the level of the pyramid it is placed at . Due to our pyramidal structure , the observed RV will systematically be at hierarchy 0 and latent RVs at hierarchies > 0 . For instance , in the example in Fig . 1 the observed RV X is at hierarchy 0 , Γ is at hierarchy 1 and both λ and κ are at hierarchy 2 . Our methodology is designed to process generative models whose dependency structure follows a pyramidal graph , and to scale favorably when the plate cardinality in such models augments . Given the observed data X , we wish to obtain the posterior density for latent parameters θ1 , . . . , θL , exploiting the exchangeability induced by the plates P0 , . . . , PP . 2.2 AUTOMATIC DERIVATION OF A DUAL AMORTIZED VARIATIONAL FAMILY . In this section , we derive our main methodological contribution . We aim at obtaining posterior distributions for a generative model of pyramidal structure . For this purpose , we construct a family of variational distributions Q dual to the model . This architecture consists in the combination of 2 items . First , a Hierarchical Encoder ( HE ) that aggregates summary statistics from the data . Second , a set of conditional density estimators . Tensor functions We first introduce the notations for tensor functions which we define in the spirit of Magnus & Neudecker ( 1999 ) . We leverage tensor functions throughout our entire architecture to reduce its parameterization . Consider a function f : F → G , and a tensor TF ∈ FB of shape B . 
We denote the tensor TG ∈ GB resulting from the element-wise application of f over TF as $T_G = \vec{f}^{(B)}(T_F)$ ( in reference to the programming notion of vectorization in Harris et al . ( 2020 ) ) . In Fig . 1 , $\vec{ST}^{(B_1)}_0$ and $\vec{l_\gamma \circ F_\gamma}^{(B_1)}$ are examples of tensor functions . At multiple points in our architecture , we will translate the repeated structure in the HBM induced by plates into the repeated usage of functions across plates . Hierarchical Encoder For our encoder , our goal is to learn a function HE that takes as input the observed data X and successively exploits the permutation invariance across plates P0 , . . . , PP . In doing so , HE produces encodings E at different hierarchy levels . Through those encodings , our goal is to learn summary statistics from the observed data , that will condition our amortized inference . For instance in Fig . 1 , the application of HE over X produces the encodings E1 and E2 . To build HE , we need at multiple hierarchies to collect summary statistics across i.i.d samples from a common distribution . To this end we leverage SetTransformers ( Lee et al. , 2019 ) : an attention-based , permutation-invariant architecture . We use SetTransformers to derive encodings across a given plate , repeating their usage for all larger-rank plates . We cast the observed data X as the encoding E0 . Then , recursively for every hierarchy h = 1 . . . P + 1 , we define the encoding Eh as the application to the encoding Eh−1 of the tensor function corresponding to the set transformer STh−1 . HE ( X ) then corresponds to the set of encodings { E1 , . . . , EP+1 } obtained from the successive application of { STh } h=0 , ... , P . If we denote the batch shape Bh = Card ( Ph ) × . . . × Card ( PP ) : $E_h = \vec{ST}^{(B_h)}_{h-1}(E_{h-1})$ , $HE(X) = \{ E_1 , \ldots , E_{P+1} \}$ ( 1 ) In collecting summary statistics across the i.i.d . samples in plate Ph−1 , we decrease the order of the encoding tensor Eh−1 .
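The encoder recursion of Eq . ( 1 ) can be sketched as follows, with mean pooling standing in for the attention-based SetTransformer (an assumption made purely to keep the sketch self-contained); the sketch also assumes a single trailing event axis:

```python
import numpy as np

def encode(X, P):
    """Sketch of HE: contract one plate per hierarchy level.
    Mean pooling over the innermost plate axis stands in for the
    permutation-invariant SetTransformer ST_{h-1}."""
    E, encodings = X, []
    for h in range(1, P + 2):          # hierarchies 1 .. P+1
        E = E.mean(axis=-2)            # contract the plate just above the event axis
        encodings.append(E)
    return encodings

X = np.ones((2, 2, 3))                 # Card(P1)=2, Card(P0)=2, event size 3
E1, E2 = encode(X, P=1)
print(E1.shape, E2.shape)  # (2, 3) (3,)
```

As in the text, each contraction decreases the tensor order by one, so E1 keeps the batch shape Card(P1) while E2 is a single summary vector.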
We repeat this operation in parallel on every plate of larger rank than the rank of the contracted plate . We consequently produce an encoding tensor Eh with the batch shape Bh , which is the batch shape of every RV of hierarchy h. In that line , successively summarizing plates P0 , , . . . , PP , of increasing rank results in encoding tensors E1 , . . . , EP+1 of decreasing order . In Fig . 1 , there are 2 plates P0 and P1 , hence 2 encodings E1 = −→ ST ( B1 ) 0 ( X ) and E2 = ST1 ( E1 ) . E1 is an order 2 tensor : it has a batch shape of B1 = Card ( P1 ) -similar to Γ- whereas E2 is an order 1 tensor . We can decompose E1 = [ e11 , e21 ] = [ ST0 ( [ X1,1 , X1,2 ] ) , ST0 ( [ X2,1 , X2,2 ] ) ] . Conditional density estimators We now will use the encodings E , gathering hierarchical summary statistics on the data X , to condition the inference on the parameters θ . The encodings { Eh } h=1 ... P+1 will respectively condition the density estimators for the posterior distribution of parameters sharing their hierarchy { { θi : Hier ( θi ) = h } } h=1 ... P+1 . Consider a latent RV θi of hierarchy hi = Hier ( θi ) . Due to the plate structure of the graph , θi can be decomposed in a batch of shape Bhi = Card ( Phi ) × . . . × Card ( PP ) of multiple similar , conditionally independent RVs of individual size Sθi . This decomposition is akin to the grounding of the considered graph template ( Koller & Friedman , 2009 ) . A conditional density estimator is a 2-step diffeomorphism from a latent space onto the event space in which the RV θi lives . We initially parameterize every variational density as a standard normal distribution in the latent space RSθi . First , this latent distribution is reparameterized by a conditional normalizing flowFi ( Rezende & Mohamed , 2016 ; Papamakarios et al. , 2019a ) into a distribution of more complex density in the space RSθi . The flow Fi is a diffeomorphism in the space RSθi conditioned by the encoding Ehi . 
Second , the obtained latent distribution is projected onto the event space in which θi lives by the application of a link function diffeomorphism li . For instance , if θi is a variance parameter , the link function would map R onto R+∗ ( li = Exp as an example ) . The usage of Fi and the link function li is repeated on plates of larger rank than the hierarchy hi of θi . The resulting conditional density estimator qi for the posterior distribution p ( θi|X ) is given by : $u_i \sim \mathcal{N}( \vec{0}_{B_{h_i} \times S_{\theta_i}} , I_{B_{h_i} \times S_{\theta_i}} )$ , $\tilde{\theta}_i = \vec{l_i \circ F_i}^{(B_{h_i})}( u_i ; E_{h_i} ) \sim q_i( \theta_i ; E_{h_i} )$ ( 2 ) In Fig . 1 , Γ = [ γ1 , γ2 ] is associated to the diffeomorphism $\vec{l_\gamma \circ F_\gamma}^{(B_1)}$ . This diffeomorphism is conditioned by the encoding E1 . Both Γ and E1 share the batch shape B1 = Card ( P1 ) . Decomposing the encoding E1 = [ e11 , e21 ] , e11 is used to condition the inference on γ1 , and e21 for γ2 . λ is associated to the diffeomorphism lλ ◦ Fλ , and κ to lκ ◦ Fκ , both conditioned by E2 . Parsimonious parameterization Our approach produces a parameterization effectively independent from plate cardinalities . Consider the latent RVs θ1 , . . . , θL . Normalizing flow-based density estimators have a parameterization quadratic with respect to the size of the space they are applied to ( e.g . Papamakarios et al. , 2018 ) . Applying a single normalizing flow to the total event space of θ1 , . . . , θL would thus result in $O\big( \big[ \sum_{i=1}^{L} S_{\theta_i} \prod_{p=h_i}^{P} \mathrm{Card}( P_p ) \big]^2 \big)$ weights . But since we instead apply multiple flows on the spaces of size Sθi and repeat their usage across all plates Phi , . . . , PP , we effectively reduce this parameterization to : $\#\mathrm{weights}_{\mathrm{ADAVI}} = O\big( \sum_{i=1}^{L} S_{\theta_i}^2 \big)$ ( 3 ) As a consequence , our method can be applied to HBMs featuring large plate cardinalities without scaling up its parameterization to impractical ranges , preventing a computer memory blow-up .
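A minimal sketch of the two-step conditional density estimator: an affine map stands in for the conditional normalizing flow Fi, and Exp serves as the link function li mapping R onto R+∗. The weight matrices and sizes below are illustrative assumptions, not the paper's parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)

def affine_flow(u, e, W_mu, W_sig):
    """A minimal conditional 'flow': an affine map whose shift and log-scale
    are read off the encoding e (stand-in for a conditional normalizing flow).
    For fixed e this is a diffeomorphism of R^S."""
    mu, log_sig = W_mu @ e, W_sig @ e
    return mu + np.exp(log_sig) * u

S, E = 2, 4                              # event size of theta_i, encoding size
W_mu = rng.normal(size=(S, E))
W_sig = 0.1 * rng.normal(size=(S, E))
e = rng.normal(size=E)                   # conditioning encoding E_{h_i}

u = rng.normal(size=S)                   # sample from the base N(0, I)
theta = np.exp(affine_flow(u, e, W_mu, W_sig))   # link l_i = Exp maps R onto R+*
print(theta.shape, bool((theta > 0).all()))      # (2,) True
```

Repeating this single pair (flow, link) across all batch indices of the plate is what keeps the weight count independent of plate cardinalities.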
| This manuscript proposes an amortized variational inference to produce a dual variational family for hierarchical Bayesian models (HBM) that can be represented as pyramidal Bayesian models. The presented method exploits the exchangeability of parameters in HBM to reduce its parametrization for faster inference on high-dimensional data such as neuroimaging. The authors compare empirically the proposed method with several amortized and non-amortized alternatives and on several experimental data in terms of the size of parametrization, inference time, and quality of inferences. | SP:ac514d656bc5dfacab04f803ffa6ce4224921eb5 |
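The parameterization comparison behind Eq . ( 3 ) can be made concrete with toy numbers; the event sizes and plate cardinalities below are hypothetical:

```python
# Illustrative weight counts, assuming flows whose parameterization is
# quadratic in the size of the space they act on (as stated in the text).
S = [4, 4, 8]                  # individual event sizes S_theta_i
card = [[10, 100], [100], []]  # plate cardinalities Card(P_hi) .. Card(P_P) per RV

def prod(xs):
    p = 1
    for x in xs:
        p *= x
    return p

# One flow over the full grounded event space: quadratic in the grand total.
single_flow = sum(s * prod(c) for s, c in zip(S, card)) ** 2
# One small flow per template RV, reused across plates: sum of S_i^2.
adavi = sum(s * s for s in S)
print(single_flow, adavi)  # quadratic blow-up vs. a plate-independent count
```

Scaling any plate cardinality up changes `single_flow` dramatically but leaves `adavi` untouched, which is the point of the parsimonious parameterization.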
Sequence Approximation using Feedforward Spiking Neural Network for Spatiotemporal Learning: Theory and Optimization Methods | 1 INTRODUCTION . Spiking neural network ( SNN ) ( Ponulak & Kasinski , 2011 ) uses biologically inspired neurons and synaptic connections trainable with either biological learning rules such as spike-timing-dependent plasticity ( STDP ) ( Gerstner & Kistler , 2002 ) or statistical training algorithms such as backpropagation-through-time ( BPTT ) ( Werbos , 1990 ) . The SNNs with simple leaky integrate-and-fire ( LIF ) neurons and supervised training have shown classification performance similar to deep neural networks ( DNN ) while being energy efficient ( Kim et al. , 2020b ; Wu et al. , 2019 ; Srinivasan & Roy , 2019 ) . One of SNN ’ s main differences from DNN is that the neurons are dynamical systems with internal states evolving over time , making it possible for SNN to learn temporal patterns without recurrent connections . Empirical results on feedforward-only SNN models show good performance for spatiotemporal data classification , using either supervised training ( Lee et al. , 2016 ; Kaiser et al. , 2020 ; Khoei et al. , 2020 ) , or unsupervised learning ( She et al. , 2021 ) . However , while empirical results are promising , a lack of theoretical understanding of sequence approximation using SNN makes it challenging to optimize performance on complex spatiotemporal datasets . In this work , we develop a theoretical framework for analyzing and improving sequence approximation using feedforward SNN . We view a feedforward connection of spiking neurons as a spike propagation path , hereafter referred to as a memory pathway ( She et al. , 2021 ) , that maps an input spike train with an arbitrary frequency to an output spike train with a target frequency .
Consequently , we argue that an SNN with many memory pathways can approximate a temporal sequence of spike trains with time-varying unknown frequencies using a series of pre-defined output spike trains with known frequencies . Our theoretical framework aims to first establish SNN ’ s ability to map frequencies of input/output spike trains within arbitrarily small error ; and next , derive the basic principles for adapting neuron dynamics and SNN architecture to improve sequence approximation . The theoretical derivations are then investigated with experimental studies on feedforward SNN for spatiotemporal classifications . We adopt the basic design principles for improving sequence approximation to optimize SNN architectures and study whether these networks can be trained to improve performance for spatiotemporal classification tasks . The key contributions of this work are : • We prove that any spike-sequence-to-spike-sequence mapping function on a compact domain can be approximated by feedforward SNN with one neuron per layer using skip-layer connections , which can not be achieved if no skip-layer connection is used . • We prove that using heterogeneous neurons having different dynamics and skip-layer connection increases the number of memory pathways a feedforward SNN can achieve and hence , improves SNN ’ s capability to represent arbitrary sequences . • We develop complex SNN architectures using the preceding theoretical observations and experimentally demonstrate that they can be trained with supervised BPTT and unsupervised STDP for classification on spatiotemporal data . • We design a dual-search-space option for the Bayesian optimization process to sequentially optimize network architectures and neuron dynamics of a feedforward SNN considering heterogeneity and skip-layer connection to improve learning and classification of spatiotemporal patterns .
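The dual-search-space idea in the last bullet above can be caricatured as a sequential search over a discrete architecture space and a continuous neuron-parameter space. In this sketch, a toy objective and plain random sampling stand in for actual SNN training and a Bayesian acquisition function, so every name and number below is an assumption:

```python
import random

random.seed(0)

def toy_objective(arch, params):
    """Hypothetical validation score; stands in for training an SNN with the
    given architecture (depth, skip connections) and neuron time constant."""
    depth, skips = arch
    tau = params["tau"]
    return -abs(depth - 4) - 0.5 * abs(tau - 20.0) + (1.0 if skips else 0.0)

# Dual search spaces, optimized sequentially: first the discrete architecture
# space, then the continuous neuron-dynamics space given the chosen architecture.
arch_space = [(d, s) for d in (2, 3, 4, 5) for s in (False, True)]
best_arch = max(arch_space, key=lambda a: toy_objective(a, {"tau": 20.0}))

candidates = [{"tau": random.uniform(5.0, 40.0)} for _ in range(64)]
best_params = max(candidates, key=lambda p: toy_objective(best_arch, p))

print(best_arch)   # (4, True)
```

Separating the two spaces keeps each sub-search low-dimensional, which is the stated motivation for the dual-search-space design.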
We experimentally demonstrate that our network design principles coupled with the dual-searchspace Bayesian optimization improve classification performance on DVS Gesture ( Amir et al. , 2017 ) , N-caltech ( Orchard et al. , 2015 ) , and sequential MNIST . Results show that the design principles derived using our theoretical framework for sequence approximation can improve spatiotemporal classification performance of SNN . 2 RELATED WORK . Prior theoretical approaches to analyze SNN often focus on the storage and retrieval of precise spike patterns ( Amit & Huang , 2010 ; Brea et al. , 2013 ) . There are also works that consider SNN for solving optimization problems ( Chou et al. , 2018 ; Binas et al. , 2016 ) and works that analyze the dynamics of SNN ( Zhang et al. , 2019 ; Barrett et al. , 2013 ) . Those are different topics from the approximation of spike-sequence-to-spike-sequence mappings functions . SNN that incorporates excitatory and inhibitory signal is shown for its ability to emulate sigmoidal networks ( Maass , 1997 ) and is theoretically capable of universal function approximation . Feedforward SNN with specially designed spiking neuron models ( Iannella & Back , 2001 ; Torikai et al. , 2008 ) have been demonstrated for function approximation , while for networks using LIF neurons , function approximation has been shown with only empirical results ( Farsa et al. , 2015 ) . On the other hand , the existing works that has developed efficient training process for SNN and demonstrated classification performance comparable to deep learning models , have mostly used simpler and generic LIF neuron models ( Lee et al. , 2016 ; Kaiser et al. , 2020 ; Kim et al. , 2020b ; Wu et al. , 2019 ; Sengupta et al. , 2019 ; Safa et al. , 2021 ; Han et al. , 2020 ) . 
Therefore , this paper develops the theoretical basis for function approximation using feedforward SNN with LIF neurons , and studies applications of the developed theoretical constructs in improving SNN-based spatiotemporal pattern classification . The effectiveness of heterogeneous neurons ( She et al. , 2021 ) and skip-layer connections ( Srinivasan & Roy , 2019 ; Sengupta et al. , 2019 ) in SNN has been empirically studied in the past . However , no theoretical approach has been presented to understand why such methods improve learning of spike sequences , and how to optimize an SNN ’ s architecture and parameters to effectively exploit these design constructs . It is possible to search for the optimal SNN configurations through optimization algorithms , but the large number of hyper-parameters for spiking neurons and network structure creates a high-dimensional search space that is costly and difficult to search . Bayesian optimization ( Snoek et al. , 2012 ) uses collected data points to make decisions on the next test point that could provide improvement , thus accelerating the optimization process . Prior works ( Parsa et al. , 2019 ; Kim et al. , 2020a ) have shown that SNN performance can also be effectively improved with Bayesian optimization . While those works consider a single or a few neuron parameters , the dual-search-space Bayesian optimization proposed in this work optimizes both network architecture and neuron parameters efficiently by separating the discrete search spaces from the continuous search spaces . 3 APPROXIMATION THEORY OF FEEDFORWARD SNN . 3.1 DEFINITIONS AND NOTATIONS . Definition 1 Neuron Response Rate γ For a spiking neuron n with membrane potential at vreset and input spike sequence with period tin , γ is the number of input spikes n needs to reach vth . Definition 2 Memory Pathways For a feedforward SNN with m layers , a memory pathway is defined as a spike propagation path from input to the output layer .
Two memory pathways are considered distinct if the set of neurons contained in them is different . Definition 3 Minimal Multi-neuron-dynamic ( mMND ) Network A densely connected network in which each layer has an arbitrary number of neurons that have different neuron parameters . All synapses from one pre-synaptic neuron have the same synaptic conductance . Notations Neuron Delay tnd is the time for a spike from pre-synaptic neuron to arrive at its postsynaptic neurons , as shown in Figure 1 ( a ) . For a feedforward SNN with m layers , a skip-layer connection can be defined with source layer and target layer pair ( ls , lt ) . The output feature map from source layer is concatenated to the original input feature map of the target layer . For the analysis of spike sequence in temporal space , the notation of Tmax and Tmin are defined as positive real numbers such that Tmax > Tmin . ϵ > 0 is the error of approximation . Figure 1 ( a ) shows two memory pathways receiving an input spike sequence with time-varying periods . As the neurons have different dynamics , the two memory pathways have different response to the input spike sequence . An example of mMND network with m layers and n neuron dynamics is shown in Figure 1 ( b ) . SNN with multilayer perceptron ( MLP ) structure can be considered a scaled-up mMND network with multiple neurons for each dynamic . A network with convolutional structure can be considered a scaled-up mMND network with duplicated connections in each layer . We analyze the correlation of network capacity and structure based on mMND networks . The design of neuron heterogeneity can also be implemented in MLP-SNN and Conv-SNN as described in Section 4 . The analysis for network capacity can be extended to those networks according to their specific layer dimensions . 3.2 MODELING OF SPIKING NEURON . SNN consists of spiking neurons connected with synapses . 
The spiking neuron model studied in this work is leaky integrate-and-fire ( LIF ) as defined by the following equations : $\tau_m \frac{dv}{dt} = a + R_m I - v \; ; \quad v = v_{reset} \ \text{if} \ v > v_{threshold}$ ( 1 ) Rm is membrane resistance , τm = RmCm is time constant and Cm is membrane capacitance . a is the resting potential . I is the sum of current from all input synapses that connect to the neuron . A spike is generated when membrane potential v crosses threshold and the neuron enters refractory period r , during which the neuron maintains its membrane potential at vreset . The time it takes for a pre-synaptic neuron to send a spike to its post-synaptic neurons is tnd . Neuron response rate γ is a property of a spiking neuron ’ s response to certain input spike sequence . We show how the value of γ can be evaluated below . Remark For any input spike sequence , individual spike can be described with Dirac delta function δ ( t− ti ) where ti is the time of the i-th input spike . For the membrane potential of a spiking neuron receiving the input before reaching spiking threshold , with initial state at t = 0 with v = vreset , solving the differential equation ( 1 ) leads to : $v(t) = v_{reset}\, e^{-\frac{t}{\tau_m}} + a \big( 1 - e^{-\frac{t}{\tau_m}} \big) + \frac{R_m}{\tau_m} e^{-\frac{t}{\tau_m}} \sum_i G \int_0^t \delta( t - t_i )\, e^{\frac{t}{\tau_m}} dt$ ( 2 ) Here , G is the conductance of input synapse connected to the neuron , which is trainable . From ( 2 ) , there exists a value of u such that $v_m( t_{u-1} ) < v_{threshold}$ and $v_m( t_u ) \geq v_{threshold}$ . By evaluating ( 2 ) for u given neuron parameters and input spike sequence , the neuron response rate γ can be found . 3.3 APPROXIMATION THEOREM OF FEEDFORWARD SNN . To develop the approximation theorem for feedforward SNN , we first aim to understand the range of neuron response rate that can be achieved . We show with Lemma 1 that for any input spike sequence with periods in a closed interval , it is possible to set the neuron response rate γ to any positive integer .
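The evaluation of γ described in the Remark can be sketched numerically: simulate Eq . ( 1 ) under a periodic input spike train, where each input spike adds RmG/τm to the membrane potential (consistent with the impulse term of Eq . ( 2 )), and count input spikes until the threshold is crossed. All parameter values below are illustrative, not taken from the paper:

```python
def response_rate(tau_m, R_m, G, v_th, t_in, a=0.0, v_reset=0.0, dt=0.01):
    """Count input spikes needed to drive a LIF neuron from v_reset to v_th
    (the paper's gamma), using Euler integration of the leak between spikes."""
    v, spikes = v_reset, 0
    while spikes < 10_000:                      # safety bound
        # integrate the leak tau_m dv/dt = a - v over one input period
        for _ in range(int(round(t_in / dt))):
            v += dt * (a - v) / tau_m
        v += R_m * G / tau_m                    # jump caused by one input spike
        spikes += 1
        if v >= v_th:
            return spikes
    return None

gamma = response_rate(tau_m=100.0, R_m=1.0, G=1.0, v_th=0.05, t_in=1.0)
print(gamma)  # 6
```

Increasing G lowers γ and shrinking the leak (large τm) makes the decay Δv negligible, which is the mechanism exploited in the proof of Lemma 1.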
Based on this property , we show with Theorem 1 that by connecting a list of spiking neurons with certain γ sequentially and inserting skip-layer connections , approximation of spike-sequence mapping functions can be achieved . To understand whether this capability of feedforward SNN relies on skip-layer connections , we develop Lemma 2 to prove that skip-layer connections are indeed necessary . In subsection 3.4 we investigate the correlation between approximation capability and network structures by analyzing the cutoff property of spiking neurons , which can change the network ’ s connectivity . In our analysis , we focus on two particular designs : heterogeneous network ( Lemma 4 ) and skip-layer connection ( Lemma 5 ) , and show their impact on the number of distinct memory pathways in a network . All lemmas are formally proved in the appendix . Lemma 1 For any input spike sequence with period tin in range [ Tmin , Tmax ] , there exists a spiking neuron n with fixed parameters vth , vreset , a , Rm and τm , such that by changing synaptic conductance G , it is possible to set the neuron response rate γn to be any positive integer . Proof Sketch . ( Formal proof in Appendix A ) Given an input spike sequence , it is possible to derive the highest possible membrane potential decay ∆v for any input tin ∈ [ Tmin , Tmax ] as a function of neuron parameters . We show that it is possible to make ∆v tend to zero by configuring the neuron parameters . Since the decay of v can be negligible , γn can be set to any positive integer by changing G. Theorem 1 For any input and target output spike sequence pair with periods ( tin , tout ) ∈ [ Tmin , Tmax ] × [ Tmin , Tmax ] , there exists a minimal-layer-size network with skip-layer connections that has a memory pathway with output spike period function P ( t ) such that |P ( tin ) − tout| < ϵ . Proof Sketch .
( Formal proof in Appendix B ) With skip-layer connections , there can be multiple memory pathways in a minimal-layer-size network as neurons can be either included or skipped through . Hence it is possible to create memory pathways with different delay times for each input spike in a network and by connecting the output of those memory pathways to a common neuron n′ , spike sequence of any arbitrary period tint such that tint < = tin can be generated within ϵ . By setting γn′ > 1 , the output from n′ receiving input spike sequence with tint is tn ′ out = γ · tint . Hence it is possible to achieve a network with output spike period P ( t ) such that |P ( tin ) − tout| < ϵ. Lemma 2 With no skip-layer connection , there does not exist a minimal-layer-size network that has output spike period function P ( t ) such that for any input and target output spike sequence pair with periods ( tin , tout ) ∈ [ Tmin , Tmax ] × [ Tmin , Tmax ] , |P ( tin ) − tout| < ϵ . Proof Sketch . ( Formal proof in Appendix C ) A minimal-layer-size network without skip-layer connection has only one memory pathway . For a particular input spike sequence with period tin , different output period P ( tin ) can be achieved by changing γ of neurons in the memory pathway . We show that there exists two output spike periods P ( tin ) and P ′ ( tin ) , such that P ( tin ) − P ′ ( tin ) is a constant value independent of network or neuron configurations , and there can be no P ′′ ( tin ) such that P ( tin ) < P ′′ ( tin ) < P ′ ( tin ) . Therefore , for any minimal-layer-size network , there exists tout within the range of ( P ( tin ) , P ′ ( tin ) ) such that |P ( tin ) − tout| < ϵ does not hold true . 
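The contrast between Theorem 1 and Lemma 2 can be illustrated by counting distinct memory pathways (Definition 2) in a minimal one-neuron-per-layer network: without skip-layer connections there is a single input-to-output path, while skip connections add more. The helper below is a toy path-counting sketch, not the paper's construction:

```python
def count_memory_pathways(m, skips):
    """Count distinct input->output spike propagation paths through m
    single-neuron layers, where `skips` is a set of (source, target)
    layer pairs (node 0 = input, node m + 1 = output)."""
    edges = {(i, i + 1) for i in range(m + 1)} | set(skips)
    # count paths from node 0 to node m+1 by dynamic programming over rank
    paths = [0] * (m + 2)
    paths[0] = 1
    for t in range(1, m + 2):
        paths[t] = sum(paths[s] for s, tt in edges if tt == t)
    return paths[m + 1]

no_skip = count_memory_pathways(3, skips=set())            # single chain
with_skip = count_memory_pathways(3, skips={(0, 2), (1, 4)})
print(no_skip, with_skip)  # 1 3
```

The single pathway in the no-skip case is exactly the limitation exploited in the proof of Lemma 2, while the extra pathways enable the delay-time construction used for Theorem 1.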
| In this publication, the authors present an interesting framework for approximating the mapping between sequences by feedforward spiking neural networks (SNNs), providing insights into the computational capabilities of feedforward SNNs for approximating the mapping of input to output spike trains, and into how these are influenced by hyperparameters such as the network architecture, heterogeneous properties of neurons and skip connections. The authors also provide an ansatz for how these hyperparameters can be optimised in a two-step Bayesian optimisation fashion. The performance of the approach is demonstrated on several spatiotemporal classification tasks, on datasets like DVS Gesture, N-Caltech 101 and sequential MNIST. | SP:502b85913acd50d279e0794db625c62654106ee5 |
Sequence Approximation using Feedforward Spiking Neural Network for Spatiotemporal Learning: Theory and Optimization Methods | 1 INTRODUCTION. Spiking neural networks (SNN) (Ponulak & Kasinski, 2011) use biologically inspired neurons and synaptic connections trainable with either biological learning rules such as spike-timing-dependent plasticity (STDP) (Gerstner & Kistler, 2002) or statistical training algorithms such as backpropagation-through-time (BPTT) (Werbos, 1990). SNNs with simple leaky integrate-and-fire (LIF) neurons and supervised training have shown classification performance similar to deep neural networks (DNN) while being energy efficient (Kim et al., 2020b; Wu et al., 2019; Srinivasan & Roy, 2019). One of SNN's main differences from DNN is that the neurons are dynamical systems with internal states evolving over time, making it possible for SNN to learn temporal patterns without recurrent connections. Empirical results on feedforward-only SNN models show good performance for spatiotemporal data classification, using either supervised training (Lee et al., 2016; Kaiser et al., 2020; Khoei et al., 2020) or unsupervised learning (She et al., 2021). However, while empirical results are promising, a lack of theoretical understanding of sequence approximation using SNN makes it challenging to optimize performance on complex spatiotemporal datasets. In this work, we develop a theoretical framework for analyzing and improving sequence approximation using feedforward SNN. We view a feedforward connection of spiking neurons as a spike propagation path, hereafter referred to as a memory pathway (She et al., 2021), that maps an input spike train with an arbitrary frequency to an output spike train with a target frequency.
Consequently, we argue that an SNN with many memory pathways can approximate a temporal sequence of spike trains with time-varying unknown frequencies using a series of pre-defined output spike trains with known frequencies. Our theoretical framework aims to first establish SNN's ability to map frequencies of input/output spike trains within arbitrarily small error; and next, derive the basic principles for adapting neuron dynamics and SNN architecture to improve sequence approximation. The theoretical derivations are then investigated with experimental studies on feedforward SNN for spatiotemporal classification. We adopt the basic design principles for improving sequence approximation to optimize SNN architectures and study whether these networks can be trained to improve performance on spatiotemporal classification tasks. The key contributions of this work are: • We prove that any spike-sequence-to-spike-sequence mapping function on a compact domain can be approximated by a feedforward SNN with one neuron per layer using skip-layer connections, which cannot be achieved if no skip-layer connection is used. • We prove that using heterogeneous neurons having different dynamics and skip-layer connections increases the number of memory pathways a feedforward SNN can achieve and hence improves SNN's capability to represent arbitrary sequences. • We develop complex SNN architectures using the preceding theoretical observations and experimentally demonstrate that they can be trained with supervised BPTT and unsupervised STDP for classification on spatiotemporal data. • We design a dual-search-space option for the Bayesian optimization process to sequentially optimize the network architecture and neuron dynamics of a feedforward SNN, considering heterogeneity and skip-layer connections, to improve learning and classification of spatiotemporal patterns.
We experimentally demonstrate that our network design principles coupled with the dual-search-space Bayesian optimization improve classification performance on DVS Gesture (Amir et al., 2017), N-Caltech (Orchard et al., 2015), and sequential MNIST. Results show that the design principles derived using our theoretical framework for sequence approximation can improve the spatiotemporal classification performance of SNN. 2 RELATED WORK. Prior theoretical approaches to analyzing SNN often focus on the storage and retrieval of precise spike patterns (Amit & Huang, 2010; Brea et al., 2013). There are also works that consider SNN for solving optimization problems (Chou et al., 2018; Binas et al., 2016) and works that analyze the dynamics of SNN (Zhang et al., 2019; Barrett et al., 2013). Those are different topics from the approximation of spike-sequence-to-spike-sequence mapping functions. SNNs that incorporate excitatory and inhibitory signals have been shown to emulate sigmoidal networks (Maass, 1997) and are theoretically capable of universal function approximation. Feedforward SNNs with specially designed spiking neuron models (Iannella & Back, 2001; Torikai et al., 2008) have been demonstrated for function approximation, while for networks using LIF neurons, function approximation has been shown with only empirical results (Farsa et al., 2015). On the other hand, the existing works that have developed efficient training processes for SNN and demonstrated classification performance comparable to deep learning models have mostly used simpler and generic LIF neuron models (Lee et al., 2016; Kaiser et al., 2020; Kim et al., 2020b; Wu et al., 2019; Sengupta et al., 2019; Safa et al., 2021; Han et al., 2020).
Therefore, this paper develops the theoretical basis for function approximation using feedforward SNN with LIF neurons, and studies applications of the developed theoretical constructs in improving SNN-based spatiotemporal pattern classification. The effectiveness of heterogeneous neurons (She et al., 2021) and skip-layer connections (Srinivasan & Roy, 2019; Sengupta et al., 2019) in SNN has been empirically studied in the past. However, no theoretical approach has been presented to understand why such methods improve learning of spike sequences, and how to optimize an SNN's architecture and parameters to effectively exploit these design constructs. It is possible to search for optimal SNN configurations through optimization algorithms, but the large number of hyper-parameters for spiking neurons and network structure creates a high-dimensional search space that is slow and difficult to search. Bayesian optimization (Snoek et al., 2012) uses collected data points to decide on the next test point that could provide improvement, thus accelerating the optimization process. Prior works (Parsa et al., 2019; Kim et al., 2020a) have shown that SNN performance can also be effectively improved with Bayesian optimization. While those works consider a single or a few neuron parameters, the dual-search-space Bayesian optimization proposed in this work optimizes both network architecture and neuron parameters efficiently by separating the discrete search spaces from the continuous search spaces. 3 APPROXIMATION THEORY OF FEEDFORWARD SNN. 3.1 DEFINITIONS AND NOTATIONS. Definition 1 Neuron Response Rate γ For a spiking neuron n with membrane potential at vreset and an input spike sequence with period tin, γ is the number of input spikes n needs to reach vth. Definition 2 Memory Pathways For a feedforward SNN with m layers, a memory pathway is defined as a spike propagation path from the input to the output layer.
Two memory pathways are considered distinct if the sets of neurons contained in them are different. Definition 3 Minimal Multi-neuron-dynamic (mMND) Network A densely connected network in which each layer has an arbitrary number of neurons that have different neuron parameters. All synapses from one pre-synaptic neuron have the same synaptic conductance. Notations Neuron delay tnd is the time for a spike from a pre-synaptic neuron to arrive at its post-synaptic neurons, as shown in Figure 1(a). For a feedforward SNN with m layers, a skip-layer connection can be defined by a source layer and target layer pair (ls, lt). The output feature map from the source layer is concatenated to the original input feature map of the target layer. For the analysis of spike sequences in temporal space, the notations Tmax and Tmin are defined as positive real numbers such that Tmax > Tmin. ϵ > 0 is the error of approximation. Figure 1(a) shows two memory pathways receiving an input spike sequence with time-varying periods. As the neurons have different dynamics, the two memory pathways have different responses to the input spike sequence. An example of an mMND network with m layers and n neuron dynamics is shown in Figure 1(b). An SNN with multilayer perceptron (MLP) structure can be considered a scaled-up mMND network with multiple neurons for each dynamic. A network with convolutional structure can be considered a scaled-up mMND network with duplicated connections in each layer. We analyze the correlation of network capacity and structure based on mMND networks. The design of neuron heterogeneity can also be implemented in MLP-SNN and Conv-SNN as described in Section 4. The analysis of network capacity can be extended to those networks according to their specific layer dimensions. 3.2 MODELING OF SPIKING NEURON. SNN consists of spiking neurons connected with synapses.
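The notion of distinct memory pathways (Definition 2) and skip-layer connections defined above can be made concrete with a small counting sketch. This is illustrative only, not the authors' code: the function name and the one-neuron-per-layer setting are assumptions chosen to mirror the minimal networks analyzed later.

```python
def count_pathways(m, skips):
    """Count distinct input-to-output spike propagation paths (memory
    pathways) in a feedforward net with one neuron per layer.

    Layers are nodes 0..m-1; consecutive layers are always connected, and
    each (ls, lt) pair in `skips` adds a skip-layer connection. Two pathways
    are distinct iff they traverse different sets of neurons.
    """
    edges = {i: [i + 1] for i in range(m - 1)}
    for ls, lt in skips:
        edges.setdefault(ls, []).append(lt)

    def paths_from(i):
        if i == m - 1:          # reached the output layer
            return 1
        return sum(paths_from(j) for j in edges.get(i, []))

    return paths_from(0)

print(count_pathways(4, skips=[]))                # → 1: a single chain
print(count_pathways(4, skips=[(0, 2), (1, 3)]))  # → 3: skips multiply pathways
```

Without skip-layer connections the minimal network has exactly one pathway, which is the situation Lemma 2 exploits; each added skip connection lets spikes bypass intermediate neurons and so multiplies the pathway count.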
The spiking neuron model studied in this work is the leaky integrate-and-fire (LIF) model, defined by the following equations: τm dv/dt = a + Rm I − v; v = vreset, if v > vthreshold (1) Rm is the membrane resistance, τm = Rm Cm is the time constant and Cm is the membrane capacitance. a is the resting potential. I is the sum of currents from all input synapses that connect to the neuron. A spike is generated when the membrane potential v crosses the threshold and the neuron enters a refractory period r, during which the neuron maintains its membrane potential at vreset. The time it takes for a pre-synaptic neuron to send a spike to its post-synaptic neurons is tnd. The neuron response rate γ is a property of a spiking neuron's response to a certain input spike sequence. We show how the value of γ can be evaluated below. Remark For any input spike sequence, each individual spike can be described by a Dirac delta function δ(t − ti), where ti is the time of the i-th input spike. For the membrane potential of a spiking neuron receiving this input before reaching the spiking threshold, with initial state v = vreset at t = 0, solving differential equation (1) leads to: v(t) = vreset e^{−t/τm} + a (1 − e^{−t/τm}) + (Rm/τm) e^{−t/τm} Σi G ∫0^t δ(t − ti) e^{t/τm} dt (2) Here, G is the conductance of the input synapse connected to the neuron, which is trainable. From (2), there exists a value u such that v(tu−1) < vthreshold and v(tu) ≥ vthreshold. By evaluating (2) for u, given the neuron parameters and input spike sequence, the neuron response rate γ can be found. 3.3 APPROXIMATION THEOREM OF FEEDFORWARD SNN. To develop the approximation theorem for feedforward SNN, we first aim to understand the range of neuron response rates that can be achieved. We show with Lemma 1 that for any input spike sequence with period in a closed interval, it is possible to set the neuron response rate γ to any positive integer.
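The evaluation of γ from equation (2) can be sketched numerically: each input spike contributes an instantaneous jump of (Rm/τm)·G to the membrane potential, and between spikes the potential decays toward the resting value a with time constant τm. The sketch below (not the authors' code; the function name and all parameter values are hypothetical) counts how many periodic input spikes are needed to reach threshold.

```python
import math

def response_rate(t_in, G, v_reset=0.0, v_th=1.0, a=0.0, R_m=1.0, tau_m=1000.0):
    """Count input spikes needed for a LIF neuron (cf. Eq. 1) to fire.

    Input spikes arrive every t_in time units; each adds (R_m / tau_m) * G
    to v, and v leaks exponentially toward `a` between spikes.
    """
    v = v_reset
    for k in range(1, 100_000):
        v += (R_m / tau_m) * G           # jump from the k-th input spike
        if v >= v_th:
            return k                     # gamma: spikes needed to fire
        leak = math.exp(-t_in / tau_m)   # closed-form decay until next spike
        v = a + (v - a) * leak
    raise RuntimeError("neuron never reached threshold")

# With tau_m large relative to t_in the leak Δv is negligible, so gamma is
# roughly inversely proportional to the synaptic conductance G, matching
# the intuition behind Lemma 1.
print(response_rate(t_in=1.0, G=25.0))  # fires after a few tens of spikes
print(response_rate(t_in=1.0, G=5.0))   # smaller G: more spikes needed
```

Increasing τm makes the per-period decay Δv arbitrarily small, at which point γ can be set to any positive integer purely through the choice of G, which is the mechanism Lemma 1 formalizes.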
Based on this property, we show with Theorem 1 that by sequentially connecting a list of spiking neurons with certain γ and inserting skip-layer connections, approximation of spike-sequence mapping functions can be achieved. To understand whether this capability of feedforward SNN relies on skip-layer connections, we develop Lemma 2 to prove that skip-layer connections are indeed necessary. In subsection 3.4 we investigate the correlation between approximation capability and network structure by analyzing the cutoff property of spiking neurons, which can change the network's connectivity. In our analysis, we focus on two particular designs: heterogeneous networks (Lemma 4) and skip-layer connections (Lemma 5), and show their impact on the number of distinct memory pathways in a network. All lemmas are formally proved in the appendix. Lemma 1 For any input spike sequence with period tin in the range [Tmin, Tmax], there exists a spiking neuron n with fixed parameters vth, vreset, a, Rm and τm, such that by changing the synaptic conductance G, it is possible to set the neuron response rate γn to any positive integer. Proof Sketch. (Formal proof in Appendix A) Given an input spike sequence, it is possible to derive the highest possible membrane potential decay ∆v for any input tin ∈ [Tmin, Tmax] as a function of the neuron parameters. We show that it is possible to make ∆v tend to zero by configuring the neuron parameters. Since the decay of v is then negligible, γn can be set to any positive integer by changing G. Theorem 1 For any input and target output spike sequence pair with periods (tin, tout) ∈ [Tmin, Tmax] × [Tmin, Tmax], there exists a minimal-layer-size network with skip-layer connections that has a memory pathway with output spike period function P(t) such that |P(tin) − tout| < ϵ. Proof Sketch.
(Formal proof in Appendix B) With skip-layer connections, there can be multiple memory pathways in a minimal-layer-size network, as neurons can be either included or skipped. Hence it is possible to create memory pathways with different delay times for each input spike in a network, and by connecting the outputs of those memory pathways to a common neuron n′, a spike sequence of any arbitrary period tint such that tint ≤ tin can be generated within ϵ. By setting γn′ > 1, the output of n′ receiving an input spike sequence with period tint has period tn′out = γn′ · tint. Hence it is possible to achieve a network with output spike period P(t) such that |P(tin) − tout| < ϵ. Lemma 2 With no skip-layer connection, there does not exist a minimal-layer-size network with output spike period function P(t) such that for any input and target output spike sequence pair with periods (tin, tout) ∈ [Tmin, Tmax] × [Tmin, Tmax], |P(tin) − tout| < ϵ. Proof Sketch. (Formal proof in Appendix C) A minimal-layer-size network without skip-layer connections has only one memory pathway. For a particular input spike sequence with period tin, different output periods P(tin) can be achieved by changing the γ of neurons in the memory pathway. We show that there exist two output spike periods P(tin) and P′(tin) such that P(tin) − P′(tin) is a constant value independent of network or neuron configuration, and there can be no P′′(tin) such that P(tin) < P′′(tin) < P′(tin). Therefore, for any minimal-layer-size network, there exists tout within the range (P(tin), P′(tin)) such that |P(tin) − tout| < ϵ does not hold. | This paper presents some theoretical understanding of sequence approximation using feedforward SNNs.
The main conclusions are twofold: (1) a feedforward SNN with one neuron per layer and skip-layer connections can approximate the rate-coding mapping function; (2) a feedforward SNN constructed from heterogeneous neurons with varying dynamics and skip connections can improve sequence approximation. Besides, the authors propose dual-search-space Bayesian optimization (DSBO) to optimize the architecture and parameters of their proposed SNNs. | SP:502b85913acd50d279e0794db625c62654106ee5 |
Sequence Approximation using Feedforward Spiking Neural Network for Spatiotemporal Learning: Theory and Optimization Methods | 1 INTRODUCTION. Spiking neural networks (SNN) (Ponulak & Kasinski, 2011) use biologically inspired neurons and synaptic connections trainable with either biological learning rules such as spike-timing-dependent plasticity (STDP) (Gerstner & Kistler, 2002) or statistical training algorithms such as backpropagation-through-time (BPTT) (Werbos, 1990). SNNs with simple leaky integrate-and-fire (LIF) neurons and supervised training have shown classification performance similar to deep neural networks (DNN) while being energy efficient (Kim et al., 2020b; Wu et al., 2019; Srinivasan & Roy, 2019). One of SNN's main differences from DNN is that the neurons are dynamical systems with internal states evolving over time, making it possible for SNN to learn temporal patterns without recurrent connections. Empirical results on feedforward-only SNN models show good performance for spatiotemporal data classification, using either supervised training (Lee et al., 2016; Kaiser et al., 2020; Khoei et al., 2020) or unsupervised learning (She et al., 2021). However, while empirical results are promising, a lack of theoretical understanding of sequence approximation using SNN makes it challenging to optimize performance on complex spatiotemporal datasets. In this work, we develop a theoretical framework for analyzing and improving sequence approximation using feedforward SNN. We view a feedforward connection of spiking neurons as a spike propagation path, hereafter referred to as a memory pathway (She et al., 2021), that maps an input spike train with an arbitrary frequency to an output spike train with a target frequency.
Consequently, we argue that an SNN with many memory pathways can approximate a temporal sequence of spike trains with time-varying unknown frequencies using a series of pre-defined output spike trains with known frequencies. Our theoretical framework aims to first establish SNN's ability to map frequencies of input/output spike trains within arbitrarily small error; and next, derive the basic principles for adapting neuron dynamics and SNN architecture to improve sequence approximation. The theoretical derivations are then investigated with experimental studies on feedforward SNN for spatiotemporal classification. We adopt the basic design principles for improving sequence approximation to optimize SNN architectures and study whether these networks can be trained to improve performance on spatiotemporal classification tasks. The key contributions of this work are: • We prove that any spike-sequence-to-spike-sequence mapping function on a compact domain can be approximated by a feedforward SNN with one neuron per layer using skip-layer connections, which cannot be achieved if no skip-layer connection is used. • We prove that using heterogeneous neurons having different dynamics and skip-layer connections increases the number of memory pathways a feedforward SNN can achieve and hence improves SNN's capability to represent arbitrary sequences. • We develop complex SNN architectures using the preceding theoretical observations and experimentally demonstrate that they can be trained with supervised BPTT and unsupervised STDP for classification on spatiotemporal data. • We design a dual-search-space option for the Bayesian optimization process to sequentially optimize the network architecture and neuron dynamics of a feedforward SNN, considering heterogeneity and skip-layer connections, to improve learning and classification of spatiotemporal patterns.
We experimentally demonstrate that our network design principles coupled with the dual-search-space Bayesian optimization improve classification performance on DVS Gesture (Amir et al., 2017), N-Caltech (Orchard et al., 2015), and sequential MNIST. Results show that the design principles derived using our theoretical framework for sequence approximation can improve the spatiotemporal classification performance of SNN. 2 RELATED WORK. Prior theoretical approaches to analyzing SNN often focus on the storage and retrieval of precise spike patterns (Amit & Huang, 2010; Brea et al., 2013). There are also works that consider SNN for solving optimization problems (Chou et al., 2018; Binas et al., 2016) and works that analyze the dynamics of SNN (Zhang et al., 2019; Barrett et al., 2013). Those are different topics from the approximation of spike-sequence-to-spike-sequence mapping functions. SNNs that incorporate excitatory and inhibitory signals have been shown to emulate sigmoidal networks (Maass, 1997) and are theoretically capable of universal function approximation. Feedforward SNNs with specially designed spiking neuron models (Iannella & Back, 2001; Torikai et al., 2008) have been demonstrated for function approximation, while for networks using LIF neurons, function approximation has been shown with only empirical results (Farsa et al., 2015). On the other hand, the existing works that have developed efficient training processes for SNN and demonstrated classification performance comparable to deep learning models have mostly used simpler and generic LIF neuron models (Lee et al., 2016; Kaiser et al., 2020; Kim et al., 2020b; Wu et al., 2019; Sengupta et al., 2019; Safa et al., 2021; Han et al., 2020).
Therefore, this paper develops the theoretical basis for function approximation using feedforward SNN with LIF neurons, and studies applications of the developed theoretical constructs in improving SNN-based spatiotemporal pattern classification. The effectiveness of heterogeneous neurons (She et al., 2021) and skip-layer connections (Srinivasan & Roy, 2019; Sengupta et al., 2019) in SNN has been empirically studied in the past. However, no theoretical approach has been presented to understand why such methods improve learning of spike sequences, and how to optimize an SNN's architecture and parameters to effectively exploit these design constructs. It is possible to search for optimal SNN configurations through optimization algorithms, but the large number of hyper-parameters for spiking neurons and network structure creates a high-dimensional search space that is slow and difficult to search. Bayesian optimization (Snoek et al., 2012) uses collected data points to decide on the next test point that could provide improvement, thus accelerating the optimization process. Prior works (Parsa et al., 2019; Kim et al., 2020a) have shown that SNN performance can also be effectively improved with Bayesian optimization. While those works consider a single or a few neuron parameters, the dual-search-space Bayesian optimization proposed in this work optimizes both network architecture and neuron parameters efficiently by separating the discrete search spaces from the continuous search spaces. 3 APPROXIMATION THEORY OF FEEDFORWARD SNN. 3.1 DEFINITIONS AND NOTATIONS. Definition 1 Neuron Response Rate γ For a spiking neuron n with membrane potential at vreset and an input spike sequence with period tin, γ is the number of input spikes n needs to reach vth. Definition 2 Memory Pathways For a feedforward SNN with m layers, a memory pathway is defined as a spike propagation path from the input to the output layer.
Two memory pathways are considered distinct if the sets of neurons contained in them are different. Definition 3 Minimal Multi-neuron-dynamic (mMND) Network A densely connected network in which each layer has an arbitrary number of neurons that have different neuron parameters. All synapses from one pre-synaptic neuron have the same synaptic conductance. Notations Neuron delay tnd is the time for a spike from a pre-synaptic neuron to arrive at its post-synaptic neurons, as shown in Figure 1(a). For a feedforward SNN with m layers, a skip-layer connection can be defined by a source layer and target layer pair (ls, lt). The output feature map from the source layer is concatenated to the original input feature map of the target layer. For the analysis of spike sequences in temporal space, the notations Tmax and Tmin are defined as positive real numbers such that Tmax > Tmin. ϵ > 0 is the error of approximation. Figure 1(a) shows two memory pathways receiving an input spike sequence with time-varying periods. As the neurons have different dynamics, the two memory pathways have different responses to the input spike sequence. An example of an mMND network with m layers and n neuron dynamics is shown in Figure 1(b). An SNN with multilayer perceptron (MLP) structure can be considered a scaled-up mMND network with multiple neurons for each dynamic. A network with convolutional structure can be considered a scaled-up mMND network with duplicated connections in each layer. We analyze the correlation of network capacity and structure based on mMND networks. The design of neuron heterogeneity can also be implemented in MLP-SNN and Conv-SNN as described in Section 4. The analysis of network capacity can be extended to those networks according to their specific layer dimensions. 3.2 MODELING OF SPIKING NEURON. SNN consists of spiking neurons connected with synapses.
The spiking neuron model studied in this work is the leaky integrate-and-fire (LIF) model, defined by the following equations: τm dv/dt = a + Rm I − v; v = vreset, if v > vthreshold (1) Rm is the membrane resistance, τm = Rm Cm is the time constant and Cm is the membrane capacitance. a is the resting potential. I is the sum of currents from all input synapses that connect to the neuron. A spike is generated when the membrane potential v crosses the threshold and the neuron enters a refractory period r, during which the neuron maintains its membrane potential at vreset. The time it takes for a pre-synaptic neuron to send a spike to its post-synaptic neurons is tnd. The neuron response rate γ is a property of a spiking neuron's response to a certain input spike sequence. We show how the value of γ can be evaluated below. Remark For any input spike sequence, each individual spike can be described by a Dirac delta function δ(t − ti), where ti is the time of the i-th input spike. For the membrane potential of a spiking neuron receiving this input before reaching the spiking threshold, with initial state v = vreset at t = 0, solving differential equation (1) leads to: v(t) = vreset e^{−t/τm} + a (1 − e^{−t/τm}) + (Rm/τm) e^{−t/τm} Σi G ∫0^t δ(t − ti) e^{t/τm} dt (2) Here, G is the conductance of the input synapse connected to the neuron, which is trainable. From (2), there exists a value u such that v(tu−1) < vthreshold and v(tu) ≥ vthreshold. By evaluating (2) for u, given the neuron parameters and input spike sequence, the neuron response rate γ can be found. 3.3 APPROXIMATION THEOREM OF FEEDFORWARD SNN. To develop the approximation theorem for feedforward SNN, we first aim to understand the range of neuron response rates that can be achieved. We show with Lemma 1 that for any input spike sequence with period in a closed interval, it is possible to set the neuron response rate γ to any positive integer.
Based on this property, we show with Theorem 1 that by sequentially connecting a list of spiking neurons with certain γ and inserting skip-layer connections, approximation of spike-sequence mapping functions can be achieved. To understand whether this capability of feedforward SNN relies on skip-layer connections, we develop Lemma 2 to prove that skip-layer connections are indeed necessary. In subsection 3.4 we investigate the correlation between approximation capability and network structure by analyzing the cutoff property of spiking neurons, which can change the network's connectivity. In our analysis, we focus on two particular designs: heterogeneous networks (Lemma 4) and skip-layer connections (Lemma 5), and show their impact on the number of distinct memory pathways in a network. All lemmas are formally proved in the appendix. Lemma 1 For any input spike sequence with period tin in the range [Tmin, Tmax], there exists a spiking neuron n with fixed parameters vth, vreset, a, Rm and τm, such that by changing the synaptic conductance G, it is possible to set the neuron response rate γn to any positive integer. Proof Sketch. (Formal proof in Appendix A) Given an input spike sequence, it is possible to derive the highest possible membrane potential decay ∆v for any input tin ∈ [Tmin, Tmax] as a function of the neuron parameters. We show that it is possible to make ∆v tend to zero by configuring the neuron parameters. Since the decay of v is then negligible, γn can be set to any positive integer by changing G. Theorem 1 For any input and target output spike sequence pair with periods (tin, tout) ∈ [Tmin, Tmax] × [Tmin, Tmax], there exists a minimal-layer-size network with skip-layer connections that has a memory pathway with output spike period function P(t) such that |P(tin) − tout| < ϵ. Proof Sketch.
(Formal proof in Appendix B) With skip-layer connections, there can be multiple memory pathways in a minimal-layer-size network, as neurons can be either included or skipped. Hence it is possible to create memory pathways with different delay times for each input spike in a network, and by connecting the outputs of those memory pathways to a common neuron n′, a spike sequence of any arbitrary period tint such that tint ≤ tin can be generated within ϵ. By setting γn′ > 1, the output of n′ receiving an input spike sequence with period tint has period tn′out = γn′ · tint. Hence it is possible to achieve a network with output spike period P(t) such that |P(tin) − tout| < ϵ. Lemma 2 With no skip-layer connection, there does not exist a minimal-layer-size network with output spike period function P(t) such that for any input and target output spike sequence pair with periods (tin, tout) ∈ [Tmin, Tmax] × [Tmin, Tmax], |P(tin) − tout| < ϵ. Proof Sketch. (Formal proof in Appendix C) A minimal-layer-size network without skip-layer connections has only one memory pathway. For a particular input spike sequence with period tin, different output periods P(tin) can be achieved by changing the γ of neurons in the memory pathway. We show that there exist two output spike periods P(tin) and P′(tin) such that P(tin) − P′(tin) is a constant value independent of network or neuron configuration, and there can be no P′′(tin) such that P(tin) < P′′(tin) < P′(tin). Therefore, for any minimal-layer-size network, there exists tout within the range (P(tin), P′(tin)) such that |P(tin) − tout| < ϵ does not hold. | The authors introduce a theory of how to learn arbitrary input-spike-train-to-output-spike-train mappings using a feedforward SNN with heterogeneous neurons and skip connections. Then this theory is used to train a deep SNN using either BPTT or STDP.
The SNN is then evaluated on IBM DVS Gesture and N-Caltech101. The accuracy they obtain with BPTT is beyond the SOTA. | SP:502b85913acd50d279e0794db625c62654106ee5 |
Temporal abstractions-augmented temporally contrastive learning: an alternative to the Laplacian in RL | 1 INTRODUCTION. Representation learning has been at the core of many recent machine learning advances (cf. Bengio et al., 2013). With the advent of deep reinforcement learning (Mnih et al., 2015), representation learning has also become one of the main topics of interest in reinforcement learning (RL). For example, in the goal-conditioned hierarchical setting (Vezhnevets et al., 2017; Nachum et al., 2019a), one learns a representation that maps observations to an abstract space, the representation space, in which the higher-level policy defines the desired behavior of the lower-level policy. Distance in the representation space can then be used to reward and guide the lower-level policy towards specific goal states. Moreover, environments with rich observations and complex dynamics (e.g., Bellemare et al., 2020) have motivated recent works on learning representations as controllable or contingent features (Bengio et al., 2017; Choi et al., 2019), on top of which one can potentially learn latent models from the perspective of planning (Hafner et al., 2019b; Nasiriany et al., 2019; Schrittwieser et al., 2020) and control (Watter et al., 2015; Banijamali et al., 2018; Hafner et al., 2019a). In this work, we are interested in the reward-agnostic setting in which an RL agent first interacts with the environment to build a representation, φ, of the state space, S, without relying on any task-specific reward signal. This representation can later be used to solve a task posed in the environment in the form of a reward function. In this setting, the environment dynamics are the only informative interaction channel available to the agent.
This has naturally motivated graph Laplacian-based methods to address the task-agnostic phase, where the graph vertices correspond to the states and its edges to the transition probabilities. The Laplacian's eigenvectors can be leveraged as a holistic state representation, termed the Laplacian representation, which captures the environment's dynamics structure and geometry (Mahadevan, 2005; Mahadevan & Maggioni, 2007). Wu et al. (2019) recently proposed an efficient approximation of the Laplacian representation (LAP-REP) by framing the graph drawing objective as a temporally-contrastive loss (see Section 2.2). While this formulation works around potentially prohibitive eigendecompositions, which extends the representation's applicability to large and continuous state spaces, it assumes access to a uniform sampling prior over S. In practice, this translates into the ability to reset the agent to a uniformly random starting state in the environment, which artificially alleviates the exploration problem. As we will show in Section 4, the uniformity of that prior is crucial for the quality of the learned representation. However, such sampling is not trivial in the absence of the uniform prior privilege, since the agent has to learn to explore the state space to be able to access arbitrary states. In effect, one must handle exploration alongside representation learning in order to preserve the representation's quality. In this work, we propose a representation learning framework that reconciles a similar temporally-contrastive approach with exploration in the task-agnostic setting. In practice, the representation is trained on data collected with a uniformly random policy, πµ (random walk trajectories). Without uniform access to the state space, the collected data is concentrated around accessible starting states.
To achieve better data collection, we tie the representation learning problem to that of learning a covering strategy. Briefly, our method consists in using the available representation to learn a skill-based (hierarchical) covering policy that is in turn used to discover yet unseen parts of the state space, providing novel data to refine and expand the representation. Our approach, illustrated in Figure 1, is inspired by the cyclic option discovery framework (Machado, 2019), which motivated several related methods (Machado et al., 2017; 2018; Jinnai et al., 2020). In addition, we propose to integrate the temporal abstractions learned by the skills into the contrastive representation learning objective, to encourage temporally-extended exploration and to enforce the representation's dynamics-awareness, i.e., how representative the φ-induced Euclidean distance is of distances in the state space. We empirically show our agent's ability to progressively explore the state space and to consistently extend the domain covered by the representation in a non-uniform prior setting. We show that our representation leads to better value predictions than LAP-REP, and that it recovers the representation quality expected from a uniform prior. We also evaluate our representation in shaping rewards for goal-achieving tasks, and show that it outperforms LAP-REP, confirming both its superior ability to capture dynamics and to scale to larger environments. Finally, the skills learned in our framework also prove to be successful at difficult continuous navigation tasks with sparse rewards, where other standard skill discovery methods are limited. 2 PRELIMINARIES. 2.1 TASK-AGNOSTIC REINFORCEMENT LEARNING.
We describe a task-agnostic RL environment as a task-agnostic Markov decision process (MDP) M = (S, A, P, γ, d0), where S is the state space, A the action space, P : S × A → ∆(S) the transition dynamics defining the next-state distribution given the current state and action taken, γ ∈ [0, 1) the discount factor, and d0 the initial state distribution. A policy π : S → ∆(A) maps states s ∈ S to distributions over actions. Knowledge acquired from task-agnostic interactions with the environment (e.g., a representation or a policy) can then be leveraged for specific tasks. A task is instantiated with a reward function, R : S → R, which is combined with the task-agnostic MDP. The task objective is to find the optimal policy maximizing the expected discounted return, E_{π,d0}[Σ_t γ^t R(s_t, a_t)], starting from state s0 ∼ d0 and acting according to a_t ∼ π(·|s_t). 2.2 THE LAPLACIAN REPRESENTATION. The Laplacian representation (LAP-REP), as proposed by Wu et al. (2019), can be learned with the following contrastive objective: L_Lap(φ; D_πµ) = E_{(u,v)∼D_πµ}[‖φ(u) − φ(v)‖₂²] + β E_{u∼D_πµ, v∼D_πµ}[(φ(u)ᵀφ(v))² − ‖φ(u)‖₂² − ‖φ(v)‖₂²], (1) where β is a hyperparameter, πµ is the uniformly random policy, and D_πµ a set of trajectories from πµ (random walks). We use (u, v) ∼ D_πµ to denote the sampling of a random transition from D_πµ, and similarly u ∼ D_πµ for a random state. Wu et al. (2019) showed the competitiveness of the Laplacian representation when provided with a uniform prior over S during the collection of D_πµ. Their objective (Eq. 1) is a temporally-contrastive loss: it comprises an attractive term that forces temporally close states to have similar representations and a repulsive term that keeps the representations of temporally far states apart. Here, the repulsive term was specifically derived from the orthonormality constraint of the Laplacian eigenvectors.
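As a concrete illustration, Eq. 1 can be estimated from mini-batches of sampled transitions and independently sampled states. The sketch below is our own illustration, not the authors' code; the function name and the batched-embedding interface are assumptions:

```python
import numpy as np

def lap_rep_loss(phi_u, phi_v, phi_x, phi_y, beta=1.0):
    """Monte-Carlo estimate of the LAP-REP objective (Eq. 1).

    phi_u, phi_v : (B, d) embeddings of temporally adjacent states,
                   i.e. transitions (u, v) sampled from random walks.
    phi_x, phi_y : (B, d) embeddings of independently sampled states,
                   used in the repulsive (orthonormality) term.
    """
    # Attractive term: temporally close states get similar embeddings.
    attract = np.mean(np.sum((phi_u - phi_v) ** 2, axis=1))
    # Repulsive term derived from the orthonormality constraint:
    # (phi(x)^T phi(y))^2 - ||phi(x)||^2 - ||phi(y)||^2, in expectation.
    dots = np.sum(phi_x * phi_y, axis=1)
    repulse = np.mean(dots ** 2
                      - np.sum(phi_x ** 2, axis=1)
                      - np.sum(phi_y ** 2, axis=1))
    return attract + beta * repulse
```

In practice φ would be a neural network, and this scalar would be minimized by gradient descent on its parameters.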
2.3 THE NON-UNIFORM PRIOR SETTING. In RL, representation learning is deeply coupled to the problem of exploration. Indeed, the induced state distribution defines the representation's training distribution. However, LAP-REP (Wu et al., 2019) has been learned in the specific uniform prior setting that alleviates the exploration challenge. In this setting, D_πµ from Eq. 1 is a collection of random walks with uniformly random starting states, which provides a uniform training distribution to the representation learning objective. In the case of a non-uniform prior, the induced visitation distribution can be quite concentrated around the start-state distribution when relying solely on random walks, hence the need for an exploration strategy yielding a better covering distribution. To study the problem described above, we investigate the setting in which the environment has a fixed predefined state s0 to which it resets with probability p_r every K steps, with K of the order of the diameter of S. With a uniformly random behavior policy, this setting is equivalent to an initial state distribution that is concentrated around s0 and whose density decays exponentially away from it. We will refer to this setting as the non-uniform prior (non-µ) setting, as opposed to the uniform prior (µ) setting where the agent has access to the uniform state distribution. 3 TEMPORAL ABSTRACTIONS-AUGMENTED REPRESENTATION LEARNING. In this section, we present Temporal Abstractions-augmented Temporally-Contrastive learning (TATC), a representation learning approach in which the representation works in tandem with a skill-based covering policy for better representation learning in the non-uniform prior setting. We first propose an alternative objective to Eq. 1 that suits this setting, then describe the exploratory policy training.
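The reset protocol of Section 2.3 can be illustrated with a short simulation on a 1-D chain of states. This toy environment and all names below are our own construction, chosen only to visualize the exponentially decaying visitation density:

```python
import numpy as np

def random_walk_visitation(n_states=50, n_steps=20000, p_reset=0.5, K=50, seed=0):
    """Simulate the non-uniform prior setting on a 1-D chain of states:
    a uniformly random +/-1 walk that, every K steps, is reset to the
    fixed start state s0 = 0 with probability p_reset. Returns the
    normalized state-visitation distribution."""
    rng = np.random.default_rng(seed)
    visits = np.zeros(n_states)
    s = 0
    for t in range(1, n_steps + 1):
        # Random walk step, clipped to the chain's boundaries.
        s = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
        visits[s] += 1
        if t % K == 0 and rng.random() < p_reset:
            s = 0  # reset to the predefined start state
    return visits / visits.sum()
```

Running it shows the visitation mass piling up around s0 = 0 and decaying with distance, which is exactly the concentrated training distribution that random walks alone provide in the non-µ setting.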
Finally, we introduce an augmentation of the proposed objective based on the learned temporal abstractions, to improve exploration and enforce the representation's dynamics-awareness. 3.1 TEMPORALLY-CONTRASTIVE REPRESENTATION OBJECTIVE. As mentioned in Section 2.2, the repulsive term in LAP-REP's objective (Eq. 1) derives from the eigenvectors' orthonormality constraint. However, because the environment is expected to be progressively covered in the non-uniform prior setting, the orthonormality constraint can make online representation learning highly non-stationary.¹ For this reason, we adopt the following objective with a generic repulsive term that is more amenable to online learning: L_cont(φ; D_πµ) := E_{(u,v)∼D_πµ}[‖φ(u) − φ(v)‖₂²] + β E_{u∼D_πµ, v∼D_πµ}[exp(−‖φ(u) − φ(v)‖₂)]. (2) ¹In general, even within a given matrix's perturbation neighborhood, its eigenvectors can show a highly nonlinear sensitivity (Trefethen & Bau, 1997). 3.2 REPRESENTATION-BASED COVERING POLICY. In the non-uniform prior setting, exploration is required to provide the representation with a better training distribution. To this end, we adopt a hierarchical RL approach to leverage the exploratory efficiency of options (Sutton et al., 1999; Nachum et al., 2019b), or skills. The agent acts according to a bi-level policy (π_hi, π_low). The high-level policy π_hi : S → ∆(Ω) defines, at each state s, a distribution over a set Ω of directions (unit vectors) in the representation space (Ω = {δ | δ ∈ R^d, ‖δ‖₂ = 1}). Each direction corresponds to a fixed-length skill encoded by the low-level policy π_low : S × Ω → ∆(A). These skills are expected to travel through the representation space along the directions instructed by π_hi. In short, given a sampled direction δ ∼ π_hi(·|s), the low-level policy executes the directional skill π_low(·|s, δ) for a fixed number of steps c before a new direction is sampled.
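The TATC objective of Eq. 2 has the same attractive term as Eq. 1 but swaps in the generic exponential repulsive term. A minimal sketch, again under our own naming assumptions rather than the authors' implementation:

```python
import numpy as np

def tatc_contrastive_loss(phi_u, phi_v, phi_x, phi_y, beta=1.0):
    """Monte-Carlo estimate of Eq. 2: squared-distance attractive term
    plus a generic exp(-distance) repulsive term that drops the
    orthonormality constraint and is friendlier to online learning."""
    # Attractive term over sampled transitions (u, v).
    attract = np.mean(np.sum((phi_u - phi_v) ** 2, axis=1))
    # Repulsive term over independently sampled state pairs: it is
    # large when the pair is embedded close together and vanishes as
    # the embeddings move apart.
    dists = np.linalg.norm(phi_x - phi_y, axis=1)
    repulse = np.mean(np.exp(-dists))
    return attract + beta * repulse
```

Unlike the repulsive term of Eq. 1, this penalty is bounded in [0, 1] per pair, which is one way to see why it behaves better under a non-stationary training distribution.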
We now describe the intrinsic rewards used to train the policies π_hi and π_low. Low-level Policy. π_low is simply trained to follow the directions defined by π_hi in the representation space. For a given δ ∈ Ω ⊂ R^d, the corresponding skill π_low(·|s, δ) is trained to maximize the intrinsic reward function: r_δ(s, s′) := cos(δ, φ(s′) − φ(s)) = δᵀ(φ(s′) − φ(s)) / ‖φ(s′) − φ(s)‖, (3) where (s, s′) is an observed state transition and φ the representation being learned. We use the cosine similarity as a way to encourage learning diverse directional skills. Indeed, skill co-specialization is avoided by rewarding the agent for the steps induced along the instructed direction δ regardless of their magnitudes. High-level Policy. The high-level policy is expected to guide the covering strategy. It should do so by sampling the skills with the most promising directions in terms of exploration: affording new discoveries while avoiding spending more time than needed in previously explored areas. For this purpose, we design a reward function defined over a sequence of L consecutive skills. Let {s_k^hi}_{k=1}^L be the sequence of their initial states and δ_k ∼ π_hi(·|s_k^hi) their respective sampled directions. Since φ is trained to capture the dynamics, the travelled distance in φ's space is a good proxy for how far the choices made by π_hi eventually brought the agent in the environment. Therefore, for a given high-level trajectory, τ_hi = (s_1^hi, s_2^hi, ..., s_L^hi, s_f^hi), with s_f^hi the final state reached by the last skill, the high-level policy is trained to maximize the following quantities: ∀k ∈ {1, ..., L}, R_hi(s_k^hi, δ_k) := ‖φ(s_1^hi) − φ(s_f^hi)‖₂, (4) where δ_k ∼ π_hi(·|s_k^hi) is the direction sampled at s_k^hi.
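Given the embeddings, the two intrinsic rewards (Eqs. 3 and 4) reduce to a few lines each. The sketch below is our own illustration of those formulas, with hypothetical function names:

```python
import numpy as np

def low_level_reward(delta, phi_s, phi_s_next, eps=1e-8):
    """Eq. 3: cosine similarity between the instructed unit direction
    delta and the step actually taken in representation space."""
    step = np.asarray(phi_s_next) - np.asarray(phi_s)
    # eps guards against a zero-length step in representation space.
    return float(np.dot(delta, step) / (np.linalg.norm(step) + eps))

def high_level_return(phi_first, phi_final):
    """Eq. 4: every skill in the L-skill sequence receives the same
    return, the distance travelled in representation space over the
    whole high-level trajectory."""
    return float(np.linalg.norm(np.asarray(phi_first) - np.asarray(phi_final)))
```

Note that Eq. 3 rewards direction only, not step magnitude, while Eq. 4 assigns one trajectory-level quantity to every skill in the sequence, rewarding their collaboration equally.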
From the policy optimization perspective, each of these quantities plays the role of the return accumulated along the sampled high-level trajectory, and not just a single (high-level) step reward. This term treats reaching s_f^hi as the result of a sequential collaboration of L skills, rewarding them equally: it values how far this sequence of skills has eventually brought the agent. These policy training choices are closely related to how the representation is trained. Indeed, the exploratory behavior emerges from the interaction between the policy and the representation during training. In the following section, we describe how the representation benefits from the temporal abstractions learned by the covering policy (π_hi, π_low). | The paper proposes an exploration strategy for adapting Laplacian RL to settings without the possibility to uniformly sample from the state space. In particular, the work builds on the Wu et al. discussion of estimating a spectral decomposition of the state space. However, the estimation of the Laplacian's eigenvectors is based on uniform sampling of states and possible transitions. To address this restriction, the authors propose a hierarchical exploration scheme, which follows directions on a higher level and optimizes actions on the lower level. Based on these levels, the authors propose to employ intrinsic rewards to improve exploration and achieve a better cover of the state space. Experiments are conducted on grid graphs and continuous navigation tasks and demonstrate that the new sampling policy performs better than sampling from random walks. More interestingly, the authors compare to two other task-agnostic skill discovery methods. | SP:dc679a554726072ea8d759e45233b0b096892fef
Temporal abstractions-augmented temporally contrastive learning: an alternative to the Laplacian in RL | 1 INTRODUCTION. Representation learning has been at the core of many recent machine learning advances (cf. Bengio et al., 2013). With the advent of deep reinforcement learning (Mnih et al., 2015), representation learning has also become one of the main topics of interest in reinforcement learning (RL). For example, in the goal-conditioned hierarchical setting (Vezhnevets et al., 2017; Nachum et al., 2019a), one learns a representation that maps observations to an abstract space, the representation space, in which the higher-level policy defines the desired behavior of the lower-level policy. Distance in the representation space can then be used to reward and guide the lower-level policy towards specific goal states. Moreover, environments with rich observations and complex dynamics (e.g., Bellemare et al., 2020) have motivated recent works on learning representations as controllable or contingent features (Bengio et al., 2017; Choi et al., 2019), on top of which one can potentially learn latent models for planning (Hafner et al., 2019b; Nasiriany et al., 2019; Schrittwieser et al., 2020) and control (Watter et al., 2015; Banijamali et al., 2018; Hafner et al., 2019a). In this work, we are interested in the reward-agnostic setting in which an RL agent first interacts with the environment to build a representation, φ, of the state space, S, without relying on any task-specific reward signal. This representation can later be used to solve a task posed in the environment in the form of a reward function. In this setting, the environment dynamics are the only informative interaction channel available to the agent.
This has naturally motivated graph Laplacian-based methods to address the task-agnostic phase, where the graph vertices correspond to the states and its edges to the transition probabilities. The Laplacian's eigenvectors can be leveraged as a holistic state representation, termed the Laplacian representation, which captures the environment's dynamics structure and geometry (Mahadevan, 2005; Mahadevan & Maggioni, 2007). Wu et al. (2019) recently proposed an efficient approximation of the Laplacian representation (LAP-REP) by framing the graph drawing objective as a temporally-contrastive loss (see Section 2.2). While this formulation works around potentially prohibitive eigendecompositions, which extends the representation's applicability to large and continuous state spaces, it assumes access to a uniform sampling prior over S. In practice, this translates into the ability to reset the agent to a uniformly random starting state in the environment, which artificially alleviates the exploration problem. As we will show in Section 4, the uniformity of that prior is crucial for the quality of the learned representation. However, such sampling is not trivial in the absence of the uniform prior privilege, since the agent has to learn to explore the state space to be able to access arbitrary states. In effect, one must handle exploration alongside representation learning in order to preserve the representation's quality. In this work, we propose a representation learning framework that reconciles a similar temporally-contrastive approach with exploration in the task-agnostic setting. In practice, the representation is trained on data collected with a uniformly random policy, πµ (random walk trajectories). Without uniform access to the state space, the collected data is concentrated around accessible starting states.
To achieve better data collection, we tie the representation learning problem to that of learning a covering strategy. Briefly, our method consists in using the available representation to learn a skill-based (hierarchical) covering policy that is in turn used to discover yet unseen parts of the state space, providing novel data to refine and expand the representation. Our approach, illustrated in Figure 1, is inspired by the cyclic option discovery framework (Machado, 2019), which motivated several related methods (Machado et al., 2017; 2018; Jinnai et al., 2020). In addition, we propose to integrate the temporal abstractions learned by the skills into the contrastive representation learning objective, to encourage temporally-extended exploration and to enforce the representation's dynamics-awareness, i.e., how representative the φ-induced Euclidean distance is of distances in the state space. We empirically show our agent's ability to progressively explore the state space and to consistently extend the domain covered by the representation in a non-uniform prior setting. We show that our representation leads to better value predictions than LAP-REP, and that it recovers the representation quality expected from a uniform prior. We also evaluate our representation in shaping rewards for goal-achieving tasks, and show that it outperforms LAP-REP, confirming both its superior ability to capture dynamics and to scale to larger environments. Finally, the skills learned in our framework also prove to be successful at difficult continuous navigation tasks with sparse rewards, where other standard skill discovery methods are limited. 2 PRELIMINARIES. 2.1 TASK-AGNOSTIC REINFORCEMENT LEARNING.
We describe a task-agnostic RL environment as a task-agnostic Markov decision process (MDP) M = (S, A, P, γ, d0), where S is the state space, A the action space, P : S × A → ∆(S) the transition dynamics defining the next-state distribution given the current state and action taken, γ ∈ [0, 1) the discount factor, and d0 the initial state distribution. A policy π : S → ∆(A) maps states s ∈ S to distributions over actions. Knowledge acquired from task-agnostic interactions with the environment (e.g., a representation or a policy) can then be leveraged for specific tasks. A task is instantiated with a reward function, R : S → R, which is combined with the task-agnostic MDP. The task objective is to find the optimal policy maximizing the expected discounted return, E_{π,d0}[Σ_t γ^t R(s_t, a_t)], starting from state s0 ∼ d0 and acting according to a_t ∼ π(·|s_t). 2.2 THE LAPLACIAN REPRESENTATION. The Laplacian representation (LAP-REP), as proposed by Wu et al. (2019), can be learned with the following contrastive objective: L_Lap(φ; D_πµ) = E_{(u,v)∼D_πµ}[‖φ(u) − φ(v)‖₂²] + β E_{u∼D_πµ, v∼D_πµ}[(φ(u)ᵀφ(v))² − ‖φ(u)‖₂² − ‖φ(v)‖₂²], (1) where β is a hyperparameter, πµ is the uniformly random policy, and D_πµ a set of trajectories from πµ (random walks). We use (u, v) ∼ D_πµ to denote the sampling of a random transition from D_πµ, and similarly u ∼ D_πµ for a random state. Wu et al. (2019) showed the competitiveness of the Laplacian representation when provided with a uniform prior over S during the collection of D_πµ. Their objective (Eq. 1) is a temporally-contrastive loss: it comprises an attractive term that forces temporally close states to have similar representations and a repulsive term that keeps the representations of temporally far states apart. Here, the repulsive term was specifically derived from the orthonormality constraint of the Laplacian eigenvectors.
2.3 THE NON-UNIFORM PRIOR SETTING. In RL, representation learning is deeply coupled to the problem of exploration. Indeed, the induced state distribution defines the representation's training distribution. However, LAP-REP (Wu et al., 2019) has been learned in the specific uniform prior setting that alleviates the exploration challenge. In this setting, D_πµ from Eq. 1 is a collection of random walks with uniformly random starting states, which provides a uniform training distribution to the representation learning objective. In the case of a non-uniform prior, the induced visitation distribution can be quite concentrated around the start-state distribution when relying solely on random walks, hence the need for an exploration strategy yielding a better covering distribution. To study the problem described above, we investigate the setting in which the environment has a fixed predefined state s0 to which it resets with probability p_r every K steps, with K of the order of the diameter of S. With a uniformly random behavior policy, this setting is equivalent to an initial state distribution that is concentrated around s0 and whose density decays exponentially away from it. We will refer to this setting as the non-uniform prior (non-µ) setting, as opposed to the uniform prior (µ) setting where the agent has access to the uniform state distribution. 3 TEMPORAL ABSTRACTIONS-AUGMENTED REPRESENTATION LEARNING. In this section, we present Temporal Abstractions-augmented Temporally-Contrastive learning (TATC), a representation learning approach in which the representation works in tandem with a skill-based covering policy for better representation learning in the non-uniform prior setting. We first propose an alternative objective to Eq. 1 that suits this setting, then describe the exploratory policy training.
Finally, we introduce an augmentation of the proposed objective based on the learned temporal abstractions, to improve exploration and enforce the representation's dynamics-awareness. 3.1 TEMPORALLY-CONTRASTIVE REPRESENTATION OBJECTIVE. As mentioned in Section 2.2, the repulsive term in LAP-REP's objective (Eq. 1) derives from the eigenvectors' orthonormality constraint. However, because the environment is expected to be progressively covered in the non-uniform prior setting, the orthonormality constraint can make online representation learning highly non-stationary.¹ For this reason, we adopt the following objective with a generic repulsive term that is more amenable to online learning: L_cont(φ; D_πµ) := E_{(u,v)∼D_πµ}[‖φ(u) − φ(v)‖₂²] + β E_{u∼D_πµ, v∼D_πµ}[exp(−‖φ(u) − φ(v)‖₂)]. (2) ¹In general, even within a given matrix's perturbation neighborhood, its eigenvectors can show a highly nonlinear sensitivity (Trefethen & Bau, 1997). 3.2 REPRESENTATION-BASED COVERING POLICY. In the non-uniform prior setting, exploration is required to provide the representation with a better training distribution. To this end, we adopt a hierarchical RL approach to leverage the exploratory efficiency of options (Sutton et al., 1999; Nachum et al., 2019b), or skills. The agent acts according to a bi-level policy (π_hi, π_low). The high-level policy π_hi : S → ∆(Ω) defines, at each state s, a distribution over a set Ω of directions (unit vectors) in the representation space (Ω = {δ | δ ∈ R^d, ‖δ‖₂ = 1}). Each direction corresponds to a fixed-length skill encoded by the low-level policy π_low : S × Ω → ∆(A). These skills are expected to travel through the representation space along the directions instructed by π_hi. In short, given a sampled direction δ ∼ π_hi(·|s), the low-level policy executes the directional skill π_low(·|s, δ) for a fixed number of steps c before a new direction is sampled.
We now describe the intrinsic rewards used to train the policies π_hi and π_low. Low-level Policy. π_low is simply trained to follow the directions defined by π_hi in the representation space. For a given δ ∈ Ω ⊂ R^d, the corresponding skill π_low(·|s, δ) is trained to maximize the intrinsic reward function: r_δ(s, s′) := cos(δ, φ(s′) − φ(s)) = δᵀ(φ(s′) − φ(s)) / ‖φ(s′) − φ(s)‖, (3) where (s, s′) is an observed state transition and φ the representation being learned. We use the cosine similarity as a way to encourage learning diverse directional skills. Indeed, skill co-specialization is avoided by rewarding the agent for the steps induced along the instructed direction δ regardless of their magnitudes. High-level Policy. The high-level policy is expected to guide the covering strategy. It should do so by sampling the skills with the most promising directions in terms of exploration: affording new discoveries while avoiding spending more time than needed in previously explored areas. For this purpose, we design a reward function defined over a sequence of L consecutive skills. Let {s_k^hi}_{k=1}^L be the sequence of their initial states and δ_k ∼ π_hi(·|s_k^hi) their respective sampled directions. Since φ is trained to capture the dynamics, the travelled distance in φ's space is a good proxy for how far the choices made by π_hi eventually brought the agent in the environment. Therefore, for a given high-level trajectory, τ_hi = (s_1^hi, s_2^hi, ..., s_L^hi, s_f^hi), with s_f^hi the final state reached by the last skill, the high-level policy is trained to maximize the following quantities: ∀k ∈ {1, ..., L}, R_hi(s_k^hi, δ_k) := ‖φ(s_1^hi) − φ(s_f^hi)‖₂, (4) where δ_k ∼ π_hi(·|s_k^hi) is the direction sampled at s_k^hi.
From the policy optimization perspective, each of these quantities plays the role of the return accumulated along the sampled high-level trajectory, and not just a single (high-level) step reward. This term treats reaching s_f^hi as the result of a sequential collaboration of L skills, rewarding them equally: it values how far this sequence of skills has eventually brought the agent. These policy training choices are closely related to how the representation is trained. Indeed, the exploratory behavior emerges from the interaction between the policy and the representation during training. In the following section, we describe how the representation benefits from the temporal abstractions learned by the covering policy (π_hi, π_low). | This paper introduces a representation learning framework for reward-agnostic RL, which is based on temporal contrastive learning. The proposed method (TATC) is motivated by the Laplacian representation approximation approach (Wu et al., 2019) and overcomes the limitation of requiring a uniform prior. The experiments demonstrate the effectiveness of TATC. | SP:dc679a554726072ea8d759e45233b0b096892fef
Temporal abstractions-augmented temporally contrastive learning: an alternative to the Laplacian in RL | 1 INTRODUCTION. Representation learning has been at the core of many recent machine learning advances (cf. Bengio et al., 2013). With the advent of deep reinforcement learning (Mnih et al., 2015), representation learning has also become one of the main topics of interest in reinforcement learning (RL). For example, in the goal-conditioned hierarchical setting (Vezhnevets et al., 2017; Nachum et al., 2019a), one learns a representation that maps observations to an abstract space, the representation space, in which the higher-level policy defines the desired behavior of the lower-level policy. Distance in the representation space can then be used to reward and guide the lower-level policy towards specific goal states. Moreover, environments with rich observations and complex dynamics (e.g., Bellemare et al., 2020) have motivated recent works on learning representations as controllable or contingent features (Bengio et al., 2017; Choi et al., 2019), on top of which one can potentially learn latent models for planning (Hafner et al., 2019b; Nasiriany et al., 2019; Schrittwieser et al., 2020) and control (Watter et al., 2015; Banijamali et al., 2018; Hafner et al., 2019a). In this work, we are interested in the reward-agnostic setting in which an RL agent first interacts with the environment to build a representation, φ, of the state space, S, without relying on any task-specific reward signal. This representation can later be used to solve a task posed in the environment in the form of a reward function. In this setting, the environment dynamics are the only informative interaction channel available to the agent.
This has naturally motivated graph Laplacian-based methods to address the task-agnostic phase, where the graph vertices correspond to the states and its edges to the transition probabilities. The Laplacian's eigenvectors can be leveraged as a holistic state representation, termed the Laplacian representation, which captures the environment's dynamics structure and geometry (Mahadevan, 2005; Mahadevan & Maggioni, 2007). Wu et al. (2019) recently proposed an efficient approximation of the Laplacian representation (LAP-REP) by framing the graph drawing objective as a temporally-contrastive loss (see Section 2.2). While this formulation works around potentially prohibitive eigendecompositions, which extends the representation's applicability to large and continuous state spaces, it assumes access to a uniform sampling prior over S. In practice, this translates into the ability to reset the agent to a uniformly random starting state in the environment, which artificially alleviates the exploration problem. As we will show in Section 4, the uniformity of that prior is crucial for the quality of the learned representation. However, such sampling is not trivial in the absence of the uniform prior privilege, since the agent has to learn to explore the state space to be able to access arbitrary states. In effect, one must handle exploration alongside representation learning in order to preserve the representation's quality. In this work, we propose a representation learning framework that reconciles a similar temporally-contrastive approach with exploration in the task-agnostic setting. In practice, the representation is trained on data collected with a uniformly random policy, πµ (random walk trajectories). Without uniform access to the state space, the collected data is concentrated around accessible starting states.
To achieve better data collection, we tie the representation learning problem to that of learning a covering strategy. Briefly, our method consists in using the available representation to learn a skill-based (hierarchical) covering policy that is in turn used to discover yet unseen parts of the state space, providing novel data to refine and expand the representation. Our approach, illustrated in Figure 1, is inspired by the cyclic option discovery framework (Machado, 2019), which motivated several related methods (Machado et al., 2017; 2018; Jinnai et al., 2020). In addition, we propose to integrate the temporal abstractions learned by the skills into the contrastive representation learning objective, to encourage temporally-extended exploration and to enforce the representation's dynamics-awareness, i.e., how representative the φ-induced Euclidean distance is of distances in the state space. We empirically show our agent's ability to progressively explore the state space and to consistently extend the domain covered by the representation in a non-uniform prior setting. We show that our representation leads to better value predictions than LAP-REP, and that it recovers the representation quality expected from a uniform prior. We also evaluate our representation in shaping rewards for goal-achieving tasks, and show that it outperforms LAP-REP, confirming both its superior ability to capture dynamics and to scale to larger environments. Finally, the skills learned in our framework also prove to be successful at difficult continuous navigation tasks with sparse rewards, where other standard skill discovery methods are limited. 2 PRELIMINARIES. 2.1 TASK-AGNOSTIC REINFORCEMENT LEARNING.
We describe a task-agnostic RL environment as a task-agnostic Markov decision process (MDP) $M = (S, A, P, \gamma, d_0)$, where $S$ is the state space, $A$ the action space, $P : S \times A \to \Delta(S)$ the transition dynamics defining the next-state distribution given the current state and action taken, $\gamma \in [0, 1)$ the discount factor, and $d_0$ the initial state distribution. A policy $\pi : S \to \Delta(A)$ maps states $s \in S$ to distributions over actions. Knowledge acquired from task-agnostic interactions with the environment (e.g., a representation or a policy) can then be leveraged for specific tasks. A task is instantiated with a reward function, $R : S \to \mathbb{R}$, which is combined with the task-agnostic MDP. The task objective is to find the optimal policy maximizing the expected discounted return, $\mathbb{E}_{\pi, d_0}\left[\sum_t \gamma^t R(s_t, a_t)\right]$, starting from state $s_0 \sim d_0$ and acting according to $a_t \sim \pi(\cdot \mid s_t)$. 2.2 THE LAPLACIAN REPRESENTATION. The Laplacian representation (LAP-REP), as proposed by Wu et al. (2019), can be learned with the following contrastive objective:
$$\mathcal{L}_{\mathrm{Lap}}(\phi; \mathcal{D}_{\pi_\mu}) = \mathbb{E}_{(u,v) \sim \mathcal{D}_{\pi_\mu}}\big[\|\phi(u) - \phi(v)\|_2^2\big] + \beta\, \mathbb{E}_{u \sim \mathcal{D}_{\pi_\mu},\, v \sim \mathcal{D}_{\pi_\mu}}\big[(\phi(u)^\top \phi(v))^2 - \|\phi(u)\|_2^2 - \|\phi(v)\|_2^2\big], \qquad (1)$$
where $\beta$ is a hyperparameter, $\pi_\mu$ is the uniformly random policy, and $\mathcal{D}_{\pi_\mu}$ a set of trajectories from $\pi_\mu$ (random walks). We use $(u, v) \sim \mathcal{D}_{\pi_\mu}$ to denote the sampling of a random transition from $\mathcal{D}_{\pi_\mu}$, and similarly $u \sim \mathcal{D}_{\pi_\mu}$ for a random state. Wu et al. (2019) showed the competitiveness of the Laplacian representation when provided with a uniform prior over $S$ during the collection of $\mathcal{D}_{\pi_\mu}$. Their objective (Eq. 1) is a temporally-contrastive loss: it is comprised of an attractive term that forces temporally close states to have similar representations and a repulsive term that keeps temporally far states' representations far apart. Here, the repulsive term was specifically derived from the orthonormality constraint of the Laplacian eigenvectors.
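A batched NumPy sketch of the LAP-REP objective in Eq. 1 may clarify the two terms; array names and shapes are our assumptions, not the authors' code:

```python
import numpy as np

def laplacian_rep_loss(phi_u, phi_v, phi_a, phi_b, beta=1.0):
    """Temporally-contrastive LAP-REP objective (Eq. 1), batched.

    phi_u, phi_v : representations of temporally adjacent states (u, v) ~ D, shape [B, d]
    phi_a, phi_b : representations of independently sampled random states,   shape [B, d]
    """
    # Attractive term: pull temporally close states together.
    attract = np.mean(np.sum((phi_u - phi_v) ** 2, axis=1))
    # Repulsive term derived from the eigenvectors' orthonormality constraint.
    dots = np.sum(phi_a * phi_b, axis=1)               # phi(u)^T phi(v)
    repulse = np.mean(dots ** 2
                      - np.sum(phi_a ** 2, axis=1)
                      - np.sum(phi_b ** 2, axis=1))
    return attract + beta * repulse
```

Minimizing the attractive term alone would collapse $\phi$ to a constant; the repulsive term prevents this by penalizing correlated, small-norm embeddings of independent states.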
2.3 THE NON-UNIFORM PRIOR SETTING. In RL, representation learning is deeply coupled to the problem of exploration. Indeed, the induced state distribution defines the representation's training distribution. However, LAP-REP (Wu et al., 2019) has been learned in the specific uniform prior setting that alleviates the exploration challenge. In this setting, $\mathcal{D}_{\pi_\mu}$, from Eq. 1, is a collection of random walks with uniformly random starting states, which provides a uniform training distribution to the representation learning objective. In the case of a non-uniform prior, the induced visitation distribution can be quite concentrated around the start state distribution when solely relying on random walks, hence the need for an exploration strategy for a better covering distribution. To study the problem described above, we investigate the setting in which the environment has a fixed predefined state $s_0$ to which it resets with a probability $p_r$ every $K$ steps, with $K$ on the order of the diameter of $S$. With a uniformly random behavior policy, this setting is equivalent to an initial state distribution that is concentrated around $s_0$ and whose density decays exponentially away from it. We will refer to this setting as the non-uniform prior (non-µ) setting, as opposed to the uniform prior (µ) setting where the agent has access to the uniform state distribution. 3 TEMPORAL ABSTRACTIONS-AUGMENTED REPRESENTATION LEARNING. In this section, we present Temporal Abstractions-augmented Temporally-Contrastive learning (TATC), a representation learning approach in which the representation works in tandem with a skill-based covering policy for better representation learning in the non-uniform prior setting. We first propose an alternative objective to Eq. 1 that suits this setting, then describe the exploratory policy training.
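The concentration effect of a reset-based prior can be simulated in a few lines. The sketch below uses a per-step reset probability on a 1-D chain rather than the paper's every-$K$-steps variant; all parameter values are illustrative assumptions:

```python
import random

def visitation_counts(n_states=50, steps=20000, p_reset=0.01, s0=0, seed=0):
    """Random walk on a 1-D chain that resets to s0 with prob p_reset at
    each step. Between resets the walk typically strays only ~sqrt(1/p_reset)
    states from s0, so the visitation distribution concentrates near s0."""
    rng = random.Random(seed)
    counts = [0] * n_states
    s = s0
    for _ in range(steps):
        if rng.random() < p_reset:
            s = s0                                        # reset to the start state
        else:
            s = max(0, min(n_states - 1, s + rng.choice((-1, 1))))
        counts[s] += 1
    return counts
```

States far from `s0` are visited exponentially rarely, which is precisely the skewed training distribution the covering policy is meant to correct.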
Finally, we introduce an augmentation of the proposed objective based on the learned temporal abstractions to improve exploration and enforce the representation's dynamics-awareness. 3.1 TEMPORALLY-CONTRASTIVE REPRESENTATION OBJECTIVE. As mentioned in Section 2.2, the repulsive term in LAP-REP's objective (Eq. 1) derives from the eigenvectors' orthonormality constraint. However, because the environment is expected to be progressively covered in the non-uniform prior setting, the orthonormality constraint can make online representation learning highly non-stationary.¹ For this reason, we adopt the following objective with a generic repulsive term that is more amenable to online learning:
$$\mathcal{L}_{\mathrm{cont}}(\phi; \mathcal{D}_{\pi_\mu}) := \mathbb{E}_{(u,v) \sim \mathcal{D}_{\pi_\mu}}\big[\|\phi(u) - \phi(v)\|_2^2\big] + \beta\, \mathbb{E}_{u \sim \mathcal{D}_{\pi_\mu},\, v \sim \mathcal{D}_{\pi_\mu}}\big[\exp(-\|\phi(u) - \phi(v)\|_2)\big]. \qquad (2)$$
¹In general, even within a given matrix's perturbation neighborhood, its eigenvectors can show a highly nonlinear sensitivity (Trefethen & Bau, 1997). 3.2 REPRESENTATION-BASED COVERING POLICY. In the non-uniform prior setting, exploration is required to provide the representation with a better training distribution. To this purpose, we adopt a hierarchical RL approach to leverage the exploratory efficiency of options (Sutton et al., 1999; Nachum et al., 2019b), or skills. The agent acts according to a bi-level policy $(\pi_{hi}, \pi_{low})$. The high-level policy $\pi_{hi} : S \to \Delta(\Omega)$ defines, at each state $s$, a distribution over a set $\Omega$ of directions (unit vectors) in the representation space ($\Omega = \{\delta \mid \delta \in \mathbb{R}^d, \|\delta\|_2 = 1\}$). Each direction corresponds to a fixed-length skill encoded by the low-level policy $\pi_{low} : S \times \Omega \to \Delta(A)$. These skills are expected to travel in the representation space along the directions instructed by $\pi_{hi}$. In short, given a sampled direction $\delta \sim \pi_{hi}(\cdot \mid s)$, $\delta \in \Omega$, the low-level policy executes the directional skill $\pi_{low}(\cdot \mid s, \delta)$ for a fixed number of steps $c$ before a new direction is sampled.
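The alternative objective of Eq. 2 above differs from Eq. 1 only in its repulsive term; a batched NumPy sketch (shapes and names are our assumptions) makes the swap explicit:

```python
import numpy as np

def contrastive_loss(phi_u, phi_v, phi_a, phi_b, beta=1.0):
    """Objective of Eq. 2: same attractive term as LAP-REP, but a generic
    exponential repulsive term that avoids the orthonormality constraint.

    phi_u, phi_v : temporally adjacent state representations, shape [B, d]
    phi_a, phi_b : independently sampled state representations, shape [B, d]
    """
    # Attractive term: temporally close states get similar representations.
    attract = np.mean(np.sum((phi_u - phi_v) ** 2, axis=1))
    # Repulsive term: decays as random pairs move apart in embedding space.
    dist = np.linalg.norm(phi_a - phi_b, axis=1)
    repulse = np.mean(np.exp(-dist))
    return attract + beta * repulse
```

Because `exp(-dist)` is bounded in $(0, 1]$ and saturates for well-separated pairs, its gradients stay stable as the covered domain grows, which is the point of replacing the orthonormality-derived term.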
Now, we describe the intrinsic rewards used to train the policies $\pi_{hi}$ and $\pi_{low}$. Low-level Policy. $\pi_{low}$ is simply trained to follow directions defined by $\pi_{hi}$ in the representation space. For a given $\delta \in \Omega \subset \mathbb{R}^d$, the corresponding skill $\pi_{low}(\cdot \mid s, \delta)$ is trained to maximize the intrinsic reward function:
$$r_\delta(s, s') := \cos\big(\delta, \phi(s') - \phi(s)\big) = \frac{\delta^\top (\phi(s') - \phi(s))}{\|\phi(s') - \phi(s)\|}, \qquad (3)$$
where $(s, s')$ is an observed state transition, and $\phi$ the representation being learned. We use the cosine similarity as a way to encourage learning diverse directional skills. Indeed, skill co-specialization is avoided by rewarding the agent for the steps induced along the instructed direction $\delta$ regardless of their magnitudes. High-level Policy. The high-level policy is expected to guide the covering strategy. It should do so by sampling the skills of the most promising directions in terms of exploration: affording new discoveries while avoiding spending more time than needed in previously explored areas. For this purpose, we design a reward function defined over a sequence of $L$ consecutive skills. Let $\{s^{hi}_k\}_{k=1}^{L}$ be the sequence of their initial states and $\delta_k \sim \pi_{hi}(\cdot \mid s^{hi}_k)$ their respective sampled directions. Since $\phi$ is trained to capture the dynamics, the travelled distance in $\phi$'s space is a good proxy of how far the choices made by $\pi_{hi}$ eventually brought the agent in the environment. Therefore, for a given high-level trajectory, $\tau^{hi} = (s^{hi}_1, s^{hi}_2, \ldots, s^{hi}_L, s^{hi}_f)$, with $s^{hi}_f$ the final state reached by the last skill, the high-level policy is trained to maximize the following quantities:
$$\forall k \in \{1, \ldots, L\}, \quad R^{hi}(s^{hi}_k, \delta_k) := \|\phi(s^{hi}_1) - \phi(s^{hi}_f)\|_2, \qquad (4)$$
where $\delta_k \sim \pi_{hi}(\cdot \mid s^{hi}_k)$ is the direction sampled at $s^{hi}_k$.
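The two intrinsic rewards (Eqs. 3 and 4) can be sketched directly in NumPy; the zero-length-step guard is our own addition, not part of the paper's formulation:

```python
import numpy as np

def low_level_reward(delta, phi_s, phi_next):
    """Eq. 3: cosine similarity between the instructed unit direction delta
    and the step taken in representation space. Magnitude-invariant, which
    discourages skill co-specialization."""
    step = phi_next - phi_s
    norm = np.linalg.norm(step)
    if norm < 1e-8:            # guard against zero-length steps (our assumption)
        return 0.0
    return float(delta @ step / norm)

def high_level_reward(phi_first, phi_final):
    """Eq. 4: every skill in the L-skill sequence is credited equally with
    the total distance travelled in representation space."""
    return float(np.linalg.norm(phi_first - phi_final))
```

Note that `high_level_reward` depends only on the endpoints of the whole high-level trajectory, so each of the $L$ skills receives the same value, as in Eq. 4.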
From the policy optimization perspective, each of these quantities plays the role of the return accumulated along the sampled high-level trajectory, and not just a single (high-level) step reward. This term views reaching $s^{hi}_f$ as the result of a sequential collaboration of $L$ skills, rewarding them equally. It values how far this sequence of skills has eventually brought the agent. These policy training choices are closely related to how the representation is trained. Indeed, the exploratory behavior emerges from the interaction between the policy and the representation while training. In the following section, we describe how the representation benefits from the temporal abstractions learned by the covering policy $(\pi_{hi}, \pi_{low})$. | This work focuses on the problem of uniformity in representation learning, especially the Laplacian representation. To address the issue, the authors propose the TATC method, which leverages the skill training that representations allow, and uses the learned skills to better cover the state space, i.e., learn a covering policy. Finally, the proposed method is evaluated on (discrete) gridworld and (continuous) navigation environments, with results and analysis demonstrating its effectiveness. | SP:dc679a554726072ea8d759e45233b0b096892fef |
Non-Linear Operator Approximations for Initial Value Problems | 1 INTRODUCTION. Predicting future states from current conditions is a fundamental problem in machine learning. Such problems fall under the umbrella of a common term, "Initial Value Problems" (IVPs). The basic structure of an IVP involves a first-order time-evolution along with non-linear operators. The class of IVPs spans the domains of physics (modeling gravitational waves (Lovelace, 2021)), neuroscience (the Hodgkin-Huxley model (Zhang et al., 2020)), engineering (fluid dynamics (Wendt, 2008)), water waves (tsunami (Elbanna et al., 2021)), and mean-field games (Ruthotto et al., 2020), to list just a few. Within the current pandemic context, application areas like epidemiology (the Kermack–McKendrick model (Kermack et al., 1991; Diekmann et al., 2021)) are of tremendous interest. Neural Operators. The use of deep learning to solve IVP-like prediction problems has been explored within the framework of convolutional neural networks (CNNs) (Bhatnagar et al., 2019; Guo et al., 2016), with time-evolution handled by employing multiple layers (Khoo et al., 2020). Multi-layered deep networks with CNNs are suitable for problems with a large number of training samples. Moreover, because of their image-regression-like structure, such models are restricted to the specifications of the input size. Another research direction aims at solving and modeling the partial differential equation (PDE) versions of the IVPs for a given instance. Kochkov et al. (2021) model the IVP solution as NNs for modeling turbulent flows. Along the same lines, we have physics-informed neural networks (PINNs) (Raissi et al., 2019; Wang et al., 2021b) that utilize the PDE structure for defining the loss functions.
Such models are not applicable within the context of a completely data-driven scenario, or for setups where the exact PDE structure is not known, for example, modeling the climate, an epidemic, or unknown physical and chemical phenomena. Finally, we have the works on Neural Operators that are completely data-driven and input-resolution-independent schemes (Li et al., 2020b;c;a; Gupta et al., 2021; Bhattacharya et al., 2020; Patel et al., 2021). Most of these approaches try to work efficiently with the integral kernel operators, for example, Graph Nyström sampling in (Li et al., 2020b), convolution approximation in (Li et al., 2020a), and multiwavelet compression in (Gupta et al., 2021). Apart from solving non-homogeneous linear differential equations, the PDE operators are mostly non-linear. To tackle the non-linear behavior, these prior works use a multi-cell architecture with non-linearity (for example, ReLU). To work with IVP-like problems and be data-efficient, we aim to adopt explicitly the non-linear operator (exponential) that appears in the IVP solutions. Exponential Operators. The exponential of a linear transformation has been a subject of research for the last 150 years (Laguerre, 1898). In the simplest form, the exponential operator appears as the solution of $\frac{dy}{dt} = ay$, $y(0) = y_0$, namely $y(t) = e^{at} y_0$ (for more general examples, see Table 1). With applications ranging from control systems theory (converting continuous to discrete systems) to solving partial differential equations (Cox & Matthews, 2002; Kassam & Trefethen, 2005), the exponential function of operators is a subject of active research. In deep learning, the exponential function to model non-linearity is used in (Andoni et al., 2014). Recently, exponential operators have also been explored in the field of computer vision for generative flows in (Hoogeboom et al., 2020).
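The simplest scalar case can be checked numerically: the closed-form exponential solution against a forward-Euler integration of the same IVP (an illustrative comparison; all names and the step count are our choices):

```python
import math

def exact(a, y0, t):
    """Closed-form IVP solution y(t) = e^{a t} y0 for dy/dt = a y."""
    return math.exp(a * t) * y0

def euler(a, y0, t, n=100000):
    """Forward-Euler integration of dy/dt = a y with n steps, for comparison:
    y_{k+1} = y_k + dt * a * y_k, i.e. (1 + a*dt)^n -> e^{a t} as n grows."""
    y, dt = y0, t / n
    for _ in range(n):
        y += dt * a * y
    return y
```

The Euler iterate $(1 + a\,\Delta t)^n$ converges to $e^{at}$ with first-order error in $\Delta t$, which is one reason closed-form exponential operators are attractive when they are available.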
Padé Approximation. Although one approach to implementing an exponential operator could be a Taylor series representation, this operator function is prone to errors (Abramowitz & Stegun, 1965). Scale-and-squaring (SSQ) methods are commonly suggested approaches to deal with these errors (Lawson, 1967). In addition to SSQ, the Padé approximation, which represents an analytic function as the ratio of polynomials, achieves state-of-the-art accuracy in computing exponential operators (Fasi & Higham, 2019). Industry-standard numerical toolboxes (for example, MATLAB, SciPy) use the Padé-approximation-based approach to compute the matrix exponential expm (Al-Mohy & Higham, 2009). The matrix exponential via the Padé representation requires dense matrix computations (for example, inverses and higher-order polynomials). Such operations are, in general, not numerically feasible for inputs of large size. However, commonly used operators like convolution (possibly multi-layered) have parameters that are fixed beforehand and are far fewer than the input dimension. A suitable approach, therefore, is a neural-architecture-based Padé approximation. Our strategy, in this work, is to explicitly embed the exponential operators in the neural operator architecture for dealing with IVP-like datasets. The exponential operators are non-linear, and therefore this removes the requirement of having multi-cell linear integral operator layers. With sufficient data in hand, the proposed approach may work similarly to the existing neural operators with a large number of training parameters. However, this is seldom a feasible scenario for expensive real-world experiments, or for ongoing issues like COVID-19 prediction. Here, the current work is helpful in providing data-efficiency analytics, and is useful in dealing with scarce and noisy datasets (see Section 3.3).
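A minimal sketch of the Padé-with-scale-and-squaring idea on matrices may help. We use a low-order [3/3] approximant for illustration; production code (e.g. SciPy's `expm`) uses higher-order approximants with adaptive scaling, the function name and structure here are our assumptions, and note that the explicit `solve` below is exactly the matrix inversion the paper's recurrent architecture is designed to avoid:

```python
import numpy as np

# Coefficients of the [3/3] Padé approximant of exp(x): p(x)/q(x) with
# p(x) = 120 + 60x + 12x^2 + x^3 and q(x) = p(-x).
PADE3 = np.array([120.0, 60.0, 12.0, 1.0])

def expm_pade(A, squarings=8):
    """Matrix exponential via a [3/3] Padé approximant with scale-and-squaring."""
    A = np.asarray(A, dtype=float) / (2.0 ** squarings)   # scale A down
    I = np.eye(A.shape[0])
    A2 = A @ A
    U = A @ (PADE3[3] * A2 + PADE3[1] * I)   # odd-degree terms of p(A)
    V = PADE3[2] * A2 + PADE3[0] * I         # even-degree terms of p(A)
    E = np.linalg.solve(V - U, V + U)        # q(A)^{-1} p(A) ~ exp(A / 2^s)
    for _ in range(squarings):               # undo scaling: exp(A) = exp(A/2^s)^(2^s)
        E = E @ E
    return E
```

Scaling shrinks the argument into the region where the low-order approximant is accurate, and the repeated squaring recovers the original exponential.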
To the advantage of the Padé approximation, the exponential of a given operator can be computed with pre-defined coefficients (see Section 2.3) and a recurrent polynomial mechanism. Our Contributions. The main novel contributions of this work are summarized as follows: (i) For the IVPs, we propose to embed the exponential operators in the neural operator learning mechanism. (ii) By using the Padé approximation, we compute the exponential of the operator using a novel recurrent neural architecture that also eliminates the need for matrix inversion. (iii) We theoretically demonstrate that the proposed recurrent scheme, using the Padé coefficients, has bounded gradients with respect to (w.r.t.) the model parameters across the recurrent horizon. (iv) We demonstrate the data-efficiency on synthetic 1D datasets of the Korteweg-de Vries (KdV) and Kuramoto–Sivashinsky (KS) equations, where we achieve state-of-the-art performance with fewer parameters. (v) We formulate and investigate epidemic forecasting as a 2D time-varying neural operator problem, and show that for real-world noisy and scarce data, the proposed model outperforms the best neural operator architectures by 53% and the best non-neural-operator schemes by 52%. 2 OPERATORS FOR INITIAL VALUE PROBLEM. We formalize the partial differential equation (PDE) version of the Initial Value Problem studied in this work in Section 2.1. Section 2.2 summarizes the multi-resolution analysis using multiwavelets for space-discretization. Section 2.3 describes the proposed use of canonical exponential operators and presents a novel architecture using the Padé approximation. 2.1 INITIAL VALUE PROBLEM. The initial value problem (IVP) for PDEs can be written in its general form as follows.
$$u_t = \mathcal{F}(t, u), \quad x \in \Omega; \qquad u(x, 0) = u_0(x), \quad x \in \Omega, \qquad (1)$$
where $u_t$ is the first-order time derivative of $u$, and $\mathcal{F}$ is a time-varying differential operator (non-linear in general) such that $\mathcal{F} : \mathbb{R}^+ \cup \{0\} \times \mathcal{B} \to \mathcal{B}$ with $\mathcal{B}$ being a Banach space. Usually, the system in eq. (1) is required to satisfy a boundary condition such that $\mathcal{B}u(x, t) = 0$, $x \in \partial\Omega$, $\forall t$ in the solution horizon, where $\partial\Omega$ is the boundary of the computational region $\Omega$ and $\mathcal{B}$ is some linear function. Pertaining to our work, the operator map problem for the IVP can be formally defined as follows. Operator Problem. Given $\mathcal{A}$ and $\mathcal{U}$ as two Sobolev spaces $H^{s,p}$ with $s > 0$, $p = 2$, an operator $T$ is such that $T : \mathcal{A} \to \mathcal{U}$. For a given $\tau > 0$ and two functions $u_0(x)$ and $u(x, \tau)$, in this work, we take the operator map as $T u_0(x) = u(x, \tau)$ with $x \in \Omega$. Table 1 summarizes a few examples of the IVP and their solutions. The exponential operators are ubiquitous in the IVP solutions and, therefore, are important to study. One issue, however, is that the exponential operators are non-linear and, unlike convolution-like operators, there does not exist a general way to diagonalize them (the Fourier transform diagonalizes the convolution operator) for an efficient representation. Previous work on neural operators (Li et al., 2020c;a; Gupta et al., 2021) modeled the non-linear operators in one way or another by using multiple canonical integral operators along with non-linearity (for example, ReLU). In this work, we directly produce an exponential operator approximation. First, we discuss an efficient basis (multiwavelets) for space discretization of the input/output functions in Section 2.2. 2.2 MULTI-RESOLUTION ANALYSIS. The multi-resolution analysis (MRA) aims at projecting a function onto a basis over multiple scales. The wavelet bases (e.g., Haar, Daubechies) are some popular examples.
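Before turning to multiwavelet bases, a concrete instance of the operator map $T u_0(x) = u(x, \tau)$ helps: for the heat equation $u_t = u_{xx}$ on a periodic domain, the solution operator is the exponential $e^{\tau\,\partial_x^2}$, which happens to be diagonal in the Fourier basis. This is our illustrative example, not the paper's setup:

```python
import numpy as np

def heat_operator(u0, tau, L=2 * np.pi):
    """Solution operator T u0(x) = u(x, tau) for u_t = u_xx on a periodic
    domain of length L, sampled on len(u0) equispaced points. In Fourier
    space the exponential operator acts mode-wise as exp(-tau * k^2)."""
    n = len(u0)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # angular wavenumbers
    return np.real(np.fft.ifft(np.exp(-tau * k ** 2) * np.fft.fft(u0)))
```

For general non-linear $\mathcal{F}$ no such diagonalizing transform exists, which is precisely why an explicit approximation of the exponential operator is needed.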
Multiwavelets further this operation by using families of orthogonal polynomials (OPs), for example, Legendre polynomials, for an efficient representation over a finite interval (Alpert et al., 2002). The multiwavelets are useful in the sparse representation of integral operators with smooth kernels. In addition, the multiwavelets also sparsify the exponential functions of strictly elliptic operators (Beylkin & Keiser, 1997). However, we do not rely on this assumption in this work. Here, we briefly introduce the MRA and refer the reader to Gupta et al. (2021) for a detailed picture. Notation. We begin by defining the space of finite-interval polynomials as $V_n^k = \{f \mid f$ restricted to the interval $(2^{-n}l, 2^{-n}(l+1))$ is a polynomial of degree $< k$, for all $l = 0, 1, \ldots, 2^n - 1$, and $f$ vanishes elsewhere$\}$. The $V_n^k$ are contained in each other for subsequent $n$, or
$$V_0^k \subset V_1^k \subset \ldots \subset V_{n-1}^k \subset V_n^k \subset \ldots. \qquad (2)$$
¹Time-advection equation with linear operators $L$, $N$ and non-linear function $f(\cdot)$. A wide range of problems can be modeled, for example, Korteweg-de Vries, Kuramoto-Sivashinsky, Burgers' equation, Navier-Stokes (list not exhaustive). ²A non-linear integro-differential solution to the time-advection equation using the semi-group approach (Beylkin & Keiser, 1997; Pazy, 1983; Yoshida, 1980). A slightly more general version is discussed in (Beylkin et al., 1998). The orthogonal complement of these polynomial spaces is termed the multiwavelet space $W_n^k$, defined such that
$$V_n^k \oplus W_n^k = V_{n+1}^k, \qquad V_n^k \perp W_n^k. \qquad (3)$$
The orthonormal basis of $V_0^k$ consists of OPs $\varphi_0, \varphi_1, \ldots, \varphi_{k-1}$, and we have used appropriately normalized shifted Legendre polynomials in this work. The bases for $V_n^k$ and $W_n^k$ are $\varphi^n_{jl}(x) = 2^{n/2} \varphi_j(2^n x - l)$ and $\psi^n_{jl}(x) = 2^{n/2} \psi_j(2^n x - l)$, respectively, for $l = 0, 1, \ldots, 2^n - 1$ and $j = 0, 1, \ldots, k - 1$.
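A sketch of projecting a function onto $V_n^k$ (a degree-$(k{-}1)$ Legendre fit on each of the $2^n$ dyadic cells) illustrates how the approximation refines with the scale $n$. We use a dense-sampling least-squares fit as a stand-in for the exact $L^2$ projection, and all names and parameters are our assumptions:

```python
import numpy as np

def project_Vnk(f, k=3, n=4, m=200):
    """Approximate L2-projection of f on [0, 1] onto V_n^k: piecewise
    polynomials of degree < k on 2^n dyadic subintervals, expressed in the
    (shifted) Legendre basis per cell. Returns the projection evaluated on
    m midpoint samples per cell, concatenated across cells."""
    cells = 2 ** n
    approx = np.empty(cells * m)
    for l in range(cells):
        a, b = l / cells, (l + 1) / cells
        x = np.linspace(a, b, m, endpoint=False) + (b - a) / (2 * m)
        t = 2 * (x - a) / (b - a) - 1          # map cell to [-1, 1)
        coeffs = np.polynomial.legendre.legfit(t, f(x), k - 1)
        approx[l * m:(l + 1) * m] = np.polynomial.legendre.legval(t, coeffs)
    return approx
```

Doubling the scale (increasing $n$ by 1) halves the cell width, and the piecewise-polynomial error shrinks accordingly for smooth $f$, which is the nesting property of Eq. 2 in action.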
Finally, an important trick for representing the operator $T$ in the multiwavelet basis is the non-standard (NS) form (Beylkin et al., 1991). The NS form decouples the interactions across scales and is useful in obtaining an efficient numerical procedure. Using the NS form, the projection of the operator $T$ is expanded using a telescopic sum as follows:
$$T_n = \sum_{i=L+1}^{n} \left( Q_i T Q_i + Q_i T P_{i-1} + P_{i-1} T Q_i \right) + P_L T P_L, \qquad (4)$$
where $P_n : H^{s,2} \to V_n^k$ is the projection operator, $T_n = P_n T P_n$, $Q_n : H^{s,2} \to W_n^k$ such that $Q_n = P_n - P_{n-1}$, and $L$ is the coarsest scale under consideration $(L \ge 0)$. Therefore, the NS form of the operator is a collection of the triplets $\{A_i, B_i, C_i\}_{i=L+1}^{n}$ and $P_L T P_L$, with $A_i = Q_i T Q_i$, $B_i = Q_i T P_{i-1}$ and $C_i = P_{i-1} T Q_i$. In this work, we aim to model $A_i$, $B_i$, $C_i$ as exponential operators to better learn the IVP by explicitly embedding the non-linear operators into the multiwavelet transformation. This is not straightforward due to the non-linearity of exponential functions. We are now in a position to present the main contribution of the current work in Section 2.3, where we discuss an implementable neural approximation of the exponential operators. | This paper proposed a recurrent Padé network for learning non-linear operator approximations for IVPs. The Padé exponential operator uses a recurrent structure with shared parameters to model the non-linearity, compared to recent neural operators that rely on using multiple linear operator layers in succession. The paper showed that the Padé network does not suffer from the issue of gradient explosion and that the boundedness of the gradients can be established. | SP:f0fadc2af439f62ea9fde63e157348278575e3c2 |
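The NS-form telescopic sum of Eq. 4 can be sanity-checked in the simplest multiwavelet case, $k = 1$ (Haar), where $P_n$ is block-averaging over $2^n$ dyadic cells. This is an illustrative verification of the algebraic identity, not the paper's implementation:

```python
import numpy as np

def haar_P(n, N):
    """Projection matrix onto piecewise-constant functions (k = 1) on 2^n
    dyadic cells of a length-N grid: replaces each block by its average.
    Haar special case of the general multiwavelet projection P_n."""
    block = N // 2 ** n
    P = np.zeros((N, N))
    for l in range(2 ** n):
        P[l * block:(l + 1) * block, l * block:(l + 1) * block] = 1.0 / block
    return P

def ns_form(T, n, L=0):
    """Reassemble T_n = P_n T P_n from its non-standard-form pieces (Eq. 4):
    A_i = Q_i T Q_i, B_i = Q_i T P_{i-1}, C_i = P_{i-1} T Q_i, plus P_L T P_L,
    using Q_i = P_i - P_{i-1} and telescoping over the scales."""
    N = T.shape[0]
    P = {i: haar_P(i, N) for i in range(L, n + 1)}
    out = P[L] @ T @ P[L]
    for i in range(L + 1, n + 1):
        Q = P[i] - P[i - 1]
        out += Q @ T @ Q + Q @ T @ P[i - 1] + P[i - 1] @ T @ Q
    return out
```

Since $P_i = P_{i-1} + Q_i$, each term of the sum is exactly $P_i T P_i - P_{i-1} T P_{i-1}$, so the scales telescope back to $P_n T P_n$.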
Non-Linear Operator Approximations for Initial Value Problems | 1 INTRODUCTION . Predicting the future states using the current conditions is a fundamental problem in machine learning . Such problems fall under the umbrella of a common term , the “ Initial Value Problems ” ( IVPs ) . The basic structure of IVP involves a first-order time-evolution along with non-linear operators . The class of IVPs spans the domain of physics ( modeling gravitational waves ( Lovelace , 2021 ) ) , neuroscience ( Hodgkin-Huxley model ( Zhang et al. , 2020 ) ) , engineering ( fluid dynamics ( Wendt , 2008 ) ) , water waves ( tsunami ( Elbanna et al. , 2021 ) ) , mean field games ( Ruthotto et al. , 2020 ) , to list just a few . Within the current pandemic context , the applications areas like epidemiology ( Kermack–McKendrick model ( Kermack et al. , 1991 ; Diekmann et al. , 2021 ) ) are of tremendous interest . Neural Operators The use of deep learning to solve the IVP like problems for predictions has been exploiting within the framework of convolutional neural networks ( CNNs ) ( Bhatnagar et al. , 2019 ; Guo et al. , 2016 ) , and time-evolution by employing multiple layers ( Khoo et al. , 2020 ) . The multi-layered deep networks with CNNs are suitable to solve problems with a large number of training samples . Moreover , because of the image-regression like structures , such models are restricted to the specifications of the input size . Another research direction aims at solving and modeling the partial differential equations ( PDEs ) versions of the IVPs for a given instance . The works of ( Kochkov et al. , 2021 ) model the IVP solution as NNs for modeling the turbulent flows . Along the same lines , we have physics-informed neural networks ( PINNs ) ( Raissi et al. , 2019 ; Wang et al. , 2021b ) that utilize PDE structure for defining the loss functions . 
Such models are not applicable within the context of a complete data-driven scenario , or for the setups where the exact PDE structure is not known , for example , modeling the climate , epidemic , or unknown physical and chemical phenomena . Finally , we have the works of Neural Operators that are completely data-driven and input-resolution independent schemes ( Li et al. , 2020b ; c ; a ; Gupta et al. , 2021 ; Bhattacharya et al. , 2020 ; Patel et al. , 2021 ) . Most of these approaches tried to efficiently work with the integral kernel operators , for example , Graph Nyström sampling in ( Li et al. , 2020b ) , convolution approximation in ( Li et al. , 2020a ) , multiwavelets compression in ( Gupta et al. , 2021 ) . Apart from solving non-homogeneous linear differential equations , the PDE operators are mostly non-linear . To tackle the non-linear behavior , these prior works use a multi-cell architecture with non-linearity ( for example , ReLU ) . To work with the IVP like problems , and be data-efficient , we aim to adopt explicitly the non-linear operator ( exponential ) that appears in the IVP solutions . Exponential Operators The exponential of linear transformation has been a subject of research for the last 150 years ( Laguerre , 1898 ) . In the simplest form , the exponential operator appears as a solution of : dydt “ at , yp0q “ y0 as yptq “ e aty0 ( for more general examples , see Table 1 ) . With applications ranging from control systems theory ( converting continuous to discrete systems ) to solving partial differential equations ( Cox & Matthews , 2002 ; Kassam & Trefethen , 2005 ) , the exponential function of operators is a subject of active research . In deep learning , the exponential function to model non-linearity is used in ( Andoni et al. , 2014 ) . Recently , the exponential operators have also been explored in the field of computer vision for generative flows in ( Hoogeboom et al. , 2020 ) . 
Padé Approximation Although one approach to implementing an exponential operator could be attained through a Taylor series representation , this operator function is prone to errors ( Abramowitz & Stegun , 1965 ) . Scale-and-squaring ( SSQ ) methods are commonly suggested approaches to deal with the errors ( Lawson , 1967 ) . In addition to SSQ , the Padé approximation which represents an analytic function as the ratio of polynomials achieves state-of-the-art accuracy in computing exponential operators ( Fasi & Higham , 2019 ) . Industry standard numerical toolboxes ( for example , MATLAB , SciPy ) use the Padé approximation based approach to compute the matrix exponential expm ( Al-Mohy & Higham , 2009 ) . Matrix exponential via Padé representation requires dense matrix computations ( for example , inverse and higher-order polynomials ) . Such operations are not numerically feasible , in-general , for the inputs with large size . However , the commonly used operators like convolution ( possibly , multi-layered ) have parameters that are fixed beforehand and are much less than the input dimension . A suitable approach , therefore , is a neural architecture based Padé approximation . Our strategy , in this work , is to explicitly embed the exponential operators in the neural operator architecture for dealing with the IVP like datasets . The exponential operators are non-linear , and therefore , this removes the requirement of having multi-cell linear integral operator layers . While with sufficient data in-hand , the proposed approach may work similarly to the existing neural operators with a large number of training parameters . However , this is seldom a feasible scenario for the expensive real-world experiments , or on-going recent issues like COVID19 prediction . Here , the current work is helpful in providing data-efficiency analytics , and is useful in dealing with scarce and noisy datasets ( see Section 3.3 ) . 
To the advantage of Padé approximation , the exponential of a given operator can be computed with the pre-defined coefficients ( see Section 2.3 ) and a recurrent polynomial mechanism . Our Contributions The main novel contributions of this work are summarized as follows : ( i ) For the IVPs , we propose to embed the exponential operators in the neural operator learning mechanism . ( ii ) By using the Padé approximation , we compute the exponential of the operator using a novel recurrent neural architecture that also eliminates the need for matrix inversion . ( iii ) We theoretically demonstrate that the proposed recurrent scheme , using the Padé coefficients , have bounded gradients with respect to ( w.r.t . ) the model parameters across the recurrent horizon . ( iv ) We demonstrate the data-efficiency on the synthetic 1D datasets of Korteweg-de Vries ( KdV ) and Kuramoto–Sivashinsky ( KS ) equations , where with less parameters we achieve state-of-the-art performance . ( v ) We formulate and investigate the epidemic forecasting as a 2D time-varying neural operator problem , and show that for real-world noisy and scarce data , the proposed model outperforms the best neural operator architectures by 53 % and best non-neural operator schemes by 52 % . 2 OPERATORS FOR INITIAL VALUE PROBLEM . We formalize the partial differential equations ( PDEs ) version of the Initial Value Problem studied in this work in Section 2.1 . Section 2.2 summarizes the multi-resolution analysis using multiwavelets for space-discretization . Section 2.3 describes the proposed use of canonical exponential operators and presents a novel architecture using Padé approximation . 2.1 INITIAL VALUE PROBLEM . The initial value problem ( IVP ) for PDEs can be written in its general form as follows . 
ut “ Fpt , uq , x P Ω upx , 0q “ u0pxq , x P Ω ( 1 ) where , ut is the first-order time derivative of u , F is a time-varying differential operator ( non-linear in-general ) such that F : R ` Y t0u ˆ B Ñ B with B being a Banach space . Usually , the system in eq . ( 1 ) is required to satisfy a boundary condition such that Bupx , tq “ 0 , x P BΩ @ t in the solution horizon , and BΩ is the boundary of the computational region Ω with B some linear function . Pertaining to our work , the operator map problem for IVP can be formally defined as follows . Operator Problem Given A and U as two Sobolev spaces Hs , p with s ą 0 , p “ 2 , an operator T is such that T : A Ñ U . For a given τ ą 0 and two functions u0pxq and upx , τq , in this work , we take the operator map as T u0pxq “ upx , τq with x P Ω . Table 1 summarizes a few examples of the IVP and their solutions.The exponential operators are ubiquitous in the IVP solutions and , therefore , are important to study . One issue , however , is that the exponential operators are non-linear and unlike convolution like operators , there does not exist a general way to diagonalize them ( Fourier transform diagonalizes convolution operator ) for an efficient representation . Previous work on neural operators ( Li et al. , 2020c ; a ; Gupta et al. , 2021 ) modeled the non-linear operators in one way or another by using multiple canonical integral operators along with non-linearity ( for example , ReLU ) . In this work , we directly produce an exponential operator approximation . First , we discuss an efficient basis ( multiwavelets ) for space discretization of the input/output functions in Section 2.2 . 2.2 MULTI-RESOLUTION ANALYSIS . The multi-resolution analysis ( MRA ) aims at projecting a function to a basis over multiple scales . The wavelet basis ( e.g. , Haar , Daubechies ) are some popular examples . 
Multiwavelets further this operation by using the family of orthogonal polynomials ( OPs ) , for example , Legendre polynomials for an efficient representation over a finite interval ( Alpert et al. , 2002 ) . The multiwavelets are useful in the sparse representation of the integral operators with smooth kernels . In addition , the multiwavelets also sparsify the exponential functions of the strictly elliptic operators ( Beylkin & Keiser , 1997 ) . However , we do not rely on this assumption in this work . Here , we briefly introduce the MRA and refer the reader to Gupta et al . ( 2021 ) for a detailed picture . Notation We begin by defining the space of finite interval polynomials as Vkn “ tf |f are polynomials of degree ă k defined over interval p2´nl , 2´npl ` 1qq for all l “ 0 , 1 , . . . , 2n ´ 1 , and assumes 0 elsewhereu . The Vkn are contained in each other for subsequent n or , Vk0 Ă Vk1 Ă . . . Ă Vkn´1 Ă Vkn Ă . . . . ( 2 ) 1Time-advection equation with linear operators L , N and non-linear function fp.q . A wide range of problems can be modeled , for example , Korteweg-de Vries , Kuramoto-Sivashinsky , Burgers ’ Equation , Navier-Stokes ( list not exhaustive ) . 2A non-linear integro-differential solution to the time-advection equation using semi-group approach ( Beylkin & Keiser , 1997 ; Pazy , 1983 ; Yoshida , 1980 ) . A slightly general version is discussed in ( Beylkin et al. , 1998 ) . The orthogonal component of these polynomial spaces is termed as multiwavelet space Wkn and are defined such that Vkn à Wkn “ Vkn ` 1 , Vkn K Wkn . ( 3 ) The orthonormal basis of Vk0 are OPs ϕ0 , ϕ1 , . . . , ϕk´1 and we have used appropriately normalized shifted Legendre Polynomials in this work . The basis for Vkn and W k n are ϕ n jlpxq “ 2n { 2ϕjp2nx´ lq and ψnjlpxq “ 2n { 2ψjp2nx´ lq , respectively , for l “ 0 , 1 , . . . , 2n ´ 1 and j “ 0 , 1 , . . . , k ´ 1 . 
Finally, an important trick for representing the operator T in the multiwavelet basis is the non-standard (NS) form (Beylkin et al., 1991). The NS form decouples the interactions between scales and is useful in obtaining an efficient numerical procedure. Using the NS form, the projection of the operator T is expanded using a telescopic sum as follows: T_n = Σ_{i=L+1}^{n} (Q_i T Q_i + Q_i T P_{i−1} + P_{i−1} T Q_i) + P_L T P_L, (4) where P_n : H^{s,2} → V^k_n is the projection operator, T_n = P_n T P_n, Q_n : H^{s,2} → W^k_n is such that Q_n = P_n − P_{n−1}, and L is the coarsest scale under consideration (L ≥ 0). Therefore, the NS form of the operator is a collection of the triplets {A_i, B_i, C_i}_{i=L+1}^{n} and P_L T P_L, with A_i = Q_i T Q_i, B_i = Q_i T P_{i−1}, and C_i = P_{i−1} T Q_i. In this work, we aim to model A_i, B_i, C_i as exponential operators to better learn the IVP by explicitly embedding the non-linear operators into the multiwavelet transformation. This is not straightforward due to the non-linearity of exponential functions. We are now in a position to present the main contribution of the current work in Section 2.3, where we discuss an implementable neural approximation of the exponential operators. | This paper introduces a new approach for solving the operator map problem, based on the non-standard form and its approximation via exponential operators. The authors integrate their method into the recently introduced neural operator framework. The technique is evaluated on two synthetic PDE problems and one real-life example. In all cases, the proposed approach improves over the other baselines on the relative L2 metric. | SP:f0fadc2af439f62ea9fde63e157348278575e3c2 |
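The telescopic identity in eq. (4) can be verified numerically in the simplest (Haar, k = 1) case, where P_i is the orthogonal projection onto functions that are piecewise constant on 2^i dyadic cells. A small numpy check (our own illustration, not code from the paper):

```python
import numpy as np

def haar_projection(n, i):
    # Orthogonal projection in R^{2^n} onto piecewise-constant functions
    # on 2^i equal cells (the k = 1, i.e. Haar, version of V^k_i).
    m = 2**n // 2**i                       # samples per cell
    U = np.kron(np.eye(2**i), np.ones((m, 1)) / np.sqrt(m))
    return U @ U.T

n, L = 4, 1
rng = np.random.default_rng(1)
T = rng.standard_normal((2**n, 2**n))      # an arbitrary discretized operator

P = [haar_projection(n, i) for i in range(n + 1)]          # P_n at i = n is the identity
Q = [None] + [P[i] - P[i - 1] for i in range(1, n + 1)]    # Q_i = P_i - P_{i-1}

# Non-standard form: T_n = sum_{i=L+1}^{n} (A_i + B_i + C_i) + P_L T P_L.
Tn = P[n] @ T @ P[n]
ns = P[L] @ T @ P[L]
for i in range(L + 1, n + 1):
    A = Q[i] @ T @ Q[i]          # A_i
    B = Q[i] @ T @ P[i - 1]      # B_i
    C = P[i - 1] @ T @ Q[i]      # C_i
    ns += A + B + C
assert np.allclose(Tn, ns)
```

The identity holds exactly because each summand is the difference P_i T P_i − P_{i−1} T P_{i−1} expanded via P_i = P_{i−1} + Q_i, so the sum telescopes.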
Non-Linear Operator Approximations for Initial Value Problems | 1 INTRODUCTION. Predicting future states from current conditions is a fundamental problem in machine learning. Such problems fall under the umbrella of a common term, the "Initial Value Problems" (IVPs). The basic structure of an IVP involves a first-order time evolution along with non-linear operators. The class of IVPs spans the domains of physics (modeling gravitational waves (Lovelace, 2021)), neuroscience (Hodgkin-Huxley model (Zhang et al., 2020)), engineering (fluid dynamics (Wendt, 2008)), water waves (tsunami (Elbanna et al., 2021)), and mean-field games (Ruthotto et al., 2020), to list just a few. Within the current pandemic context, application areas like epidemiology (Kermack–McKendrick model (Kermack et al., 1991; Diekmann et al., 2021)) are of tremendous interest. Neural Operators: The use of deep learning to solve IVP-like problems for prediction has been explored within the framework of convolutional neural networks (CNNs) (Bhatnagar et al., 2019; Guo et al., 2016), with time evolution handled by employing multiple layers (Khoo et al., 2020). Multi-layered deep networks with CNNs are suitable for solving problems with a large number of training samples. However, because of their image-regression-like structure, such models are restricted to a fixed input size. Another research direction aims at solving and modeling the partial differential equation (PDE) versions of IVPs for a given instance. The work of Kochkov et al. (2021) models the IVP solution with NNs for modeling turbulent flows. Along the same lines, physics-informed neural networks (PINNs) (Raissi et al., 2019; Wang et al., 2021b) utilize the PDE structure to define the loss functions.
Such models are not applicable in a completely data-driven scenario, or in setups where the exact PDE structure is not known, for example, when modeling climate, epidemics, or unknown physical and chemical phenomena. Finally, there are the works on Neural Operators, which are completely data-driven and input-resolution-independent schemes (Li et al., 2020b;c;a; Gupta et al., 2021; Bhattacharya et al., 2020; Patel et al., 2021). Most of these approaches work efficiently with integral kernel operators, for example, graph Nyström sampling in (Li et al., 2020b), convolution approximation in (Li et al., 2020a), and multiwavelet compression in (Gupta et al., 2021). Except when solving non-homogeneous linear differential equations, the PDE operators are mostly non-linear. To tackle the non-linear behavior, these prior works use a multi-cell architecture with non-linearity (for example, ReLU). To work with IVP-like problems in a data-efficient manner, we aim to explicitly adopt the non-linear operator (exponential) that appears in IVP solutions. Exponential Operators: The exponential of a linear transformation has been a subject of research for the last 150 years (Laguerre, 1898). In its simplest form, the exponential operator appears in the solution of dy/dt = ay, y(0) = y₀, as y(t) = e^{at} y₀ (for more general examples, see Table 1). With applications ranging from control systems theory (converting continuous to discrete systems) to solving partial differential equations (Cox & Matthews, 2002; Kassam & Trefethen, 2005), the exponential function of operators is a subject of active research. In deep learning, the exponential function is used to model non-linearity in (Andoni et al., 2014). Recently, exponential operators have also been explored in the field of computer vision for generative flows (Hoogeboom et al., 2020).
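The scalar example above generalizes directly to systems: for dy/dt = A y with a matrix A, the solution is the matrix exponential applied to the initial condition. A short numerical cross-check (illustrative only) against a generic ODE integrator:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# For the linear system dy/dt = A y, y(0) = y0, the exponential operator
# gives the solution directly: y(t) = expm(A t) @ y0.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
y0 = rng.standard_normal(4)
t = 0.7

y_expm = expm(A * t) @ y0

# Cross-check with a generic ODE integrator.
sol = solve_ivp(lambda _, y: A @ y, (0.0, t), y0, rtol=1e-10, atol=1e-12)
assert np.allclose(y_expm, sol.y[:, -1], atol=1e-6)
```

One application of this exponential solution is in a single operator evaluation, with no time stepping, which is the property the IVP operator map T u₀(x) = u(x, τ) exploits.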
Padé Approximation: Although an exponential operator could be implemented through its Taylor series representation, this approach is prone to errors (Abramowitz & Stegun, 1965). Scale-and-squaring (SSQ) methods are the commonly suggested approach to deal with these errors (Lawson, 1967). In addition to SSQ, the Padé approximation, which represents an analytic function as a ratio of polynomials, achieves state-of-the-art accuracy in computing exponential operators (Fasi & Higham, 2019). Industry-standard numerical toolboxes (for example, MATLAB, SciPy) use Padé-approximation-based approaches to compute the matrix exponential expm (Al-Mohy & Higham, 2009). The matrix exponential via the Padé representation requires dense matrix computations (for example, an inverse and higher-order polynomials). Such operations are, in general, not numerically feasible for large inputs. However, commonly used operators like convolutions (possibly multi-layered) have parameters that are fixed beforehand and are far fewer than the input dimension. A suitable approach, therefore, is a neural-architecture-based Padé approximation. Our strategy in this work is to explicitly embed the exponential operators in the neural operator architecture for dealing with IVP-like datasets. The exponential operators are non-linear, and this therefore removes the requirement of having multiple linear integral-operator cells. Given sufficient data, existing neural operators with a large number of training parameters may work similarly to the proposed approach; however, abundant data is seldom available for expensive real-world experiments or for ongoing issues like COVID-19 prediction. Here, the current work helps provide data-efficient analytics and is useful for dealing with scarce and noisy datasets (see Section 3.3).
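To make the Padé-versus-Taylor comparison concrete, here is a minimal numpy sketch (our own illustration, not the paper's recurrent architecture) of the diagonal [m/m] Padé approximant of the matrix exponential, using the classical closed-form coefficients, compared against a truncated Taylor series of the same approximation order and SciPy's `expm` as the reference:

```python
import math
import numpy as np
from scipy.linalg import expm

def pade_expm(A, m=6):
    # Diagonal [m/m] Pade approximant: exp(A) ~= q(A)^{-1} p(A), with
    # p(A) = sum_k c_k A^k, q(A) = sum_k c_k (-A)^k, and
    # c_k = (2m-k)! m! / ((2m)! k! (m-k)!).
    n = A.shape[0]
    p, q, Ak = np.zeros_like(A), np.zeros_like(A), np.eye(n)
    for k in range(m + 1):
        c = (math.factorial(2 * m - k) * math.factorial(m)
             / (math.factorial(2 * m) * math.factorial(k) * math.factorial(m - k)))
        p += c * Ak
        q += c * (-1)**k * Ak
        Ak = Ak @ A
    return np.linalg.solve(q, p)   # solve avoids forming the explicit inverse

def taylor_expm(A, terms=13):
    # Truncated Taylor series of matching order, for comparison.
    out, Ak = np.zeros_like(A), np.eye(A.shape[0])
    for k in range(terms):
        out += Ak / math.factorial(k)
        Ak = Ak @ A
    return out

rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((5, 5))
ref = expm(A)
err_pade = np.linalg.norm(pade_expm(A) - ref)
err_taylor = np.linalg.norm(taylor_expm(A, terms=13) - ref)
assert err_pade < err_taylor     # same approximation order, better accuracy
assert err_pade < 1e-7
```

Note that production implementations additionally apply scale-and-squaring so that the Padé approximant is only evaluated on a matrix of small norm; the sketch above omits that step.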
An advantage of the Padé approximation is that the exponential of a given operator can be computed with pre-defined coefficients (see Section 2.3) and a recurrent polynomial mechanism. Our Contributions: The main novel contributions of this work are summarized as follows: (i) For IVPs, we propose to embed the exponential operators in the neural-operator learning mechanism. (ii) By using the Padé approximation, we compute the exponential of the operator using a novel recurrent neural architecture that also eliminates the need for matrix inversion. (iii) We theoretically demonstrate that the proposed recurrent scheme, using the Padé coefficients, has bounded gradients with respect to (w.r.t.) the model parameters across the recurrent horizon. (iv) We demonstrate data efficiency on the synthetic 1D datasets of the Korteweg-de Vries (KdV) and Kuramoto–Sivashinsky (KS) equations, where we achieve state-of-the-art performance with fewer parameters. (v) We formulate and investigate epidemic forecasting as a 2D time-varying neural-operator problem, and show that, on real-world noisy and scarce data, the proposed model outperforms the best neural-operator architectures by 53% and the best non-neural-operator schemes by 52%.
2 OPERATORS FOR INITIAL VALUE PROBLEMS. We formalize the partial differential equation (PDE) version of the Initial Value Problem studied in this work in Section 2.1. Section 2.2 summarizes multi-resolution analysis using multiwavelets for space discretization. Section 2.3 describes the proposed use of canonical exponential operators and presents a novel architecture using the Padé approximation. 2.1 INITIAL VALUE PROBLEM. The initial value problem (IVP) for PDEs can be written in its general form as follows.
u_t = F(t, u), x ∈ Ω; u(x, 0) = u₀(x), x ∈ Ω, (1) where u_t is the first-order time derivative of u, and F is a time-varying differential operator (non-linear in general) such that F : (R⁺ ∪ {0}) × B → B, with B a Banach space. Usually, the system in eq. (1) is required to satisfy a boundary condition Bu(x, t) = 0, x ∈ ∂Ω, ∀t in the solution horizon, where ∂Ω is the boundary of the computational region Ω and B is some linear function. Pertaining to our work, the operator map problem for an IVP can be formally defined as follows. Operator Problem: Given A and U as two Sobolev spaces H^{s,p} with s > 0, p = 2, an operator T is such that T : A → U. For a given τ > 0 and two functions u₀(x) and u(x, τ), in this work, we take the operator map as T u₀(x) = u(x, τ) with x ∈ Ω. Table 1 summarizes a few examples of IVPs and their solutions. The exponential operators are ubiquitous in IVP solutions and are therefore important to study. One issue, however, is that exponential operators are non-linear and, unlike convolution-like operators, there does not exist a general way to diagonalize them for an efficient representation (the Fourier transform diagonalizes the convolution operator). Previous work on neural operators (Li et al., 2020c;a; Gupta et al., 2021) modeled the non-linear operators in one way or another by using multiple canonical integral operators along with non-linearity (for example, ReLU). In this work, we directly produce an exponential operator approximation. First, we discuss an efficient basis (multiwavelets) for space discretization of the input/output functions in Section 2.2.
2.2 MULTI-RESOLUTION ANALYSIS. The multi-resolution analysis (MRA) aims at projecting a function onto a basis over multiple scales. The wavelet bases (e.g., Haar, Daubechies) are some popular examples.
Multiwavelets extend this construction by using families of orthogonal polynomials (OPs), for example, Legendre polynomials, for an efficient representation over a finite interval (Alpert et al., 2002). Multiwavelets are useful for the sparse representation of integral operators with smooth kernels. In addition, multiwavelets also sparsify the exponential functions of strictly elliptic operators (Beylkin & Keiser, 1997); however, we do not rely on this assumption in this work. Here, we briefly introduce the MRA and refer the reader to Gupta et al. (2021) for a detailed picture. Notation: We begin by defining the space of finite-interval polynomials as V^k_n = {f | f is a polynomial of degree < k over the interval (2⁻ⁿl, 2⁻ⁿ(l+1)) for some l = 0, 1, ..., 2ⁿ − 1, and is 0 elsewhere}. The V^k_n are nested for successive n, i.e., V^k_0 ⊂ V^k_1 ⊂ ... ⊂ V^k_{n−1} ⊂ V^k_n ⊂ ... (2) [Footnote 1: Time-advection equation with linear operators L, N and non-linear function f(.). A wide range of problems can be modeled, for example, Korteweg-de Vries, Kuramoto-Sivashinsky, Burgers' equation, Navier-Stokes (list not exhaustive). Footnote 2: A non-linear integro-differential solution to the time-advection equation using the semi-group approach (Beylkin & Keiser, 1997; Pazy, 1983; Yoshida, 1980). A slightly more general version is discussed in (Beylkin et al., 1998).] The orthogonal complement of these polynomial spaces is termed the multiwavelet space W^k_n, defined such that V^k_n ⊕ W^k_n = V^k_{n+1}, V^k_n ⊥ W^k_n. (3) The orthonormal basis of V^k_0 consists of the OPs ϕ₀, ϕ₁, ..., ϕ_{k−1}; we use appropriately normalized shifted Legendre polynomials in this work. The bases for V^k_n and W^k_n are ϕⁿ_{jl}(x) = 2^{n/2} ϕ_j(2ⁿx − l) and ψⁿ_{jl}(x) = 2^{n/2} ψ_j(2ⁿx − l), respectively, for l = 0, 1, ..., 2ⁿ − 1 and j = 0, 1, ..., k − 1.
Finally, an important trick for representing the operator T in the multiwavelet basis is the non-standard (NS) form (Beylkin et al., 1991). The NS form decouples the interactions between scales and is useful in obtaining an efficient numerical procedure. Using the NS form, the projection of the operator T is expanded using a telescopic sum as follows: T_n = Σ_{i=L+1}^{n} (Q_i T Q_i + Q_i T P_{i−1} + P_{i−1} T Q_i) + P_L T P_L, (4) where P_n : H^{s,2} → V^k_n is the projection operator, T_n = P_n T P_n, Q_n : H^{s,2} → W^k_n is such that Q_n = P_n − P_{n−1}, and L is the coarsest scale under consideration (L ≥ 0). Therefore, the NS form of the operator is a collection of the triplets {A_i, B_i, C_i}_{i=L+1}^{n} and P_L T P_L, with A_i = Q_i T Q_i, B_i = Q_i T P_{i−1}, and C_i = P_{i−1} T Q_i. In this work, we aim to model A_i, B_i, C_i as exponential operators to better learn the IVP by explicitly embedding the non-linear operators into the multiwavelet transformation. This is not straightforward due to the non-linearity of exponential functions. We are now in a position to present the main contribution of the current work in Section 2.3, where we discuss an implementable neural approximation of the exponential operators. | This paper studies the problem of prediction in time-evolving partial differential equations. Inspired by the nature of solutions of PDEs, where the solution can often be written in terms of the exponential of an operator, and by the Padé approximation of operator exponentials, the authors propose a recurrent architecture to learn the solution operator of PDEs. The paper proposes a nice idea and approach to solve the mentioned problem. | SP:f0fadc2af439f62ea9fde63e157348278575e3c2 |
Bayesian Exploration for Lifelong Reinforcement Learning | 1 INTRODUCTION . Reinforcement-learning ( RL ) methods ( Sutton & Barto , 1998 ; Kaelbling et al. , 1996 ) have been successfully applied to solve challenging individual tasks such as learning robotic control ( Duan et al. , 2016 ) and playing expert-level Go ( Silver et al. , 2017 ) . However , in the real world , a robot usually experiences a collection of distinct tasks that arrive sequentially throughout its operational lifetime ; learning each new task from scratch is infeasible , but treating them all as a single task will fail . Therefore , recent research has focused on algorithms that enable agents to learn across multiple , sequentially posed tasks , leveraging past knowledge from previous tasks to accelerate the learning of new tasks . This problem setting is known as lifelong reinforcement learning ( Brunskill & Li , 2014 ; Wilson et al. , 2007b ; Isele et al. , 2016b ) . The key questions in lifelong RL research are : How can an algorithm exploit knowledge gained from past tasks to improve performance in new tasks ( forward transfer ) , and how can data from new tasks help the agent to perform better on previously learned tasks ( backward transfer ) ? To answer these two questions , first consider a simple problem , which is to find different items in different houses . Here , a single task corresponds to finding items in a specific house . Although items may be stored in different locations in different houses , there still exists some shared information that connects all houses . For instance , a toothbrush is more likely to be found in a bathroom than a kitchen , and a room without a window is more likely to be a bathroom than a living room . Such information can significantly accelerate the search for items in newly encountered houses . We propose that extracting the common structure existing in previously encountered tasks can help the agent quickly learn the dynamics of the new tasks . 
Specifically, this paper considers lifelong RL problems that can be modeled as hidden-parameter MDPs, or HiP-MDPs (Doshi-Velez & Konidaris, 2016; Killian et al., 2017), where variations among the true task dynamics can be described by a set of hidden parameters. We model two main quantities for learning across multiple tasks: the world-model distribution, which describes the probability distribution over tasks, and the task-specific model, which defines the (stochastic) dynamics within a single task. To enable more accurate sequential knowledge transfer, we separate the learning processes for these two quantities and maintain a hierarchical Bayesian posterior to approximate them. The world-model posterior is designed to manage the uncertainty in the world-model distribution, while the task-specific posterior handles the uncertainty in the data collected from only the current task. We propose a Bayesian exploration method for lifelong RL (BLRL) that learns a Bayesian world-model posterior that distills the common structure of previous tasks, and then uses it as a prior to learn a task-specific model in each subsequent task. For the discrete case, we derive an explicit performance bound showing that the task-specific model requires fewer samples to become accurate as the world-model posterior approaches the true underlying world-model distribution. We further develop VBLRL, a more scalable version of BLRL that uses variational inference to approximate the world-model distribution and leverages Bayesian neural networks (BNNs) to build the hierarchical Bayesian posterior. Our experimental results on a set of challenging domains show that our algorithms achieve better forward- and backward-transfer performance than state-of-the-art lifelong RL algorithms within limited samples for each task. 2 BACKGROUND. RL is the problem of maximizing the long-term expected reward of an agent interacting with an environment (Sutton & Barto, 1998).
We usually model the environment as a Markov Decision Process, or MDP (Puterman, 1994), described by a five-tuple 〈S, A, R, T, γ〉, where S is a finite set of states; A is a finite set of actions; R : S × A → [0, 1] is a reward function, with lower and upper bounds 0 and 1; T : S × A → Pr(S) is a transition function, with T(s′|s, a) denoting the probability of arriving in state s′ ∈ S after executing action a ∈ A in state s; and γ ∈ [0, 1) is a discount factor, expressing the agent's preference for delayed over immediate rewards. An MDP is a suitable model for the task facing a single agent. In the lifelong RL setting, the agent instead faces a series of tasks τ₁, ..., τₙ, each of which can be modeled as an MDP 〈S⁽ⁱ⁾, A⁽ⁱ⁾, R⁽ⁱ⁾, T⁽ⁱ⁾, γ⁽ⁱ⁾〉. A key question is how these task MDPs are related; we model the collection of tasks as a HiP-MDP (Doshi-Velez & Konidaris, 2016; Killian et al., 2017), where a family of tasks is generated by varying a latent task parameter ω, drawn for each task according to the world-model distribution P_Ω. Each setting of ω specifies a unique MDP, but the agent neither observes ω nor has access to the function that generates the task family. Formally, then, the dynamics T(s′|s, a; ωᵢ) and reward function R(r|s, a; ωᵢ) for task i depend on ωᵢ ∈ Ω, which is fixed for the duration of the task. For lifelong RL problems, the performance of a specific algorithm is usually evaluated based on both forward-transfer and backward-transfer results (Lopez-Paz & Ranzato, 2017): • Forward transfer: the influence that learning task t has on the performance in a future task k ≻ t. • Backward transfer: the influence that learning task t has on the performance in an earlier task k ≺ t. 3 BAYESIAN EXPLORATION FOR LIFELONG REINFORCEMENT LEARNING. Figure 1 shows our generative model in plate notation (a hidden parameter ωᵢ drawn from Ψ generates MDP mᵢ and trajectories τⱼ, with plates i = 1, ..., K and j = 1, ..., R).
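The HiP-MDP generative structure just described can be sketched in a few lines. The snippet below is our own toy illustration (the parameterization of the dynamics by ω is an assumption for demonstration, not the paper's construction): a shared map turns a hidden parameter ω ~ P_Ω into a task's transition tensor, and the agent only ever sees transitions, never ω.

```python
import numpy as np

S, A = 5, 3
rng = np.random.default_rng(0)
base_logits = rng.standard_normal((S, A, S))   # structure shared by all tasks
modulation = rng.standard_normal((S, A, S))    # direction along which omega varies the dynamics

def sample_task():
    # Draw a hidden parameter omega ~ P_Omega and build the task's dynamics
    # T(.|s, a; omega); omega is fixed for the duration of the task.
    omega = rng.normal()
    logits = base_logits + omega * modulation
    T = np.exp(logits)
    T /= T.sum(axis=-1, keepdims=True)         # T[s, a] is a distribution over s'
    return omega, T

omega, T = sample_task()
assert T.shape == (S, A, S)
assert np.allclose(T.sum(axis=-1), 1.0)
```

Each call to `sample_task` corresponds to one plate iteration i in Figure 1: a fresh ωᵢ and the MDP mᵢ it induces.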
Ψ is the parameter set that represents the distribution P_Ω. It functions as the world-model posterior that aims to capture the common structure across different tasks. The resulting MDP mᵢ is created based on ωᵢ, one hidden parameter sampled from Ψ. We can sample from our approximation of Ψ to create and solve possible MDPs. The proposed BLRL approach is formalized in Algorithm 3 in the appendix. Initially, before any MDPs are experienced, the world-model posterior q_e(·|s_t, a_t) is initialized to an uninformed prior. For each new task mᵢ, we first initialize the task-specific posterior q^{mᵢ}_θ(·|s_t, a_t) with the parameter values from the current world-model posterior, and then, at each timestep, select actions using a Bayesian exploration algorithm based on sampling from this posterior (Thompson, 1933; Asmuth et al., 2009). A set of sampled MDPs drawn from q^{mᵢ}_θ is a concrete representation of the uncertainty within the current task. BLRL samples K models from the task-specific posterior whenever the number of transitions from a state–action pair reaches a threshold B. Analogously to RMAX (Brafman & Tennenholtz, 2003) and BOSS (Asmuth et al., 2009), we call a state–action pair known whenever it has been observed N_{s_t,a_t} = B times. For each state–action pair, if it is known, we use the task-specific posterior to sample the model; if it is unknown, we instead sample from the world-model posterior. These models are combined into a merged MDP m^#ᵢ, and BLRL solves m^#ᵢ to get a policy π*_{m^#ᵢ}. This approach is adopted from BOSS (best of sampled set) to create optimism in the face of uncertainty, and thereby drive exploration. The new policy π*_{m^#ᵢ} is used to interact with the environment until a new state–action pair reaches the sampling threshold.
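The sampling loop just described can be sketched with Dirichlet posteriors, the conjugate choice for finite MDPs. The snippet below is a simplified stand-in for BLRL's hierarchy (function names, shapes, and the pseudo-count bookkeeping are our own illustration, not the paper's Algorithm 3): the world-model posterior is kept as Dirichlet pseudo-counts, each new task's posterior is initialized from it, and K transition models are drawn in the Thompson/BOSS style.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 4, 2

# World-model posterior as Dirichlet pseudo-counts over next states.
world_counts = np.ones((S, A, S))          # uninformed prior

def new_task_posterior():
    # Initialize the task-specific posterior from the world-model posterior.
    return world_counts.copy()

def update(counts, s, a, s_next):
    # Dirichlet is conjugate to the multinomial: add one to the observed count.
    counts[s, a, s_next] += 1.0

def sample_models(counts, K):
    # Thompson/BOSS-style: draw K complete transition models from the posterior.
    return [np.array([[rng.dirichlet(counts[s, a]) for a in range(A)]
                      for s in range(S)]) for _ in range(K)]

task_counts = new_task_posterior()
for _ in range(200):                       # fake experience from one task
    s, a = rng.integers(S), rng.integers(A)
    s_next = rng.choice(S, p=[0.7, 0.1, 0.1, 0.1])
    update(task_counts, s, a, s_next)

models = sample_models(task_counts, K=5)
assert len(models) == 5 and models[0].shape == (S, A, S)
assert np.allclose(models[0].sum(axis=-1), 1.0)

# After the task, fold its transitions back into the world-model posterior
# (the slower outer update; subtracting 1 removes the all-ones prior).
world_counts += task_counts - 1.0
```

In full BLRL/BOSS, the K sampled models would then be merged into an augmented-action MDP m^# and solved to obtain an optimistic policy; that merge-and-plan step is omitted here.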
The collected transitions from this task are used to update the task-specific posterior immediately, while the world-model posterior is updated using transitions from all previous tasks at a slower pace. For simple finite MDP problems, in practice we use the Dirichlet distribution (the conjugate prior for the multinomial) to represent the Bayesian posterior; the updating process for the posterior is thus straightforward to compute. Intuitively, BLRL is able to rapidly adapt to new tasks as long as the prior of the task-specific model (that is, the world-model posterior) is close to the true underlying model and captures the uncertainty over the common structure of a set of tasks. 3.1 SAMPLE COMPLEXITY ANALYSIS. We now provide a simple theoretical analysis of BLRL. First, we use the setting and results of Zhang (2006) to describe the properties of the Bayesian prior and how they relate to the sample complexity for the concentration of the Bayesian posterior. Lemma 1. Let π(ω) denote the prior distribution on the parameter space Γ. We consider a set of transition-probability densities p(·|ω) indexed by ω, and the true underlying density q. Define the prior-mass radius of the transition-probability densities as: d_π = inf{d : d ≥ −ln π({p ∈ Γ : D_KL(q‖p) ≤ d})}. (1) Intuitively, this quantity measures the distance between the Bayesian prior we use to initialize the posterior and the true underlying distribution. Then, ∀ρ ∈ (0, 1) and η ≥ 1, let ε_n = (1 + 1/n) η d_π + (η − ρ) ε_{upper,n}((η − 1)/(η − ρ)), (2) where ε_{upper,n} is the critical upper-bracketing radius (Zhang, 2006). The decay rate of ε_{upper,n} controls the consistency of the Bayesian posterior distribution (Asmuth et al., 2009). Letting ρ = 1/2, we have, for all t ≥ 0 and δ ∈ (0, 1), with probability at least 1 − δ, π_n({p ∈ Γ : ‖p − q‖²_{1/2} ≥ 2ε_n + (4η − 2)tδ/4} | X) ≤ 1/(1 + e^{nt}). (3) Proof (sketch).
The proof is similar to that of Corollary 5.2 of Zhang (2006) (see Appendix A.5). Instead of using the critical prior-mass radius ε_{π,n} to describe certain characteristics of the Bayesian prior, we define and use the prior-mass radius d_π, which is independent of the sample size n and measures the distance between the prior and the true distribution. Similarly to BOSS, for a new MDP m ∼ M with hidden parameters ω_m, we can define the Bayesian concentration sample complexity for the task-specific posterior, f(s, a, ε₀, δ₀, ρ₀), as the minimum number c such that, if c IID transitions from (s, a) are observed, then, with probability at least 1 − δ₀, Pr_{m∼posterior}(‖T_m(s, a, ω_m) − T_{m*}(s, a, ω_m)‖₁ < ε₀) ≥ 1 − ρ₀. (4) Lemma 2. Assume the posterior is consistent (that is, ε_{upper,n} = o(1)) and set η = 2; then the Bayesian concentration sample complexity for the task-specific posterior is f(s, a, ε, δ, ρ) = O((d_π + ln(1/ρ))/(ε²δ) − d_π). Proof (sketch). This bound can be derived by directly combining Lemma 1 and Equation 4. The above lemma gives an upper bound on the Bayesian concentration sample complexity in terms of the prior-mass radius. We can further combine this result with PAC-MDP theory (Strehl et al., 2006) and derive the sample complexity of the algorithm for each new task. Theorem 1. For each new task, set the sample size K = Θ((S²A/δ) ln(SA/δ)) and the parameters ε₀ = ε(1 − γ)², δ₀ = δ/(SA), ρ₀ = δ/(S²A²K); then, with probability at least 1 − 4δ, V^{A_t}(s_t) ≥ V*(s_t) − 4ε₀ in all but Õ(S²A²d_π/(δε³(1 − γ)⁶)) steps, where Õ(·) suppresses logarithmic dependence. Proof (sketch). The proof is based on the PAC-MDP theorem (Strehl et al., 2009) combined with the new bound for the Bayesian concentration sample complexity derived in Lemma 2. In general, we use the same process as BOSS to verify the three required properties of PAC-MDP algorithms: optimism, accuracy, and learning complexity.
For each new task, the main difference between BLRL and BOSS is that we use the world-model posterior to initialize the task-specific posterior, which results in a new sample-complexity bound based on the prior-mass radius. The result formalizes the intuition that, if we put a larger prior mass on a density that is close to the true q, so that d_π is small, the sample complexity of our algorithm will be lower. At the same time, the sample complexity is bounded by polynomial functions of the relevant quantities, showing that our training strategy preserves the properties required by PAC-MDP algorithms (Strehl et al., 2009). | The paper deals with the problem of lifelong RL, also referred to as meta-RL, where an agent attempts to solve a sequence of tasks in order to facilitate the solution of a novel task. The framework follows that of Baxter 2000 (albeit that paper deals with supervised learning), and has been widely studied in recent years. The basic assumption is that the tasks are drawn from an underlying task distribution, and each task (an MDP) is stochastically selected from a task-specific distribution. The authors work within a Bayesian framework, assuming a hierarchical distribution over the two levels, and learn the two levels separately. This framework has the advantage of providing both estimates and uncertainty estimates. For the discrete case they present a sample complexity analysis, and suggest a variational approach to practical learning. Finally, experiments are provided supporting the utility of the approach. The formal framework is that of hidden-parameter MDPs (HiP-MDPs) from Doshi-Velez 2016, and each MDP is modeled based on a transition model and a reward model, both conditioned on a hidden parameter. As more tasks are encountered the posterior over world models sharpens, and, being used to learn new tasks, is expected to facilitate learning.
The learning of each new task is as in BOSS, and takes place by sampling from the learned MDP distribution, creating a mixed MDP, and using standard model-based approaches to solve it. | SP:17b45c5e40ae4db57652b45cc1c533c5ca08523f |
Bayesian Exploration for Lifelong Reinforcement Learning | 1 INTRODUCTION . Reinforcement-learning ( RL ) methods ( Sutton & Barto , 1998 ; Kaelbling et al. , 1996 ) have been successfully applied to solve challenging individual tasks such as learning robotic control ( Duan et al. , 2016 ) and playing expert-level Go ( Silver et al. , 2017 ) . However , in the real world , a robot usually experiences a collection of distinct tasks that arrive sequentially throughout its operational lifetime ; learning each new task from scratch is infeasible , but treating them all as a single task will fail . Therefore , recent research has focused on algorithms that enable agents to learn across multiple , sequentially posed tasks , leveraging past knowledge from previous tasks to accelerate the learning of new tasks . This problem setting is known as lifelong reinforcement learning ( Brunskill & Li , 2014 ; Wilson et al. , 2007b ; Isele et al. , 2016b ) . The key questions in lifelong RL research are : How can an algorithm exploit knowledge gained from past tasks to improve performance in new tasks ( forward transfer ) , and how can data from new tasks help the agent to perform better on previously learned tasks ( backward transfer ) ? To answer these two questions , first consider a simple problem , which is to find different items in different houses . Here , a single task corresponds to finding items in a specific house . Although items may be stored in different locations in different houses , there still exists some shared information that connects all houses . For instance , a toothbrush is more likely to be found in a bathroom than a kitchen , and a room without a window is more likely to be a bathroom than a living room . Such information can significantly accelerate the search for items in newly encountered houses . We propose that extracting the common structure existing in previously encountered tasks can help the agent quickly learn the dynamics of the new tasks . 
Specifically, this paper considers lifelong RL problems that can be modeled as hidden-parameter MDPs, or HiP-MDPs (Doshi-Velez & Konidaris, 2016; Killian et al., 2017), where variations among the true task dynamics can be described by a set of hidden parameters. We model two main quantities for learning across multiple tasks: the world-model distribution, which describes the probability distribution over tasks, and the task-specific model, which defines the (stochastic) dynamics within a single task. To enable more accurate sequential knowledge transfer, we separate the learning processes for these two quantities and maintain a hierarchical Bayesian posterior to approximate them. The world-model posterior is designed to manage the uncertainty in the world-model distribution, while the task-specific posterior handles the uncertainty in the data collected from only the current task. We propose a Bayesian exploration method for lifelong RL (BLRL) that learns a Bayesian world-model posterior that distills the common structure of previous tasks, and then uses it as a prior to learn a task-specific model in each subsequent task. For the discrete case, we derive an explicit performance bound showing that the task-specific model requires fewer samples to become accurate as the world-model posterior approaches the true underlying world-model distribution. We further develop VBLRL, a more scalable version of BLRL that uses variational inference to approximate the world-model distribution and leverages Bayesian neural networks (BNNs) to build the hierarchical Bayesian posterior. Our experimental results on a set of challenging domains show that our algorithms achieve better forward- and backward-transfer performance than state-of-the-art lifelong RL algorithms within limited samples for each task. 2 BACKGROUND. RL is the problem of maximizing the long-term expected reward of an agent interacting with an environment (Sutton & Barto, 1998).
We usually model the environment as a Markov Decision Process or MDP ( Puterman , 1994 ) , described by a five-tuple 〈S , A , R , T , γ〉 , where S is a finite set of states ; A is a finite set of actions ; R : S × A 7→ [ 0 , 1 ] is a reward function , bounded below by 0 and above by 1 ; T : S × A 7→ Pr ( S ) is a transition function , with T ( s′|s , a ) denoting the probability of arriving in state s′ ∈ S after executing action a ∈ A in state s ; and γ ∈ [ 0 , 1 ) is a discount factor , expressing the agent ’ s preference for immediate over delayed rewards . An MDP is a suitable model for the task facing a single agent . In the lifelong RL setting , the agent instead faces a series of tasks τ1 , ... , τn , each of which can be modeled as an MDP 〈S ( i ) , A ( i ) , R ( i ) , T ( i ) , γ ( i ) 〉 . A key question is how these task MDPs are related ; we model the collection of tasks as a HiP-MDP ( Doshi-Velez & Konidaris , 2016 ; Killian et al. , 2017 ) , where a family of tasks is generated by varying a latent task parameter ω drawn for each task according to the world-model distribution PΩ . Each setting of ω specifies a unique MDP , but the agent neither observes ω nor has access to the function that generates the task family . Formally , then , the dynamics T ( s′|s , a ; ωi ) and reward function R ( r|s , a ; ωi ) for task i depend on ωi ∈ Ω , which is fixed for the duration of the task . For lifelong RL problems , the performance of a specific algorithm is usually evaluated based on both forward transfer and backward transfer results ( Lopez-Paz & Ranzato , 2017 ) : • Forward transfer : the influence that learning task t has on the performance in a future task k ≻ t . • Backward transfer : the influence that learning task t has on the performance in earlier tasks k ≺ t . 3 BAYESIAN EXPLORATION FOR LIFELONG REINFORCEMENT LEARNING . Figure 1 shows our generative model in plate notation ( plates over hidden parameters ωi and MDPs mi for i = 1 , ... , K , and trajectories τj for j = 1 , ... , R , with shared parameters Ψ ) .
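The HiP-MDP generative story above can be made concrete with a toy Python sketch . All names here , and the Gaussian choice for PΩ , are illustrative assumptions rather than the paper ’ s code : each task draws a hidden parameter ω once from the world-model distribution , and ω parameterizes that task ’ s fixed ( and unobserved ) dynamics .

```python
import numpy as np

# Illustrative HiP-MDP sketch: omega ~ P_Omega is drawn once per task and
# stays fixed; the agent observes transitions but never omega itself.
def sample_task(rng):
    omega = rng.normal()  # hidden task parameter, omega ~ P_Omega (toy Gaussian)

    def transition(s, a):
        # Toy deterministic 1-D dynamics parameterized by the hidden omega.
        return s + a + 0.1 * omega

    return omega, transition

rng = np.random.default_rng(0)
omega, T = sample_task(rng)
assert np.isclose(T(1.0, 0.5), 1.5 + 0.1 * omega)
```

Two calls to `sample_task` yield different ω and hence different dynamics , which is exactly the variation across tasks that the world-model posterior is meant to capture .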
Ψ is the parameter set that represents the distribution PΩ . It functions as the world-model posterior that aims to capture the common structure across different tasks . The resulting MDP mi is created based on ωi , which is one hidden parameter sampled from Ψ . We can sample from our approximation of Ψ to create and solve possible MDPs . The proposed BLRL approach is formalized in Algorithm 3 in the appendix . Initially , before any MDPs are experienced , the world-model posterior qΨ ( ·|st , at ) is initialized to an uninformed prior . For each new task mi , we first initialize the task-specific posterior qmiθ ( ·|st , at ) with the parameter values from the current world-model posterior , and then , for each timestep , select actions using a Bayesian exploration algorithm based on sampling from this posterior ( Thompson , 1933 ; Asmuth et al. , 2009 ) . A set of sampled MDPs drawn from qmiθ is a concrete representation of the uncertainty within the current task . BLRL samples K models from the task-specific posterior whenever the number of transitions from a state–action pair has reached a threshold B . Analogously to RMAX ( Brafman & Tennenholtz , 2003 ) and BOSS ( Asmuth et al. , 2009 ) , we call a state–action pair known whenever it has been observed Nst,at = B times . For each state–action pair , if it is known , we use the task-specific posterior to sample the model . If it is unknown , we instead sample from the world-model posterior . These models are combined into a merged MDP m#i , and BLRL solves m#i to get a policy π∗m#i . This approach is adopted from BOSS ( best of sampled set ) to create optimism in the face of uncertainty , and thereby drive exploration . The new policy π∗m#i will be used to interact with the environment until a new state–action pair reaches the sampling threshold .
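The known/unknown sampling rule just described can be sketched for a finite MDP with Dirichlet posteriors ( the conjugate prior the paper uses for the finite case ) . The function names , the threshold value , and the toy counts below are illustrative assumptions , not the authors ’ implementation :

```python
import numpy as np

# Sketch of BLRL's per-(s, a) model sampling: "known" pairs (observed >= B
# times) are sampled from the task-specific posterior; "unknown" pairs fall
# back to the world-model posterior. Posteriors are Dirichlet pseudo-counts.
def sample_merged_transition(s, a, counts, world_alpha, task_alpha, B, rng):
    if counts[s, a] >= B:                    # known: trust current-task data
        return rng.dirichlet(task_alpha[s, a])
    return rng.dirichlet(world_alpha[s, a])  # unknown: use the shared prior

S, A = 2, 2
rng = np.random.default_rng(0)
world_alpha = np.ones((S, A, S))   # world-model posterior pseudo-counts
task_alpha = world_alpha.copy()    # task posterior initialized from it
counts = np.zeros((S, A), dtype=int)

# Observing transitions updates both the counts and the conjugate posterior.
for s_next in [1, 1, 1, 0, 1]:
    counts[0, 0] += 1
    task_alpha[0, 0, s_next] += 1

T_hat = sample_merged_transition(0, 0, counts, world_alpha, task_alpha, B=5, rng=rng)
assert np.isclose(T_hat.sum(), 1.0)  # a valid next-state distribution
```

Because the Dirichlet is conjugate to the multinomial , the update is a pure pseudo-count addition , which is why the paper can describe the posterior update as straightforward to compute .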
The collected transitions from this task will be used to update the task-specific posterior immediately , while the world-model posterior will be updated using transitions from all the previous tasks at a slower pace . For simple finite MDP problems in practice , we use the Dirichlet distribution ( the conjugate prior for the multinomial ) to represent the Bayesian posterior . Thus , the updating process for the posterior is straightforward to compute . Intuitively , BLRL is able to rapidly adapt to new tasks as long as the prior of the task-specific model ( that is , the world-model posterior ) is close to the true underlying model and captures the uncertainty of the common structure of a set of tasks . 3.1 SAMPLE COMPLEXITY ANALYSIS . We now provide a simple theoretical analysis of BLRL . First , we use the setting and results of Zhang ( 2006 ) to describe the properties of the Bayesian prior and how they relate to the sample complexity for the concentration of the Bayesian posterior . Lemma 1 . Let π ( ω ) denote the prior distribution on the parameter space Γ . We consider a set of transition-probability densities p ( ·|ω ) indexed by ω , and the true underlying density q . Define the prior-mass radius of the transition-probability densities as : dπ = inf { d : d ≥ − ln π ( { p ∈ Γ : DKL ( q||p ) ≤ d } ) } . ( 1 ) Intuitively , this quantity measures the distance between the Bayesian prior we use to initialize the posterior and the true underlying distribution . Then , ∀ρ ∈ ( 0 , 1 ) and η ≥ 1 , let εn = ( 1 + 1/n ) η dπ + ( η − ρ ) εupper,n ( ( η − 1 ) / ( η − ρ ) ) , ( 2 ) where εupper,n is the critical upper-bracketing radius ( Zhang , 2006 ) . The decay rate of εupper,n controls the consistency of the Bayesian posterior distribution ( Asmuth et al. , 2009 ) . Setting ρ = 1/2 , we have for all t ≥ 0 and δ ∈ ( 0 , 1 ) , with probability at least 1 − δ , πn ( { p ∈ Γ : ||p − q||²1/2 ≥ 2εn + ( 4η − 2 ) t δ/4 } | X ) ≤ 1 / ( 1 + e^{nt} ) . ( 3 ) Proof ( sketch ) .
The proof is similar to that of Corollary 5.2 of Zhang ( 2006 ) ( see Appendix A.5 ) . Instead of using the critical prior-mass radius επ,n to describe certain characteristics of the Bayesian prior , we define and use the prior-mass radius dπ , which is independent of the sample size n and measures the distance between the prior and the true distribution . Similar to BOSS , for a new MDP m ∼ M with hidden parameters ωm , we can define the Bayesian concentration sample complexity for the task-specific posterior , f ( s , a , ε0 , δ0 , ρ0 ) , as the minimum number c such that , if c IID transitions from ( s , a ) are observed , then , with probability at least 1 − δ0 , Pr_{m∼posterior} ( ||Tm ( s , a , ωm ) − Tm∗ ( s , a , ωm ) ||1 < ε0 ) ≥ 1 − ρ0 . ( 4 ) Lemma 2 . Assume the posterior is consistent ( that is , εupper,n = o ( 1 ) ) and set η = 2 ; then the Bayesian concentration sample complexity for the task-specific posterior is f ( s , a , ε , δ , ρ ) = O ( ( dπ + ln ( 1/ρ ) ) / ( ε²δ ) − dπ ) . Proof ( sketch ) . This bound can be derived by directly combining Lemma 1 and Equation 4 . The above lemma gives an upper bound on the Bayesian concentration sample complexity in terms of the prior-mass radius . We can further combine this result with PAC-MDP theory ( Strehl et al. , 2006 ) and derive the sample complexity of the algorithm for each new task . Theorem 1 . For each new task , set the sample size K = Θ ( ( S²A/δ ) ln ( SA/δ ) ) and the parameters ε0 = ε ( 1 − γ )² , δ0 = δ/( SA ) , ρ0 = δ/( S²A²K ) ; then , with probability at least 1 − 4δ , V^{At} ( st ) ≥ V∗ ( st ) − 4ε0 in all but Õ ( S²A²dπ / ( δ ε³ ( 1 − γ )⁶ ) ) steps , where Õ ( · ) suppresses logarithmic dependence . Proof ( sketch ) . The proof is based on the PAC-MDP theorem ( Strehl et al. , 2009 ) combined with the new bound for the Bayesian concentration sample complexity we derived in Lemma 2 . In general , we use the same process as in BOSS to verify the three required properties of PAC-MDP : optimism , accuracy , and learning complexity .
For each new task , the main difference between BLRL and BOSS is that we use the world-model posterior to initialize the task-specific posterior , which results in a new sample complexity bound based on the prior-mass radius . The result formalizes the intuition that , if we put a larger prior mass at a density that is close to the true q such that dπ is small , the sample complexity of our algorithm will be lower . At the same time , the sample complexity is bounded by polynomial functions of the relevant quantities , showing that our training strategy preserves the properties required by PAC-MDP algorithms ( Strehl et al. , 2009 ) . | The authors propose a hierarchical Bayesian approach for lifelong RL. The global world-model posterior models the world model shared across tasks, and the task-specific model learns the dynamics within a specific task. The task-specific model achieves forward transfer by initializing from the global world model. The authors use a mean-field variational approximation to scale the proposed model. The authors also introduce a sample complexity analysis. The method is evaluated on two toy tasks (grid-world and box jumping) and one on the MuJoCo simulator, and shows superior performance to previous work. | SP:17b45c5e40ae4db57652b45cc1c533c5ca08523f |
Bayesian Exploration for Lifelong Reinforcement Learning | This submission presents an approach for Bayesian model-based exploration in a lifelong RL setting, building upon existing approaches for Bayesian exploration (BOSS) and Bayesian multi-task modeling (HiP-MDPs). The approach keeps separate models for sampling transitions and rewards for each task, and each task model is drawn from a shared prior that models the distribution over tasks. The method continually updates the shared model with data from all observed tasks and the task-specific model with data from the current task. To achieve backward transfer, the approach replaces the task model with the shared model whenever the task model has not been sufficiently trained on a particular state-action pair. | SP:17b45c5e40ae4db57652b45cc1c533c5ca08523f |
On the Adversarial Robustness of Vision Transformers | Following their success in advancing natural language processing and understanding , transformers are expected to bring revolutionary changes to computer vision . This work provides a comprehensive study on the robustness of vision transformers ( ViTs ) against adversarial perturbations . Tested on various white-box and transfer attack settings , we find that ViTs possess better adversarial robustness when compared with convolutional neural networks ( CNNs ) . This observation also holds for certified robustness . We summarize the following main observations contributing to the improved robustness of ViTs : 1 ) Features learned by ViTs contain less low-level information and are more generalizable , which contributes to superior robustness against adversarial perturbations . 2 ) Introducing convolutional or tokens-to-token blocks for learning low-level features in ViTs can improve classification accuracy but at the cost of adversarial robustness . 3 ) Increasing the proportion of transformers in the model structure ( when the model consists of both transformer and CNN blocks ) leads to better robustness . But for a pure transformer model , simply increasing the size or adding layers cannot guarantee a similar effect . 4 ) Pre-training without adversarial training on larger datasets does not significantly improve adversarial robustness , though it is critical for training ViTs . 5 ) Adversarial training is also applicable to ViT for training robust models . Furthermore , feature visualization and frequency analysis are conducted for explanation . The results show that ViTs are less sensitive to high-frequency perturbations than CNNs and there is a high correlation between how well the model learns low-level features and its robustness against different frequency-based perturbations . 1 INTRODUCTION .
Transformers were originally applied to natural language processing ( NLP ) tasks as a type of deep neural network ( DNN ) based mainly on the self-attention mechanism ( Vaswani et al . ( 2017 ) ; Devlin et al . ( 2018 ) ; Brown et al . ( 2020 ) ) , and transformers with large-scale pre-training have achieved state-of-the-art results on many NLP tasks ( Devlin et al . ( 2018 ) ; Liu et al . ( 2019 ) ; Yang et al . ( 2019 ) ; Sun et al . ( 2019 ) ) . Recently , Dosovitskiy et al . ( 2020 ) applied a pure transformer directly to sequences of image patches ( i.e. , a vision transformer , ViT ) and showed that the Transformer itself can be competitive with convolutional neural networks ( CNNs ) on image classification tasks . Since then , transformers have been extended to various vision tasks and show competitive or even better performance than CNNs and recurrent neural networks ( RNNs ) ( Carion et al . ( 2020 ) ; Chen et al . ( 2020 ) ; Zhu et al . ( 2020 ) ) . While ViT and its variants hold promise toward a unified machine learning paradigm and architecture applicable to different data modalities , the robustness of ViT against adversarial perturbations remains unclear , and this robustness is critical for the safe and reliable deployment of many real-world applications . In this work , we examine the adversarial robustness of ViTs on image classification tasks and make comparisons with CNN baselines . As highlighted in Figure 1 ( a ) , our experimental results illustrate the superior robustness of ViTs over CNNs in both white-box and black-box attack settings , based on which we make the following important findings : • Features learned by ViTs contain less low-level information and benefit adversarial robustness . ViTs achieve a lower attack success rate ( ASR ) of 51.9 % compared with a minimum of 83.3 % by CNNs in Figure 1 ( a ) . They are also less sensitive to high-frequency adversarial perturbations . • Using denoised randomized smoothing ( Salman et al .
, 2020 ) , ViTs attain significantly better certified robustness than CNNs . • Improving the classification accuracy of ViTs by introducing blocks that help learn low-level features comes at the cost of adversarial robustness , as shown in Figure 1 ( a ) . • Increasing the proportion of transformer blocks in the model leads to better robustness when the model consists of both transformer and CNN blocks . For example , the attack success rate ( ASR ) decreases from 87.1 % to 79.2 % when 10 additional transformer blocks are added to T2T-ViT-14 . However , increasing the size of a pure transformer model cannot guarantee a similar effect , e.g. , the robustness of ViT-S/16 is better than that of ViT-B/16 in Figure 1 ( a ) . • Pre-training without adversarial training on larger datasets does not improve adversarial robustness , though it is critical for training ViT . • The principle of adversarial training through min-max optimization ( Madry et al . ( 2017 ) ; Zhang et al . ( 2019 ) ) can be applied to train robust ViTs . 2 RELATED WORK . Transformer ( Vaswani et al . ( 2017 ) ) has achieved remarkable performance on many NLP tasks , and its robustness has been studied on those NLP tasks . Hsieh et al . ( 2019 ) ; Jin et al . ( 2020 ) ; Shi & Huang ( 2020 ) ; Li et al . ( 2020 ) ; Garg & Ramakrishnan ( 2020 ) ; Yin et al . ( 2020 ) conducted adversarial attacks on transformers including pre-trained models , and in their experiments transformers usually show better robustness compared to other models based on long short-term memory ( LSTM ) or CNNs , with a theoretical explanation provided in Hsieh et al . ( 2019 ) . However , due to the discrete nature of NLP models , these studies focus on discrete perturbations ( e.g. , word or character substitutions ) which are very different from small and continuous perturbations in computer vision tasks . Besides , Wang et al .
( 2020a ) improved the robustness of pre-trained transformers from an information-theoretic perspective , and Shi et al . ( 2020 ) ; Ye et al . ( 2020 ) ; Xu et al . ( 2020 ) studied the robustness certification of transformer-based models . To the best of our knowledge , this work is one of the first studies that investigates the adversarial robustness ( against small perturbations in the input pixel space ) of transformers on computer vision tasks . There are some concurrent works studying the adversarial robustness of ViTs . We discuss their contributions in Appendix G . 3 MODEL ARCHITECTURES . We first review the architecture of the models investigated in our experiments , including several vision transformers ( ViTs ) and CNN models . A detailed comparison table is given in Table 5 . 3.1 VISION TRANSFORMERS . We consider the original ViT ( Dosovitskiy et al . ( 2020 ) ) and its four variants shown in Figure 1 ( b ) . Vision transformer ( ViT ) and data-efficient image transformer ( DeiT ) : ViT ( Dosovitskiy et al . ( 2020 ) ) mostly follows the original design of Transformer ( Vaswani et al . ( 2017 ) ; Devlin et al . ( 2018 ) ) on language tasks . For a 2D image x ∈ RH×W×C with resolution H × W and C channels , it is divided into a sequence of N = H·W/P² flattened 2D patches of size P × P , xi ∈ R^( P²·C ) ( 1 ≤ i ≤ N ) . The patches are first encoded into patch embeddings with a simple convolutional layer , where the kernel size and stride of the convolution are exactly P × P . In addition , there are also position embeddings to preserve positional information . Similar to BERT ( Devlin et al . ( 2018 ) ) , a large-scale pre-trained model for NLP , a special [ CLS ] token is added to the output features for classification . DeiT ( Touvron et al . ( 2021 ) ) further improves ViT ’ s performance using data augmentation or distillation from CNN teachers with an additional distillation token .
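The patch tokenization just described can be sketched in a few lines of NumPy ( an illustration of the reshaping only , not the paper ’ s code ; the convolutional embedding and position embeddings are omitted ) :

```python
import numpy as np

# Split an H x W x C image into N = (H*W)/P^2 flattened P x P x C patches,
# the token sequence a ViT consumes before the embedding layer.
def image_to_patches(x, P):
    H, W, C = x.shape
    assert H % P == 0 and W % P == 0, "image size must be divisible by P"
    return (x.reshape(H // P, P, W // P, P, C)
             .transpose(0, 2, 1, 3, 4)       # group by patch grid position
             .reshape(-1, P * P * C))        # flatten each patch

x = np.arange(224 * 224 * 3, dtype=np.float32).reshape(224, 224, 3)
tokens = image_to_patches(x, P=16)
assert tokens.shape == (196, 768)  # N = 224*224/16^2 = 196, P^2*C = 768
```

For the standard 224 × 224 input with P = 16 this yields the familiar sequence of 196 tokens of dimension 768 before projection .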
We investigate ViT- { S , B , L } /16 , DeiT-S/16 and Dist-DeiT-B/16 as defined in the corresponding papers in the main text and discuss other structures in Appendix F. Hybrid of CNN and ViT ( CNN-ViT ) : Dosovitskiy et al . ( 2020 ) also proposed a hybrid architecture for ViTs by replacing raw image patches with patches extracted from a CNN feature map . This is equivalent to adding learned CNN blocks to the head of ViT as shown in Figure 1 ( b ) . Following Dosovitskiy et al . ( 2020 ) , we investigate ViT-B/16-Res in our experiments , where the input sequence is obtained by flattening the spatial dimensions of the feature maps from ResNet50 . Hybrid of T2T and ViT ( T2T-ViT ) : Yuan et al . ( 2021 ) proposed to overcome the limitations of the simple tokenization in ViTs by progressively structurizing an image into tokens with a token-to-token ( T2T ) module , which recursively aggregates neighboring tokens into one token such that low-level structures can be better learned . T2T-ViT was shown to perform better than ViT when trained from scratch on a midsize dataset . We investigate T2T-ViT-14 and T2T-ViT-24 in our experiments . Hybrid of shifted windows and ViT ( Swin-T ) : Liu et al . ( 2021 ) compute the representations with a shifted-windows scheme , which brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connections . We investigate Swin-S/4 in the main text and discuss other structures in Appendix F . 3.2 CONVOLUTIONAL NEURAL NETWORKS . We study several CNN models for comparison , including ResNet18 ( He et al . ( 2016 ) ) , ResNet50-32x4d ( He et al . ( 2016 ) ) , ShuffleNet ( Zhang et al . ( 2018 ) ) , MobileNet ( Howard et al . ( 2017 ) ) and VGG16 ( Simonyan & Zisserman ( 2014 ) ) . We also consider the SEResNet50 model , which uses the Squeeze-and-Excitation ( SE ) block ( Hu et al .
( 2018 ) ) that applies attention to channel dimensions to fuse both spatial and channel-wise information within local receptive fields at each layer . The aforementioned CNNs are all trained on ImageNet from scratch . For a better comparison with pre-trained transformers , we also consider two CNN models pre-trained on larger datasets : ResNeXt-32x4d-ssl ( Yalniz et al . ( 2019 ) ) pre-trained on YFCC100M ( Thomee et al . ( 2015 ) ) , and ResNet50-swsl pre-trained on IG-1B-Targeted ( Mahajan et al . ( 2018 ) ) using semi-weakly supervised methods ( Yalniz et al . ( 2019 ) ) . They are both fine-tuned on ImageNet . 4 ADVERSARIAL ROBUSTNESS EVALUATION METHODS . We consider the commonly used ℓ∞-norm bounded adversarial attacks to evaluate the robustness of target models . An ℓ∞ attack is usually formulated as solving a constrained optimization problem : max_{xadv} L ( xadv , y ) s.t . ||xadv − x0||∞ ≤ ε , ( 1 ) where x0 is a clean example with label y , and we aim to find an adversarial example xadv within an ℓ∞ ball with radius ε centered at x0 , such that the loss of the classifier L ( xadv , y ) is maximized . We consider untargeted attacks in this paper , so an attack is successful if the perturbation successfully changes the model ’ s prediction . The attacks as well as a randomized smoothing method used in this paper are listed below . White-box attack Four white-box attacks are involved in our experiments . The Projected Gradient Descent ( PGD ) attack ( Madry et al . ( 2017 ) ) solves Eq . 1 by iteratively taking gradient ascent : x^adv_{t+1} = Clip_{x0,ε} ( x^adv_t + α · sgn ( ∇x J ( x^adv_t , y ) ) ) , ( 2 ) where x^adv_t stands for the solution after t iterations , and Clip_{x0,ε} ( · ) denotes clipping the values so that each x^adv_{t+1,i} lies within [ x0,i − ε , x0,i + ε ] , according to the ℓ∞ threat model . As a special case , the Fast Gradient Sign Method ( FGSM ) ( Goodfellow et al . ( 2014 ) ) uses a single iteration , i.e. , t = 1 .
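The PGD iteration of Eq . 2 can be sketched as a short NumPy routine . This is an illustrative toy rather than the paper ’ s implementation : `grad_fn` stands in for ∇x J , and the constant-gradient example is an assumption made only so the loop is easy to follow .

```python
import numpy as np

# Minimal l_inf PGD sketch (Eq. 2): ascend along the sign of the gradient,
# then project back into the l_inf ball of radius eps around x0.
def pgd_linf(x0, grad_fn, eps, alpha, steps):
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))
        x = np.clip(x, x0 - eps, x0 + eps)  # the Clip_{x0,eps} projection
    return x

x0 = np.zeros(4)
w = np.array([1.0, -2.0, 0.5, 0.0])          # toy constant "gradient"
adv = pgd_linf(x0, grad_fn=lambda x: w, eps=0.1, alpha=0.05, steps=10)
assert np.max(np.abs(adv - x0)) <= 0.1 + 1e-12
```

Setting `steps=1` and `alpha=eps` recovers the FGSM special case mentioned above .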
AutoAttack ( Croce & Hein ( 2020 ) ) is currently the strongest white-box attack , which evaluates adversarial robustness with a parameter-free ensemble of diverse attacks . We also design a frequency-based attack for analysis , which conducts the attack under an additional frequency constraint : $$x^{adv}_{freq} = \mathrm{IDCT} ( \mathrm{DCT} ( x^{adv}_{pgd} - x_0 ) \odot M_f ) + x_0 , \qquad ( 3 )$$ where DCT and IDCT stand for the discrete cosine transform and inverse discrete cosine transform respectively , $x^{adv}_{pgd}$ stands for the adversarial example generated by PGD , and $M_f$ stands for the mask matrix defined by the frequency filter , which is illustrated in Appendix B . We found this design similar to Wang et al . ( 2020b ) . Black-box attack We consider the transfer attack , which studies whether an adversarial perturbation generated by attacking the source model can successfully fool the target model . This test not only evaluates the robustness of models under the black-box setting , but also serves as a sanity check for detecting the obfuscated gradient phenomenon ( Athalye et al . ( 2018 ) ) . Previous works have demonstrated that single-step attacks like FGSM enjoy better transferability than multi-step attacks ( Kurakin et al . ( 2017 ) ) . We thus use FGSM for the transfer attack in our experiments . Denoised Randomized Smoothing We also evaluate the certified robustness of the models using randomized smoothing , where the robustness is measured as the certified radius , and the model is certified to be robust with high probability for perturbations within the radius . We follow Salman et al . ( 2020 ) to train a DnCNN ( Zhang et al . ( 2017 ) ) denoiser $D_\theta$ for each pre-trained model $f$ with the “ stability ” objective , with $L_{CE}$ denoting the cross-entropy loss and $\mathcal{N}$ denoting the Gaussian distribution : $$L_{Stab} = \mathbb{E}_{ ( x_i , y_i ) \in D , \, \delta } \, L_{CE} ( f ( D_\theta ( x_i + \delta ) ) , f ( x_i ) ) \quad \text{where} \quad \delta \sim \mathcal{N} ( 0 , \sigma^2 I ) .$$
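A sketch of the frequency projection in Eq. 3, assuming the masking by $M_f$ is elementwise and using a hand-rolled orthonormal DCT-II for single-channel n×n inputs (a real implementation would typically use a library DCT per channel):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def frequency_masked_perturbation(x_adv, x0, mask):
    """Keep only the frequency components of the PGD perturbation selected by `mask`."""
    n = x0.shape[0]
    C = dct_matrix(n)
    delta = x_adv - x0
    freq = C @ delta @ C.T                # 2D DCT of the perturbation
    return C.T @ (freq * mask) @ C + x0   # inverse DCT of the masked spectrum
```

With an all-ones mask this is an exact round trip (the DCT matrix is orthogonal), so the unmasked PGD example is recovered.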
( 4 ) Randomized smoothing is applied on the denoised classifier $f \circ D_\theta$ for robustness certification : $$g ( x ) = \arg\max_{c \in \mathcal{Y}} \, \mathbb{P} [ f ( D_\theta ( x + \delta ) ) = c ] \quad \text{where} \quad \delta \sim \mathcal{N} ( 0 , \sigma^2 I ) . \qquad ( 5 )$$ The certified radius is then calculated for the smoothed classifier as ( Cohen et al. , 2019 ) : $$R = \frac{\sigma}{2} \big( \Phi^{-1} ( p_A ) - \Phi^{-1} ( p_B ) \big) , \qquad ( 6 )$$ where $\Phi^{-1}$ is the inverse of the standard Gaussian CDF , $p_A = \mathbb{P} ( f ( x + \delta ) = c_A )$ is the confidence of the top-1 predicted class $c_A$ , and $p_B = \max_{c \neq c_A} \mathbb{P} ( f ( x + \delta ) = c )$ is the confidence of the second top class . Accordingly , given a perturbation radius , the certified accuracy under this perturbation radius can be evaluated by comparing the given radius to the certified radius $R$ . | This paper studies the adversarial robustness of ViTs. There are several significant strengths and weaknesses in this paper. Especially, the novelty of this paper against one published paper is limited. It would be good to see my detailed comments below. | SP:3dac13a03632e175ec1363d6295970f3bd55a0f5 |
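The certified radius in Eq. 6 is straightforward to compute once $p_A$ and $p_B$ are estimated (in practice via Monte-Carlo sampling with confidence bounds); a minimal helper using Python's standard library:

```python
from statistics import NormalDist

def certified_radius(p_a, p_b, sigma):
    """Certified radius of a randomized-smoothing classifier (Eq. 6, Cohen et al. 2019).

    p_a: confidence of the top-1 class, p_b: confidence of the runner-up class,
    sigma: standard deviation of the Gaussian smoothing noise.
    """
    phi_inv = NormalDist().inv_cdf  # inverse of the standard Gaussian CDF
    return (sigma / 2.0) * (phi_inv(p_a) - phi_inv(p_b))
```

Note the radius grows with the margin between the two class confidences and shrinks to zero when the classes are tied.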
On the Adversarial Robustness of Vision Transformers | Following the success in advancing natural language processing and understanding , transformers are expected to bring revolutionary changes to computer vision . This work provides a comprehensive study on the robustness of vision transformers ( ViTs ) against adversarial perturbations . Tested on various white-box and transfer attack settings , we find that ViTs possess better adversarial robustness when compared with convolutional neural networks ( CNNs ) . This observation also holds for certified robustness . We summarize the following main observations contributing to the improved robustness of ViTs : 1 ) Features learned by ViTs contain less low-level information and are more generalizable , which contributes to superior robustness against adversarial perturbations . 2 ) Introducing convolutional or tokens-to-token blocks for learning low-level features in ViTs can improve classification accuracy but at the cost of adversarial robustness . 3 ) Increasing the proportion of transformers in the model structure ( when the model consists of both transformer and CNN blocks ) leads to better robustness . But for a pure transformer model , simply increasing the size or adding layers can not guarantee a similar effect . 4 ) Pre-training without adversarial training on larger datasets does not significantly improve adversarial robustness though it is critical for training ViTs . 5 ) Adversarial training is also applicable to ViT for training robust models . Furthermore , feature visualization and frequency analysis are conducted for explanation . The results show that ViTs are less sensitive to high-frequency perturbations than CNNs and there is a high correlation between how well the model learns low-level features and its robustness against different frequency-based perturbations . 1 INTRODUCTION . 
Transformers were originally applied to natural language processing ( NLP ) tasks as a type of deep neural network ( DNN ) mainly based on the self-attention mechanism ( Vaswani et al . ( 2017 ) ; Devlin et al . ( 2018 ) ; Brown et al . ( 2020 ) ) , and transformers with large-scale pre-training have achieved state-of-the-art results on many NLP tasks ( Devlin et al . ( 2018 ) ; Liu et al . ( 2019 ) ; Yang et al . ( 2019 ) ; Sun et al . ( 2019 ) ) . Recently , Dosovitskiy et al . ( 2020 ) applied a pure transformer directly to sequences of image patches ( i.e. , a vision transformer , ViT ) and showed that the Transformer itself can be competitive with convolutional neural networks ( CNNs ) on image classification tasks . Since then , transformers have been extended to various vision tasks and show competitive or even better performance compared to CNNs and recurrent neural networks ( RNNs ) ( Carion et al . ( 2020 ) ; Chen et al . ( 2020 ) ; Zhu et al . ( 2020 ) ) . While ViT and its variants hold promise toward a unified machine learning paradigm and architecture applicable to different data modalities , the robustness of ViTs against adversarial perturbations remains unclear , which is critical for safe and reliable deployment in many real-world applications . In this work , we examine the adversarial robustness of ViTs on image classification tasks and make comparisons with CNN baselines . As highlighted in Figure 1 ( a ) , our experimental results illustrate the superior robustness of ViTs over CNNs in both white-box and black-box attack settings , based on which we make the following important findings : • Features learned by ViTs contain less low-level information and benefit adversarial robustness . ViTs achieve a lower attack success rate ( ASR ) of 51.9 % compared with a minimum of 83.3 % by CNNs in Figure 1 ( a ) . They are also less sensitive to high-frequency adversarial perturbations . • Using denoised randomized smoothing ( Salman et al.
, 2020 ) , ViTs attain significantly better certified robustness than CNNs . • Improving the classification accuracy of ViTs by introducing blocks that help learn low-level features comes at the cost of adversarial robustness , as shown in Figure 1 ( a ) . • Increasing the proportion of transformer blocks in the model leads to better robustness when the model consists of both transformer and CNN blocks . For example , the attack success rate ( ASR ) decreases from 87.1 % to 79.2 % when 10 additional transformer blocks are added to T2T-ViT-14 . However , increasing the size of a pure transformer model cannot guarantee a similar effect , e.g. , the robustness of ViT-S/16 is better than that of ViT-B/16 in Figure 1 ( a ) . • Pre-training without adversarial training on larger datasets does not improve adversarial robustness though it is critical for training ViT . • The principle of adversarial training through min-max optimization ( Madry et al . ( 2017 ) ; Zhang et al . ( 2019 ) ) can be applied to train robust ViTs . 2 RELATED WORK . The Transformer ( Vaswani et al . ( 2017 ) ) has achieved remarkable performance on many NLP tasks , and its robustness has been studied on those NLP tasks . Hsieh et al . ( 2019 ) ; Jin et al . ( 2020 ) ; Shi & Huang ( 2020 ) ; Li et al . ( 2020 ) ; Garg & Ramakrishnan ( 2020 ) ; Yin et al . ( 2020 ) conducted adversarial attacks on transformers including pre-trained models , and in their experiments transformers usually show better robustness compared to other models based on long short-term memory ( LSTM ) or CNNs , with a theoretical explanation provided in Hsieh et al . ( 2019 ) . However , due to the discrete nature of NLP models , these studies focus on discrete perturbations ( e.g. , word or character substitutions ) which are very different from small and continuous perturbations in computer vision tasks . Besides , Wang et al .
( 2020a ) improved the robustness of pre-trained transformers from an information-theoretic perspective , and Shi et al . ( 2020 ) ; Ye et al . ( 2020 ) ; Xu et al . ( 2020 ) studied the robustness certification of transformer-based models . To the best of our knowledge , this work is one of the first studies that investigates the adversarial robustness ( against small perturbations in the input pixel space ) of transformers on computer vision tasks . There are some concurrent works studying the adversarial robustness of ViTs . We supplement their contribution in Appendix G . 3 MODEL ARCHITECTURES . We first review the architecture of models investigated in our experiments , including several vision transformers ( ViTs ) and CNN models . A detailed table of comparison is given in Table 5 . 3.1 VISION TRANSFORMERS . We consider the original ViT ( Dosovitskiy et al . ( 2020 ) ) and its four variants shown in Figure 1 ( b ) . Vision transformer ( ViT ) and data-efficient image transformer ( DeiT ) : ViT ( Dosovitskiy et al . ( 2020 ) ) mostly follows the original design of the Transformer ( Vaswani et al . ( 2017 ) ; Devlin et al . ( 2018 ) ) on language tasks . For a 2D image $x \in \mathbb{R}^{H \times W \times C}$ with resolution $H \times W$ and $C$ channels , it is divided into a sequence of $N = \frac{H \cdot W}{P^2}$ flattened 2D patches of size $P \times P$ , $x_i \in \mathbb{R}^{P^2 \cdot C}$ ( $1 \le i \le N$ ) . The patches are first encoded into patch embeddings with a simple convolutional layer , where the kernel size and stride of the convolution are exactly $P \times P$ . In addition , there are also position embeddings to preserve positional information . Similar to BERT ( Devlin et al . ( 2018 ) ) , a large-scale pre-trained model for NLP , a special [ CLS ] token is added to output features for classification . DeiT ( Touvron et al . ( 2021 ) ) further improves ViT ’ s performance using data augmentation or distillation from CNN teachers with an additional distillation token .
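The tokenization step described above (N = HW/P² flattened patches of dimension P²·C) can be sketched in NumPy; the 224×224 input and patch size 16 below match ViT-B/16:

```python
import numpy as np

def patchify(image, patch_size):
    """Split an H x W x C image into N = (H*W)/P^2 flattened P x P patches."""
    H, W, C = image.shape
    P = patch_size
    assert H % P == 0 and W % P == 0
    return (image.reshape(H // P, P, W // P, P, C)
                 .transpose(0, 2, 1, 3, 4)    # (H/P, W/P, P, P, C): group by patch
                 .reshape(-1, P * P * C))     # (N, P^2 * C): one row per patch

img = np.zeros((224, 224, 3), dtype=np.float32)
tokens = patchify(img, 16)
print(tokens.shape)  # (196, 768): N = 224*224 / 16^2 = 196 patches of dim 16*16*3
```

In ViT, a convolution with kernel size and stride P performs this split and the linear embedding in one step.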
We investigate ViT- { S , B , L } /16 , DeiT-S/16 and Dist-DeiT-B/16 as defined in the corresponding papers in the main text and discuss other structures in Appendix F. Hybrid of CNN and ViT ( CNN-ViT ) : Dosovitskiy et al . ( 2020 ) also proposed a hybrid architecture for ViTs by replacing raw image patches with patches extracted from a CNN feature map . This is equivalent to adding learned CNN blocks to the head of ViT as shown in Figure 1 ( b ) . Following Dosovitskiy et al . ( 2020 ) , we investigate ViT-B/16-Res in our experiments , where the input sequence is obtained by flattening the spatial dimensions of the feature maps from ResNet50 . Hybrid of T2T and ViT ( T2T-ViT ) : Yuan et al . ( 2021 ) proposed to overcome the limitations of the simple tokenization in ViTs by progressively structurizing an image to tokens with a token-to-token ( T2T ) module , which recursively aggregates neighboring tokens into one token such that low-level structures can be better learned . T2T-ViT was shown to perform better than ViT when trained from scratch on a midsize dataset . We investigate T2T-ViT-14 and T2T-ViT-24 in our experiments . Hybrid of shifted windows and ViT ( Swin-T ) : Liu et al . ( 2021 ) compute the representations with a shifted windows scheme , which brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection . We investigate Swin-S/4 in the main text and discuss other structures in Appendix F . 3.2 CONVOLUTIONAL NEURAL NETWORKS . We study several CNN models for comparison , including ResNet18 ( He et al . ( 2016 ) ) , ResNet50-32x4d ( He et al . ( 2016 ) ) , ShuffleNet ( Zhang et al . ( 2018 ) ) , MobileNet ( Howard et al . ( 2017 ) ) and VGG16 ( Simonyan & Zisserman ( 2014 ) ) . We also consider the SEResNet50 model , which uses the Squeeze-and-Excitation ( SE ) block ( Hu et al .
( 2018 ) ) that applies attention to channel dimensions to fuse both spatial and channel-wise information within local receptive fields at each layer . The aforementioned CNNs are all trained on ImageNet from scratch . For a better comparison with pre-trained transformers , we also consider two CNN models pre-trained on larger datasets : ResNeXt-32x4d-ssl ( Yalniz et al . ( 2019 ) ) pre-trained on YFCC100M ( Thomee et al . ( 2015 ) ) , and ResNet50-swsl pre-trained on IG-1B-Targeted ( Mahajan et al . ( 2018 ) ) using semi-weakly supervised methods ( Yalniz et al . ( 2019 ) ) . They are both fine-tuned on ImageNet . 4 ADVERSARIAL ROBUSTNESS EVALUATION METHODS . We consider the commonly used $\ell_\infty$-norm bounded adversarial attacks to evaluate the robustness of target models . An $\ell_\infty$ attack is usually formulated as solving a constrained optimization problem : $$\max_{x^{adv}} L ( x^{adv} , y ) \quad \text{s.t.} \quad \| x^{adv} - x_0 \|_\infty \le \epsilon , \qquad ( 1 )$$ where $x_0$ is a clean example with label $y$ , and we aim to find an adversarial example $x^{adv}$ within an $\ell_\infty$ ball with radius $\epsilon$ centered at $x_0$ , such that the loss of the classifier $L ( x^{adv} , y )$ is maximized . We consider untargeted attacks in this paper , so an attack is successful if the perturbation successfully changes the model ’ s prediction . The attacks as well as a randomized smoothing method used in this paper are listed below . White-box attack Four white-box attacks are involved in our experiments . The Projected Gradient Descent ( PGD ) attack ( Madry et al . ( 2017 ) ) solves Eq . 1 by iteratively taking gradient ascent steps : $$x^{adv}_{t+1} = \mathrm{Clip}_{x_0 , \epsilon} \big( x^{adv}_t + \alpha \cdot \mathrm{sgn} ( \nabla_x J ( x^{adv}_t , y ) ) \big) , \qquad ( 2 )$$ where $x^{adv}_t$ stands for the solution after $t$ iterations , and $\mathrm{Clip}_{x_0 , \epsilon} ( \cdot )$ denotes clipping the values to keep each $x^{adv}_{t+1 , i}$ within $[ x_{0 , i} - \epsilon , x_{0 , i} + \epsilon ]$ , according to the $\ell_\infty$ threat model . As a special case , the Fast Gradient Sign Method ( FGSM ) ( Goodfellow et al . ( 2014 ) ) uses a single iteration with $t = 1$ .
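Since an untargeted attack counts as successful when it changes the model's prediction, an attack success rate (ASR) can be computed as below. This is a sketch; conventions differ on whether to restrict to inputs the model classifies correctly in the first place, and the convention shown here does:

```python
import numpy as np

def attack_success_rate(preds_clean, preds_adv, labels):
    """Untargeted ASR: among inputs classified correctly on clean data,
    the fraction whose prediction the adversarial perturbation flips."""
    correct = preds_clean == labels
    flipped = correct & (preds_adv != labels)
    return flipped.sum() / max(correct.sum(), 1)
```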
AutoAttack ( Croce & Hein ( 2020 ) ) is currently the strongest white-box attack , which evaluates adversarial robustness with a parameter-free ensemble of diverse attacks . We also design a frequency-based attack for analysis , which conducts the attack under an additional frequency constraint : $$x^{adv}_{freq} = \mathrm{IDCT} ( \mathrm{DCT} ( x^{adv}_{pgd} - x_0 ) \odot M_f ) + x_0 , \qquad ( 3 )$$ where DCT and IDCT stand for the discrete cosine transform and inverse discrete cosine transform respectively , $x^{adv}_{pgd}$ stands for the adversarial example generated by PGD , and $M_f$ stands for the mask matrix defined by the frequency filter , which is illustrated in Appendix B . We found this design similar to Wang et al . ( 2020b ) . Black-box attack We consider the transfer attack , which studies whether an adversarial perturbation generated by attacking the source model can successfully fool the target model . This test not only evaluates the robustness of models under the black-box setting , but also serves as a sanity check for detecting the obfuscated gradient phenomenon ( Athalye et al . ( 2018 ) ) . Previous works have demonstrated that single-step attacks like FGSM enjoy better transferability than multi-step attacks ( Kurakin et al . ( 2017 ) ) . We thus use FGSM for the transfer attack in our experiments . Denoised Randomized Smoothing We also evaluate the certified robustness of the models using randomized smoothing , where the robustness is measured as the certified radius , and the model is certified to be robust with high probability for perturbations within the radius . We follow Salman et al . ( 2020 ) to train a DnCNN ( Zhang et al . ( 2017 ) ) denoiser $D_\theta$ for each pre-trained model $f$ with the “ stability ” objective , with $L_{CE}$ denoting the cross-entropy loss and $\mathcal{N}$ denoting the Gaussian distribution : $$L_{Stab} = \mathbb{E}_{ ( x_i , y_i ) \in D , \, \delta } \, L_{CE} ( f ( D_\theta ( x_i + \delta ) ) , f ( x_i ) ) \quad \text{where} \quad \delta \sim \mathcal{N} ( 0 , \sigma^2 I ) .$$
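The paper's actual frequency filter is defined in its Appendix B, which is not shown here. As a stand-in illustration, a simple low-pass DCT-domain mask $M_f$ can be built by keeping coefficients whose index sum falls below a hypothetical `cutoff` (low DCT indices correspond to low spatial frequencies):

```python
import numpy as np

def lowpass_mask(n, cutoff):
    """Binary n x n DCT-domain mask keeping coefficients with index sum below cutoff."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return (i + j < cutoff).astype(float)
```

Swapping the inequality yields the complementary high-pass mask used to probe sensitivity to high-frequency perturbations.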
( 4 ) Randomized smoothing is applied on the denoised classifier $f \circ D_\theta$ for robustness certification : $$g ( x ) = \arg\max_{c \in \mathcal{Y}} \, \mathbb{P} [ f ( D_\theta ( x + \delta ) ) = c ] \quad \text{where} \quad \delta \sim \mathcal{N} ( 0 , \sigma^2 I ) . \qquad ( 5 )$$ The certified radius is then calculated for the smoothed classifier as ( Cohen et al. , 2019 ) : $$R = \frac{\sigma}{2} \big( \Phi^{-1} ( p_A ) - \Phi^{-1} ( p_B ) \big) , \qquad ( 6 )$$ where $\Phi^{-1}$ is the inverse of the standard Gaussian CDF , $p_A = \mathbb{P} ( f ( x + \delta ) = c_A )$ is the confidence of the top-1 predicted class $c_A$ , and $p_B = \max_{c \neq c_A} \mathbb{P} ( f ( x + \delta ) = c )$ is the confidence of the second top class . Accordingly , given a perturbation radius , the certified accuracy under this perturbation radius can be evaluated by comparing the given radius to the certified radius $R$ . | This paper provides a comprehensive study on the robustness of ViTs against adversarial perturbations. The authors found that 1) ViTs has better adversarial robustness than convolutional neural networks; 2) Introducing convolutional or tokens-to-token blocks can improve the classification accuracy but at the cost of the adversarial robustness; 3) More proportion of transformers has better robustness; 4) Pre-training on larger datasets does not improve adversarial robustness; 5) Adversarial training is applicable to ViTs. In addition, many experiments verify the findings on white-box, transfer attack settings and adversarial training. | SP:3dac13a03632e175ec1363d6295970f3bd55a0f5 |
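The last evaluation step described before the review — comparing a given perturbation radius against each example's certified radius R — amounts to the following computation (a sketch; `radii` and `correct` would come from the certification procedure):

```python
import numpy as np

def certified_accuracy(radii, correct, r):
    """Fraction of examples that are correctly classified and certified at radius >= r."""
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return float(np.mean((radii >= r) & correct))
```

Sweeping `r` over a grid produces the usual certified-accuracy curve.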
On the Adversarial Robustness of Vision Transformers | Following the success in advancing natural language processing and understanding , transformers are expected to bring revolutionary changes to computer vision . This work provides a comprehensive study on the robustness of vision transformers ( ViTs ) against adversarial perturbations . Tested on various white-box and transfer attack settings , we find that ViTs possess better adversarial robustness when compared with convolutional neural networks ( CNNs ) . This observation also holds for certified robustness . We summarize the following main observations contributing to the improved robustness of ViTs : 1 ) Features learned by ViTs contain less low-level information and are more generalizable , which contributes to superior robustness against adversarial perturbations . 2 ) Introducing convolutional or tokens-to-token blocks for learning low-level features in ViTs can improve classification accuracy but at the cost of adversarial robustness . 3 ) Increasing the proportion of transformers in the model structure ( when the model consists of both transformer and CNN blocks ) leads to better robustness . But for a pure transformer model , simply increasing the size or adding layers can not guarantee a similar effect . 4 ) Pre-training without adversarial training on larger datasets does not significantly improve adversarial robustness though it is critical for training ViTs . 5 ) Adversarial training is also applicable to ViT for training robust models . Furthermore , feature visualization and frequency analysis are conducted for explanation . The results show that ViTs are less sensitive to high-frequency perturbations than CNNs and there is a high correlation between how well the model learns low-level features and its robustness against different frequency-based perturbations . 1 INTRODUCTION . 
Transformers were originally applied to natural language processing ( NLP ) tasks as a type of deep neural network ( DNN ) mainly based on the self-attention mechanism ( Vaswani et al . ( 2017 ) ; Devlin et al . ( 2018 ) ; Brown et al . ( 2020 ) ) , and transformers with large-scale pre-training have achieved state-of-the-art results on many NLP tasks ( Devlin et al . ( 2018 ) ; Liu et al . ( 2019 ) ; Yang et al . ( 2019 ) ; Sun et al . ( 2019 ) ) . Recently , Dosovitskiy et al . ( 2020 ) applied a pure transformer directly to sequences of image patches ( i.e. , a vision transformer , ViT ) and showed that the Transformer itself can be competitive with convolutional neural networks ( CNNs ) on image classification tasks . Since then , transformers have been extended to various vision tasks and show competitive or even better performance compared to CNNs and recurrent neural networks ( RNNs ) ( Carion et al . ( 2020 ) ; Chen et al . ( 2020 ) ; Zhu et al . ( 2020 ) ) . While ViT and its variants hold promise toward a unified machine learning paradigm and architecture applicable to different data modalities , the robustness of ViTs against adversarial perturbations remains unclear , which is critical for safe and reliable deployment in many real-world applications . In this work , we examine the adversarial robustness of ViTs on image classification tasks and make comparisons with CNN baselines . As highlighted in Figure 1 ( a ) , our experimental results illustrate the superior robustness of ViTs over CNNs in both white-box and black-box attack settings , based on which we make the following important findings : • Features learned by ViTs contain less low-level information and benefit adversarial robustness . ViTs achieve a lower attack success rate ( ASR ) of 51.9 % compared with a minimum of 83.3 % by CNNs in Figure 1 ( a ) . They are also less sensitive to high-frequency adversarial perturbations . • Using denoised randomized smoothing ( Salman et al.
, 2020 ) , ViTs attain significantly better certified robustness than CNNs . • Improving the classification accuracy of ViTs by introducing blocks that help learn low-level features comes at the cost of adversarial robustness , as shown in Figure 1 ( a ) . • Increasing the proportion of transformer blocks in the model leads to better robustness when the model consists of both transformer and CNN blocks . For example , the attack success rate ( ASR ) decreases from 87.1 % to 79.2 % when 10 additional transformer blocks are added to T2T-ViT-14 . However , increasing the size of a pure transformer model cannot guarantee a similar effect , e.g. , the robustness of ViT-S/16 is better than that of ViT-B/16 in Figure 1 ( a ) . • Pre-training without adversarial training on larger datasets does not improve adversarial robustness though it is critical for training ViT . • The principle of adversarial training through min-max optimization ( Madry et al . ( 2017 ) ; Zhang et al . ( 2019 ) ) can be applied to train robust ViTs . 2 RELATED WORK . The Transformer ( Vaswani et al . ( 2017 ) ) has achieved remarkable performance on many NLP tasks , and its robustness has been studied on those NLP tasks . Hsieh et al . ( 2019 ) ; Jin et al . ( 2020 ) ; Shi & Huang ( 2020 ) ; Li et al . ( 2020 ) ; Garg & Ramakrishnan ( 2020 ) ; Yin et al . ( 2020 ) conducted adversarial attacks on transformers including pre-trained models , and in their experiments transformers usually show better robustness compared to other models based on long short-term memory ( LSTM ) or CNNs , with a theoretical explanation provided in Hsieh et al . ( 2019 ) . However , due to the discrete nature of NLP models , these studies focus on discrete perturbations ( e.g. , word or character substitutions ) which are very different from small and continuous perturbations in computer vision tasks . Besides , Wang et al .
( 2020a ) improved the robustness of pre-trained transformers from an information-theoretic perspective , and Shi et al . ( 2020 ) ; Ye et al . ( 2020 ) ; Xu et al . ( 2020 ) studied the robustness certification of transformer-based models . To the best of our knowledge , this work is one of the first studies that investigates the adversarial robustness ( against small perturbations in the input pixel space ) of transformers on computer vision tasks . There are some concurrent works studying the adversarial robustness of ViTs . We supplement their contribution in Appendix G . 3 MODEL ARCHITECTURES . We first review the architecture of models investigated in our experiments , including several vision transformers ( ViTs ) and CNN models . A detailed table of comparison is given in Table 5 . 3.1 VISION TRANSFORMERS . We consider the original ViT ( Dosovitskiy et al . ( 2020 ) ) and its four variants shown in Figure 1 ( b ) . Vision transformer ( ViT ) and data-efficient image transformer ( DeiT ) : ViT ( Dosovitskiy et al . ( 2020 ) ) mostly follows the original design of the Transformer ( Vaswani et al . ( 2017 ) ; Devlin et al . ( 2018 ) ) on language tasks . For a 2D image $x \in \mathbb{R}^{H \times W \times C}$ with resolution $H \times W$ and $C$ channels , it is divided into a sequence of $N = \frac{H \cdot W}{P^2}$ flattened 2D patches of size $P \times P$ , $x_i \in \mathbb{R}^{P^2 \cdot C}$ ( $1 \le i \le N$ ) . The patches are first encoded into patch embeddings with a simple convolutional layer , where the kernel size and stride of the convolution are exactly $P \times P$ . In addition , there are also position embeddings to preserve positional information . Similar to BERT ( Devlin et al . ( 2018 ) ) , a large-scale pre-trained model for NLP , a special [ CLS ] token is added to output features for classification . DeiT ( Touvron et al . ( 2021 ) ) further improves ViT ’ s performance using data augmentation or distillation from CNN teachers with an additional distillation token .
We investigate ViT- { S , B , L } /16 , DeiT-S/16 and Dist-DeiT-B/16 as defined in the corresponding papers in the main text and discuss other structures in Appendix F. Hybrid of CNN and ViT ( CNN-ViT ) : Dosovitskiy et al . ( 2020 ) also proposed a hybrid architecture for ViTs by replacing raw image patches with patches extracted from a CNN feature map . This is equivalent to adding learned CNN blocks to the head of ViT as shown in Figure 1 ( b ) . Following Dosovitskiy et al . ( 2020 ) , we investigate ViT-B/16-Res in our experiments , where the input sequence is obtained by flattening the spatial dimensions of the feature maps from ResNet50 . Hybrid of T2T and ViT ( T2T-ViT ) : Yuan et al . ( 2021 ) proposed to overcome the limitations of the simple tokenization in ViTs by progressively structurizing an image to tokens with a token-to-token ( T2T ) module , which recursively aggregates neighboring tokens into one token such that low-level structures can be better learned . T2T-ViT was shown to perform better than ViT when trained from scratch on a midsize dataset . We investigate T2T-ViT-14 and T2T-ViT-24 in our experiments . Hybrid of shifted windows and ViT ( Swin-T ) : Liu et al . ( 2021 ) compute the representations with a shifted windows scheme , which brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection . We investigate Swin-S/4 in the main text and discuss other structures in Appendix F . 3.2 CONVOLUTIONAL NEURAL NETWORKS . We study several CNN models for comparison , including ResNet18 ( He et al . ( 2016 ) ) , ResNet50-32x4d ( He et al . ( 2016 ) ) , ShuffleNet ( Zhang et al . ( 2018 ) ) , MobileNet ( Howard et al . ( 2017 ) ) and VGG16 ( Simonyan & Zisserman ( 2014 ) ) . We also consider the SEResNet50 model , which uses the Squeeze-and-Excitation ( SE ) block ( Hu et al .
( 2018 ) ) that applies attention to channel dimensions to fuse both spatial and channel-wise information within local receptive fields at each layer . The aforementioned CNNs are all trained on ImageNet from scratch . For a better comparison with pre-trained transformers , we also consider two CNN models pre-trained on larger datasets : ResNeXt-32x4d-ssl ( Yalniz et al . ( 2019 ) ) pre-trained on YFCC100M ( Thomee et al . ( 2015 ) ) , and ResNet50-swsl pre-trained on IG-1B-Targeted ( Mahajan et al . ( 2018 ) ) using semi-weakly supervised methods ( Yalniz et al . ( 2019 ) ) . They are both fine-tuned on ImageNet . 4 ADVERSARIAL ROBUSTNESS EVALUATION METHODS . We consider the commonly used $\ell_\infty$-norm bounded adversarial attacks to evaluate the robustness of target models . An $\ell_\infty$ attack is usually formulated as solving a constrained optimization problem : $$\max_{x^{adv}} L ( x^{adv} , y ) \quad \text{s.t.} \quad \| x^{adv} - x_0 \|_\infty \le \epsilon , \qquad ( 1 )$$ where $x_0$ is a clean example with label $y$ , and we aim to find an adversarial example $x^{adv}$ within an $\ell_\infty$ ball with radius $\epsilon$ centered at $x_0$ , such that the loss of the classifier $L ( x^{adv} , y )$ is maximized . We consider untargeted attacks in this paper , so an attack is successful if the perturbation successfully changes the model ’ s prediction . The attacks as well as a randomized smoothing method used in this paper are listed below . White-box attack Four white-box attacks are involved in our experiments . The Projected Gradient Descent ( PGD ) attack ( Madry et al . ( 2017 ) ) solves Eq . 1 by iteratively taking gradient ascent steps : $$x^{adv}_{t+1} = \mathrm{Clip}_{x_0 , \epsilon} \big( x^{adv}_t + \alpha \cdot \mathrm{sgn} ( \nabla_x J ( x^{adv}_t , y ) ) \big) , \qquad ( 2 )$$ where $x^{adv}_t$ stands for the solution after $t$ iterations , and $\mathrm{Clip}_{x_0 , \epsilon} ( \cdot )$ denotes clipping the values to keep each $x^{adv}_{t+1 , i}$ within $[ x_{0 , i} - \epsilon , x_{0 , i} + \epsilon ]$ , according to the $\ell_\infty$ threat model . As a special case , the Fast Gradient Sign Method ( FGSM ) ( Goodfellow et al . ( 2014 ) ) uses a single iteration with $t = 1$ .
AutoAttack ( Croce & Hein ( 2020 ) ) is currently the strongest white-box attack , which evaluates adversarial robustness with a parameter-free ensemble of diverse attacks . We also design a frequency-based attack for analysis , which conducts the attack under an additional frequency constraint : $$x^{adv}_{freq} = \mathrm{IDCT} ( \mathrm{DCT} ( x^{adv}_{pgd} - x_0 ) \odot M_f ) + x_0 , \qquad ( 3 )$$ where DCT and IDCT stand for the discrete cosine transform and inverse discrete cosine transform respectively , $x^{adv}_{pgd}$ stands for the adversarial example generated by PGD , and $M_f$ stands for the mask matrix defined by the frequency filter , which is illustrated in Appendix B . We found this design similar to Wang et al . ( 2020b ) . Black-box attack We consider the transfer attack , which studies whether an adversarial perturbation generated by attacking the source model can successfully fool the target model . This test not only evaluates the robustness of models under the black-box setting , but also serves as a sanity check for detecting the obfuscated gradient phenomenon ( Athalye et al . ( 2018 ) ) . Previous works have demonstrated that single-step attacks like FGSM enjoy better transferability than multi-step attacks ( Kurakin et al . ( 2017 ) ) . We thus use FGSM for the transfer attack in our experiments . Denoised Randomized Smoothing We also evaluate the certified robustness of the models using randomized smoothing , where the robustness is measured as the certified radius , and the model is certified to be robust with high probability for perturbations within the radius . We follow Salman et al . ( 2020 ) to train a DnCNN ( Zhang et al . ( 2017 ) ) denoiser $D_\theta$ for each pre-trained model $f$ with the “ stability ” objective , with $L_{CE}$ denoting the cross-entropy loss and $\mathcal{N}$ denoting the Gaussian distribution : $$L_{Stab} = \mathbb{E}_{ ( x_i , y_i ) \in D , \, \delta } \, L_{CE} ( f ( D_\theta ( x_i + \delta ) ) , f ( x_i ) ) \quad \text{where} \quad \delta \sim \mathcal{N} ( 0 , \sigma^2 I ) .$$
( 4 ) Randomized smoothing is applied on the denoised classifier $f \circ D_\theta$ for robustness certification : $$g ( x ) = \arg\max_{c \in \mathcal{Y}} \, \mathbb{P} [ f ( D_\theta ( x + \delta ) ) = c ] \quad \text{where} \quad \delta \sim \mathcal{N} ( 0 , \sigma^2 I ) . \qquad ( 5 )$$ The certified radius is then calculated for the smoothed classifier as ( Cohen et al. , 2019 ) : $$R = \frac{\sigma}{2} \big( \Phi^{-1} ( p_A ) - \Phi^{-1} ( p_B ) \big) , \qquad ( 6 )$$ where $\Phi^{-1}$ is the inverse of the standard Gaussian CDF , $p_A = \mathbb{P} ( f ( x + \delta ) = c_A )$ is the confidence of the top-1 predicted class $c_A$ , and $p_B = \max_{c \neq c_A} \mathbb{P} ( f ( x + \delta ) = c )$ is the confidence of the second top class . Accordingly , given a perturbation radius , the certified accuracy under this perturbation radius can be evaluated by comparing the given radius to the certified radius $R$ . | This work studies the adversarial robustness of vision transformers comprehensively. The author first presents their finding via empirical experiments. Concretely, they show the robustness of vision transformers from the perspective of adversarial attack, transferability, and certified robustness, adversarial training defense. Then, they analyze the reason behind the observation. | SP:3dac13a03632e175ec1363d6295970f3bd55a0f5 |
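The smoothed classifier g(x) in Eq. 5 is estimated in practice by a majority vote over Gaussian-noised copies of the input; a minimal Monte-Carlo sketch, omitting the abstention rule and confidence bounds used by Cohen et al. (2019), with `classify` standing in for the (denoised) base classifier:

```python
import numpy as np

def smoothed_predict(classify, x, sigma, n_samples, rng):
    """Monte-Carlo estimate of g(x): majority vote over Gaussian-perturbed inputs."""
    counts = {}
    for _ in range(n_samples):
        label = classify(x + rng.normal(0.0, sigma, size=x.shape))
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)
```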
No One Representation to Rule Them All: Overlapping Features of Training Methods | 1 INTRODUCTION . Over the years , the machine learning field has developed myriad techniques for training neural networks . In image classification , these include data augmentation , regularization , architectures , losses , pre-training schemes , and more . Such techniques have highlighted the ability of networks to capture diverse features of the data : textures/shapes ( Geirhos et al. , 2018 ) , robust/non-robust features ( Ilyas et al. , 2019 ) , and even features that fit a random , pre-determined classifier ( Hoffer et al. , 2018 ) . Despite this representation-learning power , methods that yield high generalization performance seem to produce networks with little behavior diversity : models make similar predictions , with high-accuracy models rarely making mistakes that low-accuracy models predict correctly ( Mania et al. , 2019 ) . Additionally , the quality of features learned ( e.g . : for downstream tasks ) seems dictated by upstream performance ( Kornblith et al. , 2019 ) . Finally , training on subsets of the data yields low-accuracy models that don ’ t make performant ensembles ( Nixon et al. , 2020 ) . This seemingly suggests that high-performing models share similar biases , regardless of training methodology . Without behavior diversity , ensemble benefits are limited to reducing noise , since models make correlated errors ( Perrone & Cooper , 1992 ; Opitz & Maclin , 1999 ) . Without feature diversity , representations might not capture important features for downstream tasks , since feature reuse has been shown to be crucial for transfer learning ( Neyshabur et al. , 2020 ) . Without knowing the effect of training methodology , one might conclude that low-accuracy models have no practical use , since their predictions would be dominated by high-accuracy ones . 
One open question is whether these findings faced unavoidable selection bias , since the highest-performing models have historically been trained with similar supervised objectives on IID datasets . Up until recently , this hypothesis was difficult to test . That changed with the recent success of large-scale contrastive learning , which produces competitively high accuracy on standard generalization and robustness benchmarks ( Radford et al. , 2021 ; Jia et al. , 2021 ) . This motivates revisiting the question : How does training methodology affect learned representations and prediction behavior ? In this paper , we conduct a systematic empirical study of 82 models , which we train or collect , across hyper-parameters , architectures , objective functions , and datasets , including the latest high-performing models CLIP , ALIGN , SimCLR , BiT , ViT-G/14 , and MPL . In addition to using different techniques , these new models were trained on data collected very differently , allowing us to probe the effect of both the training objective and the pre-training data . We categorize these models based on how their training methodologies diverge from a typical base model and show : 1 . Model pairs that diverge more in training methodology ( reinitializations → hyper-parameters → architectures → frameworks → dataset scales ) produce increasingly uncorrelated errors . 2 . Ensemble performance increases as error correlation decreases , due to higher ensemble efficiency . The most typical ImageNet model ( ResNet-50 , 76.5 % ) , and its most different counterpart ( ALIGN-ZS , 75.5 % ) yield 83.4 % accuracy when ensembled , a +7 % boost . 3 . Contrastively-learned models display categorically different generalization behavior , specializing in subdomains of the data , which explains the higher ensembling efficiency . We show CLIP-S specializes in anthropogenic images , whereas ResNet-50 excels in nature images . 4 .
Surprisingly , we find that low-accuracy models can be useful if they are trained differently enough . By combining a high-accuracy model ( BiT-1k , 82.9 % ) with only low-accuracy models ( max individual acc . 77.4 % ) , we can create ensembles that yield as much as 86.7 % . 5 . Diverging training methodology yields representations that capture overlapping ( but not supersetting ) feature sets which , when combined , lead to increased downstream performance ( 91.4 % on Pascal VOC , using models with max individual accuracy 90.7 % ) . 2 RELATED WORK . Diversity in Ensembles . It is widely understood that good ensembles are made of models that are both accurate and make independent errors ( Perrone & Cooper , 1992 ; Opitz & Maclin , 1999 ; Wen et al. , 2020 ) . Beyond improving ensemble performance , finding diverse solutions that equally well explain the observations can help quantify model uncertainty ( also known as epistemic uncertainty ) – what the model does not know because training data was not appropriate ( Kendall & Gal , 2017 ; Fort et al. , 2019 ) . Many works have explored ways of finding such solutions ( Izmailov et al. , 2018 ) . Bootstrapping ( Freund et al. , 1996 ) ( ensembling models trained on subsets of the data ) was found not to produce deep ensembles with higher accuracy than a single model trained on the entire dataset ( Nixon et al. , 2020 ) . This emphasizes how much data deep neural networks need to achieve high performance . Another work has examined the effect of augmentation-induced prediction diversity on adversarial robustness ( Liu et al. , 2019 ) . More relevant to us , Wenzel et al . ( 2020 ) have explored the effect of random hyper-parameters , finding the best ensembles when combining models that are both hyper-parameter and weight-diverse , albeit still considering similar frameworks and architectures . Model Behavior Similarity .
These attempts were hindered as many high-performing techniques seem to produce similar prediction behavior . Mania et al . ( 2019 ) demonstrates , via “ dominance probabilities ” , that high-accuracy models rarely make mistakes that low-accuracy models predict correctly . This indicates that , within the models studied , high-accuracy models “ dominate ” the predictions of low-accuracy ones . Recht et al . ( 2019 ) shows that out-of-distribution robustness seems correlated with in-distribution performance . Relatedly , Kornblith et al . ( 2019 ) shows that upstream and downstream performance are very correlated . These jointly indicate that high-accuracy models learn strictly better representations , diminishing the importance of low-accuracy solutions ( even if they are diverse ) . Finally , Fort et al . ( 2019 ) shows that subspace-sampling methods for ensembling generate solutions that , while different in weight space , remain similar in function space , which gives rise to an insufficiently diverse set of predictions . Contrastive-Learning Models ; Different Large-Scale Datasets . This model behavior similarity might be explained by the fact that the training techniques that yield high performance on image classification tasks have been relatively similar , mostly relying on supervised learning on ImageNet , optionally pre-training on a dataset with similar distribution . Recently , various works have demonstrated the effectiveness of learning from large-scale data using contrastive learning ( Radford et al. , 2021 ; Jia et al. , 2021 ) . They report impressive results on out-of-distribution benchmarks , and have been shown to have higher dominance probabilities ( Andreassen et al. , 2021 ) . These represent some of the first models to deviate from standard supervised training ( or finetuning ) on downstream data , while still yielding competitive accuracy . 
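The two quantities driving this discussion — error correlation and a dominance-style probability — can be computed directly from per-example error indicators . The helpers below are an illustrative sketch ; the exact definition of dominance probability in Mania et al . ( 2019 ) may differ in detail .

```python
import math

def error_correlation(err_a, err_b):
    """Pearson correlation of two models' per-example error indicators
    (1 = mistake, 0 = correct); lower means more complementary mistakes."""
    n = len(err_a)
    ma, mb = sum(err_a) / n, sum(err_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(err_a, err_b)) / n
    sa = math.sqrt(sum((a - ma) ** 2 for a in err_a) / n)
    sb = math.sqrt(sum((b - mb) ** 2 for b in err_b) / n)
    return cov / (sa * sb)

def dominance_probability(err_high, err_low):
    """Fraction of the low-accuracy model's correct predictions that the
    high-accuracy model also gets right (a dominance-style statistic)."""
    low_correct = [i for i, e in enumerate(err_low) if e == 0]
    return sum(1 for i in low_correct if err_high[i] == 0) / len(low_correct)
```

A dominance probability near 1 means the stronger model subsumes the weaker one's correct predictions ; values well below 1 indicate the complementary behavior the paper is after .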
They add to the set of high-performing training techniques ( which include data augmentation ( Cubuk et al. , 2018 ; 2020 ) , regularization ( Srivastava et al. , 2014 ; Szegedy et al. , 2016 ; Ghiasi et al. , 2018 ) , architectures ( Tan & Le , 2019 ; Dosovitskiy et al. , 2020 ; Hu et al. , 2018 ; Iandola et al. , 2014 ; Li et al. , 2019 ; Szegedy et al. , 2016 ; Simonyan & Zisserman , 2014 ; Sandler et al. , 2018 ) , losses ( Chen et al. , 2020 ; Radford et al. , 2021 ; Jia et al. , 2021 ) , pre-training schemes ( Kolesnikov et al. , 2020 ; Pham et al. , 2021 ) , etc ) , and provide the motivation for revisiting the question of whether training methodology can yield different model behavior . 3 METHOD . 3.1 MODEL CATEGORIZATION . In order to evaluate the performance of learned representations as a function of training methodology , we define the following categories , which classify model pairs based on their training differences : 1 . Reinits : identical models , differing only in reinitialization . 2 . Hyper-parameters : models of the same architecture trained with different hyper-parameters ( e.g . : weight decay , learning rate , initialization algorithm , etc ) . 3 . Architectures : models with different architectures , but still trained within the same framework and dataset ( e.g . : ResNet and ViT , both with ImageNet supervision ) . 4 . Frameworks : models trained with different optimization objectives , but on the same dataset ( e.g . : ResNet and SimCLR , respectively supervised and contrastive learning on ImageNet ) . 5 . Datasets : models trained on large-scale data ( e.g . : CLIP or BiT – trained on WIT or JFT ) . In some sense , model categories can be supersets of one another : when we change a model architecture , we may also change the hyper-parameters used to train that architecture , to make sure that they are optimal for training this new setting .
Unless stated otherwise , all ensembles are comprised of a fixed base model , and another model belonging to one of the categories above . This way , each category is defined relative to the base model : model pairs in a given category will vary because Model 2 is different than the base model along that axis . The result is that as we navigate along model categories ( Reinit → ... → Dataset ) , we will naturally be measuring the effect of increasingly dissimilar training methodology . See Appendix Table 1 for details . 3.2 MODEL SELECTION . We collect representations and predictions for 82 models , across the many categories above . We fix ResNet-50 , trained with RandAugment , as our base model . ResNet is a good candidate for a base model since it is one of the most typical ImageNet classification models , and the de-facto standard baseline for this task . In total , we train/collect models in the categories : 1 ) Reinit ; 2 ) Hyperparameters ( 51 ) : varying dropout , dropblock , learning rate , and weight decay , sometimes jointly ; 3 ) Architectures ( 17 ) : including EfficientNet , ViT , DenseNet , VGG ; 4 ) Framework ( 2 ) : including SimCLR , and models trained with distillation ; and 5 ) Dataset ( 12 ) : including CLIP , ALIGN , BiT , and more , trained on WIT ( Radford et al. , 2021 ) , the ALIGN dataset , JFT ( Sun et al. , 2017 ) , etc . We additionally collect high-performing models MPL , ALIGN ( L-BFGS ) , ViT-G/14 , BiT-1k , CLIP-L , EfficientNet-B3 for some of our analyses . These are some of the latest , highest-performing models for ImageNet classification . We found it necessary to calibrate all models using temperature scaling ( Roelofs et al. , 2020 ; Guo et al. , 2017 ) to maximize ensemble performance . 
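Temperature scaling itself is a one-parameter post-hoc fit. The sketch below replaces the usual gradient-based fit from Guo et al . ( 2017 ) with a simple grid search over $T$ minimizing held-out negative log-likelihood — an implementation choice of this illustration , not of the paper .

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: divide logits by T before normalizing."""
    z = [l / temperature for l in logits]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def fit_temperature(logit_sets, labels, grid=None):
    """Pick the T that minimizes held-out NLL via grid search -- a simple
    stand-in for the one-parameter optimizer fit of Guo et al. (2017)."""
    grid = grid or [0.5 + 0.1 * i for i in range(46)]  # T in [0.5, 5.0]
    def nll(t):
        return -sum(math.log(softmax(lg, t)[y])
                    for lg, y in zip(logit_sets, labels))
    return min(grid, key=nll)
```

When a model is confidently correct on the held-out set , the fit sharpens ( small $T$ ) ; when it is overconfident , the fit softens the probabilities ( $T > 1$ ) before they are averaged into an ensemble .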
Finally , unless stated otherwise , we only use models in the same narrow accuracy range ( 74-78 % accuracy on ImageNet ) , which guarantees that the effects observed are indeed a function of diverging training methodology , and not of any given model being intrinsically more performant than another . A complete list of models can be found in the Appendix . | This paper empirically investigates how different models, trained with different methodologies, should be ensembled to maximize accuracy on the ImageNet classification task. Through carefully designed experiments, the authors look at different aspects of the trained models and provide guidelines for selecting models to be ensembled. The main takeaway: models trained with increased divergence in training methodologies are best suited for ensembling. | SP:850dc3f2eea36dbfb032dbb91d0a875d7c55ff8f |
No One Representation to Rule Them All: Overlapping Features of Training Methods | 1 INTRODUCTION . Over the years , the machine learning field has developed myriad techniques for training neural networks . In image classification , these include data augmentation , regularization , architectures , losses , pre-training schemes , and more . Such techniques have highlighted the ability of networks to capture diverse features of the data : textures/shapes ( Geirhos et al. , 2018 ) , robust/non-robust features ( Ilyas et al. , 2019 ) , and even features that fit a random , pre-determined classifier ( Hoffer et al. , 2018 ) . Despite this representation-learning power , methods that yield high generalization performance seem to produce networks with little behavior diversity : models make similar predictions , with high-accuracy models rarely making mistakes that low-accuracy models predict correctly ( Mania et al. , 2019 ) . Additionally , the quality of features learned ( e.g . : for downstream tasks ) seems dictated by upstream performance ( Kornblith et al. , 2019 ) . Finally , training on subsets of the data yields low-accuracy models that don ’ t make performant ensembles ( Nixon et al. , 2020 ) . This seemingly suggests that high-performing models share similar biases , regardless of training methodology . Without behavior diversity , ensemble benefits are limited to reducing noise , since models make correlated errors ( Perrone & Cooper , 1992 ; Opitz & Maclin , 1999 ) . Without feature diversity , representations might not capture important features for downstream tasks , since feature reuse has been shown to be crucial for transfer learning ( Neyshabur et al. , 2020 ) . Without knowing the effect of training methodology , one might conclude that low-accuracy models have no practical use , since their predictions would be dominated by high-accuracy ones . 
One open question is whether these findings faced unavoidable selection bias , since the highest-performing models have historically been trained with similar supervised objectives on IID datasets . Up until recently , this hypothesis was difficult to test . That changed with the recent success of large-scale contrastive learning , which produces competitively high accuracy on standard generalization and robustness benchmarks ( Radford et al. , 2021 ; Jia et al. , 2021 ) . This motivates revisiting the question : How does training methodology affect learned representations and prediction behavior ? In this paper , we conduct a systematic empirical study of 82 models , which we train or collect , across hyper-parameters , architectures , objective functions , and datasets , including the latest high-performing models CLIP , ALIGN , SimCLR , BiT , ViT-G/14 , and MPL . In addition to using different techniques , these new models were trained on data collected very differently , allowing us to probe the effect of both the training objective and the pre-training data . We categorize these models based on how their training methodologies diverge from a typical base model and show : 1 . Model pairs that diverge more in training methodology ( reinitializations → hyper-parameters → architectures → frameworks → dataset scales ) produce increasingly uncorrelated errors . 2 . Ensemble performance increases as error correlation decreases , due to higher ensemble efficiency . The most typical ImageNet model ( ResNet-50 , 76.5 % ) , and its most different counterpart ( ALIGN-ZS , 75.5 % ) yield 83.4 % accuracy when ensembled , a +7 % boost . 3 . Contrastively-learned models display categorically different generalization behavior , specializing in subdomains of the data , which explains the higher ensembling efficiency . We show CLIP-S specializes in anthropogenic images , whereas ResNet-50 excels in nature images . 4 .
Surprisingly , we find that low-accuracy models can be useful if they are trained differently enough . By combining a high-accuracy model ( BiT-1k , 82.9 % ) with only low-accuracy models ( max individual acc . 77.4 % ) , we can create ensembles that yield as much as 86.7 % . 5 . Diverging training methodology yields representations that capture overlapping ( but not supersetting ) feature sets which , when combined , lead to increased downstream performance ( 91.4 % on Pascal VOC , using models with max individual accuracy 90.7 % ) . 2 RELATED WORK . Diversity in Ensembles . It is widely understood that good ensembles are made of models that are both accurate and make independent errors ( Perrone & Cooper , 1992 ; Opitz & Maclin , 1999 ; Wen et al. , 2020 ) . Beyond improving ensemble performance , finding diverse solutions that equally well explain the observations can help quantify model uncertainty ( also known as epistemic uncertainty ) – what the model does not know because training data was not appropriate ( Kendall & Gal , 2017 ; Fort et al. , 2019 ) . Many works have explored ways of finding such solutions ( Izmailov et al. , 2018 ) . Bootstrapping ( Freund et al. , 1996 ) ( ensembling models trained on subsets of the data ) was found not to produce deep ensembles with higher accuracy than a single model trained on the entire dataset ( Nixon et al. , 2020 ) . This emphasizes how much data deep neural networks need to achieve high performance . Another work has examined the effect of augmentation-induced prediction diversity on adversarial robustness ( Liu et al. , 2019 ) . More relevant to us , Wenzel et al . ( 2020 ) have explored the effect of random hyper-parameters , finding the best ensembles when combining models that are both hyper-parameter and weight-diverse , albeit still considering similar frameworks and architectures . Model Behavior Similarity .
These attempts were hindered as many high-performing techniques seem to produce similar prediction behavior . Mania et al . ( 2019 ) demonstrates , via “ dominance probabilities ” , that high-accuracy models rarely make mistakes that low-accuracy models predict correctly . This indicates that , within the models studied , high-accuracy models “ dominate ” the predictions of low-accuracy ones . Recht et al . ( 2019 ) shows that out-of-distribution robustness seems correlated with in-distribution performance . Relatedly , Kornblith et al . ( 2019 ) shows that upstream and downstream performance are very correlated . These jointly indicate that high-accuracy models learn strictly better representations , diminishing the importance of low-accuracy solutions ( even if they are diverse ) . Finally , Fort et al . ( 2019 ) shows that subspace-sampling methods for ensembling generate solutions that , while different in weight space , remain similar in function space , which gives rise to an insufficiently diverse set of predictions . Contrastive-Learning Models ; Different Large-Scale Datasets . This model behavior similarity might be explained by the fact that the training techniques that yield high performance on image classification tasks have been relatively similar , mostly relying on supervised learning on ImageNet , optionally pre-training on a dataset with similar distribution . Recently , various works have demonstrated the effectiveness of learning from large-scale data using contrastive learning ( Radford et al. , 2021 ; Jia et al. , 2021 ) . They report impressive results on out-of-distribution benchmarks , and have been shown to have higher dominance probabilities ( Andreassen et al. , 2021 ) . These represent some of the first models to deviate from standard supervised training ( or finetuning ) on downstream data , while still yielding competitive accuracy . 
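The two-model ensembles discussed throughout amount to averaging calibrated class probabilities. The toy sketch below ( illustrative only ; the paper ensembles the calibrated softmax outputs of real models ) shows how two 50 % -accuracy specialists with complementary errors can ensemble to perfect accuracy on a toy set .

```python
def ensemble_predict(probs_a, probs_b):
    """Average two models' calibrated class probabilities, take the argmax."""
    avg = [(pa + pb) / 2.0 for pa, pb in zip(probs_a, probs_b)]
    return max(range(len(avg)), key=avg.__getitem__)

def ensemble_accuracy(probs_a, probs_b, labels):
    """Accuracy of the averaged-probability two-model ensemble."""
    hits = sum(1 for pa, pb, y in zip(probs_a, probs_b, labels)
               if ensemble_predict(pa, pb) == y)
    return hits / len(labels)
```

Because the correct specialist is confident while the wrong one is only mildly wrong , the average tips toward the correct class on every example — exactly the mechanism behind the CLIP-S / ResNet-50 complementarity described above .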
They add to the set of high-performing training techniques ( which include data augmentation ( Cubuk et al. , 2018 ; 2020 ) , regularization ( Srivastava et al. , 2014 ; Szegedy et al. , 2016 ; Ghiasi et al. , 2018 ) , architectures ( Tan & Le , 2019 ; Dosovitskiy et al. , 2020 ; Hu et al. , 2018 ; Iandola et al. , 2014 ; Li et al. , 2019 ; Szegedy et al. , 2016 ; Simonyan & Zisserman , 2014 ; Sandler et al. , 2018 ) , losses ( Chen et al. , 2020 ; Radford et al. , 2021 ; Jia et al. , 2021 ) , pre-training schemes ( Kolesnikov et al. , 2020 ; Pham et al. , 2021 ) , etc ) , and provide the motivation for revisiting the question of whether training methodology can yield different model behavior . 3 METHOD . 3.1 MODEL CATEGORIZATION . In order to evaluate the performance of learned representations as a function of training methodology , we define the following categories , which classify model pairs based on their training differences : 1 . Reinits : identical models , differing only in reinitialization . 2 . Hyper-parameters : models of the same architecture trained with different hyper-parameters ( e.g . : weight decay , learning rate , initialization algorithm , etc ) . 3 . Architectures : models with different architectures , but still trained within the same framework and dataset ( e.g . : ResNet and ViT , both with ImageNet supervision ) . 4 . Frameworks : models trained with different optimization objectives , but on the same dataset ( e.g . : ResNet and SimCLR , respectively supervised and contrastive learning on ImageNet ) . 5 . Datasets : models trained on large-scale data ( e.g . : CLIP or BiT – trained on WIT or JFT ) . In some sense , model categories can be supersets of one another : when we change a model architecture , we may also change the hyper-parameters used to train that architecture , to make sure that they are optimal for training this new setting .
Unless stated otherwise , all ensembles are comprised of a fixed base model , and another model belonging to one of the categories above . This way , each category is defined relative to the base model : model pairs in a given category will vary because Model 2 is different than the base model along that axis . The result is that as we navigate along model categories ( Reinit → ... → Dataset ) , we will naturally be measuring the effect of increasingly dissimilar training methodology . See Appendix Table 1 for details . 3.2 MODEL SELECTION . We collect representations and predictions for 82 models , across the many categories above . We fix ResNet-50 , trained with RandAugment , as our base model . ResNet is a good candidate for a base model since it is one of the most typical ImageNet classification models , and the de-facto standard baseline for this task . In total , we train/collect models in the categories : 1 ) Reinit ; 2 ) Hyperparameters ( 51 ) : varying dropout , dropblock , learning rate , and weight decay , sometimes jointly ; 3 ) Architectures ( 17 ) : including EfficientNet , ViT , DenseNet , VGG ; 4 ) Framework ( 2 ) : including SimCLR , and models trained with distillation ; and 5 ) Dataset ( 12 ) : including CLIP , ALIGN , BiT , and more , trained on WIT ( Radford et al. , 2021 ) , the ALIGN dataset , JFT ( Sun et al. , 2017 ) , etc . We additionally collect high-performing models MPL , ALIGN ( L-BFGS ) , ViT-G/14 , BiT-1k , CLIP-L , EfficientNet-B3 for some of our analyses . These are some of the latest , highest-performing models for ImageNet classification . We found it necessary to calibrate all models using temperature scaling ( Roelofs et al. , 2020 ; Guo et al. , 2017 ) to maximize ensemble performance . 
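The relative categorization described above can be expressed as a small lookup : a pair is assigned to the most divergent axis on which the second model differs from the base . The axis names and dictionary layout below are hypothetical , chosen purely for illustration .

```python
# ordered most -> least divergent, mirroring Reinit -> ... -> Dataset reversed
AXES = ["dataset", "framework", "architecture", "hyperparams"]

def categorize_pair(base, other):
    """Assign a model pair to the most divergent axis on which `other`
    differs from `base`; identical specs differ only by reinitialization."""
    for axis in AXES:
        if base[axis] != other[axis]:
            return axis
    return "reinit"
```

Scanning from the most divergent axis down reflects the "supersets" remark : a pair that changes both the dataset and the architecture is counted under the dataset category .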
Finally , unless stated otherwise , we only use models in the same narrow accuracy range ( 74-78 % accuracy on ImageNet ) , which guarantees that the effects observed are indeed a function of diverging training methodology , and not of any given model being intrinsically more performant than another . A complete list of models can be found in the Appendix . | The authors conduct a large-scale study on when models are more likely to learn different representations, and how diversity in learned representations can help ensemble performance. They select 5 possible reasons for representation diversity: model reinitialization, hyperparameters, model architectures, model frameworks, and datasets. They define a measure of error correlation, and show that a pair of models with more uncorrelated errors results in higher ensemble performance - even if one or both of those models has "low accuracy". They show that models trained on different datasets are most likely to have uncorrelated errors, as opposed to the other sources of representation diversity. The authors study the effect of sampling and concatenating features from two different models, find that a mix of features from both models yields the best performance, and conclude that the models have learnt different features (yet likely overlapping). They also examine the categorical specializations between varying models, as well as the effect of training diversity for creating ensembles on downstream tasks. | SP:850dc3f2eea36dbfb032dbb91d0a875d7c55ff8f |
No One Representation to Rule Them All: Overlapping Features of Training Methods | 1 INTRODUCTION . Over the years , the machine learning field has developed myriad techniques for training neural networks . In image classification , these include data augmentation , regularization , architectures , losses , pre-training schemes , and more . Such techniques have highlighted the ability of networks to capture diverse features of the data : textures/shapes ( Geirhos et al. , 2018 ) , robust/non-robust features ( Ilyas et al. , 2019 ) , and even features that fit a random , pre-determined classifier ( Hoffer et al. , 2018 ) . Despite this representation-learning power , methods that yield high generalization performance seem to produce networks with little behavior diversity : models make similar predictions , with high-accuracy models rarely making mistakes that low-accuracy models predict correctly ( Mania et al. , 2019 ) . Additionally , the quality of features learned ( e.g . : for downstream tasks ) seems dictated by upstream performance ( Kornblith et al. , 2019 ) . Finally , training on subsets of the data yields low-accuracy models that don ’ t make performant ensembles ( Nixon et al. , 2020 ) . This seemingly suggests that high-performing models share similar biases , regardless of training methodology . Without behavior diversity , ensemble benefits are limited to reducing noise , since models make correlated errors ( Perrone & Cooper , 1992 ; Opitz & Maclin , 1999 ) . Without feature diversity , representations might not capture important features for downstream tasks , since feature reuse has been shown to be crucial for transfer learning ( Neyshabur et al. , 2020 ) . Without knowing the effect of training methodology , one might conclude that low-accuracy models have no practical use , since their predictions would be dominated by high-accuracy ones . 
One open question is whether these findings faced unavoidable selection bias , since the highest-performing models have historically been trained with similar supervised objectives on IID datasets . Up until recently , this hypothesis was difficult to test . That changed with the recent success of large-scale contrastive learning , which produces competitively high accuracy on standard generalization and robustness benchmarks ( Radford et al. , 2021 ; Jia et al. , 2021 ) . This motivates revisiting the question : How does training methodology affect learned representations and prediction behavior ? In this paper , we conduct a systematic empirical study of 82 models , which we train or collect , across hyper-parameters , architectures , objective functions , and datasets , including the latest high-performing models CLIP , ALIGN , SimCLR , BiT , ViT-G/14 , and MPL . In addition to using different techniques , these new models were trained on data collected very differently , allowing us to probe the effect of both the training objective and the pre-training data . We categorize these models based on how their training methodologies diverge from a typical base model and show : 1 . Model pairs that diverge more in training methodology ( reinitializations → hyper-parameters → architectures → frameworks → dataset scales ) produce increasingly uncorrelated errors . 2 . Ensemble performance increases as error correlation decreases , due to higher ensemble efficiency . The most typical ImageNet model ( ResNet-50 , 76.5 % ) , and its most different counterpart ( ALIGN-ZS , 75.5 % ) yield 83.4 % accuracy when ensembled , a +7 % boost . 3 . Contrastively-learned models display categorically different generalization behavior , specializing in subdomains of the data , which explains the higher ensembling efficiency . We show CLIP-S specializes in anthropogenic images , whereas ResNet-50 excels in nature images . 4 .
Surprisingly , we find that low-accuracy models can be useful if they are trained differently enough . By combining a high-accuracy model ( BiT-1k , 82.9 % ) with only low-accuracy models ( max individual acc . 77.4 % ) , we can create ensembles that yield as much as 86.7 % . 5 . Diverging training methodology yields representations that capture overlapping ( but not supersetting ) feature sets which , when combined , lead to increased downstream performance ( 91.4 % on Pascal VOC , using models with max individual accuracy 90.7 % ) . 2 RELATED WORK . Diversity in Ensembles . It is widely understood that good ensembles are made of models that are both accurate and make independent errors ( Perrone & Cooper , 1992 ; Opitz & Maclin , 1999 ; Wen et al. , 2020 ) . Beyond improving ensemble performance , finding diverse solutions that equally well explain the observations can help quantify model uncertainty ( also known as epistemic uncertainty ) – what the model does not know because training data was not appropriate ( Kendall & Gal , 2017 ; Fort et al. , 2019 ) . Many works have explored ways of finding such solutions ( Izmailov et al. , 2018 ) . Bootstrapping ( Freund et al. , 1996 ) ( ensembling models trained on subsets of the data ) was found not to produce deep ensembles with higher accuracy than a single model trained on the entire dataset ( Nixon et al. , 2020 ) . This emphasizes how much data deep neural networks need to achieve high performance . Another work has examined the effect of augmentation-induced prediction diversity on adversarial robustness ( Liu et al. , 2019 ) . More relevant to us , Wenzel et al . ( 2020 ) have explored the effect of random hyper-parameters , finding the best ensembles when combining models that are both hyper-parameter and weight-diverse , albeit still considering similar frameworks and architectures . Model Behavior Similarity .
These attempts were hindered as many high-performing techniques seem to produce similar prediction behavior . Mania et al . ( 2019 ) demonstrates , via “ dominance probabilities ” , that high-accuracy models rarely make mistakes that low-accuracy models predict correctly . This indicates that , within the models studied , high-accuracy models “ dominate ” the predictions of low-accuracy ones . Recht et al . ( 2019 ) shows that out-of-distribution robustness seems correlated with in-distribution performance . Relatedly , Kornblith et al . ( 2019 ) shows that upstream and downstream performance are very correlated . These jointly indicate that high-accuracy models learn strictly better representations , diminishing the importance of low-accuracy solutions ( even if they are diverse ) . Finally , Fort et al . ( 2019 ) shows that subspace-sampling methods for ensembling generate solutions that , while different in weight space , remain similar in function space , which gives rise to an insufficiently diverse set of predictions . Contrastive-Learning Models ; Different Large-Scale Datasets . This model behavior similarity might be explained by the fact that the training techniques that yield high performance on image classification tasks have been relatively similar , mostly relying on supervised learning on ImageNet , optionally pre-training on a dataset with similar distribution . Recently , various works have demonstrated the effectiveness of learning from large-scale data using contrastive learning ( Radford et al. , 2021 ; Jia et al. , 2021 ) . They report impressive results on out-of-distribution benchmarks , and have been shown to have higher dominance probabilities ( Andreassen et al. , 2021 ) . These represent some of the first models to deviate from standard supervised training ( or finetuning ) on downstream data , while still yielding competitive accuracy . 
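A useful companion to error correlation is the two-model oracle accuracy — the fraction of examples at least one model classifies correctly — which upper-bounds what any two-model ensemble could achieve . The helper below is a minimal sketch ( illustrative , not from the paper ) .

```python
def oracle_accuracy(err_a, err_b):
    """Fraction of examples at least one model gets right (0 = correct,
    1 = mistake): an upper bound on any two-model ensemble's accuracy."""
    n = len(err_a)
    return sum(1 for a, b in zip(err_a, err_b) if a == 0 or b == 0) / n
```

The gap between a member's accuracy and this bound is the headroom that uncorrelated errors make available ; when high-accuracy models dominate low-accuracy ones , the bound collapses onto the better member's accuracy .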
They add to the set of high-performing training techniques ( which include data augmentation ( Cubuk et al. , 2018 ; 2020 ) , regularization ( Srivastava et al. , 2014 ; Szegedy et al. , 2016 ; Ghiasi et al. , 2018 ) , architectures ( Tan & Le , 2019 ; Dosovitskiy et al. , 2020 ; Hu et al. , 2018 ; Iandola et al. , 2014 ; Li et al. , 2019 ; Szegedy et al. , 2016 ; Simonyan & Zisserman , 2014 ; Sandler et al. , 2018 ) , losses ( Chen et al. , 2020 ; Radford et al. , 2021 ; Jia et al. , 2021 ) , pre-training schemes ( Kolesnikov et al. , 2020 ; Pham et al. , 2021 ) , etc ) , and provide the motivation for revisiting the question of whether training methodology can yield different model behavior . 3 METHOD . 3.1 MODEL CATEGORIZATION . In order to evaluate the performance of learned representations as a function of training methodology , we define the following categories , which classify model pairs based on their training differences : 1 . Reinits : identical models , differing only in their random initialization . 2 . Hyper-parameters : models of the same architecture trained with different hyper-parameters ( e.g . : weight decay , learning rate , initialization algorithm , etc ) . 3 . Architectures : models with different architectures , but still trained within the same framework and dataset ( e.g . : ResNet and ViT , both with ImageNet supervision ) . 4 . Frameworks : models trained with different optimization objectives , but on the same dataset ( e.g . : ResNet and SimCLR , respectively supervised and contrastive learning on ImageNet ) . 5 . Datasets : models trained on large-scale data ( e.g . : CLIP or BiT – trained on WIT or JFT ) . In some sense , model categories can be supersets of one another : when we change a model architecture , we may also change the hyper-parameters used to train such an architecture , to make sure that they are optimal for training this new setting .
Unless stated otherwise , all ensembles consist of a fixed base model , and another model belonging to one of the categories above . This way , each category is defined relative to the base model : model pairs in a given category will vary because Model 2 is different from the base model along that axis . The result is that as we navigate along model categories ( Reinit → ... → Dataset ) , we will naturally be measuring the effect of increasingly dissimilar training methodology . See Appendix Table 1 for details . 3.2 MODEL SELECTION . We collect representations and predictions for 82 models , across the many categories above . We fix ResNet-50 , trained with RandAugment , as our base model . ResNet is a good candidate for a base model since it is one of the most typical ImageNet classification models , and the de-facto standard baseline for this task . In total , we train/collect models in the categories : 1 ) Reinit ; 2 ) Hyper-parameters ( 51 ) : varying dropout , dropblock , learning rate , and weight decay , sometimes jointly ; 3 ) Architectures ( 17 ) : including EfficientNet , ViT , DenseNet , VGG ; 4 ) Framework ( 2 ) : including SimCLR , and models trained with distillation ; and 5 ) Dataset ( 12 ) : including CLIP , ALIGN , BiT , and more , trained on WIT ( Radford et al. , 2021 ) , the ALIGN dataset , JFT ( Sun et al. , 2017 ) , etc . We additionally collect the high-performing models MPL , ALIGN ( L-BFGS ) , ViT-G/14 , BiT-1k , CLIP-L , and EfficientNet-B3 for some of our analyses . These are some of the latest , highest-performing models for ImageNet classification . We found it necessary to calibrate all models using temperature scaling ( Roelofs et al. , 2020 ; Guo et al. , 2017 ) to maximize ensemble performance .
Finally , unless stated otherwise , we only use models in the same narrow accuracy range ( 74-78 % accuracy on ImageNet ) , which guarantees that the effects observed are indeed a function of diverging training methodology , and not of any given model being intrinsically more performant than another . A complete list of models can be found in the Appendix . | This paper proposes an empirical study of how different modelling choices (eg, hyper-parameters, architecture, training algorithm, dataset) result in different and complementary data representations. The first half of the paper shows that those choices influence model predictions, leading to models that specialize to a subset of the data. The second half shows that those different representations are complementary and can be ensembled to yield better performing models. In particular, the authors show that ensembling models that are most diverse gives the largest improvement when transferring to a downstream task. | SP:850dc3f2eea36dbfb032dbb91d0a875d7c55ff8f |
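The calibrated-ensembling recipe this row describes (temperature-scale each model's logits, then average the resulting probabilities) can be sketched in a few lines. This is a toy illustration, not the paper's code: the logits and the temperatures below are made-up values, whereas in practice each temperature would be fit by minimizing validation NLL as in Guo et al. (2017).

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy logits for two models on 3 examples with 4 classes.
logits_a = np.array([[2.0, 0.1, 0.1, 0.1],
                     [0.2, 1.5, 0.1, 0.0],
                     [0.1, 0.1, 0.1, 3.0]])
logits_b = np.array([[1.0, 0.5, 0.2, 0.1],
                     [0.1, 0.3, 1.9, 0.1],
                     [0.0, 0.2, 0.1, 2.5]])

# Calibrate each model with its own (here hypothetical) temperature,
# then average the probability vectors to form the ensemble.
probs = 0.5 * (softmax(logits_a, 1.3) + softmax(logits_b, 0.9))
pred = probs.argmax(axis=1)  # ensemble predictions
```

Averaging calibrated probabilities (rather than raw logits) is what makes the two models' confidences comparable before they are combined.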
Constructing Orthogonal Convolutions in an Explicit Manner | 1 INTRODUCTION . A layer with an orthogonal input-output Jacobian matrix is 1-Lipschitz in the 2-norm , robust to the perturbation in input . Meanwhile , it preserves the gradient norm in back-propagating the gradient , which effectively overcomes the gradient explosion and attenuation issues in training deep neural networks . In the past years , many studies have shown that exploiting orthogonality of the Jacobian matrix of layers in neural networks can achieve provable robustness to adversarial attacks ( Li et al. , 2019 ) , stabler and faster training ( Arjovsky et al. , 2016 ; Xiao et al. , 2018 ) , and improved generalization ( Cogswell et al. , 2016 ; Bansal et al. , 2018 ; Sedghi et al. , 2019 ) . In a fully-connected layer y = Wx where W ∈ Rcout×cin is the weight matrix , the layer ’ s inputoutput Jacobian matrix J = ∂y∂x is just W. Thus , preserving the orthogonality of J can be accomplished through imposing the orthogonal constraint on W , which has been extensively studied in previous works ( Mhammedi et al. , 2017 ; Cissé et al. , 2017 ; Anil et al. , 2019 ) . In contrast , in a convolution layer , the Jacobian matrix is no longer the weight matrix ( convolution kernel ) . Instead , it is the circulant matrix composed of convolution kernel ( Karami et al. , 2019 ; Sedghi et al. , 2019 ) . Thus , generally , simply constructing an orthogonal convolution kernel can not achieve an orthogonal convolution . Achieving an orthogonal Jacobian matrix in the convolution layer is more challenging than that in a fully-connected layer . It is plausibly straightforward to expand the convolution kernel to the doubly block circulant Jacobian matrix and directly impose the orthogonal constraint on the Jacobian matrix . Nevertheless , it is extremely difficult to construct a Jacobian matrix that is both doubly block-circulant and orthogonal . Block Convolutional Orthogonal parameterization ( BCOP ) ( Li et al. 
, 2019 ) is one of the pioneering works for constructing orthogonal convolutional neural networks . It adopts the construction algorithm ( Xiao et al. , 2018 ) which decomposes a 2-D convolution into a stack of 1-D convolutions and a channel-wise orthogonal transformation . Trockman & Kolter ( 2021 ) map the convolution kernel and the feature tensor into the frequency domain using the Fast Fourier Transform ( FFT ) , and achieve the orthogonality of the Jacobian matrix through the Cayley transform on the weight matrix in the frequency domain . They devise a convolution kernel of the same size as the input feature map , which takes more parameters than standard convolution layers with a local receptive field . Meanwhile , the FFT maps the real-valued feature map and weight matrix into complex-valued matrices , increasing the computational cost to 4 times its counterpart in the real-valued domain . Moreover , the Cayley transform requires computing a matrix inverse , which is not friendly to GPU computation . Recently , Skew Orthogonal Convolutions ( SOC ) ( Singla & Feizi , 2021b ) devises a skew-symmetric filter and exploits the matrix exponential ( Hoogeboom et al. , 2020 ) to attain the orthogonality of the Jacobian matrix . But SOC is slow in evaluation since it needs to apply a convolution filter multiple times sequentially on the feature map to compute the Taylor expansion . Observing the efficiency limitations of the existing methods , in this work , we propose an explicitly constructed orthogonal ( ECO ) convolution , which is fast in both training and evaluation . It is well known that the Jacobian matrix of a layer is orthogonal if and only if each of its singular values is 1 . Thus we convert the problem of ensuring the orthogonality of the Jacobian matrix into making every singular value equal to 1 .
Based on the relation between the singular values of the Jacobian matrix of a convolution layer and the weight matrix of that layer discovered by Sedghi et al . ( 2019 ) , we construct the convolution kernel so that every singular value of the Jacobian matrix is ensured to be 1 . Compared with the recent state-of-the-art method SOC ( Singla & Feizi , 2021b ) , which implicitly approximates an orthogonal convolution through multiple sequential convolution operations , ours explicitly builds the orthogonal convolution . Thus , we can directly deploy the constructed orthogonal convolution in evaluation , taking the same computational cost as a standard convolution . It is therefore more efficient in evaluation than SOC , which requires multiple convolution operations . Experiments on CIFAR10 and CIFAR100 show that , while taking less evaluation time , ours achieves competitive standard and robust accuracy compared with SOC . 2 RELATED WORK . Weight orthogonalization . In a fully-connected layer , the input-output Jacobian matrix is the transform itself . Thus many efforts have been devoted to orthogonalizing the weights . Some early works exploit an orthogonal weight initialization to speed up training ( Saxe et al. , 2014 ; Pennington et al. , 2017 ) and build extremely deep neural networks ( Xiao et al. , 2018 ) . Recently , more efforts have been devoted to exploiting orthogonality not only at initialization but also throughout the training process . Some approaches propose “ soft ” constraints on the weight matrix . For example , Bansal et al . ( 2018 ) ; Xiong et al . ( 2016 ) introduce mutual coherence and spectral restricted isometry as orthogonal regularization on the weight matrix . Parseval Networks ( Cissé et al. , 2017 ) adopts a regularizer to encourage the orthogonality of the weight matrices . However , these methods cannot guarantee the exact orthogonality of the weight matrix . Some approaches orthogonalize features during the forward pass . For example , Huang et al .
( 2018b ) extend Batch Normalization ( Ioffe & Szegedy , 2015 ) with ZCA ; Huang et al . ( 2018a ) solve a sub-optimization problem to decorrelate deep features . These methods cannot avoid computationally expensive operations like SVD , and are thus slow in practice . Some approaches enforce orthogonality by using Riemannian optimization on the Stiefel manifold . For example , Casado & Martínez-Rubio ( 2019 ) use a Padé approximation as an alternative to the matrix exponential mapping to update the weight matrix on the Stiefel manifold . Li et al . ( 2020 ) use an iterative Cayley transform to keep the weight matrix orthonormal . Anil et al . ( 2019 ) also attain orthogonal weights by an orthogonal parameterization based on Björck orthonormalization ( Björck & Bowie , 1971 ) . Lastly , some approaches incorporate the orthogonal constraint into their network architecture . For example , Mhammedi et al . ( 2017 ) build orthogonal layers in RNNs with Householder reflections . Orthogonal convolution . In a convolution layer , the Jacobian matrix is no longer the weight matrix . Thus , the above-mentioned weight orthogonalization methods cannot be trivially applied to convolution layers . Wang et al . ( 2020 ) ; Qi et al . ( 2020 ) encourage the orthogonality of the Jacobian matrix of convolution layers through a regularizer , but they cannot achieve strict orthogonality and cannot attain provable robustness . Block Convolutional Orthogonal parameterization ( BCOP ) ( Li et al. , 2019 ) is a pioneering work for enforcing the orthogonality of the Jacobian matrix of a convolution layer . It conducts parameterization of orthogonal convolutions by adapting the construction algorithm proposed by Xiao et al . ( 2018 ) . But BCOP is slow in training . Trockman & Kolter ( 2021 ) apply the Cayley transform to a skew-symmetric convolution weight in the Fourier domain so that the convolution recovered from the Fourier domain has an orthogonal Jacobian matrix .
Though it achieves higher efficiency in some applications than BCOP , it is still significantly more costly than standard convolutional layers . Skew Orthogonal Convolution ( Singla & Feizi , 2021b ) achieves orthogonal convolution through a Taylor expansion of the matrix exponential . It is faster than BCOP in training but slower in evaluation . In contrast , our method is efficient in both training and evaluation . Recently , Su et al . ( 2021 ) achieve orthogonal convolutions via paraunitary systems in an elegant manner . Nevertheless , it also suffers from inefficiency issues . 3 PRELIMINARY . Notation . For a matrix M , M [ i , j ] denotes the element in the i-th row and the j-th column , M [ i , : ] denotes the i-th row and M [ : , j ] denotes the j-th column . We use similar notation for indexing tensors . For n ∈ N , [ n ] denotes the set of n numbers from 0 to n − 1 , that is , { 0 , 1 , · · · , n − 1 } . We denote the function that obtains the singular values by σ ( · ) . Let vec ( · ) denote the function which unfolds a matrix or higher-order tensor into a vector . Let ω_k = exp ( 2πi/k ) , where i = √−1 . Let F_k ∈ C^{k×k} be the matrix of the discrete Fourier transform ( DFT ) for k-element vectors , with each entry defined as F_k [ p , q ] = ω_k^{pq} . 3.1 RELATION BETWEEN THE JACOBIAN MATRIX AND THE WEIGHT MATRIX . We denote the input tensor of a convolution layer by X ∈ R^{c×n×n} , where n is the spatial size and c is the number of input channels . We denote the convolution operation on X by conv_Ŵ ( X ) , where Ŵ ∈ R^{k×k×c×c} and k is the size of the receptive field , normally smaller than the size n of the input image . In this definition , we set the number of output channels identical to that of input channels . Meanwhile , we set the convolution stride to 1 and adopt cyclic padding to make the output the same size as the input .
We denote the output tensor by Y ∈ R^{c×n×n} , and each element of Y is obtained by

∀ l , s , t ∈ [ c ] × [ n ] × [ n ] , Y [ l , s , t ] = Σ_{r ∈ [ c ]} Σ_{p ∈ [ k ]} Σ_{q ∈ [ k ]} X [ r , s + p , t + q ] Ŵ [ p , q , l , r ] , ( 1 )

where the spatial indices are taken modulo n due to the cyclic padding . Since the convolution operation is a linear transformation , there exists a linear transformation M ∈ R^{n²c × n²c} which satisfies :

Y = conv_Ŵ ( X ) ⇔ vec ( Y ) = M vec ( X ) . ( 2 )

To simplify the illustration , Ŵ ∈ R^{k×k×c×c} is normally expanded into W ∈ R^{n×n×c×c} through zero-padding . We term W the expanded convolution kernel . In W , only k²c² elements are non-zero . Based on the above definition , we review the following theorem from Sedghi et al . ( 2019 ) : Theorem 1 ( see Sedghi et al . ( 2019 ) , Section 2.2 ) . For any expanded convolution kernel W ∈ R^{n×n×c×c} , let M be the matrix encoding the linear transformation of a convolution with W . For each p , q ∈ [ n ] × [ n ] , let P ( p , q ) be the c × c matrix computed by

∀ s , t ∈ [ c ] × [ c ] , P ( p , q ) [ s , t ] = ( F_n^T W [ : , : , s , t ] F_n ) [ p , q ] . ( 3 )

Then

σ ( M ) = ⋃_{p ∈ [ n ] , q ∈ [ n ]} σ ( P ( p , q ) ) . ( 4 )

The above theorem gives the relation between the singular values of the transformation matrix M and the expanded convolution kernel W . Theorem 2 . For any W ∈ R^{n×n×c×c} , the Jacobian matrix of the convolution layer , J , is orthogonal if and only if ∀ p , q ∈ [ n ] × [ n ] , each singular value of the matrix P ( p , q ) is 1 . Proof . A real matrix is orthogonal if and only if each of its singular values is 1 . Meanwhile , the Jacobian matrix J = ∂ vec ( Y ) / ∂ vec ( X ) = M . Based on Theorem 1 , each singular value of M is 1 if and only if ∀ p , q ∈ [ n ] × [ n ] , each singular value of the matrix P ( p , q ) is 1 , which completes the proof .
Such a layer is 1-Lipschitz: this property is an important one for building robust neural networks. The authors conduct experiments on classification tasks using the CIFAR10/CIFAR100 datasets. They report standard and robust accuracies, training, and evaluation time. They claim the proposed method outperforms existing state-of-the-art approaches. | SP:7c1eb404db9c02b3d01939998f0a56f6e08d907e
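Theorem 1 quoted in this row is easy to check numerically: build the transformation matrix M of a cyclic convolution column by column, then compare its singular values with those of the c × c matrices P(p,q) formed from the 2-D DFT of each kernel slice. The sizes and the random kernel below are arbitrary toy choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, k = 4, 2, 3                       # spatial size, channels, receptive field
W = np.zeros((n, n, c, c))              # expanded kernel: k x k block zero-padded to n x n
W[:k, :k] = rng.standard_normal((k, k, c, c))

def conv(X):
    """Cyclic multi-channel convolution: Y[l,s,t] = sum_{r,p,q} X[r,s+p,t+q] W[p,q,l,r]."""
    Y = np.zeros((c, n, n))
    for l in range(c):
        for s in range(n):
            for t in range(n):
                Y[l, s, t] = sum(
                    X[r, (s + p) % n, (t + q) % n] * W[p, q, l, r]
                    for r in range(c) for p in range(n) for q in range(n))
    return Y

# Build M by applying the convolution to every standard basis vector.
M = np.stack([conv(np.eye(n * n * c)[j].reshape(c, n, n)).ravel()
              for j in range(n * n * c)], axis=1)

# Singular values via Theorem 1: P(p,q)[s,t] = (F^T W[:,:,s,t] F)[p,q].
F = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
sv_theorem = np.concatenate([
    np.linalg.svd(np.array([[(F.T @ W[:, :, s, t] @ F)[p, q]
                             for t in range(c)] for s in range(c)]),
                  compute_uv=False)
    for p in range(n) for q in range(n)])

sv_direct = np.linalg.svd(M, compute_uv=False)
match = np.allclose(np.sort(sv_direct), np.sort(sv_theorem))
```

The n²c singular values of the big matrix M coincide (up to float tolerance) with the union of the singular values of the n² small c × c matrices, which is exactly what makes the explicit construction in the paper tractable.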
Constructing Orthogonal Convolutions in an Explicit Manner | 1 INTRODUCTION . A layer with an orthogonal input-output Jacobian matrix is 1-Lipschitz in the 2-norm , robust to the perturbation in input . Meanwhile , it preserves the gradient norm in back-propagating the gradient , which effectively overcomes the gradient explosion and attenuation issues in training deep neural networks . In the past years , many studies have shown that exploiting orthogonality of the Jacobian matrix of layers in neural networks can achieve provable robustness to adversarial attacks ( Li et al. , 2019 ) , stabler and faster training ( Arjovsky et al. , 2016 ; Xiao et al. , 2018 ) , and improved generalization ( Cogswell et al. , 2016 ; Bansal et al. , 2018 ; Sedghi et al. , 2019 ) . In a fully-connected layer y = Wx where W ∈ Rcout×cin is the weight matrix , the layer ’ s inputoutput Jacobian matrix J = ∂y∂x is just W. Thus , preserving the orthogonality of J can be accomplished through imposing the orthogonal constraint on W , which has been extensively studied in previous works ( Mhammedi et al. , 2017 ; Cissé et al. , 2017 ; Anil et al. , 2019 ) . In contrast , in a convolution layer , the Jacobian matrix is no longer the weight matrix ( convolution kernel ) . Instead , it is the circulant matrix composed of convolution kernel ( Karami et al. , 2019 ; Sedghi et al. , 2019 ) . Thus , generally , simply constructing an orthogonal convolution kernel can not achieve an orthogonal convolution . Achieving an orthogonal Jacobian matrix in the convolution layer is more challenging than that in a fully-connected layer . It is plausibly straightforward to expand the convolution kernel to the doubly block circulant Jacobian matrix and directly impose the orthogonal constraint on the Jacobian matrix . Nevertheless , it is extremely difficult to construct a Jacobian matrix that is both doubly block-circulant and orthogonal . Block Convolutional Orthogonal parameterization ( BCOP ) ( Li et al. 
, 2019 ) is one of the pioneering works for constructing the orthogonal convolution neural networks . It adopts the construction algorithm ( Xiao et al. , 2018 ) which decomposes a 2-D convolution into a stack of 1-D convolutions and a channel-wise orthogonal transformation . Trockman & Kolter ( 2021 ) maps the convolution kernel and the feature tensor into the frequency domain using Fast Fourier Transform ( FFT ) , and achieves the orthogonality of the Jacobian matrix through Cayley transform on the weight matrix in the frequency domain . They devise the convolution kernel of the same size as the input feature map , which takes more parameters than standard convolution layers with a local reception field . Meanwhile , FFT maps the real-value feature map and the weight matrix into matrices of complex values , increasing the computational cost to 4 times its counterpart in the real-value domain . Meanwhile , the Cayley transform requires computing the matrix inverse , which is not friendly for GPU computation . Recently , Skew Orthogonal Convolutions ( SOC ) ( Singla & Feizi , 2021b ) devises skew-symmetric filter and exploits matrix exponential ( Hoogeboom et al. , 2020 ) to attain the orthogonality of the Jacobian matrix . But SOC is slow in evaluation since it needs to apply a convolution filter multiple times sequentially on the feature map to obtain the Taylor expansion . Observing the efficiency limitations of the existing methods , in this work , we propose an explicitly constructed orthogonal ( ECO ) convolution , which is fast in both training and evaluation . It is well known that the Jacobian matrix of a layer is orthogonal if and only if each of its singular values is 1 . Thus we convert the problem of ensuring the orthogonality of the Jacobian matrix into making every singular value as 1 . 
Based on the relation between the singular values of the Jacobian matrix for the convolution layer and the weight matrix of the convolution layer discovered by Sedghi et al . ( 2019 ) , we construct the convolution kernel so that it ensures every singular value of the Jacobian matrix to be 1 . Compared with the recent state-of-the-art method , SOC ( Singla & Feizi , 2021b ) implicitly approximating orthogonal convolution by multiple times convolution operations , ours explicitly builds the orthogonal convolution . Thus , we can directly deploy the constructed orthogonal convolution in evaluation , taking the same computational cost as the standard convolution . It is more efficient than SOC with multiple times convolution operations in evaluation . Experiments on CIFAR10 and CIFAR100 show that , in evaluation , taking less time , ours achieves competitive standard and robust accuracy compared with SOC . 2 RELATED WORK . Weight orthogonalization . In a fully-connected layer , the input-output Jacobian matrix is the transform itself . Thus many efforts are devoted to orthogonalizing the weights . Some early works exploit an orthogonal weight initialization to speed up the training ( Saxe et al. , 2014 ; Pennington et al. , 2017 ) and build extremely deep neural networks ( Xiao et al. , 2018 ) . Recently , more efforts are devoted to exploiting the orthogonality not only in initialization but also in the whole training process . Some approaches propose “ soft ” constraints on the weight matrix . For example , Bansal et al . ( 2018 ) ; Xiong et al . ( 2016 ) introduce mutual coherence and spectral restricted isometry as orthogonal regularization on the weight matrix . Parseval Networks ( Cissé et al. , 2017 ) adapts a regularizer to encourage the orthogonality of the weight matrices . However , these methods can not guarantee the exact orthogonality of the weight matrix . Some approaches orthogonalize features during the forward pass . For example , Huang et al . 
( 2018b ) extend Batch Normalization ( Ioffe & Szegedy , 2015 ) with ZCA ; Huang et al . ( 2018a ) solve a sub-optimization problem to decorrelate deep features . These methods cannot avoid computationally expensive operations like SVD , and are thus slow in practice . Some approaches enforce orthogonality by using Riemannian optimization on the Stiefel manifold . For example , Casado & Martínez-Rubio ( 2019 ) use a Padé approximation as an alternative to the matrix exponential mapping to update the weight matrix on the Stiefel manifold . Li et al . ( 2020 ) use an iterative Cayley transform to keep the weight matrix orthonormal . Anil et al . ( 2019 ) also attain orthogonal weights by an orthogonal parameterization based on Björck orthonormalization ( Björck & Bowie , 1971 ) . Lastly , some approaches incorporate the orthogonal constraint into their network architecture . For example , Mhammedi et al . ( 2017 ) build orthogonal layers in RNNs with Householder reflections . Orthogonal convolution . In a convolution layer , the Jacobian matrix is no longer the weight matrix . Thus , the above-mentioned weight orthogonalization methods cannot be trivially applied to convolution layers . Wang et al . ( 2020 ) ; Qi et al . ( 2020 ) encourage the orthogonality of the Jacobian matrix of convolution layers through a regularizer , but they cannot achieve strict orthogonality and cannot attain provable robustness . Block Convolutional Orthogonal parameterization ( BCOP ) ( Li et al. , 2019 ) is a pioneering work for enforcing the orthogonality of the Jacobian matrix of a convolution layer . It conducts parameterization of orthogonal convolutions by adapting the construction algorithm proposed by Xiao et al . ( 2018 ) . But BCOP is slow in training . Trockman & Kolter ( 2021 ) apply the Cayley transform to a skew-symmetric convolution weight in the Fourier domain so that the convolution recovered from the Fourier domain has an orthogonal Jacobian matrix .
Though it achieves higher efficiency in some applications than BCOP , it is still significantly more costly than standard convolutional layers . Skew Orthogonal Convolution ( Singla & Feizi , 2021b ) achieves orthogonal convolution through a Taylor expansion of the matrix exponential . It is faster than BCOP in training but slower in evaluation . In contrast , our method is efficient in both training and evaluation . Recently , Su et al . ( 2021 ) achieve orthogonal convolutions via paraunitary systems in an elegant manner . Nevertheless , it also suffers from inefficiency issues . 3 PRELIMINARY . Notation . For a matrix M , M [ i , j ] denotes the element in the i-th row and the j-th column , M [ i , : ] denotes the i-th row and M [ : , j ] denotes the j-th column . We use similar notation for indexing tensors . For n ∈ N , [ n ] denotes the set of n numbers from 0 to n − 1 , that is , { 0 , 1 , · · · , n − 1 } . We denote the function that obtains the singular values by σ ( · ) . Let vec ( · ) denote the function which unfolds a matrix or higher-order tensor into a vector . Let ω_k = exp ( 2πi/k ) , where i = √−1 . Let F_k ∈ C^{k×k} be the matrix of the discrete Fourier transform ( DFT ) for k-element vectors , with each entry defined as F_k [ p , q ] = ω_k^{pq} . 3.1 RELATION BETWEEN THE JACOBIAN MATRIX AND THE WEIGHT MATRIX . We denote the input tensor of a convolution layer by X ∈ R^{c×n×n} , where n is the spatial size and c is the number of input channels . We denote the convolution operation on X by conv_Ŵ ( X ) , where Ŵ ∈ R^{k×k×c×c} and k is the size of the receptive field , normally smaller than the size n of the input image . In this definition , we set the number of output channels identical to that of input channels . Meanwhile , we set the convolution stride to 1 and adopt cyclic padding to make the output the same size as the input .
We denote the output tensor by Y ∈ R^{c×n×n} , and each element of Y is obtained by

∀ l , s , t ∈ [ c ] × [ n ] × [ n ] , Y [ l , s , t ] = Σ_{r ∈ [ c ]} Σ_{p ∈ [ k ]} Σ_{q ∈ [ k ]} X [ r , s + p , t + q ] Ŵ [ p , q , l , r ] , ( 1 )

where the spatial indices are taken modulo n due to the cyclic padding . Since the convolution operation is a linear transformation , there exists a linear transformation M ∈ R^{n²c × n²c} which satisfies :

Y = conv_Ŵ ( X ) ⇔ vec ( Y ) = M vec ( X ) . ( 2 )

To simplify the illustration , Ŵ ∈ R^{k×k×c×c} is normally expanded into W ∈ R^{n×n×c×c} through zero-padding . We term W the expanded convolution kernel . In W , only k²c² elements are non-zero . Based on the above definition , we review the following theorem from Sedghi et al . ( 2019 ) : Theorem 1 ( see Sedghi et al . ( 2019 ) , Section 2.2 ) . For any expanded convolution kernel W ∈ R^{n×n×c×c} , let M be the matrix encoding the linear transformation of a convolution with W . For each p , q ∈ [ n ] × [ n ] , let P ( p , q ) be the c × c matrix computed by

∀ s , t ∈ [ c ] × [ c ] , P ( p , q ) [ s , t ] = ( F_n^T W [ : , : , s , t ] F_n ) [ p , q ] . ( 3 )

Then

σ ( M ) = ⋃_{p ∈ [ n ] , q ∈ [ n ]} σ ( P ( p , q ) ) . ( 4 )

The above theorem gives the relation between the singular values of the transformation matrix M and the expanded convolution kernel W . Theorem 2 . For any W ∈ R^{n×n×c×c} , the Jacobian matrix of the convolution layer , J , is orthogonal if and only if ∀ p , q ∈ [ n ] × [ n ] , each singular value of the matrix P ( p , q ) is 1 . Proof . A real matrix is orthogonal if and only if each of its singular values is 1 . Meanwhile , the Jacobian matrix J = ∂ vec ( Y ) / ∂ vec ( X ) = M . Based on Theorem 1 , each singular value of M is 1 if and only if ∀ p , q ∈ [ n ] × [ n ] , each singular value of the matrix P ( p , q ) is 1 , which completes the proof .
The method is based on considering the spectral domain, where the orthogonality of the 4D conv kernel (in the spatial domain) is characterized as the orthogonality of 2D matrices, which can be enforced by existing techniques such as the Cayley transform. It is shown in experiments that this method is significantly faster than the recent skew orthogonal convolution (at ICML'21) method. | SP:7c1eb404db9c02b3d01939998f0a56f6e08d907e
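The Cayley-transform parameterization mentioned in this row (Trockman & Kolter, 2021) rests on a classical fact that is easy to verify for plain matrices: for any skew-symmetric S, (I + S) is invertible and (I + S)⁻¹(I − S) is orthogonal. Below is a minimal sketch of that fact with a random toy matrix, not the frequency-domain convolutional version used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
S = A - A.T                           # skew-symmetric: S^T = -S
I = np.eye(4)

# Cayley transform: Q = (I + S)^{-1} (I - S).
# Solving the linear system avoids forming the explicit inverse.
Q = np.linalg.solve(I + S, I - S)

orthogonal = np.allclose(Q.T @ Q, I)  # Q preserves 2-norms: a 1-Lipschitz linear map
```

Because (I + S) and (I − S) are polynomials in S, they commute, which is what makes QᵀQ collapse to the identity; the convolutional variant applies the same construction per frequency after an FFT.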
Constructing Orthogonal Convolutions in an Explicit Manner | 1 INTRODUCTION . A layer with an orthogonal input-output Jacobian matrix is 1-Lipschitz in the 2-norm , robust to the perturbation in input . Meanwhile , it preserves the gradient norm in back-propagating the gradient , which effectively overcomes the gradient explosion and attenuation issues in training deep neural networks . In the past years , many studies have shown that exploiting orthogonality of the Jacobian matrix of layers in neural networks can achieve provable robustness to adversarial attacks ( Li et al. , 2019 ) , stabler and faster training ( Arjovsky et al. , 2016 ; Xiao et al. , 2018 ) , and improved generalization ( Cogswell et al. , 2016 ; Bansal et al. , 2018 ; Sedghi et al. , 2019 ) . In a fully-connected layer y = Wx where W ∈ Rcout×cin is the weight matrix , the layer ’ s inputoutput Jacobian matrix J = ∂y∂x is just W. Thus , preserving the orthogonality of J can be accomplished through imposing the orthogonal constraint on W , which has been extensively studied in previous works ( Mhammedi et al. , 2017 ; Cissé et al. , 2017 ; Anil et al. , 2019 ) . In contrast , in a convolution layer , the Jacobian matrix is no longer the weight matrix ( convolution kernel ) . Instead , it is the circulant matrix composed of convolution kernel ( Karami et al. , 2019 ; Sedghi et al. , 2019 ) . Thus , generally , simply constructing an orthogonal convolution kernel can not achieve an orthogonal convolution . Achieving an orthogonal Jacobian matrix in the convolution layer is more challenging than that in a fully-connected layer . It is plausibly straightforward to expand the convolution kernel to the doubly block circulant Jacobian matrix and directly impose the orthogonal constraint on the Jacobian matrix . Nevertheless , it is extremely difficult to construct a Jacobian matrix that is both doubly block-circulant and orthogonal . Block Convolutional Orthogonal parameterization ( BCOP ) ( Li et al. 
, 2019 ) is one of the pioneering works for constructing the orthogonal convolution neural networks . It adopts the construction algorithm ( Xiao et al. , 2018 ) which decomposes a 2-D convolution into a stack of 1-D convolutions and a channel-wise orthogonal transformation . Trockman & Kolter ( 2021 ) maps the convolution kernel and the feature tensor into the frequency domain using Fast Fourier Transform ( FFT ) , and achieves the orthogonality of the Jacobian matrix through Cayley transform on the weight matrix in the frequency domain . They devise the convolution kernel of the same size as the input feature map , which takes more parameters than standard convolution layers with a local reception field . Meanwhile , FFT maps the real-value feature map and the weight matrix into matrices of complex values , increasing the computational cost to 4 times its counterpart in the real-value domain . Meanwhile , the Cayley transform requires computing the matrix inverse , which is not friendly for GPU computation . Recently , Skew Orthogonal Convolutions ( SOC ) ( Singla & Feizi , 2021b ) devises skew-symmetric filter and exploits matrix exponential ( Hoogeboom et al. , 2020 ) to attain the orthogonality of the Jacobian matrix . But SOC is slow in evaluation since it needs to apply a convolution filter multiple times sequentially on the feature map to obtain the Taylor expansion . Observing the efficiency limitations of the existing methods , in this work , we propose an explicitly constructed orthogonal ( ECO ) convolution , which is fast in both training and evaluation . It is well known that the Jacobian matrix of a layer is orthogonal if and only if each of its singular values is 1 . Thus we convert the problem of ensuring the orthogonality of the Jacobian matrix into making every singular value as 1 . 
Based on the relation between the singular values of the Jacobian matrix of a convolution layer and the weight matrix of that layer discovered by Sedghi et al. (2019), we construct the convolution kernel so that every singular value of the Jacobian matrix is guaranteed to be 1. Compared with the recent state-of-the-art method SOC (Singla & Feizi, 2021b), which implicitly approximates an orthogonal convolution by applying convolution operations multiple times, ours explicitly builds the orthogonal convolution. Thus, we can directly deploy the constructed orthogonal convolution in evaluation, taking the same computational cost as a standard convolution, which is more efficient than SOC's repeated convolution operations. Experiments on CIFAR10 and CIFAR100 show that, while taking less evaluation time, ours achieves competitive standard and robust accuracy compared with SOC. 2 RELATED WORK. Weight orthogonalization. In a fully-connected layer, the input-output Jacobian matrix is the transform itself, so many efforts are devoted to orthogonalizing the weights. Some early works exploit an orthogonal weight initialization to speed up training (Saxe et al., 2014; Pennington et al., 2017) and to build extremely deep neural networks (Xiao et al., 2018). Recently, more efforts are devoted to exploiting orthogonality not only at initialization but throughout the whole training process. Some approaches impose "soft" constraints on the weight matrix. For example, Bansal et al. (2018) and Xiong et al. (2016) introduce mutual coherence and the spectral restricted isometry property as orthogonal regularizers on the weight matrix. Parseval Networks (Cissé et al., 2017) adopts a regularizer to encourage the orthogonality of the weight matrices. However, these methods cannot guarantee exact orthogonality of the weight matrix. Some approaches orthogonalize features during the forward pass. For example, Huang et al.
(2018b) extends Batch Normalization (Ioffe & Szegedy, 2015) with ZCA whitening; Huang et al. (2018a) solves a sub-optimization problem to decorrelate deep features. These methods cannot avoid computationally expensive operations like SVD and are thus slow in practice. Some approaches enforce orthogonality by using Riemannian optimization on the Stiefel manifold. For example, Casado & Martínez-Rubio (2019) use a Padé approximation as an alternative to the matrix exponential mapping to update the weight matrix on the Stiefel manifold. Li et al. (2020) use an iterative Cayley transform to keep the weight matrix orthonormal. Anil et al. (2019) also attain orthogonal weights via an orthogonal parameterization based on Björck orthonormalization (Björck & Bowie, 1971). Lastly, some approaches incorporate the orthogonal constraint into their network architecture. For example, Mhammedi et al. (2017) builds orthogonal layers in RNNs with Householder reflections. Orthogonal convolution. In a convolution layer, the Jacobian matrix is no longer the weight matrix. Thus, the above-mentioned weight orthogonalization methods cannot be trivially applied to convolution layers. Wang et al. (2020) and Qi et al. (2020) encourage the orthogonality of the Jacobian matrix of convolution layers through a regularizer, but they can achieve neither strict orthogonality nor provable robustness. Block Convolutional Orthogonal parameterization (BCOP) (Li et al., 2019) is a pioneering work for enforcing the orthogonality of the Jacobian matrix of a convolution layer. It parameterizes orthogonal convolutions by adapting the construction algorithm proposed by Xiao et al. (2018), but BCOP is slow in training. Trockman & Kolter (2021) apply the Cayley transform to a skew-symmetric convolution weight in the Fourier domain so that the convolution recovered from the Fourier domain has an orthogonal Jacobian matrix.
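The Cayley transform referenced above maps a skew-symmetric matrix S to an orthogonal matrix Q = (I − S)(I + S)⁻¹. A small numpy sketch of this fact (illustrative; Trockman & Kolter apply it per frequency in the Fourier domain rather than to a dense weight matrix):

```python
import numpy as np

def cayley(a):
    """Cayley transform of the skew-symmetric part of `a`:
    Q = (I - S)(I + S)^{-1} with S = a - a.T is orthogonal.
    Note the matrix inverse, which the text points out is
    unfriendly to GPU computation."""
    n = a.shape[0]
    s = a - a.T
    return (np.eye(n) - s) @ np.linalg.inv(np.eye(n) + s)

rng = np.random.default_rng(1)
q = cayley(rng.normal(size=(5, 5)))
print(np.allclose(q.T @ q, np.eye(5)))  # True
```

Because S is skew-symmetric its eigenvalues are purely imaginary, so I + S is always invertible and the transform is well defined.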
Though it achieves higher efficiency in some applications than BCOP, it is still significantly more costly than standard convolutional layers. Skew Orthogonal Convolution (Singla & Feizi, 2021b) achieves orthogonal convolution through a Taylor expansion of the matrix exponential. It is faster than BCOP in training but slower in evaluation. In contrast, our method is efficient in both training and evaluation. Recently, Su et al. (2021) achieve orthogonal convolutions via paraunitary systems in an elegant manner; nevertheless, their method also suffers from inefficiency issues. 3 PRELIMINARY. Notation. For a matrix M, M[i, j] denotes the element in the i-th row and the j-th column, M[i, :] denotes the i-th row, and M[:, j] denotes the j-th column. We use similar notation for indexing tensors. For n ∈ N, [n] denotes the set of n numbers from 0 to n − 1, that is, {0, 1, ..., n − 1}. We denote the function that returns the singular values by σ(·). Let vec(·) denote the function which unfolds a matrix or higher-order tensor into a vector. Let ω_k = exp(2πi/k), where i = √−1. Let F_k ∈ C^{k×k} be the matrix of the discrete Fourier transform (DFT) for k-element vectors, with each entry defined as F_k[p, q] = ω_k^{pq}. 3.1 RELATION BETWEEN THE JACOBIAN MATRIX AND THE WEIGHT MATRIX. We denote the input tensor of a convolution layer by X ∈ R^{c×n×n}, where n is the spatial size and c is the number of input channels. We denote the convolution operation on X by conv_Ŵ(X), where Ŵ ∈ R^{k×k×c×c} and k is the size of the receptive field, normally smaller than the size n of the input image. In this definition, we set the number of output channels identical to that of input channels. Meanwhile, we set the convolution stride to 1 and adopt cyclic padding to make the output the same size as the input.
We denote the output tensor by Y ∈ R^{c×n×n}, and each element of Y is obtained by

∀ (l, s, t) ∈ [c] × [n] × [n]:  Y[l, s, t] = ∑_{r∈[c]} ∑_{p∈[k]} ∑_{q∈[k]} X[r, s + p, t + q] Ŵ[p, q, l, r],   (1)

where the spatial indices s + p and t + q are taken modulo n due to the cyclic padding. Since the convolution operation is a linear transformation, there exists a linear transformation M ∈ R^{n²c × n²c} which satisfies:

Y = conv_Ŵ(X)  ⇔  vec(Y) = M vec(X).   (2)

To simplify the illustration, Ŵ ∈ R^{k×k×c×c} is normally expanded into W ∈ R^{n×n×c×c} through zero-padding. We term W the expanded convolution kernel. In W, only k²c² elements are non-zero. Based on the above definitions, we review the following theorem of Sedghi et al. (2019): Theorem 1 (see Sedghi et al. (2019), Section 2.2). For any expanded convolution kernel W ∈ R^{n×n×c×c}, let M be the matrix encoding the linear transformation of a convolution with W. For each (p, q) ∈ [n] × [n], let P^{(p,q)} be the c × c matrix computed by

∀ (s, t) ∈ [c] × [c]:  P^{(p,q)}[s, t] = (F_n^⊤ W[:, :, s, t] F_n)[p, q].   (3)

Then

σ(M) = ⋃_{p∈[n], q∈[n]} σ(P^{(p,q)}).   (4)

The above theorem gives the relation between the singular values of the transformation matrix M and the expanded convolution kernel W. Theorem 2. For any W ∈ R^{n×n×c×c}, the Jacobian matrix J of the convolution layer is orthogonal if and only if, for every (p, q) ∈ [n] × [n], each singular value of the matrix P^{(p,q)} is 1. Proof. A real matrix is orthogonal if and only if each of its singular values is 1. Meanwhile, the Jacobian matrix J = ∂vec(Y)/∂vec(X) = M. By Theorem 1, each singular value of M is 1 if and only if, for every (p, q) ∈ [n] × [n], each singular value of P^{(p,q)} is 1, which completes the proof. | This paper studies how to construct orthogonal convolutional networks in an efficient way. To this end, the paper builds a connection between the DFT-transformed kernel and the common dilated convolution.
During training, the forward pass can be done by a sequence of inverse DFT and dilated convolution. During testing, all the convolution kernels only need to be transformed once so that evaluation time is significantly reduced. | SP:7c1eb404db9c02b3d01939998f0a56f6e08d907e |
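Theorem 1 above can be checked numerically on a toy case. The sketch below (with hypothetical sizes n = 4, c = 2, not from the paper) builds the full transformation matrix M of the cyclic convolution in Eq. (1) and compares its singular values against the union over the P^{(p,q)} blocks:

```python
import numpy as np

n, c = 4, 2  # hypothetical toy spatial size and channel count
rng = np.random.default_rng(0)
w = rng.normal(size=(n, n, c, c))  # expanded kernel W[p, q, l, r]

def conv(x):
    """Cyclic 2-D convolution matching Eq. (1):
    Y[l,s,t] = sum_{r,p,q} X[r, s+p, t+q] * W[p, q, l, r], indices mod n."""
    y = np.zeros((c, n, n))
    for l in range(c):
        for s in range(n):
            for t in range(n):
                for r in range(c):
                    for p in range(n):
                        for q in range(n):
                            y[l, s, t] += x[r, (s + p) % n, (t + q) % n] * w[p, q, l, r]
    return y

# Build the transformation matrix M column by column from basis tensors.
m = np.zeros((c * n * n, c * n * n))
for i in range(c * n * n):
    e = np.zeros(c * n * n)
    e[i] = 1.0
    m[:, i] = conv(e.reshape(c, n, n)).reshape(-1)

# Theorem 1: sigma(M) equals the union of sigma(P(p,q)), where
# P(p,q)[s,t] = (F^T W[:,:,s,t] F)[p,q] and F is the DFT matrix.
f = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
sv = []
for p in range(n):
    for q in range(n):
        pq = np.empty((c, c), dtype=complex)
        for s in range(c):
            for t in range(c):
                pq[s, t] = (f.T @ w[:, :, s, t] @ f)[p, q]
        sv.extend(np.linalg.svd(pq, compute_uv=False))

print(np.allclose(sorted(np.linalg.svd(m, compute_uv=False)), sorted(sv)))
```

The check works because the 2-D Fourier modes block-diagonalize the doubly block-circulant M into the c × c matrices P^{(p,q)}, and unitary changes of basis preserve singular values.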
Metric Learning on Temporal Graphs via Few-Shot Examples | 1 INTRODUCTION. Metric learning aims to learn a proper distance metric among data items in the input space that reflects their underlying relationship. With the prevalence of graph data in many real-world applications, it is of key importance to design a good distance metric function for graph data, such that the output value of the function is small for similar graphs and large for dissimilar ones. Many downstream tasks on graph data can benefit from such a distance metric. For example, it could lead to significantly improved accuracy for graph classification in many domains such as protein and drug discovery (Schölkopf et al., 2004; Dai et al., 2016), molecular property prediction (Duvenaud et al., 2015; Gilmer et al., 2017), and epidemic infection pattern analysis (Derr et al., 2020; Oettershagen et al., 2020); it could also speed up the labeling of graph data in an active learning framework (Macskassy, 2009). However, current graph metric learning methods (Shaw et al., 2011; Tsitsulin et al., 2018; Bai et al., 2019; Li et al., 2019; Yoshida et al., 2019) assume the input graphs are static and ignore the evolution patterns of temporal graphs, which may also provide insights for identifying graph properties (Isella et al., 2011). To the best of our knowledge, there is currently no algorithm designed for learning metrics over temporal graphs that incorporates evolution patterns into the learned metric space. On the other hand, facing limited i.i.d. data, traditional metric learning methods (Goldberger et al., 2004; Salakhutdinov & Hinton, 2007) have been extended to few-shot learning by transferring the learned metric across different tasks (Vinyals et al., 2016; Snell et al., 2017; Oreshkin et al., 2018; Allen et al., 2019).
The label scarcity problem also occurs in the graph research community, because labeling graph data is typically expensive and requires background knowledge (Hu et al., 2020a;b; Qiu et al., 2020), especially for domain-specific applications such as biological graph data labeling (Zitnik et al., 2018). Inspired by this, graph metric learning via few-shot examples has recently attracted much research attention. However, the majority of work has been devoted to node-level metric learning (Yao et al., 2020; Suo et al., 2020; Huang & Zitnik, 2020; Lan et al., 2020; Wang et al., 2020; Ding et al., 2020); only a few nascent efforts focus on graph-level metrics (Ma et al., 2020; Chauhan et al., 2020), and all of them ignore graph dynamics and take static graphs as input. To wrap up, the observations discussed above expose three bottlenecks for temporal graph metric learning algorithms: 1) how to learn a good metric over temporal graphs, especially at the entire-graph level (i.e., accuracy of metrics); 2) how to ensure the learning process consumes only a small amount of labelled temporal graph data; and 3) how to smoothly apply the learned metric to identify unseen graphs (i.e., flexibility of metrics). In this paper, we wish to learn a distance metric from only a few temporal graphs, which (as shown in Figure 1) could not only help accurately classify seen temporal graphs during each metric learning task, but also adapt smoothly to new metric learning tasks and converge fast (i.e., within several training iterations) to classify unseen temporal graphs by consuming a few labeled examples. Our main contributions can be summarized as: • To describe evolving graphs in a fine-grained manner, we propose the streaming-snapshot model, which captures multiple time scales suitable for complex real-world scenarios; its other merits are discussed in Section 3.
• To learn the metric over a set of streaming-snapshot modelled temporal graphs, we propose the prototypical temporal graph encoder, which extracts the lifelong evolution representation of a temporal graph with the proposed multi-scale time attention mechanism, such that temporal graphs from the same class share similar encoded patterns. To make the extracted metric rapidly adapt to unseen temporal graphs with only a few examples, we introduce a meta-learner to transfer and tailor knowledge, and encapsulate it with the prototypical temporal graph encoder into an end-to-end model called METATAG. • We conduct temporal graph classification experiments in the biological network domain and the social network domain, which show the effectiveness of METATAG compared with state-of-the-art algorithms. We also analyze the convergence speed of METATAG during meta-testing, the parameter sensitivity, and an ablation study of each part of METATAG. 2 PRELIMINARIES. Graph Metric Learning. Learning a distance metric is closely related to the feature extraction problem (Globerson & Roweis, 2005; Salakhutdinov & Hinton, 2007). To be specific, given any distance metric D, we can measure the distance D(x_i, x_j) between two input feature vectors x_i ∈ R^m and x_j ∈ R^m by computing D′(f_θ(x_i), f_θ(x_j)), where f_θ is a learnable function mapping the input feature x_i ∈ R^m into the latent feature h_i = f_θ(x_i) ∈ R^f (Salakhutdinov & Hinton, 2007). The transformation f_θ can be linear or non-linear (Wang & Sun, 2015). When f_θ is a linear function f_θ(x_i) = Wx_i, learning a generalized Mahalanobis metric D can be expressed as follows:

D(x_i, x_j) = √((x_i − x_j)^⊤ M (x_i − x_j)) = √((x_i − x_j)^⊤ W^⊤W (x_i − x_j)) = √((Wx_i − Wx_j)^⊤ (Wx_i − Wx_j)) = D′(f_θ(x_i), f_θ(x_j)),   (1)

where M is an arbitrary positive semi-definite matrix to be determined for the Mahalanobis metric D, which can be decomposed as M = W^⊤W.
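The chain of equalities in Eq. (1) above is easy to verify numerically; a minimal sketch with hypothetical dimensions (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
m_dim, f_dim = 6, 3          # hypothetical input / latent dimensions
w = rng.normal(size=(f_dim, m_dim))
big_m = w.T @ w              # M = W^T W, positive semi-definite

xi, xj = rng.normal(size=m_dim), rng.normal(size=m_dim)

# Mahalanobis distance on the input space ...
d_mahalanobis = np.sqrt((xi - xj) @ big_m @ (xi - xj))
# ... equals the Euclidean distance after the linear map f(x) = Wx.
d_euclidean = np.linalg.norm(w @ xi - w @ xj)

print(np.isclose(d_mahalanobis, d_euclidean))  # True
```

This is why learning the Mahalanobis metric reduces to learning the mapping f_θ under a fixed Euclidean metric.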
Then the Mahalanobis metric D on the input feature space is equivalent to the Euclidean metric D′ on the hidden feature space, such that learning an undetermined metric D (e.g., Mahalanobis) on the input features is equivalent to learning hidden features under a fixed metric D′ (e.g., Euclidean) (Globerson & Roweis, 2005; Salakhutdinov & Hinton, 2007; Wang & Sun, 2015; Snell et al., 2017). Also, f_θ can be a non-linear transformation, involving more parameters to model higher-order correlations between input data dimensions than linear transformations (Salakhutdinov & Hinton, 2007; Wang & Sun, 2015; Snell et al., 2017). Based on the above analysis, we are ready to model our graph metric learning problem: learning a ``good'' distance metric over pairs of graphs amounts to learning a ``good'' mapping function f_θ of graphs in Euclidean space. The ``goodness'' is controlled by θ, and we discuss how we define it in Section 3. 3 STREAMING-SNAPSHOT MODEL AND PROBLEM SETUP. The table of symbols is summarized in the Appendix. We use bold lowercase letters to denote column vectors (e.g., a), bold capital letters to denote matrices (e.g., A), and A(i, :) to denote the i-th row of matrix A. Also, we let a parenthesized superscript denote the timestamp, as in A^(t). We use graph and network interchangeably in this paper. Streaming-Snapshot Model. In the streaming-snapshot model, there exist two kinds of timestamps: t_e ∈ {0, 1, ..., T_e} denotes the edge timestamp and t_s ∈ {0, 1, ..., T_s} denotes the snapshot timestamp. To be specific, we describe a temporal graph G as a sequence of timestamped snapshots {S^(t_s)}_{t_s=0}^{T_s}, and each timestamped snapshot has a set of timestamped edges labeled as (v_i, v_j, t_e, t_s). Note that these two timestamps are on different scales and need not be directly comparable. In Figure 2, we provide a temporal graph example with T_e = 4 and T_s = 2.
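The streaming-snapshot model described above can be sketched as a minimal, hypothetical Python container; the class name and the toy edge list (which mirrors the T_e = 4, T_s = 2 scale of the Figure 2 example) are illustrative, not from the paper:

```python
from collections import defaultdict

class TemporalGraph:
    """A temporal graph as a sequence of snapshots S^(ts), each
    holding a stream of timestamped edges (v_i, v_j, t_e)."""

    def __init__(self):
        self.snapshots = defaultdict(list)  # ts -> [(vi, vj, te), ...]

    def add_edge(self, vi, vj, te, ts):
        self.snapshots[ts].append((vi, vj, te))

    def nodes(self, ts):
        """Node set V^(ts); its size may differ across snapshots."""
        return {v for e in self.snapshots[ts] for v in e[:2]}

g = TemporalGraph()
for edge in [(0, 1, 0, 0), (1, 2, 1, 0), (0, 2, 2, 1), (2, 3, 3, 1), (3, 4, 4, 2)]:
    g.add_edge(*edge)
print(len(g.snapshots), sorted(g.nodes(1)))  # 3 snapshots; V^(1) = {0, 2, 3}
```

Storing only per-snapshot embeddings of such a container is what gives the memory saving discussed in merit 2).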
The merits of describing a temporal graph with the streaming-snapshot model include: 1) Carrying multi-scale complex temporal information. Some social networks change rapidly in the microscopic view (Leskovec et al., 2008), while some graphs, like the yeast metabolic graph (Tu et al., 2005) and repeating frames in video analysis (Li et al., 2020), change slowly in the macroscopic view (Leskovec et al., 2005). If the input temporal graph has both evolution patterns (i.e., edge timestamps and snapshot timestamps), our streaming-snapshot model can handle them simultaneously, because the streaming model describes the interaction graph in a rapid and continuous manner while snapshots complement it by modeling episodic, slowly-changing, and periodical patterns (Aggarwal & Subbian, 2014). If not, our streaming-snapshot model remains viable by downgrading into a single streaming or a single snapshot model. 2) Saving computation memory. When we need to generate a graph-level embedding for a temporal graph with a long lifetime, we only need to load each snapshot embedding vector instead of loading every node embedding that appears in the whole temporal graph. (How to generate a snapshot embedding from its relevant node embeddings is discussed in Section 4.1.1, i.e., the Multi-Scale Time Attention Mechanism.) Beyond recent temporal graph representation learning methods (Pareja et al., 2020; Xu et al., 2020; Beladev et al., 2020) that focus on only one time scale and ignore the whole-lifetime evolution representation, our method can learn the lifelong evolution pattern of a temporal graph on different time scales. As for the data structure, we store each edge as (v_i, v_j, t_e) and each snapshot adjacency matrix as A^(t_s) ∈ R^{|V^(t_s)|×|V^(t_s)|}, i.e., V^(t_s) ⊆ V and |V^(t_s)| ≠ |V^(t_s+1)| is allowed.
Although our method is readily designed for input features that evolve with different timestamps, for notational clarity we denote the node feature matrix X ∈ R^{n×m}, such that the input node features of a temporal graph G are already time-aware, where n = |V| and m denotes the feature dimension. Problem Setup. With the streaming-snapshot modelled temporal graphs, our goal is to learn a parameterized metric that can accurately classify seen temporal graphs and also be smoothly adapted to unseen temporal graphs. Based on the above analysis, this problem can be solved by learning a ``good'' graph representation function f_θ under the Euclidean metric. To further achieve this ``goodness'' with only a small amount of labelled data, we formalize f_θ within a bi-level meta-learning paradigm (Finn et al., 2017). Given the streaming-snapshot modelled temporal graphs and corresponding labels G̃ = {(G_0, y_0), (G_1, y_1), ..., (G_n, y_n)}, we split G̃ into G̃^train for meta-training and G̃^test for meta-testing, where the testing set contains only graph labels unseen in the training set. We shuffle the training set G̃^train to sample graph metric learning tasks following a distribution T_i ∼ P(T), where each graph metric learning task T_i is realized by a K-way N-shot temporal graph classification task based on the graph representation f_{θ_i}(G_n). During each task T_i, we sample a support set G̃^train_support and a query set G̃^train_query, such that the support set is used to train the graph representation function f_{θ_i} to accurately predict the graph labels of the query set. At the meta-testing stage, we transfer the learned knowledge from each task (i.e., θ_i) to the meta-learner (i.e., Θ); then we update Θ a few times by classifying unseen temporal graphs on the support set G̃^test_support; finally, we report the classification accuracy of the fine-tuned Θ on the query set G̃^test_query.
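The episodic sampling described above (K-way N-shot tasks T_i with support and query sets) can be sketched as follows; the function name and the toy label map are illustrative, not from the paper:

```python
import random

def sample_episode(graphs_by_label, k=2, n=1, q=2, seed=0):
    """Sample one K-way N-shot metric-learning task T_i:
    pick K classes, then N support and q query graphs per class.
    `graphs_by_label` maps label -> list of temporal-graph ids
    (a hypothetical stand-in for the actual graph objects)."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(graphs_by_label), k)
    support, query = [], []
    for y in classes:
        picked = rng.sample(graphs_by_label[y], n + q)
        support += [(g, y) for g in picked[:n]]
        query += [(g, y) for g in picked[n:]]
    return support, query

data = {0: list(range(10)), 1: list(range(10, 20)), 2: list(range(20, 30))}
support, query = sample_episode(data, k=2, n=1, q=2)
print(len(support), len(query))  # 2 support and 4 query examples
```

Meta-training repeats this sampling over G̃^train; at meta-testing the same routine is run on the held-out labels to fine-tune and evaluate the meta-learner.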
The concrete objective and loss function of each graph metric learning task T_i, i.e., the ``goodness'', is mathematically expressed in Section 4. | This paper introduces a graph representation learning methodology for dynamic graphs, where the dynamics are encoded in the representation to obtain improved results on graph classification tasks. This framework includes a temporal graph encoder that uses attention mechanisms to generate representations, as well as a meta-learning component that ensures easy knowledge transfer. Experiments are carried out on two temporal graph datasets to show strong performance in graph classification. | SP:1d1cc09adf7dcae73ef34aab3b9a66e3823a05ed
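A common realization of classification in a learned Euclidean metric space is the prototypical approach of Snell et al. (2017) cited above: each class prototype is the mean of its support embeddings f_θ(G), and a query is assigned to the nearest prototype. A hypothetical numpy sketch (the embeddings are random stand-ins for the encoder's output):

```python
import numpy as np

def prototypical_predict(support_emb, support_y, query_emb):
    """Assign each query embedding to the class whose prototype
    (mean of that class's support embeddings) is nearest in
    Euclidean distance."""
    classes = sorted(set(support_y))
    labels = np.array(support_y)
    protos = np.stack([support_emb[labels == y].mean(0) for y in classes])
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return [classes[i] for i in d.argmin(axis=1)]

rng = np.random.default_rng(0)
support_emb = np.vstack([rng.normal(0, 0.1, (3, 8)), rng.normal(5, 0.1, (3, 8))])
support_y = [0, 0, 0, 1, 1, 1]
query_emb = np.vstack([rng.normal(0, 0.1, (2, 8)), rng.normal(5, 0.1, (2, 8))])
print(prototypical_predict(support_emb, support_y, query_emb))  # [0, 0, 1, 1]
```

Because prototypes are recomputed from whatever support set is given, this head needs no retraining for new label sets, which is what makes it a natural fit for the few-shot setting.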
Metric Learning on Temporal Graphs via Few-Shot Examples | 1 INTRODUCTION. Metric learning aims to learn a proper distance metric among data items in the input space that reflects their underlying relationship. With the prevalence of graph data in many real-world applications, it is of key importance to design a good distance metric function for graph data, such that the output value of the function is small for similar graphs and large for dissimilar ones. Many downstream tasks on graph data can benefit from such a distance metric. For example, it could lead to significantly improved accuracy for graph classification in many domains such as protein and drug discovery (Schölkopf et al., 2004; Dai et al., 2016), molecular property prediction (Duvenaud et al., 2015; Gilmer et al., 2017), and epidemic infection pattern analysis (Derr et al., 2020; Oettershagen et al., 2020); it could also speed up the labeling of graph data in an active learning framework (Macskassy, 2009). However, current graph metric learning methods (Shaw et al., 2011; Tsitsulin et al., 2018; Bai et al., 2019; Li et al., 2019; Yoshida et al., 2019) assume the input graphs are static and ignore the evolution patterns of temporal graphs, which may also provide insights for identifying graph properties (Isella et al., 2011). To the best of our knowledge, there is currently no algorithm designed for learning metrics over temporal graphs that incorporates evolution patterns into the learned metric space. On the other hand, facing limited i.i.d. data, traditional metric learning methods (Goldberger et al., 2004; Salakhutdinov & Hinton, 2007) have been extended to few-shot learning by transferring the learned metric across different tasks (Vinyals et al., 2016; Snell et al., 2017; Oreshkin et al., 2018; Allen et al., 2019).
The label scarcity problem also occurs in the graph research community, because labeling graph data is typically expensive and requires background knowledge (Hu et al., 2020a;b; Qiu et al., 2020), especially for domain-specific applications such as biological graph data labeling (Zitnik et al., 2018). Inspired by this, graph metric learning via few-shot examples has recently attracted much research attention. However, the majority of work has been devoted to node-level metric learning (Yao et al., 2020; Suo et al., 2020; Huang & Zitnik, 2020; Lan et al., 2020; Wang et al., 2020; Ding et al., 2020); only a few nascent efforts focus on graph-level metrics (Ma et al., 2020; Chauhan et al., 2020), and all of them ignore graph dynamics and take static graphs as input. To wrap up, the observations discussed above expose three bottlenecks for temporal graph metric learning algorithms: 1) how to learn a good metric over temporal graphs, especially at the entire-graph level (i.e., accuracy of metrics); 2) how to ensure the learning process consumes only a small amount of labelled temporal graph data; and 3) how to smoothly apply the learned metric to identify unseen graphs (i.e., flexibility of metrics). In this paper, we wish to learn a distance metric from only a few temporal graphs, which (as shown in Figure 1) could not only help accurately classify seen temporal graphs during each metric learning task, but also adapt smoothly to new metric learning tasks and converge fast (i.e., within several training iterations) to classify unseen temporal graphs by consuming a few labeled examples. Our main contributions can be summarized as: • To describe evolving graphs in a fine-grained manner, we propose the streaming-snapshot model, which captures multiple time scales suitable for complex real-world scenarios; its other merits are discussed in Section 3.
• To learn the metric over a set of streaming-snapshot modelled temporal graphs, we propose the prototypical temporal graph encoder, which extracts the lifelong evolution representation of a temporal graph with the proposed multi-scale time attention mechanism, such that temporal graphs from the same class share similar encoded patterns. To make the extracted metric rapidly adapt to unseen temporal graphs with only a few examples, we introduce a meta-learner to transfer and tailor knowledge, and encapsulate it with the prototypical temporal graph encoder into an end-to-end model called METATAG. • We conduct temporal graph classification experiments in the biological network domain and the social network domain, which show the effectiveness of METATAG compared with state-of-the-art algorithms. We also analyze the convergence speed of METATAG during meta-testing, the parameter sensitivity, and an ablation study of each part of METATAG. 2 PRELIMINARIES. Graph Metric Learning. Learning a distance metric is closely related to the feature extraction problem (Globerson & Roweis, 2005; Salakhutdinov & Hinton, 2007). To be specific, given any distance metric D, we can measure the distance D(x_i, x_j) between two input feature vectors x_i ∈ R^m and x_j ∈ R^m by computing D′(f_θ(x_i), f_θ(x_j)), where f_θ is a learnable function mapping the input feature x_i ∈ R^m into the latent feature h_i = f_θ(x_i) ∈ R^f (Salakhutdinov & Hinton, 2007). The transformation f_θ can be linear or non-linear (Wang & Sun, 2015). When f_θ is a linear function f_θ(x_i) = Wx_i, learning a generalized Mahalanobis metric D can be expressed as follows:

D(x_i, x_j) = √((x_i − x_j)^⊤ M (x_i − x_j)) = √((x_i − x_j)^⊤ W^⊤W (x_i − x_j)) = √((Wx_i − Wx_j)^⊤ (Wx_i − Wx_j)) = D′(f_θ(x_i), f_θ(x_j)),   (1)

where M is an arbitrary positive semi-definite matrix to be determined for the Mahalanobis metric D, which can be decomposed as M = W^⊤W.
Then the Mahalanobis metric D on the input feature space is equivalent to the Euclidean metric D′ on the hidden feature space, such that learning an undetermined metric D (e.g., Mahalanobis) on the input features is equivalent to learning hidden features under a fixed metric D′ (e.g., Euclidean) (Globerson & Roweis, 2005; Salakhutdinov & Hinton, 2007; Wang & Sun, 2015; Snell et al., 2017). Also, f_θ can be a non-linear transformation, involving more parameters to model higher-order correlations between input data dimensions than linear transformations (Salakhutdinov & Hinton, 2007; Wang & Sun, 2015; Snell et al., 2017). Based on the above analysis, we are ready to model our graph metric learning problem: learning a ``good'' distance metric over pairs of graphs amounts to learning a ``good'' mapping function f_θ of graphs in Euclidean space. The ``goodness'' is controlled by θ, and we discuss how we define it in Section 3. 3 STREAMING-SNAPSHOT MODEL AND PROBLEM SETUP. The table of symbols is summarized in the Appendix. We use bold lowercase letters to denote column vectors (e.g., a), bold capital letters to denote matrices (e.g., A), and A(i, :) to denote the i-th row of matrix A. Also, we let a parenthesized superscript denote the timestamp, as in A^(t). We use graph and network interchangeably in this paper. Streaming-Snapshot Model. In the streaming-snapshot model, there exist two kinds of timestamps: t_e ∈ {0, 1, ..., T_e} denotes the edge timestamp and t_s ∈ {0, 1, ..., T_s} denotes the snapshot timestamp. To be specific, we describe a temporal graph G as a sequence of timestamped snapshots {S^(t_s)}_{t_s=0}^{T_s}, and each timestamped snapshot has a set of timestamped edges labeled as (v_i, v_j, t_e, t_s). Note that these two timestamps are on different scales and need not be directly comparable. In Figure 2, we provide a temporal graph example with T_e = 4 and T_s = 2.
The merits of describing a temporal graph with the streaming-snapshot model include: 1) Carrying multi-scale complex temporal information. Some social networks change rapidly in the microscopic view (Leskovec et al., 2008), while some graphs, like the yeast metabolic graph (Tu et al., 2005) and repeating frames in video analysis (Li et al., 2020), change slowly in the macroscopic view (Leskovec et al., 2005). If the input temporal graph has both evolution patterns (i.e., edge timestamps and snapshot timestamps), our streaming-snapshot model can handle them simultaneously, because the streaming model describes the interaction graph in a rapid and continuous manner while snapshots complement it by modeling episodic, slowly-changing, and periodical patterns (Aggarwal & Subbian, 2014). If not, our streaming-snapshot model remains viable by downgrading into a single streaming or a single snapshot model. 2) Saving computation memory. When we need to generate a graph-level embedding for a temporal graph with a long lifetime, we only need to load each snapshot embedding vector instead of loading every node embedding that appears in the whole temporal graph. (How to generate a snapshot embedding from its relevant node embeddings is discussed in Section 4.1.1, i.e., the Multi-Scale Time Attention Mechanism.) Beyond recent temporal graph representation learning methods (Pareja et al., 2020; Xu et al., 2020; Beladev et al., 2020) that focus on only one time scale and ignore the whole-lifetime evolution representation, our method can learn the lifelong evolution pattern of a temporal graph on different time scales. As for the data structure, we store each edge as (v_i, v_j, t_e) and each snapshot adjacency matrix as A^(t_s) ∈ R^{|V^(t_s)|×|V^(t_s)|}, i.e., V^(t_s) ⊆ V and |V^(t_s)| ≠ |V^(t_s+1)| is allowed.
Although our method is readily designed for input features that evolve with different timestamps, for notational clarity we denote the node feature matrix X ∈ R^{n×m}, such that the input node features of a temporal graph G are already time-aware, where n = |V| and m denotes the feature dimension. Problem Setup. With the streaming-snapshot modelled temporal graphs, our goal is to learn a parameterized metric that can accurately classify seen temporal graphs and also be smoothly adapted to unseen temporal graphs. Based on the above analysis, this problem can be solved by learning a ``good'' graph representation function f_θ under the Euclidean metric. To further achieve this ``goodness'' with only a small amount of labelled data, we formalize f_θ within a bi-level meta-learning paradigm (Finn et al., 2017). Given the streaming-snapshot modelled temporal graphs and corresponding labels G̃ = {(G_0, y_0), (G_1, y_1), ..., (G_n, y_n)}, we split G̃ into G̃^train for meta-training and G̃^test for meta-testing, where the testing set contains only graph labels unseen in the training set. We shuffle the training set G̃^train to sample graph metric learning tasks following a distribution T_i ∼ P(T), where each graph metric learning task T_i is realized by a K-way N-shot temporal graph classification task based on the graph representation f_{θ_i}(G_n). During each task T_i, we sample a support set G̃^train_support and a query set G̃^train_query, such that the support set is used to train the graph representation function f_{θ_i} to accurately predict the graph labels of the query set. At the meta-testing stage, we transfer the learned knowledge from each task (i.e., θ_i) to the meta-learner (i.e., Θ); then we update Θ a few times by classifying unseen temporal graphs on the support set G̃^test_support; finally, we report the classification accuracy of the fine-tuned Θ on the query set G̃^test_query.
The concrete objective and loss function of each graph metric learning task Ti , i.e. , the `` goodness '' , is mathematically expressed in Section 4 . | This authors present a novel method for learning representations for time-varying graphs which allows for incorporating information at different time-scales using their streaming-snapshot model. The streaming-snapshot model has the following parts: * Each snapshot $S^{(t_{s})} = (V^{(t_{s})}, E^{(t_{s})})$ has edges of the form $(v_i, v_j, t_e) \in E^{(t_{s})}$ where $t_e$ denotes the time at which edge was formed (and is present since then). * The snapshots $S^{(t_{s})}$ are at a different time-scale ($t_e$ and $t_s$ are not comparable) with the overall learning representation being ${\cal S} \to \Re^f$. * Learning this representation is used for downstream few-shot classification task (for dynamic graphs) and is evaluated on two scenarios - time-varying biological (protein-protein interaction) networks and time-varying social networks. The MetaTag architecture has the following components: * _Time-aware node representation_ The edge creation time $t_e$ is used to learn a time-aware node representation ${\bf u}^t_e$ using attention-based weighting of neighbouring nodes features concatenated with a learnable time kernel (Algorithm 1). - The snapshot feature matrix $U^{(t_{s})}$ takes the node representation by consider the latest edge for node $u$ and using attention mechanism above to get influence of earlier edges. * _Intra-snapshot representation_ This is constructed using standard representation loss using a GCN-based encoder-decoder architecture followed by permutation-invariant readout to obtain vector representation for snapshot. * _Overall representation_ The overall representation for the time-varying graph is weighted average using attention pooling (learnt parameter) of different snapshot representations. 
This representation is used downstream for the classification task (classification head) based on the prototypical approach [Snell+, 2017], resulting in an overall end-to-end differentiable model with a weighted average of reconstruction loss and classification loss. Further, the model allows adaptation to new tasks (with different classification labels) by fine-tuning on a small test set (few-shot learning). Experiments are shown on biological and social network datasets (in the appendix), showing the efficacy of the approach compared to static graph representation methods as well as tdGraphEmbed (a doc2vec-style method for embedding temporal graphs), including augmentation with ProtoNet for the few-shot learning comparison. | SP:1d1cc09adf7dcae73ef34aab3b9a66e3823a05ed
Metric Learning on Temporal Graphs via Few-Shot Examples | 1 INTRODUCTION . Metric learning aims to learn a proper distance metric among data items in the input space , which reflects their underlying relationship . With the prevalence of graph data in many real-world applications , it is of key importance to design a good distance metric function for graph data , such that the output value of the function is small for similar graphs and large for dissimilar ones . Many downstream tasks on graph data can benefit from such a distance metric . For example , it could lead to significantly improved classification accuracy for graph classification in many domains such as protein and drug discovery ( Schölkopf et al. , 2004 ; Dai et al. , 2016 ) , molecular property prediction ( Duvenaud et al. , 2015 ; Gilmer et al. , 2017 ) , and epidemic infectious pattern analysis ( Derr et al. , 2020 ; Oettershagen et al. , 2020 ) ; it could also speed up the labeling of graph data in an active learning framework ( Macskassy , 2009 ) . However , current graph metric learning methods ( Shaw et al. , 2011 ; Tsitsulin et al. , 2018 ; Bai et al. , 2019 ; Li et al. , 2019 ; Yoshida et al. , 2019 ) assume that the input graphs are static and ignore the evolution patterns of temporal graphs , which may also provide insights for identifying graph properties ( Isella et al. , 2011 ) . To the best of our knowledge , there is currently no algorithm designed for learning metrics over temporal graphs that incorporates such evolution patterns into the learned metric space . On the other hand , facing limited i.i.d. data , traditional metric learning methods ( Goldberger et al. , 2004 ; Salakhutdinov & Hinton , 2007 ) have been extended to few-shot learning by transferring the learned metric across different tasks ( Vinyals et al. , 2016 ; Snell et al. , 2017 ; Oreshkin et al. , 2018 ; Allen et al. , 2019 ) .
The label scarcity problem also occurs in the graph research community , because labeling graph data is typically expensive and requires background knowledge ( Hu et al. , 2020a ; b ; Qiu et al. , 2020 ) , especially for domain-specific applications such as biological graph data labeling ( Zitnik et al. , 2018 ) . Inspired by that , graph metric learning via few-shot examples has recently attracted much research attention . However , the majority of efforts have been devoted to node-level metric learning ( Yao et al. , 2020 ; Suo et al. , 2020 ; Huang & Zitnik , 2020 ; Lan et al. , 2020 ; Wang et al. , 2020 ; Ding et al. , 2020 ) , only a few nascent efforts focus on graph-level metrics ( Ma et al. , 2020 ; Chauhan et al. , 2020 ) , and all of them ignore graph dynamics and take static graphs as input . To sum up , the observations discussed above expose three bottlenecks for temporal graph metric learning algorithms : 1 ) How to learn a good metric over temporal graphs , especially at the entire graph level ( i.e. , accuracy of metrics ) ; 2 ) How to ensure the learning process consumes only a small amount of labelled temporal graph data ; and 3 ) How to smoothly apply the learned metric to identify unseen graphs ( i.e. , flexibility of metrics ) . In this paper , we wish to learn a distance metric over only a few temporal graphs ; this metric ( as shown in Figure 1 ) could not only help accurately classify seen temporal graphs during each metric learning task , but also adapt smoothly to new metric learning tasks and converge fast ( i.e. , within several training iterations ) to classify unseen temporal graphs by consuming only a few labeled examples . Our main contributions can be summarized as : • To describe the evolving graph in a fine-grained manner , we propose the streaming-snapshot model that contains multiple time scales suitable for complex real-world scenarios ; its other merits are discussed in Section 3 .
• To learn the metric over a bunch of streaming-snapshot modelled temporal graphs , we propose the prototypical temporal graph encoder to extract the lifelong evolution representation of a temporal graph with the proposed multi-scale time attention mechanism , such that temporal graphs from the same class share similar encoded patterns ; To make the extracted metric rapidly adapt to unseen temporal graphs with only a few examples , we introduce a meta-learner to transfer and tailor knowledge and encapsulate it with the prototypical temporal graph encoder into an end-to-end model , called METATAG . • We conduct temporal graph classification experiments on the biological network domain and the social network domain , which show the effectiveness of METATAG compared with state-of-the-art algorithms . Also , we analyze the convergence speed of METATAG during meta-testing , the parameter sensitivity , and the ablation study of each part of METATAG . 2 PRELIMINARIES . Graph Metric Learning . Learning a distance metric is closely related to the feature extraction problem ( Globerson & Roweis , 2005 ; Salakhutdinov & Hinton , 2007 ) . To be specific , given any distance metric D , we can measure the distance D ( xi , xj ) between two input feature vectors xi ∈ Rm and xj ∈ Rm by computing D′ ( fθ ( xi ) , fθ ( xj ) ) , where fθ is a learnable function mapping the input feature xi ∈ Rm into the latent feature hi = fθ ( xi ) ∈ Rf ( Salakhutdinov & Hinton , 2007 ) . The transformation function fθ could be linear or non-linear ( Wang & Sun , 2015 ) . When fθ is a linear function fθ ( xi ) = Wxi , learning a generalized Mahalanobis metric D can be expressed as follows :
$$D(x_i, x_j) = \sqrt{(x_i - x_j)^\top M (x_i - x_j)} = \sqrt{(x_i - x_j)^\top W^\top W (x_i - x_j)} = \sqrt{(Wx_i - Wx_j)^\top (Wx_i - Wx_j)} = D'(f_\theta(x_i), f_\theta(x_j)) \quad (1)$$
where M is an arbitrary positive semi-definite matrix to be determined for the Mahalanobis metric D , and M can be decomposed as $M = W^\top W$ .
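The equivalence in Eq. 1 is easy to verify numerically. The following sketch (ours, using numpy; not from the paper's code) checks that the Mahalanobis distance with $M = W^\top W$ matches the Euclidean distance between the linearly mapped features:

```python
import numpy as np

# Check: Mahalanobis metric with M = W^T W equals the Euclidean metric
# on the linearly transformed features f_theta(x) = W x.
rng = np.random.default_rng(0)
m, f = 5, 3
W = rng.standard_normal((f, m))              # linear map f_theta(x) = W x
M = W.T @ W                                  # induced PSD matrix

xi, xj = rng.standard_normal(m), rng.standard_normal(m)
d = xi - xj
mahalanobis = np.sqrt(d @ M @ d)             # D(x_i, x_j) on the input space
euclidean = np.linalg.norm(W @ xi - W @ xj)  # D'(f_theta(x_i), f_theta(x_j))

assert np.isclose(mahalanobis, euclidean)
```

The same check works for any W, since $(x_i - x_j)^\top W^\top W (x_i - x_j) = \|Wx_i - Wx_j\|^2$ holds identically.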
Then the Mahalanobis metric D on the input feature space is equivalent to the Euclidean metric D′ on the hidden feature space , such that learning an undetermined metric D ( e.g. , Mahalanobis ) on the input features is equivalent to learning hidden features under a fixed metric D′ ( e.g. , Euclidean ) ( Globerson & Roweis , 2005 ; Salakhutdinov & Hinton , 2007 ; Wang & Sun , 2015 ; Snell et al. , 2017 ) . Also , fθ can be a non-linear transformation , involving more parameters to model higher-order correlations between input data dimensions than linear transformations ( Salakhutdinov & Hinton , 2007 ; Wang & Sun , 2015 ; Snell et al. , 2017 ) . Based on the above analysis , we are ready to model our graph metric learning problem : learning a `` good '' distance metric over pairs of graphs is to learn a `` good '' mapping function fθ of graphs in Euclidean space . The `` goodness '' is controlled by θ and we discuss how we define it in Section 3 . 3 STREAMING-SNAPSHOT MODEL AND PROBLEM SETUP . The table of symbols is summarized in the Appendix . To specify , we use bold lowercase letters to denote column vectors ( e.g. , a ) , bold capital letters to denote matrices ( e.g. , A ) , and A ( i , : ) to denote the i-th row of matrix A . Also , we let the parenthesized superscript denote the timestamp , like A ( t ) . We use graph and network interchangeably in this paper . Streaming-Snapshot Model . In the streaming-snapshot model , there exist two kinds of timestamps : $t_e \in \{0, 1, \ldots, T_e\}$ denotes the edge timestamp and $t_s \in \{0, 1, \ldots, T_s\}$ denotes the snapshot timestamp . To be specific , we describe a temporal graph G as a sequence of timestamped snapshots $\{S^{(t_s)}\}_{t_s=0}^{T_s}$ , and each timestamped snapshot has a set of timestamped edges labeled as ( vi , vj , te , ts ) . Note that these two timestamps are different measures ; they need not be directly comparable . In Figure 2 , we provide a temporal graph example whose Te = 4 and Ts = 2 .
The merits of describing the temporal graph within the streaming-snapshot model include : 1 ) Carrying multi-scale complex temporal information . Some social networks change rapidly in the microscopic view ( Leskovec et al. , 2008 ) , while some graphs like the yeast metabolic graph ( Tu et al. , 2005 ) and repeating frames in video analysis ( Li et al. , 2020 ) change slowly in the macroscopic view ( Leskovec et al. , 2005 ) . If the input temporal graph has both evolution patterns ( i.e. , edge timestamps and snapshot timestamps ) , our streaming-snapshot model can handle them simultaneously , because the streaming model describes the interaction graph in a rapid and continuous manner while the snapshots complement it by modeling episodic , slowly-changing , and periodical patterns ( Aggarwal & Subbian , 2014 ) . If not , our streaming-snapshot model remains viable by downgrading into a single streaming or a single snapshot model . 2 ) Saving computation memory . When we need to generate the graph-level embedding for a long-lifetime temporal graph , we only need to load each snapshot embedding vector instead of loading every node embedding that appears in the whole temporal graph . ( The detail of how to generate a snapshot embedding through its relevant node embeddings is discussed in Section 4.1.1 , i.e. , the Multi-Scale Time Attention Mechanism . ) Beyond recent temporal graph representation learning methods ( Pareja et al. , 2020 ; Xu et al. , 2020 ; Beladev et al. , 2020 ) that only focus on one time scale and ignore the whole lifetime evolution representation , our method can learn the lifelong evolution pattern of a temporal graph on different time scales . As for the data structure , we store each edge as ( vi , vj , te ) and each snapshot adjacency matrix as $A^{(t_s)} \in \mathbb{R}^{|V^{(t_s)}| \times |V^{(t_s)}|}$ , i.e. , $V^{(t_s)} \subseteq V$ and $|V^{(t_s)}| \neq |V^{(t_s+1)}|$ is allowable .
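As a concrete illustration of this data structure, the sketch below (class and field names are our own invention, not the paper's) stores a toy temporal graph as an ordered list of snapshots, each holding timestamped edges (v_i, v_j, t_e):

```python
from dataclasses import dataclass, field

# Hypothetical minimal container for the streaming-snapshot model described
# above; names are ours, not the paper's implementation.
@dataclass
class Snapshot:
    ts: int                                    # snapshot timestamp t_s
    edges: list = field(default_factory=list)  # timestamped edges (v_i, v_j, t_e)

    def nodes(self):
        """Node set V^(t_s) induced by this snapshot's edges."""
        return {v for vi, vj, _ in self.edges for v in (vi, vj)}

@dataclass
class TemporalGraph:
    snapshots: list = field(default_factory=list)  # ordered by t_s

# A toy graph in this format (edge endpoints and timestamps are made up):
g = TemporalGraph([
    Snapshot(ts=0, edges=[(0, 1, 0), (1, 2, 1)]),
    Snapshot(ts=1, edges=[(0, 2, 2), (2, 3, 3)]),
])
# Snapshots may have different node sets: |V^(t_s)| != |V^(t_s+1)| is allowed.
assert g.snapshots[0].nodes() != g.snapshots[1].nodes()
```

Note that each snapshot's adjacency matrix only needs to cover its own node set $V^{(t_s)}$, matching the memory-saving argument above.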
Although our method is readily designed for input features that evolve across timestamps , for notational clarity we denote the node feature matrix $X \in \mathbb{R}^{n \times m}$ , such that the input node features of the temporal graph G are already time-aware , where n = |V| and m denotes the dimension of the features . Problem Setup . With the streaming-snapshot modelled temporal graphs , our goal is to learn a parameterized metric that can accurately classify seen temporal graphs and also be smoothly adapted to unseen temporal graphs . Based on the above analysis , this problem can be solved by learning a `` good '' graph representation learning function fθ under the Euclidean metric . To further achieve this `` goodness '' with only a small amount of labelled data , we formalize fθ in a bi-level meta-learning paradigm ( Finn et al. , 2017 ) . Given the streaming-snapshot modelled temporal graphs and corresponding labels G̃ = { ( G0 , y0 ) , ( G1 , y1 ) , . . . , ( Gn , yn ) } , we split G̃ into G̃train for meta-training and G̃test for meta-testing , where the testing set contains only graph labels unseen in the training set . We shuffle the training set G̃train to sample graph metric learning tasks following a distribution Ti ∼ P ( T ) , where each graph metric learning task Ti is realized as a K-way N -shot temporal graph classification task based on the graph representation fθi ( Gn ) . During each task Ti , we sample a support set $\tilde{G}^{train}_{support}$ and a query set $\tilde{G}^{train}_{query}$ , such that the support set is used to train the graph representation function fθi to accurately predict the graph labels of the query set . At the meta-testing stage , we transfer the learned knowledge from each task ( i.e. , θi ) to the meta-learner ( i.e. , Θ ) ; then we update Θ a few times by classifying unseen temporal graphs on the support set $\tilde{G}^{test}_{support}$ ; finally we report the classification accuracy of the fine-tuned Θ on the query set $\tilde{G}^{test}_{query}$ .
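The episodic sampling described above can be sketched as follows. This is a generic K-way N-shot sampler (the function name and the query-set size parameter Q are our own), not the paper's implementation:

```python
import random
from collections import defaultdict

def sample_episode(labelled_graphs, K, N, Q, rng):
    """Sample one K-way N-shot task: a support set with N graphs per class
    and a query set with Q graphs per class, over K randomly drawn classes."""
    by_label = defaultdict(list)
    for graph, label in labelled_graphs:
        by_label[label].append(graph)
    classes = rng.sample(sorted(by_label), K)      # pick K classes
    support, query = [], []
    for c in classes:
        picks = rng.sample(by_label[c], N + Q)     # N + Q distinct graphs
        support += [(g, c) for g in picks[:N]]
        query += [(g, c) for g in picks[N:]]
    return support, query

# Toy pool: 4 classes with 10 placeholder "graphs" each.
pool = [(f"G{c}_{i}", c) for c in range(4) for i in range(10)]
support, query = sample_episode(pool, K=3, N=2, Q=5, rng=random.Random(0))
assert len(support) == 3 * 2 and len(query) == 3 * 5
```

In the actual pipeline, the placeholder graph ids would be streaming-snapshot temporal graphs fed through the representation function fθi.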
The concrete objective and loss function of each graph metric learning task Ti , i.e. , the `` goodness '' , is mathematically expressed in Section 4 . | This paper considers the graph metric learning problem where the underlying graphs are temporal. The key idea for obtaining a higher classification accuracy is to use a bi-level meta-learning paradigm. It essentially contains two parts: 1) a prototypical temporal graph encoder, where the model uses multi-scale time attention to capture temporal information; and 2) a meta-learner, where it uses the bi-level paradigm proposed in Finn et al., 2017. The authors apply the proposed method to the task of graph classification on two real-world datasets. Compared with other baseline methods, MetaTag achieves better performance in terms of classification accuracy. | SP:1d1cc09adf7dcae73ef34aab3b9a66e3823a05ed
D$^2$-GCN: Data-Dependent GCNs for Boosting Both Efficiency and Scalability | 1 INTRODUCTION . Graph Convolutional Networks ( GCNs ) have drawn increasing attention thanks to their performance breakthroughs in graph-based learning tasks . In particular , the success of GCNs is attributed to their excellent capability to learn from non-Euclidean graph structures with irregular neighborhood connections via two execution phases : ( 1 ) aggregation , during which the features from the neighbor nodes are aggregated , and ( 2 ) combination , in which further updates of the features of each node are made via feed-forward layers to extract more useful features . Despite their promising performance , the unique structure of GCNs imposes prohibitive challenges for applying them to more extensive real-world applications , especially those with large-scale graphs . First , GCNs ' prohibitive inference cost limits their deployment on resource-constrained devices . For example , a 2-layer GCN model requires 19 Giga ( G ) Floating Point Operations ( FLOPs ) to process the Reddit graph ( Tailor et al. , 2021 ) and a latency of 2.94×10^5 milliseconds when executed on an Intel Xeon E5-2680 CPU platform ( Geng et al. , 2020 ) , which are 2× and 5000× those of a powerful 50-layer Convolutional Neural Network ( CNN ) , ResNet50 ( Awan et al. , 2017 ) , respectively ; Second , while CNNs with more layers are known to consistently favor a higher accuracy ( Belkin et al. , 2019 ) , deeper GCNs suffer from accuracy drops compared with shallower ones ( Kipf & Welling , 2016 ) , making it difficult to unleash their full potential . Aiming at tackling both of the aforementioned challenges , we draw inspiration from recent works . First , previous CNN works ( Katharopoulos & Fleuret , 2018 ; Johnson & Guestrin , 2018 ; Coleman et al.
, 2018 ) show that not all samples are equally important during training and that training on more informative samples can improve the model accuracy , motivating us to consider allocating GCN computational budgets adapted to sample complexity . In addition , ( Zhang et al. , 2019 ) finds that not all CNN layers contribute equally to the final model accuracy , and ( Wang et al. , 2018 ) demonstrates that skipping some of the layers even helps boost the accuracy while reducing the inference cost of CNNs . Meanwhile , recent GCN works show that not all nodes contribute equally to the feature extraction ( Veličković et al. , 2018 ) and that some neighbor nodes can be randomly abandoned without affecting the task performance ( Hamilton et al. , 2017 ) . The above prior arts motivate us to consider data-dependent dynamic GCNs for ( 1 ) pushing forward their achievable accuracy-efficiency frontier and ( 2 ) improving the trainability of deeper GCNs . To this end , we adopt a new perspective compared to existing GCN compression works and explore data-dependent dynamic GCNs on top of SOTA GCNs . Specifically , we identify the potential data-dependent patterns that are unique to GCNs at different granularities , and then leverage them to largely squeeze out unnecessary costs within GCNs to boost their inference efficiency and trainability . Specifically , we make the following contributions : • We propose a Data-Dependent GCN framework dubbed D2-GCN , the first dynamic inference framework dedicated to GCNs . D2-GCN integrates data-dependent dynamic skipping at multiple granularities : node-wise , edge-wise , and bit-wise , via a low-cost indicator to notably reduce the GCN inference cost while offering a comparable or even better accuracy . • D2-GCN is found to naturally alleviate the over-smoothing issue in GCNs and thus improves the trainability of deeper GCNs , which we conjecture is because D2-GCN introduces more flexibility into the models .
Hence , D2-GCN opens up a new knob to not only boost GCNs ' inference efficiency but also provide a promising perspective towards deeper and more powerful GCNs . • Extensive experiments and ablation studies on top of various SOTA GCNs and datasets consistently validate the effectiveness and advantages of the proposed D2-GCN . In particular , D2-GCN can achieve ↓1.1×∼ ↓37.0× inference FLOPs reduction and ↓1.6×∼ ↓8.4× lower energy cost , while leading to a comparable or even better accuracy ( ↓0.5 % ∼ ↑5.6 % ) . 2 RELATED WORKS . Graph Convolutional Networks . GCNs are one of the most widely adopted algorithms for non-Euclidean and irregular graph structures ( Wu et al. , 2020 ) , used to categorize the nodes in the same graph ( Sen et al. , 2008 ) or predict the class of graphs ( Hu et al. , 2020 ) . They can mainly be divided into two categories : spectral-based ( Kipf & Welling , 2016 ) and spatial-based ( Gao et al. , 2019 ) . For the spectral-based GCNs , graph convolution , which is based on spectral graph theory ( Chung & Graham , 1997 ) , was first proposed by ( Bruna et al. , 2013 ) and improved in ( Kipf & Welling , 2016 ; Defferrard et al. , 2016 ; Li et al. , 2018b ) for wider applications and better accuracy . Meanwhile , the spatial-based GCNs ( Hamilton et al. , 2017 ) directly perform the convolution in the graph domain by aggregating the neighbor nodes ' features , and recent works further improve their accuracy via clustering-based graph sampling ( Chiang et al. , 2019 ) , more expressive aggregation schemes ( Zeng et al. , 2019 ) , and attention mechanisms ( Veličković et al. , 2018 ) . Orthogonal to those prior works , we explore and develop data-dependent dynamic GCNs on top of SOTA spatial-based GCNs for improving their efficiency and scalability . Efficient GCNs .
Motivated by the fact that the prohibitive computational cost and memory usage of GCNs , which grow rapidly with the graph size , limit the development of more powerful GCNs and their deployment in real-world applications , various compression techniques have been developed , which mainly fall into three categories : pruning graphs ( i.e. , simpler graphs ) , pruning weights ( i.e. , sparser weights ) , and quantization ( i.e. , lower bit-precision for hidden features and weights ) . For pruning graphs , ( Li et al. , 2020b ) introduces a sparse regularizer for pruning the graph connections ( i.e. , the graph adjacency matrix ) and leverages an alternating direction method of multipliers ( ADMM ) training method to make the regularization differentiable ; for pruning weights , ( Chen et al. , 2021 ) prunes the graph adjacency matrix and the model weights simultaneously , generalizing the lottery ticket hypothesis ( Frankle & Carbin , 2018 ) to GCNs ; for quantization , ( Tailor et al. , 2021 ) for the first time trains GCNs with 8-bit integer arithmetic in the forward pass without sacrificing the classification accuracy . Our proposed D2-GCN considers an unexplored and orthogonal perspective , exploring data-dependent dynamic GCNs at multiple granularity levels for achieving better accuracy-efficiency trade-offs and improving GCNs ' scalability . Deeper GCNs . One challenge in designing GCNs is the scalability of deeper GCNs for ( 1 ) handling real-world large graphs and ( 2 ) unleashing the potential of more sophisticated GCN architectures , motivating various techniques along this direction . A pioneering work ( Kipf & Welling , 2016 ) attempts to build deeper GCNs through a residual mechanism , and finds that such a design is only effective for GCNs with no more than 2 layers . Then , ( Li et al. , 2018a ) argues that the over-smoothing issue ( i.e.
, connected nodes in deeper layers have more similar hidden features ) prevents GCN architectures from going deeper , and thus several follow-up works strive to design deeper GCNs by alleviating this issue . For example , ( Rong et al. , 2020 ) proposes to tackle the over-smoothing issue by randomly removing a certain number of edges from the input graph at each training epoch ; ( Li et al. , 2019b ; a ) further explore a generalized aggregation function and normalization layers to boost the performance of GCNs on large-scale graph learning tasks . Our proposed D2-GCN distinguishes itself as the first to explore deeper GCNs from the data-dependent aspect by incorporating automated data-dependent gating functions to alleviate the over-smoothing issue and facilitate deeper GCNs . Dynamic inference . Dynamic inference methods have been developed in the context of CNNs for adapting the model complexity to the input data to reduce the overall average inference cost . Early works ( Teerapittayanon et al. , 2016 ; Huang et al. , 2017 ) equip DNNs with extra branch classifiers to enforce a portion of inputs to exit at earlier branches . Later works incorporate a finer-grained layer-wise skipping policy by selectively executing a subset of layers conditioned on each input . In particular , SkipNet ( Wang et al. , 2018 ) adopts reinforcement learning to learn the layer-wise skipping policy and BlockDrop ( Wu et al. , 2018 ) trains one global policy network to skip residual blocks . Follow-up works extend this idea to even finer granularity levels , e.g. , the filter level ( Lin et al. , 2017 ; Chen et al. , 2018 ) or the bit level ( Shen et al. , 2020 ; Fu et al. , 2020 ) . Different from previous works , our D2-GCN framework is the first attempt at dynamic inference in the context of GCNs . More importantly , D2-GCN leverages the unique structures of GCNs to exploit GCN-specific data-dependent strategies at three different granularities , i.e.
, node-wise , edge-wise , and bit-wise , to largely boost the accuracy-efficiency trade-off frontier of GCNs , and further leverages these strategies to facilitate the development of deeper GCNs . 3 THE PROPOSED D2-GCN FRAMEWORK In this section , we first introduce the preliminaries of GCNs and the motivating analysis that supports our D2-GCN framework , and then present the detailed design of D2-GCN and its training pipeline . 3.1 PRELIMINARIES OF GCNS . GCN general formulation . Consider a given graph G = ( V , E ) with n nodes vi ∈ V , m edges ( vi , vj ) ∈ E , and an adjacency matrix $A \in \mathbb{R}^{N \times N}$ representing the connectivity information , where the non-zero entries represent the existing connections among different nodes . Also , the node degree of each node vi ∈ V is defined as $d_i = \sum_j A_{ij}$ , and the diagonal degree matrix D is formed with $D_{ii} = d_i$ . For each layer l of a GCN , the hidden features of the nodes are represented by the feature matrix $x_l \in \mathbb{R}^{N \times H}$ , where H denotes the hidden feature dimension of each node . Thus , a GCN layer can be formulated as :
$$x_{l+1} = ACT_l ( \hat{A} x_l W_l ) , \quad ( 1 )$$
where $\hat{A}$ is the normalized version of A : $\hat{A} = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ , $ACT_l$ represents the activation function of layer l , and $W_l$ represents the weights of layer l . The inference process of one GCN layer can be viewed as two separate phases : Aggregation and Combination . $\hat{A} x_l$ represents the Aggregation phase , which aggregates the 1-hop neighbors of each node into a unified feature vector ; after that , during the Combination phase , $\hat{A} x_l$ is transformed to $\hat{A} x_l W_l$ via a feed-forward layer . Meanwhile , some works define the combination phase inside the aggregation phase , thus merging the two phases into a single message passing process with the self-loop ( Hamilton , 2020 ) . Complexity analysis of GCNs .
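For concreteness, a minimal dense numpy sketch of Eq. 1 (ours; ReLU is assumed as the activation and self-loops are omitted for brevity) is:

```python
import numpy as np

# One GCN layer, Eq. 1: x_{l+1} = ReLU(A_hat x_l W_l),
# with A_hat = D^{-1/2} A D^{-1/2}.
def gcn_layer(A, x, W):
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)       # guard isolated nodes
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # symmetric normalization
    agg = A_hat @ x               # Aggregation: mix 1-hop neighbour features
    return np.maximum(agg @ W, 0.0)  # Combination (feed-forward) + ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)  # toy 3-node star graph
x = rng.standard_normal((3, 4))         # N = 3 nodes, H = 4 features
W = rng.standard_normal((4, 4))
out = gcn_layer(A, x, W)
assert out.shape == (3, 4)
```

Real implementations store A as a sparse matrix, which is exactly why the aggregation cost scales with the edge count m rather than N².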
The computational complexity of GCN inference can be represented as :
$$O ( LmH + LnH^2 ) , \quad ( 2 )$$
where L is the total number of GCN layers , n is the total number of nodes , m is the total number of edges , and H is the hidden feature dimension of each node ( Chiang et al. , 2019 ) . 3.2 D2-GCN : MOTIVATING ANALYSIS Causes of GCNs ' prohibitive inference cost . The inefficiency of GCNs mainly comes from two aspects : First , graphs are often very large , exacerbated by their complex and irregular neighbor connections , which lead to a prohibitive number of nodes and edges , i.e. , n and m in Eq . 2 , respectively . For example , a graph dataset can come with as many as 169,343 nodes and 39,561,252 edges ( Hu et al. , 2020 ) , which can cause both large computational and data movement costs . Second , the dimension of GCNs ' node feature vectors , i.e. , H in Eq . 2 , can be very large , e.g. , each node in the Citeseer graph has a feature dimension of 3,703 , leading to a high workload during the feed-forward computation in the combination step , especially when the computations are conducted in full precision ( i.e. , 32-bit floating point ) . Accordingly , a straightforward way to reduce the aforementioned GCN inference costs is to reduce the number of ( 1 ) nodes , i.e. , n in Eq . 2 , ( 2 ) connections between nodes , i.e. , edges , m in Eq . 2 , and ( 3 ) features of each node , i.e. , H in Eq . 2 , whereas naively reducing these parameters can hurt GCNs ' model capacity and thus their achievable task accuracy . Causes of deeper GCNs ' training difficulty . Deeper GCNs are consistently observed to suffer from an accuracy drop compared to their shallower counterparts , regardless of the adopted GCN designs ( Kipf & Welling , 2016 ; Pham et al. , 2017 ; Rahimi et al. , 2018 ; Xu et al. , 2018 ) . ( Li et al. , 2018a ; Zhao & Akoglu , 2019 ) propose that the accuracy drop results from the GCN 's over-smoothing issue , i.e.
, repeatedly applying GCN layers many times will make the hidden features of different nodes converge to similar values . Based on that , some regularizations have been proposed to enable deeper GCNs to achieve a higher accuracy , e.g. , randomly dropping out certain edges of the input graph during each training iteration ( Rong et al. , 2020 ) . Not all data/ ( model components ) are equally important . Recent works on both CNNs ( Katharopoulos & Fleuret , 2018 ; Wang et al. , 2018 ) and GCNs ( Veličković et al. , 2018 ; Hamilton et al. , 2017 ) show that not all data samples , e.g. , input images or graphs , and model components , e.g. , specific Convolutional or Graph Convolutional layers , of the same model are equally important for a given task in terms of the achievable accuracy vs. efficiency trade-offs . For example , some of the data/ ( model components ) can be skipped without hurting , or even while boosting , the accuracy thanks to the increased model flexibility . These observations motivate us to consider boosting both GCNs ' efficiency and scalability by processing the graphs in a comprehensively data-dependent manner , i.e. , for different data , only a fraction of the graph 's components , e.g. , a part of the nodes , edges , or bitwidth , is involved in the computations based on the corresponding features of the given graph . The resulting benefits come from two aspects : ( 1 ) such a data-dependent design can dynamically allocate larger computational budgets to difficult data samples and smaller budgets to simpler data samples to reduce the total inference cost while maintaining the accuracy ; and ( 2 ) the increased model flexibility resulting from dynamic inference models can naturally provide a certain regularization effect , since different components of a graph may work together or independently in a data-dependent manner to alleviate GCNs ' over-smoothing issue discussed in ( Li et al. , 2020a ) .
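The over-smoothing effect mentioned above is easy to reproduce on a toy graph. The following sketch (our illustration, not an experiment from the paper) shows that repeated aggregation with the normalized adjacency matrix makes node features nearly parallel to each other:

```python
import numpy as np

# Toy demonstration of over-smoothing: repeated application of
# A_hat = D^-1/2 A D^-1/2 drives all node features toward one direction.
rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)                     # make the graph undirected
np.fill_diagonal(A, 1.0)                   # add self-loops
d_inv_sqrt = A.sum(axis=1) ** -0.5
A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def mean_pairwise_cosine(x):
    """Average cosine similarity over all pairs of node feature rows."""
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    s = xn @ xn.T
    return (s.sum() - n) / (n * (n - 1))

x = rng.standard_normal((n, 8))
before = mean_pairwise_cosine(x)           # near 0 for random features
for _ in range(20):                        # 20 rounds of pure aggregation
    x = A_hat @ x
after = mean_pairwise_cosine(x)
assert after > 0.99 > abs(before)          # features collapsed toward one direction
```

Components along all but the dominant eigenvector of A_hat shrink geometrically, which is the spectral view of why deep stacks of aggregation steps lose discriminative node information.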
As such , deeper GCNs can be more effectively trained thanks to the improved learning capacity enabled by the increased model flexibility . 3.3 D2-GCN : SKIPPING STRATEGY DESIGN Overview . Motivated by the analysis in the above subsection , we propose the D2-GCN framework that can dynamically process each graph with a complexity adapting to the data difficulty at three granularities , i.e. , node/edge/bit-wise , as shown in Fig . 1 . We hypothesize that such a coarse-to-fine strategy can achieve more efficient GCN inference without hurting the accuracy , and maximize the flexibility of GCN structures to provide effective regularization for training deeper GCNs . Based on the GCN layer formulation in Eq . 1 , we can represent each layer of our proposed D2-GCN as :
$$x_{l+1} = g^n_l \odot x^{gb}_l + ( 1 - g^n_l ) \odot x_l \quad ( 3 )$$
$$x^{gb}_l = \sum_{k=1}^{K} g^{b,k}_l \odot ( \hat{A}^{ge}_l Q_{B_k} ( x_l ) Q_{B_k} ( w_l ) ) \quad ( 4 )$$
$$\hat{A}^{ge}_l = ( g^e_l )^\top \odot \hat{A} \quad ( 5 )$$
where $x_{l+1}$ and $x_l \in \mathbb{R}^{n \times H}$ denote the feature matrices of layer l + 1 and layer l , respectively , $w_l \in \mathbb{R}^{H \times H}$ is the weights for the Combination phase in layer l , $\hat{A} \in \mathbb{R}^{n \times n}$ is the normalized adjacency matrix , and $\odot$ represents the element-wise matrix multiplication ( broadcast is performed if the matrix shapes do not match ) . All the variables above share the same definitions as in Eq . 1 . Specifically , in Eq . 3 , $g^n_l \in \{ 0 , 1 \}^{n \times 1}$ represents the output of the gating function for node-wise skipping , and $x^{gb}_l$ is the feature matrix after enabling the gating function for bit-wise skipping in layer l ; in Eq . 4 , $g^b_l \in \{ 0 , 1 \}^{n \times K}$ ( $g^{b,k}_l \in \{ 0 , 1 \}^{n \times 1}$ is the k-th column of $g^b_l$ ) represents the output of the gating function for bit-wise skipping , $Q_{B_k} ( \cdot )$ is the quantization function that quantizes the weights $w_l$ and the feature matrix $x_l$ into $B_k$ bits following the pre-defined quantization bit-width options $B = \{ B_1 , \ldots , B_K \}$ , and $\hat{A}^{ge}_l$ is the normalized adjacency matrix after enabling the gating function for edge-wise skipping in layer l ; and in Eq .
5, $g^{e}_{l} \in \{0,1\}^{n \times 1}$ is the output of the gating function for edge-wise skipping. We elaborate below on how these gates work in GCNs and how they tackle the causes of GCNs' inference cost. Node-wise skipping (i.e., Eq. 3). $g^{n}_{l}$ makes a binary decision for each node, based on its hidden features, on whether to skip its aggregation and combination phases. In this way, the less important nodes identified by $g^{n}_{l}$ do not participate in the feed-forward computation during inference, and their hidden features in the feature matrix $x_{l}$ are passed directly to the next layer to construct $x_{l+1}$; the inference cost (FLOPs, latency, and energy) is thus reduced thanks to the smaller effective $n$ and $m$ in the computational complexity analyzed in Eq. 2. Bit-wise skipping (i.e., Eq. 4). $g^{b}_{l}$ determines the quantization precision of both the aggregation and combination phases for each node according to its hidden features. Since quantization reduces the feed-forward computation at the most fine-grained bit level, the cost of updating each feature can be aggressively reduced, making it feasible to handle large node feature vectors even on resource-constrained platforms. Edge-wise skipping (i.e., Eq. 5). $g^{e}_{l}$ determines whether the connection between two nodes is removed, based on the hidden features of the corresponding nodes. Specifically, a connection is removed when the corresponding entry of $\hat{A}^{ge}_{l}$ is zero, as defined in Eq. 5, resulting in a smaller inference cost (FLOPs, latency, and energy) thanks to the smaller effective $m$ in Eq. 2. Gating function design. Inspired by the gating function designs of dynamic CNNs (Wang et al., 2018), we adopt a single feed-forward (i.e., fully-connected) layer to map each node's feature (a vector of shape $1 \times H$) to the output of the gating functions.
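The three skipping mechanisms of Eqs. 3-5 can be sketched in NumPy. This is a hedged illustration, not the paper's implementation: the gates are assumed to be precomputed binary arrays (in the real framework they come from the FC gating functions, trained with straight-through estimators), and `quantize` is a toy uniform quantizer standing in for $Q_{B_k}$; all names are hypothetical.

```python
import numpy as np

def quantize(x, bits):
    """Toy uniform quantizer standing in for Q_Bk in Eq. 4."""
    scale = (2.0 ** (bits - 1) - 1) / (np.abs(x).max() + 1e-12)
    return np.round(x * scale) / scale

def d2gcn_layer(x, w, A_hat, g_node, g_edge, g_bit, bit_options):
    """One D2-GCN layer with edge-, bit-, and node-wise skipping (Eqs. 3-5).

    x: (n, H) features; w: (H, H) combination weights;
    A_hat: (n, n) normalized adjacency;
    g_node, g_edge: (n, 1) binary gates; g_bit: (n, K) one-hot rows;
    bit_options: list of K bit-widths.
    """
    # Eq. 5: zero out the columns (edges from skipped source nodes) of A_hat.
    A_ge = g_edge.T * A_hat
    # Eq. 4: aggregation + combination under each node's selected bit-width.
    x_gb = np.zeros_like(x)
    for k, bits in enumerate(bit_options):
        x_gb += g_bit[:, k:k + 1] * (A_ge @ quantize(x, bits) @ quantize(w, bits))
    # Eq. 3: skipped nodes keep their previous-layer features.
    return g_node * x_gb + (1.0 - g_node) * x
```

Note that this dense sketch only mirrors the math; the efficiency gains reported in the paper come from actually skipping the masked computations (sparse aggregation, low-bit kernels), not from multiplying by zeros.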
For node-wise and edge-wise skipping, the output of each gating function is a single element of shape $1 \times 1$. For bit-wise skipping, the output is a vector of shape $1 \times K$, where $K$ is the number of bit-width options; we set $K = 4$ in our experiments. Additionally, we follow (Wang et al., 2018) in making the training of the gating functions differentiable via straight-through estimators (Bengio et al., 2013). After building the gating functions following the design described above, we surprisingly find that such a simple design can effectively capture the dynamic patterns with a very low computational overhead: in our design, it is less than 0.07% of the GCN model's cost in terms of FLOPs, while the gating functions in CNNs can incur an overhead as large as 12.5% (179× higher) (Wang et al., 2020). 3.4 D2-GCN: TRAINING PIPELINE DESIGN Learning objective. The learning objective of D2-GCN, $L_{D^2\text{-}GCN}(W, W_G)$, can be formulated as:

$$L_{D^2\text{-}GCN}(W, W_G) = L_{GCN}(W, W_G) + \alpha L_{comp} \qquad (6)$$

where $W = \{w_0, w_1, w_2, \ldots\}$ is the set of model weights, $W_G = \{w_{g^n_0}, w_{g^e_0}, w_{g^b_0}, w_{g^n_1}, \ldots\}$ is the set of the gating functions' weights, $L_{GCN}$ is the commonly adopted loss function of graph-based learning tasks such as node classification (Kipf & Welling, 2016) or graph classification (Hu et al., 2020), $\alpha$ is a trade-off parameter that balances task performance and efficiency, and $L_{comp}$ is the computational cost determined by the gating functions. Using FLOPs as its metric, it can be written as:

$$L_{comp} = \frac{\sum_{l=1}^{L} \left( \|A^{ge}_{l}\|_0 \, H \sum_{k=1}^{K} \frac{\|g^{b,k}_{l}\|_0}{n} \frac{B_k}{32} + \|g^{n}_{l}\|_0 \, H^2 \sum_{k=1}^{K} \frac{\|g^{b,k}_{l}\|_0}{n} \left(\frac{B_k}{32}\right)^{2} \right)}{\sum_{l=1}^{L} \left( mH + nH^2 \right)} \qquad (7)$$

The variables in Eq. 7 share the same definitions as those in Eqs. 1-5, i.e.
, $n$ is the total number of nodes, $m$ is the total number of edges, $H$ is the feature dimension of each node, $A^{ge}_{l}$ is the adjacency matrix after applying the gating function for edge-wise skipping in layer $l$, $g^{b,k}_{l}$ is the $k$-th column of $g^{b}_{l}$, the output of the bit-wise skipping's gating function, and $g^{n}_{l}$ is the output of the gating function for node-wise skipping. It is worth noting that Eq. 7 also indicates how our data-dependent dynamic skipping techniques reduce the inference cost. Specifically, node-wise skipping squeezes the cost by reducing the number of nodes from $n$ to $\|g^{n}_{l}\|_0 < n$; edge-wise skipping shrinks the number of edges from $m$ to $\|A^{ge}_{l}\|_0 < m$; and bit-wise skipping reduces the cost of each matrix multiplication/addition by using a smaller bit-width instead of full precision (32 bits), which can be regarded as multiplying a further factor into the inference cost (i.e., $\sum_{k=1}^{K} \frac{\|g^{b,k}_{l}\|_0}{n} \frac{B_k}{32} < 1$ and $\sum_{k=1}^{K} \frac{\|g^{b,k}_{l}\|_0}{n} (\frac{B_k}{32})^2 < 1$). Three-stage training pipeline. Jointly training the GCN and the gating functions from scratch could lead to both lower task performance and inferior gating decisions, since gating functions with random initializations may harm the learning process via improper gating strategies in the early training stages. Therefore, we propose a three-stage training pipeline to stabilize the training of D2-GCN. Stage 1: We pretrain the GCN model with the gating functions fixed and unused. We find this step indispensable for a decent D2-GCN design, since the gating functions can hardly learn a meaningful gating strategy on top of an under-performing GCN model. Stage 2: We fix the GCN model and train only the gating functions to maximize task performance, setting the trade-off parameter $\alpha$ in Eq. 6 to 0.
Since randomly initialized gating functions may generate improper gating strategies that deteriorate the pretrained GCN's performance, this step helps generate a decent initialization for the gating functions' weights. Stage 3: We jointly train the GCN model and the gating functions based on Eq. 6 to optimize both the task performance and the computational cost. After this step, the trained D2-GCN model is ready to be delivered and deployed onto the target platform. | The paper attempts to address the training efficiency and scalability of GCNs. Specifically, a so-called Data-Dependent dynamic GCN framework is proposed, in which node-wise skipping, edge-wise skipping, and bit-wise skipping are integrated via gating functions to squeeze out (or reduce) the unimportant neighbor nodes in combination, the unimportant edge connections, and the bit-precision, respectively. Extensive experiments are provided, showing new SOTA results on benchmark datasets. | SP:527729d7c95795a8d9a858f6d4a726bbbddc48b5 |
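For concreteness, the relative-FLOPs cost $L_{comp}$ of Eq. 7 can be computed directly from the gate outputs. A hedged NumPy sketch follows; the function and argument names are illustrative, not from the paper's code:

```python
import numpy as np

def comp_cost(A_ge_list, g_node_list, g_bit_list, bit_options, n, m, H):
    """Relative FLOPs cost L_comp of Eq. 7.

    A_ge_list:   per-layer gated adjacency matrices, each (n, n)
    g_node_list: per-layer node gates, each (n, 1) in {0, 1}
    g_bit_list:  per-layer bit gates, each (n, K) with one-hot rows
    bit_options: list of K bit-widths B_k
    """
    num = 0.0
    for A_ge, g_n, g_b in zip(A_ge_list, g_node_list, g_bit_list):
        # Node-averaged bit-scaling factors; both are <= 1.
        f1 = sum(np.count_nonzero(g_b[:, k]) / n * b / 32
                 for k, b in enumerate(bit_options))
        f2 = sum(np.count_nonzero(g_b[:, k]) / n * (b / 32) ** 2
                 for k, b in enumerate(bit_options))
        num += np.count_nonzero(A_ge) * H * f1       # gated aggregation term
        num += np.count_nonzero(g_n) * H ** 2 * f2   # gated combination term
    den = len(A_ge_list) * (m * H + n * H ** 2)      # full-cost baseline (Eq. 2)
    return num / den
```

With all gates on and a single 32-bit option, the function returns 1.0 (full cost); skipping drives it below 1, and $\alpha$ in Eq. 6 trades this term off against the task loss.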
D$^2$-GCN: Data-Dependent GCNs for Boosting Both Efficiency and Scalability | 1 INTRODUCTION. Graph Convolutional Networks (GCNs) have drawn increasing attention thanks to their performance breakthroughs in graph-based learning tasks. In particular, the success of GCNs is attributed to their excellent capability to learn from non-Euclidean graph structures with irregular neighborhood connections via two execution phases: (1) aggregation, during which the features of the neighbor nodes are aggregated, and (2) combination, in which the features of each node are further updated via feed-forward layers to extract more useful features. Despite their promising performance, the unique structure of GCNs imposes prohibitive challenges for applying them to more extensive real-world applications, especially those with large-scale graphs. First, GCNs' prohibitive inference cost limits their deployment on resource-constrained devices. For example, a 2-layer GCN model requires 19 Giga (G) Floating Point Operations (FLOPs) to process the Reddit graph (Tailor et al., 2021) and incurs a latency of 2.94×10^5 milliseconds when executed on an Intel Xeon E5-2680 CPU platform (Geng et al., 2020), which are 2× and 5000× those of a powerful 50-layer Convolutional Neural Network (CNN), ResNet-50, respectively (Awan et al., 2017). Second, while CNNs with more layers are known to consistently favor higher accuracy (Belkin et al., 2019), deeper GCNs suffer from accuracy drops compared with shallower ones (Kipf & Welling, 2016), making it difficult to unleash their full potential. Aiming to tackle both of the aforementioned challenges, we draw inspiration from recent works. First, previous CNN works (Katharopoulos & Fleuret, 2018; Johnson & Guestrin, 2018; Coleman et al.
, 2018) show that not all samples are equally important during training and that training on more informative samples can improve model accuracy, motivating us to consider allocating GCN computational budgets adapted to sample complexity. In addition, (Zhang et al., 2019) finds that not all CNN layers contribute equally to the final model accuracy, and (Wang et al., 2018) demonstrates that skipping some of the layers can even help boost accuracy while reducing the inference cost of CNNs. Meanwhile, recent GCN works show that not all nodes contribute equally to feature extraction (Veličković et al., 2018) and that some neighbor nodes can be randomly abandoned without affecting task performance (Hamilton et al., 2017). This prior art motivates us to consider data-dependent dynamic GCNs for (1) pushing forward their achievable accuracy-efficiency frontier and (2) improving the trainability of deeper GCNs. To this end, we adopt a new perspective compared with existing GCN compression works and explore data-dependent dynamic GCNs on top of SOTA GCNs. Specifically, we identify data-dependent patterns that are unique to GCNs at different granularities, and then leverage them to largely squeeze out unnecessary costs within GCNs to boost their inference efficiency and trainability. We make the following contributions: • We propose a Data-Dependent GCN framework dubbed D2-GCN, the first dynamic inference framework dedicated to GCNs. D2-GCN integrates data-dependent dynamic skipping at multiple granularities (node-wise, edge-wise, and bit-wise) via a low-cost indicator to notably reduce the GCN inference cost while offering comparable or even better accuracy. • D2-GCN is found to naturally alleviate the over-smoothing issue in GCNs and thus improves the trainability of deeper GCNs, which we conjecture is because D2-GCN introduces more flexibility into the models.
Hence, D2-GCN opens up a new knob that not only boosts GCNs' inference efficiency but also provides a promising path towards deeper and more powerful GCNs. • Extensive experiments and ablation studies on top of various SOTA GCNs and datasets consistently validate the effectiveness and advantages of the proposed D2-GCN. In particular, D2-GCN achieves 1.1×∼37.0× inference FLOPs reduction and 1.6×∼8.4× lower energy cost, while leading to comparable or even better accuracy (-0.5%∼+5.6%). 2 RELATED WORKS. Graph Convolutional Networks. GCNs are among the most widely adopted algorithms for non-Euclidean and irregular graph structures (Wu et al., 2020), used to categorize the nodes of a graph (Sen et al., 2008) or predict the class of whole graphs (Hu et al., 2020). They can mainly be divided into two categories: spectral-based (Kipf & Welling, 2016) and spatial-based (Gao et al., 2019). For spectral-based GCNs, graph convolution based on spectral graph theory (Chung & Graham, 1997) was first proposed by (Bruna et al., 2013) and improved in (Kipf & Welling, 2016; Defferrard et al., 2016; Li et al., 2018b) for wider applications and better accuracy. Meanwhile, spatial-based GCNs (Hamilton et al., 2017) directly perform the convolution in the graph domain by aggregating the neighbor nodes' features, and recent works further improve their accuracy via clustering-based graph sampling (Chiang et al., 2019), more expressive aggregation schemes (Zeng et al., 2019), and attention mechanisms (Veličković et al., 2018). Orthogonal to those prior works, we explore and develop data-dependent dynamic GCNs on top of SOTA spatial-based GCNs to improve their efficiency and scalability. Efficient GCNs.
Motivated by the fact that the prohibitive computational cost and memory usage of GCNs, which grow rapidly with graph size, limit both the development of more powerful GCNs and their deployment in real-world applications, various compression techniques have been developed. These mainly fall into three categories: pruning graphs (i.e., simpler graphs), pruning weights (i.e., sparser weights), and quantization (i.e., lower bit-precision for hidden features and weights). For pruning graphs, (Li et al., 2020b) introduces a sparse regularizer for pruning graph connections (i.e., the graph adjacency matrix) and leverages an alternating direction method of multipliers (ADMM) training method to make the regularization differentiable; for pruning weights, (Chen et al., 2021) prunes the graph adjacency matrix and the model weights simultaneously, generalizing the lottery ticket hypothesis (Frankle & Carbin, 2018) to GCNs; for quantization, (Tailor et al., 2021) for the first time trains GCNs with 8-bit integer forward arithmetic without sacrificing classification accuracy. Our proposed D2-GCN takes an unexplored and orthogonal perspective, exploring data-dependent dynamic GCNs at multiple granularity levels to achieve better accuracy-efficiency trade-offs and improve GCNs' scalability. Deeper GCNs. One challenge in designing GCNs is the scalability of deeper GCNs for (1) handling real-world large graphs and (2) unleashing the potential of more sophisticated GCN architectures, motivating various techniques along this direction. A pioneering work (Kipf & Welling, 2016) attempts to build deeper GCNs through a residual mechanism, and finds that such a design is effective only for GCNs with no more than 2 layers. Then, (Li et al., 2018a) argues that the over-smoothing issue (i.e.
, connected nodes in deeper layers have increasingly similar hidden features) prevents GCN architectures from going deeper; thus, several subsequent works strive to design deeper GCNs by alleviating this issue. For example, (Rong et al., 2020) proposes to tackle over-smoothing by randomly removing a certain number of edges from the input graph at each training epoch; (Li et al., 2019b;a) further explore a generalized aggregation function and normalization layers to boost the performance of GCNs on large-scale graph learning tasks. Our proposed D2-GCN distinguishes itself as the first to explore deeper GCNs from the data-dependent aspect by incorporating automated data-dependent gating functions to alleviate the over-smoothing issue and facilitate deeper GCNs. Dynamic inference. Dynamic inference methods have been developed in the context of CNNs to adapt model complexity to the input data and reduce the overall average inference cost. Early works (Teerapittayanon et al., 2016; Huang et al., 2017) equip DNNs with extra branch classifiers so that a portion of inputs exit at earlier branches. Later works incorporate a finer-grained layer-wise skipping policy by selectively executing a subset of layers conditioned on each input. In particular, SkipNet (Wang et al., 2018) adopts reinforcement learning to learn the layer-wise skipping policy, and BlockDrop (Wu et al., 2018) trains one global policy network to skip residual blocks. Subsequent works extend this idea to even finer granularity levels, e.g., the filter level (Lin et al., 2017; Chen et al., 2018) or the bit level (Shen et al., 2020; Fu et al., 2020). Different from previous works, our D2-GCN framework is the first attempt at dynamic inference in the context of GCNs. More importantly, D2-GCN leverages the unique structure of GCNs to exploit GCN-specific data-dependent strategies at three different granularities, i.e.
, node-wise, edge-wise, and bit-wise, to largely push forward the accuracy-efficiency frontier of GCNs, and further uses these strategies to facilitate the development of deeper GCNs. 3 THE PROPOSED D2-GCN FRAMEWORK In this section, we first introduce the preliminaries of GCNs and the motivating analysis supporting our D2-GCN framework, and then present the detailed design of D2-GCN and its training pipeline. 3.1 PRELIMINARIES OF GCNS. GCN general formulation. Consider a graph $G = (V, E)$ with $n$ nodes $v_i \in V$, $m$ edges $(v_i, v_j) \in E$, and an adjacency matrix $A \in \mathbb{R}^{n \times n}$ representing the connectivity information, whose non-zero entries indicate the existing connections among nodes. The degree of each node $v_i \in V$ is defined as $d_i = \sum_j A_{ij}$, and the diagonal degree matrix $D$ is formed with $D_{ii} = d_i$. For each layer $l$ of a GCN, the hidden features of the nodes are represented by the feature matrix $x_l \in \mathbb{R}^{n \times H}$, where $H$ denotes the hidden feature dimension of each node. A GCN layer can thus be formulated as:

$$x_{l+1} = \mathrm{ACT}_l(\hat{A} x_l W_l), \qquad (1)$$

where $\hat{A}$ is the normalized version of $A$: $\hat{A} = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$, $\mathrm{ACT}_l$ is the activation function of layer $l$, and $W_l$ is the weight matrix of layer $l$. The inference process of one GCN layer can be viewed as two separate phases: Aggregation and Combination. $\hat{A} x_l$ represents the Aggregation phase, which aggregates the 1-hop neighbors of each node into a unified feature vector; after that, during the Combination phase, $\hat{A} x_l$ is transformed to $\hat{A} x_l W_l$ via a feed-forward layer. Meanwhile, some works define the combination phase inside the aggregation phase, merging the two into a single message-passing process with a self-loop (Hamilton, 2020). Complexity analysis of GCNs.
The computational complexity of GCN inference can be expressed as $O(LmH + LnH^2)$, (2), where $L$ is the total number of GCN layers, $n$ is the total number of nodes, $m$ is the total number of edges, and $H$ is the hidden feature dimension of each node (Chiang et al., 2019). 3.2 D2-GCN: MOTIVATING ANALYSIS Causes of GCNs' prohibitive inference cost. The inefficiency of GCNs mainly comes from two aspects. First, graphs are often very large, with complex and irregular neighbor connections, leading to a prohibitive number of nodes and edges, i.e., $n$ and $m$ in Eq. 2, respectively. For example, a graph dataset can come with as many as 169,343 nodes and 39,561,252 edges (Hu et al., 2020), incurring both large computational and data-movement costs. Second, the dimension of GCNs' node feature vectors, i.e., $H$ in Eq. 2, can be very large, e.g., each node in the Citeseer graph has a feature dimension of 3,703, leading to a heavy workload in the feed-forward computation of the combination step, especially when the computations are conducted in full precision (i.e., 32-bit floating point). Accordingly, a straightforward way to reduce these costs is to reduce the number of (1) nodes, i.e., $n$ in Eq. 2, (2) connections between nodes, i.e., edges, $m$ in Eq. 2, and (3) features of each node, i.e., $H$ in Eq. 2; however, naively reducing these parameters can hurt GCNs' model capacity and thus their achievable task accuracy. Causes of deeper GCNs' training difficulty. Deeper GCNs are consistently observed to suffer from accuracy drops compared with their shallower counterparts, regardless of the adopted GCN design (Kipf & Welling, 2016; Pham et al., 2017; Rahimi et al., 2018; Xu et al., 2018). (Li et al., 2018a; Zhao & Akoglu, 2019) propose that this accuracy drop results from the GCN's over-smoothing issue, i.e.
, repeatedly applying GCN layers many times will make the hidden features of different nodes converge to similar values.
| In this work, the authors propose relatively low-cost GCNs in a data-dependent way. Their framework has three main components: node-wise, edge-wise, and bit-wise skipping. 1. Node-wise skipping is determined by a binary decision for each node based on its features. 2. Edge-wise skipping is about the removal of connections between two nodes. 3. Bit-wise skipping is about the quantization precision of aggregated node features. It boosts efficiency while achieving comparable performance over benchmark datasets. | SP:527729d7c95795a8d9a858f6d4a726bbbddc48b5 |
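As a concrete companion to the preliminaries above, the two-phase GCN layer of Eq. 1 and the complexity estimate of Eq. 2 can be sketched in NumPy. This is an illustrative sketch (names hypothetical), assuming an adjacency matrix with self-loops so that all node degrees are non-zero:

```python
import numpy as np

def gcn_layer(x, W, A, act=np.tanh):
    """Vanilla GCN layer (Eq. 1): x_{l+1} = ACT(Â x_l W_l).

    x: (n, H) node features; W: (H, H') weights; A: (n, n) adjacency with self-loops.
    """
    d = A.sum(axis=1)                      # node degrees d_i = sum_j A_ij
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt    # Â = D^{-1/2} A D^{-1/2}
    agg = A_hat @ x                        # Aggregation phase
    return act(agg @ W)                    # Combination phase

def gcn_flops(L, n, m, H):
    """Rough per-inference cost of Eq. 2: O(L m H + L n H^2)."""
    return L * m * H + L * n * H ** 2
```

Splitting the layer into `agg` and the matmul with `W` mirrors the Aggregation/Combination decomposition that the skipping strategies of Section 3.3 act upon.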
D$^2$-GCN: Data-Dependent GCNs for Boosting Both Efficiency and Scalability | 1 INTRODUCTION . Graph Convolutional Networks ( GCNs ) have drawn increasing attention thanks to their performance breakthroughs in graph-based learning tasks . In particular , the success of GCNs are attributed to their excellent capability to learn from non-Euclidean graph structures with irregular graph neighborhood connections via two execution phases : ( 1 ) aggregation , during which the features from the neighbor nodes are aggregated , and ( 2 ) combination , in which further updates of the features of each node are made via feed-forward layers to extract more useful features . Despite their promising performance , the unique structure of GCNs imposes prohibitive challenges for applying them to more extensive real-world applications especially those with large-scale graphs . First , GCNs ’ prohibitive inference cost limits their deployment into resource-constrained devices . For example , a 2-layer GCN model requires 19 Giga ( G ) Floating Point Operations ( FLOPs ) to process the Reddit graph ( Tailor et al. , 2021 ) and a latency of 2.94×105 milliseconds , when being executed on an Intel Xeon E5-2680 CPU platform ( Geng et al. , 2020 ) , which is 2× and 5000× over that of a 50-layer powerful Convolutional Neural Network ( CNN ) ( Awan et al. , 2017 ) , ResNet50 , respectively ; Second , while CNNs with more layers are known to consistently favor a higher accuracy ( Belkin et al. , 2019 ) , deeper GCNs suffer from accuracy drops compared with shallower ones ( Kipf & Welling , 2016 ) , making it difficult to unleash their full potential . Aiming at tackling both of the aforementioned challenges , we draw inspirations from recent works . First , previous CNN works ( Katharopoulos & Fleuret , 2018 ; Johnson & Guestrin , 2018 ; Coleman et al. 
, 2018) show that not all samples are equally important during training, and that training on more informative samples can improve the model accuracy, motivating us to consider allocating GCN computational budgets according to sample complexity. In addition, (Zhang et al., 2019) finds that not all CNN layers contribute equally to the final model accuracy, and (Wang et al., 2018) demonstrates that skipping some of the layers can even help boost the accuracy while reducing the inference cost of CNNs. Meanwhile, recent GCN works show that not all nodes contribute equally to the feature extraction (Veličković et al., 2018) and that some neighbor nodes can be randomly abandoned without affecting the task performance (Hamilton et al., 2017). These prior arts motivate us to consider data-dependent dynamic GCNs for (1) pushing forward their achievable accuracy-efficiency frontier and (2) improving the trainability of deeper GCNs. To this end, we adopt a new perspective compared to existing GCN compression works and explore data-dependent dynamic GCNs on top of SOTA GCNs. Specifically, we identify the potential data-dependent patterns that are unique to GCNs at different granularities, and then leverage them to largely squeeze out unnecessary costs within GCNs to boost their inference efficiency and trainability. Specifically, we make the following contributions: • We propose a Data-Dependent GCN framework dubbed D2-GCN, the first dynamic inference framework dedicated to GCNs. D2-GCN integrates data-dependent dynamic skipping at multiple granularities: node-wise, edge-wise, and bit-wise, via a low-cost indicator to notably reduce the GCN inference cost while offering comparable or even better accuracy. • D2-GCN is found to naturally alleviate the over-smoothing issue in GCNs and thus improves the trainability of deeper GCNs, which we conjecture is because D2-GCN introduces more flexibility into the models.
Hence , D2-GCN opens up a new knob to not only boost GCNs ’ inference efficiency but also provide a promising perspective towards deeper and more powerful GCNs . • Extensive experiments and ablation studies on top of various SOTA GCNs and datasets consistently validate the effectiveness and advantages of the proposed D2-GCN . In particular , D2-GCN can achieve ↓1.1×∼ ↓37.0× inference FLOPs reduction and ↓1.6×∼ ↓8.4× lower energy cost , while leading to a comparable or even better accuracy ( ↓0.5 % ∼ ↑5.6 % ) . 2 RELATED WORKS . Graph Convolutional Networks . GCNs are one of the most widely adopted algorithms for nonEuclidean and irregular graph structures ( Wu et al. , 2020 ) to categorize the nodes in the same graph ( Sen et al. , 2008 ) or predict the class of graphs ( Hu et al. , 2020 ) . They can mainly be divided into two categories : spectral-based ( Kipf & Welling , 2016 ) and spatial-based ( Gao et al. , 2019 ) . For the spectral-based GCNs , graph convolution which is based on the spectral graph theory ( Chung & Graham , 1997 ) is firstly proposed by ( Bruna et al. , 2013 ) and improved in ( Kipf & Welling , 2016 ; Defferrard et al. , 2016 ; Li et al. , 2018b ) for wider applications and better accuracy . Meanwhile , the spatial-based GCNs ( Hamilton et al. , 2017 ) directly perform the convolution in the graph domain by aggregating the neighbor nodes ’ features and recent works further improve their accuracy via clustering-based graph sampling ( Chiang et al. , 2019 ) , more expressive aggregation scheme ( Zeng et al. , 2019 ) , and attention mechanism ( Veličković et al. , 2018 ) . Orthogonal to those prior works , we explore and develop data-dependent dynamic GCNs on top of SOTA spatial-based GCNs for improving their efficiency and scalability . Efficient GCNs . 
Motivated by the fact that the prohibitive computational cost and memory usage of GCNs, which grow rapidly with the graph size, limit the development of more powerful GCNs and their deployment in real-world applications, various compression techniques have been developed, which mainly fall into three categories: pruning graphs (i.e., simpler graphs), pruning weights (i.e., sparser weights), and quantization (i.e., lower bit-precision for hidden features and weights). For pruning graphs, (Li et al., 2020b) introduces a sparse regularizer for pruning the graph connections (i.e., the graph adjacency matrix) and leverages an alternating direction method of multipliers (ADMM) training method to make the regularization differentiable; for pruning weights, (Chen et al., 2021) prunes the graph adjacency matrix and the model weights simultaneously, generalizing the lottery ticket hypothesis (Frankle & Carbin, 2018) to GCNs; for quantization, (Tailor et al., 2021) for the first time trains GCNs with 8-bit integer arithmetic forwarding without sacrificing the classification accuracy. Our proposed D2-GCN considers an unexplored and orthogonal perspective, and explores data-dependent dynamic GCNs at multiple granularity levels for achieving better accuracy-efficiency trade-offs and improving GCNs' scalability. Deeper GCNs. One challenge in designing GCNs is the scalability of deeper GCNs for (1) handling real-world large graphs and (2) unleashing the potential of more sophisticated GCN architectures, motivating various techniques along this direction. A pioneering work (Kipf & Welling, 2016) attempts to build deeper GCNs through a residual mechanism, and finds that such a design is only effective for GCNs with no more than 2 layers. Then, (Li et al., 2018a) argues that the over-smoothing issue (i.e.
, connected nodes in deeper layers have more similar hidden features) prevents GCN architectures from going deeper, and thus several follow-up works strive to design deeper GCNs by alleviating this issue. For example, (Rong et al., 2020) proposes to tackle the over-smoothing issue by randomly removing a certain number of edges from the input graph at each training epoch; (Li et al., 2019b;a) further explore a generalized aggregation function and normalization layers to boost the performance of GCNs on large-scale graph learning tasks. Our proposed D2-GCN distinguishes itself as the first to explore deeper GCNs from the data-dependent aspect by incorporating automated data-dependent gating functions to alleviate the over-smoothing issue and facilitate deeper GCNs. Dynamic inference. Dynamic inference methods have been developed in the context of CNNs for adapting the model complexity to the input data to reduce the overall average inference cost. Early works (Teerapittayanon et al., 2016; Huang et al., 2017) equip DNNs with extra branch classifiers to enforce a portion of inputs to exit at earlier branches. Later works incorporate a finer-grained layer-wise skipping policy by selectively executing a subset of layers conditioned on each input. In particular, SkipNet (Wang et al., 2018) adopts reinforcement learning to learn the layer-wise skipping policy and BlockDrop (Wu et al., 2018) trains one global policy network to skip residual blocks. Follow-up works extend this idea to even finer granularity levels, e.g., the filter level (Lin et al., 2017; Chen et al., 2018) or the bit level (Shen et al., 2020; Fu et al., 2020). Different from previous works, our D2-GCN framework is the first attempt at dynamic inference in the context of GCNs. More importantly, D2-GCN leverages the unique structures of GCNs to exploit GCN-specific data-dependent strategies at three different granularities, i.e.
, node-wise, edge-wise, and bit-wise, to largely boost the accuracy-efficiency trade-off frontiers of GCNs, and further makes these strategies facilitate the development of deeper GCNs. 3 THE PROPOSED D2-GCN FRAMEWORK In this section, we first introduce the preliminaries of GCNs and the motivating analysis that supports our D2-GCN framework, and then present the detailed design of D2-GCN and its training pipeline. 3.1 PRELIMINARIES OF GCNS. GCN general formulation. Consider a graph $G = (V, E)$ with $n$ nodes $v_i \in V$, $m$ edges $(v_i, v_j) \in E$, and an adjacency matrix $A \in \mathbb{R}^{n\times n}$ representing the connectivity information, where the non-zero entries represent the existing connections among different nodes. Also, the node degree of each node $v_i \in V$ is defined as $d_i = \sum_j A_{ij}$, and the diagonal degree matrix $D$ is formed with $D_{ii} = d_i$. For each layer $l$ of a GCN, the hidden features of the nodes are represented by the feature matrix $x_l \in \mathbb{R}^{n\times H}$, where $H$ denotes the hidden feature dimension of each node. Thus, a GCN layer can be formulated as: $x_{l+1} = \mathrm{ACT}_l(\hat{A} x_l W_l)$, (1) where $\hat{A}$ is the normalized version of $A$: $\hat{A} = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$, $\mathrm{ACT}_l$ represents the activation function of layer $l$, and $W_l$ represents the weights of layer $l$. The inference process of one GCN layer can be viewed as two separate phases: Aggregation and Combination. $\hat{A} x_l$ represents the Aggregation phase, which aggregates the 1-hop neighbors of each node into a unified feature vector; after that, during the Combination phase, $\hat{A} x_l$ is transformed to $\hat{A} x_l W_l$ via a feed-forward layer. Meanwhile, some works define the combination phase inside the aggregation phase, thus merging the two phases into a single message passing process with the self-loop (Hamilton, 2020). Complexity analysis of GCNs.
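To make the two-phase layer of Eq. 1 concrete, here is a minimal NumPy sketch; the ReLU activation and the toy graph are illustrative assumptions, not choices made by the paper:

```python
import numpy as np

def gcn_layer(A, x, W):
    """One GCN layer per Eq. 1: x_next = ACT(A_hat @ x @ W)."""
    deg = A.sum(axis=1)                       # node degrees d_i = sum_j A_ij
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # D^{-1/2}
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt       # normalized adjacency
    agg = A_hat @ x                           # Aggregation phase
    return np.maximum(agg @ W, 0.0)           # Combination phase + ReLU

# toy 3-node graph with self-loops so every degree is nonzero
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
W = rng.standard_normal((4, 4))
out = gcn_layer(A, x, W)
print(out.shape)  # (3, 4)
```

The two matrix products map directly onto the aggregation (`A_hat @ x`) and combination (`@ W`) costs analyzed next.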
The computational complexity of GCN inference can be represented as: $O(LmH + LnH^2)$, (2) where $L$ is the total number of GCN layers, $n$ is the total number of nodes, $m$ is the total number of edges, and $H$ is the hidden feature dimension of each node (Chiang et al., 2019). 3.2 D2-GCN: MOTIVATING ANALYSIS Causes of GCNs' prohibitive inference cost. The inefficiency of GCNs mainly comes from two aspects. First, graphs are often very large, as exacerbated by their complex and irregular neighbor connections, which lead to a prohibitive number of nodes and edges, i.e., $n$ and $m$ in Eq. 2, respectively. For example, a graph dataset can come with as many as 169,343 nodes and 39,561,252 edges (Hu et al., 2020), which can cause both large computational and data movement costs. Second, the dimension of GCNs' node feature vectors, i.e., $H$ in Eq. 2, can be very large, e.g., each node in the Citeseer graph has a feature dimension of 3,703, leading to a high workload during the feed-forward computation in the combination step, especially when the computations are conducted in full precision (i.e., 32-bit floating point). Accordingly, a straightforward way to reduce the aforementioned costs associated with GCN inference is to reduce the number of (1) nodes, i.e., $n$ in Eq. 2, (2) connections between nodes, i.e., edges, $m$ in Eq. 2, and (3) features of each node, i.e., $H$ in Eq. 2, whereas naively reducing these parameters can hurt GCNs' model capacity and thus their achievable task accuracy. Causes of deeper GCNs' training difficulty. Deeper GCNs are consistently observed to suffer from accuracy drops compared to their shallower counterparts, regardless of the adopted GCN designs (Kipf & Welling, 2016; Pham et al., 2017; Rahimi et al., 2018; Xu et al., 2018). (Li et al., 2018a; Zhao & Akoglu, 2019) propose that the accuracy drop results from the GCN's over-smoothing issue, i.e.
, repeatedly applying GCN layers many times will make the hidden features of different nodes converge to similar values. Based on that, some regularizations have been proposed to enable a deeper GCN to achieve a higher accuracy, e.g., randomly dropping out certain edges of the input graph during each training iteration in (Rong et al., 2020). Not all data/(model components) are equally important. Recent works on both CNNs (Katharopoulos & Fleuret, 2018; Wang et al., 2018) and GCNs (Veličković et al., 2018; Hamilton et al., 2017) show that not all data samples, e.g., input images or graphs, and model components, e.g., specific Convolutional or Graph Convolutional layers, of the same model are equally important for a given task in terms of the achievable accuracy vs. efficiency trade-offs. For example, some data/(model components) can be skipped without hurting, or even while boosting, the accuracy due to the increased model flexibility. These observations motivate us to consider boosting both GCNs' efficiency and scalability by processing the graphs in a comprehensive data-dependent manner, i.e., for different data, only a fraction of the graph's components, e.g., parts of the nodes, edges, or bitwidth, are involved in the computations based on the corresponding features of the given graph. The resulting benefits come from two aspects: (1) such a data-dependent design can dynamically allocate larger computational budgets to difficult data samples and smaller budgets to simpler data samples to reduce the total inference cost while maintaining accuracy; and (2) the increased model flexibility resulting from dynamic inference can naturally provide a certain regularization effect, since different components of a graph may work together or independently in a data-dependent manner to alleviate GCNs' over-smoothing issue discussed in (Li et al., 2020a).
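As a back-of-envelope instance of the Eq. 2 cost model discussed above, the following sketch plugs in the node and edge counts quoted in the text; the 2-layer depth and hidden dimension of 128 are hypothetical choices for illustration:

```python
def gcn_flops(L, n, m, H):
    # Eq. 2 cost model: O(L*m*H + L*n*H^2), aggregation plus combination
    return L * m * H + L * n * H * H

# node/edge counts quoted in the text; L = 2 and H = 128 are assumptions
flops = gcn_flops(L=2, n=169_343, m=39_561_252, H=128)
print(f"{flops / 1e9:.1f} GFLOPs")  # 15.7 GFLOPs
```

Note how the edge term ($LmH$) dominates here, which is why the edge-wise skipping introduced later directly attacks the largest cost component.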
As such, deeper GCNs can be more effectively trained thanks to the improved learning capacity enabled by the increased model flexibility. 3.3 D2-GCN: SKIPPING STRATEGY DESIGN Overview. Motivated by the analysis in the above subsection, we propose the D2-GCN framework, which can dynamically process each graph with a complexity adapted to the data difficulty at three granularities, i.e., node/edge/bit-wise, as shown in Fig. 1. We hypothesize that such a coarse-to-fine strategy can achieve more efficient GCN inference without hurting the accuracy, and maximize the flexibility of GCN structures to provide effective regularization for training deeper GCNs. Based on the GCN layer formulation in Eq. 1, each layer of our proposed D2-GCN can be represented as: $x_{l+1} = g^n_l \odot x^{gb}_l + (1 - g^n_l) \odot x_l$ (3), $x^{gb}_l = \sum_{k=1}^{K} g^{b,k}_l \odot (\hat{A}^{ge}_l\, Q_{B_k}(x_l)\, Q_{B_k}(w_l))$ (4), $\hat{A}^{ge}_l = (g^e_l)^{T} \odot \hat{A}$ (5), where $x_{l+1}$ and $x_l \in \mathbb{R}^{n\times H}$ denote the feature matrices of layer $l+1$ and layer $l$, respectively, $w_l \in \mathbb{R}^{H\times H}$ is the weights for the Combination phase in layer $l$, $\hat{A} \in \mathbb{R}^{n\times n}$ is the normalized adjacency matrix, and $\odot$ represents element-wise matrix multiplication (broadcast is performed if the matrix shapes do not match). All the variables above share the same definitions as in Eq. 1. Specifically, in Eq. 3, $g^n_l \in \{0,1\}^{n\times 1}$ represents the output of the gating function for node-wise skipping, and $x^{gb}_l$ is the feature matrix after enabling the gating function for bit-wise skipping in layer $l$; in Eq. 4, $g^b_l \in \{0,1\}^{n\times K}$ ($g^{b,k}_l \in \{0,1\}^{n\times 1}$ is the $k$-th entry of $g^b_l$) represents the output of the gating function for bit-wise skipping, $Q_{B_k}(\cdot)$ is the quantization function that quantizes the weights $w_l$ and feature matrix $x_l$ into $B_k$ bits following the pre-defined quantization bit-width options $B = \{B_1, \ldots, B_K\}$, and $\hat{A}^{ge}_l$ is the normalized adjacency matrix after enabling the gating function for edge-wise skipping in layer $l$; and in Eq.
5, $g^e_l \in \{0,1\}^{n\times 1}$ represents the output of the gating function for edge-wise skipping. We elaborate below on how they work in GCNs and how they tackle the causes of GCNs' inference cost. Node-wise skipping (i.e., Eq. 3). $g^n_l$ makes a binary decision for each node regarding whether to skip its aggregation and combination phases based on its hidden features. In this way, a set of less important nodes identified by $g^n_l$ will not participate in the feed-forward computation during each GCN inference, and the corresponding hidden features in the feature matrix $x_l$ will be directly passed to the next layer to construct $x_{l+1}$, so that the inference cost (FLOPs, latency, and energy) is reduced thanks to the smaller $n$ and $m$ in the computational complexity, as analyzed in Eq. 2. Bit-wise skipping (i.e., Eq. 4). $g^b_l$ determines the quantization precision in both the aggregation and combination phases for each node according to its hidden features. Since quantization reduces the feed-forward computation at the most fine-grained bit level, the computational cost of updating each feature can be aggressively reduced, making it feasible to deal with large node feature vectors even on resource-constrained platforms. Edge-wise skipping (i.e., Eq. 5). $g^e_l$ determines whether the connection between two nodes will be removed based on the hidden features of the corresponding nodes. Specifically, the connection is removed when the corresponding entry in $\hat{A}^{ge}_l$ is zero, as defined in Eq. 5, which results in a smaller inference cost (FLOPs, latency, and energy) thanks to the resulting smaller $m$ in Eq. 2. Gating function design. Inspired by the gating function designs of dynamic CNNs (Wang et al., 2018), we adopt a single feed-forward layer (i.e., a fully-connected layer) to map each node's feature (a vector with the shape of $1 \times H$) to the output of the gating functions.
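A minimal NumPy sketch of the three skipping mechanisms in Eqs. 3-5; the gates are supplied by hand rather than produced by learned gating functions, and the uniform fake-quantizer is an illustrative stand-in for the paper's $Q_{B_k}$:

```python
import numpy as np

def quantize(t, bits):
    # uniform fake-quantization to `bits` bits -- an illustrative stand-in
    # for Q_{B_k}, not the paper's exact quantizer
    scale = (2 ** (bits - 1) - 1) / (np.abs(t).max() + 1e-8)
    return np.round(t * scale) / scale

def d2gcn_layer(A_hat, x, w, g_n, g_e, g_b, bit_options):
    """One D2-GCN layer following Eqs. 3-5 (gates given, not learned).

    g_n: (n,1) node gate, g_e: (n,1) edge gate, g_b: (n,K) one-hot bit gate.
    """
    A_ge = g_e.T * A_hat                       # Eq. 5: edge-wise skipping
    x_gb = np.zeros_like(x)
    for k, bits in enumerate(bit_options):     # Eq. 4: bit-wise skipping
        x_gb += g_b[:, k:k + 1] * (A_ge @ quantize(x, bits) @ quantize(w, bits))
    return g_n * x_gb + (1.0 - g_n) * x        # Eq. 3: node-wise skipping

n, H = 4, 3
rng = np.random.default_rng(0)
A_hat = np.ones((n, n)) / n                    # toy normalized adjacency
x = rng.standard_normal((n, H))
w = rng.standard_normal((H, H))
g_n = np.array([[1.], [0.], [1.], [1.]])       # node 1 is skipped entirely
g_e = np.ones((n, 1))                          # keep all edges
g_b = np.eye(2)[[0, 1, 0, 1]]                  # per-node bit-width choice
out = d2gcn_layer(A_hat, x, w, g_n, g_e, g_b, bit_options=[4, 8])
print(np.allclose(out[1], x[1]))  # True: a skipped node passes its features through
```

The final line illustrates the node-wise residual path of Eq. 3: when $g^n_l = 0$ for a node, its features are copied unchanged to the next layer.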
For the node-wise and edge-wise skipping, the output of the gating function is a single element with the shape of $1 \times 1$. For the bit-wise skipping, the output of the gating function is a vector with the shape of $1 \times K$, where $K$ represents the number of bit-width options; we set $K = 4$ in our experiments. Additionally, we follow (Wang et al., 2018) in making the training of the gating functions differentiable via straight-through estimators (Bengio et al., 2013). After building the gating functions following the design described above, we surprisingly find that such a simple design can effectively capture the dynamic patterns with very low computational overhead. In our design, it is less than 0.07% of the GCN model's cost in terms of FLOPs, while the gating functions in CNNs can cause an overhead of as large as 12.5% (↑179×) (Wang et al., 2020). 3.4 D2-GCN: TRAINING PIPELINE DESIGN Learning objective. The learning objective of D2-GCN, $L_{D^2\text{-}GCN}(W, W_G)$, can be formulated as: $L_{D^2\text{-}GCN}(W, W_G) = L_{GCN}(W, W_G) + \alpha L_{comp}$ (6), where $W = \{w_0, w_1, w_2, \ldots\}$ is the total set of model weights, $W_G = \{w_{g^n_0}, w_{g^e_0}, w_{g^b_0}, w_{g^n_1}, \ldots\}$ is the total set of the gating functions' weights, $L_{GCN}$ is the commonly adopted loss function of graph-based learning tasks such as node classification (Kipf & Welling, 2016) or graph classification (Hu et al., 2020), $\alpha$ is a trade-off parameter that balances task performance and efficiency, and $L_{comp}$ is the computational cost determined by the gating functions. If we use FLOPs as its metric, it can be written as: $L_{comp} = \frac{\sum_{l=1}^{L}\left(\|\hat{A}^{ge}_l\|_0\, H \sum_{k=1}^{K}\frac{\|g^{b,k}_l\|_0 B_k}{32\, n} + \|g^n_l\|_0\, H^2 \sum_{k=1}^{K}\frac{\|g^{b,k}_l\|_0}{n}\left(\frac{B_k}{32}\right)^2\right)}{\sum_{l=1}^{L}(mH + nH^2)}$ (7). The variables in Eq. 7 share the same definitions as those in Eqs. 1, 2, 3, 4, and 5, i.e.
, $n$ is the total number of nodes, $m$ is the total number of edges, $H$ is the feature dimension of each node, $\hat{A}^{ge}_l$ is the adjacency matrix after enabling the gating function for edge-wise skipping in layer $l$, $g^{b,k}_l$ is the $k$-th entry of $g^b_l$, which is the output of the bit-wise skipping's gating function, and $g^n_l$ represents the output of the gating function for node-wise skipping. It is worth noting that Eq. 7 also indicates how our data-dependent dynamic skipping techniques reduce the inference cost. Specifically, the node-wise skipping squeezes the cost by reducing the number of nodes ($n$) to $\|g^n_l\|_0 < n$. The edge-wise skipping shrinks the number of edges ($m$) to $\|\hat{A}^{ge}_l\|_0 < m$ to reduce the inference cost. The bit-wise skipping reduces the cost of each matrix multiplication/addition by using a smaller bit-width instead of full precision (32 bits), which can be regarded as further multiplying the inference cost by a factor (i.e., $\sum_{k=1}^{K}\frac{\|g^{b,k}_l\|_0 B_k}{32\, n} < 1$ and $\sum_{k=1}^{K}\frac{\|g^{b,k}_l\|_0}{n}\left(\frac{B_k}{32}\right)^2 < 1$). Three-stage training pipeline. Jointly training the GCN and gating functions from scratch could lead to both lower task performance and inferior gating decisions, since gating functions with random initializations in the early training stages may harm the learning process via improper gating strategies. Therefore, we propose a three-stage training pipeline to stabilize the training of D2-GCN. Stage 1: We pretrain the GCN model with the gating functions fixed and unused. We find this step indispensable for a decent D2-GCN design, since the gating functions can hardly learn a meaningful gating strategy on top of an under-performing GCN model. Stage 2: We fix the GCN model and train only the gating functions to maximize the task performance by setting the trade-off parameter $\alpha$ in Eq. 6 to 0.
Since randomly initialized gating functions may generate improper gating strategies which may deteriorate the pretrained GCN ’ s performance , this step contributes to generate a decent initialization for the gating functions ’ weights . Stage 3 : We jointly train the GCN model and gating functions based on Eq . 6 to optimize both the task performance and computational cost . After this step , the D2-GCN trained model is ready to be delivered and deployed onto the target platform . | This paper proposes a Data-Dependent GCN framework (D$^2$-GCN) that integrates data-dependent dynamic skipping at multiple granularities. D$^2$-GCN is achieved by identifying the importance of node features via a low-cost indicator and thus is simple and generally applicable to various graph-based learning tasks. Experiments certify the effectiveness and efficiency of D$^2$-GCN. | SP:527729d7c95795a8d9a858f6d4a726bbbddc48b5 |
A Neural Tangent Kernel Perspective of Infinite Tree Ensembles | 1 INTRODUCTION . Tree ensembles and neural networks are powerful machine learning models used in various realworld applications . A soft tree ensemble is one of the variants of tree ensemble models that inherits characteristics of neural networks . Instead of using a greedy method ( Quinlan , 1986 ; Breiman et al. , 1984 ) for searching splitting rules , the soft tree makes the splitting rules soft and updates the entire model ’ s parameters simultaneously using the gradient method . Soft tree ensemble models are known to have high empirical performance ( Kontschieder et al. , 2015 ; Popov et al. , 2020 ; Hazimeh et al. , 2020 ) , especially for tabular datasets . Besides accuracy , there are many additional reasons why one should formulate trees in a soft manner . For example , unlike hard decision trees , the model can be updated sequentially ( Ke et al. , 2019 ) and trained in combination with pre-training ( Arik & Pfister , 2019 ) , resulting in favorable characteristics in terms of real-world continuous service deployment . Their model interpretability induced by the hierarchical splitting structure has also attracted much attention ( Frosst & Hinton , 2017 ; Wan et al. , 2021 ; Tanno et al. , 2019 ) . In addition , the idea of the soft tree is implicitly used in many different places ; for example , the process of allocating data to the appropriate leaves can be interpreted as a special case of Mixture-of-Experts ( Jordan & Jacobs , 1993 ; Shazeer et al. , 2017 ; Lepikhin et al. , 2021 ) , a technique for balancing computational complexity and prediction performance . Although various techniques have been proposed to train trees , the theoretical validity of such techniques is not well understood at sufficient depth . Examples of the practical technique include constraints on individual trees using parameter sharing ( Popov et al. 
, 2020 ) , adjusting the hardness of the splitting operation ( Frosst & Hinton , 2017 ; Hazimeh et al. , 2020 ) , and the use of overparameterization ( Belkin et al. , 2019 ; Karthikeyan et al. , 2021 ) . To better understand the training of tree ensemble models , we focus on the Neural Tangent Kernel ( NTK ) ( Jacot et al. , 2018 ) , a powerful tool that has been successfully applied to various neural network models with infinite hidden layer nodes . Every model architecture is known to produce a distinct NTK . Not only for the multi-layer perceptron ( MLP ) , many studies have been performed across various models , such as for Convolutional Neural Networks ( CNTK ) ( Arora et al. , 2019 ; Li et al. , 2019 ) , Graph Neural Networks ( GNTK ) ( Du et al. , 2019b ) , and Recurrent Neural Networks ( RNTK ) ( Alemohammad et al. , 2021 ) . The NTK theory is often used in the context of overparameterization of neural networks . In response to recent trends , overparameterization is also a subject of interest for tree ensembles ( Belkin et al. , 2019 ; Karthikeyan et al. , 2021 ) . Although a number of findings have been obtained using the NTK , they are mainly for typical neural networks , and it is still not obvious how to apply the NTK theory to the tree models . In this paper , by considering the limit of infinitely many trees , we introduce and study the neural tangent kernel for tree ensembles , called the Tree Neural Tangent Kernel ( TNTK ) , which provides new insights into the behavior of the ensemble of soft trees . The goal of this research is to derive the kernel that characterizes the training behavior of soft tree ensembles , and to obtain theoretical support for the empirical techniques . Our contributions are summarized as follows : • First extension of the NTK concept to the tree ensemble models . We derive the analytical form for the TNTK at initialization induced by infinitely many complete binary trees with arbitrary depth . ( Section 4.1.1 ) . 
We also prove that the TNTK remains constant during the training of infinite soft trees . This property allows us to analyze the behavior by kernel regression and discuss global convergence of training using the positive definiteness of the TNTK ( Section 4.1.2 , 4.1.3 ) . • Equivalence of the oblivious tree ensemble models . We show the TNTK induced by the oblivious tree structure used in practical open-source libraries such as CatBoost ( Prokhorenkova et al. , 2018 ) and NODE ( Popov et al. , 2020 ) converges to the same TNTK induced by a non-oblivious one in the limit of infinite trees . This observation implicitly supports the good empirical performance of oblivious trees with parameter sharing between tree nodes ( Section 4.2.1 ) . • Nonlinearity by adjusting the tree splitting operation . Practically , various functions have been proposed to represent the tree splitting operation . The most basic function is sigmoid . We show that the TNTK is almost a linear kernel in the basic case , and when we adjust the splitting function hard , the TNTK becomes nonlinear ( Section 4.2.2 ) . • Degeneracy of the TNTK with deep trees . The TNTK associated with deep trees exhibits degeneracy : the TNTK values are almost identical for deep trees even if the inner products of inputs are different . As a result , poor performance in numerical experiments is observed with the TNTK induced by infinitely many deep trees . This result supports the fact that the depth of trees is usually not so large in practical situations ( Section 4.2.3 ) . • Comparison to the NTK induced by the MLP . We investigate the generalization performance of infinite tree ensembles by kernel regression with the TNTK on 90 real-world datasets . Although the MLP with infinite width has better prediction accuracy on average , the infinite tree ensemble performs better than the infinite width MLP in more than 30 % of the datasets . 
We also show that the TNTK is superior to the MLP-induced NTK in computational speed (Section 5). 2 BACKGROUND AND RELATED WORK. Our main focus in this paper is the soft tree and the neural tangent kernel. We briefly introduce and review them. 2.1 SOFT TREE. Based on Kontschieder et al. (2015), we formulate regression with soft trees. Figure 1 is a schematic image of an ensemble of $M$ soft trees. We define a data matrix $x \in \mathbb{R}^{N_0 \times N}$ for $N$ training samples $\{x_1, \ldots, x_N\}$ with $N_0$ features, and define tree-wise parameter matrices for internal nodes, $w_m \in \mathbb{R}^{N_0 \times \mathcal{N}}$, and leaf nodes, $\pi_m \in \mathbb{R}^{1 \times L}$, for each tree $m \in [M] = \{1, \ldots, M\}$, written as horizontal concatenations of column vectors: $x = (x_1, \ldots, x_N)$, $w_m = (w_{m,1}, \ldots, w_{m,\mathcal{N}})$, and $\pi_m = (\pi_{m,1}, \ldots, \pi_{m,L})$, where internal nodes (blue nodes in Figure 1) and leaf nodes (green nodes in Figure 1) are indexed from 1 to $\mathcal{N}$ and 1 to $L$, respectively. $\mathcal{N}$ and $L$ may change across trees in general, while we assume that they are fixed throughout the paper for simplicity. Unlike hard decision trees, we consider a model in which every single leaf node $\ell \in [L] = \{1, \ldots, L\}$ of a tree $m$ holds the probability that data will reach it. Therefore, the splitting operation at an intermediate node $n \in [\mathcal{N}] = \{1, \ldots, \mathcal{N}\}$ does not definitively decide splitting to the left or right. To provide an explicit form of the probabilistic tree splitting operation, we introduce the following binary relations that depend on the tree's structure: $\ell \swarrow n$ (resp. $n \searrow \ell$), which is true if a leaf $\ell$ belongs to the left (resp. right) subtree of a node $n$ and false otherwise.
We can now exploit $\mu_{m,\ell}(x_i, w_m): \mathbb{R}^{N_0} \times \mathbb{R}^{N_0 \times \mathcal{N}} \to [0, 1]$, a function that returns the probability that a sample $x_i$ reaches a leaf $\ell$ of the tree $m$, as follows: $\mu_{m,\ell}(x_i, w_m) = \prod_{n=1}^{\mathcal{N}} g_{m,n}(x_i, w_{m,n})^{\mathbb{1}_{\ell \swarrow n}} \left(1 - g_{m,n}(x_i, w_{m,n})\right)^{\mathbb{1}_{n \searrow \ell}}$, (1) where $\mathbb{1}_Q$ is an indicator function conditioned on the argument $Q$, i.e., $\mathbb{1}_{\mathrm{true}} = 1$ and $\mathbb{1}_{\mathrm{false}} = 0$, and $g_{m,n}: \mathbb{R}^{N_0} \times \mathbb{R}^{N_0} \to [0, 1]$ is a decision function at each internal node $n$ of a tree $m$. To approximate decision tree splitting, the output of the decision function $g_{m,n}$ should be between 0.0 and 1.0. If the output of a decision function takes only 0.0 or 1.0, the splitting operation is equivalent to the hard splitting used in typical decision trees. We define an explicit form of the decision function $g_{m,n}$ in Equation (5) in the next section. The prediction for each $x_i$ from a tree $m$ with nodes parameterized by $w_m$ and $\pi_m$ is given by $f_m(x_i, w_m, \pi_m) = \sum_{\ell=1}^{L} \pi_{m,\ell}\, \mu_{m,\ell}(x_i, w_m)$, (2) where $f_m: \mathbb{R}^{N_0} \times \mathbb{R}^{N_0 \times \mathcal{N}} \times \mathbb{R}^{1 \times L} \to \mathbb{R}$, and $\pi_{m,\ell}$ denotes the response of a leaf $\ell$ of the tree $m$. This formulation means that the prediction output is the average of the leaf values $\pi_{m,\ell}$ weighted by $\mu_{m,\ell}(x_i, w_m)$, the probability of assigning the sample $x_i$ to the leaf $\ell$. If $\mu_{m,\ell}(x_i, w_m)$ takes only 1.0 for one leaf and 0.0 for the other leaves, the behavior is equivalent to a typical decision tree prediction. In this model, $w_m$ and $\pi_m$ are updated during training with a gradient method. While many empirical successes have been reported, theoretical analysis of soft tree ensemble models has not been sufficiently developed. 2.2 NEURAL TANGENT KERNEL.
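Before moving on, a depth-1 instance of Eqs. 1 and 2 (one internal node, two leaves) can be sketched as follows; the sigmoid decision function is an illustrative placeholder, since the paper's scaled error function only appears later in its Eq. 5:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_tree_predict(x_i, w, pi):
    """Prediction of one depth-1 soft tree (1 internal node, 2 leaves).

    Eq. 1 gives the leaf-reaching probabilities mu = [g, 1 - g]; Eq. 2
    averages the leaf values pi weighted by mu.
    """
    g = sigmoid(w @ x_i)           # probability of routing to the left leaf
    mu = np.array([g, 1.0 - g])    # Eq. 1 for the two leaves
    return float(pi @ mu)          # Eq. 2

x_i = np.array([0.5, -1.0])        # one sample with N_0 = 2 features
w = np.array([1.0, 2.0])           # internal-node weights
pi = np.array([3.0, -3.0])         # leaf responses
print(soft_tree_predict(x_i, w, pi))
```

If `g` were forced to exactly 0 or 1, the prediction would collapse to a single leaf value, recovering the hard decision tree described in the text.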
Given $N$ samples $x \in \mathbb{R}^{N_0 \times N}$, the NTK induced by any model architecture at a training time $\tau$ is formulated as a matrix $\hat{H}^*_\tau \in \mathbb{R}^{N \times N}$, in which each $(i, j) \in [N] \times [N]$ component is defined as $[\hat{H}^*_\tau]_{ij} := \hat{\Theta}^*_\tau(x_i, x_j) := \left\langle \frac{\partial f_{\mathrm{arbitrary}}(x_i, \theta_\tau)}{\partial \theta_\tau}, \frac{\partial f_{\mathrm{arbitrary}}(x_j, \theta_\tau)}{\partial \theta_\tau} \right\rangle$, (3) where $\langle \cdot, \cdot \rangle$ denotes the inner product and $\theta_\tau \in \mathbb{R}^P$ is a concatenated vector of all the $P$ trainable model parameters at $\tau$. An asterisk "$*$" indicates that the model is arbitrary. The model function $f_{\mathrm{arbitrary}}: \mathbb{R}^{N_0} \times \mathbb{R}^P \to \mathbb{R}$ used in Equation (3) is applicable to a variety of model structures. For the soft tree ensembles introduced in Section 2.1, the NTK is formulated as $\sum_{m=1}^{M} \sum_{n=1}^{\mathcal{N}} \left\langle \frac{\partial f(x_i, w, \pi)}{\partial w_{m,n}}, \frac{\partial f(x_j, w, \pi)}{\partial w_{m,n}} \right\rangle + \sum_{m=1}^{M} \sum_{\ell=1}^{L} \left\langle \frac{\partial f(x_i, w, \pi)}{\partial \pi_{m,\ell}}, \frac{\partial f(x_j, w, \pi)}{\partial \pi_{m,\ell}} \right\rangle$. In the limit of infinite width with a proper parameter scaling, a variety of properties have been discovered for the NTK induced by the MLP. For example, Jacot et al. (2018) showed the convergence of $\hat{\Theta}^{\mathrm{MLP}}_0(x_i, x_j)$, which can vary with respect to the parameters, to the unique limiting kernel $\Theta(x_i, x_j)$ at initialization in probability. Moreover, they also showed that the limiting kernel does not change during training in probability: $\lim_{\mathrm{width}\to\infty} \hat{\Theta}^{\mathrm{MLP}}_\tau(x_i, x_j) = \lim_{\mathrm{width}\to\infty} \hat{\Theta}^{\mathrm{MLP}}_0(x_i, x_j) =: \Theta^{\mathrm{MLP}}(x_i, x_j)$. (4) This property helps in the analytical understanding of the model behavior. For example, with the squared loss and an infinitesimal step size with learning rate $\eta$, the training dynamics of gradient flow in function space coincide with kernel ridge-less regression with the limiting NTK. Such a property gives us a data-dependent generalization bound (Bartlett & Mendelson, 2003) related to the NTK and the prediction targets. In addition, if the NTK is positive definite, training can achieve global convergence (Du et al., 2019a; Jacot et al., 2018).
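The kernel entry of Eq. 3 can be estimated for any scalar-output model by finite differences on the parameter vector; the toy linear model below is an illustrative assumption whose empirical NTK is simply the input inner product:

```python
import numpy as np

def empirical_ntk(f, theta, x1, x2, eps=1e-6):
    """Finite-difference estimate of Eq. 3: <df(x1)/dtheta, df(x2)/dtheta>."""
    def grad(x):
        g = np.zeros_like(theta)
        for p in range(theta.size):
            e = np.zeros_like(theta)
            e[p] = eps                      # perturb one parameter at a time
            g[p] = (f(x, theta + e) - f(x, theta - e)) / (2 * eps)
        return g
    return float(grad(x1) @ grad(x2))

# toy linear model f(x, theta) = theta . x, whose NTK is exactly x1 . x2
f = lambda x, th: float(th @ x)
rng = np.random.default_rng(0)
theta = rng.standard_normal(3)
x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
print(np.isclose(empirical_ntk(f, theta, x1, x2), x1 @ x2, atol=1e-4))  # True
```

Replacing `f` with the soft tree ensemble of Section 2.1 would yield the finite-width TNTK at the given parameters; the paper's analysis concerns the limit of this quantity as the number of trees grows.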
Although a number of findings have been obtained using the NTK, they are mainly for typical neural networks such as the MLP and ResNet, and the NTK theory has not been applied to tree models yet. The NTK theory is often used in the context of overparameterization, a subject of interest not only for neural networks but also for tree models (Belkin et al., 2019; Karthikeyan et al., 2021; Tang et al., 2018). 3 SETUP We train the model parameters $w$ and $\pi$ to minimize the squared loss using the gradient method, where $w = (w_1, \ldots, w_M)$ and $\pi = (\pi_1, \ldots, \pi_M)$. The tree structure is fixed during training. In order to use a known closed-form solution of the NTK (Williams, 1996; Lee et al., 2019), we use a scaled error function $\sigma : \mathbb{R} \to (0, 1)$, resulting in the following decision function:
$$g_{m,n}(x_i, w_{m,n}) = \sigma(w_{m,n}^{\top} x_i) := \frac{1}{2} \operatorname{erf}\!\left(\alpha\, w_{m,n}^{\top} x_i\right) + \frac{1}{2}, \qquad (5)$$
where $\operatorname{erf}(p) = \frac{2}{\sqrt{\pi}} \int_{0}^{p} e^{-t^2}\, dt$ for $p \in \mathbb{R}$. This scaled error function approximates the commonly used sigmoid function. Since the bias term for the input of $\sigma$ can be expressed inside $w$ by adding an element that takes a fixed constant value for all inputs of the soft trees $x$, we do not consider the bias for simplicity. The scaling factor $\alpha$ was introduced by Frosst & Hinton (2017) to avoid too soft splitting. Figure 2 shows that the decision function becomes harder as $\alpha$ increases (from blue to red), and in the limit $\alpha \to \infty$ it coincides with the hard splitting used in typical decision trees. When aggregating the output of multiple trees, we divide the sum of the tree outputs by the square root of the number of trees:
$$f(x_i, w, \pi) = \frac{1}{\sqrt{M}} \sum_{m=1}^{M} f_m(x_i, w_m, \pi_m). \qquad (6)$$
This $1/\sqrt{M}$ scaling is known to be essential in the existing NTK literature to use the weak law of large numbers (Jacot et al., 2018). On top of Equation (6), we initialize each of the model parameters $w_{m,n}$ and $\pi_{m,\ell}$ with zero-mean i.i.d. Gaussians with unit variances.
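A minimal sketch of the building blocks defined in this section: the scaled error decision function of Equation (5), the $1/\sqrt{M}$ aggregation of Equation (6), and the NTK initialization. The function and variable names are our own.

```python
import math
import numpy as np

def decision_fn(w_n, x, alpha=2.0):
    """Eq. (5): sigma(w^T x) = erf(alpha * w^T x) / 2 + 1/2.
    Larger alpha gives a harder split; alpha -> inf recovers a hard stump."""
    return 0.5 * math.erf(alpha * float(w_n @ x)) + 0.5

def ensemble_predict(x, W, Pi, tree_fn):
    """Eq. (6): f(x, w, pi) = (1 / sqrt(M)) * sum_m f_m(x, w_m, pi_m)."""
    M = len(W)
    return sum(tree_fn(x, W[m], Pi[m]) for m in range(M)) / math.sqrt(M)

def ntk_init(M, n_internal, n_leaves, n_features, seed=0):
    """NTK initialization: every parameter drawn i.i.d. from N(0, 1)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((M, n_internal, n_features))
    Pi = rng.standard_normal((M, n_leaves))
    return W, Pi
```

Note that with the $1/\sqrt{M}$ scaling, $M$ identical trees each outputting 1 produce an ensemble output of $\sqrt{M}$, not $M$; this is the scaling that lets the law of large numbers apply in the infinite-ensemble limit.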
We refer to such a parameterization as the NTK initialization. In this paper, we consider a model in which all $M$ trees have the same complete binary tree structure, a common setting for soft tree ensembles (Popov et al., 2020; Kontschieder et al., 2015; Hazimeh et al., 2020). | The paper presents an extension of a neural network analytic tool called Neural Tangent Kernels to its use for decision trees and forests. The focus is a previously proposed notion of soft trees. A few analytical results are presented about the properties of the extended notion, called Tree Neural Tangent Kernel, such as the existence of some limiting deterministic form, how alternative probabilistic tree ensembles would have their kernels converging to this deterministic form, that this limiting kernel is positive definite, and that the kernel remains stable during training, equivalence of the kernels for one-level tree ensembles to those of two-layer perceptrons, etc. Some support for the analytical claims is provided by simulation experiments. | SP:f0c375422b5f1c2418652b71c20ceb6eb35f5b96 |
A Neural Tangent Kernel Perspective of Infinite Tree Ensembles | 1 INTRODUCTION . Tree ensembles and neural networks are powerful machine learning models used in various real-world applications. A soft tree ensemble is a variant of tree ensemble models that inherits characteristics of neural networks. Instead of using a greedy method (Quinlan, 1986; Breiman et al., 1984) for searching splitting rules, the soft tree makes the splitting rules soft and updates the entire model's parameters simultaneously using the gradient method. Soft tree ensemble models are known to have high empirical performance (Kontschieder et al., 2015; Popov et al., 2020; Hazimeh et al., 2020), especially on tabular datasets. Besides accuracy, there are many additional reasons why one should formulate trees in a soft manner. For example, unlike hard decision trees, the model can be updated sequentially (Ke et al., 2019) and trained in combination with pre-training (Arik & Pfister, 2019), resulting in favorable characteristics for real-world continuous service deployment. Their model interpretability, induced by the hierarchical splitting structure, has also attracted much attention (Frosst & Hinton, 2017; Wan et al., 2021; Tanno et al., 2019). In addition, the idea of the soft tree is implicitly used in many different places; for example, the process of allocating data to the appropriate leaves can be interpreted as a special case of Mixture-of-Experts (Jordan & Jacobs, 1993; Shazeer et al., 2017; Lepikhin et al., 2021), a technique for balancing computational complexity and prediction performance. Although various techniques have been proposed to train trees, the theoretical validity of such techniques is not understood at sufficient depth. Examples of practical techniques include constraints on individual trees using parameter sharing (Popov et al.
, 2020), adjusting the hardness of the splitting operation (Frosst & Hinton, 2017; Hazimeh et al., 2020), and the use of overparameterization (Belkin et al., 2019; Karthikeyan et al., 2021). To better understand the training of tree ensemble models, we focus on the Neural Tangent Kernel (NTK) (Jacot et al., 2018), a powerful tool that has been successfully applied to various neural network models with infinitely many hidden layer nodes. Every model architecture is known to produce a distinct NTK. Beyond the multi-layer perceptron (MLP), many studies have been performed across various models, such as Convolutional Neural Networks (CNTK) (Arora et al., 2019; Li et al., 2019), Graph Neural Networks (GNTK) (Du et al., 2019b), and Recurrent Neural Networks (RNTK) (Alemohammad et al., 2021). The NTK theory is often used in the context of overparameterization of neural networks. In response to recent trends, overparameterization is also a subject of interest for tree ensembles (Belkin et al., 2019; Karthikeyan et al., 2021). Although a number of findings have been obtained using the NTK, they are mainly for typical neural networks, and it is still not obvious how to apply the NTK theory to tree models. In this paper, by considering the limit of infinitely many trees, we introduce and study the neural tangent kernel for tree ensembles, called the Tree Neural Tangent Kernel (TNTK), which provides new insights into the behavior of ensembles of soft trees. The goal of this research is to derive the kernel that characterizes the training behavior of soft tree ensembles, and to obtain theoretical support for the empirical techniques. Our contributions are summarized as follows:
• First extension of the NTK concept to tree ensemble models. We derive the analytical form of the TNTK at initialization induced by infinitely many complete binary trees with arbitrary depth (Section 4.1.1).
We also prove that the TNTK remains constant during the training of infinitely many soft trees. This property allows us to analyze the behavior by kernel regression and to discuss the global convergence of training using the positive definiteness of the TNTK (Sections 4.1.2, 4.1.3).
• Equivalence of the oblivious tree ensemble models. We show that the TNTK induced by the oblivious tree structure used in practical open-source libraries such as CatBoost (Prokhorenkova et al., 2018) and NODE (Popov et al., 2020) converges to the same TNTK induced by a non-oblivious one in the limit of infinite trees. This observation implicitly supports the good empirical performance of oblivious trees with parameter sharing between tree nodes (Section 4.2.1).
• Nonlinearity by adjusting the tree splitting operation. Practically, various functions have been proposed to represent the tree splitting operation; the most basic is the sigmoid. We show that the TNTK is almost a linear kernel in this basic case, and that when we make the splitting function harder, the TNTK becomes nonlinear (Section 4.2.2).
• Degeneracy of the TNTK with deep trees. The TNTK associated with deep trees exhibits degeneracy: the TNTK values are almost identical for deep trees even if the inner products of the inputs are different. As a result, poor performance is observed in numerical experiments with the TNTK induced by infinitely many deep trees. This result supports the fact that the depth of trees is usually not so large in practical situations (Section 4.2.3).
• Comparison to the NTK induced by the MLP. We investigate the generalization performance of infinite tree ensembles by kernel regression with the TNTK on 90 real-world datasets. Although the MLP with infinite width has better prediction accuracy on average, the infinite tree ensemble performs better than the infinite-width MLP in more than 30% of the datasets.
We also show that the TNTK is superior to the MLP-induced NTK in computational speed (Section 5). 2 BACKGROUND AND RELATED WORK . Our main focus in this paper is the soft tree and the neural tangent kernel. We briefly introduce and review them. 2.1 SOFT TREE . Following Kontschieder et al. (2015), we formulate regression by soft trees. Figure 1 is a schematic image of an ensemble of $M$ soft trees. We define a data matrix $x \in \mathbb{R}^{N_0 \times N}$ for $N$ training samples $\{x_1, \ldots, x_N\}$ with $N_0$ features, and define tree-wise parameter matrices for internal nodes $w_m \in \mathbb{R}^{N_0 \times N}$ and leaf nodes $\pi_m \in \mathbb{R}^{1 \times L}$ for each tree $m \in [M] = \{1, \ldots, M\}$ as
$$x = \begin{pmatrix} | & & | \\ x_1 & \cdots & x_N \\ | & & | \end{pmatrix}, \quad w_m = \begin{pmatrix} | & & | \\ w_{m,1} & \cdots & w_{m,N} \\ | & & | \end{pmatrix}, \quad \pi_m = (\pi_{m,1}, \ldots, \pi_{m,L}),$$
where internal nodes (blue nodes in Figure 1) and leaf nodes (green nodes in Figure 1) are indexed from 1 to $N$ and 1 to $L$, respectively. $N$ and $L$ may change across trees in general, while we assume that they are always fixed for simplicity throughout the paper. We also write the horizontal concatenation of (column) vectors as $x = (x_1, \ldots, x_N) \in \mathbb{R}^{N_0 \times N}$ and $w_m = (w_{m,1}, \ldots, w_{m,N}) \in \mathbb{R}^{N_0 \times N}$. Unlike hard decision trees, we consider a model in which every single leaf node $\ell \in [L] = \{1, \ldots, L\}$ of a tree $m$ holds the probability that data will reach it. Therefore, the splitting operation at an internal node $n \in [N] = \{1, \ldots, N\}$ does not definitively decide splitting to the left or right. To provide an explicit form of the probabilistic tree splitting operation, we introduce the following binary relations that depend on the tree's structure: $\ell \swarrow n$ (resp. $n \searrow \ell$), which is true if a leaf $\ell$ belongs to the left (resp. right) subtree of a node $n$ and false otherwise.
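The binary relations $\ell \swarrow n$ and $n \searrow \ell$ are fixed by the tree structure and can be enumerated explicitly. A minimal sketch for a complete binary tree, assuming heap indexing (children of internal node $n$ are $2n$ and $2n+1$), which is our own convention:

```python
def left_right_relations(depth):
    """Enumerate the relations for a complete binary tree of the given depth.

    left[l][n]  is True iff leaf l lies in the left subtree of internal node n
    right[l][n] is True iff leaf l lies in the right subtree of internal node n
    (leaves and internal nodes both 0-indexed here).
    """
    n_internal = 2 ** depth - 1
    n_leaves = 2 ** depth
    left = [[False] * n_internal for _ in range(n_leaves)]
    right = [[False] * n_internal for _ in range(n_leaves)]
    for leaf in range(n_leaves):
        node = n_internal + leaf + 1          # 1-based heap index of the leaf
        while node > 1:                       # walk from the leaf to the root
            parent = node // 2
            (left if node % 2 == 0 else right)[leaf][parent - 1] = True
            node = parent
    return left, right
```

Exactly one of the two relations holds for each (leaf, ancestor) pair, and each leaf has `depth` ancestors, which is why the product in Equation (1) has exactly `depth` non-trivial factors per leaf.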
We can now exploit $\mu_{m,\ell}(x_i, w_m) : \mathbb{R}^{N_0} \times \mathbb{R}^{N_0 \times N} \to [0, 1]$, a function that returns the probability that a sample $x_i$ will reach a leaf $\ell$ of the tree $m$, as follows:
$$\mu_{m,\ell}(x_i, w_m) = \prod_{n=1}^{N} g_{m,n}(x_i, w_{m,n})^{\mathbb{1}_{\ell \swarrow n}} \bigl(1 - g_{m,n}(x_i, w_{m,n})\bigr)^{\mathbb{1}_{n \searrow \ell}}, \qquad (1)$$
where $\mathbb{1}_{Q}$ is an indicator function conditioned on the argument $Q$, i.e., $\mathbb{1}_{\mathrm{true}} = 1$ and $\mathbb{1}_{\mathrm{false}} = 0$, and $g_{m,n} : \mathbb{R}^{N_0} \times \mathbb{R}^{N_0} \to [0, 1]$ is a decision function at each internal node $n$ of a tree $m$. To approximate decision tree splitting, the output of the decision function $g_{m,n}$ should lie between 0.0 and 1.0. If the output of a decision function takes only 0.0 or 1.0, the splitting operation is equivalent to the hard splitting used in typical decision trees. We will define an explicit form of the decision function $g_{m,n}$ in Equation (5) in the next section. The prediction for each $x_i$ from a tree $m$ with nodes parameterized by $w_m$ and $\pi_m$ is given by
$$f_m(x_i, w_m, \pi_m) = \sum_{\ell=1}^{L} \pi_{m,\ell}\, \mu_{m,\ell}(x_i, w_m), \qquad (2)$$
where $f_m : \mathbb{R}^{N_0} \times \mathbb{R}^{N_0 \times N} \times \mathbb{R}^{1 \times L} \to \mathbb{R}$, and $\pi_{m,\ell}$ denotes the response of a leaf $\ell$ of the tree $m$. This formulation means that the prediction output is the average of the leaf values $\pi_{m,\ell}$ weighted by $\mu_{m,\ell}(x_i, w_m)$, the probability of assigning the sample $x_i$ to the leaf $\ell$. If $\mu_{m,\ell}(x_i, w_m)$ takes only 1.0 for one leaf and 0.0 for the other leaves, the behavior is equivalent to a typical decision tree prediction. In this model, $w_m$ and $\pi_m$ are updated during training with a gradient method. While many empirical successes have been reported, theoretical analysis of soft tree ensemble models has not been sufficiently developed. 2.2 NEURAL TANGENT KERNEL .
Given $N$ samples $x \in \mathbb{R}^{N_0 \times N}$, the NTK induced by any model architecture at a training time $\tau$ is formulated as a matrix $\hat{H}^{*}_{\tau} \in \mathbb{R}^{N \times N}$, in which each $(i, j) \in [N] \times [N]$ component is defined as
$$[\hat{H}^{*}_{\tau}]_{ij} := \hat{\Theta}^{*}_{\tau}(x_i, x_j) := \left\langle \frac{\partial f_{\mathrm{arbitrary}}(x_i, \theta_\tau)}{\partial \theta_\tau}, \frac{\partial f_{\mathrm{arbitrary}}(x_j, \theta_\tau)}{\partial \theta_\tau} \right\rangle, \qquad (3)$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product and $\theta_\tau \in \mathbb{R}^{P}$ is a concatenated vector of all the $P$ trainable model parameters at $\tau$. An asterisk "$*$" indicates that the model is arbitrary. The model function $f_{\mathrm{arbitrary}} : \mathbb{R}^{N_0} \times \mathbb{R}^{P} \to \mathbb{R}$ used in Equation (3) is expected to be applicable to a variety of model structures. For the soft tree ensembles introduced in Section 2.1, the NTK is formulated as
$$\sum_{m=1}^{M} \sum_{n=1}^{N} \left\langle \frac{\partial f(x_i, w, \pi)}{\partial w_{m,n}}, \frac{\partial f(x_j, w, \pi)}{\partial w_{m,n}} \right\rangle + \sum_{m=1}^{M} \sum_{\ell=1}^{L} \left\langle \frac{\partial f(x_i, w, \pi)}{\partial \pi_{m,\ell}}, \frac{\partial f(x_j, w, \pi)}{\partial \pi_{m,\ell}} \right\rangle.$$
In the limit of infinite width with a proper parameter scaling, a variety of properties have been discovered for the NTK induced by the MLP. For example, Jacot et al. (2018) showed the convergence of $\hat{\Theta}^{\mathrm{MLP}}_{0}(x_i, x_j)$, which can vary with respect to parameters, to the unique limiting kernel $\Theta(x_i, x_j)$ at initialization in probability. Moreover, they also showed that the limiting kernel does not change during training in probability:
$$\lim_{\mathrm{width} \to \infty} \hat{\Theta}^{\mathrm{MLP}}_{\tau}(x_i, x_j) = \lim_{\mathrm{width} \to \infty} \hat{\Theta}^{\mathrm{MLP}}_{0}(x_i, x_j) =: \Theta^{\mathrm{MLP}}(x_i, x_j). \qquad (4)$$
This property helps in the analytical understanding of the model behavior. For example, with the squared loss and an infinitesimal step size with learning rate $\eta$, the training dynamics of gradient flow in function space coincide with kernel ridge-less regression with the limiting NTK. Such a property gives us a data-dependent generalization bound (Bartlett & Mendelson, 2003) related to the NTK and the prediction targets. In addition, if the NTK is positive definite, the training can achieve global convergence (Du et al., 2019a; Jacot et al., 2018).
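The correspondence with kernel ridge-less regression can be made concrete: once the limiting kernel is fixed, the fully trained infinite-width predictor has the closed form $f(x) = \Theta(x, X)\,\Theta(X, X)^{-1} y$. A minimal sketch (the `ridge` parameter is our own addition for numerical stability; ridge-less regression is `ridge=0`):

```python
import numpy as np

def ntk_regression_predict(K_train, K_test_train, y_train, ridge=0.0):
    """Kernel (ridge-less when ridge=0) regression with a fixed NTK:
    predictions = Theta(X_test, X_train) @ Theta(X_train, X_train)^{-1} @ y,
    the function-space limit of gradient-flow training on squared loss.

    K_train      : (n, n) kernel matrix on the training inputs
    K_test_train : (m, n) cross-kernel between test and training inputs
    y_train      : (n,) training targets
    """
    n = K_train.shape[0]
    alpha = np.linalg.solve(K_train + ridge * np.eye(n), y_train)
    return K_test_train @ alpha
```

By construction, when the test points are the training points themselves the ridge-less predictor interpolates the training targets exactly (assuming the kernel matrix is positive definite and hence invertible).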
Although a number of findings have been obtained using the NTK, they are mainly for typical neural networks such as the MLP and ResNet, and the NTK theory has not been applied to tree models yet. The NTK theory is often used in the context of overparameterization, a subject of interest not only for neural networks but also for tree models (Belkin et al., 2019; Karthikeyan et al., 2021; Tang et al., 2018). 3 SETUP We train the model parameters $w$ and $\pi$ to minimize the squared loss using the gradient method, where $w = (w_1, \ldots, w_M)$ and $\pi = (\pi_1, \ldots, \pi_M)$. The tree structure is fixed during training. In order to use a known closed-form solution of the NTK (Williams, 1996; Lee et al., 2019), we use a scaled error function $\sigma : \mathbb{R} \to (0, 1)$, resulting in the following decision function:
$$g_{m,n}(x_i, w_{m,n}) = \sigma(w_{m,n}^{\top} x_i) := \frac{1}{2} \operatorname{erf}\!\left(\alpha\, w_{m,n}^{\top} x_i\right) + \frac{1}{2}, \qquad (5)$$
where $\operatorname{erf}(p) = \frac{2}{\sqrt{\pi}} \int_{0}^{p} e^{-t^2}\, dt$ for $p \in \mathbb{R}$. This scaled error function approximates the commonly used sigmoid function. Since the bias term for the input of $\sigma$ can be expressed inside $w$ by adding an element that takes a fixed constant value for all inputs of the soft trees $x$, we do not consider the bias for simplicity. The scaling factor $\alpha$ was introduced by Frosst & Hinton (2017) to avoid too soft splitting. Figure 2 shows that the decision function becomes harder as $\alpha$ increases (from blue to red), and in the limit $\alpha \to \infty$ it coincides with the hard splitting used in typical decision trees. When aggregating the output of multiple trees, we divide the sum of the tree outputs by the square root of the number of trees:
$$f(x_i, w, \pi) = \frac{1}{\sqrt{M}} \sum_{m=1}^{M} f_m(x_i, w_m, \pi_m). \qquad (6)$$
This $1/\sqrt{M}$ scaling is known to be essential in the existing NTK literature to use the weak law of large numbers (Jacot et al., 2018). On top of Equation (6), we initialize each of the model parameters $w_{m,n}$ and $\pi_{m,\ell}$ with zero-mean i.i.d. Gaussians with unit variances.
We refer to such a parameterization as the NTK initialization. In this paper, we consider a model in which all $M$ trees have the same complete binary tree structure, a common setting for soft tree ensembles (Popov et al., 2020; Kontschieder et al., 2015; Hazimeh et al., 2020). | The authors derive a neural tangent kernel for soft trees, and prove several properties of the kernel. These include the stability of the kernel in the large ensemble limit, applicability to oblivious tree ensembles, degeneracy with large tree depth, and comparison to NTK. They then evaluate the utility of the kernel on 90 UCI classification data sets using a kernel regression classifier. | SP:f0c375422b5f1c2418652b71c20ceb6eb35f5b96 |
A Neural Tangent Kernel Perspective of Infinite Tree Ensembles | 1 INTRODUCTION . Tree ensembles and neural networks are powerful machine learning models used in various real-world applications. A soft tree ensemble is a variant of tree ensemble models that inherits characteristics of neural networks. Instead of using a greedy method (Quinlan, 1986; Breiman et al., 1984) for searching splitting rules, the soft tree makes the splitting rules soft and updates the entire model's parameters simultaneously using the gradient method. Soft tree ensemble models are known to have high empirical performance (Kontschieder et al., 2015; Popov et al., 2020; Hazimeh et al., 2020), especially on tabular datasets. Besides accuracy, there are many additional reasons why one should formulate trees in a soft manner. For example, unlike hard decision trees, the model can be updated sequentially (Ke et al., 2019) and trained in combination with pre-training (Arik & Pfister, 2019), resulting in favorable characteristics for real-world continuous service deployment. Their model interpretability, induced by the hierarchical splitting structure, has also attracted much attention (Frosst & Hinton, 2017; Wan et al., 2021; Tanno et al., 2019). In addition, the idea of the soft tree is implicitly used in many different places; for example, the process of allocating data to the appropriate leaves can be interpreted as a special case of Mixture-of-Experts (Jordan & Jacobs, 1993; Shazeer et al., 2017; Lepikhin et al., 2021), a technique for balancing computational complexity and prediction performance. Although various techniques have been proposed to train trees, the theoretical validity of such techniques is not understood at sufficient depth. Examples of practical techniques include constraints on individual trees using parameter sharing (Popov et al.
, 2020), adjusting the hardness of the splitting operation (Frosst & Hinton, 2017; Hazimeh et al., 2020), and the use of overparameterization (Belkin et al., 2019; Karthikeyan et al., 2021). To better understand the training of tree ensemble models, we focus on the Neural Tangent Kernel (NTK) (Jacot et al., 2018), a powerful tool that has been successfully applied to various neural network models with infinitely many hidden layer nodes. Every model architecture is known to produce a distinct NTK. Beyond the multi-layer perceptron (MLP), many studies have been performed across various models, such as Convolutional Neural Networks (CNTK) (Arora et al., 2019; Li et al., 2019), Graph Neural Networks (GNTK) (Du et al., 2019b), and Recurrent Neural Networks (RNTK) (Alemohammad et al., 2021). The NTK theory is often used in the context of overparameterization of neural networks. In response to recent trends, overparameterization is also a subject of interest for tree ensembles (Belkin et al., 2019; Karthikeyan et al., 2021). Although a number of findings have been obtained using the NTK, they are mainly for typical neural networks, and it is still not obvious how to apply the NTK theory to tree models. In this paper, by considering the limit of infinitely many trees, we introduce and study the neural tangent kernel for tree ensembles, called the Tree Neural Tangent Kernel (TNTK), which provides new insights into the behavior of ensembles of soft trees. The goal of this research is to derive the kernel that characterizes the training behavior of soft tree ensembles, and to obtain theoretical support for the empirical techniques. Our contributions are summarized as follows:
• First extension of the NTK concept to tree ensemble models. We derive the analytical form of the TNTK at initialization induced by infinitely many complete binary trees with arbitrary depth (Section 4.1.1).
We also prove that the TNTK remains constant during the training of infinitely many soft trees. This property allows us to analyze the behavior by kernel regression and to discuss the global convergence of training using the positive definiteness of the TNTK (Sections 4.1.2, 4.1.3).
• Equivalence of the oblivious tree ensemble models. We show that the TNTK induced by the oblivious tree structure used in practical open-source libraries such as CatBoost (Prokhorenkova et al., 2018) and NODE (Popov et al., 2020) converges to the same TNTK induced by a non-oblivious one in the limit of infinite trees. This observation implicitly supports the good empirical performance of oblivious trees with parameter sharing between tree nodes (Section 4.2.1).
• Nonlinearity by adjusting the tree splitting operation. Practically, various functions have been proposed to represent the tree splitting operation; the most basic is the sigmoid. We show that the TNTK is almost a linear kernel in this basic case, and that when we make the splitting function harder, the TNTK becomes nonlinear (Section 4.2.2).
• Degeneracy of the TNTK with deep trees. The TNTK associated with deep trees exhibits degeneracy: the TNTK values are almost identical for deep trees even if the inner products of the inputs are different. As a result, poor performance is observed in numerical experiments with the TNTK induced by infinitely many deep trees. This result supports the fact that the depth of trees is usually not so large in practical situations (Section 4.2.3).
• Comparison to the NTK induced by the MLP. We investigate the generalization performance of infinite tree ensembles by kernel regression with the TNTK on 90 real-world datasets. Although the MLP with infinite width has better prediction accuracy on average, the infinite tree ensemble performs better than the infinite-width MLP in more than 30% of the datasets.
We also show that the TNTK is superior to the MLP-induced NTK in computational speed (Section 5). 2 BACKGROUND AND RELATED WORK . Our main focus in this paper is the soft tree and the neural tangent kernel. We briefly introduce and review them. 2.1 SOFT TREE . Following Kontschieder et al. (2015), we formulate regression by soft trees. Figure 1 is a schematic image of an ensemble of $M$ soft trees. We define a data matrix $x \in \mathbb{R}^{N_0 \times N}$ for $N$ training samples $\{x_1, \ldots, x_N\}$ with $N_0$ features, and define tree-wise parameter matrices for internal nodes $w_m \in \mathbb{R}^{N_0 \times N}$ and leaf nodes $\pi_m \in \mathbb{R}^{1 \times L}$ for each tree $m \in [M] = \{1, \ldots, M\}$ as
$$x = \begin{pmatrix} | & & | \\ x_1 & \cdots & x_N \\ | & & | \end{pmatrix}, \quad w_m = \begin{pmatrix} | & & | \\ w_{m,1} & \cdots & w_{m,N} \\ | & & | \end{pmatrix}, \quad \pi_m = (\pi_{m,1}, \ldots, \pi_{m,L}),$$
where internal nodes (blue nodes in Figure 1) and leaf nodes (green nodes in Figure 1) are indexed from 1 to $N$ and 1 to $L$, respectively. $N$ and $L$ may change across trees in general, while we assume that they are always fixed for simplicity throughout the paper. We also write the horizontal concatenation of (column) vectors as $x = (x_1, \ldots, x_N) \in \mathbb{R}^{N_0 \times N}$ and $w_m = (w_{m,1}, \ldots, w_{m,N}) \in \mathbb{R}^{N_0 \times N}$. Unlike hard decision trees, we consider a model in which every single leaf node $\ell \in [L] = \{1, \ldots, L\}$ of a tree $m$ holds the probability that data will reach it. Therefore, the splitting operation at an internal node $n \in [N] = \{1, \ldots, N\}$ does not definitively decide splitting to the left or right. To provide an explicit form of the probabilistic tree splitting operation, we introduce the following binary relations that depend on the tree's structure: $\ell \swarrow n$ (resp. $n \searrow \ell$), which is true if a leaf $\ell$ belongs to the left (resp. right) subtree of a node $n$ and false otherwise.
We can now exploit $\mu_{m,\ell}(x_i, w_m) : \mathbb{R}^{N_0} \times \mathbb{R}^{N_0 \times N} \to [0, 1]$, a function that returns the probability that a sample $x_i$ will reach a leaf $\ell$ of the tree $m$, as follows:
$$\mu_{m,\ell}(x_i, w_m) = \prod_{n=1}^{N} g_{m,n}(x_i, w_{m,n})^{\mathbb{1}_{\ell \swarrow n}} \bigl(1 - g_{m,n}(x_i, w_{m,n})\bigr)^{\mathbb{1}_{n \searrow \ell}}, \qquad (1)$$
where $\mathbb{1}_{Q}$ is an indicator function conditioned on the argument $Q$, i.e., $\mathbb{1}_{\mathrm{true}} = 1$ and $\mathbb{1}_{\mathrm{false}} = 0$, and $g_{m,n} : \mathbb{R}^{N_0} \times \mathbb{R}^{N_0} \to [0, 1]$ is a decision function at each internal node $n$ of a tree $m$. To approximate decision tree splitting, the output of the decision function $g_{m,n}$ should lie between 0.0 and 1.0. If the output of a decision function takes only 0.0 or 1.0, the splitting operation is equivalent to the hard splitting used in typical decision trees. We will define an explicit form of the decision function $g_{m,n}$ in Equation (5) in the next section. The prediction for each $x_i$ from a tree $m$ with nodes parameterized by $w_m$ and $\pi_m$ is given by
$$f_m(x_i, w_m, \pi_m) = \sum_{\ell=1}^{L} \pi_{m,\ell}\, \mu_{m,\ell}(x_i, w_m), \qquad (2)$$
where $f_m : \mathbb{R}^{N_0} \times \mathbb{R}^{N_0 \times N} \times \mathbb{R}^{1 \times L} \to \mathbb{R}$, and $\pi_{m,\ell}$ denotes the response of a leaf $\ell$ of the tree $m$. This formulation means that the prediction output is the average of the leaf values $\pi_{m,\ell}$ weighted by $\mu_{m,\ell}(x_i, w_m)$, the probability of assigning the sample $x_i$ to the leaf $\ell$. If $\mu_{m,\ell}(x_i, w_m)$ takes only 1.0 for one leaf and 0.0 for the other leaves, the behavior is equivalent to a typical decision tree prediction. In this model, $w_m$ and $\pi_m$ are updated during training with a gradient method. While many empirical successes have been reported, theoretical analysis of soft tree ensemble models has not been sufficiently developed. 2.2 NEURAL TANGENT KERNEL .
Given $N$ samples $x \in \mathbb{R}^{N_0 \times N}$, the NTK induced by any model architecture at a training time $\tau$ is formulated as a matrix $\hat{H}^{*}_{\tau} \in \mathbb{R}^{N \times N}$, in which each $(i, j) \in [N] \times [N]$ component is defined as
$$[\hat{H}^{*}_{\tau}]_{ij} := \hat{\Theta}^{*}_{\tau}(x_i, x_j) := \left\langle \frac{\partial f_{\mathrm{arbitrary}}(x_i, \theta_\tau)}{\partial \theta_\tau}, \frac{\partial f_{\mathrm{arbitrary}}(x_j, \theta_\tau)}{\partial \theta_\tau} \right\rangle, \qquad (3)$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product and $\theta_\tau \in \mathbb{R}^{P}$ is a concatenated vector of all the $P$ trainable model parameters at $\tau$. An asterisk "$*$" indicates that the model is arbitrary. The model function $f_{\mathrm{arbitrary}} : \mathbb{R}^{N_0} \times \mathbb{R}^{P} \to \mathbb{R}$ used in Equation (3) is expected to be applicable to a variety of model structures. For the soft tree ensembles introduced in Section 2.1, the NTK is formulated as
$$\sum_{m=1}^{M} \sum_{n=1}^{N} \left\langle \frac{\partial f(x_i, w, \pi)}{\partial w_{m,n}}, \frac{\partial f(x_j, w, \pi)}{\partial w_{m,n}} \right\rangle + \sum_{m=1}^{M} \sum_{\ell=1}^{L} \left\langle \frac{\partial f(x_i, w, \pi)}{\partial \pi_{m,\ell}}, \frac{\partial f(x_j, w, \pi)}{\partial \pi_{m,\ell}} \right\rangle.$$
In the limit of infinite width with a proper parameter scaling, a variety of properties have been discovered for the NTK induced by the MLP. For example, Jacot et al. (2018) showed the convergence of $\hat{\Theta}^{\mathrm{MLP}}_{0}(x_i, x_j)$, which can vary with respect to parameters, to the unique limiting kernel $\Theta(x_i, x_j)$ at initialization in probability. Moreover, they also showed that the limiting kernel does not change during training in probability:
$$\lim_{\mathrm{width} \to \infty} \hat{\Theta}^{\mathrm{MLP}}_{\tau}(x_i, x_j) = \lim_{\mathrm{width} \to \infty} \hat{\Theta}^{\mathrm{MLP}}_{0}(x_i, x_j) =: \Theta^{\mathrm{MLP}}(x_i, x_j). \qquad (4)$$
This property helps in the analytical understanding of the model behavior. For example, with the squared loss and an infinitesimal step size with learning rate $\eta$, the training dynamics of gradient flow in function space coincide with kernel ridge-less regression with the limiting NTK. Such a property gives us a data-dependent generalization bound (Bartlett & Mendelson, 2003) related to the NTK and the prediction targets. In addition, if the NTK is positive definite, the training can achieve global convergence (Du et al., 2019a; Jacot et al., 2018).
Although a number of findings have been obtained using the NTK, they are mainly for typical neural networks such as the MLP and ResNet, and the NTK theory has not been applied to tree models yet. The NTK theory is often used in the context of overparameterization, a subject of interest not only for neural networks but also for tree models (Belkin et al., 2019; Karthikeyan et al., 2021; Tang et al., 2018). 3 SETUP We train the model parameters $w$ and $\pi$ to minimize the squared loss using the gradient method, where $w = (w_1, \ldots, w_M)$ and $\pi = (\pi_1, \ldots, \pi_M)$. The tree structure is fixed during training. In order to use a known closed-form solution of the NTK (Williams, 1996; Lee et al., 2019), we use a scaled error function $\sigma : \mathbb{R} \to (0, 1)$, resulting in the following decision function:
$$g_{m,n}(x_i, w_{m,n}) = \sigma(w_{m,n}^{\top} x_i) := \frac{1}{2} \operatorname{erf}\!\left(\alpha\, w_{m,n}^{\top} x_i\right) + \frac{1}{2}, \qquad (5)$$
where $\operatorname{erf}(p) = \frac{2}{\sqrt{\pi}} \int_{0}^{p} e^{-t^2}\, dt$ for $p \in \mathbb{R}$. This scaled error function approximates the commonly used sigmoid function. Since the bias term for the input of $\sigma$ can be expressed inside $w$ by adding an element that takes a fixed constant value for all inputs of the soft trees $x$, we do not consider the bias for simplicity. The scaling factor $\alpha$ was introduced by Frosst & Hinton (2017) to avoid too soft splitting. Figure 2 shows that the decision function becomes harder as $\alpha$ increases (from blue to red), and in the limit $\alpha \to \infty$ it coincides with the hard splitting used in typical decision trees. When aggregating the output of multiple trees, we divide the sum of the tree outputs by the square root of the number of trees:
$$f(x_i, w, \pi) = \frac{1}{\sqrt{M}} \sum_{m=1}^{M} f_m(x_i, w_m, \pi_m). \qquad (6)$$
This $1/\sqrt{M}$ scaling is known to be essential in the existing NTK literature to use the weak law of large numbers (Jacot et al., 2018). On top of Equation (6), we initialize each of the model parameters $w_{m,n}$ and $\pi_{m,\ell}$ with zero-mean i.i.d. Gaussians with unit variances.
We refer to such a parameterization as the NTK initialization. In this paper, we consider a model in which all $M$ trees have the same complete binary tree structure, a common setting for soft tree ensembles (Popov et al., 2020; Kontschieder et al., 2015; Hazimeh et al., 2020). | This paper proposed the Tree Neural Tangent Kernel (TNTK) for tree ensembles. The proposed idea extends the NTK concept to tree ensemble models and enables ensembles of infinite soft trees. This paper provides theoretical studies to analyze the properties of the proposed TNTK. They also provide comprehensive experimental results to show the effectiveness of the proposed method. | SP:f0c375422b5f1c2418652b71c20ceb6eb35f5b96 |
When do Convolutional Neural Networks Stop Learning? | 1 INTRODUCTION . “Wider and deeper are better” has become the rule of thumb for designing deep neural network architectures (Guo et al., 2020; Huang et al., 2017; He et al., 2016b; Szegedy et al., 2015; Simonyan & Zisserman, 2014b). A deep neural network requires a large amount of data to be trained, but how much data we should feed to a deep neural network to gain optimum performance is not well established. Deep neural networks follow a “double-descent” curve while traditional machine learning models follow a “bell-shaped” curve, as deep neural networks have larger model complexity (Belkin et al., 2019). However, in a deep neural network, the data interpolation is reduced as the data are fed into the deeper layers of the network. This raises a critical question: can we predict whether a deep neural network keeps learning or not based on the training data behavior? The Convolutional Neural Network (CNN) achieves impressive performance on computer vision tasks (He et al., 2016a). Deeper CNNs tend to achieve higher accuracy on vision tasks (e.g., image classification) (Sinha et al., 2020). To save computational time, light-weight CNN architectures have been introduced, which trade off speed against accuracy. However, when a CNN architecture reaches its model capacity and stops learning significantly from the training data remains unclear. In the training phase, all training data are fed into the CNN as an epoch. Current practice is to use many epochs (e.g., 200∼500) to train a CNN model. The optimum number of epochs required to train a CNN model is not well researched. To infer whether the model keeps learning or not in each epoch, validation data are used alongside the training data. Traditionally, training of the model is stopped when the validation error or generalization gap starts to increase (Goodfellow et al., 2017).
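The traditional trial-and-error criterion described above is typically implemented as patience-based early stopping on a validation metric. A minimal sketch (the callbacks and the patience value are illustrative assumptions, not part of the paper):

```python
def train_with_early_stopping(train_epoch, val_loss, max_epochs=200, patience=5):
    """Stop training when the validation loss has not improved for `patience`
    consecutive epochs (i.e., the generalization gap has started to grow).

    train_epoch : callable(epoch) that runs one epoch of training
    val_loss    : callable(epoch) that returns the validation loss after it
    Returns the epoch with the best validation loss.
    """
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch in range(1, max_epochs + 1):
        train_epoch(epoch)
        loss = val_loss(epoch)
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break                 # validation loss stopped improving
    return best_epoch
```

The key point is that this criterion requires a held-out validation set; the paper's goal is to replace it with a criterion computed from the training data alone.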
The generalization gap indicates a model's capability to predict unseen data. However, this current approach is based on trial and error. Our research objective is to replace this trial-and-error approach with an algorithmic one. We hypothesize that a layer reaches its near-optimal learning capacity when the data produced after its convolution show significantly less variation. We use this hypothesis to identify the epoch at which all layers reach their near-optimal learning capacity, which also represents the model's near-optimal learning capacity. Thus, our hypothesis predicts the near-optimal number of epochs to train a CNN model without using any validation data set. The selection of an optimal number of training epochs for a deep neural network is not well established. The following recent works use different epoch counts in their experiments: Zhang & He (2020) use 186 epochs, Piergiovanni & Ryoo (2020) use 256 epochs, Peng et al. (2020) use 360 epochs, and Khalifa & Islam (2020) use 150 epochs. For the CIFAR10 and CIFAR100 datasets, Li et al. (2020) and Reddy et al. (2020) use 200 epochs. For ResNet and VGG architectures, Kim et al. (2020) use 200 epochs. Dong et al. (2020) and Liu et al. (2020) also use 200 epochs in their experiments. Huang et al. (2020) use a range of 50∼500 epochs. Curry et al. (2020) use 1000 epochs on their custom dataset. In short, most deep neural models adopt a safe epoch number. As illustrated by Figure 1, our hypothesis can be deployed as a plug-and-play addition to any CNN architecture. It does not introduce any additional trainable parameters to the network. At the end of each epoch, our hypothesis checks the model's learning capacity. Training is terminated when the model reaches its near-optimal learning capacity. The main contributions of this paper can be summarized as: • We introduce a hypothesis regarding the near-optimal learning capacity of CNN architectures.
• We examine the data variation across all layers of a CNN architecture and correlate it with the network's near-optimal learning capacity. • The proposed hypothesis can predict the near-optimal number of epochs to train a CNN model without using any validation dataset. • The implementation of the proposed hypothesis can be deployed as plug-and-play with any CNN architecture and does not introduce any additional trainable parameters to the network. • To test our hypothesis, we conduct image classification experiments on six CNN architectures and three datasets. Adding the hypothesis to existing CNN architectures saves 32% to 78% of the computational time. • Finally, we provide a detailed analysis of our hypothesis on different phases of training and of how we obtain the near-optimal epoch number. 2 RELATED WORK. Validation data are used alongside training data to identify the generalization gap (Goodfellow et al., 2017). Generalization refers to the model's capability to predict unseen data. An increasing generalization gap indicates that the model is going to overfit, and it is recommended to stop training at that point. However, this is a trial-and-error approach that is widely used in current training strategies, and it requires a validation dataset. In terms of model complexity, modern neural networks are more complex than classical machine learning methods. In terms of the bias-variance trade-off for generalization, traditional machine learning methods follow the "bell-shaped" curve, while modern neural networks follow the "double descent" curve (Belkin et al., 2019). To the best of our knowledge, there is no previous work investigating at what optimal epoch a CNN model stops learning. However, there are CNN architectures that aim at obtaining the best possible accuracy under a limited computational budget based on different hardware and/or applications.
This has resulted in a series of works on lightweight CNN architectures with a speed-accuracy trade-off, including Xception (Chollet, 2017), MobileNet (Howard et al., 2017), ShuffleNet (Zhang et al., 2018), and CondenseNet (Huang et al., 2018). These use FLOPs as an indirect metric to compare computational complexity. ShuffleNetV2 (Ma et al., 2018) uses speed as a direct metric while considering memory access cost and platform characteristics. However, using the epoch number as a metric to analyze CNN computation, or determining at what epoch a CNN reaches its optimal learning capacity, is not well researched. For a specific dataset and CNN architecture, the usual practice is to adopt a safe epoch number. However, this selection is essentially random, and an arbitrary safe number is picked for most experiments. This inspires us to find out when a CNN almost stops learning and to predict the near-optimal number of epochs to train any CNN architecture, regardless of dataset. 3 TRAINING BEHAVIOR OF DEEP NEURAL NETWORKS. 3.1 CONVOLUTIONAL NEURAL NETWORK (CNN). In deep learning, a typical CNN is composed of stacked trainable convolutional layers (LeCun et al., 1998). In one epoch (e), the entire training data is sent through the network over multiple iterations (t) with batch size N, so one epoch processes N·t samples in total. The input tensor X is organized by batch size N, channel number c, height h, and width w as X(N, c, h, w). A typical convolution operation at the n-th layer can be represented by Equation 1, where θ_w are the learned weights of the convolutional kernel: X_n = θ_w ∗ X_{n−1} (1). 3.2 STABILITY VECTOR. During the training phase, we examine whether the deep learning model is still learning by measuring the data variation after each convolution operation. We introduce the stability vector S to measure this data variation.
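The stability value described here is simply the standard deviation of a layer's post-convolution output at each iteration. A minimal NumPy sketch (the tensor shapes, seed, and helper name are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def stability_value(feature_map: np.ndarray) -> float:
    # alpha_n^t: standard deviation of the n-th layer's output at iteration t
    return float(np.std(feature_map))

# Simulate one layer's conv outputs over t iterations of one epoch
rng = np.random.default_rng(0)
t = 5
# stability vector S_n^e: one alpha per iteration
S = [stability_value(rng.normal(size=(64, 16, 8, 8))) for _ in range(t)]
mu = sum(S) / t  # Equation (2): mean of the stability vector
```

In a real training loop, the `feature_map` would be the actual activation tensor captured after each convolutional layer (e.g., via a forward hook), once per iteration.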
In every epoch, we construct a stability vector for each layer of the deep neural network. The stability vector for the e-th epoch and n-th layer is denoted S_n^e. At each epoch and each layer, we construct S_n^e by computing a stability value at every iteration of the epoch. At the t-th iteration, after the convolution of the n-th layer, we measure the stability value (an element of the stability vector) α_n^t by computing the standard deviation of X_n^t, i.e., α_n^t = σ(X_n^t). The process of constructing the elements of a stability vector is shown in Figure 2. After t iterations at the n-th layer and e-th epoch, we have the stability vector S_n^e = [α_n^1, α_n^2, ..., α_n^t]. The process of constructing stability vectors for all layers (i.e., layers 1 to n) after t iterations at epoch e is shown in Figure 3. At every epoch, we thus have n stability vectors (one per layer), each of size t. 3.3 LAYER AND MODEL STABILITY. We compute the mean of the stability vector, µ_n^e, at the e-th epoch and n-th layer as: µ_n^e = (1/t) Σ_{i=1}^{t} α_n^i (2). We define a function p^r that rounds a number to r decimal places. Thus, if µ_n^e = 1.23456, p^2(µ_n^e) returns 1.23. For each layer n, we compare the rounded mean of the stability vector with that of the previous epoch: δ_n = p^r(µ_n^e) − p^r(µ_n^{e−1}) (3). At the n-th layer and e-th epoch, if δ_n equals zero, we consider the n-th layer stable at epoch e. If all layers show stability, i.e., Σ_{i=1}^{n} δ_i = 0, this indicates that the CNN model may have become stable (i.e., reached its near-optimal learning capacity) and can no longer extract significant information from the training data.
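The rounding-based comparison can be sketched directly; `r = 2` follows the paper's example, while the helper names are my own. Note that this sketch checks every δ_n individually rather than their sum, which avoids the (unlikely) case of nonzero deltas canceling each other:

```python
def p(x: float, r: int) -> float:
    # p^r: round to r decimal places, e.g. p(1.23456, 2) -> 1.23
    return round(x, r)

def layer_deltas(mu_prev, mu_curr, r=2):
    # Equation (3): delta_n = p^r(mu_n^e) - p^r(mu_n^{e-1}) for each layer n
    return [p(c, r) - p(q, r) for q, c in zip(mu_prev, mu_curr)]

def all_layers_stable(mu_prev, mu_curr, r=2):
    # the model is a candidate for stopping when every delta_n is zero
    return all(d == 0.0 for d in layer_deltas(mu_prev, mu_curr, r))

print(all_layers_stable([1.23456, 0.98765], [1.23111, 0.98999]))  # True: equal to 2 decimals
print(all_layers_stable([1.23456, 0.98765], [1.25000, 0.98999]))  # False: first layer moved
```

Here `mu_prev` and `mu_curr` are the per-layer means µ_n^{e−1} and µ_n^e from two successive epochs.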
To make sure that the CNN model has reached its near-optimal learning capacity, we verify that Σ_{i=1}^{n} δ_i = 0 holds for two more epochs; if the result remains the same, we conclude that the model has reached its near-optimal learning capacity and terminate the training phase. The trained model is then ready for the testing environment. None of the variables used in our hypothesis are trained via back-propagation, so no trainable parameters are added to the network. 3.3.1 RESNET18 STABILITY ON THE CIFAR100 DATASET. The CIFAR100 dataset has 50000 training samples. We use a batch size of 64 for training (i.e., N = 64), so we need ⌈50000/64⌉ = 782 iterations (i.e., t = 782) per epoch (e) to use the entire training data. At epoch e and layer n, the first iteration constructs the first element (i.e., α_n^1) of the stability vector S_n^e. The ResNet18 architecture has 18 layers, and for each layer we construct one stability vector, so at each epoch e we have 18 stability vectors in total (i.e., S_1^e, S_2^e, ..., S_18^e). The length of each stability vector is 782 because each epoch consists of 782 iterations (Figure 3). Table 1 shows the p^2(µ_n^e) values for epochs 73 to 76. As δ_n is 0 for four consecutive epochs, our hypothesis terminates ResNet18 training on CIFAR100 at epoch 76. | The paper proposes a data stability measure for early stopping when training various CNN architectures (ResNet18, VGG16) and a shallow CNN. Experiments demonstrate that comparing the means of per-layer stability values across epochs to a certain number of decimal places allows a significant (~30% to ~70%) computational time saving with a relatively small drop in accuracy. The methodology is thoroughly quantitatively evaluated on classification networks, and a detailed ablation study is presented. | SP:1870f22eafe666574bac3212887c07591aae5a2e
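The per-epoch iteration count in the ResNet18 worked example (t = 782) is just the ceiling of the sample count over the batch size, since 50000/64 = 781.25 and the final partial batch still counts as an iteration:

```python
import math

samples, batch_size = 50000, 64
iterations_per_epoch = math.ceil(samples / batch_size)
print(iterations_per_epoch)  # 782
```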
When do Convolutional Neural Networks Stop Learning? | 1 INTRODUCTION. "Wider and deeper are better" has become the rule of thumb for designing deep neural network architectures (Guo et al., 2020; Huang et al., 2017; He et al., 2016b; Szegedy et al., 2015; Simonyan & Zisserman, 2014b). Deep neural networks require a large amount of training data, but how much data we should feed a deep neural network to obtain optimum performance is not well established. Deep neural networks follow a "double-descent" curve, while traditional machine learning models are stuck with the "bell-shaped" curve, because deep neural networks have larger model complexity (Belkin et al., 2019). However, in a deep neural network the data interpolation is reduced as the data are fed into the deeper layers of the network. This raises a critical question: can we predict whether a deep neural network keeps learning based on the behavior of the training data? Convolutional Neural Networks (CNNs) achieve impressive performance on computer vision tasks (He et al., 2016a). Deeper CNNs tend to achieve higher accuracy on vision tasks (e.g., image classification) (Sinha et al., 2020). To save computational time, lightweight CNN architectures have been introduced that trade off speed against accuracy. However, when a CNN architecture reaches its model capacity and stops learning anything significant from the training data remains unclear. In the training phase, all training data are fed to the CNN once per epoch. Current practice is to use many epochs (e.g., 200∼500) to train a CNN model. The optimum number of epochs required to train a CNN model is not well researched. To infer whether the model keeps learning in each epoch, validation data are used alongside the training data. Traditionally, training of the model is stopped when the validation error or generalization gap starts to increase (Goodfellow et al., 2017).
The generalization gap indicates a model's capability to predict unseen data. However, this current approach is based on trial and error. Our research objective is to replace this trial-and-error approach with an algorithmic one. We hypothesize that a layer reaches its near-optimal learning capacity when the data produced after its convolution show significantly less variation. We use this hypothesis to identify the epoch at which all layers reach their near-optimal learning capacity, which also represents the model's near-optimal learning capacity. Thus, our hypothesis predicts the near-optimal number of epochs to train a CNN model without using any validation data set. The selection of an optimal number of training epochs for a deep neural network is not well established. The following recent works use different epoch counts in their experiments: Zhang & He (2020) use 186 epochs, Piergiovanni & Ryoo (2020) use 256 epochs, Peng et al. (2020) use 360 epochs, and Khalifa & Islam (2020) use 150 epochs. For the CIFAR10 and CIFAR100 datasets, Li et al. (2020) and Reddy et al. (2020) use 200 epochs. For ResNet and VGG architectures, Kim et al. (2020) use 200 epochs. Dong et al. (2020) and Liu et al. (2020) also use 200 epochs in their experiments. Huang et al. (2020) use a range of 50∼500 epochs. Curry et al. (2020) use 1000 epochs on their custom dataset. In short, most deep neural models adopt a safe epoch number. As illustrated by Figure 1, our hypothesis can be deployed as a plug-and-play addition to any CNN architecture. It does not introduce any additional trainable parameters to the network. At the end of each epoch, our hypothesis checks the model's learning capacity. Training is terminated when the model reaches its near-optimal learning capacity. The main contributions of this paper can be summarized as: • We introduce a hypothesis regarding the near-optimal learning capacity of CNN architectures.
• We examine the data variation across all layers of a CNN architecture and correlate it with the network's near-optimal learning capacity. • The proposed hypothesis can predict the near-optimal number of epochs to train a CNN model without using any validation dataset. • The implementation of the proposed hypothesis can be deployed as plug-and-play with any CNN architecture and does not introduce any additional trainable parameters to the network. • To test our hypothesis, we conduct image classification experiments on six CNN architectures and three datasets. Adding the hypothesis to existing CNN architectures saves 32% to 78% of the computational time. • Finally, we provide a detailed analysis of our hypothesis on different phases of training and of how we obtain the near-optimal epoch number. 2 RELATED WORK. Validation data are used alongside training data to identify the generalization gap (Goodfellow et al., 2017). Generalization refers to the model's capability to predict unseen data. An increasing generalization gap indicates that the model is going to overfit, and it is recommended to stop training at that point. However, this is a trial-and-error approach that is widely used in current training strategies, and it requires a validation dataset. In terms of model complexity, modern neural networks are more complex than classical machine learning methods. In terms of the bias-variance trade-off for generalization, traditional machine learning methods follow the "bell-shaped" curve, while modern neural networks follow the "double descent" curve (Belkin et al., 2019). To the best of our knowledge, there is no previous work investigating at what optimal epoch a CNN model stops learning. However, there are CNN architectures that aim at obtaining the best possible accuracy under a limited computational budget based on different hardware and/or applications.
This has resulted in a series of works on lightweight CNN architectures with a speed-accuracy trade-off, including Xception (Chollet, 2017), MobileNet (Howard et al., 2017), ShuffleNet (Zhang et al., 2018), and CondenseNet (Huang et al., 2018). These use FLOPs as an indirect metric to compare computational complexity. ShuffleNetV2 (Ma et al., 2018) uses speed as a direct metric while considering memory access cost and platform characteristics. However, using the epoch number as a metric to analyze CNN computation, or determining at what epoch a CNN reaches its optimal learning capacity, is not well researched. For a specific dataset and CNN architecture, the usual practice is to adopt a safe epoch number. However, this selection is essentially random, and an arbitrary safe number is picked for most experiments. This inspires us to find out when a CNN almost stops learning and to predict the near-optimal number of epochs to train any CNN architecture, regardless of dataset. 3 TRAINING BEHAVIOR OF DEEP NEURAL NETWORKS. 3.1 CONVOLUTIONAL NEURAL NETWORK (CNN). In deep learning, a typical CNN is composed of stacked trainable convolutional layers (LeCun et al., 1998). In one epoch (e), the entire training data is sent through the network over multiple iterations (t) with batch size N, so one epoch processes N·t samples in total. The input tensor X is organized by batch size N, channel number c, height h, and width w as X(N, c, h, w). A typical convolution operation at the n-th layer can be represented by Equation 1, where θ_w are the learned weights of the convolutional kernel: X_n = θ_w ∗ X_{n−1} (1). 3.2 STABILITY VECTOR. During the training phase, we examine whether the deep learning model is still learning by measuring the data variation after each convolution operation. We introduce the stability vector S to measure this data variation.
In every epoch, we construct a stability vector for each layer of the deep neural network. The stability vector for the e-th epoch and n-th layer is denoted S_n^e. At each epoch and each layer, we construct S_n^e by computing a stability value at every iteration of the epoch. At the t-th iteration, after the convolution of the n-th layer, we measure the stability value (an element of the stability vector) α_n^t by computing the standard deviation of X_n^t, i.e., α_n^t = σ(X_n^t). The process of constructing the elements of a stability vector is shown in Figure 2. After t iterations at the n-th layer and e-th epoch, we have the stability vector S_n^e = [α_n^1, α_n^2, ..., α_n^t]. The process of constructing stability vectors for all layers (i.e., layers 1 to n) after t iterations at epoch e is shown in Figure 3. At every epoch, we thus have n stability vectors (one per layer), each of size t. 3.3 LAYER AND MODEL STABILITY. We compute the mean of the stability vector, µ_n^e, at the e-th epoch and n-th layer as: µ_n^e = (1/t) Σ_{i=1}^{t} α_n^i (2). We define a function p^r that rounds a number to r decimal places. Thus, if µ_n^e = 1.23456, p^2(µ_n^e) returns 1.23. For each layer n, we compare the rounded mean of the stability vector with that of the previous epoch: δ_n = p^r(µ_n^e) − p^r(µ_n^{e−1}) (3). At the n-th layer and e-th epoch, if δ_n equals zero, we consider the n-th layer stable at epoch e. If all layers show stability, i.e., Σ_{i=1}^{n} δ_i = 0, this indicates that the CNN model may have become stable (i.e., reached its near-optimal learning capacity) and can no longer extract significant information from the training data.
To make sure that the CNN model has reached its near-optimal learning capacity, we verify that Σ_{i=1}^{n} δ_i = 0 holds for two more epochs; if the result remains the same, we conclude that the model has reached its near-optimal learning capacity and terminate the training phase. The trained model is then ready for the testing environment. None of the variables used in our hypothesis are trained via back-propagation, so no trainable parameters are added to the network. 3.3.1 RESNET18 STABILITY ON THE CIFAR100 DATASET. The CIFAR100 dataset has 50000 training samples. We use a batch size of 64 for training (i.e., N = 64), so we need ⌈50000/64⌉ = 782 iterations (i.e., t = 782) per epoch (e) to use the entire training data. At epoch e and layer n, the first iteration constructs the first element (i.e., α_n^1) of the stability vector S_n^e. The ResNet18 architecture has 18 layers, and for each layer we construct one stability vector, so at each epoch e we have 18 stability vectors in total (i.e., S_1^e, S_2^e, ..., S_18^e). The length of each stability vector is 782 because each epoch consists of 782 iterations (Figure 3). Table 1 shows the p^2(µ_n^e) values for epochs 73 to 76. As δ_n is 0 for four consecutive epochs, our hypothesis terminates ResNet18 training on CIFAR100 at epoch 76. | The paper presents a method to measure the stability of vectors obtained during training of a CNN as a proxy measure for model convergence. This is used in an early-stopping fashion to allow interruption of the training procedure, instead of training for a fixed number of iterations/epochs. In comparison with training for a fixed number of epochs and with Curriculum by Smoothing (CBS), the proposed method is competitive in terms of final accuracy while reducing the total number of epochs. Also, stability measured on three benchmark datasets behaves similarly. | SP:1870f22eafe666574bac3212887c07591aae5a2e
When do Convolutional Neural Networks Stop Learning? | 1 INTRODUCTION. "Wider and deeper are better" has become the rule of thumb for designing deep neural network architectures (Guo et al., 2020; Huang et al., 2017; He et al., 2016b; Szegedy et al., 2015; Simonyan & Zisserman, 2014b). Deep neural networks require a large amount of training data, but how much data we should feed a deep neural network to obtain optimum performance is not well established. Deep neural networks follow a "double-descent" curve, while traditional machine learning models are stuck with the "bell-shaped" curve, because deep neural networks have larger model complexity (Belkin et al., 2019). However, in a deep neural network the data interpolation is reduced as the data are fed into the deeper layers of the network. This raises a critical question: can we predict whether a deep neural network keeps learning based on the behavior of the training data? Convolutional Neural Networks (CNNs) achieve impressive performance on computer vision tasks (He et al., 2016a). Deeper CNNs tend to achieve higher accuracy on vision tasks (e.g., image classification) (Sinha et al., 2020). To save computational time, lightweight CNN architectures have been introduced that trade off speed against accuracy. However, when a CNN architecture reaches its model capacity and stops learning anything significant from the training data remains unclear. In the training phase, all training data are fed to the CNN once per epoch. Current practice is to use many epochs (e.g., 200∼500) to train a CNN model. The optimum number of epochs required to train a CNN model is not well researched. To infer whether the model keeps learning in each epoch, validation data are used alongside the training data. Traditionally, training of the model is stopped when the validation error or generalization gap starts to increase (Goodfellow et al., 2017).
The generalization gap indicates a model's capability to predict unseen data. However, this current approach is based on trial and error. Our research objective is to replace this trial-and-error approach with an algorithmic one. We hypothesize that a layer reaches its near-optimal learning capacity when the data produced after its convolution show significantly less variation. We use this hypothesis to identify the epoch at which all layers reach their near-optimal learning capacity, which also represents the model's near-optimal learning capacity. Thus, our hypothesis predicts the near-optimal number of epochs to train a CNN model without using any validation data set. The selection of an optimal number of training epochs for a deep neural network is not well established. The following recent works use different epoch counts in their experiments: Zhang & He (2020) use 186 epochs, Piergiovanni & Ryoo (2020) use 256 epochs, Peng et al. (2020) use 360 epochs, and Khalifa & Islam (2020) use 150 epochs. For the CIFAR10 and CIFAR100 datasets, Li et al. (2020) and Reddy et al. (2020) use 200 epochs. For ResNet and VGG architectures, Kim et al. (2020) use 200 epochs. Dong et al. (2020) and Liu et al. (2020) also use 200 epochs in their experiments. Huang et al. (2020) use a range of 50∼500 epochs. Curry et al. (2020) use 1000 epochs on their custom dataset. In short, most deep neural models adopt a safe epoch number. As illustrated by Figure 1, our hypothesis can be deployed as a plug-and-play addition to any CNN architecture. It does not introduce any additional trainable parameters to the network. At the end of each epoch, our hypothesis checks the model's learning capacity. Training is terminated when the model reaches its near-optimal learning capacity. The main contributions of this paper can be summarized as: • We introduce a hypothesis regarding the near-optimal learning capacity of CNN architectures.
• We examine the data variation across all layers of a CNN architecture and correlate it with the network's near-optimal learning capacity. • The proposed hypothesis can predict the near-optimal number of epochs to train a CNN model without using any validation dataset. • The implementation of the proposed hypothesis can be deployed as plug-and-play with any CNN architecture and does not introduce any additional trainable parameters to the network. • To test our hypothesis, we conduct image classification experiments on six CNN architectures and three datasets. Adding the hypothesis to existing CNN architectures saves 32% to 78% of the computational time. • Finally, we provide a detailed analysis of our hypothesis on different phases of training and of how we obtain the near-optimal epoch number. 2 RELATED WORK. Validation data are used alongside training data to identify the generalization gap (Goodfellow et al., 2017). Generalization refers to the model's capability to predict unseen data. An increasing generalization gap indicates that the model is going to overfit, and it is recommended to stop training at that point. However, this is a trial-and-error approach that is widely used in current training strategies, and it requires a validation dataset. In terms of model complexity, modern neural networks are more complex than classical machine learning methods. In terms of the bias-variance trade-off for generalization, traditional machine learning methods follow the "bell-shaped" curve, while modern neural networks follow the "double descent" curve (Belkin et al., 2019). To the best of our knowledge, there is no previous work investigating at what optimal epoch a CNN model stops learning. However, there are CNN architectures that aim at obtaining the best possible accuracy under a limited computational budget based on different hardware and/or applications.
This has resulted in a series of works on lightweight CNN architectures with a speed-accuracy trade-off, including Xception (Chollet, 2017), MobileNet (Howard et al., 2017), ShuffleNet (Zhang et al., 2018), and CondenseNet (Huang et al., 2018). These use FLOPs as an indirect metric to compare computational complexity. ShuffleNetV2 (Ma et al., 2018) uses speed as a direct metric while considering memory access cost and platform characteristics. However, using the epoch number as a metric to analyze CNN computation, or determining at what epoch a CNN reaches its optimal learning capacity, is not well researched. For a specific dataset and CNN architecture, the usual practice is to adopt a safe epoch number. However, this selection is essentially random, and an arbitrary safe number is picked for most experiments. This inspires us to find out when a CNN almost stops learning and to predict the near-optimal number of epochs to train any CNN architecture, regardless of dataset. 3 TRAINING BEHAVIOR OF DEEP NEURAL NETWORKS. 3.1 CONVOLUTIONAL NEURAL NETWORK (CNN). In deep learning, a typical CNN is composed of stacked trainable convolutional layers (LeCun et al., 1998). In one epoch (e), the entire training data is sent through the network over multiple iterations (t) with batch size N, so one epoch processes N·t samples in total. The input tensor X is organized by batch size N, channel number c, height h, and width w as X(N, c, h, w). A typical convolution operation at the n-th layer can be represented by Equation 1, where θ_w are the learned weights of the convolutional kernel: X_n = θ_w ∗ X_{n−1} (1). 3.2 STABILITY VECTOR. During the training phase, we examine whether the deep learning model is still learning by measuring the data variation after each convolution operation. We introduce the stability vector S to measure this data variation.
In every epoch, we construct a stability vector for each layer of the deep neural network. The stability vector for the e-th epoch and n-th layer is denoted S_n^e. At each epoch and each layer, we construct S_n^e by computing a stability value at every iteration of the epoch. At the t-th iteration, after the convolution of the n-th layer, we measure the stability value (an element of the stability vector) α_n^t by computing the standard deviation of X_n^t, i.e., α_n^t = σ(X_n^t). The process of constructing the elements of a stability vector is shown in Figure 2. After t iterations at the n-th layer and e-th epoch, we have the stability vector S_n^e = [α_n^1, α_n^2, ..., α_n^t]. The process of constructing stability vectors for all layers (i.e., layers 1 to n) after t iterations at epoch e is shown in Figure 3. At every epoch, we thus have n stability vectors (one per layer), each of size t. 3.3 LAYER AND MODEL STABILITY. We compute the mean of the stability vector, µ_n^e, at the e-th epoch and n-th layer as: µ_n^e = (1/t) Σ_{i=1}^{t} α_n^i (2). We define a function p^r that rounds a number to r decimal places. Thus, if µ_n^e = 1.23456, p^2(µ_n^e) returns 1.23. For each layer n, we compare the rounded mean of the stability vector with that of the previous epoch: δ_n = p^r(µ_n^e) − p^r(µ_n^{e−1}) (3). At the n-th layer and e-th epoch, if δ_n equals zero, we consider the n-th layer stable at epoch e. If all layers show stability, i.e., Σ_{i=1}^{n} δ_i = 0, this indicates that the CNN model may have become stable (i.e., reached its near-optimal learning capacity) and can no longer extract significant information from the training data.
To make sure that the CNN model has reached its near-optimal learning capacity, we verify that Σ_{i=1}^{n} δ_i = 0 holds for two more epochs; if the result remains the same, we conclude that the model has reached its near-optimal learning capacity and terminate the training phase. The trained model is then ready for the testing environment. None of the variables used in our hypothesis are trained via back-propagation, so no trainable parameters are added to the network. 3.3.1 RESNET18 STABILITY ON THE CIFAR100 DATASET. The CIFAR100 dataset has 50000 training samples. We use a batch size of 64 for training (i.e., N = 64), so we need ⌈50000/64⌉ = 782 iterations (i.e., t = 782) per epoch (e) to use the entire training data. At epoch e and layer n, the first iteration constructs the first element (i.e., α_n^1) of the stability vector S_n^e. The ResNet18 architecture has 18 layers, and for each layer we construct one stability vector, so at each epoch e we have 18 stability vectors in total (i.e., S_1^e, S_2^e, ..., S_18^e). The length of each stability vector is 782 because each epoch consists of 782 iterations (Figure 3). Table 1 shows the p^2(µ_n^e) values for epochs 73 to 76. As δ_n is 0 for four consecutive epochs, our hypothesis terminates ResNet18 training on CIFAR100 at epoch 76. | The paper describes a method to determine the optimal number of training epochs without using a validation set. The work proposes to compute, for each epoch and each layer, the stability vector, defined as the vector containing the standard deviation of the activations of that CNN layer at each iteration of the epoch. At each epoch, the authors compute the mean of the stability vector for each layer (averaged over the iterations). If the difference between the means of the stability vectors of two successive epochs for a layer is small, we can assume that the layer is stable for the considered epochs.
If this happens for all layers, we can stop the training process. The proposed approach is simple to implement, and the authors tested it on 6 different CNN-based architectures for classification on the CIFAR10, CIFAR100 and SVHN datasets. | SP:1870f22eafe666574bac3212887c07591aae5a2e |
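The epoch-termination rule described in the row above can be sketched in code. This is a minimal illustration, not the authors' implementation; the function names (`stability_vector`, `should_stop`) and the NumPy-based setup are assumptions made for the sketch.

```python
import numpy as np

def stability_vector(activations_per_iter):
    # One element per iteration: alpha_n^t = sigma(X_n^t), the standard
    # deviation of the n-th layer's output at iteration t.
    return np.array([float(np.std(x)) for x in activations_per_iter])

def should_stop(mu_history, r=2, extra_epochs=2):
    # mu_history: one array per epoch holding mu_n^e for every layer n
    # (the mean of that layer's stability vector over the epoch).
    # Stop when delta_n = p^r(mu_n^e) - p^r(mu_n^{e-1}) is zero for all
    # layers, verified for `extra_epochs` additional epochs.
    needed = extra_epochs + 1          # consecutive zero deltas required
    if len(mu_history) < needed + 1:
        return False
    recent = [np.round(m, r) for m in mu_history[-(needed + 1):]]
    return all(np.array_equal(recent[i], recent[i + 1]) for i in range(needed))
```

During training one would append the per-layer means after each epoch and terminate once `should_stop` returns True, mirroring the "verify for two more epochs" rule in the row above.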
How many degrees of freedom do we need to train deep networks: a loss landscape perspective | 1 INTRODUCTION . How many parameters are needed to train a neural network to a specified accuracy? Recent work on two fronts indicates that the answer for a given architecture and dataset pair is often much smaller than the total number of parameters used in modern large-scale neural networks. The first is the successful identification of lottery tickets, or sparse trainable subnetworks, through iterative training and pruning cycles (Frankle & Carbin, 2019). Such methods utilize information from training to identify lower-dimensional parameter spaces which can optimize to a similar accuracy as the full model. The second is the observation that constrained training within a random, low-dimensional affine subspace is often successful at reaching a high desired train and test accuracy on a variety of tasks, provided that the training dimension of the subspace is above an empirically observed threshold training dimension (Li et al., 2018). These results, however, leave open the question of why low-dimensional training is so successful and whether we can theoretically explain the existence of a threshold training dimension. In this work, we provide such an explanation in terms of the high-dimensional geometry of the loss landscape, the initialization, and the desired loss. In particular, we leverage a powerful tool from high-dimensional probability theory, namely Gordon's escape theorem, to show that this threshold training dimension is equal to the dimension of the full parameter space minus the squared Gaussian width of the desired loss sublevel set projected onto the unit sphere around initialization. This theory can then be applied in several ways to enhance our understanding of neural network loss landscapes.
For a quadratic well or second-order approximation around a local minimum, we derive an analytic bound on this threshold training dimension in terms of the Hessian spectrum and the distance of the initialization from the minimum. For general models, this relationship can be used in reverse to measure important high-dimensional properties of loss landscape geometry. For example, by performing a tomographic exploration of the loss landscape, i.e., training within random subspaces of varying training dimension, we uncover a phase transition in the success probability of hitting a given loss sublevel set. The threshold training dimension is then the phase boundary in this transition, and our theory explains the dependence of the phase boundary on the desired loss sublevel set and the initialization, in terms of the Gaussian width of the loss sublevel set projected onto a sphere surrounding the initialization. Motivated by lottery tickets, we furthermore consider training not only within random dimensions, but also within optimized subspaces using information from training in the full space. Lottery tickets can be viewed as constructing an optimized, axis-aligned subspace, i.e., one where each subspace dimension corresponds to a single parameter. What would constitute an optimized choice for general subspaces? We propose two new methods: burn-in subspaces, which optimize the offset of the subspace by taking a few steps along a training trajectory, and lottery subspaces, determined by the span of gradients along a full training trajectory (Fig. 1). Burn-in subspaces in particular can be viewed as lowering the threshold training dimension by moving closer to the desired loss sublevel set. For all three methods, we empirically explore the threshold training dimension across a range of datasets and architectures.
Related Work: An important motivation of our work is the observation that training within a random, low-dimensional affine subspace can suffice to reach high training and test accuracies on a variety of tasks, provided the training dimension exceeds a threshold that was called the intrinsic dimension (Li et al., 2018) and which we call the threshold training dimension. However, Li et al. (2018) provided no theoretical explanation for this threshold and did not explore the dependence of this threshold on the quality of the initialization. Our primary goal is to provide a theoretical explanation for the existence of this threshold in terms of the geometry of the loss landscape and the quality of initialization. Indeed, understanding the geometry of high-dimensional error landscapes has been a subject of intense interest in deep learning; see, e.g., Dauphin et al. (2014); Goodfellow et al. (2014); Fort & Jastrzebski (2019); Ghorbani et al. (2019); Sagun et al. (2016; 2017); Yao et al. (2018); Fort & Scherlis (2019); Papyan (2020); Gur-Ari et al. (2018); Fort & Ganguli (2019); Papyan (2019); Fort et al. (2020), or Bahri et al. (2020) for a review. But to our knowledge, the Gaussian width of sublevel sets projected onto a sphere surrounding initialization, a key quantity that determines the threshold training dimension, has not been extensively explored in deep learning. Another motivation for our work is contextualizing the efficacy of diverse, more sophisticated network pruning methods like lottery tickets (Frankle & Carbin, 2019; Frankle et al., 2019). Further work in this area revealed the advantages obtained by pruning networks not at initialization (Frankle & Carbin, 2019; Lee et al., 2018; Wang et al., 2020; Tanaka et al., 2020) but slightly later in training (Frankle et al., 2020), highlighting the importance of early stages of training (Jastrzebski et al.
, 2020; Lewkowycz et al., 2020). We find empirically, as well as explain theoretically, that even when training within random subspaces, one can obtain higher accuracies for a given training dimension if one starts from a slightly pre-trained, or burned-in, initialization as opposed to a random initialization. 2 AN EMPIRICALLY OBSERVED PHASE TRANSITION IN TRAINING SUCCESS . We begin with the empirical observation of a phase transition in the probability of hitting a loss sublevel set when training within a random subspace of a given training dimension, starting from some initialization. Before presenting this phase transition, we first define loss sublevel sets and two different methods for training within a random subspace that differ only in the quality of the initialization. In the next section we develop theory for the nature of this phase transition. Loss sublevel sets. Let $\hat{y} = f_w(x)$ be a neural network with weights $w \in \mathbb{R}^D$ and inputs $x \in \mathbb{R}^k$. For a given training set $\{x_n, y_n\}_{n=1}^N$ and loss function $\ell$, the empirical loss landscape is given by $\mathcal{L}(w) = \frac{1}{N}\sum_{n=1}^{N} \ell(f_w(x_n), y_n)$. Though our theory is general, we focus on classification for our experiments, where $y \in \{0,1\}^C$ is a one-hot encoding of C class labels, $\hat{y}$ is a vector of class probabilities, and $\ell(\hat{y}, y)$ is the cross-entropy loss. In general, the loss sublevel set $S(\epsilon)$ at a desired value of loss $\epsilon$ is the set of all points for which the loss is less than or equal to $\epsilon$: $S(\epsilon) := \{w \in \mathbb{R}^D : \mathcal{L}(w) \le \epsilon\}$. (2.1) Random affine subspace. Consider a d-dimensional random affine hyperplane contained in D-dimensional weight space, parameterized by $\theta \in \mathbb{R}^d$: $w(\theta) = A\theta + w_0$. Here $A \in \mathbb{R}^{D \times d}$ is a random Gaussian matrix with columns normalized to 1 and $w_0 \in \mathbb{R}^D$ a random weight initialization by standard methods.
To train within this subspace, we initialize $\theta = 0$, which corresponds to randomly initializing the network at $w_0$, and we minimize $\mathcal{L}(w(\theta))$ with respect to $\theta$. Burn-in affine subspace. Alternatively, we can initialize the network with parameters $w_0$ and train the network in the full space for some number of iterations t, arriving at the parameters $w_t$. We can then construct the random burn-in subspace $w(\theta) = A\theta + w_t$, (2.2) with A chosen randomly as before, and then subsequently train within this subspace by minimizing $\mathcal{L}(w(\theta))$ with respect to $\theta$. The random affine subspace is identical to the burn-in affine subspace but with t = 0. Exploring the properties of training within burn-in as opposed to random affine subspaces enables us to explore the impact of the quality of the initialization, after burning in some information from the training data, on the success of subsequent restricted training. Success probability in hitting a sublevel set. In either training method, achieving $\mathcal{L}(w(\theta)) = \epsilon$ implies that the intersection between our random or burn-in affine subspace and the loss sublevel set $S(\epsilon')$ is non-empty for all $\epsilon' \ge \epsilon$. As both the subspace A and the initialization $w_0$ leading to $w_t$ are random, we are interested in the success probability $P_s(d, \epsilon, t)$ that a burn-in (or random, when t = 0) subspace of training dimension d actually intersects a loss sublevel set $S(\epsilon)$: $P_s(d, \epsilon, t) \equiv P[S(\epsilon) \cap \{w_t + \mathrm{span}(A)\} \neq \emptyset]$. (2.3) Here, $\mathrm{span}(A)$ denotes the column space of A. Note that in practice we cannot guarantee that we obtain the minimal loss in the subspace, so we use the best value achieved by Adam (Kingma & Ba, 2014) as an approximation. Thus the probability of achieving a given loss sublevel set via training constitutes an approximate lower bound on the probability in (2.3) that the subspace actually intersects the loss sublevel set. Threshold training dimension as a phase transition boundary.
We will find that for any fixed t, the success probability $P_s(d, \epsilon, t)$ in the $\epsilon$ by $d$ plane undergoes a sharp phase transition. In particular, for a desired (not too low) loss $\epsilon$ it transitions sharply from 0 to 1 as the training dimension d increases. To capture this transition we define: Definition 2.1. [Threshold training dimension] The threshold training dimension $d^*(\epsilon, t, \delta)$ is the minimal value of d such that $P_s(d, \epsilon, t) \ge 1 - \delta$ for some small $\delta > 0$. For any chosen criterion $\delta$ (and fixed t) we will see that the curve $d^*(\epsilon, t, \delta)$ forms a phase boundary in the $\epsilon$ by $d$ plane separating two phases of high and low success probability. This definition also gives an operational procedure to approximately measure the threshold training dimension: run either the random or burn-in affine subspace method repeatedly over a range of training dimensions d and record the lowest loss value found in the plane when optimizing via Adam. We can then construct the empirical probability across runs of hitting a given sublevel set $S(\epsilon)$, and the threshold training dimension is the lowest value of d for which this probability crosses $1 - \delta$ (where we employ $\delta = 0.1$). | Modern deep neural networks are over-parameterized, and it is possible to construct a smaller parameter space that delivers a similar loss value. If one investigates how the probability of achieving the desired loss value depends on the training subspace size, one observes a sharp transition. The authors call the dimension at which this transition happens the threshold training dimension. The authors of this paper propose a theoretical explanation for the existence of the threshold training dimension. In particular, using Gordon's escape theorem, the authors describe the dependence of the threshold training dimension on the initialization and the final desired loss. The authors propose new lottery subspaces for which the threshold training dimension is much higher than for other random subspaces.
| SP:aaf89c1ff3dd3953a278d85284ed4ff34dbf5b9e |
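The random-affine-subspace training described in the rows above can be illustrated on a toy problem. The sketch below is not the paper's setup: it replaces the network loss with a simple quadratic and Adam with plain gradient descent, but it shows the reparameterization $w(\theta) = A\theta + w_0$ with unit-norm Gaussian columns and the chain rule used to train $\theta$.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 200, 20                      # full and subspace dimensions

# Toy quadratic L(w) = 0.5||w - w_star||^2 standing in for a network loss
w_star = rng.normal(size=D)
loss = lambda w: 0.5 * float(np.sum((w - w_star) ** 2))
grad = lambda w: w - w_star

# Random affine subspace: A Gaussian with unit-norm columns, w0 a random init
A = rng.normal(size=(D, d))
A /= np.linalg.norm(A, axis=0, keepdims=True)
w0 = rng.normal(size=D)

theta = np.zeros(d)                 # theta = 0 starts the network exactly at w0
for _ in range(500):
    theta -= 0.1 * (A.T @ grad(A @ theta + w0))   # dL/dtheta = A^T dL/dw

final = loss(A @ theta + w0)
```

Here `final` falls strictly below `loss(w0)` yet stays above zero: with d < D the random subspace generically misses the global minimum, which is exactly why whether it intersects a given sublevel set $S(\epsilon)$ becomes the interesting question.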
How many degrees of freedom do we need to train deep networks: a loss landscape perspective | 1 INTRODUCTION . How many parameters are needed to train a neural network to a specified accuracy? Recent work on two fronts indicates that the answer for a given architecture and dataset pair is often much smaller than the total number of parameters used in modern large-scale neural networks. The first is the successful identification of lottery tickets, or sparse trainable subnetworks, through iterative training and pruning cycles (Frankle & Carbin, 2019). Such methods utilize information from training to identify lower-dimensional parameter spaces which can optimize to a similar accuracy as the full model. The second is the observation that constrained training within a random, low-dimensional affine subspace is often successful at reaching a high desired train and test accuracy on a variety of tasks, provided that the training dimension of the subspace is above an empirically observed threshold training dimension (Li et al., 2018). These results, however, leave open the question of why low-dimensional training is so successful and whether we can theoretically explain the existence of a threshold training dimension. In this work, we provide such an explanation in terms of the high-dimensional geometry of the loss landscape, the initialization, and the desired loss. In particular, we leverage a powerful tool from high-dimensional probability theory, namely Gordon's escape theorem, to show that this threshold training dimension is equal to the dimension of the full parameter space minus the squared Gaussian width of the desired loss sublevel set projected onto the unit sphere around initialization. This theory can then be applied in several ways to enhance our understanding of neural network loss landscapes.
For a quadratic well or second-order approximation around a local minimum, we derive an analytic bound on this threshold training dimension in terms of the Hessian spectrum and the distance of the initialization from the minimum. For general models, this relationship can be used in reverse to measure important high-dimensional properties of loss landscape geometry. For example, by performing a tomographic exploration of the loss landscape, i.e., training within random subspaces of varying training dimension, we uncover a phase transition in the success probability of hitting a given loss sublevel set. The threshold training dimension is then the phase boundary in this transition, and our theory explains the dependence of the phase boundary on the desired loss sublevel set and the initialization, in terms of the Gaussian width of the loss sublevel set projected onto a sphere surrounding the initialization. Motivated by lottery tickets, we furthermore consider training not only within random dimensions, but also within optimized subspaces using information from training in the full space. Lottery tickets can be viewed as constructing an optimized, axis-aligned subspace, i.e., one where each subspace dimension corresponds to a single parameter. What would constitute an optimized choice for general subspaces? We propose two new methods: burn-in subspaces, which optimize the offset of the subspace by taking a few steps along a training trajectory, and lottery subspaces, determined by the span of gradients along a full training trajectory (Fig. 1). Burn-in subspaces in particular can be viewed as lowering the threshold training dimension by moving closer to the desired loss sublevel set. For all three methods, we empirically explore the threshold training dimension across a range of datasets and architectures.
Related Work: An important motivation of our work is the observation that training within a random, low-dimensional affine subspace can suffice to reach high training and test accuracies on a variety of tasks, provided the training dimension exceeds a threshold that was called the intrinsic dimension (Li et al., 2018) and which we call the threshold training dimension. However, Li et al. (2018) provided no theoretical explanation for this threshold and did not explore the dependence of this threshold on the quality of the initialization. Our primary goal is to provide a theoretical explanation for the existence of this threshold in terms of the geometry of the loss landscape and the quality of initialization. Indeed, understanding the geometry of high-dimensional error landscapes has been a subject of intense interest in deep learning; see, e.g., Dauphin et al. (2014); Goodfellow et al. (2014); Fort & Jastrzebski (2019); Ghorbani et al. (2019); Sagun et al. (2016; 2017); Yao et al. (2018); Fort & Scherlis (2019); Papyan (2020); Gur-Ari et al. (2018); Fort & Ganguli (2019); Papyan (2019); Fort et al. (2020), or Bahri et al. (2020) for a review. But to our knowledge, the Gaussian width of sublevel sets projected onto a sphere surrounding initialization, a key quantity that determines the threshold training dimension, has not been extensively explored in deep learning. Another motivation for our work is contextualizing the efficacy of diverse, more sophisticated network pruning methods like lottery tickets (Frankle & Carbin, 2019; Frankle et al., 2019). Further work in this area revealed the advantages obtained by pruning networks not at initialization (Frankle & Carbin, 2019; Lee et al., 2018; Wang et al., 2020; Tanaka et al., 2020) but slightly later in training (Frankle et al., 2020), highlighting the importance of early stages of training (Jastrzebski et al.
, 2020; Lewkowycz et al., 2020). We find empirically, as well as explain theoretically, that even when training within random subspaces, one can obtain higher accuracies for a given training dimension if one starts from a slightly pre-trained, or burned-in, initialization as opposed to a random initialization. 2 AN EMPIRICALLY OBSERVED PHASE TRANSITION IN TRAINING SUCCESS . We begin with the empirical observation of a phase transition in the probability of hitting a loss sublevel set when training within a random subspace of a given training dimension, starting from some initialization. Before presenting this phase transition, we first define loss sublevel sets and two different methods for training within a random subspace that differ only in the quality of the initialization. In the next section we develop theory for the nature of this phase transition. Loss sublevel sets. Let $\hat{y} = f_w(x)$ be a neural network with weights $w \in \mathbb{R}^D$ and inputs $x \in \mathbb{R}^k$. For a given training set $\{x_n, y_n\}_{n=1}^N$ and loss function $\ell$, the empirical loss landscape is given by $\mathcal{L}(w) = \frac{1}{N}\sum_{n=1}^{N} \ell(f_w(x_n), y_n)$. Though our theory is general, we focus on classification for our experiments, where $y \in \{0,1\}^C$ is a one-hot encoding of C class labels, $\hat{y}$ is a vector of class probabilities, and $\ell(\hat{y}, y)$ is the cross-entropy loss. In general, the loss sublevel set $S(\epsilon)$ at a desired value of loss $\epsilon$ is the set of all points for which the loss is less than or equal to $\epsilon$: $S(\epsilon) := \{w \in \mathbb{R}^D : \mathcal{L}(w) \le \epsilon\}$. (2.1) Random affine subspace. Consider a d-dimensional random affine hyperplane contained in D-dimensional weight space, parameterized by $\theta \in \mathbb{R}^d$: $w(\theta) = A\theta + w_0$. Here $A \in \mathbb{R}^{D \times d}$ is a random Gaussian matrix with columns normalized to 1 and $w_0 \in \mathbb{R}^D$ a random weight initialization by standard methods.
To train within this subspace, we initialize $\theta = 0$, which corresponds to randomly initializing the network at $w_0$, and we minimize $\mathcal{L}(w(\theta))$ with respect to $\theta$. Burn-in affine subspace. Alternatively, we can initialize the network with parameters $w_0$ and train the network in the full space for some number of iterations t, arriving at the parameters $w_t$. We can then construct the random burn-in subspace $w(\theta) = A\theta + w_t$, (2.2) with A chosen randomly as before, and then subsequently train within this subspace by minimizing $\mathcal{L}(w(\theta))$ with respect to $\theta$. The random affine subspace is identical to the burn-in affine subspace but with t = 0. Exploring the properties of training within burn-in as opposed to random affine subspaces enables us to explore the impact of the quality of the initialization, after burning in some information from the training data, on the success of subsequent restricted training. Success probability in hitting a sublevel set. In either training method, achieving $\mathcal{L}(w(\theta)) = \epsilon$ implies that the intersection between our random or burn-in affine subspace and the loss sublevel set $S(\epsilon')$ is non-empty for all $\epsilon' \ge \epsilon$. As both the subspace A and the initialization $w_0$ leading to $w_t$ are random, we are interested in the success probability $P_s(d, \epsilon, t)$ that a burn-in (or random, when t = 0) subspace of training dimension d actually intersects a loss sublevel set $S(\epsilon)$: $P_s(d, \epsilon, t) \equiv P[S(\epsilon) \cap \{w_t + \mathrm{span}(A)\} \neq \emptyset]$. (2.3) Here, $\mathrm{span}(A)$ denotes the column space of A. Note that in practice we cannot guarantee that we obtain the minimal loss in the subspace, so we use the best value achieved by Adam (Kingma & Ba, 2014) as an approximation. Thus the probability of achieving a given loss sublevel set via training constitutes an approximate lower bound on the probability in (2.3) that the subspace actually intersects the loss sublevel set. Threshold training dimension as a phase transition boundary.
We will find that for any fixed t, the success probability $P_s(d, \epsilon, t)$ in the $\epsilon$ by $d$ plane undergoes a sharp phase transition. In particular, for a desired (not too low) loss $\epsilon$ it transitions sharply from 0 to 1 as the training dimension d increases. To capture this transition we define: Definition 2.1. [Threshold training dimension] The threshold training dimension $d^*(\epsilon, t, \delta)$ is the minimal value of d such that $P_s(d, \epsilon, t) \ge 1 - \delta$ for some small $\delta > 0$. For any chosen criterion $\delta$ (and fixed t) we will see that the curve $d^*(\epsilon, t, \delta)$ forms a phase boundary in the $\epsilon$ by $d$ plane separating two phases of high and low success probability. This definition also gives an operational procedure to approximately measure the threshold training dimension: run either the random or burn-in affine subspace method repeatedly over a range of training dimensions d and record the lowest loss value found in the plane when optimizing via Adam. We can then construct the empirical probability across runs of hitting a given sublevel set $S(\epsilon)$, and the threshold training dimension is the lowest value of d for which this probability crosses $1 - \delta$ (where we employ $\delta = 0.1$). | The paper aims to provide a theoretical explanation for recent observations (lottery tickets, training in random subspaces, spanning pruning) that deep neural networks can be trained using fewer parameters than necessary. They provide the theoretical explanation using the so-called Gordon's escape theorem from high-dimensional geometry, which says that there is a phase transition in the success probability of training as the training dimension exceeds a threshold, and this threshold is rather tight. This is supported by experiments on various benchmark datasets that seem to exhibit the phenomenon of this phase transition. | SP:aaf89c1ff3dd3953a278d85284ed4ff34dbf5b9e |
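The operational procedure for measuring the threshold training dimension can be mimicked end to end on a toy quadratic loss, where the best loss inside an affine subspace has a closed form (least squares stands in for the Adam minimization used in the paper). The function names and the toy setup are illustrative assumptions, not the paper's code.

```python
import numpy as np

def subspace_best_loss(w_star, w0, d, rng):
    # Best achievable value of L(w) = 0.5||w - w_star||^2 inside the affine
    # subspace w(theta) = A theta + w0, solved exactly via least squares.
    D = w_star.size
    A = rng.normal(size=(D, d))
    A /= np.linalg.norm(A, axis=0, keepdims=True)
    theta, *_ = np.linalg.lstsq(A, w_star - w0, rcond=None)
    return 0.5 * float(np.sum((A @ theta + w0 - w_star) ** 2))

def threshold_dim(eps, D=30, delta=0.1, runs=10, seed=0):
    # d*(eps, delta): smallest d whose empirical success probability of
    # hitting the sublevel set S(eps) is at least 1 - delta.
    rng = np.random.default_rng(seed)
    w_star = rng.normal(size=D)
    for d in range(1, D + 1):
        hits = sum(
            subspace_best_loss(w_star, rng.normal(size=D), d, rng) <= eps
            for _ in range(runs)
        )
        if hits / runs >= 1 - delta:
            return d
    return None
```

Sweeping `eps` traces out the phase boundary: a loose sublevel set is hit by very low-dimensional subspaces, while a tight one requires `d` close to the full dimension D.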
How many degrees of freedom do we need to train deep networks: a loss landscape perspective | 1 INTRODUCTION . How many parameters are needed to train a neural network to a specified accuracy? Recent work on two fronts indicates that the answer for a given architecture and dataset pair is often much smaller than the total number of parameters used in modern large-scale neural networks. The first is the successful identification of lottery tickets, or sparse trainable subnetworks, through iterative training and pruning cycles (Frankle & Carbin, 2019). Such methods utilize information from training to identify lower-dimensional parameter spaces which can optimize to a similar accuracy as the full model. The second is the observation that constrained training within a random, low-dimensional affine subspace is often successful at reaching a high desired train and test accuracy on a variety of tasks, provided that the training dimension of the subspace is above an empirically observed threshold training dimension (Li et al., 2018). These results, however, leave open the question of why low-dimensional training is so successful and whether we can theoretically explain the existence of a threshold training dimension. In this work, we provide such an explanation in terms of the high-dimensional geometry of the loss landscape, the initialization, and the desired loss. In particular, we leverage a powerful tool from high-dimensional probability theory, namely Gordon's escape theorem, to show that this threshold training dimension is equal to the dimension of the full parameter space minus the squared Gaussian width of the desired loss sublevel set projected onto the unit sphere around initialization. This theory can then be applied in several ways to enhance our understanding of neural network loss landscapes.
For a quadratic well or second-order approximation around a local minimum, we derive an analytic bound on this threshold training dimension in terms of the Hessian spectrum and the distance of the initialization from the minimum. For general models, this relationship can be used in reverse to measure important high-dimensional properties of loss landscape geometry. For example, by performing a tomographic exploration of the loss landscape, i.e., training within random subspaces of varying training dimension, we uncover a phase transition in the success probability of hitting a given loss sublevel set. The threshold training dimension is then the phase boundary in this transition, and our theory explains the dependence of the phase boundary on the desired loss sublevel set and the initialization, in terms of the Gaussian width of the loss sublevel set projected onto a sphere surrounding the initialization. Motivated by lottery tickets, we furthermore consider training not only within random dimensions, but also within optimized subspaces using information from training in the full space. Lottery tickets can be viewed as constructing an optimized, axis-aligned subspace, i.e., one where each subspace dimension corresponds to a single parameter. What would constitute an optimized choice for general subspaces? We propose two new methods: burn-in subspaces, which optimize the offset of the subspace by taking a few steps along a training trajectory, and lottery subspaces, determined by the span of gradients along a full training trajectory (Fig. 1). Burn-in subspaces in particular can be viewed as lowering the threshold training dimension by moving closer to the desired loss sublevel set. For all three methods, we empirically explore the threshold training dimension across a range of datasets and architectures.
Related Work: An important motivation of our work is the observation that training within a random, low-dimensional affine subspace can suffice to reach high training and test accuracies on a variety of tasks, provided the training dimension exceeds a threshold that was called the intrinsic dimension (Li et al., 2018) and which we call the threshold training dimension. However, Li et al. (2018) provided no theoretical explanation for this threshold and did not explore the dependence of this threshold on the quality of the initialization. Our primary goal is to provide a theoretical explanation for the existence of this threshold in terms of the geometry of the loss landscape and the quality of initialization. Indeed, understanding the geometry of high-dimensional error landscapes has been a subject of intense interest in deep learning; see, e.g., Dauphin et al. (2014); Goodfellow et al. (2014); Fort & Jastrzebski (2019); Ghorbani et al. (2019); Sagun et al. (2016; 2017); Yao et al. (2018); Fort & Scherlis (2019); Papyan (2020); Gur-Ari et al. (2018); Fort & Ganguli (2019); Papyan (2019); Fort et al. (2020), or Bahri et al. (2020) for a review. But to our knowledge, the Gaussian width of sublevel sets projected onto a sphere surrounding initialization, a key quantity that determines the threshold training dimension, has not been extensively explored in deep learning. Another motivation for our work is contextualizing the efficacy of diverse, more sophisticated network pruning methods like lottery tickets (Frankle & Carbin, 2019; Frankle et al., 2019). Further work in this area revealed the advantages obtained by pruning networks not at initialization (Frankle & Carbin, 2019; Lee et al., 2018; Wang et al., 2020; Tanaka et al., 2020) but slightly later in training (Frankle et al., 2020), highlighting the importance of early stages of training (Jastrzebski et al.
, 2020; Lewkowycz et al., 2020). We find empirically, as well as explain theoretically, that even when training within random subspaces, one can obtain higher accuracies for a given training dimension if one starts from a slightly pre-trained, or burned-in, initialization as opposed to a random initialization. 2 AN EMPIRICALLY OBSERVED PHASE TRANSITION IN TRAINING SUCCESS . We begin with the empirical observation of a phase transition in the probability of hitting a loss sublevel set when training within a random subspace of a given training dimension, starting from some initialization. Before presenting this phase transition, we first define loss sublevel sets and two different methods for training within a random subspace that differ only in the quality of the initialization. In the next section we develop theory for the nature of this phase transition. Loss sublevel sets. Let $\hat{y} = f_w(x)$ be a neural network with weights $w \in \mathbb{R}^D$ and inputs $x \in \mathbb{R}^k$. For a given training set $\{x_n, y_n\}_{n=1}^N$ and loss function $\ell$, the empirical loss landscape is given by $\mathcal{L}(w) = \frac{1}{N}\sum_{n=1}^{N} \ell(f_w(x_n), y_n)$. Though our theory is general, we focus on classification for our experiments, where $y \in \{0,1\}^C$ is a one-hot encoding of C class labels, $\hat{y}$ is a vector of class probabilities, and $\ell(\hat{y}, y)$ is the cross-entropy loss. In general, the loss sublevel set $S(\epsilon)$ at a desired value of loss $\epsilon$ is the set of all points for which the loss is less than or equal to $\epsilon$: $S(\epsilon) := \{w \in \mathbb{R}^D : \mathcal{L}(w) \le \epsilon\}$. (2.1) Random affine subspace. Consider a d-dimensional random affine hyperplane contained in D-dimensional weight space, parameterized by $\theta \in \mathbb{R}^d$: $w(\theta) = A\theta + w_0$. Here $A \in \mathbb{R}^{D \times d}$ is a random Gaussian matrix with columns normalized to 1 and $w_0 \in \mathbb{R}^D$ a random weight initialization by standard methods.
To train within this subspace, we initialize $\theta = 0$, which corresponds to randomly initializing the network at $w_0$, and we minimize $\mathcal{L}(w(\theta))$ with respect to $\theta$. Burn-in affine subspace. Alternatively, we can initialize the network with parameters $w_0$ and train the network in the full space for some number of iterations t, arriving at the parameters $w_t$. We can then construct the random burn-in subspace $w(\theta) = A\theta + w_t$, (2.2) with A chosen randomly as before, and then subsequently train within this subspace by minimizing $\mathcal{L}(w(\theta))$ with respect to $\theta$. The random affine subspace is identical to the burn-in affine subspace but with t = 0. Exploring the properties of training within burn-in as opposed to random affine subspaces enables us to explore the impact of the quality of the initialization, after burning in some information from the training data, on the success of subsequent restricted training. Success probability in hitting a sublevel set. In either training method, achieving $\mathcal{L}(w(\theta)) = \epsilon$ implies that the intersection between our random or burn-in affine subspace and the loss sublevel set $S(\epsilon')$ is non-empty for all $\epsilon' \ge \epsilon$. As both the subspace A and the initialization $w_0$ leading to $w_t$ are random, we are interested in the success probability $P_s(d, \epsilon, t)$ that a burn-in (or random, when t = 0) subspace of training dimension d actually intersects a loss sublevel set $S(\epsilon)$: $P_s(d, \epsilon, t) \equiv P[S(\epsilon) \cap \{w_t + \mathrm{span}(A)\} \neq \emptyset]$. (2.3) Here, $\mathrm{span}(A)$ denotes the column space of A. Note that in practice we cannot guarantee that we obtain the minimal loss in the subspace, so we use the best value achieved by Adam (Kingma & Ba, 2014) as an approximation. Thus the probability of achieving a given loss sublevel set via training constitutes an approximate lower bound on the probability in (2.3) that the subspace actually intersects the loss sublevel set. Threshold training dimension as a phase transition boundary.
We will find that for any fixed t, the success probability P_s(d, ε, t) in the ε by d plane undergoes a sharp phase transition. In particular, for a desired (not too low) loss ε it transitions sharply from 0 to 1 as the training dimension d increases. To capture this transition we define: Definition 2.1. [Threshold training dimension] The threshold training dimension d∗(ε, t, δ) is the minimal value of d such that P_s(d, ε, t) ≥ 1 − δ for some small δ > 0. For any chosen criterion δ (and fixed t) we will see that the curve d∗(ε, t, δ) forms a phase boundary in the ε by d plane separating two phases of high and low success probability. This definition also gives an operational procedure to approximately measure the threshold training dimension: run either the random or burn-in affine subspace method repeatedly over a range of training dimensions d and record the lowest loss value found when optimizing via Adam. We can then construct the empirical probability across runs of hitting a given sublevel set S(ε), and the threshold training dimension is the lowest value of d for which this probability crosses 1 − δ (where we employ δ = 0.1). | The paper studies DNN training when restricted to a random affine subspace (centered either at the initialization point $x_0$ or at a point $x_t$ obtained by training an unrestricted network for $t$ steps, which they call burn-in), and the probability that a loss lower than $\epsilon$ can be reached with a subspace of dimension $d$. They observe a sharp transition between pairs $(\epsilon,d)$ where this probability is either almost $0$ or $1$. They observe that the lower the dimension, the less likely one is to reach a loss $\epsilon$. The authors show that this transition can be understood in terms of the Gaussian width of the projection of the sublevel set $S(\epsilon)$ to the unit ball around the center point $x_0$ (or $x_t$), which they call the local angular dimension.
Gordon's escape theorem makes it possible to bound the transition between the two phases as a function of the local angular dimension. Calculating the local angular dimension of the sublevel sets of the loss of DNNs is difficult; instead, the authors study a quadratic loss, giving an approximation for the local angular dimension in this case. They then compare the empirical transition between the two phases with the bound in terms of the local angular dimension, and find good agreement with the theory. Finally, they propose a notion of lottery subspaces inspired by the lottery tickets paper and compare their results. | SP:aaf89c1ff3dd3953a278d85284ed4ff34dbf5b9e |
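The operational procedure for measuring the threshold training dimension of Definition 2.1 amounts to a scan over d; a small sketch with hypothetical per-dimension best-loss records (the data values here are illustrative, not from the paper):

```python
import numpy as np

def threshold_training_dimension(losses_by_d, eps, delta=0.1):
    """Smallest training dimension d whose empirical success probability
    of hitting the sublevel set S(eps) is at least 1 - delta (Definition
    2.1); returns None if no scanned d qualifies."""
    for d in sorted(losses_by_d):
        p_success = np.mean(np.asarray(losses_by_d[d]) <= eps)
        if p_success >= 1.0 - delta:
            return d
    return None

# Hypothetical best-loss records from repeated subspace-training runs.
runs = {
    2:  [0.90, 0.80, 0.85, 0.95],
    5:  [0.60, 0.40, 0.55, 0.70],
    10: [0.15, 0.10, 0.20, 0.12],
}
d_star = threshold_training_dimension(runs, eps=0.25)  # smallest d hitting S(0.25)
```

Raising eps lowers the returned threshold dimension, tracing out the phase boundary d∗(ε, t, δ) in the ε by d plane.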
Topologically Regularized Data Embeddings | 1 INTRODUCTION. Motivation Modern data often arrives in complex forms that complicate their analysis. For example, high-dimensional data cannot be visualized directly, whereas relational data such as graphs lack the natural vectorized structure required by various machine learning models (Bhagat et al., 2011; Kazemi & Poole, 2018; Goyal & Ferrara, 2018). Representation learning aims to derive mathematically and computationally convenient representations to process and learn from such data. However, obtaining an effective representation is often challenging, for example, due to the accumulation of noise in high-dimensional biological expression data (Vandaele et al., 2021b). In other examples such as community detection in social networks, graph embeddings struggle to clearly separate communities with interconnections between them. In such cases, expert prior knowledge of the topological model may improve learning from, visualizing, and interpreting the data. Unfortunately, a general tool for incorporating prior topological knowledge in representation learning is lacking. In this paper, we introduce such a tool under the name of topological regularization. Here, we build on the recently developed differentiation frameworks for optimizing data to capture topological properties of interest (Brüel-Gabrielsson et al., 2020; Solomon et al., 2021; Carriere et al., 2021). Unfortunately, such topological optimization has been poorly studied within the context of representation learning. For example, the topological losses used are indifferent to any structure other than the topological one, such as neighborhoods, which may be useful for learning. Therefore, topological optimization often destroys natural and informative properties of the data in favor of the topological loss.
Our proposed method of topological regularization effectively resolves this by learning an embedding representation that incorporates the topological prior. As we will see in this paper, these priors can be directly postulated through topological loss functions. For example, if the prior is that the data lies on a circular model, we design a loss function that is lower whenever a more prominent cycle is present in the embedding. By extending the previously suggested topological losses to fit a wider set of models, we show that topological regularization effectively embeds data according to a variety of topological priors, ranging from clusters, cycles, and flares, to any combination of these. Related Work Certain methods that incorporate topological information into representation learning have already been developed. For example, Deep Embedded Clustering (Xie et al., 2016) simultaneously learns feature representations and cluster assignments using deep neural networks. Constrained embeddings of Euclidean data on spheres have also been studied by Bai et al. (2015). However, such methods often require extensive development for one particular kind of input data and topological model. Contrary to this, incorporating topological optimization into representation learning provides a simple yet versatile approach towards combining any data embedding method with topological priors, one that generalizes well to any input data as long as the output is a point cloud. Topological autoencoders (Moor et al., 2020) already combine topological optimization with a data embedding procedure. The main difference here is that the topological information used for optimization is obtained from the original high-dimensional data, and not passed as a prior. While this may sound like a major advantage—and certainly can be, as shown by Moor et al.
(2020)—obtaining such topological information heavily relies on distances between observations, which are often meaningless and unstable in high dimensions (Aggarwal et al., 2001). Furthermore, certain constructions such as the α-filtration obtained from the Delaunay triangulation—which we will use extensively and is further discussed in Appendix A—are expensive to obtain from high-dimensional data (Cignoni et al., 1998), and are therefore best computed from the low-dimensional embedding. Our work builds on a series of recent papers (Brüel-Gabrielsson et al., 2020; Solomon et al., 2021; Carriere et al., 2021), which showed that topological optimization is possible in various settings and developed its mathematical foundation. However, studying the use of topological optimization for data embedding and visualization applications, as well as the new losses we develop for this purpose and the insights we derive from them in this paper, are, to the best of our knowledge, novel. Contributions We include a sufficient background on persistent homology—the method behind topological optimization—in Appendix A (note that all of its concepts important for this paper are summarized in Figure 1). We summarize the previous idea behind topological optimization of point clouds (Section 2.1). We introduce a new set of losses to model a wider variety of shapes in a natural manner (Section 2.2). We show how these can be used to topologically regularize embedding methods for which the output is a point cloud (Equation (1)). We include experiments on synthetic and real data that show the usefulness and versatility of topological regularization, and provide additional insights into the performance of data embedding methods (Section 3 & Appendix B). We discuss open problems in topological representation learning and conclude our work (Section 4). 2 METHODS.
The main purpose of this paper is to present a method to incorporate prior topological knowledge in a point cloud embedding E (dimensionality reduction, graph embedding, ...) of a data set X. As will become clear below, these topological priors can be directly postulated through topological loss functions Ltop. Then, the goal is to find an embedding that minimizes a total loss Ltot(E, X) := Lemb(E, X) + λtop Ltop(E), (1) where Lemb is a loss that aims to preserve structural attributes of the original data, and λtop > 0 controls the strength of topological regularization. Note that X itself is not required to be a point cloud, or to reside in the same space as E, which is especially useful for representation learning. In this section, we mainly focus on topological optimization of point clouds, that is, the loss Ltop. The basic idea behind this recently introduced method—as presented by Brüel-Gabrielsson et al. (2020)—is illustrated in Section 2.1. We show that direct topological optimization may neglect important structural information such as neighborhoods, which can effectively be resolved through (1). Hence, as we will also see in Section 3, while representation learning may benefit from topological losses for incorporating prior topological knowledge, topological optimization itself may also benefit from other structural losses to represent the topological prior in a more truthful manner. Nevertheless, some topological models remain difficult to represent naturally through topological optimization. Therefore, we introduce a new set of topological losses, and provide an overview of how different models can be postulated through them in Section 2.2. Experiments with and comparisons to topological regularization of embeddings through (1) will be presented in Section 3. 2.1 BACKGROUND ON TOPOLOGICAL OPTIMIZATION OF POINT CLOUDS.
Topological optimization is performed through a topological loss function evaluated on one or more persistence diagrams (Barannikov, 1994; Carlsson, 2009). These diagrams—obtained through persistent homology as formally discussed in Appendix A—summarize all topological holes (connected components, cycles, voids, ...) in the data, from the finest to the coarsest, as illustrated in Figure 1. While methods that learn from persistent homology are now both well developed and diverse (Pun et al., 2018), optimizing the data representation for its persistent homology only gained recent attention (Brüel-Gabrielsson et al., 2020; Solomon et al., 2021; Carriere et al., 2021). Persistent homology has a rather abstract mathematical foundation within algebraic topology (Hatcher, 2002), and its computation is inherently combinatorial (Barannikov, 1994; Zomorodian & Carlsson, 2005). This complicates working with usual derivatives for optimization. To accommodate this, topological optimization makes use of Clarke subderivatives (Clarke, 1990), whose applicability to persistence builds on arguments from o-minimal geometry (van den Dries, 1998; Carriere et al., 2021). Fortunately, thanks to the recent work of Brüel-Gabrielsson et al. (2020) and Carriere et al. (2021), powerful tools for topological optimization have been developed for software libraries such as PyTorch and TensorFlow, allowing their application without deeper knowledge of these subjects. Topological optimization optimizes the data representation with respect to topological information summarized by its persistence diagram(s) D. We will use the approach by Brüel-Gabrielsson et al. (2020), where (birth, death) tuples (b1, d1), (b2, d2), ..., (b|D|, d|D|) in D are first ordered by decreasing persistence dk − bk. The points (b, ∞), usually plotted on top of the diagram such as in Figure 1b, form the essential part of D.
The points with finite coordinates form the regular part of D. In the case of point clouds, one and only one topological hole, i.e., a connected component born at time α = 0, will always persist indefinitely. Other gaps and holes will eventually be filled (Figure 1). Thus, we only optimize for the regular part in this paper. This is done through a topological loss function, which for a choice of i ≤ j (which, along with the dimension of topological hole, will specify our topological prior as we will see below) and a function g : R² → R, is defined as Ltop(D) := ∑_{k=i, dk<∞}^{j} g(bk, dk), where d1 − b1 ≥ d2 − b2 ≥ .... (2) It turns out that for many useful definitions of g, Ltop(D) has a well-defined Clarke subdifferential with respect to the parameters defining the filtration from which the persistence diagram D is obtained. In this paper, we will consistently use the α-filtration as shown in Figure 1a (see Appendix A for its formal definition), and these parameters are entire point clouds (in this paper embeddings) E ∈ (R^d)^n of size n in d-dimensional Euclidean space. Ltop(D) can then be easily optimized with respect to these parameters through stochastic subgradient algorithms (Carriere et al., 2021). As it directly measures the prominence of topological holes, we let g : R² → R : (b, d) ↦ µ(d − b) be proportional to the persistence function. By ordering the points by persistence, Ltop is a function of persistence, i.e., it is invariant to permutations of the points in D (Carriere et al., 2021). The factor of proportionality µ ∈ {1, −1} indicates whether we want to minimize (µ = 1) or maximize (µ = −1) persistence, i.e., the prominence of topological holes. Thus, µ determines whether more clusters, cycles, ..., should be present (µ = −1) or missing (µ = 1). The loss (2) then reduces to Ltop(E) := Ltop(D) = µ ∑_{k=i, dk<∞}^{j} (dk − bk), where d1 − b1 ≥ d2 − b2 ≥ .... (3) Here, the data matrix E (in this paper the embedding) defines the diagram D through persistent homology of the α-filtration of E, together with a persistence (topological hole) dimension to optimize for. For example, consider (3) with i = 2, j = ∞, µ = 1, restricted to 0-dimensional persistence (measuring the prominence of connected components) of the α-filtration. Figure 2 shows the data from Figure 1 optimized for this loss function over various numbers of epochs. The optimized point cloud quickly resembles a single connected component for smaller numbers of epochs. This is the single goal of the current loss, which neglects all other structural properties of the data, such as its underlying cycles (e.g., the circular hole in the 'R') or local neighborhoods. Larger numbers of epochs mainly affect the scale of the data. While this scale has an absolute effect on the total persistence, and thus the loss, the point cloud visually represents a single connected topological component equally well. We also observe that while local neighborhoods are preserved well during the first epochs simply by nature of topological optimization, they are increasingly distorted for a larger number of epochs. | This paper proposes a topological regularization method for incorporating topological prior knowledge for shaping data embeddings. This is achieved by introducing a new family of loss functions based on the characteristics of persistence diagrams. The results of the empirical evaluation suggest that the embeddings produced with the proposed method better capture the topological aspects of input data, by taking into account the prior topological knowledge. | SP:950d611a139d1be5839d7f09e0aa9f2393871cc4 |
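Loss (3) restricted to 0-dimensional persistence can be made concrete with a small self-contained sketch: for a Euclidean point cloud, the finite 0-dimensional bars of a Vietoris–Rips-style filtration are born at 0 and die at the minimum-spanning-tree edge lengths. Note this is an assumed simplification for illustration — the paper uses the α-filtration, whose death times differ — and all function names here are ours, not the paper's.

```python
import numpy as np
from itertools import combinations

def zero_dim_persistences(points):
    """Finite 0-dimensional persistence values of a Euclidean point
    cloud via Kruskal's MST: every connected component is born at 0 and
    dies at the length of the MST edge that merges it away."""
    n = len(points)
    edges = sorted(
        (float(np.linalg.norm(points[i] - points[j])), i, j)
        for i, j in combinations(range(n), 2)
    )
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)               # one component dies at scale w
    return sorted(deaths, reverse=True)    # ordered by decreasing persistence

def topo_loss(points, i=2, j=None, mu=1.0):
    """Loss (3) for 0-dimensional persistence: mu times the sum of the
    i-th through j-th most persistent bars, 1-indexed over all bars,
    with bar k = 1 being the essential (infinitely persistent) one."""
    pers = zero_dim_persistences(points)   # finite bars: k = 2, 3, ...
    j = len(pers) + 1 if j is None else j
    return mu * sum(pers[i - 2 : j - 1])

# Two well-separated pairs of points: the most persistent finite bar is
# the gap between the two clusters.
two_clusters = np.array([[0.0, 0.0], [0.0, 0.1], [5.0, 0.0], [5.0, 0.1]])
```

With i = 2, j = ∞, µ = 1 the loss penalizes all finite components, so minimizing it pulls the cloud toward a single component; i = 3 would instead preserve the two-cluster structure while collapsing smaller gaps.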
Topologically Regularized Data Embeddings | 1 INTRODUCTION. Motivation Modern data often arrives in complex forms that complicate their analysis. For example, high-dimensional data cannot be visualized directly, whereas relational data such as graphs lack the natural vectorized structure required by various machine learning models (Bhagat et al., 2011; Kazemi & Poole, 2018; Goyal & Ferrara, 2018). Representation learning aims to derive mathematically and computationally convenient representations to process and learn from such data. However, obtaining an effective representation is often challenging, for example, due to the accumulation of noise in high-dimensional biological expression data (Vandaele et al., 2021b). In other examples such as community detection in social networks, graph embeddings struggle to clearly separate communities with interconnections between them. In such cases, expert prior knowledge of the topological model may improve learning from, visualizing, and interpreting the data. Unfortunately, a general tool for incorporating prior topological knowledge in representation learning is lacking. In this paper, we introduce such a tool under the name of topological regularization. Here, we build on the recently developed differentiation frameworks for optimizing data to capture topological properties of interest (Brüel-Gabrielsson et al., 2020; Solomon et al., 2021; Carriere et al., 2021). Unfortunately, such topological optimization has been poorly studied within the context of representation learning. For example, the topological losses used are indifferent to any structure other than the topological one, such as neighborhoods, which may be useful for learning. Therefore, topological optimization often destroys natural and informative properties of the data in favor of the topological loss.
Our proposed method of topological regularization effectively resolves this by learning an embedding representation that incorporates the topological prior. As we will see in this paper, these priors can be directly postulated through topological loss functions. For example, if the prior is that the data lies on a circular model, we design a loss function that is lower whenever a more prominent cycle is present in the embedding. By extending the previously suggested topological losses to fit a wider set of models, we show that topological regularization effectively embeds data according to a variety of topological priors, ranging from clusters, cycles, and flares, to any combination of these. Related Work Certain methods that incorporate topological information into representation learning have already been developed. For example, Deep Embedded Clustering (Xie et al., 2016) simultaneously learns feature representations and cluster assignments using deep neural networks. Constrained embeddings of Euclidean data on spheres have also been studied by Bai et al. (2015). However, such methods often require extensive development for one particular kind of input data and topological model. Contrary to this, incorporating topological optimization into representation learning provides a simple yet versatile approach towards combining any data embedding method with topological priors, one that generalizes well to any input data as long as the output is a point cloud. Topological autoencoders (Moor et al., 2020) already combine topological optimization with a data embedding procedure. The main difference here is that the topological information used for optimization is obtained from the original high-dimensional data, and not passed as a prior. While this may sound like a major advantage—and certainly can be, as shown by Moor et al.
(2020)—obtaining such topological information heavily relies on distances between observations, which are often meaningless and unstable in high dimensions (Aggarwal et al., 2001). Furthermore, certain constructions such as the α-filtration obtained from the Delaunay triangulation—which we will use extensively and is further discussed in Appendix A—are expensive to obtain from high-dimensional data (Cignoni et al., 1998), and are therefore best computed from the low-dimensional embedding. Our work builds on a series of recent papers (Brüel-Gabrielsson et al., 2020; Solomon et al., 2021; Carriere et al., 2021), which showed that topological optimization is possible in various settings and developed its mathematical foundation. However, studying the use of topological optimization for data embedding and visualization applications, as well as the new losses we develop for this purpose and the insights we derive from them in this paper, are, to the best of our knowledge, novel. Contributions We include a sufficient background on persistent homology—the method behind topological optimization—in Appendix A (note that all of its concepts important for this paper are summarized in Figure 1). We summarize the previous idea behind topological optimization of point clouds (Section 2.1). We introduce a new set of losses to model a wider variety of shapes in a natural manner (Section 2.2). We show how these can be used to topologically regularize embedding methods for which the output is a point cloud (Equation (1)). We include experiments on synthetic and real data that show the usefulness and versatility of topological regularization, and provide additional insights into the performance of data embedding methods (Section 3 & Appendix B). We discuss open problems in topological representation learning and conclude our work (Section 4). 2 METHODS.
The main purpose of this paper is to present a method to incorporate prior topological knowledge in a point cloud embedding E (dimensionality reduction, graph embedding, ...) of a data set X. As will become clear below, these topological priors can be directly postulated through topological loss functions Ltop. Then, the goal is to find an embedding that minimizes a total loss Ltot(E, X) := Lemb(E, X) + λtop Ltop(E), (1) where Lemb is a loss that aims to preserve structural attributes of the original data, and λtop > 0 controls the strength of topological regularization. Note that X itself is not required to be a point cloud, or to reside in the same space as E, which is especially useful for representation learning. In this section, we mainly focus on topological optimization of point clouds, that is, the loss Ltop. The basic idea behind this recently introduced method—as presented by Brüel-Gabrielsson et al. (2020)—is illustrated in Section 2.1. We show that direct topological optimization may neglect important structural information such as neighborhoods, which can effectively be resolved through (1). Hence, as we will also see in Section 3, while representation learning may benefit from topological losses for incorporating prior topological knowledge, topological optimization itself may also benefit from other structural losses to represent the topological prior in a more truthful manner. Nevertheless, some topological models remain difficult to represent naturally through topological optimization. Therefore, we introduce a new set of topological losses, and provide an overview of how different models can be postulated through them in Section 2.2. Experiments with and comparisons to topological regularization of embeddings through (1) will be presented in Section 3. 2.1 BACKGROUND ON TOPOLOGICAL OPTIMIZATION OF POINT CLOUDS.
Topological optimization is performed through a topological loss function evaluated on one or more persistence diagrams (Barannikov, 1994; Carlsson, 2009). These diagrams—obtained through persistent homology as formally discussed in Appendix A—summarize all topological holes (connected components, cycles, voids, ...) in the data, from the finest to the coarsest, as illustrated in Figure 1. While methods that learn from persistent homology are now both well developed and diverse (Pun et al., 2018), optimizing the data representation for its persistent homology only gained recent attention (Brüel-Gabrielsson et al., 2020; Solomon et al., 2021; Carriere et al., 2021). Persistent homology has a rather abstract mathematical foundation within algebraic topology (Hatcher, 2002), and its computation is inherently combinatorial (Barannikov, 1994; Zomorodian & Carlsson, 2005). This complicates working with usual derivatives for optimization. To accommodate this, topological optimization makes use of Clarke subderivatives (Clarke, 1990), whose applicability to persistence builds on arguments from o-minimal geometry (van den Dries, 1998; Carriere et al., 2021). Fortunately, thanks to the recent work of Brüel-Gabrielsson et al. (2020) and Carriere et al. (2021), powerful tools for topological optimization have been developed for software libraries such as PyTorch and TensorFlow, allowing their application without deeper knowledge of these subjects. Topological optimization optimizes the data representation with respect to topological information summarized by its persistence diagram(s) D. We will use the approach by Brüel-Gabrielsson et al. (2020), where (birth, death) tuples (b1, d1), (b2, d2), ..., (b|D|, d|D|) in D are first ordered by decreasing persistence dk − bk. The points (b, ∞), usually plotted on top of the diagram such as in Figure 1b, form the essential part of D.
The points with finite coordinates form the regular part of D. In the case of point clouds, one and only one topological hole, i.e., a connected component born at time α = 0, will always persist indefinitely. Other gaps and holes will eventually be filled (Figure 1). Thus, we only optimize for the regular part in this paper. This is done through a topological loss function, which for a choice of i ≤ j (which, along with the dimension of topological hole, will specify our topological prior as we will see below) and a function g : R² → R, is defined as Ltop(D) := ∑_{k=i, dk<∞}^{j} g(bk, dk), where d1 − b1 ≥ d2 − b2 ≥ .... (2) It turns out that for many useful definitions of g, Ltop(D) has a well-defined Clarke subdifferential with respect to the parameters defining the filtration from which the persistence diagram D is obtained. In this paper, we will consistently use the α-filtration as shown in Figure 1a (see Appendix A for its formal definition), and these parameters are entire point clouds (in this paper embeddings) E ∈ (R^d)^n of size n in d-dimensional Euclidean space. Ltop(D) can then be easily optimized with respect to these parameters through stochastic subgradient algorithms (Carriere et al., 2021). As it directly measures the prominence of topological holes, we let g : R² → R : (b, d) ↦ µ(d − b) be proportional to the persistence function. By ordering the points by persistence, Ltop is a function of persistence, i.e., it is invariant to permutations of the points in D (Carriere et al., 2021). The factor of proportionality µ ∈ {1, −1} indicates whether we want to minimize (µ = 1) or maximize (µ = −1) persistence, i.e., the prominence of topological holes. Thus, µ determines whether more clusters, cycles, ..., should be present (µ = −1) or missing (µ = 1). The loss (2) then reduces to Ltop(E) := Ltop(D) = µ ∑_{k=i, dk<∞}^{j} (dk − bk), where d1 − b1 ≥ d2 − b2 ≥ .... (3) Here, the data matrix E (in this paper the embedding) defines the diagram D through persistent homology of the α-filtration of E, together with a persistence (topological hole) dimension to optimize for. For example, consider (3) with i = 2, j = ∞, µ = 1, restricted to 0-dimensional persistence (measuring the prominence of connected components) of the α-filtration. Figure 2 shows the data from Figure 1 optimized for this loss function over various numbers of epochs. The optimized point cloud quickly resembles a single connected component for smaller numbers of epochs. This is the single goal of the current loss, which neglects all other structural properties of the data, such as its underlying cycles (e.g., the circular hole in the 'R') or local neighborhoods. Larger numbers of epochs mainly affect the scale of the data. While this scale has an absolute effect on the total persistence, and thus the loss, the point cloud visually represents a single connected topological component equally well. We also observe that while local neighborhoods are preserved well during the first epochs simply by nature of topological optimization, they are increasingly distorted for a larger number of epochs. | The paper argues for including prior topological information about the structure between data points in some lower-dimensional embedding space, with the intention of improving the quality of the embeddings produced by tasks such as dimensionality reduction. The paper accomplishes this by adding a novel topological regularisation term to the standard embedding loss. The topological regularisation term incorporates the persistent homology of filtrations of the embedded data points. How this information is manipulated within the term encodes the prior knowledge the investigator wishes to use, such as that the embedding should comprise a single cycle. The paper supports its method with an array of qualitative and quantitative analyses on synthetic and real datasets.
| SP:950d611a139d1be5839d7f09e0aa9f2393871cc4 |
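The total loss (1) can be assembled generically from any differentiable embedding loss and topological loss. Below is a minimal numpy sketch in which Lemb is an illustrative stress-style loss (our assumption, not the paper's fixed choice) and Ltop is passed in as a pluggable function; all names are ours.

```python
import numpy as np

def embedding_loss(E, D_X):
    """Stress-style Lemb: squared mismatch between pairwise embedding
    distances and given data dissimilarities D_X."""
    diff = E[:, None, :] - E[None, :, :]
    D_E = np.sqrt((diff ** 2).sum(axis=-1))
    return ((D_E - D_X) ** 2).sum() / 2.0  # halve: the full matrix double-counts pairs

def total_loss(E, D_X, topo_loss_fn, lam_top=1.0):
    """Equation (1): Ltot(E, X) = Lemb(E, X) + lambda_top * Ltop(E)."""
    return embedding_loss(E, D_X) + lam_top * topo_loss_fn(E)

# Two points embedded at distance 1 while the data says distance 2,
# plus a placeholder topological term returning a constant.
E = np.array([[0.0, 0.0], [1.0, 0.0]])
D_X = np.array([[0.0, 2.0], [2.0, 0.0]])
value = total_loss(E, D_X, lambda emb: 0.5, lam_top=2.0)
```

In practice both terms would be differentiated jointly (e.g., in PyTorch, as the text notes) so that λtop trades off structure preservation against the topological prior.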
Topologically Regularized Data Embeddings | 1 INTRODUCTION. Motivation Modern data often arrives in complex forms that complicate their analysis. For example, high-dimensional data cannot be visualized directly, whereas relational data such as graphs lack the natural vectorized structure required by various machine learning models (Bhagat et al., 2011; Kazemi & Poole, 2018; Goyal & Ferrara, 2018). Representation learning aims to derive mathematically and computationally convenient representations to process and learn from such data. However, obtaining an effective representation is often challenging, for example, due to the accumulation of noise in high-dimensional biological expression data (Vandaele et al., 2021b). In other examples such as community detection in social networks, graph embeddings struggle to clearly separate communities with interconnections between them. In such cases, expert prior knowledge of the topological model may improve learning from, visualizing, and interpreting the data. Unfortunately, a general tool for incorporating prior topological knowledge in representation learning is lacking. In this paper, we introduce such a tool under the name of topological regularization. Here, we build on the recently developed differentiation frameworks for optimizing data to capture topological properties of interest (Brüel-Gabrielsson et al., 2020; Solomon et al., 2021; Carriere et al., 2021). Unfortunately, such topological optimization has been poorly studied within the context of representation learning. For example, the topological losses used are indifferent to any structure other than the topological one, such as neighborhoods, which may be useful for learning. Therefore, topological optimization often destroys natural and informative properties of the data in favor of the topological loss.
Our proposed method of topological regularization effectively resolves this by learning an embedding representation that incorporates the topological prior. As we will see in this paper, these priors can be directly postulated through topological loss functions. For example, if the prior is that the data lies on a circular model, we design a loss function that is lower whenever a more prominent cycle is present in the embedding. By extending the previously suggested topological losses to fit a wider set of models, we show that topological regularization effectively embeds data according to a variety of topological priors, ranging from clusters, cycles, and flares, to any combination of these. Related Work Certain methods that incorporate topological information into representation learning have already been developed. For example, Deep Embedded Clustering (Xie et al., 2016) simultaneously learns feature representations and cluster assignments using deep neural networks. Constrained embeddings of Euclidean data on spheres have also been studied by Bai et al. (2015). However, such methods often require extensive development for one particular kind of input data and topological model. Contrary to this, incorporating topological optimization into representation learning provides a simple yet versatile approach towards combining any data embedding method with topological priors, one that generalizes well to any input data as long as the output is a point cloud. Topological autoencoders (Moor et al., 2020) already combine topological optimization with a data embedding procedure. The main difference here is that the topological information used for optimization is obtained from the original high-dimensional data, and not passed as a prior. While this may sound like a major advantage—and certainly can be, as shown by Moor et al.
(2020)—obtaining such topological information heavily relies on distances between observations, which are often meaningless and unstable in high dimensions (Aggarwal et al., 2001). Furthermore, certain constructions such as the α-filtration obtained from the Delaunay triangulation—which we will use extensively and is further discussed in Appendix A—are expensive to obtain from high-dimensional data (Cignoni et al., 1998), and are therefore best computed from the low-dimensional embedding. Our work builds on a series of recent papers (Brüel-Gabrielsson et al., 2020; Solomon et al., 2021; Carriere et al., 2021), which showed that topological optimization is possible in various settings and developed the mathematical foundation thereof. However, studying the use of topological optimization for data embedding and visualization applications, as well as the new losses we develop for this purpose and the insights we derive from them in this paper, are, to the best of our knowledge, novel. Contributions We include a sufficient background on persistent homology—the method behind topological optimization—in Appendix A (note that all of its concepts important for this paper are summarized in Figure 1). We summarize the previous idea behind topological optimization of point clouds (Section 2.1). We introduce a new set of losses to model a wider variety of shapes in a natural manner (Section 2.2). We show how these can be used to topologically regularize embedding methods for which the output is a point cloud (Equation (1)). We include experiments on synthetic and real data that show the usefulness and versatility of topological regularization, and provide additional insights into the performance of data embedding methods (Section 3 & Appendix B). We discuss open problems in topological representation learning and conclude on our work (Section 4). 2 METHODS.
The main purpose of this paper is to present a method to incorporate prior topological knowledge in a point cloud embedding E (dimensionality reduction, graph embedding, ...) of a data set X. As will become clear below, these topological priors can be directly postulated through topological loss functions L_top. The goal is then to find an embedding that minimizes a total loss L_tot(E, X) := L_emb(E, X) + λ_top L_top(E), (1) where L_emb is a loss that aims to preserve structural attributes of the original data, and λ_top > 0 controls the strength of topological regularization. Note that X itself is not required to be a point cloud, or to reside in the same space as E, which is especially useful for representation learning. In this section, we mainly focus on topological optimization of point clouds, that is, on the loss L_top. The basic idea behind this recently introduced method—as presented by Brüel-Gabrielsson et al. (2020)—is illustrated in Section 2.1. We show that direct topological optimization may neglect important structural information such as neighborhoods, which can effectively be resolved through (1). Hence, as we will also see in Section 3, while representation learning may benefit from topological losses for incorporating prior topological knowledge, topological optimization itself may also benefit from other structural losses to represent the topological prior in a more truthful manner. Nevertheless, some topological models remain difficult to naturally represent through topological optimization. Therefore, we introduce a new set of topological losses, and provide an overview of how different models can be postulated through them in Section 2.2. Experiments with and comparisons to topological regularization of embeddings through (1) will be presented in Section 3. 2.1 BACKGROUND ON TOPOLOGICAL OPTIMIZATION OF POINT CLOUDS.
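A minimal numerical sketch of the regularized objective in Equation (1) may help fix ideas. Everything below is a hypothetical stand-in: a real L_top would come from differentiable persistent homology (as in Carriere et al. (2021)), whereas here a circle-fitting penalty merely plays the role of a "cycle" prior, and L_emb is a crude PCA-like surrogate. Only the structure of the objective, a data-fitting term plus λ_top times a shape term, matches the paper.

```python
import numpy as np

# Hedged sketch of Equation (1): L_tot(E, X) = L_emb(E, X) + lambda_top * L_top(E).
# Both losses here are illustrative stand-ins, not the paper's actual losses.

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))      # original data in 5 dimensions
E = X[:, :2].copy()               # initial 2-D embedding
lam_top = 0.5                     # strength of topological regularization
lr = 0.05

def total_loss(E, X):
    l_emb = np.sum((E - X[:, :2]) ** 2)        # stand-in L_emb: stay close to first 2 coords
    r = np.linalg.norm(E, axis=1)
    l_top = np.sum((r - 1.0) ** 2)             # stand-in L_top: "cycle prior" (unit circle)
    return l_emb + lam_top * l_top

def grad(E, X):
    r = np.linalg.norm(E, axis=1, keepdims=True) + 1e-12
    g_emb = 2.0 * (E - X[:, :2])
    g_top = 2.0 * (r - 1.0) * (E / r)
    return g_emb + lam_top * g_top

loss_before = total_loss(E, X)
for _ in range(200):              # plain gradient descent on L_tot
    E -= lr * grad(E, X)
loss_after = total_loss(E, X)
```

After optimization the embedding settles at a compromise between the data-fitting term and the circular prior, which is exactly the trade-off that λ_top controls in Equation (1).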
Topological optimization is performed through a topological loss function evaluated on one or more persistence diagrams (Barannikov, 1994; Carlsson, 2009). These diagrams—obtained through persistent homology as formally discussed in Appendix A—summarize all topological holes (connected components, cycles, voids, ...) in the data, from the finest to the coarsest, as illustrated in Figure 1. While methods that learn from persistent homology are now both well developed and diverse (Pun et al., 2018), optimizing the data representation for its persistent homology only gained recent attention (Brüel-Gabrielsson et al., 2020; Solomon et al., 2021; Carriere et al., 2021). Persistent homology has a rather abstract mathematical foundation within algebraic topology (Hatcher, 2002), and its computation is inherently combinatorial (Barannikov, 1994; Zomorodian & Carlsson, 2005). This complicates working with usual derivatives for optimization. To accommodate this, topological optimization makes use of Clarke subderivatives (Clarke, 1990), whose applicability to persistence builds on arguments from o-minimal geometry (van den Dries, 1998; Carriere et al., 2021). Fortunately, thanks to the recent work of Brüel-Gabrielsson et al. (2020) and Carriere et al. (2021), powerful tools for topological optimization have been developed for software libraries such as PyTorch and TensorFlow, allowing their application without deeper knowledge of these subjects. Topological optimization optimizes the data representation with respect to topological information summarized by its persistence diagram(s) D. We will use the approach by Brüel-Gabrielsson et al. (2020), where (birth, death) tuples (b_1, d_1), (b_2, d_2), ..., (b_|D|, d_|D|) in D are first ordered by decreasing persistence d_k − b_k. The points (b, ∞), usually plotted on top of the diagram such as in Figure 1b, form the essential part of D.
The points with finite coordinates form the regular part of D. In the case of point clouds, one and only one topological hole, i.e., a connected component born at time α = 0, will always persist indefinitely. Other gaps and holes will eventually be filled (Figure 1). Thus, we only optimize for the regular part in this paper. This is done through a topological loss function, which for a choice of i ≤ j (which, along with the dimension of topological hole, will specify our topological prior as we will see below) and a function g: R² → R, is defined as L_top(D) := Σ_{k=i, d_k<∞}^{j} g(b_k, d_k), where d_1 − b_1 ≥ d_2 − b_2 ≥ ... (2) It turns out that for many useful definitions of g, L_top(D) has a well-defined Clarke subdifferential with respect to the parameters defining the filtration from which the persistence diagram D is obtained. In this paper, we will consistently use the α-filtration as shown in Figure 1a (see Appendix A for its formal definition), and these parameters are entire point clouds (in this paper embeddings) E ∈ (R^d)^n of size n in the d-dimensional Euclidean space. L_top(D) can then be easily optimized with respect to these parameters through stochastic subgradient algorithms (Carriere et al., 2021). As it directly measures the prominence of topological holes, we let g: R² → R: (b, d) ↦ µ(d − b) be proportional to the persistence function. By ordering the points by persistence, L_top is a function of persistence, i.e., it is invariant to permutations of the points in D (Carriere et al., 2021). The factor of proportionality µ ∈ {1, −1} indicates whether we want to minimize (µ = 1) or maximize (µ = −1) persistence, i.e., the prominence of topological holes. Thus, µ determines whether more clusters, cycles, ..., should be present (µ = −1) or missing (µ = 1). The loss (2) then reduces to L_top(E) := L_top(D) = µ Σ_{k=i, d_k<∞}^{j} (d_k − b_k), where d_1 − b_1 ≥ d_2 − b_2 ≥ ... (3) Here, the data matrix E (in this paper the embedding) defines the diagram D through persistent homology of the α-filtration of E, together with a choice of persistence (topological hole) dimension to optimize for. For example, consider (3) with i = 2, j = ∞, µ = 1, restricted to 0-dimensional persistence (measuring the prominence of connected components) of the α-filtration. Figure 2 shows the data from Figure 1 optimized for this loss function over various numbers of epochs. The optimized point cloud quickly resembles a single connected component for smaller numbers of epochs. This is the single goal of the current loss, which neglects all other structural properties of the data such as its underlying cycles (e.g., the circular hole in the 'R') or local neighborhoods. Larger numbers of epochs mainly affect the scale of the data. While this scale has an absolute effect on the total persistence, and thus on the loss, the point cloud visually represents a single connected topological component equally well. We also observe that while local neighborhoods are preserved well during the first epochs simply by the nature of topological optimization, they are increasingly distorted for a larger number of epochs. | The paper develops a way to construct low-dimensional embeddings that combine an embedding cost (e.g. PCA, t-SNE, UMAP, etc.) with a topological cost, where the latter forces the embedding to have a particular topological structure as represented by persistent homology. As an example, one can construct a data embedding and force it to be a circle, or a figure-8, or have three clusters. The paper is based on a series of several 2020-2021 papers, with which I am unfamiliar, so it was difficult for me to judge the novelty. I found the paper interesting, and the method seems to work well.
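The behaviour of the loss in Equation (3) with i = 2, j = ∞, µ = 1 can be illustrated in one dimension, where no persistence library is needed. This is a hedged toy example: for points on the real line, the finite (birth, death) pairs of 0-dimensional persistent homology correspond (up to a constant factor depending on the filtration convention) to the gaps between sorted points, so total finite persistence is simply max(x) − min(x). Its subgradient moves only the two extreme points inward, and repeated steps merge the cloud into a single connected component, mirroring the optimization shown for Figure 2.

```python
import numpy as np

# Toy 1-D instance of Equation (3), i = 2, j = inf, mu = 1, for 0-dimensional
# persistence: total finite persistence = sum of gaps between sorted points
# = max(x) - min(x) (up to a filtration-dependent constant factor).

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=20)

def total_persistence(x):
    xs = np.sort(x)
    return float(np.sum(np.diff(xs)))   # equals max(x) - min(x)

pers_before = total_persistence(x)
lr = 0.1
for _ in range(300):
    # Subgradient of the sum of gaps: interior terms cancel, leaving
    # -1 at the current minimum and +1 at the current maximum.
    g = np.zeros_like(x)
    g[np.argmin(x)] = -1.0
    g[np.argmax(x)] = 1.0
    x -= lr * g
pers_after = total_persistence(x)       # near zero: a single component
```

As in the paper's Figure 2 discussion, only the loss's single goal (merging components) is pursued; nothing in the subgradient preserves the original neighborhood structure of the points.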
My biggest criticism is that the goal of the method appears somewhat artificial: low-dimensional embeddings are typically done for data exploration, but one does not want to enforce any particular structure for the purposes of data exploration, and the topological structure is typically not known a priori. That's why I am giving a borderline score. | SP:950d611a139d1be5839d7f09e0aa9f2393871cc4 |
Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective | 1 INTRODUCTION. While deep learning has significantly improved performance in many application domains, there are serious concerns about using deep neural networks in applications of a safety-critical nature. One major problem is adversarial samples (Szegedy et al., 2014; Madry et al., 2018), which are small imperceptible modifications of the image that change the decision of the classifier; another major problem is overconfident predictions (Nguyen et al., 2015; Hendrycks & Gimpel, 2017; Hein et al., 2019) for images not belonging to the classes of the actual task. Here, one distinguishes between far out-of-distribution data, e.g. different forms of noise or completely unrelated tasks like CIFAR-10 vs. SVHN, and close out-of-distribution data, which can for example occur in related image classification tasks where the semantic structure is very similar, e.g. CIFAR-10 vs. CIFAR-100. Both are important to distinguish from the in-distribution, but it is conceivable that close out-of-distribution data is the more difficult problem, with potentially fatal consequences: in an automated diagnosis system we want the system to recognize that it "does not know" when a new unseen disease comes in, rather than assigning high confidence to a known class, leading to fatal treatment decisions. Thus out-of-distribution awareness is a key property of trustworthy AI systems. In this paper, we focus on the setting of OOD detection where, during training time, there is no information available on the distribution of OOD inputs that might appear when the model is used for inference. A large number of different approaches to OOD detection, based on combinations of density estimation, classifier confidence, logit space energy, feature space geometry, behaviour on auxiliary tasks, and other principles, have been proposed to tackle this problem.
We give a detailed overview of existing OOD detection methods in Appendix D. However, most OOD detection papers are focused on establishing superior empirical detection performance and provide little theoretical background on the differences but also similarities to existing methods. In this paper we want to take a different path, as we believe that a solid theoretical basis is needed to make further progress in this field. Our goal is to identify, at least for a particular subclass of techniques, whether the differences are indeed due to a different underlying theoretical principle or whether they are due to the efficiency of different estimation techniques for the same underlying detection criterion, called the "scoring function". In some cases, we will see that one can even disentangle the estimation procedure from the scoring function, so that one can simulate several different scoring functions from one model's estimated quantities. A simple approach to OOD detection is to treat it as a binary discrimination problem between in- and out-of-distribution, or more generally to predict a score for how likely the input is OOD. In this paper, we show that from the perspective of Bayesian decision theory, several established methods are indeed equivalent to this binary discriminator. Differences arise mainly from i) the choice of the training out-distribution, e.g. the popular Outlier Exposure of Hendrycks et al. (2019a) has advocated the use of a rich and large set of natural images as a proxy for the distribution of natural images, and ii) differences in the estimation procedure. Concretely, the main contributions of this paper are: • We show that several OOD detection approaches are equivalent to the binary discriminator between in- and out-distribution when analyzing the rankings induced by the Bayes optimal classifier/density. • We derive the implicit scoring functions for the confidence loss (Lee et al.
, 2018a) used by Outlier Exposure (Hendrycks et al., 2019a) and for using an additional background class for the out-distribution (Thulasidasan et al., 2021). The confidence scoring function turns out not to be equivalent to the "optimal" scoring function of the binary discriminator when training and test out-distributions are the same. • We show that when training the binary discriminator between in- and out-distribution together with a standard classifier on the in-distribution in a shared fashion, the binary discriminator reaches state-of-the-art OOD detection performance. • We show that density estimation is equivalent to discrimination between the in-distribution and uniform noise, which explains the frequent observation that standard density estimates are not suitable for OOD detection. Even though we identify that a simple baseline is competitive with the state of the art, the main aim of this paper is a better understanding of the key components of different OOD detection methods and to identify the key properties which lead to SOTA OOD detection performance. All of our findings are supported by extensive experiments on CIFAR-10 and CIFAR-100, with evaluation on various challenging out-of-distribution test datasets. 2 MODELS FOR OOD DATA AND EQUIVALENCE OF OOD DETECTION SCORES. As most work in the literature, we consider OOD detection on a compact input domain X, where the most important example is image classification with X = [0, 1]^D. The most popular approach to OOD detection is the construction of an in-distribution scoring function f: X → R ∪ {±∞} such that f(x) tends to be smaller if x is drawn from an out-distribution than if it is drawn from the in-distribution. There is a variety of different performance metrics for this task, a very common one being the area under the receiver operating characteristic curve (AUC).
The AUC for a scoring function f distinguishing between an in-distribution p(x|i) and an out-distribution p(x|o) is given by AUC_f(p(x|i), p(x|o)) = E_{x∼p(x|i), y∼p(y|o)}[1_{f(x)>f(y)} + (1/2) 1_{f(x)=f(y)}]. (1) We define an equivalence of scoring functions based on their AUCs and will show that this equivalence implies equality of other employed performance metrics as well. Definition 1. Two scoring functions f and g are equivalent, and we write f ≅ g, if AUC_f(p(x|i), p(x|o)) = AUC_g(p(x|i), p(x|o)) (2) for all potential distributions p(x|i) and p(x|o). As the AUC does not depend on the actual values of f but just on the ranking induced by f, one obtains the following characterization of the equivalence of two scoring functions. Theorem 1. Two scoring functions f, g are equivalent, f ≅ g, if and only if there exists a strictly monotonically increasing function ϕ: range(g) → range(f) such that f = ϕ(g). Corollary 1. The equivalence between scoring functions in Def. 1 is an equivalence relation. Another metric is the false positive rate at a fixed true positive rate q, denoted as FPR@qTPR. A commonly used value for the TPR is 95%. The smaller the FPR@qTPR, the better the OOD discrimination performance. Lemma 1. Two equivalent scoring functions f ≅ g have the same FPR@qTPR for any pair of in- and out-distributions p(x|i), p(x|o) and for any chosen TPR q. In the next section, we use the previous results to show that the Bayes optimal scoring functions of several proposed methods for out-of-distribution detection are equivalent to the scoring functions of simple binary discriminators. 3 BAYES-OPTIMAL BEHAVIOUR OF BINARY DISCRIMINATORS AND COMMON OOD DETECTION METHODS.
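The rank-invariance behind Definition 1 and Theorem 1 is easy to check numerically. The sketch below (a toy setup with Gaussian score distributions, not the paper's experiments) computes the empirical version of the AUC in Equation (1) and verifies that composing the score with a strictly increasing function leaves the AUC unchanged.

```python
import numpy as np

# Empirical check of Theorem 1: the AUC of Equation (1) depends only on the
# ranking induced by the scoring function, so f and phi(f) with phi strictly
# increasing yield identical AUCs.

rng = np.random.default_rng(0)
s_in = rng.normal(1.0, 1.0, size=500)   # scores f(x) for x ~ p(x|i)
s_out = rng.normal(0.0, 1.0, size=500)  # scores f(y) for y ~ p(y|o)

def auc(s_in, s_out):
    # Empirical Equation (1): ties counted with weight 1/2.
    diff = s_in[:, None] - s_out[None, :]
    return float(np.mean((diff > 0) + 0.5 * (diff == 0)))

phi = np.tanh                            # any strictly increasing function
a_f = auc(s_in, s_out)
a_phi = auc(phi(s_in), phi(s_out))       # identical ranking, identical AUC
```

The same invariance argument underlies Lemma 1 for FPR@qTPR, since the false positive rate at a fixed true positive rate is also a function of the score ranking alone.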
In the following, we will show that the Bayes optimal functions of several existing approaches to OOD detection for unlabeled data are equivalent to a binary discriminator between the in- and a (training) out-distribution, whereas differences arise when one has labeled data. As the equivalences are based on the Bayes optimal solution, these are asymptotic statements, and thus it has to be noted that convergence to the Bayes optimal solution can be infinitely slow and that the methods can have implicit inductive biases. This is why we additionally support our findings with extensive experiments. 3.1 OOD DETECTION FOR METHODS USING UNLABELED DATA ONLY. We first provide a formal definition of OOD detection before we show the equivalence of density estimators resp. likelihoods to a binary discriminator. The OOD problem In order to make rigorous statements about the OOD detection problem, we first have to provide the mathematical basis for doing so. We assume that we are given an in-distribution p(x|i) and potentially also a training out-distribution p(x|o). At this particular point no labeled data is involved, so both of them are just distributions over X. For simplicity we assume in the following that they both have a density w.r.t. the Lebesgue measure on X = [0, 1]^d. We assume that in practice we get samples from the mixture distribution p(x) = p(x|i)p(i) + p(x|o)p(o) = p(x|i)p(i) + p(x|o)(1 − p(i)), (3) where p(i) is the overall probability of seeing in-distribution samples. In order to make the decision between in- and out-distribution for a given point x, it is then optimal to consider p(i|x) = p(x|i)p(i)/p(x) = p(x|i)p(i)/(p(x|i)p(i) + p(x|o)p(o)), (4) which is defined for all x ∈ [0, 1]^d with p(x) > 0 (assuming p(x|i) and p(x|o) can be written as densities).
If the training out-distribution is also the test out-distribution, then this is already optimal, but we would like the approach to generalize to other unseen test out-distributions, and thus an important choice is the training out-distribution p(x|o). Note that as p(i|x) is only well-defined for all x with p(x) > 0, it is reasonable to choose for p(x|o) a distribution with full support on [0, 1]^d, that is, p(x|o) > 0 for all x ∈ [0, 1]^d. In this case we ensure that the criterion with which we perform OOD detection is defined for any possible input x. This is desirable, as OOD detection should work for any possible input x ∈ X. Optimal prediction of a binary discriminator between in- and out-distribution We consider a binary discriminator with model parameters θ between the in- and (training) out-distribution, where p̂_θ(i|x) is the predicted probability for the in-distribution. Under the assumption that p(i) is the probability for in-distribution samples and using cross-entropy (which in this case is the logistic loss up to a constant global factor of log(2)), the expected loss becomes: min_θ p(i) E_{x∼p(x|i)}[−log p̂_θ(i|x)] + p(o) E_{x∼p(x|o)}[−log(1 − p̂_θ(i|x))]. (5) One can derive that the Bayes optimal classifier minimizing the expected loss has the predictive distribution: p̂_θ*(i|x) = p(x|i)p(i)/(p(x|i)p(i) + p(x|o)p(o)) = p(i|x). (6) Thus, at least for the training out-distribution, a binary classifier based on samples from the in- and (training) out-distribution would suffice to solve the OOD detection problem perfectly. Equivalence of density estimation and binary discrimination for OOD detection In this section we further analyze the relationship of common OOD detection approaches with the binary discriminator between in- and out-distribution. We start with density estimators sourced from generative models.
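The step from Equation (5) to Equation (6) can be checked pointwise: at a fixed x, the expected loss contributes p(x|i)p(i)(−log q) + p(x|o)p(o)(−log(1 − q)) with q = p̂_θ(i|x), and minimizing over q recovers the posterior. The sketch below uses hypothetical example values for the densities and prior, and a plain grid search in place of any derivation.

```python
import numpy as np

# Pointwise check of Equations (5)-(6): the q minimizing
#   -p(x|i)p(i) log q - p(x|o)p(o) log(1 - q)
# is the posterior q* = p(x|i)p(i) / (p(x|i)p(i) + p(x|o)p(o)) = p(i|x).

p_i = 0.7                       # p(i); p(o) = 1 - p(i)
px_i, px_o = 0.8, 0.3           # hypothetical densities p(x|i), p(x|o) at some x

q = np.linspace(1e-4, 1 - 1e-4, 100_000)
loss = -px_i * p_i * np.log(q) - px_o * (1 - p_i) * np.log(1 - q)
q_star = q[np.argmin(loss)]     # grid-search minimizer

posterior = px_i * p_i / (px_i * p_i + px_o * (1 - p_i))  # Equation (6)
```

Since the pointwise loss is strictly convex in q, the grid minimizer agrees with the closed-form posterior up to the grid resolution, which is what makes the Bayes optimal discriminator of Equation (6) well defined.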
A basic approach that is known to yield relatively weak OOD performance (Nalisnick et al., 2019; Ren et al., 2019; Xiao et al., 2020) is to directly utilize a model's estimate of the density p(x|i) at a sample input x. An improved density-based approach which uses perturbed in-distribution samples as a surrogate training out-distribution is the Likelihood Ratios method (Ren et al., 2019), which proposes to fit a generative model for both the in- and out-distribution and to use the ratio between the likelihoods output by the two models as a discriminative feature. We show that with respect to the scoring function, the correct density p(x|i) is equivalent to the Bayes optimal prediction of a binary discriminator between the in-distribution and uniform noise. Furthermore, the density ratio p(x|i)/p(x|o) is equivalent to the prediction of a binary discriminator between the two distributions on which the respective models used for density estimation have been trained. Because of this equivalence, we argue that the use of binary discriminators is a simple alternative to these methods due to its easier training procedure. While this equivalence is an asymptotic statement, the experimental comparisons in the appendix show that the methods perform similarly poorly compared to the methods using labeled data. We first prove the more general case of arbitrary likelihood ratios. In the following we use the abbreviation λ = p(o)/p(i) to save space and make the statements more concise. Lemma 2. Assume that p(x|i) and p(x|o) can be represented by densities and the support of p(x|o) covers the whole input domain X. Then p(x|i)/p(x|o) ≅ p(x|i)/(p(x|i) + λp(x|o)) for any λ > 0. This means that the likelihood ratio score of two optimal density estimators is equivalent to the in-distribution probability p̂_θ*(i|x) predicted by a binary discriminator, and this is true for any possible ratio of p(i) to p(o).
In the experiments below, we show that using such a discriminator has similar performance to the likelihood ratios of the different trained generative models. For the approaches that try to directly use the likelihood of a generative model as a discriminative feature, this means that their objective is equivalent to training a binary discriminator against uniform noise, whose density is p_Uniform(x) = p(x|o) = 1 at any x. Lemma 3. Assume that p(x|i) can be represented by a density. Then p(x|i) ≅ p(x|i)/(p(x|i) + λ) for any λ > 0. This provides additional evidence for why a purely density-based approach proves insufficient as an OOD detection score in many applications on the complex image domain: it is not reasonable to assume that a binary discriminator between certain classes of natural images on the one hand and uniform noise on the other hand provides much useful information about images from other classes or even about other nonsensical inputs. | The paper analyzes different OOD detection methods and shows that even though the formulations of many OOD methods differ, binary discrimination is equivalent to those different types of methods when the rankings induced by the Bayes optimal classifier are analyzed. They also derive implicit scoring functions for the confidence loss of OE and for BGC and compare them with "optimal" scoring functions. They also claim that training a binary discriminator (in-dist vs out-dist) in a shared fashion along with a standard classifier reaches state-of-the-art OOD performance. | SP:5bc0b94b888a61882f4b3dd3bf52a0dbaad8e900
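Lemma 2 rests on the observation that r ↦ r/(r + λ) is strictly increasing, so the likelihood ratio and the discriminator posterior induce the same ranking over inputs and are therefore equivalent scoring functions in the sense of Definition 1 (Lemma 3 is the special case p(x|o) ≡ 1). A small numerical check, with randomly generated hypothetical density values standing in for trained density estimators:

```python
import numpy as np

# Numerical check of Lemma 2: p(x|i)/p(x|o) and p(x|i)/(p(x|i) + lambda*p(x|o))
# induce the same ranking over inputs, since r/(r + lambda) is strictly
# increasing in r = p(x|i)/p(x|o).

rng = np.random.default_rng(2)
p_in = rng.uniform(0.01, 1.0, size=200)    # p(x|i) at 200 hypothetical inputs
p_out = rng.uniform(0.01, 1.0, size=200)   # p(x|o) at the same inputs
lam = 0.25                                  # lambda = p(o)/p(i)

ratio = p_in / p_out                        # likelihood ratio score
posterior = p_in / (p_in + lam * p_out)     # binary discriminator score, Eq. (6)

same_ranking = np.array_equal(np.argsort(ratio), np.argsort(posterior))
```

By Theorem 1, identical rankings imply identical AUC and FPR@qTPR for every pair of test distributions, which is exactly why the binary discriminator is argued to be a simpler drop-in for the Likelihood Ratios method.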
Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective | 1 INTRODUCTION . While deep learning has significantly improved performance in many application domains , there are serious concerns for using deep neural networks in applications which are of safety-critical nature . With one major problem being adversarial samples ( Szegedy et al. , 2014 ; Madry et al. , 2018 ) , which are small imperceptible modifications of the image that change the decision of the classifier , another major problem are overconfident predictions ( Nguyen et al. , 2015 ; Hendrycks & Gimpel , 2017 ; Hein et al. , 2019 ) for images not belonging to the classes of the actual task . Here , one distinguishes between far out-of-distribution data , e.g . different forms of noise or completely unrelated tasks like CIFAR-10 vs. SVHN , and close out-of-distribution data which can for example occur in related image classification tasks where the semantic structure is very similar e.g . CIFAR-10 vs. CIFAR-100 . Both are important to be distinguished from the in-distribution , but it is conceivable that close out-of-distribution data is the more difficult problem with potentially fatal consequences : in an automated diagnosis system we want that the system recognizes that it “ does not know ” when a new unseen disease comes in rather than assigning high confidence into a known class leading to fatal treatment decisions . Thus out-of-distribution awareness is a key property of trustworthy AI systems . In this paper , we focus on the setting of OOD detection where during training time , there is no information available on the distribution of OOD inputs that might appear when the model is used for inference . A large number of different approaches to OOD detection based on combinations of density estimation , classifier confidence , logit space energy , feature space geometry , behaviour on auxiliary tasks , and other principles has been proposed to tackle this problem . 
We give a detailed overview of existing OOD detection methods in Appendix D. However , most OOD detection papers are focused on establishing superior empirical detection performance and provide little theoretical background on differences but also similarities to existing methods . In this paper we want to take a different path as we believe that a solid theoretical basis is needed to make further progress in this field . Our goal is to identify , at least for a particular subclass of techniques , whether the differences are indeed due to a different underlying theoretical principle or whether they are due to the efficiency of different estimation techniques for the same underlying detection criterion , called “ scoring function ” . In some cases , we will see that one can even disentangle the estimation procedure from the scoring function , so that one can simulate several different scoring functions from one model ’ s estimated quantities . A simple approach to OOD detection is to treat it as a binary discrimination problem between inand out-of-distribution , or more generally predicting a score how likely the input is OOD . In this paper , we show that from the perspective of Bayesian decision theory , several established methods are indeed equivalent to this binary discriminator . Differences arise mainly from i ) the choice of the training out-distribution , e.g . the popular Outlier Exposure of Hendrycks et al . ( 2019a ) has advocated the use of a rich and large set of natural images as a proxy for the distribution of natural images , and ii ) differences in the estimation procedure . Concretely , the main contributions of this paper are : • We show that several OOD detection approaches are equivalent to the binary discriminator between in- and out-distribution when analyzing the rankings induced by the Bayes optimal classifier/density . • We derive the implicit scoring functions for the confidence loss ( Lee et al. 
, 2018a ) used by Outlier Exposure ( Hendrycks et al. , 2019a ) and for using an additional background class for the out-distribution ( Thulasidasan et al. , 2021 ) . The confidence scoring function turns out not to be equivalent to the “ optimal ” scoring function of the binary discriminator when training and test out-distributions are the same . • We show that when training the binary discriminator between in- and out-distribution together with a standard classifier on the in-distribution in a shared fashion , the binary discriminator reaches state-of-the-art OOD detection performance . • We show that density estimation is equivalent to discrimination between the in-distribution and uniform noise which explains the frequent observation that standard density estimates are not suitable for OOD detection . Even though we identify that a simple baseline is competitive with the state-of-the-art , the main aim of this paper is a better understanding of the key components of different OOD detection methods and to identify the key properties which lead to SOTA OOD detection performance . All of our findings are supported by extensive experiments on CIFAR-10 and CIFAR-100 with evaluation on various challenging out-of-distribution test datasets . 2 MODELS FOR OOD DATA AND EQUIVALENCE OF OOD DETECTION SCORES . As most work in the literature we consider OOD detection on a compact input domain X where the most important example is image classification where X = [ 0 , 1 ] D. The most popular approach to OOD detection is the construction of an in-distribution-scoring function f : X → R ∪ { ±∞ } such that f ( x ) tends to be smaller if x is drawn from an out-distribution than if it is drawn from the in-distribution . There is a variety of different performance metrics for this task , with a very common one being the area under the receiver-operator characteristic curve ( AUC ) . 
The AUC for a scoring function f distinguishing between an in-distribution p ( x|i ) and an out-distribution p ( x|o ) is given by AUCf ( p ( x|i ) , p ( x|o ) ) = E x∼p ( x|i ) y∼p ( y|o ) [ 1f ( x ) > f ( y ) + 1 2 1f ( x ) =f ( y ) ] . ( 1 ) We define an equivalence of scoring functions based on their AUCs and will show that this equivalence implies equality of other employed performance metrics as well . Definition 1 . Two scoring functions f and g are equivalent and we write f ∼= g if AUCf ( p ( x|i ) , p ( x|o ) ) = AUCg ( p ( x|i ) , p ( x|o ) ) ( 2 ) for all potential distributions p ( x|i ) and p ( x|o ) . As the AUC is not dependent on the actual values of f but just on the ranking induced by f one obtains the following characterization of the equivalence of two scoring functions . Theorem 1 . Two scoring functions f , g are equivalent f ∼= g if and only if there exists a strictly monotonously increasing function ϕ : range ( g ) → range ( f ) , such that f = ϕ ( g ) . Corollary 1 . The equivalence between scoring functions in Def . 1 is an equivalence relation . Another metric is the false positive rate at a fixed true positive rate q , denoted as FPR @ qTPR . A commonly used value for the TPR is 95 % . The smaller the FPR @ qTPR , the better the OOD discrimination performance . Lemma 1 . Two equivalent scoring functions f ∼= g have the same FPR @ qTPR for any pair of inand out-distributions p ( x|i ) , p ( x|o ) and for any chosen TPR q . In the next section , we use the previous results to show that the Bayes optimal scoring functions of , several proposed methods for out-of-distribution detection are equivalent to the scoring functions of simple binary discriminators . 3 BAYES-OPTIMAL BEHAVIOUR OF BINARY DISCRIMINATORS AND COMMON OOD DETECTION METHODS . 
In the following we will show that the Bayes optimal functions of several existing approaches to OOD detection for unlabeled data are equivalent to a binary discriminator between the in- and a (training) out-distribution, whereas differences arise when one has labeled data. As the equivalences are based on the Bayes optimal solution, these are asymptotic statements, and thus it has to be noted that convergence to the Bayes optimal solution can be arbitrarily slow and that the methods can have implicit inductive biases. This is why we additionally support our findings with extensive experiments. 3.1 OOD DETECTION FOR METHODS USING UNLABELED DATA ONLY. We first provide a formal definition of OOD detection before we show the equivalence of density estimators (resp. likelihoods) to a binary discriminator. The OOD problem In order to make rigorous statements about the OOD detection problem we first have to provide the mathematical basis for doing so. We assume that we are given an in-distribution p(x|i) and potentially also a training out-distribution p(x|o). At this point no labeled data is involved, so both of them are just distributions over X. For simplicity we assume in the following that they both have a density w.r.t. the Lebesgue measure on X = [0, 1]^d. We assume that in practice we get samples from the mixture distribution p(x) = p(x|i) p(i) + p(x|o) p(o) = p(x|i) p(i) + p(x|o) (1 − p(i)), (3) where p(i) is the overall probability of seeing in-distribution samples. In order to make the decision between in- and out-distribution for a given point x it is then optimal to consider p(i|x) = p(x|i) p(i) / p(x) = p(x|i) p(i) / ( p(x|i) p(i) + p(x|o) p(o) ), (4) which is defined for all x ∈ [0, 1]^d with p(x) > 0 (assuming p(x|i) and p(x|o) can be written as densities).
If the training out-distribution is also the test out-distribution then this is already optimal, but we would like the approach to generalize to other, unseen test out-distributions, and thus an important choice is the training out-distribution p(x|o). Note that since p(i|x) is only well-defined for x with p(x) > 0, it is reasonable to choose for p(x|o) a distribution with full support in [0, 1]^d, that is, p(x|o) > 0 for all x ∈ [0, 1]^d. In this case we ensure that the criterion with which we perform OOD detection is defined for any possible input x. This is desirable as OOD detection should work for any possible input x ∈ X. Optimal prediction of a binary discriminator between in- and out-distribution We consider a binary discriminator with model parameters θ between the in- and (training) out-distribution, where p̂_θ(i|x) is the predicted probability for the in-distribution. Under the assumption that p(i) is the probability of in-distribution samples and using cross-entropy (which in this case is the logistic loss up to a constant global factor of log(2)), the expected loss becomes: min_θ p(i) E_{x∼p(x|i)} [ −log p̂_θ(i|x) ] + p(o) E_{x∼p(x|o)} [ −log(1 − p̂_θ(i|x)) ]. (5) One can derive that the Bayes optimal classifier minimizing the expected loss has the predictive distribution: p̂_{θ*}(i|x) = p(x|i) p(i) / ( p(x|i) p(i) + p(x|o) p(o) ) = p(i|x). (6) Thus, at least for the training out-distribution, a binary classifier based on samples from the in- and (training) out-distribution would suffice to solve the OOD detection problem perfectly. Equivalence of density estimation and binary discrimination for OOD detection In this section we further analyze the relationship of common OOD detection approaches with the binary discriminator between in- and out-distribution. We start with density estimators sourced from generative models.
A basic approach that is known to yield relatively weak OOD performance (Nalisnick et al., 2019; Ren et al., 2019; Xiao et al., 2020) is directly utilizing a model's estimate of the density p(x|i) at a sample input x. An improved density-based approach, which uses perturbed in-distribution samples as a surrogate training out-distribution, is the Likelihood Ratios method (Ren et al., 2019), which proposes to fit a generative model for both the in- and out-distribution and to use the ratio of the likelihoods output by the two models as a discriminative feature. We show that, with respect to the scoring function, the correct density p(x|i) is equivalent to the Bayes optimal prediction of a binary discriminator between the in-distribution and uniform noise. Furthermore, the density ratio p(x|i)/p(x|o) is equivalent to the prediction of a binary discriminator between the two distributions on which the respective models used for density estimation have been trained. Given this equivalence, we argue that a binary discriminator is a simple alternative to these methods owing to its easier training procedure. While this equivalence is an asymptotic statement, the experimental comparisons in the appendix show that these methods perform similarly to each other, and poorly compared to the methods using labeled data. We first prove the more general case of arbitrary likelihood ratios. In the following we use the abbreviation λ = p(o)/p(i) to save space and make the statements more concise. Lemma 2. Assume that p(x|i) and p(x|o) can be represented by densities and the support of p(x|o) covers the whole input domain X. Then p(x|i)/p(x|o) ≅ p(x|i) / ( p(x|i) + λ p(x|o) ) for any λ > 0. This means that the likelihood ratio score of two optimal density estimators is equivalent to the in-distribution probability p̂_{θ*}(i|x) predicted by a binary discriminator, and this is true for any possible ratio of p(i) to p(o).
In the experiments below, we show that using such a discriminator performs similarly to the likelihood ratios of the different trained generative models. For the approaches that try to directly use the likelihood of a generative model as a discriminative feature, this means that their objective is equivalent to training a binary discriminator against uniform noise, whose density is p_Uniform(x) = p(x|o) = 1 at any x. Lemma 3. Assume that p(x|i) can be represented by a density. Then p(x|i) ≅ p(x|i) / ( p(x|i) + λ ) for any λ > 0. This provides additional evidence for why a purely density-based approach often proves to be insufficient as an OOD detection score on the complex image domain: it is not reasonable to assume that a binary discriminator between certain classes of natural images on the one hand and uniform noise on the other provides much useful information about images from other classes or even about other nonsensical inputs. | In this paper, the authors bring together recent work on the OOD detection problem and provide the reader with a sound mathematical framework for understanding similarities and differences among these methods. The framework is based on equivalence classes of scoring functions under the AUC/FPR@qTPR metrics and Bayes optimality. The tools introduced in the paper allow the authors to explain why different methods perform largely similarly, when one scoring function should be preferred over others, and to draw conclusions regarding training with/without labeled data for the in-distribution set. | SP:5bc0b94b888a61882f4b3dd3bf52a0dbaad8e900 |
Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective | 1 INTRODUCTION. While deep learning has significantly improved performance in many application domains, there are serious concerns about using deep neural networks in applications of a safety-critical nature. One major problem is adversarial samples (Szegedy et al., 2014; Madry et al., 2018), which are small imperceptible modifications of the image that change the decision of the classifier; another major problem is overconfident predictions (Nguyen et al., 2015; Hendrycks & Gimpel, 2017; Hein et al., 2019) for images not belonging to the classes of the actual task. Here, one distinguishes between far out-of-distribution data, e.g., different forms of noise or completely unrelated tasks like CIFAR-10 vs. SVHN, and close out-of-distribution data, which can for example occur in related image classification tasks where the semantic structure is very similar, e.g., CIFAR-10 vs. CIFAR-100. Both are important to distinguish from the in-distribution, but it is conceivable that close out-of-distribution data is the more difficult problem, with potentially fatal consequences: in an automated diagnosis system we want the system to recognize that it "does not know" when a new unseen disease comes in, rather than assigning high confidence to a known class, leading to fatal treatment decisions. Thus out-of-distribution awareness is a key property of trustworthy AI systems. In this paper, we focus on the setting of OOD detection where, at training time, there is no information available on the distribution of OOD inputs that might appear when the model is used for inference. A large number of different approaches to OOD detection, based on combinations of density estimation, classifier confidence, logit space energy, feature space geometry, behaviour on auxiliary tasks, and other principles, have been proposed to tackle this problem.
We give a detailed overview of existing OOD detection methods in Appendix D. However, most OOD detection papers focus on establishing superior empirical detection performance and provide little theoretical background on the differences, but also the similarities, to existing methods. In this paper we want to take a different path, as we believe that a solid theoretical basis is needed to make further progress in this field. Our goal is to identify, at least for a particular subclass of techniques, whether the differences are indeed due to a different underlying theoretical principle or whether they are due to the efficiency of different estimation techniques for the same underlying detection criterion, called the "scoring function". In some cases, we will see that one can even disentangle the estimation procedure from the scoring function, so that one can simulate several different scoring functions from one model's estimated quantities. A simple approach to OOD detection is to treat it as a binary discrimination problem between in- and out-of-distribution, or more generally to predict a score for how likely the input is to be OOD. In this paper, we show that, from the perspective of Bayesian decision theory, several established methods are indeed equivalent to this binary discriminator. Differences arise mainly from i) the choice of the training out-distribution, e.g., the popular Outlier Exposure of Hendrycks et al. (2019a) has advocated the use of a rich and large set of natural images as a proxy for the distribution of natural images, and ii) differences in the estimation procedure. Concretely, the main contributions of this paper are: • We show that several OOD detection approaches are equivalent to the binary discriminator between in- and out-distribution when analyzing the rankings induced by the Bayes optimal classifier/density. • We derive the implicit scoring functions for the confidence loss (Lee et al.
, 2018a) used by Outlier Exposure (Hendrycks et al., 2019a) and for using an additional background class for the out-distribution (Thulasidasan et al., 2021). The confidence scoring function turns out not to be equivalent to the "optimal" scoring function of the binary discriminator when training and test out-distributions are the same. • We show that when training the binary discriminator between in- and out-distribution together with a standard classifier on the in-distribution in a shared fashion, the binary discriminator reaches state-of-the-art OOD detection performance. • We show that density estimation is equivalent to discrimination between the in-distribution and uniform noise, which explains the frequent observation that standard density estimates are not suitable for OOD detection. Even though we identify that a simple baseline is competitive with the state of the art, the main aim of this paper is a better understanding of the key components of different OOD detection methods and to identify the key properties which lead to SOTA OOD detection performance. All of our findings are supported by extensive experiments on CIFAR-10 and CIFAR-100 with evaluation on various challenging out-of-distribution test datasets. 2 MODELS FOR OOD DATA AND EQUIVALENCE OF OOD DETECTION SCORES. As in most work in the literature, we consider OOD detection on a compact input domain X, where the most important example is image classification with X = [0, 1]^D. The most popular approach to OOD detection is the construction of an in-distribution scoring function f : X → R ∪ {±∞} such that f(x) tends to be smaller if x is drawn from an out-distribution than if it is drawn from the in-distribution. There is a variety of performance metrics for this task, a very common one being the area under the receiver operating characteristic curve (AUC).
The AUC for a scoring function f distinguishing between an in-distribution p(x|i) and an out-distribution p(x|o) is given by AUC_f(p(x|i), p(x|o)) = E_{x∼p(x|i), y∼p(y|o)} [ 1_{f(x) > f(y)} + (1/2) · 1_{f(x) = f(y)} ]. (1) We define an equivalence of scoring functions based on their AUCs and will show that this equivalence implies equality of other employed performance metrics as well. Definition 1. Two scoring functions f and g are equivalent, and we write f ≅ g, if AUC_f(p(x|i), p(x|o)) = AUC_g(p(x|i), p(x|o)) (2) for all potential distributions p(x|i) and p(x|o). As the AUC does not depend on the actual values of f but only on the ranking induced by f, one obtains the following characterization of the equivalence of two scoring functions. Theorem 1. Two scoring functions f, g are equivalent, f ≅ g, if and only if there exists a strictly monotonically increasing function ϕ : range(g) → range(f) such that f = ϕ(g). Corollary 1. The equivalence between scoring functions in Def. 1 is an equivalence relation. Another metric is the false positive rate at a fixed true positive rate q, denoted FPR@qTPR. A commonly used value for the TPR is 95%. The smaller the FPR@qTPR, the better the OOD discrimination performance. Lemma 1. Two equivalent scoring functions f ≅ g have the same FPR@qTPR for any pair of in- and out-distributions p(x|i), p(x|o) and for any chosen TPR q. In the next section, we use the previous results to show that the Bayes optimal scoring functions of several proposed methods for out-of-distribution detection are equivalent to the scoring functions of simple binary discriminators. 3 BAYES-OPTIMAL BEHAVIOUR OF BINARY DISCRIMINATORS AND COMMON OOD DETECTION METHODS.
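The AUC definition in Eq. (1) and the invariance under strictly increasing transforms (Theorem 1) can be checked numerically. The sketch below uses made-up Gaussian scores (purely illustrative, not the paper's models) and verifies that applying a sigmoid — one strictly increasing ϕ — leaves the empirical AUC unchanged:

```python
import math
import random

def auc(scores_in, scores_out):
    """Empirical version of Eq. (1): P(f(x) > f(y)) + 0.5 * P(f(x) = f(y))
    for x from the in-distribution and y from the out-distribution."""
    total = 0.0
    for s_i in scores_in:
        for s_o in scores_out:
            if s_i > s_o:
                total += 1.0
            elif s_i == s_o:
                total += 0.5
    return total / (len(scores_in) * len(scores_out))

random.seed(0)
# Hypothetical 1-D scores: in-distribution scores tend to be larger.
f_in = [random.gauss(1.0, 1.0) for _ in range(500)]
f_out = [random.gauss(-1.0, 1.0) for _ in range(500)]

# A strictly increasing transform phi preserves the ranking induced by f,
# and hence the AUC (Theorem 1).
phi = lambda t: 1.0 / (1.0 + math.exp(-t))
auc_f = auc(f_in, f_out)
auc_phi = auc([phi(t) for t in f_in], [phi(t) for t in f_out])
assert abs(auc_f - auc_phi) < 1e-9
```

The same ranking-invariance argument is what Lemma 1 uses for FPR@qTPR.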
In the following we will show that the Bayes optimal functions of several existing approaches to OOD detection for unlabeled data are equivalent to a binary discriminator between the in- and a (training) out-distribution, whereas differences arise when one has labeled data. As the equivalences are based on the Bayes optimal solution, these are asymptotic statements, and thus it has to be noted that convergence to the Bayes optimal solution can be arbitrarily slow and that the methods can have implicit inductive biases. This is why we additionally support our findings with extensive experiments. 3.1 OOD DETECTION FOR METHODS USING UNLABELED DATA ONLY. We first provide a formal definition of OOD detection before we show the equivalence of density estimators (resp. likelihoods) to a binary discriminator. The OOD problem In order to make rigorous statements about the OOD detection problem we first have to provide the mathematical basis for doing so. We assume that we are given an in-distribution p(x|i) and potentially also a training out-distribution p(x|o). At this point no labeled data is involved, so both of them are just distributions over X. For simplicity we assume in the following that they both have a density w.r.t. the Lebesgue measure on X = [0, 1]^d. We assume that in practice we get samples from the mixture distribution p(x) = p(x|i) p(i) + p(x|o) p(o) = p(x|i) p(i) + p(x|o) (1 − p(i)), (3) where p(i) is the overall probability of seeing in-distribution samples. In order to make the decision between in- and out-distribution for a given point x it is then optimal to consider p(i|x) = p(x|i) p(i) / p(x) = p(x|i) p(i) / ( p(x|i) p(i) + p(x|o) p(o) ), (4) which is defined for all x ∈ [0, 1]^d with p(x) > 0 (assuming p(x|i) and p(x|o) can be written as densities).
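The posterior in Eq. (4) is easy to compute once the two densities and the prior p(i) are fixed. A minimal sketch, with hypothetical 1-D densities on X = [0, 1] chosen so that both are positive everywhere (as the text recommends for p(x|o)):

```python
def posterior_in(x, p_x_given_i, p_x_given_o, p_i):
    """Eq. (4): p(i|x) = p(x|i)p(i) / (p(x|i)p(i) + p(x|o)(1 - p(i)))."""
    num = p_x_given_i(x) * p_i
    den = num + p_x_given_o(x) * (1.0 - p_i)
    return num / den

# Hypothetical densities (made up for illustration): a Beta(2, 1)-shaped
# in-density against a uniform out-density, both positive on (0, 1].
p_in = lambda x: 2.0 * x   # density of Beta(2, 1) on [0, 1]
p_out = lambda x: 1.0      # uniform density on [0, 1]

# Near x = 1 the in-density dominates, so p(i|x) exceeds the prior p(i);
# near x = 0 the in-density vanishes, so p(i|x) tends to 0.
assert posterior_in(0.9, p_in, p_out, p_i=0.5) > 0.5
assert posterior_in(0.01, p_in, p_out, p_i=0.5) < 0.1
```

This is exactly the quantity the Bayes optimal binary discriminator of the next paragraph recovers.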
If the training out-distribution is also the test out-distribution then this is already optimal, but we would like the approach to generalize to other, unseen test out-distributions, and thus an important choice is the training out-distribution p(x|o). Note that since p(i|x) is only well-defined for x with p(x) > 0, it is reasonable to choose for p(x|o) a distribution with full support in [0, 1]^d, that is, p(x|o) > 0 for all x ∈ [0, 1]^d. In this case we ensure that the criterion with which we perform OOD detection is defined for any possible input x. This is desirable as OOD detection should work for any possible input x ∈ X. Optimal prediction of a binary discriminator between in- and out-distribution We consider a binary discriminator with model parameters θ between the in- and (training) out-distribution, where p̂_θ(i|x) is the predicted probability for the in-distribution. Under the assumption that p(i) is the probability of in-distribution samples and using cross-entropy (which in this case is the logistic loss up to a constant global factor of log(2)), the expected loss becomes: min_θ p(i) E_{x∼p(x|i)} [ −log p̂_θ(i|x) ] + p(o) E_{x∼p(x|o)} [ −log(1 − p̂_θ(i|x)) ]. (5) One can derive that the Bayes optimal classifier minimizing the expected loss has the predictive distribution: p̂_{θ*}(i|x) = p(x|i) p(i) / ( p(x|i) p(i) + p(x|o) p(o) ) = p(i|x). (6) Thus, at least for the training out-distribution, a binary classifier based on samples from the in- and (training) out-distribution would suffice to solve the OOD detection problem perfectly. Equivalence of density estimation and binary discrimination for OOD detection In this section we further analyze the relationship of common OOD detection approaches with the binary discriminator between in- and out-distribution. We start with density estimators sourced from generative models.
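The derivation behind Eqs. (5)-(6) can be checked pointwise: for a fixed x, the expected loss reduces to −a log q − b log(1−q) with a = p(x|i)p(i) and b = p(x|o)p(o), whose minimizer over q ∈ (0, 1) is a/(a+b). A small gradient-descent sketch (the density values are made up for illustration):

```python
def bayes_optimal_prob(a, b, steps=2000, lr=0.1):
    """Minimize -a*log(q) - b*log(1-q) over q in (0, 1) by gradient
    descent; the minimizer is a/(a+b), matching Eq. (6)."""
    q = 0.5
    for _ in range(steps):
        grad = -a / q + b / (1.0 - q)  # derivative of the pointwise loss
        q -= lr * grad
        q = min(max(q, 1e-6), 1.0 - 1e-6)  # keep q inside (0, 1)
    return q

# Hypothetical values a = p(x|i)p(i), b = p(x|o)p(o) at one fixed input x.
a, b = 0.9 * 0.5, 0.3 * 0.5
q_star = bayes_optimal_prob(a, b)
assert abs(q_star - a / (a + b)) < 1e-6  # converges to p(i|x) = 0.75
```

In practice the discriminator is a neural network trained on samples, but its population-level target is exactly this q* = p(i|x).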
A basic approach that is known to yield relatively weak OOD performance (Nalisnick et al., 2019; Ren et al., 2019; Xiao et al., 2020) is directly utilizing a model's estimate of the density p(x|i) at a sample input x. An improved density-based approach, which uses perturbed in-distribution samples as a surrogate training out-distribution, is the Likelihood Ratios method (Ren et al., 2019), which proposes to fit a generative model for both the in- and out-distribution and to use the ratio of the likelihoods output by the two models as a discriminative feature. We show that, with respect to the scoring function, the correct density p(x|i) is equivalent to the Bayes optimal prediction of a binary discriminator between the in-distribution and uniform noise. Furthermore, the density ratio p(x|i)/p(x|o) is equivalent to the prediction of a binary discriminator between the two distributions on which the respective models used for density estimation have been trained. Given this equivalence, we argue that a binary discriminator is a simple alternative to these methods owing to its easier training procedure. While this equivalence is an asymptotic statement, the experimental comparisons in the appendix show that these methods perform similarly to each other, and poorly compared to the methods using labeled data. We first prove the more general case of arbitrary likelihood ratios. In the following we use the abbreviation λ = p(o)/p(i) to save space and make the statements more concise. Lemma 2. Assume that p(x|i) and p(x|o) can be represented by densities and the support of p(x|o) covers the whole input domain X. Then p(x|i)/p(x|o) ≅ p(x|i) / ( p(x|i) + λ p(x|o) ) for any λ > 0. This means that the likelihood ratio score of two optimal density estimators is equivalent to the in-distribution probability p̂_{θ*}(i|x) predicted by a binary discriminator, and this is true for any possible ratio of p(i) to p(o).
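The mechanism behind Lemma 2 is that g = r/(r + λ) is a strictly increasing function of the likelihood ratio r = p(x|i)/p(x|o), so by Theorem 1 the two scores induce the same ranking and hence the same AUC. A tiny check with hypothetical density values:

```python
lam = 0.25  # any lambda = p(o)/p(i) > 0 works

ratio = lambda pi, po: pi / po                 # likelihood ratio score
disc = lambda pi, po: pi / (pi + lam * po)     # discriminator score

# Hypothetical (p(x|i), p(x|o)) pairs at four inputs, made up for
# illustration only.
points = [(2.0, 0.5), (1.0, 1.0), (0.3, 1.5), (0.1, 4.0)]
r_scores = [ratio(pi, po) for pi, po in points]
g_scores = [disc(pi, po) for pi, po in points]

# Identical rankings => identical AUC against any out-distribution.
rank = lambda xs: sorted(range(len(xs)), key=lambda k: xs[k])
assert rank(r_scores) == rank(g_scores)
```

Lemma 3 is the special case p(x|o) = 1 (uniform noise), for which the same monotone map applies.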
In the experiments below, we show that using such a discriminator performs similarly to the likelihood ratios of the different trained generative models. For the approaches that try to directly use the likelihood of a generative model as a discriminative feature, this means that their objective is equivalent to training a binary discriminator against uniform noise, whose density is p_Uniform(x) = p(x|o) = 1 at any x. Lemma 3. Assume that p(x|i) can be represented by a density. Then p(x|i) ≅ p(x|i) / ( p(x|i) + λ ) for any λ > 0. This provides additional evidence for why a purely density-based approach often proves to be insufficient as an OOD detection score on the complex image domain: it is not reasonable to assume that a binary discriminator between certain classes of natural images on the one hand and uniform noise on the other provides much useful information about images from other classes or even about other nonsensical inputs. | The authors aim to recognize common objectives in OOD detection as well as to identify the implicit scoring functions of different OOD detection methods. They show that binary discrimination between in- and (different) out-distributions is equivalent to several different formulations of the OOD detection problem. They find that, when trained in a shared fashion with a standard classifier, this binary discriminator reaches an OOD detection performance similar to that of Outlier Exposure. | SP:5bc0b94b888a61882f4b3dd3bf52a0dbaad8e900 |
REFACTOR: Learning to Extract Theorems from Proofs | 1 INTRODUCTION. In the history of calculus, one remarkable early achievement was made by Archimedes in the 3rd century BC, who established a proof that the area of a parabolic segment is 4/3 that of a certain inscribed triangle. In the proof he gave, he made use of a technique called the method of exhaustion, a precursor to modern calculus. However, as this was a strategy rather than a theorem, applying it to new problems required one to grasp and generalize the pattern, as only a handful of brilliant mathematicians were able to do. It wasn't until millennia later that calculus finally became a powerful and broadly applicable tool, once these reasoning patterns were crystallized into modular concepts such as limits and integrals. A question arises: can we train a neural network to mimic humans' ability to extract modular components that are useful? In this paper, we focus on a specific instance of this problem in the context of theorem proving, where the goal is to train a neural network model that can discover reusable theorems from a set of mathematical proofs. Specifically, we work within formal systems where each mathematical proof is represented by a tree called a proof tree. Moreover, one can extract a connected component of the proof tree that constitutes a proof of a standalone theorem. Under this framework, we can reduce the problem to training a model that solves a binary classification task: determining whether each node in the proof tree belongs to the connected component that the model tries to predict. To this end, we propose a method called theoREm-from-prooF extrACTOR (REFACTOR) for mimicking humans' ability to extract theorems from proofs. Specifically, we propose to reverse the process of human theorem extraction to create machine learning datasets. Given a human proof T, we take a theorem s that is used by the proof.
We then use the proof of theorem s, Ts, to rewrite T as T′ such that T′ no longer contains the application of theorem s, replacing it with the proof Ts. We call this rewriting process the expansion of proof T using s. The expanded proof T′ becomes the input to our model, and the model's task is to identify a connected component of T′, Ts, which corresponds to the theorem s that humans would use in T. We implement this idea within the Metamath theorem proving framework, an interactive theorem proving assistant that allows humans to write proofs of mathematical theorems and verify the correctness of these proofs. Metamath is known as a lightweight theorem proving assistant and hence can be easily integrated with machine learning models (Whalen, 2016; Polu & Sutskever, 2020). It also contains one of the largest formal mathematics libraries, providing sufficient background for proving university-level or Olympiad mathematics. While our approach would be applicable to other formal systems (such as Lean (de Moura et al., 2015), Coq (Barras et al., 1999), or HOL Light (Harrison, 1996)), we chose Metamath for this project because of its features for reduced iteration time in the near term. Our work establishes the first proof of concept for using neural network models to extract theorems from proofs. Our best REFACTOR model is able to extract exactly the same theorem as humans' ground truth (without having seen instances of it in the training set) about 19.6% of the time. We also observe that REFACTOR's performance improves when we increase the model size, suggesting significant room for improvement with more computational resources. Ultimately, the goal is not to recover known theorems but to discover new ones.
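The expansion step described above can be sketched as a tree rewrite: every node that applies theorem s is replaced by a copy of its proof Ts, and the copied nodes become the positive labels for the node-classification target. This is a toy sketch with made-up node names; real Metamath expansion must also substitute the theorem's hypotheses and metavariables, which we omit here:

```python
class Node:
    """A proof-tree node: a theorem/axiom name applied to parent subproofs."""
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)

def expand(tree, theorem_name, theorem_proof):
    """Rewrite the proof so applications of `theorem_name` are replaced
    inline by a copy of its proof. Returns (new_tree, labels), where
    labels marks the nodes originating from the inlined proof -- the
    target of the node classification described in the text."""
    labels = set()

    def copy_with_label(node):
        c = Node(node.name, [copy_with_label(p) for p in node.parents])
        labels.add(id(c))
        return c

    def walk(node):
        if node.name == theorem_name:
            return copy_with_label(theorem_proof)
        return Node(node.name, [walk(p) for p in node.parents])

    return walk(tree), labels

# Miniature made-up example: proof T applies theorem "s" once.
T_s = Node("ax2", [Node("ax1")])             # proof of theorem s (2 nodes)
T = Node("ax-mp", [Node("hyp"), Node("s")])  # proof T uses s
T_prime, target = expand(T, "s", T_s)

def count(node):
    return 1 + sum(count(p) for p in node.parents)

assert count(T_prime) == 4  # the "s" node was replaced by its 2-node proof
assert len(target) == 2     # the classifier's positive nodes
```

Reversing this rewrite — predicting `target` given only `T_prime` — is exactly the model's training task.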
To analyze those cases where REFACTOR's predictions don't match the human ground truth, we developed an algorithm to verify whether the predicted component constitutes a valid proof of a theorem, and we found that REFACTOR extracted 1907 valid, new theorems. We also applied REFACTOR to proofs from the existing Metamath library, from which REFACTOR extracted another 16 novel theorems. Remarkably, those 16 theorems are used very frequently in the Metamath library, with an average usage of 733.5 times. Furthermore, with the newly extracted theorems, we show that the human theorem library can be refactored to be more concise: the extracted theorems reduce the total size by approximately 400k nodes. (This is striking since REFACTOR doesn't explicitly consider compression as an objective.) Lastly, we demonstrate that training a prover on the refactored dataset leads to a 14-30% relative improvement in proof success rates when proving new test theorems. Of all proved test theorems, 43.6% use the newly extracted theorems at least once. The usages also span a diverse set of theorems: 141 unique newly extracted theorems are used, further suggesting the diverse utility of the new theorems we extracted. Our main contributions are as follows: 1. We propose a novel method called REFACTOR to train neural network models for the theorem extraction problem. 2. We demonstrate that REFACTOR can extract unseen human theorems from proofs with a nontrivial accuracy of 19.6%. 3. We show that REFACTOR is able to extract frequently used theorems from the existing human library and, as a result, shorten the proofs in the human library by a substantial amount. 4. We show that the dataset refactored with new theorems can improve baseline theorem prover performance significantly, with the newly extracted theorems being used frequently and diversely. 2 RELATED WORK. Lemma Extraction Our work is generally related to lemma mining in Vyskočil et al. (2010); Hetzl et al.
(2012); Gauthier & Kaliszyk (2015); Gauthier et al. (2016), and most closely related to the work of Kaliszyk & Urban (2015); Kaliszyk et al. (2015). The authors propose to do lemma extraction on the synthetic proofs generated by Automated Theorem Provers (ATPs) on the HOL Light and Flyspeck libraries. They showed that the lemmas extracted from the synthetic proofs further improve ATP performance for premise selection. However, their proposed lemma selection methods require human-defined metrics and feature engineering, whereas we propose a novel way to create datasets for training a neural network model to do lemma/theorem selection. Unfortunately, as the Metamath theorem prover is not equipped with ATP automation to generate synthetic proofs, we could not easily compare our method to these past works. We leave more thorough comparisons on other formal systems to future work. Discovering Reusable Structures Our work is also related to the broad question of discovering reusable structures and subroutine learning. One notable line of work is the Explore-Compile-style (EC, EC2) learning algorithms (Dechter et al., 2013; Ellis et al., 2018; 2020). These works focus on program synthesis while trying to discover a library of subroutines. As a subroutine in programming serves a role very similar to that of a theorem in theorem proving, this work is of great relevance to us. However, it approaches the problem from a different angle: it formalizes subroutine learning as a compression problem, finding the subroutine that best compresses the explored solution space. These works have not yet been shown to scale to realistic program synthesis tasks or theorem proving. We, on the other hand, make use of human data to create suitable targets for subroutine learning and demonstrate the results on realistic formal theorem proving.
Another related line of work builds inductive biases to induce modular neural networks that can act as subroutines (Andreas et al., 2015; Gaunt et al., 2017; Hudson & Manning, 2018; Mao et al., 2019; Chang et al., 2019; Wu et al., 2020). These works usually require domain knowledge of the subroutines for building neural architectures and are hence not suitable for our application. Machine Learning for Theorem Proving Interactive theorem provers have recently received enormous attention from the machine learning community as a testbed for theorem proving using deep learning methods (Bansal et al., 2019a;b; Gauthier et al., 2018; Huang et al., 2019; Yang & Deng, 2019; Wu et al., 2021; Li et al., 2021; Polu & Sutskever, 2020). Previous works demonstrated that transformers can be used to solve symbolic mathematics problems (Lample & Charton, 2020), capture the underlying semantics of logical problems relevant to verification (Hahn et al., 2020), and also generate mathematical conjectures (Urban & Jakubův, 2020). Rabe et al. (2020) showed that self-supervised training alone can give rise to mathematical reasoning. Li et al. (2021) used language models to synthesize high-level intermediate propositions from a local context. Piotrowski & Urban (2020) used RNNs to solve first-order logic in ATPs. Wang et al. (2020) used machine translation to convert synthetically generated natural language descriptions of proofs into formalized proofs. Yang & Deng (2019) augmented a theorem prover with shorter synthetic theorems consisting of arbitrary steps from a longer proof under a maximum length restriction. This is remotely related to our work, where our extraction has no such restrictions and instead closely mimics what human mathematicians would do. 3 METAMATH AND PROOF REPRESENTATION. In this section, we describe how one represents a proof in the Metamath theorem proving environment.
We would like to first note that even though the discussion here specializes in the Metamath environment , most of the other formal systems ( Isabelle/HOL , HOL Light , Coq , Lean ) have very similar representations . The fundamental idea is to think of a theorem as a function , and the proof tree essentially represents an abstract syntax tree of a series of function applications that lead to the intended conclusion . A proof of a theorem in the Metamath environment is represented as a tree . For example , the proof of the theorem a1i is shown in Figure 1 ( a ) . Each node of the tree is associated with a name ( labeled as N ) , which can refer to a premise of the theorem , an axiom , or a proved theorem from the existing theorem database . Given such a tree , one can then traverse the tree from top to bottom , and iteratively prove a true proposition ( labeled as PROP ) for each node by making a step of theorem application . The top-level nodes usually represent the premises of the theorem , and the resulting proposition in the bottom node matches the conclusion of the theorem . In such a way , the theorem is proved . We now define one step of theorem application . When a node is connected to a set of parent nodes , it represents a step of theorem application . In particular , one can think of a theorem as a function that maps a set of hypotheses to a conclusion . Indeed , a node in the tree exactly represents such a function mapping , that is , mapping the set of propositions of the parent nodes to a new conclusion specified by the theorem . Formally , given a node c whose associated name refers to a theorem T , we denote its parent nodes as Pc . We can then prove a new proposition by applying the theorem T to all propositions proved by nodes in Pc . The proof of the theorem a1i in Figure 1 ( a ) consists of 3 theorem applications . In plain language , the theorem is a proof of the fact that if ph is true , then ( ps- > ph ) is also true . 
The top-level nodes are the hypotheses of the theorem . Most of the hypotheses state that some expression is a well-formed formula so that the expression can be used to form a syntactically correct sentence . The more interesting hypothesis is a1i.1 , which states |-ph , meaning ph is assumed to be true . In the bottom node , the theorem invokes the theorem ax-mp , which takes in four propositions as hypotheses , and returns the conclusion |- ( ps- > ph ) . | This paper proposes a deep learning-based approach for extracting theorems from existing human proofs. The problem is formulated by classifying the nodes of the proof trees in/out of the extracted theorem. Graph neural networks are applied to this problem of node classification. This approach is applied to the Metamath dataset and extracts 1923 new theorems. These new theorems could compress the original human proofs and new test theorems are proved by the prover trained on the compressed set of proofs. | SP:6e719ff1e0c9c6f5158de741485802e1e6e818eb |
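The bottom-up proof-tree evaluation described in this passage can be sketched as a small tree evaluator. This is a minimal illustration under simplifying assumptions, not Metamath's actual data format: the names `Node` and `THEOREMS` are hypothetical, and the "theorem" here does toy string manipulation rather than real unification.

```python
# Minimal sketch of the proof-tree evaluation described above. A theorem
# is modeled as a function from its hypotheses' propositions to a new
# conclusion; each node carries a name (N) and a proved proposition (PROP).
# Names and representation are hypothetical, for illustration only.

THEOREMS = {
    # toy modus ponens: from "A" and "A -> B", conclude "B"
    "modus-ponens": lambda p, imp: imp.split(" -> ", 1)[1],
}

class Node:
    def __init__(self, name, parents=(), prop=None):
        self.name = name            # premise, axiom, or theorem label (N)
        self.parents = list(parents)
        self.prop = prop            # proposition proved at this node (PROP)

    def evaluate(self):
        if not self.parents:        # top-level node: a premise of the theorem
            return self.prop
        hyp_props = [p.evaluate() for p in self.parents]
        self.prop = THEOREMS[self.name](*hyp_props)  # one theorem application
        return self.prop

# The conclusion of the whole proof is the proposition at the bottom node.
ph = Node("a1i.1", prop="ph")
imp = Node("ax-1", prop="ph -> ( ps -> ph )")
root = Node("modus-ponens", parents=[ph, imp])
print(root.evaluate())  # ( ps -> ph )
```

The recursion mirrors the top-to-bottom traversal in the text: leaves return their hypotheses unchanged, and every internal node proves a new proposition by applying its theorem to the parents' propositions.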
REFACTOR: Learning to Extract Theorems from Proofs | 1 INTRODUCTION . In the history of calculus , one remarkable early achievement was made by Archimedes in the 3rd century BC , who established a proof for the area of a parabolic segment to be 4/3 that of a certain inscribed triangle . In the proof he gave , he made use of a technique called the method of exhaustion , a precursor to modern calculus . However , as this was a strategy rather than a theorem , applying it to new problems required one to grasp and generalize the pattern , as only a handful of brilliant mathematicians were able to do . It wasn ’ t until millennia later that calculus finally became a powerful and broadly applicable tool , once these reasoning patterns were crystallized into modular concepts such as limits and integrals . A question arises – can we train a neural network to mimic humans ’ ability to extract modular components that are useful ? In this paper , we focus on a specific instance of the problem in the context of theorem proving , where the goal is to train a neural network model that can discover reusable theorems from a set of mathematical proofs . Specifically , we work under formal systems where each mathematical proof is represented by a tree called a proof tree . Moreover , one can extract some connected component of the proof tree that constitutes a proof of a standalone theorem . Under this framework , we can reduce the problem to training a model that solves a binary classification problem : determining whether each node in the proof tree belongs to the connected component that the model tries to predict . To this end , we propose a method called theoREm-from-prooF extrACTOR ( REFACTOR ) for mimicking humans ’ ability to extract theorems from proofs . Specifically , we propose to reverse the process of human theorem extraction to create machine learning datasets . Given a human proof T , we take a theorem s that is used by the proof . 
We then use the proof of theorem s , Ts , to re-write T as T ′ such that T ′ no longer contains the application of theorem s , replacing it with the proof Ts . We call this re-writing process the expansion of proof T using s. The expanded proof T ′ becomes the input to our model , and the model ’ s task is to identify a connected component of T ′ , Ts , which corresponds to the theorem s that humans would use in T . We implement this idea within the Metamath theorem proving framework – an interactive theorem proving assistant that allows humans to write proofs of mathematical theorems and verify the correctness of these proofs . Metamath is known as a lightweight theorem proving assistant , and hence can be easily integrated with machine learning models ( Whalen , 2016 ; Polu & Sutskever , 2020 ) . It also contains one of the largest formal mathematics libraries , hence providing sufficient background for proving university-level or Olympiad mathematics . While our approach would be applicable to other formal systems ( such as Lean ( de Moura et al. , 2015 ) , Coq ( Barras et al. , 1999 ) , or HOL Light ( Harrison , 1996 ) ) , we chose Metamath for this project because of its features for reduced iteration time in the near term . Our work establishes the first proof of concept using neural network models to extract theorems from proofs . Our best REFACTOR model is able to extract exactly the same theorem as humans ’ ground truth ( without having seen instances of it in the training set ) about 19.6 % of the time . We also observe that REFACTOR ’ s performance improves when we increase the model size , suggesting significant room for improvement with more computational resources . Ultimately , the goal is not to recover known theorems but to discover new ones . 
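The dataset-construction step described above — expanding a proof T by inlining the proof of a used theorem s, then labeling the inlined nodes as the target component — can be sketched as follows. This is a simplified illustration: the `Node` representation is hypothetical, and a real expansion must also substitute the theorem's variables and reattach the application's hypotheses, which is omitted here.

```python
# Sketch of REFACTOR-style data creation: replace each node that applies
# theorem `s_name` with a fresh copy of its proof `s_proof`, and label the
# inlined nodes 1 (the component the model should predict), all others 0.
import copy

class Node:
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)
        self.label = 0              # 1 iff node belongs to the inlined proof Ts

def iter_nodes(root):
    yield root
    for p in root.parents:
        yield from iter_nodes(p)

def expand(root, s_name, s_proof):
    """Return a tree where every application of `s_name` is replaced by a
    labeled copy of `s_proof` (variable substitution omitted here)."""
    if root.name == s_name:
        inlined = copy.deepcopy(s_proof)
        for node in iter_nodes(inlined):
            node.label = 1          # positive targets for node classification
        return inlined
    root.parents = [expand(p, s_name, s_proof) for p in root.parents]
    return root
```

Training pairs are then (expanded tree, per-node labels); the model's binary node classification recovers the connected component corresponding to s.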
To analyze those cases where REFACTOR ’ s predictions don ’ t match the human ground truth , we developed an algorithm to verify whether the predicted component constitutes a valid proof of a theorem , and we found REFACTOR extracted 1907 valid , new theorems . We also applied REFACTOR to proofs from the existing Metamath library , from which REFACTOR extracted another 16 novel theorems . Remarkably , those 16 proofs are used very frequently in the Metamath library , with an average usage of 733.5 times . Furthermore , with the newly extracted theorems , we show that the human theorem library can be refactored to be more concise : the extracted theorems reduce the total size by approximately 400k nodes . ( This is striking since REFACTOR doesn ’ t explicitly consider compression as an objective . ) Lastly , we demonstrate that training a prover on the refactored dataset leads to a 14-30 % relative improvement on proof success rates in proving new test theorems . Out of all proved test theorems , 43.6 % use the newly extracted theorems at least once . The usages also span a diverse set of theorems : 141 unique newly extracted theorems are used , further suggesting the diverse utility of the new theorems we extracted . Our main contributions are as follows : 1 . We propose a novel method called REFACTOR to train neural network models for the theorem extraction problem . 2 . We demonstrate that REFACTOR can extract unseen human theorems from proofs with a nontrivial accuracy of 19.6 % . 3 . We show that REFACTOR is able to extract frequently used theorems from the existing human library , and as a result , shorten the proofs of the human library by a substantial amount . 4 . We show that the refactored dataset with new theorems can improve baseline theorem prover performance significantly , with the newly extracted theorems being used frequently and diversely . 2 RELATED WORK . Lemma Extraction Our work is generally related to lemma mining in Vyskočil et al . ( 2010 ) ; Hetzl et al . 
( 2012 ) ; Gauthier & Kaliszyk ( 2015 ) ; Gauthier et al . ( 2016 ) and mostly related to the work of Kaliszyk & Urban ( 2015 ) ; Kaliszyk et al . ( 2015 ) . The authors propose to do lemma extraction on the synthetic proofs generated by Automated Theorem Provers ( ATP ) on the HOL Light and Flyspeck libraries . They showed that the lemmas extracted from the synthetic proofs further improve ATP performance for premise selection . However , their proposed lemma selection methods require human-defined metrics and feature engineering , whereas we propose a novel way to create datasets for training a neural network model to do lemma/theorem selection . Unfortunately , as the Metamath theorem prover is not equipped with ATP automation to generate synthetic proofs , we could not easily compare our method to these past works . We leave more thorough comparisons on the other formal systems to future work . Discovering Reusable Structures Our work is also related to the broad question of discovering reusable structures and sub-routine learning . One notable line of work is the Explore-Compile-style ( EC , EC2 ) learning algorithms ( Dechter et al. , 2013 ; Ellis et al. , 2018 ; 2020 ) . These works focus on program synthesis while trying to discover a library of subroutines . As a subroutine in programming serves a very similar role to a theorem in theorem proving , their work is of great relevance to us . However , they approach the problem from a different angle : they formalize sub-routine learning as a compression problem , finding the best subroutine that compresses the explored solution space . However , these works have not yet been shown to scale to realistic program synthesis tasks or theorem proving . We , on the other hand , make use of human data to create suitable targets for subroutine learning and demonstrate the results on realistic formal theorem proving . 
Another related line of work builds inductive biases to induce modular neural networks that can act as subroutines ( Andreas et al. , 2015 ; Gaunt et al. , 2017 ; Hudson & Manning , 2018 ; Mao et al. , 2019 ; Chang et al. , 2019 ; Wu et al. , 2020 ) . These works usually require domain knowledge of sub-routines for building neural architectures and are hence not suitable for our application . Machine Learning for Theorem Proving Interactive theorem provers have recently received enormous attention from the machine learning community as a testbed for theorem proving using deep learning methods ( Bansal et al. , 2019a ; b ; Gauthier et al. , 2018 ; Huang et al. , 2019 ; Yang & Deng , 2019 ; Wu et al. , 2021 ; Li et al. , 2021 ; Polu & Sutskever , 2020 ) . Previous works demonstrated that transformers can be used to solve symbolic mathematics problems ( Lample & Charton , 2020 ) , capture the underlying semantics of logical problems relevant to verification ( Hahn et al. , 2020 ) , and also generate mathematical conjectures ( Urban & Jakubův , 2020 ) . Rabe et al . ( 2020 ) showed that self-supervised training alone can give rise to mathematical reasoning . Li et al . ( 2021 ) used language models to synthesize high-level intermediate propositions from a local context . Piotrowski & Urban ( 2020 ) used RNNs to solve first-order logic in ATPs . Wang et al . ( 2020 ) used machine translation to convert synthetically generated natural language descriptions of proofs into formalized proofs . Yang & Deng ( 2019 ) augmented a theorem prover with shorter synthetic theorems that consist of arbitrary steps from a longer proof , subject to a maximum length restriction . This is remotely related to our work , except that our extraction does not have such restrictions and instead closely mimics what human mathematicians would do . 3 METAMATH AND PROOF REPRESENTATION . In this section , we describe how one represents proofs in the Metamath theorem proving environment . 
We would like to first note that even though the discussion here specializes in the Metamath environment , most of the other formal systems ( Isabelle/HOL , HOL Light , Coq , Lean ) have very similar representations . The fundamental idea is to think of a theorem as a function , and the proof tree essentially represents an abstract syntax tree of a series of function applications that lead to the intended conclusion . A proof of a theorem in the Metamath environment is represented as a tree . For example , the proof of the theorem a1i is shown in Figure 1 ( a ) . Each node of the tree is associated with a name ( labeled as N ) , which can refer to a premise of the theorem , an axiom , or a proved theorem from the existing theorem database . Given such a tree , one can then traverse the tree from top to bottom , and iteratively prove a true proposition ( labeled as PROP ) for each node by making a step of theorem application . The top-level nodes usually represent the premises of the theorem , and the resulting proposition in the bottom node matches the conclusion of the theorem . In such a way , the theorem is proved . We now define one step of theorem application . When a node is connected to a set of parent nodes , it represents a step of theorem application . In particular , one can think of a theorem as a function that maps a set of hypotheses to a conclusion . Indeed , a node in the tree exactly represents such a function mapping , that is , mapping the set of propositions of the parent nodes to a new conclusion specified by the theorem . Formally , given a node c whose associated name refers to a theorem T , we denote its parent nodes as Pc . We can then prove a new proposition by applying the theorem T to all propositions proved by nodes in Pc . The proof of the theorem a1i in Figure 1 ( a ) consists of 3 theorem applications . In plain language , the theorem is a proof of the fact that if ph is true , then ( ps- > ph ) is also true . 
The top-level nodes are the hypotheses of the theorem . Most of the hypotheses state that some expression is a well-formed formula so that the expression can be used to form a syntactically correct sentence . The more interesting hypothesis is a1i.1 , which states |-ph , meaning ph is assumed to be true . In the bottom node , the theorem invokes the theorem ax-mp , which takes in four propositions as hypotheses , and returns the conclusion |- ( ps- > ph ) . | This paper is about a method that mines the library of existing proofs for frequently occurring patterns of proof steps that constitute separate proofs of theorems (lemmas) used in the original proof. The method uses graph neural networks. It extracts correct unseen theorems modestly frequently (19.6% of the time). These newly found theorems are then added separately to the library and used as lemmas to prove existing theorems. The result is that they are used frequently in many proofs, lead to shorter proofs, and can improve the baseline theorem prover's performance. | SP:6e719ff1e0c9c6f5158de741485802e1e6e818eb |
REFACTOR: Learning to Extract Theorems from Proofs | 1 INTRODUCTION . In the history of calculus , one remarkable early achievement was made by Archimedes in the 3rd century BC , who established a proof for the area of a parabolic segment to be 4/3 that of a certain inscribed triangle . In the proof he gave , he made use of a technique called the method of exhaustion , a precursor to modern calculus . However , as this was a strategy rather than a theorem , applying it to new problems required one to grasp and generalize the pattern , as only a handful of brilliant mathematicians were able to do . It wasn ’ t until millennia later that calculus finally became a powerful and broadly applicable tool , once these reasoning patterns were crystallized into modular concepts such as limits and integrals . A question arises – can we train a neural network to mimic humans ’ ability to extract modular components that are useful ? In this paper , we focus on a specific instance of the problem in the context of theorem proving , where the goal is to train a neural network model that can discover reusable theorems from a set of mathematical proofs . Specifically , we work under formal systems where each mathematical proof is represented by a tree called a proof tree . Moreover , one can extract some connected component of the proof tree that constitutes a proof of a standalone theorem . Under this framework , we can reduce the problem to training a model that solves a binary classification problem : determining whether each node in the proof tree belongs to the connected component that the model tries to predict . To this end , we propose a method called theoREm-from-prooF extrACTOR ( REFACTOR ) for mimicking humans ’ ability to extract theorems from proofs . Specifically , we propose to reverse the process of human theorem extraction to create machine learning datasets . Given a human proof T , we take a theorem s that is used by the proof . 
We then use the proof of theorem s , Ts , to re-write T as T ′ such that T ′ no longer contains the application of theorem s , replacing it with the proof Ts . We call this re-writing process the expansion of proof T using s. The expanded proof T ′ becomes the input to our model , and the model ’ s task is to identify a connected component of T ′ , Ts , which corresponds to the theorem s that humans would use in T . We implement this idea within the Metamath theorem proving framework – an interactive theorem proving assistant that allows humans to write proofs of mathematical theorems and verify the correctness of these proofs . Metamath is known as a lightweight theorem proving assistant , and hence can be easily integrated with machine learning models ( Whalen , 2016 ; Polu & Sutskever , 2020 ) . It also contains one of the largest formal mathematics libraries , hence providing sufficient background for proving university-level or Olympiad mathematics . While our approach would be applicable to other formal systems ( such as Lean ( de Moura et al. , 2015 ) , Coq ( Barras et al. , 1999 ) , or HOL Light ( Harrison , 1996 ) ) , we chose Metamath for this project because of its features for reduced iteration time in the near term . Our work establishes the first proof of concept using neural network models to extract theorems from proofs . Our best REFACTOR model is able to extract exactly the same theorem as humans ’ ground truth ( without having seen instances of it in the training set ) about 19.6 % of the time . We also observe that REFACTOR ’ s performance improves when we increase the model size , suggesting significant room for improvement with more computational resources . Ultimately , the goal is not to recover known theorems but to discover new ones . 
To analyze those cases where REFACTOR ’ s predictions don ’ t match the human ground truth , we developed an algorithm to verify whether the predicted component constitutes a valid proof of a theorem , and we found REFACTOR extracted 1907 valid , new theorems . We also applied REFACTOR to proofs from the existing Metamath library , from which REFACTOR extracted another 16 novel theorems . Remarkably , those 16 proofs are used very frequently in the Metamath library , with an average usage of 733.5 times . Furthermore , with the newly extracted theorems , we show that the human theorem library can be refactored to be more concise : the extracted theorems reduce the total size by approximately 400k nodes . ( This is striking since REFACTOR doesn ’ t explicitly consider compression as an objective . ) Lastly , we demonstrate that training a prover on the refactored dataset leads to a 14-30 % relative improvement on proof success rates in proving new test theorems . Out of all proved test theorems , 43.6 % use the newly extracted theorems at least once . The usages also span a diverse set of theorems : 141 unique newly extracted theorems are used , further suggesting the diverse utility of the new theorems we extracted . Our main contributions are as follows : 1 . We propose a novel method called REFACTOR to train neural network models for the theorem extraction problem . 2 . We demonstrate that REFACTOR can extract unseen human theorems from proofs with a nontrivial accuracy of 19.6 % . 3 . We show that REFACTOR is able to extract frequently used theorems from the existing human library , and as a result , shorten the proofs of the human library by a substantial amount . 4 . We show that the refactored dataset with new theorems can improve baseline theorem prover performance significantly , with the newly extracted theorems being used frequently and diversely . 2 RELATED WORK . Lemma Extraction Our work is generally related to lemma mining in Vyskočil et al . ( 2010 ) ; Hetzl et al . 
( 2012 ) ; Gauthier & Kaliszyk ( 2015 ) ; Gauthier et al . ( 2016 ) and mostly related to the work of Kaliszyk & Urban ( 2015 ) ; Kaliszyk et al . ( 2015 ) . The authors propose to do lemma extraction on the synthetic proofs generated by Automated Theorem Provers ( ATP ) on the HOL Light and Flyspeck libraries . They showed that the lemmas extracted from the synthetic proofs further improve ATP performance for premise selection . However , their proposed lemma selection methods require human-defined metrics and feature engineering , whereas we propose a novel way to create datasets for training a neural network model to do lemma/theorem selection . Unfortunately , as the Metamath theorem prover is not equipped with ATP automation to generate synthetic proofs , we could not easily compare our method to these past works . We leave more thorough comparisons on the other formal systems to future work . Discovering Reusable Structures Our work is also related to the broad question of discovering reusable structures and sub-routine learning . One notable line of work is the Explore-Compile-style ( EC , EC2 ) learning algorithms ( Dechter et al. , 2013 ; Ellis et al. , 2018 ; 2020 ) . These works focus on program synthesis while trying to discover a library of subroutines . As a subroutine in programming serves a very similar role to a theorem in theorem proving , their work is of great relevance to us . However , they approach the problem from a different angle : they formalize sub-routine learning as a compression problem , finding the best subroutine that compresses the explored solution space . However , these works have not yet been shown to scale to realistic program synthesis tasks or theorem proving . We , on the other hand , make use of human data to create suitable targets for subroutine learning and demonstrate the results on realistic formal theorem proving . 
Another related line of work builds inductive biases to induce modular neural networks that can act as subroutines ( Andreas et al. , 2015 ; Gaunt et al. , 2017 ; Hudson & Manning , 2018 ; Mao et al. , 2019 ; Chang et al. , 2019 ; Wu et al. , 2020 ) . These works usually require domain knowledge of sub-routines for building neural architectures and are hence not suitable for our application . Machine Learning for Theorem Proving Interactive theorem provers have recently received enormous attention from the machine learning community as a testbed for theorem proving using deep learning methods ( Bansal et al. , 2019a ; b ; Gauthier et al. , 2018 ; Huang et al. , 2019 ; Yang & Deng , 2019 ; Wu et al. , 2021 ; Li et al. , 2021 ; Polu & Sutskever , 2020 ) . Previous works demonstrated that transformers can be used to solve symbolic mathematics problems ( Lample & Charton , 2020 ) , capture the underlying semantics of logical problems relevant to verification ( Hahn et al. , 2020 ) , and also generate mathematical conjectures ( Urban & Jakubův , 2020 ) . Rabe et al . ( 2020 ) showed that self-supervised training alone can give rise to mathematical reasoning . Li et al . ( 2021 ) used language models to synthesize high-level intermediate propositions from a local context . Piotrowski & Urban ( 2020 ) used RNNs to solve first-order logic in ATPs . Wang et al . ( 2020 ) used machine translation to convert synthetically generated natural language descriptions of proofs into formalized proofs . Yang & Deng ( 2019 ) augmented a theorem prover with shorter synthetic theorems that consist of arbitrary steps from a longer proof , subject to a maximum length restriction . This is remotely related to our work , except that our extraction does not have such restrictions and instead closely mimics what human mathematicians would do . 3 METAMATH AND PROOF REPRESENTATION . In this section , we describe how one represents proofs in the Metamath theorem proving environment . 
We would like to first note that even though the discussion here specializes in the Metamath environment , most of the other formal systems ( Isabelle/HOL , HOL Light , Coq , Lean ) have very similar representations . The fundamental idea is to think of a theorem as a function , and the proof tree essentially represents an abstract syntax tree of a series of function applications that lead to the intended conclusion . A proof of a theorem in the Metamath environment is represented as a tree . For example , the proof of the theorem a1i is shown in Figure 1 ( a ) . Each node of the tree is associated with a name ( labeled as N ) , which can refer to a premise of the theorem , an axiom , or a proved theorem from the existing theorem database . Given such a tree , one can then traverse the tree from top to bottom , and iteratively prove a true proposition ( labeled as PROP ) for each node by making a step of theorem application . The top-level nodes usually represent the premises of the theorem , and the resulting proposition in the bottom node matches the conclusion of the theorem . In such a way , the theorem is proved . We now define one step of theorem application . When a node is connected to a set of parent nodes , it represents a step of theorem application . In particular , one can think of a theorem as a function that maps a set of hypotheses to a conclusion . Indeed , a node in the tree exactly represents such a function mapping , that is , mapping the set of propositions of the parent nodes to a new conclusion specified by the theorem . Formally , given a node c whose associated name refers to a theorem T , we denote its parent nodes as Pc . We can then prove a new proposition by applying the theorem T to all propositions proved by nodes in Pc . The proof of the theorem a1i in Figure 1 ( a ) consists of 3 theorem applications . In plain language , the theorem is a proof of the fact that if ph is true , then ( ps- > ph ) is also true . 
The top-level nodes are the hypotheses of the theorem . Most of the hypotheses state that some expression is a well-formed formula so that the expression can be used to form a syntactically correct sentence . The more interesting hypothesis is a1i.1 , which states |-ph , meaning ph is assumed to be true . In the bottom node , the theorem invokes the theorem ax-mp , which takes in four propositions as hypotheses , and returns the conclusion |- ( ps- > ph ) . | The idea of the paper is based on the view of a theorem + proof as a tree, wherein sub-trees are also theorem + proofs. The goal is to extract useful lemmas from existing proof trees. This is then cast as the problem of predicting whether each node should be extracted as a new theorem (or lemma). The main idea of the paper is to construct a training set for this prediction problem by replacing calls to lemmas with their proof tree, and labelling the nodes which root these newly inserted trees as positive examples, i.e. those to be extracted as useful lemmas. | SP:6e719ff1e0c9c6f5158de741485802e1e6e818eb |
Generalized Demographic Parity for Group Fairness | 1 INTRODUCTION . The fairness problem has attracted increasing attention in many high-stakes applications , such as credit rating , insurance pricing and college admission ( Mehrabi et al. , 2021 ; Du et al. , 2020 ; Bellamy et al. , 2018 ) , where the adopted machine learning models encode and even amplify societal biases toward groups with different sensitive attributes . The majority of existing fairness metrics , such as demographic parity ( DP ) ( Feldman et al. , 2015 ) and equal odds ( EO ) ( Hardt et al. , 2016 ) , predominantly consider discrete sensitive variables such as gender and race . In many real-world applications including urban studies and mobility predictions ( Tessum et al. , 2021 ) , however , individuals ’ sensitive attributes are unavailable due to privacy constraints . Instead , only aggregated attributes presented as continuous distributions are available , and thus fairness requires unbiased prediction over neighborhood or region-level objects . Additionally , some sensitive attributes , such as age and weight , are inherently continuous ( Mary et al. , 2019 ; Grari et al. , IJCAI ’ 20 ) . The wide presence of continuous sensitive attributes calls for further fairness metric definitions and bias mitigation methods . Existing fairness metrics on continuous sensitive attributes rely on statistical measurements of independence , such as the Hirschfeld-Gebelein-Renyi ( HGR ) maximal correlation coefficient ( Mary et al. , 2019 ) and mutual information ( Jha et al. , 2021 ; Creager et al. , 2019 ) , which are computationally intractable due to the involved functional optimization . Since mutual information involves the ratio of probability density functions , it is intractable to directly estimate mutual information via probability density function estimation due to the sensitivity to the probability density function , especially for low probability density values . Previous works ( Roh et al . 
, 2020 ; Lowy et al. , 2021 ; Cho et al. , 2020 ) adopting mutual information or its variations as a regularizer , however , rely on tractable bounds , computationally complex singular value decomposition operations ( Mary et al. , 2019 ) , or neural network approximations that require training ( Belghazi et al. , 2018 ) , such as the Donsker-Varadhan representation ( Belghazi et al. , 2018 ) and variational bounds ( Poole et al. , 2019 ) . Nevertheless , it is unreliable to adopt the mathematical bound of a fairness metric to evaluate different algorithms , since a lower metric bound does not necessarily imply lower prediction bias . A question is raised : Can we extend DP for continuous attributes while preserving tractable computation ? In this work , we provide positive answers by proposing generalized demographic parity ( GDP ) from a regression perspective . ( Figure 1 caption : DP and generalized demographic parity for continuous sensitive attributes . Markers represent the average prediction for specific discrete sensitive attributes ; the red dashed line and the blue solid line represent the prediction average among all data and that with a specific sensitive attribute , respectively . TV ( · , · ) represents weighted total variation distance . ) Figure 1 provides an illustrative example for DP and GDP . The local prediction average ( blue solid curve ) and the global prediction average ( red dashed line ) represent the average prediction value given sensitive attributes and over the whole data samples , respectively . The local and global prediction averages should be consistent at any specific continuous sensitive attribute . Therefore , we define GDP , via the weighted total variation distance , to measure the distance between the local and global prediction averages , where the weight is the probability density of the continuous sensitive attributes . 
We also theoretically demonstrate the equivalence of GDP and DP for binary sensitive attributes, provide an understanding of GDP from a probability perspective, and reveal the connection between the GDP regularizer and adversarial debiasing for bias mitigation. Although GDP is cleanly defined on the unknown underlying joint distribution of predictions and sensitive attributes, in practice only data samples are available. To this end, we propose two GDP estimation methods, histogram estimation (a hard grouping strategy) and kernel estimation (a soft grouping strategy), where kernel estimation provably enjoys a faster estimation-error convergence rate w.r.t. the data sample size. Specifically, histogram estimation divides the continuous sensitive attribute into several disjoint, complementary bins, and each sample belongs only to the bin containing its sensitive attribute; in other words, the group indicator of a sample is one-hot. Kernel estimation, instead of quantizing the continuous sensitive attribute into groups, treats the group indicator of a sample as a kernel function: to compute the mean prediction at a specific sensitive attribute value, a group smoothing strategy is adopted via a soft indicator determined by the distance between the sample's sensitive attribute and the target value. In short, the contributions of this paper are: • We develop a tractable group fairness metric, GDP, for continuous sensitive attributes. We theoretically justify GDP by demonstrating its equivalence with DP for binary sensitive attributes, providing an understanding of GDP from a probability perspective, and revealing the connection between the GDP regularizer and adversarial debiasing.
• We propose histogram and kernel GDP estimators and provably demonstrate the superiority of the kernel estimator through its faster estimation-error convergence rate w.r.t. sample size. • We experimentally evaluate the effectiveness and extensibility of GDP on benchmarks from different domains (e.g., tabular, graph, and temporal graph data), tasks (e.g., classification and regression), and compositional sensitive attributes. 2 RELATED WORK . Machine Learning Fairness Fair machine learning targets bias mitigation for automated decision-making systems. Various fairness definitions, such as group fairness and individual fairness, have been proposed (Zemel et al., 2013). Group metrics, such as DP and EO, measure the prediction difference between groups with different sensitive attributes such as gender and age (Louizos et al., 2016; Hardt et al., 2016). While pre- and post-processing methods have been proposed to improve fairness, they can still lead to higher prediction bias (Barocas et al., 2017) compared with in-processing methods such as regularization, adversarial debiasing, and data augmentation. For example, a regularizer on the covariance between predictions and sensitive attributes is imposed to encourage independence in (Woodworth et al., 2017). Zafar et al. (2017) constrain the decision boundaries of the classifier to minimize the prediction disparity between different groups. Adversarial training was originally proposed for deep generative modeling (Goodfellow et al., 2014) and has been introduced for prediction debiasing in representation learning (Zhao et al., 2020; Beutel et al., 2017; Louppe et al., 2017) and transfer learning (Madras et al., 2018). Data augmentation, such as fair mixup (Chuang & Mroueh, 2020), can improve generalization for fairness. Representation neutralization has been proposed to improve fairness without sensitive attributes (Du et al., 2021).
Kernel Density Estimation and Kernel Regression Kernel density estimation (KDE) is a non-parametric method to estimate the continuous probability density function of a random variable (Davis et al., 2011; Parzen, 1962). Given finite data samples, KDE smoothly estimates the density via a weighted summation, where the weights are determined by a kernel function (Epanechnikov, 1969). Kernel regression is a non-parametric technique to estimate the conditional expectation of a random variable (Nadaraya, 1964). The Nadaraya-Watson kernel regression estimator performs regression via a locally normalized weighted average (Bierens, 1988), where the sample weights are determined by a kernel function. 3 GENERALIZED DEMOGRAPHIC PARITY . Without loss of generality, we consider a binary classification task that predicts the output variable Y given the input variable X, while avoiding prediction bias with respect to a sensitive attribute S. Define the input X ∈ X ⊂ R^d, labels Y ∈ {0, 1}, and a machine learning model f : R^d → [0, 1] that provides a prediction score Ŷ = f(X). Fairness requires the predictor Ŷ to be independent of the sensitive attribute S, whether continuous or discrete, i.e., P(Ŷ = ŷ) = P(Ŷ = ŷ | S = s) for any support values ŷ and s (Beutel et al., 2017). Since the independence constraint is difficult to optimize, the relaxed demographic parity (DP) metric (Madras et al., 2018) was proposed to quantitatively measure predictor bias for a binary sensitive attribute S ∈ {0, 1}. Formally, demographic parity is defined as ∆DP = |E[Ŷ | S = 0] − E[Ŷ | S = 1]|, where E[·] denotes expectation. For a categorical sensitive attribute S ∈ S, Cho et al. (2020) introduce a fairness metric named difference w.r.t. demographic parity (DDP): ∆DDP = ∑_{s∈S} |E[Ŷ | S = s] − E[Ŷ]|.
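As a concrete illustration of these two metrics (a minimal sketch with toy data of our own; the function names are ours, not from the paper), the conditional expectations can be replaced by group-wise sample means:

```python
import numpy as np

def delta_dp(y_hat, s):
    """Demographic parity gap |E[Yhat | S=0] - E[Yhat | S=1]| for binary S."""
    return abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())

def delta_ddp(y_hat, s):
    """Difference w.r.t. DP: sum over s of |E[Yhat | S=s] - E[Yhat]|."""
    return sum(abs(y_hat[s == v].mean() - y_hat.mean()) for v in np.unique(s))

# Toy predictions biased toward the group with S = 1.
s = np.array([0, 0, 0, 1, 1, 1])
y_hat = np.array([0.2, 0.3, 0.1, 0.8, 0.9, 0.7])
print(delta_dp(y_hat, s), delta_ddp(y_hat, s))
```

On this toy data both gaps evaluate to 0.6; in fact, for any binary attribute DDP reduces exactly to DP, since |m(0) − m_avg| = P(S=1)·∆DP and |m(1) − m_avg| = P(S=0)·∆DP.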
Although DP has been widely used to evaluate prediction bias, it is inapplicable to continuous sensitive attributes, since the data samples cannot be directly divided into several distinct groups based on the sensitive attribute. Without loss of generality, we assume a continuous sensitive attribute S ∈ [0, 1] and propose GDP to extend tractable DP to continuous sensitive attributes. Assume the joint distribution of the tuple (S, Ŷ) is P_{S,Ŷ}(s, ŷ). The local prediction average and the global prediction average are defined as the prediction expectation given the sensitive attribute S = s and without any sensitive attribute condition, i.e., m(s) ≜ E[Ŷ | S = s] and m_avg ≜ E_S[m(S)] = E[Ŷ], respectively. We then take a weighted total variation distance between the local and global prediction averages, where the weight is the probability density function of the sensitive attribute. The formal definition of generalized demographic parity for continuous sensitive attributes is: ∆GDP = ∫₀¹ |m(s) − m_avg| P_S(S = s) ds = E_S[|m(S) − m_avg|]. (1) We also establish the connection between GDP and DP for binary sensitive attributes, which implies that GDP is equivalent to DP in that case: Theorem 1 (Connection between DP and GDP). For a binary sensitive attribute S ∈ {0, 1}, GDP and DP are equivalent up to a coefficient that depends only on the dataset. Specifically, ∆GDP = 2 P_S(S = 1) · P_S(S = 0) · ∆DP. The proof of Theorem 1 is presented in Appendix A. For categorical sensitive attributes, it is easy to obtain ∆GDP = ∑_{s∈S} P_S(S = s) |E[Ŷ | S = s] − E[Ŷ]|, i.e., GDP is a weighted DDP. In a nutshell, GDP is a natural extension of the fairness metrics for binary and categorical sensitive attributes.
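To make Eq. (1) and Theorem 1 concrete, the following sketch (our own illustrative code, not the authors' implementation; `gdp_histogram` and the synthetic data are assumptions) implements a histogram-style "hard group" estimator and numerically checks the relation ∆GDP = 2 P(S=1) P(S=0) ∆DP for a binary attribute:

```python
import numpy as np

def gdp_histogram(y_hat, s, n_bins=10):
    """Histogram ("hard group") GDP estimate for S in [0, 1]:
    bin S, then average |m(bin) - m_avg| weighted by the bin probability (Eq. 1)."""
    m_avg = y_hat.mean()
    bins = np.clip((s * n_bins).astype(int), 0, n_bins - 1)
    gdp = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            gdp += mask.mean() * abs(y_hat[mask].mean() - m_avg)
    return gdp

# Numerical check of Theorem 1 with a binary attribute and a biased predictor.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=1000).astype(float)
y_hat = 0.3 + 0.4 * s                      # group means 0.3 and 0.7
p1 = s.mean()
dp = abs(y_hat[s == 0.0].mean() - y_hat[s == 1.0].mean())
gdp = gdp_histogram(y_hat, s, n_bins=2)
assert np.isclose(gdp, 2 * p1 * (1 - p1) * dp)
```

The kernel ("soft group") variant described in the paper would replace the hard bin indicator `mask` with kernel weights K((s_i − s)/h), i.e., a Nadaraya-Watson estimate of m(s); we omit it here since the bandwidth choice is paper-specific.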
Since independence between the prediction Ŷ and the sensitive attribute S means that the joint distribution P_{S,Ŷ}(s, ŷ) and the product of marginals P_S(s)P_Ŷ(ŷ) coincide, bias can be measured by the distance between the joint distribution and the product of marginals. We therefore show the connection between GDP and a prediction-weighted total variation distance between these two distributions: Theorem 2 (Probability View of GDP). Assume the joint distribution of (Ŷ, S) with support [0, 1]² is P_{S,Ŷ}(s, ŷ). Define the prediction-weighted total variation distance as TV_pred(P¹, P²) ≜ ∫₀¹ ∫₀¹ ŷ |P¹(ŷ, s) − P²(ŷ, s)| dŷ ds. Then the proposed fairness metric for continuous sensitive attributes is upper bounded by the prediction-weighted total variation distance between the joint distribution and the product of marginals: ∆GDP = ∫₀¹ |∫₀¹ ŷ [P_{S,Ŷ}(s, ŷ) − P_S(s)P_Ŷ(ŷ)] dŷ| ds ≤ TV_pred(P_{S,Ŷ}(s, ŷ), P_S(s)P_Ŷ(ŷ)). The proof of Theorem 2 is presented in Appendix B. Theorem 2 shows that GDP is a lower bound on the prediction-weighted total variation distance between these two distributions, which supports the use of GDP for bias measurement. | This paper proposes a generalized demographic parity for group fairness which is computationally feasible for both continuous and discrete protected attributes. Two estimators, histogram and kernel, are proposed for efficient estimation, and the kernel estimator has a faster estimation error convergence rate. The connection between the GDP regularizer and adversarial debiasing is established. Experiments on synthetic/tabular/graph datasets show the effectiveness and efficiency of the GDP kernel estimator. | SP:bb1a291cdfa4e3d1566688021a844a16bce29c90 |
The paper is motivated by the need to account for continuous attributes in fair machine learning. In particular, it proposes a Generalized Demographic Parity metric (GDP), i.e., a group fairness metric that can work with both continuous and discrete variables. A key challenge in doing so is preserving tractable computation. Theoretically, GDP is defined via the weighted total variation distance and measures the distance between the local and global prediction average, where the weight corresponds to the pdf of the continuous sensitive attributes. Since the joint distribution between model prediction and sensitive attributes might not be available in practice, the paper also proposes two estimation methods: histogram estimation and kernel estimation. The former quantizes the continuous sensitive attributes into bins, whereas in the latter method the group indicator of the sample is treated as a kernel function. The kernel method leads to a faster estimation error convergence rate in terms of sample size. The paper provides an extensive evaluation with tabular, graph and temporal graph data, synthetic experiments, and classification and regression tasks. It is demonstrated experimentally that the kernel method has better bias mitigation performance. | SP:bb1a291cdfa4e3d1566688021a844a16bce29c90 |
Generalized Demographic Parity for Group Fairness | 1 INTRODUCTION . Fairness problem has attracted increasing attention in many high-stakes applications , such as credit rating , insurance pricing and college admission ( Mehrabi et al. , 2021 ; Du et al. , 2020 ; Bellamy et al. , 2018 ) , the adopted machine learning models encode and even amplify societal biases toward the group with different sensitive attributes . The majority of existing fairness metrics , such as demographic parity ( DP ) ( Feldman et al. , 2015 ) , equal odds ( EO ) ( Hardt et al. , 2016 ) , presumably consider discrete sensitive variables such as gender and race . In many real-world applications including urban studies and mobility predictions ( Tessum et al. , 2021 ) , however , individuals ’ sensitive attributes are unavailable due to privacy constraints . Instead , only aggregated attributes presenting in continuous distributions are available , and thus fairness requires unbiased prediction over neighborhood or region-level objects . Additionally , the sensitive attributes , such as age and weight , are inherently continuous ( Mary et al. , 2019 ; Grari et al. , IJCAI ’ 20 ) . The widely existing continuous sensitive attributes stimulate further fairness metrics definition and bias mitigation methods . Existing fairness metrics on continuous sensitive attributes rely on the statistical measurement of independence , such as Hirschfeld-Gebelein-Renyi ( HGR ) maximal correlation coefficient ( Mary et al. , 2019 ) and mutual information ( Jha et al. , 2021 ; Creager et al. , 2019 ) , which are computationintractable due to the involved functional optimization . Note that the mutual information involves the ratio of probability density function , it is intractable to directly estimate mutual information via probability density function estimation due to the sensitivity over probability density function , especially for the low probability density value . Previous works ( Roh et al. 
, 2020 ; Lowy et al. , 2021 ; Cho et al. , 2020 ) adopting mutual information or variations as regularizer , however , rely on tractable bound , or computationally complex singular value decomposition operation ( Mary et al. , 2019 ) , or training-needed neural network approximation ( Belghazi et al. , 2018 ) , such as Donsker-Varadhan representation ( Belghazi et al. , 2018 ) , variational bounds ( Poole et al. , 2019 ) . Nevertheless , it is unreliable to adopt the mathematical bound of fairness metric to evaluate different algorithms since lower metrics bound does not necessarily imply lower prediction bias . A question is raised : and generalized demographic parity for continuous sensitive attribute . Markers represent average prediction among specific discrete sensitive attributes , Red dashed line and blue solid line represent prediction average among all data and that with specific sensitive attribute . TV ( · , · ) represents weighted total variation distance . Can we extend DP for continuous attributes while preserving tractable computation ? In this work , we provide positive answers via proposing generalized demographic parity ( GDP ) from regression perspective . Figure 1 provides an illustrative example for DP and GDP . The local prediction average ( blue solid curve ) and the global prediction average ( red dashed line ) represent the average prediction value given sensitive attributes and the whole data samples , respectively . The local and global prediction average should be consistent at any specific continuous sensitive attributes . Therefore , we define GDP , via the weighted total variation distance , to measure the distance between the local and global prediction average , where the weight is the probability density of continuous sensitive attributes . 
We also theoretically demonstrate the equivalence of GDP and DP for binary sensitive attributes , provide an understanding of GDP from probability perspective , and reveal the bias mitigation methods connection between GDP regularizer and adversarial debiasing . Although GDP is clearly defined on the unknown underlying joint distribution of prediction and sensitive attributes , only data samples , in practice , are available . To this end , we propose two GDP estimation methods , named histogram estimation ( hard group strategy ) and kernel estimation ( soft group strategy ) methods , where kernel estimation is provable with faster estimation error convergence rate w.r.t . data sample size . Specifically , histogram estimation manually divides continuous sensitive attributes into several disjointed and complementary sensitive attributes bins , and then each sample only belongs to one specific bin containing the sensitive attribute of the sample . In other words , the group indicator of the sample is one-hot . As for kernel estimation , instead of quantizing continuous sensitive attributes as several groups , the group indicator of the sample is treated as a kernel function . In other words , to calculate the mean prediction value given specific sensitive attributes , group smoothing strategy is adopted via a soft indicator determined by the sensitive attribute distance between the sample sensitive attribute with target-specific sensitive attribute . In short , the contributions of this paper are : • We develop a tractable group fairness metric GDP for continuous sensitive attributes . We theoretically justify GDP via demonstrating the equivalence with DP for binary sensitive attributes , providing GDP understanding from probability perspective , and revealing the connection between GDP regularizer and adversarial debiasing . 
• We propose histogram and kernel GDP estimation and provably demonstrate the superiority of kernel GDP estimation method with faster estimation error convergence rate w.r.t . sample size . • We experimentally evaluate the effectiveness and expansibility of GDP on different domain benchmarks ( e.g. , tabular , graph , and temporal graph data ) , tasks ( e.g , classification and regression tasks ) , and compositional sensitive attributes . 2 RELATED WORK . Machine Learning Fairness Fair machine learning targets bias mitigation for automated decisionmaking systems . Various fairness definitions , such as group fairness and individual fairness , have been proposed ( Zemel et al. , 2013 ) . Group metrics , such as DP and EO , measure prediction difference between the groups with different sensitive attributes such as gender , age ( Louizos et al. , 2016 ; Hardt et al. , 2016 ) . While pre- and post-processing methods have been proposed for fairness boosting , these methods can still lead to higher prediction bias ( Barocas et al. , 2017 ) compared with in-processing methods , such as adding regularizer , adversarial debiasing and data augmentation . For example , the covariance between the predictions and sensitive attributes regularization are imposed to boost the independence in ( Woodworth et al. , 2017 ) . ( Zafar et al. , 2017 ) constrains the decision boundaries of classifier to minimize prediction disparity between different groups . Adversarial training has been originally proposed for deep generative modeling ( Goodfellow et al. , 2014 ) and has been introduced for prediction debias in representation learning ( Zhao et al. , 2020 ; Beutel et al. , 2017 ; Louppe et al. , 2017 ) and transfer learning ( Madras et al. , 2018 ) . Data augmentation , such as fair mixup ( Chuang & Mroueh , 2020 ) , can improve the generalization ability for fairness . Representation neutralization is proposed to boost fairness without sensitive attribute ( Du et al. , 2021 ) . 
Kernel Density Estimation and Kernel Regression Kernel density estimation ( KDE ) is a nonparametric method to estimate the continuous probability density function of a random variable ( Davis et al. , 2011 ; Parzen , 1962 ) . Given finite data samples , KDE smoothly estimate the probability function via weighted summation , where the weight is determined via kernel function ( Epanechnikov , 1969 ) . Kernel regression is a non-parametric technique to estimate the conditional expectation of a random variable ( Nadaraya , 1964 ) . Nadaraya-Watson Kernel regression function estimator is proposed for regression via locally normalized weighted average in ( Bierens , 1988 ) , where the sample weight is determined by kernel function . 3 GENERALIZED DEMOGRAPHIC PARITY . Without loss of generality , we consider a binary classification task to predict the output variable Y given the input variable X , while avoiding prediction bias for sensitive attribute S. Define the input X ∈ X ⊂ Rd , labels Y ∈ { 0 , 1 } , and machine learning model f : Rd → [ 0 , 1 ] provides prediction score Ŷ = f ( X ) . Fairness requires predictor Ŷ to be independent of sensitive attribute S , regardless of continuous or discrete , i.e. , P ( Ŷ = ŷ ) = P ( Ŷ = ŷ|S = s ) for any support value y and s ( Beutel et al. , 2017 ) . Since the independent constraint is difficult to optimize , the relaxed demographic parity ( DP ) ( Madras et al. , 2018 ) metrics are proposed to quantitatively measure the predictor bias for binary sensitive attribute S ∈ { 0 , 1 } . Formally , the demographic parity is defined as ∆DP = |EŶ [ Ŷ |S = 0 ] − EŶ [ Ŷ |S = 1 ] | , where E [ · ] represents variable expectation . For categorical sensitive attribute S ∈ S , work ( Cho et al. , 2020 ) introduces a fairness metric , named difference w.r.t . demographic parity ( DDP ) ∆DDP = ∑ s∈S |EŶ [ Ŷ |S = s ] − EŶ [ Ŷ ] | . 
Although DP has been widely used to evaluate prediction bias, it is inapplicable to continuous sensitive attributes, since the data samples cannot be directly divided into several distinct groups based on the sensitive attribute. Without loss of generality, we assume a continuous sensitive attribute S ∈ [0, 1] and propose GDP to extend tractable DP to continuous sensitive attributes. Assume the joint distribution of the tuple (S, Ŷ) is P_{S,Ŷ}(s, ŷ). The local prediction average and the global prediction average are defined as the prediction expectation given the sensitive attribute S = s and without any condition on the sensitive attribute, i.e., the local prediction average m(s) ≜ E[Ŷ | S = s] and the global prediction average m_avg ≜ E_S[m(S)] = E[Ŷ], respectively. Then we adopt a weighted total variation distance between the local prediction average and the global prediction average, where the weight is given by the probability density function of the sensitive attribute. The formal definition of generalized demographic parity for continuous sensitive attributes is as follows: ∆GDP = ∫_0^1 |m(s) − m_avg| P_S(s) ds = E_S[|m(S) − m_avg|]. (1) We also provide the connection between GDP and DP for binary sensitive attributes, which implies that GDP is equivalent to DP in that case: Theorem 1 (Connection between DP and GDP). For a binary sensitive attribute S ∈ {0, 1}, GDP and DP are equivalent up to a coefficient that depends only on the dataset. Specifically, ∆GDP and ∆DP satisfy ∆GDP = 2 P_S(S = 1) · P_S(S = 0) · ∆DP. The proof of Theorem 1 is presented in Appendix A. For categorical sensitive attributes, it is easy to obtain that ∆GDP = Σ_{s∈S} P_S(S = s) |E[Ŷ | S = s] − E[Ŷ]|, i.e., GDP is a weighted DDP for categorical sensitive attributes. In a nutshell, GDP is a natural extension of the fairness metric to binary and categorical sensitive attributes.
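Theorem 1 can be checked numerically for a binary attribute: computing GDP directly as the probability-weighted deviation of the group means from the global mean reproduces 2·P(S=1)·P(S=0)·∆DP exactly. A hedged sketch (the toy prediction model is illustrative):

```python
import numpy as np

def delta_dp(preds, s):
    """Demographic parity gap |E[Y_hat|S=0] - E[Y_hat|S=1]| for binary S."""
    return abs(preds[s == 0].mean() - preds[s == 1].mean())

def delta_gdp(preds, s):
    """GDP for binary S, computed from its definition
    E_S[|m(S) - m_avg|] over the empirical distribution."""
    m_avg = preds.mean()
    p1 = (s == 1).mean()
    m0, m1 = preds[s == 0].mean(), preds[s == 1].mean()
    return (1 - p1) * abs(m0 - m_avg) + p1 * abs(m1 - m_avg)

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=10000)
preds = np.clip(0.3 + 0.3 * s + 0.1 * rng.standard_normal(10000), 0, 1)
p1 = (s == 1).mean()
lhs = delta_gdp(preds, s)
rhs = 2 * p1 * (1 - p1) * delta_dp(preds, s)
print(lhs, rhs)   # the two quantities agree (Theorem 1)
```

The identity is exact because m_avg = (1−p1)·m0 + p1·m1, so each group's deviation from the global mean is proportional to the between-group gap.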
Since independence between the prediction Ŷ and the sensitive attribute S means that the joint distribution P_{S,Ŷ}(s, ŷ) and the product of the marginal distributions P_S(s) P_Ŷ(ŷ) coincide, the bias can be measured by the distance between the joint distribution and the product of the marginals. Subsequently, we show the connection between GDP and a prediction-weighted total variation distance between these two distributions: Theorem 2 (Probability View of GDP). Assume the joint distribution of (Ŷ, S) with support [0, 1]² is P_{S,Ŷ}(s, ŷ). Define the prediction-weighted total variation distance as TV_pred(P¹, P²) ≜ ∫_0^1 ∫_0^1 ŷ |P¹(ŷ, s) − P²(ŷ, s)| dŷ ds. Then the proposed fairness metric for continuous sensitive attributes is upper bounded by the prediction-weighted total variation distance between the joint distribution and the product of the marginals: ∆GDP = ∫_0^1 |∫_0^1 ŷ [P_{S,Ŷ}(s, ŷ) − P_S(s) P_Ŷ(ŷ)] dŷ| ds ≤ TV_pred(P_{S,Ŷ}(s, ŷ), P_S(s) P_Ŷ(ŷ)). The proof of Theorem 2 is presented in Appendix B. Theorem 2 demonstrates that GDP is in fact a lower bound on the prediction-weighted total variation distance between these two distributions, which implies the suitability of GDP as a bias measure. | This paper proposes a new extension of demographic parity when the sensitive attribute is continuous. It proposes two techniques to estimate this quantity from data. The authors also consider adding this quantity as a regularization term to control discrimination while designing machine learning models. | SP:bb1a291cdfa4e3d1566688021a844a16bce29c90
A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks | O(1/(Nϵ⁴)) iteration complexity for finding an ϵ-stationary point, where N is the number of machines. This indicates that our algorithm enjoys linear speedup. Our experiments on several benchmark datasets and various scenarios demonstrate that our algorithm indeed exhibits fast convergence in practice and validate our theory. 1 INTRODUCTION. Deep learning has achieved tremendous successes in many domains, including computer vision (Krizhevsky et al., 2012; He et al., 2016), natural language processing (Devlin et al., 2018), and games (Silver et al., 2016). To obtain good empirical performance, people usually need to train large models on a huge amount of data, which is very computationally expensive. To speed up the training process, distributed training becomes indispensable (Dean et al., 2012). For example, Goyal et al. (2017) trained a ResNet-50 on the ImageNet dataset by distributed SGD with minibatch size 8192 on 256 GPUs in only one hour, which not only matches the small-minibatch accuracy but also enjoys parallel speedup, and hence improves the running time. Recently, there has been increasing interest in a variant of distributed learning, namely Federated Learning (FL) (McMahan et al., 2017), which focuses on the case where the training data is non-i.i.d. across devices and only limited communication is allowed. McMahan et al. (2017) proposed an algorithm named Federated Averaging, which runs multiple steps of SGD on each client before communicating with other clients. Despite the empirical success of distributed SGD and its variants (e.g., Federated Averaging) in deep learning, they may not exhibit good performance when training some neural networks (e.g., Recurrent Neural Networks, LSTMs), due to the exploding gradient problem (Pascanu et al., 2012; 2013).
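To make the exploding-gradient issue concrete, here is a minimal sketch of a single clipped gradient step (a hypothetical toy update, not the paper's exact algorithm): the gradient is rescaled so that its norm never exceeds a threshold, which bounds the movement per step no matter how large the raw gradient is.

```python
import numpy as np

def clipped_step(w, grad, lr=0.1, clip_norm=1.0):
    """One clipped-gradient step: scale the gradient so its norm is at
    most clip_norm, then take a plain gradient step of size lr."""
    g_norm = np.linalg.norm(grad)
    scale = min(1.0, clip_norm / (g_norm + 1e-12))
    return w - lr * scale * grad

# A huge gradient (exp-like losses are the canonical setting where
# gradients explode) still moves the iterate by at most lr * clip_norm.
w = np.array([10.0])
huge_grad = np.array([np.exp(10.0)])
w_next = clipped_step(w, huge_grad)
print(abs(w_next[0] - w[0]))   # bounded by lr * clip_norm = 0.1

# A small gradient is left unscaled, so the step is an ordinary SGD step.
w_small = clipped_step(np.array([0.0]), np.array([0.5]))
print(w_small[0])
```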
To address this issue, Pascanu et al. (2013) proposed the gradient clipping strategy, and it has become a standard technique when training language models (Gehring et al., 2017; Peters et al., 2018; Merity et al., 2018). Some recent works try to theoretically explain gradient clipping from nonconvex optimization's perspective (Zhang et al., 2019; 2020). These works are built upon an important observation made in (Zhang et al., 2019): for certain neural networks such as LSTMs, the gradient does not vary uniformly over the loss landscape (i.e., the gradient is not Lipschitz continuous with a uniform constant), and the gradient Lipschitz constant can scale linearly with respect to the gradient norm. This is referred to as the relaxed smoothness condition (i.e., (L0, L1)-smoothness defined in Definition 2), which generalizes and strictly relaxes the usual smoothness condition (i.e., L-smoothness defined in Definition 1). Under the relaxed smoothness condition, Zhang et al. (2019; 2020) proved that gradient clipping enjoys polynomial-time iteration complexity for finding a first-order stationary point in the single-machine setting, and that it can be arbitrarily faster than fixed-step gradient descent. In practice, both distributed learning (or FL) and gradient clipping are important techniques to accelerate neural network training. However, the theoretical analysis of gradient clipping is restricted to the single-machine setting (Zhang et al., 2019; 2020). Hence it naturally motivates us to consider the following question: Is it possible that the gradient clipping scheme can take advantage of multiple machines to enjoy parallel speedup in training deep neural networks, with data heterogeneity across machines and limited communication? In this paper, we give an affirmative answer to the above question. Built upon the relaxed smoothness condition as in (Zhang et al.
, 2019; 2020), we design a communication-efficient distributed gradient clipping algorithm. The key characteristics of our algorithm are: (i) unlike the naive parallel gradient clipping algorithm, which requires averaging model weights and gradients from all machines at every iteration, our algorithm only aggregates weights with other machines after a certain number of local updates on each machine; (ii) our algorithm clips the gradient according to the norm of the local gradient on each machine, instead of the norm of the averaged gradients across machines as in the naive parallel version. These key features make our algorithm amenable to the FL setting, and it is nontrivial to establish the desired theoretical guarantees (e.g., linear speedup, reduced communication complexity). The main difficulty in the analysis lies in dealing with a nonconvex objective function, a non-Lipschitz-continuous gradient, and skipped communication rounds simultaneously. Our main contributions are summarized as follows: • We design a novel communication-efficient distributed stochastic local gradient clipping algorithm, namely CELGC, for solving a nonconvex optimization problem under the relaxed smoothness condition. The algorithm only needs to clip the gradient according to the local gradient's magnitude, and it globally averages the weights on all machines periodically. To the best of our knowledge, this is the first work proposing communication-efficient distributed stochastic gradient clipping algorithms under the relaxed smoothness condition. • Under the relaxed smoothness condition, we prove iteration and communication complexity results of our algorithm for finding an ϵ-stationary point. First, compared with (Zhang et al., 2020), we prove that our algorithm enjoys linear speedup, which means that the iteration complexity of our algorithm is reduced by a factor of N (the number of machines).
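The two key characteristics above — local clipping by each machine's own gradient norm, and weight averaging only once per round of local updates — can be sketched as follows. This is a hedged toy implementation on a quadratic objective, with illustrative names and hyperparameters; it is not the paper's CELGC pseudocode.

```python
import numpy as np

def local_clipped_sgd(grad_fn, w0, n_machines=4, rounds=20, local_steps=5,
                      lr=0.1, clip_norm=1.0, seed=0):
    """Sketch of the described scheme: each machine runs `local_steps`
    clipped stochastic gradient steps, clipping by its *local* gradient
    norm, and the weights are averaged across machines once per round.
    `grad_fn(w, rng)` returns a stochastic gradient."""
    rng = np.random.default_rng(seed)
    ws = [w0.copy() for _ in range(n_machines)]
    for _ in range(rounds):
        for i in range(n_machines):
            for _ in range(local_steps):
                g = grad_fn(ws[i], rng)
                scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
                ws[i] = ws[i] - lr * scale * g
        w_avg = np.mean(ws, axis=0)          # the only communication step
        ws = [w_avg.copy() for _ in range(n_machines)]
    return ws[0]

# Toy objective f(w) = 0.5 ||w||^2 with noisy gradients; the iterate
# should be driven near the minimizer at the origin.
noisy_grad = lambda w, rng: w + 0.1 * rng.standard_normal(w.shape)
w_final = local_clipped_sgd(noisy_grad, np.array([5.0, -3.0]))
print(np.linalg.norm(w_final))
```

Note how, far from the optimum, clipping caps every local step at length lr · clip_norm, while near the optimum the updates reduce to ordinary local SGD.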
Second, compared with the naive parallel version of the algorithm of (Zhang et al., 2020), we prove that our algorithm enjoys better communication complexity. Specifically, our algorithm's communication complexity is smaller than that of the naive parallel clipping algorithm if the number of machines is not too large (i.e., N ≤ O(1/ϵ)). The detailed comparison with existing algorithms under the same relaxed smoothness condition is given in Table 1. Please refer to (Koloskova et al., 2020) for local SGD complexity results for L-smooth functions. (Footnote 1: In this setting, we assume the gradient norm is upper bounded by M, such that the gradient is (L0 + L1M)-Lipschitz. However, we want to emphasize that the original paper of (Ghadimi & Lan, 2013) does not require a bounded gradient assumption; instead, they require an L-Lipschitz gradient and bounded variance σ². Under their assumptions, their complexity result is O(∆Lϵ⁻² + ∆Lσ²ϵ⁻⁴). Footnote 2: The naive parallel version of (Zhang et al., 2020) differs from our algorithm (CELGC) with I = 1 in that the naive version averages gradients across all machines to update the model, while CELGC updates the model using only the local gradients computed on each machine. This also means that in each iteration the naive version clips based on the globally averaged gradient, while ours clips based only on the local gradient.) • We empirically verify our theoretical results by conducting experiments on different neural network architectures, on benchmark datasets, and in various scenarios including small to large batch sizes, homogeneous and heterogeneous data distributions, and partial participation of machines. The experimental results demonstrate that our proposed algorithm indeed exhibits speedup in practice. 2 RELATED WORK. Gradient Clipping/Normalization Algorithms In the deep learning literature, the gradient clipping (normalization) technique was initially proposed by (Pascanu et al.
, 2013) to address the exploding gradient problem identified in (Pascanu et al., 2012), and it has become a standard technique when training language models (Gehring et al., 2017; Peters et al., 2018; Merity et al., 2018). Menon et al. (2019) showed that gradient clipping is robust and can mitigate label noise. Recently, gradient normalization techniques (You et al., 2017; 2019) were applied to train deep neural networks in the very-large-batch setting. For example, You et al. (2017) designed the LARS algorithm to train a ResNet-50 on ImageNet with batch size 32k, which adapts the learning rate according to the norm of the weights and the norm of the gradient. In the optimization literature, gradient clipping (normalization) was used in the early days of convex optimization (Ermoliev, 1988; Alber et al., 1998; Shor, 2012). Nesterov (1984) and Hazan et al. (2015) considered normalized gradient descent for quasi-convex functions in the deterministic and stochastic cases, respectively. Gorbunov et al. (2020) designed an accelerated gradient clipping method to solve convex optimization problems with heavy-tailed noise in the stochastic gradients. Mai & Johansson (2021) established the stability and convergence of stochastic gradient clipping algorithms for convex and weakly convex functions. In nonconvex optimization, Levy (2016) showed that normalized gradient descent can escape saddle points. Cutkosky & Mehta (2020) showed that adding momentum provably improves normalized SGD in nonconvex optimization. Zhang et al. (2019) and Zhang et al. (2020) analyzed gradient clipping for nonconvex optimization under the relaxed smoothness condition rather than the traditional L-smoothness condition (Ghadimi & Lan, 2013).
However, all of these works consider only the single-machine setting or the naive parallel setting, and none of them apply to the FL setting, where data on different nodes is heterogeneous and only limited communication is allowed. Communication-Efficient Algorithms in Distributed and Federated Learning In large-scale machine learning, people usually train their model using first-order methods on multiple machines, and these machines communicate and aggregate their model parameters periodically. When the function is convex, there is a scheme named one-shot averaging (Zinkevich et al., 2010; McDonald et al., 2010; Zhang et al., 2013; Shamir & Srebro, 2014), in which every machine runs a stochastic approximation algorithm and the model weights are averaged across machines only at the very last iteration. The one-shot averaging scheme is communication-efficient and enjoys statistical convergence with one pass over the data (Zhang et al., 2013; Shamir & Srebro, 2014; Jain et al., 2017; Koloskova et al., 2019), but the training error may not converge in practice. McMahan et al. (2017) considered the Federated Learning setting, where the data is decentralized and might be non-i.i.d. across devices and communication is expensive. McMahan et al. (2017) designed the very first algorithm for FL (a.k.a. FedAvg), which is communication-efficient since every node communicates with other nodes infrequently. Stich (2018) considered a concrete case of FedAvg, namely local SGD, which runs SGD independently in parallel on different workers and averages the model parameters only once in a while. Stich (2018) also showed that local SGD enjoys linear speedup for strongly convex objective functions. There are also works analyzing local SGD and its variants on convex (Dieuleveut & Patel, 2019; Khaled et al., 2020; Karimireddy et al., 2020; Woodworth et al., 2020a;b; Gorbunov et al., 2021; Yuan et al.
, 2021) and nonconvex smooth functions (Zhou & Cong, 2017; Yu et al., 2019a;b; Jiang & Agrawal, 2018; Wang & Joshi, 2018; Lin et al., 2018; Basu et al., 2019; Haddadpour et al., 2019; Karimireddy et al., 2020). Recently, Woodworth et al. (2020a;b) analyzed the advantages and drawbacks of local SGD compared with minibatch SGD for convex objectives. Woodworth et al. (2021) proved hardness results for distributed stochastic convex optimization. Reddi et al. (2021) introduced a general framework of federated optimization and designed several federated versions of adaptive optimizers. Zhang et al. (2021) considered employing gradient clipping to optimize L-smooth functions while achieving differential privacy. Due to the vast literature on FL and limited space, we refer readers to (Kairouz et al., 2019) and the references therein. However, all of these works assume the objective function is either convex or L-smooth. To the best of our knowledge, our algorithm is the first communication-efficient algorithm that does not rely on these assumptions but still enjoys linear speedup. | The paper considers the effect of gradient clipping in Federated Learning and how it affects the convergence rate. They focused on the relaxed-smooth loss function. Each worker uses local gradient clipping and runs multiple steps of SGD before communicating and averaging the local models. The authors theoretically analyzed the algorithm and showed that for $N$ workers, the algorithm has $O(1/N\\epsilon^4)$ iteration complexity to find an $\\epsilon$-stationary point. Finally, the theoretical results are experimentally verified on CIFAR-10 (ResNet-56 model), Penn Treebank and WikiText (LSTM models). | SP:006a3aabe6e2d03798d9b441665ef1d1d30a7e1e
A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks | O(1/(Nϵ⁴)) iteration complexity for finding an ϵ-stationary point, where N is the number of machines. This indicates that our algorithm enjoys linear speedup. Our experiments on several benchmark datasets and various scenarios demonstrate that our algorithm indeed exhibits fast convergence in practice and validate our theory. 1 INTRODUCTION. Deep learning has achieved tremendous successes in many domains, including computer vision (Krizhevsky et al., 2012; He et al., 2016), natural language processing (Devlin et al., 2018), and games (Silver et al., 2016). To obtain good empirical performance, people usually need to train large models on a huge amount of data, which is very computationally expensive. To speed up the training process, distributed training becomes indispensable (Dean et al., 2012). For example, Goyal et al. (2017) trained a ResNet-50 on the ImageNet dataset by distributed SGD with minibatch size 8192 on 256 GPUs in only one hour, which not only matches the small-minibatch accuracy but also enjoys parallel speedup, and hence improves the running time. Recently, there has been increasing interest in a variant of distributed learning, namely Federated Learning (FL) (McMahan et al., 2017), which focuses on the case where the training data is non-i.i.d. across devices and only limited communication is allowed. McMahan et al. (2017) proposed an algorithm named Federated Averaging, which runs multiple steps of SGD on each client before communicating with other clients. Despite the empirical success of distributed SGD and its variants (e.g., Federated Averaging) in deep learning, they may not exhibit good performance when training some neural networks (e.g., Recurrent Neural Networks, LSTMs), due to the exploding gradient problem (Pascanu et al., 2012; 2013).
To address this issue, Pascanu et al. (2013) proposed the gradient clipping strategy, and it has become a standard technique when training language models (Gehring et al., 2017; Peters et al., 2018; Merity et al., 2018). Some recent works try to theoretically explain gradient clipping from nonconvex optimization's perspective (Zhang et al., 2019; 2020). These works are built upon an important observation made in (Zhang et al., 2019): for certain neural networks such as LSTMs, the gradient does not vary uniformly over the loss landscape (i.e., the gradient is not Lipschitz continuous with a uniform constant), and the gradient Lipschitz constant can scale linearly with respect to the gradient norm. This is referred to as the relaxed smoothness condition (i.e., (L0, L1)-smoothness defined in Definition 2), which generalizes and strictly relaxes the usual smoothness condition (i.e., L-smoothness defined in Definition 1). Under the relaxed smoothness condition, Zhang et al. (2019; 2020) proved that gradient clipping enjoys polynomial-time iteration complexity for finding a first-order stationary point in the single-machine setting, and that it can be arbitrarily faster than fixed-step gradient descent. In practice, both distributed learning (or FL) and gradient clipping are important techniques to accelerate neural network training. However, the theoretical analysis of gradient clipping is restricted to the single-machine setting (Zhang et al., 2019; 2020). Hence it naturally motivates us to consider the following question: Is it possible that the gradient clipping scheme can take advantage of multiple machines to enjoy parallel speedup in training deep neural networks, with data heterogeneity across machines and limited communication? In this paper, we give an affirmative answer to the above question. Built upon the relaxed smoothness condition as in (Zhang et al.
, 2019; 2020), we design a communication-efficient distributed gradient clipping algorithm. The key characteristics of our algorithm are: (i) unlike the naive parallel gradient clipping algorithm, which requires averaging model weights and gradients from all machines at every iteration, our algorithm only aggregates weights with other machines after a certain number of local updates on each machine; (ii) our algorithm clips the gradient according to the norm of the local gradient on each machine, instead of the norm of the averaged gradients across machines as in the naive parallel version. These key features make our algorithm amenable to the FL setting, and it is nontrivial to establish the desired theoretical guarantees (e.g., linear speedup, reduced communication complexity). The main difficulty in the analysis lies in dealing with a nonconvex objective function, a non-Lipschitz-continuous gradient, and skipped communication rounds simultaneously. Our main contributions are summarized as follows: • We design a novel communication-efficient distributed stochastic local gradient clipping algorithm, namely CELGC, for solving a nonconvex optimization problem under the relaxed smoothness condition. The algorithm only needs to clip the gradient according to the local gradient's magnitude, and it globally averages the weights on all machines periodically. To the best of our knowledge, this is the first work proposing communication-efficient distributed stochastic gradient clipping algorithms under the relaxed smoothness condition. • Under the relaxed smoothness condition, we prove iteration and communication complexity results of our algorithm for finding an ϵ-stationary point. First, compared with (Zhang et al., 2020), we prove that our algorithm enjoys linear speedup, which means that the iteration complexity of our algorithm is reduced by a factor of N (the number of machines).
Second, compared with the naive parallel version of the algorithm of (Zhang et al., 2020), we prove that our algorithm enjoys better communication complexity. Specifically, our algorithm's communication complexity is smaller than that of the naive parallel clipping algorithm if the number of machines is not too large (i.e., N ≤ O(1/ϵ)). The detailed comparison with existing algorithms under the same relaxed smoothness condition is given in Table 1. Please refer to (Koloskova et al., 2020) for local SGD complexity results for L-smooth functions. (Footnote 1: In this setting, we assume the gradient norm is upper bounded by M, such that the gradient is (L0 + L1M)-Lipschitz. However, we want to emphasize that the original paper of (Ghadimi & Lan, 2013) does not require a bounded gradient assumption; instead, they require an L-Lipschitz gradient and bounded variance σ². Under their assumptions, their complexity result is O(∆Lϵ⁻² + ∆Lσ²ϵ⁻⁴). Footnote 2: The naive parallel version of (Zhang et al., 2020) differs from our algorithm (CELGC) with I = 1 in that the naive version averages gradients across all machines to update the model, while CELGC updates the model using only the local gradients computed on each machine. This also means that in each iteration the naive version clips based on the globally averaged gradient, while ours clips based only on the local gradient.) • We empirically verify our theoretical results by conducting experiments on different neural network architectures, on benchmark datasets, and in various scenarios including small to large batch sizes, homogeneous and heterogeneous data distributions, and partial participation of machines. The experimental results demonstrate that our proposed algorithm indeed exhibits speedup in practice. 2 RELATED WORK. Gradient Clipping/Normalization Algorithms In the deep learning literature, the gradient clipping (normalization) technique was initially proposed by (Pascanu et al.
, 2013) to address the exploding gradient problem identified in (Pascanu et al., 2012), and it has become a standard technique when training language models (Gehring et al., 2017; Peters et al., 2018; Merity et al., 2018). Menon et al. (2019) showed that gradient clipping is robust and can mitigate label noise. Recently, gradient normalization techniques (You et al., 2017; 2019) were applied to train deep neural networks in the very-large-batch setting. For example, You et al. (2017) designed the LARS algorithm to train a ResNet-50 on ImageNet with batch size 32k, which adapts the learning rate according to the norm of the weights and the norm of the gradient. In the optimization literature, gradient clipping (normalization) was used in the early days of convex optimization (Ermoliev, 1988; Alber et al., 1998; Shor, 2012). Nesterov (1984) and Hazan et al. (2015) considered normalized gradient descent for quasi-convex functions in the deterministic and stochastic cases, respectively. Gorbunov et al. (2020) designed an accelerated gradient clipping method to solve convex optimization problems with heavy-tailed noise in the stochastic gradients. Mai & Johansson (2021) established the stability and convergence of stochastic gradient clipping algorithms for convex and weakly convex functions. In nonconvex optimization, Levy (2016) showed that normalized gradient descent can escape saddle points. Cutkosky & Mehta (2020) showed that adding momentum provably improves normalized SGD in nonconvex optimization. Zhang et al. (2019) and Zhang et al. (2020) analyzed gradient clipping for nonconvex optimization under the relaxed smoothness condition rather than the traditional L-smoothness condition (Ghadimi & Lan, 2013).
However, all of these works consider only the single-machine setting or the naive parallel setting, and none of them apply to the FL setting, where data on different nodes is heterogeneous and only limited communication is allowed. Communication-Efficient Algorithms in Distributed and Federated Learning In large-scale machine learning, people usually train their model using first-order methods on multiple machines, and these machines communicate and aggregate their model parameters periodically. When the function is convex, there is a scheme named one-shot averaging (Zinkevich et al., 2010; McDonald et al., 2010; Zhang et al., 2013; Shamir & Srebro, 2014), in which every machine runs a stochastic approximation algorithm and the model weights are averaged across machines only at the very last iteration. The one-shot averaging scheme is communication-efficient and enjoys statistical convergence with one pass over the data (Zhang et al., 2013; Shamir & Srebro, 2014; Jain et al., 2017; Koloskova et al., 2019), but the training error may not converge in practice. McMahan et al. (2017) considered the Federated Learning setting, where the data is decentralized and might be non-i.i.d. across devices and communication is expensive. McMahan et al. (2017) designed the very first algorithm for FL (a.k.a. FedAvg), which is communication-efficient since every node communicates with other nodes infrequently. Stich (2018) considered a concrete case of FedAvg, namely local SGD, which runs SGD independently in parallel on different workers and averages the model parameters only once in a while. Stich (2018) also showed that local SGD enjoys linear speedup for strongly convex objective functions. There are also works analyzing local SGD and its variants on convex (Dieuleveut & Patel, 2019; Khaled et al., 2020; Karimireddy et al., 2020; Woodworth et al., 2020a;b; Gorbunov et al., 2021; Yuan et al.
, 2021) and nonconvex smooth functions (Zhou & Cong, 2017; Yu et al., 2019a;b; Jiang & Agrawal, 2018; Wang & Joshi, 2018; Lin et al., 2018; Basu et al., 2019; Haddadpour et al., 2019; Karimireddy et al., 2020). Recently, Woodworth et al. (2020a;b) analyzed the advantages and drawbacks of local SGD compared with minibatch SGD for convex objectives. Woodworth et al. (2021) proved hardness results for distributed stochastic convex optimization. Reddi et al. (2021) introduced a general framework of federated optimization and designed several federated versions of adaptive optimizers. Zhang et al. (2021) considered employing gradient clipping to optimize L-smooth functions while achieving differential privacy. Due to the vast literature on FL and limited space, we refer readers to (Kairouz et al., 2019) and the references therein. However, all of these works assume the objective function is either convex or L-smooth. To the best of our knowledge, our algorithm is the first communication-efficient algorithm that does not rely on these assumptions but still enjoys linear speedup. | Gradient clipping is an important technique in training deep neural networks. Typically, one needs to use the globally averaged gradient to estimate the norm. However, this requires gradient synchronization at every iteration, which is not practical in federated learning. In practice, practitioners apply gradient clipping only to the local iterates, while the theoretical analysis has been lacking. This paper analyzes the convergence of local gradient clipping. The theoretical results show that convergence is guaranteed when both the number of workers and the number of local iterations are not too large. | SP:006a3aabe6e2d03798d9b441665ef1d1d30a7e1e
A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks | O(1/(Nϵ⁴)) iteration complexity for finding an ϵ-stationary point, where N is the number of machines. This indicates that our algorithm enjoys linear speedup. Our experiments on several benchmark datasets and various scenarios demonstrate that our algorithm indeed exhibits fast convergence in practice and validate our theory. 1 INTRODUCTION. Deep learning has achieved tremendous successes in many domains, including computer vision (Krizhevsky et al., 2012; He et al., 2016), natural language processing (Devlin et al., 2018), and games (Silver et al., 2016). To obtain good empirical performance, people usually need to train large models on a huge amount of data, which is very computationally expensive. To speed up the training process, distributed training becomes indispensable (Dean et al., 2012). For example, Goyal et al. (2017) trained a ResNet-50 on the ImageNet dataset by distributed SGD with minibatch size 8192 on 256 GPUs in only one hour, which not only matches the small-minibatch accuracy but also enjoys parallel speedup, and hence improves the running time. Recently, there has been increasing interest in a variant of distributed learning, namely Federated Learning (FL) (McMahan et al., 2017), which focuses on the case where the training data is non-i.i.d. across devices and only limited communication is allowed. McMahan et al. (2017) proposed an algorithm named Federated Averaging, which runs multiple steps of SGD on each client before communicating with other clients. Despite the empirical success of distributed SGD and its variants (e.g., Federated Averaging) in deep learning, they may not exhibit good performance when training some neural networks (e.g., Recurrent Neural Networks, LSTMs), due to the exploding gradient problem (Pascanu et al., 2012; 2013).
To address this issue, Pascanu et al. (2013) proposed to use the gradient clipping strategy, and it has become a standard technique when training language models (Gehring et al., 2017; Peters et al., 2018; Merity et al., 2018). Some recent works try to explain gradient clipping theoretically from nonconvex optimization's perspective (Zhang et al., 2019; 2020). These works are built upon an important observation made in (Zhang et al., 2019): for certain neural networks such as LSTMs, the gradient does not vary uniformly over the loss landscape (i.e., the gradient is not Lipschitz continuous with a uniform constant), and the gradient Lipschitz constant can scale linearly with respect to the gradient norm. This is referred to as the relaxed smoothness condition (i.e., (L0, L1)-smoothness defined in Definition 2), which strictly generalizes the usual smoothness condition (i.e., L-smoothness defined in Definition 1). Under the relaxed smoothness condition, Zhang et al. (2019; 2020) proved that gradient clipping enjoys polynomial-time iteration complexity for finding a first-order stationary point in the single-machine setting, and that it can be arbitrarily faster than fixed-step gradient descent. In practice, both distributed learning (or FL) and gradient clipping are important techniques for accelerating neural network training. However, the theoretical analysis of gradient clipping is restricted to the single-machine setting (Zhang et al., 2019; 2020). Hence it naturally motivates us to consider the following question: Is it possible for the gradient clipping scheme to take advantage of multiple machines and enjoy parallel speedup in training deep neural networks, with data heterogeneity across machines and limited communication? In this paper, we give an affirmative answer to the above question. Built upon the relaxed smoothness condition as in (Zhang et al.
, 2019; 2020), we design a communication-efficient distributed gradient clipping algorithm. The key characteristics of our algorithm are: (i) unlike the naive parallel gradient clipping algorithm, which requires averaging model weights and gradients from all machines at every iteration, our algorithm only aggregates weights with other machines after a certain number of local updates on each machine; (ii) our algorithm clips the gradient according to the norm of the local gradient on each machine, instead of the norm of the averaged gradients across machines as in the naive parallel version. These key features make our algorithm amenable to the FL setting, but they also make it nontrivial to establish the desired theoretical guarantees (e.g., linear speedup, reduced communication complexity). The main difficulty in the analysis lies in dealing with a nonconvex objective function, a non-Lipschitz-continuous gradient, and skipped communication rounds simultaneously. Our main contributions are summarized as follows: • We design a novel communication-efficient distributed stochastic local gradient clipping algorithm, namely CELGC, for solving a nonconvex optimization problem under the relaxed smoothness condition. The algorithm only needs to clip the gradient according to the local gradient's magnitude, and it globally averages the weights on all machines periodically. To the best of our knowledge, this is the first work proposing communication-efficient distributed stochastic gradient clipping algorithms under the relaxed smoothness condition. • Under the relaxed smoothness condition, we prove iteration and communication complexity results of our algorithm for finding an ϵ-stationary point. First, compared with (Zhang et al., 2020), we prove that our algorithm enjoys linear speedup, which means that the iteration complexity of our algorithm is reduced by a factor of N (the number of machines).
Second, compared with the naive parallel version of the algorithm of (Zhang et al., 2020), we prove that our algorithm enjoys better communication complexity. Specifically, our algorithm's communication complexity is smaller than that of the naive parallel clipping algorithm if the number of machines is not too large (i.e., N ≤ O(1/ϵ)). The detailed comparison with existing algorithms under the same relaxed smoothness condition is given in Table 1. Please refer to (Koloskova et al., 2020) for local SGD complexity results for L-smooth functions. • We empirically verify our theoretical results by conducting experiments on different neural network architectures, on benchmark datasets, and in various scenarios including small to large batch sizes, homogeneous and heterogeneous data distributions, and partial participation of machines. The experimental results demonstrate that our proposed algorithm indeed exhibits speedup in practice. Footnote 1: In this setting, we assume the gradient norm is upper bounded by M, so that the gradient is (L0 + L1M)-Lipschitz. However, we want to emphasize that the original paper (Ghadimi & Lan, 2013) does not require a bounded-gradient assumption; instead, they require an L-Lipschitz gradient and bounded variance σ². Under their assumptions, their complexity result is O(∆Lϵ⁻² + ∆Lσ²ϵ⁻⁴). Footnote 2: The naive parallel version of (Zhang et al., 2020) differs from our algorithm (CELGC) with I = 1 in that the naive version averages gradients across all machines to update the model, while CELGC updates each model using only the local gradients computed on that machine. This also means that in each iteration the naive version clips the gradient based on the globally averaged gradient, while ours clips based only on the local gradient. 2 RELATED WORK. Gradient Clipping/Normalization Algorithms In the deep learning literature, the gradient clipping (normalization) technique was initially proposed by (Pascanu et al.
, 2013) to address the exploding gradient problem identified in (Pascanu et al., 2012), and it has become a standard technique when training language models (Gehring et al., 2017; Peters et al., 2018; Merity et al., 2018). Menon et al. (2019) showed that gradient clipping is robust and can mitigate label noise. Recently, gradient normalization techniques (You et al., 2017; 2019) were applied to train deep neural networks in the very-large-batch setting. For example, You et al. (2017) designed the LARS algorithm to train a ResNet-50 on ImageNet with batch size 32k, which utilized different learning rates according to the norms of the weights and of the gradient. In the optimization literature, gradient clipping (normalization) was used early on in the field of convex optimization (Ermoliev, 1988; Alber et al., 1998; Shor, 2012). Nesterov (1984) and Hazan et al. (2015) considered normalized gradient descent for quasi-convex functions in the deterministic and stochastic cases, respectively. Gorbunov et al. (2020) designed an accelerated gradient clipping method to solve convex optimization problems with heavy-tailed noise in the stochastic gradients. Mai & Johansson (2021) established the stability and convergence of stochastic gradient clipping algorithms for convex and weakly convex functions. In nonconvex optimization, Levy (2016) showed that normalized gradient descent can escape saddle points. Cutkosky & Mehta (2020) showed that adding momentum provably improves normalized SGD in nonconvex optimization. Zhang et al. (2019) and Zhang et al. (2020) analyzed gradient clipping for nonconvex optimization under the relaxed smoothness condition rather than the traditional L-smoothness condition (Ghadimi & Lan, 2013).
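To make the clipping operation concrete, and to illustrate the local-versus-global distinction that recurs throughout this paper, here is a small sketch; the gradients are made up, and `clip_by_norm` is a generic clip-by-norm operator, not the paper's exact definition:

```python
import numpy as np

def clip_by_norm(g, gamma=1.0):
    """Rescale g to norm gamma if it exceeds the threshold, else leave it."""
    n = np.linalg.norm(g)
    return g * (gamma / n) if n > gamma else g

# Two hypothetical workers with very different local gradient magnitudes.
g_local = [np.array([3.0, 0.0]), np.array([0.0, 0.3])]

# Naive parallel clipping: average first, then clip by the GLOBAL norm.
naive = clip_by_norm(sum(g_local) / len(g_local))

# Local clipping: clip each LOCAL gradient first, then average.
local = sum(clip_by_norm(g) for g in g_local) / len(g_local)

print(naive)  # direction dominated by the large-gradient worker
print(local)  # the small worker's signal is better preserved
```

The two updates coincide only when no clipping is active; whenever local norms differ across the threshold, the resulting descent directions differ, which is why the local scheme needs its own analysis.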
However, all of them only consider the algorithm in the single-machine setting or the naive parallel setting, and none of them applies to the FL setting, where data on different nodes is heterogeneous and only limited communication is allowed. Communication-Efficient Algorithms in Distributed and Federated Learning In large-scale machine learning, people usually train their models using first-order methods on multiple machines, and these machines communicate and aggregate their model parameters periodically. When the function is convex, there is a scheme named one-shot averaging (Zinkevich et al., 2010; McDonald et al., 2010; Zhang et al., 2013; Shamir & Srebro, 2014), in which every machine runs a stochastic approximation algorithm and the model weights are averaged across machines only at the very last iteration. The one-shot averaging scheme is communication-efficient and enjoys statistical convergence with one pass over the data (Zhang et al., 2013; Shamir & Srebro, 2014; Jain et al., 2017; Koloskova et al., 2019), but the training error may not converge in practice. McMahan et al. (2017) considered the Federated Learning setting, where the data is decentralized and might be non-i.i.d. across devices and communication is expensive. McMahan et al. (2017) designed the very first algorithm for FL (a.k.a. FedAvg), which is communication-efficient since every node communicates with other nodes infrequently. Stich (2018) considered a concrete case of FedAvg, namely local SGD, which runs SGD independently in parallel on different workers and averages the model parameters only once in a while. Stich (2018) also showed that local SGD enjoys linear speedup for strongly-convex objective functions. There are also works analyzing local SGD and its variants on convex (Dieuleveut & Patel, 2019; Khaled et al., 2020; Karimireddy et al., 2020; Woodworth et al., 2020a;b; Gorbunov et al., 2021; Yuan et al.
, 2021) and nonconvex smooth functions (Zhou & Cong, 2017; Yu et al., 2019a;b; Jiang & Agrawal, 2018; Wang & Joshi, 2018; Lin et al., 2018; Basu et al., 2019; Haddadpour et al., 2019; Karimireddy et al., 2020). Recently, Woodworth et al. (2020a;b) analyzed the advantages and drawbacks of local SGD compared with minibatch SGD for convex objectives. Woodworth et al. (2021) proved hardness results for distributed stochastic convex optimization. Reddi et al. (2021) introduced a general framework for federated optimization and designed several federated versions of adaptive optimizers. Zhang et al. (2021) considered employing gradient clipping to optimize L-smooth functions and achieve differential privacy. Due to the vast literature on FL and limited space, we refer readers to (Kairouz et al., 2019) and references therein. However, all of these works assume the objective function is either convex or L-smooth. To the best of our knowledge, our algorithm is the first communication-efficient algorithm that does not rely on these assumptions but still enjoys linear speedup. | The paper proposes a new variant of SGD with clipping and infrequent communication (local updates), called Communication-Efficient Local Gradient Clipping (CELGC), aimed at solving non-convex federated learning problems under the generalized smoothness ($(L_0, L_1)$-smoothness) assumption. The authors derive ergodic convergence guarantees for the convergence of CELGC to a first-order $\epsilon$-stationary point, assuming additionally that the noise in stochastic gradients is bounded with probability $1$ and that the heterogeneity of the local data on clients is also bounded. Although the authors claim that their result shows that CELGC achieves linear speed-up and has better communication complexity than the naive version of parallel Clipped SGD, the proofs contain several significant inaccuracies, making the main result of the paper incorrect in general.
Moreover, several assumptions about the parameters, such as the number of workers $N$ and the number of local steps between two consecutive communication rounds $I$, are restrictive. | SP:006a3aabe6e2d03798d9b441665ef1d1d30a7e1e
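For intuition about the scheme the review describes (local clipped-SGD steps interleaved with periodic weight averaging), here is a toy simulation; the quadratic objectives, step size, clipping threshold, and averaging interval are illustrative placeholders, not the paper's experimental setup:

```python
import numpy as np

def clip(g, gamma):
    """Rescale g to norm gamma if it is longer (clipping by norm)."""
    n = np.linalg.norm(g)
    return g * (gamma / n) if n > gamma else g

def celgc(grads, w0, eta=0.05, gamma=3.0, I=5, rounds=60):
    """Each of the N workers runs I local clipped-gradient steps using only
    its LOCAL gradient norm, then all workers average their weights
    (one communication round)."""
    ws = [w0.copy() for _ in range(len(grads))]
    for _ in range(rounds):
        for _ in range(I):                    # local updates, no communication
            for i, g in enumerate(grads):
                ws[i] -= eta * clip(g(ws[i]), gamma)
        avg = sum(ws) / len(ws)               # periodic weight averaging
        ws = [avg.copy() for _ in ws]
    return ws[0]

# Two workers with heterogeneous quadratics f_i(w) = 0.5 * ||w - c_i||^2;
# the global minimizer is the mean of the centers, [0, 2].
centers = [np.array([2.0, 0.0]), np.array([-2.0, 4.0])]
grads = [lambda w, c=c: w - c for c in centers]
w = celgc(grads, np.array([10.0, -10.0]))
```

With heterogeneous local objectives, the periodic averaging is what keeps the workers' iterates from drifting toward their individual minimizers; clipping is active far from the solution and inactive near it in this toy setup.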
Do Androids Dream of Electric Fences? Safety-Aware Reinforcement Learning with Latent Shielding | The growing trend of fledgling reinforcement learning systems making their way into real-world applications has been accompanied by mounting concerns for their safety and robustness. In recent years, a variety of approaches have been put forward to address the challenges of safety-aware reinforcement learning; however, these methods often require either that a handcrafted model of the environment be provided beforehand, or that the environment be relatively simple and low-dimensional. We present a novel approach to safety-aware deep reinforcement learning in high-dimensional environments called latent shielding. Latent shielding leverages internal representations of the environment learnt by model-based agents to “imagine” future trajectories and avoid those deemed unsafe. We experimentally demonstrate that this approach leads to improved adherence to formally-defined safety specifications. 1 INTRODUCTION. The steady trickle of reinforcement learning (RL) systems making their way out of the lab and into the real world has cast a spotlight on the safety and robustness of RL agents. The motivation behind this should be relatively easy to grasp: when training an agent in real-world settings, it is desirable that some states are never reached, as they could, for instance, cause permanent damage to the hardware the agent is controlling. We can thus informally define the notion of safety-aware RL in terms of the classical RL setup with the added requirement that the number of unsafe states visited be minimised. Under this definition, however, it has been found that many state-of-the-art RL algorithms unnecessarily enter unsafe states, despite safe alternatives being available and there being a positive correlation between avoiding such states and reward (Giacobbe et al., 2021).
The field of safety-aware RL encompasses a multitude of approaches, ranging from constrained policy optimisation (Chow et al., 2017; Achiam et al., 2017; Yang et al., 2020) to safety critics (Srinivasan et al., 2020; Bharadhwaj et al., 2021; Thananjeyan et al., 2021) to meta-learning (Turchetta et al., 2020). In this work, we focus on a particular family of approaches known as shielding (Alshiekh et al., 2018; Anderson et al., 2020; Giacobbe et al., 2021; ElSayed-Aly et al., 2021; Pranger et al., 2021). Central to shielding is the notion of a shield, a filter that checks actions proposed by the agent's existing policy with reference to a model of the environment's dynamics and some formal safety specification. The shield overrides actions that may lead to an unsafe state using some other safe (but by no means optimal) policy. A key advantage of many shielding approaches is that the resulting shielded policies are formally verifiable; however, a shortcoming is that they require a model of environmental dynamics - typically handcrafted - to be provided in advance. Providing such a model may prove difficult for complex real-world environments, with inaccuracies and human biases creeping into handcrafted models. In this work, we propose a safe RL agent that makes use of latent shielding, an approach to shielding in environments where a formally-specified dynamics model is not available in advance. At an intuitive level, the agent uses a data-driven approach to learn its own latent world model (a component of which is a dynamics model), which is then leveraged by a shield. The shield uses the agent's model to “imagine” trajectories arising from different actions, forcing the agent to avoid those it foresees leading to unsafe states. In addition, the agent can be trained within its own latent world model, thus reducing the number of safety violations seen during training.
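The shield's control flow described above can be sketched generically; everything here (the one-step model, the unsafe-state predicate, the backup policy, and the choice to continue rollouts with the backup policy) is a placeholder illustrating the idea, not the paper's implementation:

```python
def shielded_action(state, proposed_action, step_model, is_unsafe,
                    backup_policy, horizon=5):
    """Return the proposed action unless a rollout of the (learned or
    handcrafted) dynamics model reaches an unsafe state within the
    look-ahead horizon; in that case fall back to the safe policy."""
    s, a = state, proposed_action
    for _ in range(horizon):
        s = step_model(s, a)             # imagine one step ahead
        if is_unsafe(s):
            return backup_policy(state)  # override the unsafe proposal
        a = backup_policy(s)             # continue the rollout safely
    return proposed_action

# Toy 1-D corridor: positions beyond 8 are unsafe; "stay put" (action 0)
# is assumed safe for the backup policy.
step = lambda s, a: s + a
unsafe = lambda s: s > 8
backup = lambda s: 0
print(shielded_action(5, 1, step, unsafe, backup))  # proposal kept
print(shielded_action(8, 1, step, unsafe, backup))  # proposal overridden
```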
Contributions The main contribution of this work is a framework for shielding agents in complex, stochastic, and high-dimensional environments without a priori knowledge of environmental dynamics. We further introduce a new method to aid exploration when training shielded agents. Though our framework loses the formal safety guarantees associated with traditional symbolic shielding approaches, our experiments illustrate that latent shielding reduces unsafe behaviour during training and achieves testing performance comparable to previous symbolic approaches. 2 PRELIMINARIES. In this section, we cover some relevant background topics. We begin by introducing our problem setup for safety-aware RL and give an overview of the specification language used in this work. This is followed by an outline of the latent world model we make use of, as well as a discussion of shielding. 2.1 PROBLEM SETUP. We consider an agent interacting with an environment E modelled as a partially observable Markov decision process (POMDP) with states s ∈ SE, observations ot ∈ OE, agent-generated actions at ∈ AE, and scalar rewards rt ∈ R over discrete time steps t ∈ [0, 1, ..., T − 1]. We assume the environment has been augmented with a labelling function LφE : SE → {safe, unsafe} that, at each time step, informs us whether a violation has occurred with respect to some formal safety specification φ. For the avoidance of doubt, we define a violation to have occurred whenever φ does not hold. This is a weaker assumption than in previous works on shielding (which assume access to an abstraction of the environment), and the labelling function can be thought of as a secondary safety-focused reward function with a binary output. Intuitively, the goal of the agent is to learn a policy π that maximises its expected cumulative reward while minimising the number of violations of φ. 2.2 SYNTACTICALLY CO-SAFE LINEAR TEMPORAL LOGIC.
In this work, we use syntactically co-safe Linear Temporal Logic (scLTL) (Kupferman & Vardi, 2001) as our specification language. Valid scLTL formulae over some set of atomic propositions AP can be constructed according to the following grammar: φ ::= true | d | ¬d | φ ∨ φ | φ ∧ φ | ©φ | φ ∪ φ | ◊φ (1) where d ∈ AP; ¬ (negation), ∨ (disjunction), and ∧ (conjunction) are the familiar operators from propositional logic; and © (next), ∪ (until), and ◊ (eventually) are temporal operators. We can monitor a co-safe LTL specification using a technique known as progression (Bacchus & Kabanza, 2000). 2.3 RECURRENT STATE-SPACE MODELS. We refer to the predictive model of an environment maintained by a model-based agent as its world model. World models can be learnt from experience and be used both as a substitute for the environment during training (Ha & Schmidhuber, 2018; Hafner et al., 2021) and for planning at run-time (Hafner et al., 2019b). Though many realisations of the notion of a world model exist, the world model used in this work is based on the recurrent state-space model (RSSM) proposed by Hafner et al. (2019b). An RSSM is composed of three key components: a latent dynamics model, a reward model, and an observation model. These components act on compact states formed from the concatenation of a deterministic latent state ht and a stochastic latent state zt. Latent Dynamics Model The latent dynamics model is made up of a number of smaller models. First, the recurrent model ht = f(ht−1, zt−1, at−1) is used to compute the deterministic latent state based on the previous compact state and action. From ht and the current observation ot, a distribution q(zt|ht, ot) over posterior stochastic latent states zt is computed by the representation model. At the same time, a distribution p(ẑt|ht) over prior stochastic latent states ẑt is computed by the transition model, based only on ht.
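One prediction step of the latent dynamics components just described can be sketched with placeholder linear maps standing in for the neural networks; the sizes, random weights, and nonlinearities here are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
H, Z, A, O = 8, 4, 2, 6                       # latent/action/observation sizes
Wf = rng.normal(size=(H, H + Z + A)) * 0.1    # recurrent model weights
Wp = rng.normal(size=(2 * Z, H)) * 0.1        # transition (prior) head
Wq = rng.normal(size=(2 * Z, H + O)) * 0.1    # representation (posterior) head

def rssm_step(h, z, a, o):
    """h_t = f(h_{t-1}, z_{t-1}, a_{t-1}); the prior p(z_t | h_t) depends on
    h_t only, while the posterior q(z_t | h_t, o_t) also sees the observation."""
    h = np.tanh(Wf @ np.concatenate([h, z, a]))
    prior_mu, prior_logstd = np.split(Wp @ h, 2)
    post_mu, post_logstd = np.split(Wq @ np.concatenate([h, o]), 2)
    z = post_mu + np.exp(post_logstd) * rng.normal(size=Z)  # sample posterior
    return h, z, (prior_mu, prior_logstd), (post_mu, post_logstd)

h, z, prior, post = rssm_step(np.zeros(H), np.zeros(Z), np.zeros(A), np.ones(O))
```

During imagination (no observations available), one would instead sample z from the prior head, which is exactly what lets the shield roll trajectories forward without interacting with the environment.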
During training, the transition model attempts to minimise the Kullback-Leibler (KL) divergence between the prior and posterior stochastic latent state distributions. In doing this, the RSSM learns to predict future latent states (using the recurrent and transition models) without access to future observations. Observation Model The observation model computes the distribution p(ôt|ht, zt) over observations ôt for a particular state. Though not strictly needed, the observation model can prove useful for visualising predicted future states and providing a richer training signal. Reward Model The reward model computes the distribution p(r̂t|ht, zt) over rewards r̂t for a particular state. In practice, the distributions p and q are implemented with neural networks pθ and qθ respectively, parameterised by some set of parameters θ. These latent dynamics models define a fully-observable Markov decision process (MDP), as the latent states in the agent's own internal model can always be observed by the agent (Hafner et al., 2019a). We denote the state space of this MDP (comprised of compact latent states) as SI. 2.4 SHIELDING. The classical formulation of shielding in RL is given by Alshiekh et al. (2018). It assumes access to two ingredients: an LTL safety specification and an abstraction (an MDP model of the environment that captures the aspects of the environment relevant for planning ahead with respect to the safety specification). These ingredients are used to construct a formally verifiable reactive system that monitors the agent's actions, overriding those which lead to violation states. Proposed by Giacobbe et al. (2021), bounded prescience shielding (BPS) avoids the need for handcrafted abstractions by exploiting the fact that some agents are trained in computer simulations. The shield operates by leveraging access to the program underlying the simulation to look ahead into future states within some finite horizon.
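The bounded look-ahead at the heart of BPS can be sketched as an exhaustive search over action sequences; the toy chain environment and the simplified horizon semantics below are illustrative assumptions, not the BPS implementation:

```python
from itertools import product

def safe_within_horizon(s0, a0, step, actions, is_unsafe, H=3):
    """Return True if taking a0 from s0 admits SOME continuation of
    H - 1 further actions that avoids unsafe states (a rough analogue
    of checking a0 against H-bounded safety)."""
    s1 = step(s0, a0)
    if is_unsafe(s1):
        return False
    for seq in product(actions, repeat=H - 1):
        s, ok = s1, True
        for a in seq:
            s = step(s, a)
            if is_unsafe(s):
                ok = False
                break
        if ok:
            return True
    return False

# Toy chain: moving right past position 3 is unsafe.
step = lambda s, a: s + a
print(safe_within_horizon(0, 1, step, (-1, 0, 1), lambda s: s > 3))
print(safe_within_horizon(3, 1, step, (-1, 0, 1), lambda s: s > 3))
```

The exhaustive enumeration makes the exponential cost in the horizon explicit, which is one reason BPS needs the horizon to stay small and why simulation access can be expensive.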
Using BPS over classical shielding does, however, come with a few disadvantages. Firstly, it requires access to the simulation at run-time, which may prove difficult to provide (especially in cases where running the simulation is computationally expensive). Moreover, an agent using BPS, even when starting from a safe state, can find itself entering unsafe states in cases where the number of steps between a violation being caused by an action and the violation state itself exceeds the shield's look-ahead horizon. This is not the case for classical shielding, which resembles BPS with an infinite horizon. 2.5 BOUNDED SAFETY. The notion of safety used by BPS is defined over MDPs. For an arbitrary MDP with states S and actions A, a bounded trajectory ρ of length H is a sequence of states and actions s0 →^{a0} s1 →^{a1} ... →^{a_{n−1}} sn comprised of no more than H states and with the final state sn either being a terminal state or n = H − 1. We further denote the set of all finite trajectories starting from some arbitrary state s ∈ S by ϱ(s) and the set of all bounded trajectories of length H that start from s by ϱH(s). We say a bounded trajectory ρ of length H satisfies H-bounded safety with respect to safety specification φ, written SH(ρ, φ), if and only if for all si ∈ ρ, LφE(si) = safe. Moreover, we can extend the notion of H-bounded safety to policies: a policy π is H-bounded safe with respect to φ, denoted SH(π, φ), if and only if for all s ∈ S, • either there exists some ρ ∈ ϱH(s) such that SH(ρ, φ) and π(s0) = a0; • or for all ρ ∈ ϱH(s), ¬SH(ρ, φ). In other words, the policy will choose a safe trajectory as long as one exists. Finally, we formally define a violation of φ to be inevitable in state s0 ∈ S if and only if for all ρ ∈ ϱ(s0), ¬SH(ρ, φ). | The paper introduces a shielding technique for RL safety into world-model RL agents, like the Dreamer model.
The shielding technique works in the latent space and is trained from violation information only, i.e., without hand-crafted safety rules. An experimental evaluation is performed on two environments and compared against the unshielded version and BPS from the literature. | SP:a2ced98b09570a5ab17565a2f6223fd198eddbdd
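As an aside, the progression technique mentioned in Section 2.2 of the paper above can be sketched for a small scLTL fragment; the tuple encoding of formulae and the simplification rules are assumptions of this sketch, not the paper's implementation:

```python
TRUE, FALSE = ('true',), ('false',)

def prog(phi, labels):
    """Progress an scLTL formula through one state, where labels is the set
    of atomic propositions holding there; TRUE means the spec is satisfied."""
    op = phi[0]
    if op in ('true', 'false'):
        return phi
    if op == 'ap':
        return TRUE if phi[1] in labels else FALSE
    if op == 'not_ap':
        return FALSE if phi[1] in labels else TRUE
    if op == 'and':
        l, r = prog(phi[1], labels), prog(phi[2], labels)
        if FALSE in (l, r): return FALSE
        if l == TRUE: return r
        if r == TRUE: return l
        return ('and', l, r)
    if op == 'or':
        l, r = prog(phi[1], labels), prog(phi[2], labels)
        if TRUE in (l, r): return TRUE
        if l == FALSE: return r
        if r == FALSE: return l
        return ('or', l, r)
    if op == 'next':
        return phi[1]
    if op == 'ev':                       # <>f == f or X<>f
        f = prog(phi[1], labels)
        return TRUE if f == TRUE else (phi if f == FALSE else ('or', f, phi))
    if op == 'until':                    # f U g == g or (f and X(f U g))
        g = prog(phi[2], labels)
        if g == TRUE: return TRUE
        f = prog(phi[1], labels)
        if f == FALSE: return g
        rest = phi if f == TRUE else ('and', f, phi)
        return rest if g == FALSE else ('or', g, rest)

# Monitor <>goal over a trace of per-step label sets.
spec = ('ev', ('ap', 'goal'))
for labels in [set(), {'moving'}, {'goal'}]:
    spec = prog(spec, labels)
print(spec)  # ('true',) -- the co-safe spec was satisfied on this trace
```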
Do Androids Dream of Electric Fences? Safety-Aware Reinforcement Learning with Latent Shielding | The growing trend of fledgling reinforcement learning systems making their way into real-world applications has been accompanied by mounting concerns for their safety and robustness. In recent years, a variety of approaches have been put forward to address the challenges of safety-aware reinforcement learning; however, these methods often require either that a handcrafted model of the environment be provided beforehand, or that the environment be relatively simple and low-dimensional. We present a novel approach to safety-aware deep reinforcement learning in high-dimensional environments called latent shielding. Latent shielding leverages internal representations of the environment learnt by model-based agents to “imagine” future trajectories and avoid those deemed unsafe. We experimentally demonstrate that this approach leads to improved adherence to formally-defined safety specifications. 1 INTRODUCTION. The steady trickle of reinforcement learning (RL) systems making their way out of the lab and into the real world has cast a spotlight on the safety and robustness of RL agents. The motivation behind this should be relatively easy to grasp: when training an agent in real-world settings, it is desirable that some states are never reached, as they could, for instance, cause permanent damage to the hardware the agent is controlling. We can thus informally define the notion of safety-aware RL in terms of the classical RL setup with the added requirement that the number of unsafe states visited be minimised. Under this definition, however, it has been found that many state-of-the-art RL algorithms unnecessarily enter unsafe states, despite safe alternatives being available and there being a positive correlation between avoiding such states and reward (Giacobbe et al., 2021).
The field of safety-aware RL encompasses a multitude of approaches, ranging from constrained policy optimisation (Chow et al., 2017; Achiam et al., 2017; Yang et al., 2020) to safety critics (Srinivasan et al., 2020; Bharadhwaj et al., 2021; Thananjeyan et al., 2021) to meta-learning (Turchetta et al., 2020). In this work, we focus on a particular family of approaches known as shielding (Alshiekh et al., 2018; Anderson et al., 2020; Giacobbe et al., 2021; ElSayed-Aly et al., 2021; Pranger et al., 2021). Central to shielding is the notion of a shield, a filter that checks actions proposed by the agent's existing policy with reference to a model of the environment's dynamics and some formal safety specification. The shield overrides actions that may lead to an unsafe state using some other safe (but by no means optimal) policy. A key advantage of many shielding approaches is that the resulting shielded policies are formally verifiable; however, a shortcoming is that they require a model of environmental dynamics - typically handcrafted - to be provided in advance. Providing such a model may prove difficult for complex real-world environments, with inaccuracies and human biases creeping into handcrafted models. In this work, we propose a safe RL agent that makes use of latent shielding, an approach to shielding in environments where a formally-specified dynamics model is not available in advance. At an intuitive level, the agent uses a data-driven approach to learn its own latent world model (a component of which is a dynamics model), which is then leveraged by a shield. The shield uses the agent's model to “imagine” trajectories arising from different actions, forcing the agent to avoid those it foresees leading to unsafe states. In addition, the agent can be trained within its own latent world model, thus reducing the number of safety violations seen during training.
Contributions The main contribution of this work is a framework for shielding agents in complex, stochastic, and high-dimensional environments without a priori knowledge of environmental dynamics. We further introduce a new method to aid exploration when training shielded agents. Though our framework loses the formal safety guarantees associated with traditional symbolic shielding approaches, our experiments illustrate that latent shielding reduces unsafe behaviour during training and achieves testing performance comparable to previous symbolic approaches. 2 PRELIMINARIES. In this section, we cover some relevant background topics. We begin by introducing our problem setup for safety-aware RL and give an overview of the specification language used in this work. This is followed by an outline of the latent world model we make use of, as well as a discussion of shielding. 2.1 PROBLEM SETUP. We consider an agent interacting with an environment E modelled as a partially observable Markov decision process (POMDP) with states s ∈ SE, observations ot ∈ OE, agent-generated actions at ∈ AE, and scalar rewards rt ∈ R over discrete time steps t ∈ [0, 1, ..., T − 1]. We assume the environment has been augmented with a labelling function LφE : SE → {safe, unsafe} that, at each time step, informs us whether a violation has occurred with respect to some formal safety specification φ. For the avoidance of doubt, we define a violation to have occurred whenever φ does not hold. This is a weaker assumption than in previous works on shielding (which assume access to an abstraction of the environment), and the labelling function can be thought of as a secondary safety-focused reward function with a binary output. Intuitively, the goal of the agent is to learn a policy π that maximises its expected cumulative reward while minimising the number of violations of φ. 2.2 SYNTACTICALLY CO-SAFE LINEAR TEMPORAL LOGIC.
In this work, we use syntactically co-safe Linear Temporal Logic (scLTL) (Kupferman & Vardi, 2001) as our specification language. Valid scLTL formulae over some set of atomic propositions AP can be constructed according to the following grammar: φ ::= true | d | ¬d | φ ∨ φ | φ ∧ φ | ©φ | φ ∪ φ | ◊φ (1) where d ∈ AP; ¬ (negation), ∨ (disjunction), and ∧ (conjunction) are the familiar operators from propositional logic; and © (next), ∪ (until), and ◊ (eventually) are temporal operators. We can monitor a co-safe LTL specification using a technique known as progression (Bacchus & Kabanza, 2000). 2.3 RECURRENT STATE-SPACE MODELS. We refer to the predictive model of an environment maintained by a model-based agent as its world model. World models can be learnt from experience and be used both as a substitute for the environment during training (Ha & Schmidhuber, 2018; Hafner et al., 2021) and for planning at run-time (Hafner et al., 2019b). Though many realisations of the notion of a world model exist, the world model used in this work is based on the recurrent state-space model (RSSM) proposed by Hafner et al. (2019b). An RSSM is composed of three key components: a latent dynamics model, a reward model, and an observation model. These components act on compact states formed from the concatenation of a deterministic latent state ht and a stochastic latent state zt. Latent Dynamics Model The latent dynamics model is made up of a number of smaller models. First, the recurrent model ht = f(ht−1, zt−1, at−1) is used to compute the deterministic latent state based on the previous compact state and action. From ht and the current observation ot, a distribution q(zt|ht, ot) over posterior stochastic latent states zt is computed by the representation model. At the same time, a distribution p(ẑt|ht) over prior stochastic latent states ẑt is computed by the transition model, based only on ht.
During training, the transition model attempts to minimise the Kullback-Leibler (KL) divergence between the prior and posterior stochastic latent state distributions. In doing this, the RSSM learns to predict future latent states (using the recurrent and transition models) without access to future observations. Observation Model The observation model computes the distribution p(ôt|ht, zt) over observations ôt for a particular state. Though not strictly needed, the observation model can prove useful for visualising predicted future states and providing a richer training signal. Reward Model The reward model computes the distribution p(r̂t|ht, zt) over rewards r̂t for a particular state. In practice, the distributions p and q are implemented with neural networks pθ and qθ respectively, parameterised by some set of parameters θ. These latent dynamics models define a fully-observable Markov decision process (MDP), as the latent states in the agent's own internal model can always be observed by the agent (Hafner et al., 2019a). We denote the state space of this MDP (comprised of compact latent states) as SI. 2.4 SHIELDING. The classical formulation of shielding in RL is given by Alshiekh et al. (2018). It assumes access to two ingredients: an LTL safety specification and an abstraction (an MDP model of the environment that captures the aspects of the environment relevant for planning ahead with respect to the safety specification). These ingredients are used to construct a formally verifiable reactive system that monitors the agent's actions, overriding those which lead to violation states. Proposed by Giacobbe et al. (2021), bounded prescience shielding (BPS) avoids the need for handcrafted abstractions by exploiting the fact that some agents are trained in computer simulations. The shield operates by leveraging access to the program underlying the simulation to look ahead into future states within some finite horizon.
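The KL term that trains the transition model, described at the start of this passage, has a closed form when the prior and posterior are diagonal Gaussians (as is common in RSSM-style models); the moments below are made up for illustration:

```python
import numpy as np

def kl_diag_gauss(mu_q, std_q, mu_p, std_p):
    """KL( N(mu_q, diag std_q^2) || N(mu_p, diag std_p^2) ), summed over dims:
    sum of log(std_p/std_q) + (std_q^2 + (mu_q - mu_p)^2) / (2 std_p^2) - 1/2."""
    var_q, var_p = std_q**2, std_p**2
    return np.sum(np.log(std_p / std_q)
                  + (var_q + (mu_q - mu_p)**2) / (2 * var_p) - 0.5)

# Posterior q(z|h,o) vs prior p(z|h), with made-up moments.
mu_q, std_q = np.array([0.5, -0.2]), np.array([0.8, 1.0])
mu_p, std_p = np.array([0.0,  0.0]), np.array([1.0, 1.0])
print(kl_diag_gauss(mu_q, std_q, mu_p, std_p))  # small positive number
print(kl_diag_gauss(mu_p, std_p, mu_p, std_p))  # 0.0 for identical moments
```

Minimising this quantity pulls the observation-free prior toward the observation-conditioned posterior, which is what makes purely imagined rollouts trustworthy enough for shielding.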
Using BPS over classical shielding does , however , come with a few disadvantages . Firstly , it requires access to the simulation at run-time which may prove difficult to provide ( especially in cases where running the simulation is computationally expensive ) . Moreover , an agent using BPS , even when starting from a safe state , can find itself entering unsafe states in cases where the number of steps between a violation being caused by an action and the violation state itself exceeds the shield ’ s look-ahead horizon . This is not the case for classical shielding which resembles BPS with an infinite horizon . 2.5 BOUNDED SAFETY . The notion of safety used by BPS is defined over MDPs . For an arbitrary MDP with states S and actions A , a bounded trajectory ρ of length H is a sequence of states and actions s0 a0−→ s1 a1−→ . . . an−1−−−→ sn comprised of no more than H states and with the final state sn either being a terminal state or n = H − 1 . We further denote the set of all finite trajectories starting from some arbitrary state s ∈ S by % ( s ) and the set of all bounded trajectories of length H that start from s by % H ( s ) . We say a bounded trajectory ρ of length H satisfies H-bounded safety with respect to safety specification φ , written SH ( ρ , φ ) , if and only if for all si ∈ ρ , LφE ( si ) = safe . Moreover , we can extend the notion of H-bounded safety over the set of policies : a policy π is H-bounded safe with respect to φ , denoted as SH ( π , φ ) , if and only if for all s ∈ S , • either there exists some ρ ∈ % H ( s ) such that SH ( ρ , φ ) and π ( s0 ) = a0 ; • or for all ρ ∈ % H ( s ) , ¬SH ( ρ , φ ) . In other words , the policy will choose a safe trajectory as long as one exists . Finally , we formally define a violation of φ to be inevitable in state s0 ∈ S if and only if for all ρ ∈ % ( s0 ) , ¬SH ( ρ , φ ) . | This paper proposes a safe Reinforcement Learning approach using Latent Shielding. 
The agent learns a data-driven recurrent state-space model representing a world model, based on an existing approach (Hafner et al., 2019b). The learned latent world model allows agents to foresee various trajectories arising from different actions and to avoid unsafe states. In their setting, the environment also provides a binary labelling over states indicating whether a violation occurred at each step, formulated in syntactically co-safe Linear Temporal Logic (scLTL) (Kupferman & Vardi, 2001). The proposed approach is evaluated on two environments, (1) Visual Grid World and (2) Cliff driver, and compared against a Dreamer agent with no shielding (Hafner et al., 2019a; 2021) and a Dreamer agent with a Bounded Prescience Shield (Giacobbe et al., 2021). | SP:a2ced98b09570a5ab17565a2f6223fd198eddbdd |
Do Androids Dream of Electric Fences? Safety-Aware Reinforcement Learning with Latent Shielding | The growing trend of fledgling reinforcement learning systems making their way into real-world applications has been accompanied by growing concerns for their safety and robustness. In recent years, a variety of approaches have been put forward to address the challenges of safety-aware reinforcement learning; however, these methods often either require a handcrafted model of the environment to be provided beforehand, or require that the environment is relatively simple and low-dimensional. We present a novel approach to safety-aware deep reinforcement learning in high-dimensional environments called latent shielding. Latent shielding leverages internal representations of the environment learnt by model-based agents to "imagine" future trajectories and avoid those deemed unsafe. We experimentally demonstrate that this approach leads to improved adherence to formally-defined safety specifications. 1 INTRODUCTION. The steady trickle of reinforcement learning (RL) systems making their way out of the lab and into the real world has cast a spotlight on the safety and robustness of RL agents. The motivation behind this should be relatively easy to grasp: when training an agent in real-world settings, it is desirable that some states are never reached, as they could, for instance, cause permanent damage to the hardware the agent is controlling. We can thus informally define the notion of safety-aware RL in terms of the classical RL setup with the added requirement that the number of unsafe states visited be minimised. Under this definition, however, it has been found that many state-of-the-art RL algorithms unnecessarily enter unsafe states despite safe alternatives being available and there being a positive correlation between avoiding such states and reward (Giacobbe et al., 2021).
The field of safety-aware RL encompasses a multitude of approaches, ranging from constrained policy optimisation (Chow et al., 2017; Achiam et al., 2017; Yang et al., 2020) to safety critics (Srinivasan et al., 2020; Bharadhwaj et al., 2021; Thananjeyan et al., 2021) to meta-learning (Turchetta et al., 2020). In this work, we focus on a particular family of approaches known as shielding (Alshiekh et al., 2018; Anderson et al., 2020; Giacobbe et al., 2021; ElSayed-Aly et al., 2021; Pranger et al., 2021). Central to shielding is the notion of a shield, a filter that checks actions proposed by the agent's existing policy with reference to a model of the environment's dynamics and some formal safety specification. The shield overrides actions that may lead to an unsafe state using some other safe (but by no means optimal) policy. A key advantage of many shielding approaches is that the resulting shielded policies are formally verifiable; however, a shortcoming is that they require a model of environmental dynamics - typically handcrafted - to be provided in advance. Providing such a model may prove difficult for complex real-world environments, with inaccuracies and human biases creeping into handcrafted models. In this work, we propose a safe RL agent that makes use of latent shielding, an approach to shielding in environments where a formally-specified dynamics model is not available in advance. At an intuitive level, the agent uses a data-driven approach to learn its own latent world model (a component of which is a dynamics model), which is then leveraged by a shield. The shield then uses the agent's model to "imagine" trajectories arising from different actions, forcing the agent to avoid those it foresees leading to unsafe states. In addition, the agent can be trained within its own latent world model, thus reducing the number of safety violations seen during training.
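The shield-as-filter idea described above can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's implementation: the one-step `dynamics`, the `is_unsafe` predicate, and the toy 1-D environment are all assumed stand-ins for a learned or handcrafted model.

```python
# Minimal sketch of a shield as an action filter (all names and the toy
# dynamics below are illustrative assumptions, not the paper's API).

def shield(state, proposed_action, actions, dynamics, is_unsafe, backup_policy):
    """Return proposed_action if the model predicts a safe successor;
    otherwise fall back to a safe alternative from the backup policy."""
    if not is_unsafe(dynamics(state, proposed_action)):
        return proposed_action
    # Override: scan alternatives in the backup policy's preference order.
    for a in backup_policy(state, actions):
        if not is_unsafe(dynamics(state, a)):
            return a
    return proposed_action  # no safe action exists; a violation is unavoidable

# Toy 1-D environment: state is an integer position, position >= 3 is unsafe.
dynamics = lambda s, a: s + a
is_unsafe = lambda s: s >= 3
backup = lambda s, acts: sorted(acts)          # prefer moving left
safe_a = shield(2, +1, [-1, 0, +1], dynamics, is_unsafe, backup)
print(safe_a)  # -1: the shield overrides the unsafe "+1" proposal
```

Note that the backup policy is safe but by no means optimal, exactly as in the classical formulation: it only needs to propose some action whose predicted successor is not a violation state.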
Contributions The main contribution of this work is a framework for shielding agents in complex, stochastic and high-dimensional environments without knowledge of environmental dynamics a priori. We further introduce a new method to aid exploration when training shielded agents. Though our framework loses the formal safety guarantees associated with traditional symbolic shielding approaches, our experiments illustrate that latent shielding reduces unsafe behaviour during training and achieves testing performance comparable to previous symbolic approaches. 2 PRELIMINARIES. In this section, we cover some relevant background topics. We begin by introducing our problem setup for safety-aware RL and give an overview of the specification language used in this work. This is followed by an outline of the latent world model we make use of, as well as a discussion on shielding. 2.1 PROBLEM SETUP. We consider an agent interacting with an environment $E$ modelled as a partially observable Markov decision process (POMDP) with states $s \in S_E$, observations $o_t \in O_E$, agent-generated actions $a_t \in A_E$ and scalar rewards $r_t \in \mathbb{R}$ over discrete time steps $t \in \{0, 1, \ldots, T-1\}$. We assume the environment has been augmented with a labelling function $L^\phi_E : S_E \to \{\text{safe}, \text{unsafe}\}$ that, at each time step, informs us whether a violation has occurred with respect to some formal safety specification $\phi$. For the avoidance of doubt, we define a violation to have occurred whenever $\phi$ does not hold. This is a weaker assumption than those of previous works in shielding (which assume access to an abstraction of the environment), and the labelling function can be thought of as a secondary safety-focused reward function with a binary output. Intuitively, the goal of the agent is to learn a policy $\pi$ that maximises its expected cumulative reward while minimising the number of violations of $\phi$. 2.2 SYNTACTICALLY CO-SAFE LINEAR TEMPORAL LOGIC.
In this work, we use syntactically co-safe Linear Temporal Logic (scLTL) (Kupferman & Vardi, 2001) as our specification language. Valid scLTL formulae over some set of atomic propositions $AP$ can be constructed according to the following grammar: $\phi ::= \text{true} \mid d \mid \neg d \mid \phi \vee \phi \mid \phi \wedge \phi \mid \bigcirc \phi \mid \phi \, \mathcal{U} \, \phi \mid \Diamond \phi$ (1) where $d \in AP$; $\neg$ (negation), $\vee$ (disjunction) and $\wedge$ (conjunction) are the familiar operators from propositional logic; and $\bigcirc$ (next), $\mathcal{U}$ (until) and $\Diamond$ (eventually) are temporal operators. We can monitor a co-safe LTL specification using a technique known as progression (Bacchus & Kabanza, 2000). 2.3 RECURRENT STATE-SPACE MODELS. We refer to the predictive model of an environment maintained by a model-based agent as its world model. World models can be learnt from experience and be used both as a substitute for the environment during training (Ha & Schmidhuber, 2018; Hafner et al., 2021) and for planning at run-time (Hafner et al., 2019b). Though many realisations of the notion of a world model exist, the world model used in this work is based on the recurrent state-space model (RSSM) proposed by Hafner et al. (2019b). An RSSM is composed of three key components: a latent dynamics model, a reward model, and an observation model. These components act on compact states formed from the concatenation of a deterministic latent state $h_t$ and a stochastic latent state $z_t$. Latent Dynamics Model The latent dynamics model is made up of a number of smaller models. First, the recurrent model $h_t = f(h_{t-1}, z_{t-1}, a_{t-1})$ is used to compute the deterministic latent state based on the previous compact state and action. From $h_t$ and the current observation $o_t$, a distribution $q(z_t \mid h_t, o_t)$ over posterior stochastic latent states $z_t$ is computed by the representation model. At the same time, a distribution $p(\hat{z}_t \mid h_t)$ over prior stochastic latent states $\hat{z}_t$ is computed by the transition model, based only on $h_t$.
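The progression technique mentioned in Section 2.2 (Bacchus & Kabanza, 2000) can be sketched for the scLTL fragment above. The tuple encoding of formulae below is an illustrative assumption; states are sets of atomic propositions.

```python
# Small sketch of scLTL progression over formulae encoded as nested tuples:
# ("ap", d) atomic proposition, ("not", d), ("and"/"or", f, g), ("next", f),
# ("until", f, g), ("ev", f).  A state is the set of propositions true in it.

def simplify(op, a, b):
    # Boolean simplification so resolved subformulae collapse to True/False.
    if op == "or":
        if a is True or b is True: return True
        if a is False: return b
        if b is False: return a
    else:  # "and"
        if a is False or b is False: return False
        if a is True: return b
        if b is True: return a
    return (op, a, b)

def prog(phi, state):
    if phi is True or phi is False: return phi
    tag = phi[0]
    if tag == "ap":   return phi[1] in state
    if tag == "not":  return phi[1] not in state
    if tag in ("and", "or"):
        return simplify(tag, prog(phi[1], state), prog(phi[2], state))
    if tag == "next": return phi[1]
    if tag == "ev":   return simplify("or", prog(phi[1], state), phi)
    if tag == "until":
        return simplify("or", prog(phi[2], state),
                        simplify("and", prog(phi[1], state), phi))
    raise ValueError(tag)

# Monitor "eventually goal": unresolved until a state containing "goal" is seen.
spec = ("ev", ("ap", "goal"))
print(prog(spec, {"start"}))  # ('ev', ('ap', 'goal')): still pending
print(prog(spec, {"goal"}))   # True: the specification is satisfied
```

Progressing the formula through each observed state in this way gives a run-time monitor: the specification is satisfied once the formula progresses to `True`.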
During training, the transition model attempts to minimise the Kullback-Leibler (KL) divergence between the prior and posterior stochastic latent state distributions. In doing this, the RSSM learns to predict future latent states (using the recurrent and transition models) without access to future observations. Observation Model The observation model computes the distribution $p(\hat{o}_t \mid h_t, z_t)$ over observations $\hat{o}_t$ for a particular state. Though not strictly needed, the observation model can prove useful for visualising predicted future states and for providing a richer training signal. Reward Model The reward model computes the distribution $p(\hat{r}_t \mid h_t, z_t)$ over rewards $\hat{r}_t$ for a particular state. In practice, the distributions $p$ and $q$ are implemented with neural networks $p_\theta$ and $q_\theta$ respectively, parameterised by some set of parameters $\theta$. These latent dynamics models define a fully-observable Markov decision process (MDP), as the latent states in the agent's own internal model can always be observed by the agent (Hafner et al., 2019a). We denote the state space of this MDP (comprised of compact latent states) as $S_I$. 2.4 SHIELDING. The classical formulation of shielding in RL is given by Alshiekh et al. (2018). It assumes access to two ingredients: an LTL safety specification and an abstraction (an MDP model of the environment that captures the aspects of the environment relevant for planning ahead with respect to the safety specification). These ingredients are used to construct a formally verifiable reactive system that monitors the agent's actions, overriding those that lead to violation states. Proposed by Giacobbe et al. (2021), bounded prescience shielding (BPS) avoids the need for handcrafted abstractions by exploiting the fact that some agents are trained in computer simulations. The shield operates by leveraging access to the program underlying the simulation to look ahead into future states within some finite horizon.
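The RSSM components and the KL training signal described in Section 2.3 can be illustrated with a toy filtering step. Real RSSMs use neural networks for every component; here the recurrent, representation and transition models are stand-in linear maps with unit-variance Gaussians, and all names are assumptions for illustration.

```python
# Illustrative sketch of one RSSM filtering step with a KL training signal.
import math

def kl_gauss(mu_q, sig_q, mu_p, sig_p):
    """KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) ) for scalar Gaussians."""
    return (math.log(sig_p / sig_q)
            + (sig_q**2 + (mu_q - mu_p)**2) / (2 * sig_p**2) - 0.5)

def rssm_step(h_prev, z_prev, a_prev, o_t):
    h_t = 0.9 * h_prev + 0.1 * z_prev + 0.1 * a_prev      # recurrent model f
    prior = (0.5 * h_t, 1.0)                              # p(z_hat_t | h_t)
    post = (0.5 * h_t + 0.5 * o_t, 1.0)                   # q(z_t | h_t, o_t)
    kl = kl_gauss(post[0], post[1], prior[0], prior[1])   # training signal
    return h_t, post, kl

h, (z_mu, _), kl = rssm_step(h_prev=0.0, z_prev=0.0, a_prev=1.0, o_t=0.0)
print(kl)  # 0.0: with o_t = 0 the posterior happens to match the prior here
```

Minimising this KL term pushes the observation-free prior toward the observation-conditioned posterior, which is what lets the transition model predict ahead without future observations.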
Using BPS over classical shielding does, however, come with a few disadvantages. Firstly, it requires access to the simulation at run-time, which may prove difficult to provide (especially in cases where running the simulation is computationally expensive). Moreover, an agent using BPS, even when starting from a safe state, can find itself entering unsafe states in cases where the number of steps between a violation being caused by an action and the violation state itself exceeds the shield's look-ahead horizon. This is not the case for classical shielding, which resembles BPS with an infinite horizon. 2.5 BOUNDED SAFETY. The notion of safety used by BPS is defined over MDPs. For an arbitrary MDP with states $S$ and actions $A$, a bounded trajectory $\rho$ of length $H$ is a sequence of states and actions $s_0 \xrightarrow{a_0} s_1 \xrightarrow{a_1} \cdots \xrightarrow{a_{n-1}} s_n$ comprised of no more than $H$ states and with the final state $s_n$ either being a terminal state or $n = H - 1$. We further denote the set of all finite trajectories starting from some arbitrary state $s \in S$ by $\varrho(s)$, and the set of all bounded trajectories of length $H$ that start from $s$ by $\varrho^H(s)$. We say a bounded trajectory $\rho$ of length $H$ satisfies $H$-bounded safety with respect to safety specification $\phi$, written $S^H(\rho, \phi)$, if and only if for all $s_i \in \rho$, $L^\phi_E(s_i) = \text{safe}$. Moreover, we can extend the notion of $H$-bounded safety to policies: a policy $\pi$ is $H$-bounded safe with respect to $\phi$, denoted $S^H(\pi, \phi)$, if and only if for all $s \in S$, • either there exists some $\rho \in \varrho^H(s)$ such that $S^H(\rho, \phi)$ and $\pi(s_0) = a_0$; • or for all $\rho \in \varrho^H(s)$, $\neg S^H(\rho, \phi)$. In other words, the policy will choose a safe trajectory as long as one exists. Finally, we formally define a violation of $\phi$ to be inevitable in state $s_0 \in S$ if and only if for all $\rho \in \varrho(s_0)$, $\neg S^H(\rho, \phi)$.
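The bounded-safety definitions above can be made concrete on a small explicit MDP. The dict-based MDP encoding, the function names, and the toy transition system below are illustrative assumptions, not the paper's formalism.

```python
# Sketch of H-bounded safety checking over a small deterministic MDP.

def bounded_trajectories(s, transitions, H):
    """Enumerate state sequences of at most H states starting from s,
    stopping early at terminal states (states with no outgoing actions)."""
    if H == 1 or s not in transitions:
        return [[s]]
    trajs = []
    for a, s_next in transitions[s].items():
        for tail in bounded_trajectories(s_next, transitions, H - 1):
            trajs.append([s] + tail)
    return trajs

def h_bounded_safe(traj, label):
    """S^H(rho, phi): every state on the trajectory is labelled safe."""
    return all(label(s) == "safe" for s in traj)

# Toy MDP: from 0, action "r" walks right; state 3 is a violation state.
T = {0: {"r": 1}, 1: {"r": 2, "jump": 3}, 2: {"r": 3}}
label = lambda s: "unsafe" if s == 3 else "safe"
trajs = bounded_trajectories(0, T, H=3)
safe = [t for t in trajs if h_bounded_safe(t, label)]
print(safe)  # [[0, 1, 2]]: a violation is not inevitable from state 0 at H=3
```

Note how the horizon matters: at `H=4` every trajectory from state 0 reaches state 3, so with a larger horizon the same state would be flagged as leading inevitably to a violation.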
| The paper proposes a safe model-based reinforcement learning algorithm that trains a classifier to predict whether or not a state is unsafe, and plans trajectories that avoid unsafe states. The key idea is to train the classifier on latent representations of states given by the latent dynamics model in a recurrent state-space model. Experiments in simulated navigation tasks show that in some environments, the proposed method achieves higher reward and violates safety constraints less often than an ablated variant that does not constrain planned trajectories to be safe. | SP:a2ced98b09570a5ab17565a2f6223fd198eddbdd |
On the Convergence of Nonconvex Continual Learning with Adaptive Learning Rate | 1 INTRODUCTION. Learning new tasks without forgetting previously learned ones is a key requirement for artificial intelligence to be as versatile as humans. Unlike conventional deep learning, which observes tasks drawn from an i.i.d. distribution, continual learning trains a model sequentially on a non-stationary stream of data (Ring, 1995; Thrun, 1994). Continual learning systems struggle with catastrophic forgetting when access to the data of previously learned tasks is restricted (French & Chater, 2002). To overcome catastrophic forgetting, continual learning algorithms introduce novel methods such as a replay memory that stores and replays previously learned examples (Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019; Chaudhry et al., 2019a), regularization methods that penalize neural networks (Kirkpatrick et al., 2017; Zenke et al., 2017), Bayesian methods that utilize the uncertainty of parameters or data points (Nguyen et al., 2018; Ebrahimi et al., 2020), and other recent approaches (Yoon et al., 2018; Lee et al., 2019). In this paper, we focus on online continual learning with a replay memory. The learner stores a small subset of the data from previous tasks in a memory and replays these samples to keep the model in a feasible region corresponding to a moderately suboptimal region. Gradient episodic memory (GEM) (Lopez-Paz & Ranzato, 2017) first formulated replay-based continual learning as a constrained optimization problem. This formulation rephrases the constraints on the objectives for previous tasks as inequalities based on the inner product of the loss gradient vectors for the previous tasks and the current task.
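The GEM-style inner-product constraint mentioned above can be sketched directly: a current-task gradient $g$ is acceptable when $\langle g, g_{\text{ref}} \rangle \ge 0$ for a previous-task gradient $g_{\text{ref}}$, and otherwise it is projected. The single-constraint projection shown below is the A-GEM-style closed form; the pure-Python vectors and names are illustrative.

```python
# Sketch of the inner-product constraint on loss gradients and the
# A-GEM-style projection onto the half-space {v : <v, g_ref> >= 0}.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_if_conflicting(g, g_ref):
    """Return g unchanged if it does not conflict with the previous-task
    gradient g_ref; otherwise remove the conflicting component."""
    inner = dot(g, g_ref)
    if inner >= 0:
        return list(g)
    coef = inner / dot(g_ref, g_ref)
    return [gi - coef * ri for gi, ri in zip(g, g_ref)]

g_ok = project_if_conflicting([1.0, 1.0], [1.0, 0.0])    # no conflict
g_fix = project_if_conflicting([-1.0, 1.0], [1.0, 0.0])  # conflicting
print(g_ok, g_fix)  # [1.0, 1.0] [0.0, 1.0]
```

After projection the update no longer has a negative component along the reference gradient, so to first order it does not increase the previous-task loss.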
However, a theoretical convergence analysis of the performance on previously learned tasks, which serves as a measure of catastrophic forgetting, has not been rigorously studied in the literature. Without a convergence analysis, this intuitive reformulation as constrained optimization does not provide a theoretical guarantee of preventing catastrophic forgetting. The nonconvex finite-sum optimization problem offers a way to analyze catastrophic forgetting by measuring the convergence on previously learned tasks, which is related to their performance. Most deep learning problems are defined as nonconvex optimization, and the target objective is composed of the sum of objectives over the data points. We express our continual learning problem in the form $\min_{x \in \mathbb{R}^d} f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x)$, (1) where we assume that each objective $f_i(x)$, with model $x$ and data point $i$, is nonconvex with Lipschitz gradient. Here, we expect a stochastic gradient descent based algorithm to reach a stationary point instead of the global minimum of the nonconvex objective. This generic form is well studied to demonstrate the convergence and complexity of stochastic gradient methods in the nonconvex setting (Zhou & Gu, 2019; Lei et al., 2017; Reddi et al., 2016a; Zaheer et al., 2018). Unlike the convex case, convergence is generally measured by the expectation of the squared norm of the gradient, $\mathbb{E}\|\nabla f(x)\|^2$. The theoretical complexity is derived from the $\epsilon$-accurate solution, also known as a stationary point, with $\mathbb{E}\|\nabla f(x)\|^2 \le \epsilon$. Suppose we divide the entire sum of objectives into two terms, for previous tasks and current tasks, and measure the convergence of each term. Then we can observe the transition of convergence on the previous and current tasks, respectively, while learning sequentially from a data stream.
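The $\epsilon$-accuracy criterion above is easy to evaluate on a toy finite sum. The scalar components $f_i(x) = (x - c_i)^2/2$ below are an illustrative assumption chosen so the gradient has a closed form.

```python
# Sketch of the epsilon-accuracy criterion E||grad f(x)||^2 <= epsilon for a
# finite-sum objective with toy components f_i(x) = (x - c_i)^2 / 2.

def grad_f(x, centers):
    # gradient of f(x) = (1/n) * sum_i (x - c_i)^2 / 2  is  x - mean(c)
    return x - sum(centers) / len(centers)

def is_eps_accurate(x, centers, eps):
    return grad_f(x, centers) ** 2 <= eps

centers = [0.0, 1.0, 2.0]                        # data points of previous tasks
print(is_eps_accurate(1.0, centers, eps=1e-6))   # True: x = mean is stationary
print(is_eps_accurate(1.5, centers, eps=1e-6))   # False: ||grad||^2 = 0.25
```

Evaluating the same quantity separately on the previous-task components and the current-task components is exactly the split the paper uses to watch forgetting happen during training.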
We regard this transition of convergence on the previous tasks as catastrophic forgetting if $\mathbb{E}\|\nabla f_P(x)\|^2$, with $P$ the set of data points from previous tasks, increases over iterations. In this work, we formulate the continual learning problem as nonconvex finite-sum optimization with a stochastic gradient descent algorithm that simultaneously updates on previously learned tasks (from the replay memory) and on the current task, and we present a theoretical convergence analysis of continual learning by leveraging this replay-based update. This extends continual learning algorithms such as ER-Reservoir (experience replay with reservoir sampling), which use a fixed, identical learning rate for both tasks, to our adaptive method, which controls the relative importance between tasks at each step with a theoretical guarantee. In addition, replay-based continual learning has the critical limitation of overfitting to the memory, which also degrades the performance on previously learned tasks, much like catastrophic forgetting by interference. Choosing a perfect memory for continual learning, one that prevents catastrophic forgetting, is known to be an NP-hard problem (Knoblauch et al., 2020). We show that the inductive bias introduced by the replay memory, which prevents perfect continual learning, is inevitable from an optimization point of view. 2 BACKGROUNDS. A continual learning algorithm with a replay memory of size $m$ cannot access the whole dataset of the previously learned tasks with $n_f$ samples, but uses the limited samples in the memory while the learner trains on the current task. This limited access does not guarantee the complete prevention of catastrophic forgetting, and it causes an overfitting problem through the biased gradient on the memory. In Section 3, we provide the convergence analysis of the previously learned tasks $f(x)$, which are vulnerable to catastrophic forgetting.
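The ER-Reservoir-style memory referenced above relies on standard reservoir sampling, which keeps a uniform random sample of the stream in a fixed-size buffer. The sketch below implements that standard algorithm; the class name and interface are illustrative assumptions, not the paper's code.

```python
# Sketch of a reservoir-sampled replay memory of fixed capacity m: after
# seeing t items, each item is retained with probability m / t.
import random

class ReservoirMemory:
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = self.rng.randrange(self.seen)   # drop rule: overwrite w.p. m/seen
            if j < self.capacity:
                self.items[j] = item

    def sample(self, batch_size):
        return self.rng.sample(self.items, min(batch_size, len(self.items)))

mem = ReservoirMemory(capacity=5)
for x in range(100):
    mem.add(x)
print(len(mem.items))  # 5: the memory stays fixed-size as the stream grows
```

The `sample` method plays the role of drawing the memory mini-batch $I_t$ at each training step.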
We denote by $f_i(x)$ the component indicating the loss of sample $i$ from the previously learned tasks under model parameter $x$, and by $\nabla f_i(x)$ its gradient. We use $I_t$, $J_t$ for the mini-batches of samples at iteration $t$, and denote the mini-batch sizes $|I_t|$ and $|J_t|$ by $b_f$, $b_g$ throughout the paper. We also note that $g_j(x)$, which denotes the loss for the current task and will be defined in Section 3.1, satisfies the above and following assumptions. To formulate convergence over iterations, we introduce the Incremental First-order Oracle (IFO) framework (Ghadimi & Lan, 2013), in which one unit of cost is charged for sampling the pair $(\nabla f_i(x), f_i(x))$. For example, a stochastic gradient descent algorithm incurs a cost equal to the batch size $b_t$ at each step, and the total cost is the sum of batch sizes $\sum_{t=1}^{T} b_t$. Let $T(\epsilon)$ be the minimum number of iterations needed to guarantee an $\epsilon$-accurate solution. Then the average bound on the IFO complexity is at most $\sum_{t=1}^{T(\epsilon)} b_t$. To analyze the convergence and compute the IFO complexity, we define the supremum of the loss gap between a local optimal point $x_0$ and the global optimum $x^*$ as $\Delta_f = \sup_{x_0} f(x_0) - f(x^*)$. (2) If $\sup f(x_0)$ equals $f(x^*)$, then $\Delta_f = 0$, which may be much smaller than the loss gap of general SGD. Outside the continual learning scenario, general nonconvex SGD updates the parameters from a randomly initialized point, which is highly likely to have a large loss $f(x_0)$. Then $\Delta_f > 0$ is the key constant determining the IFO complexity for convergence, as $\Delta_f$ appears in the numerator of Equation 11. However, a continual learning algorithm has already converged to a local optimal point $x_0$ for the previous task $f(x)$ and may attain a much smaller $\Delta_f$ than general SGD. This means that $\Delta_f$ for nonconvex continual learning in Equation 11 does not have a large impact on the IFO complexity.
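The IFO accounting above is just bookkeeping: every gradient evaluation of a component costs one unit, so a run's complexity is the sum of batch sizes until the $\epsilon$-accuracy criterion is met. The sketch below illustrates this on a deterministic full-batch run over toy quadratic components (all names and constants are illustrative).

```python
# Sketch of IFO-cost accounting for gradient descent on toy components
# f_i(x) = (x - c_i)^2 / 2: each component gradient costs one IFO unit.

def run_gd_with_ifo(centers, x0, lr, eps, max_iter=10_000):
    x, cost = x0, 0
    n = len(centers)
    for _ in range(max_iter):
        grad = sum(x - c for c in centers) / n   # one IFO call per component
        if grad ** 2 <= eps:                     # ||grad f(x)||^2 <= eps
            return x, cost
        cost += n                                # charge the batch size
        x -= lr * grad
    return x, cost

x, cost = run_gd_with_ifo([0.0, 2.0], x0=10.0, lr=0.1, eps=0.01)
print(cost)  # 86: 43 descent steps, each charged the batch size of 2
```

Replacing the full batch with random mini-batches changes the per-step charge to $b_t$ but leaves the accounting identical, which is how $\sum_{t=1}^{T(\epsilon)} b_t$ arises.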
To generalize the theoretical result, we define the worst local minimum to explain the upper bound on the convergence rate in Equation 2. This implies that $\Delta_f$ is not a critical reason for moving away from stationary points of $f$ under catastrophic forgetting, which we explain in Section 3. We also define $\sigma_f$ and $\sigma_g$, for $f$ and $g$ respectively, as the upper bounds on the variance of the stochastic gradients of a given mini-batch. For brevity, we write only one of them, $\sigma_f$: $\sigma_f = \sup_x \frac{1}{n_f} \sum_{i=1}^{n_f} \|\nabla f_i(x) - \nabla f(x)\|^2$. (3) Throughout the paper, we assume L-smoothness. Assumption 1. $f_i$ is L-smooth: there exists a constant $L > 0$ such that for any $x, y \in \mathbb{R}^d$, $\|\nabla f_i(x) - \nabla f_i(y)\| \le L \|x - y\|$, (4) where $\|\cdot\|$ denotes the Euclidean norm. Then the following inequality holds directly: $-\frac{L}{2}\|x - y\|^2 \le f_i(x) - f_i(y) - \langle \nabla f_i(y), x - y \rangle \le \frac{L}{2}\|x - y\|^2$. (5) We derive this inequality in Appendix B. With Assumption 1, we can successfully handle the individual nonconvex objectives for each data point. In the next section, we investigate nonconvex continual learning with adaptive learning rates to overcome catastrophic forgetting. 3 NONCONVEX CONTINUAL LEARNING. We first present a theoretical convergence analysis of memory-based continual learning in the nonconvex setting. We use the convergence rate of stochastic gradient methods, expressed as the IFO complexity to reach an $\epsilon$-accurate solution for the smooth nonconvex finite-sum problem (Reddi et al., 2016a). This generic form enables both the deep learning and optimization communities to formulate various accelerated gradient methods with theoretical guarantees. We seek to understand why catastrophic forgetting happens in terms of the convergence rate, and we propose the nonconvex continual learning (NCCL) algorithm with a theoretical convergence analysis. 3.1 PROBLEM FORMULATION.
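The upper bound in Equation 5 follows from Assumption 1 by the standard integral argument; a sketch (the lower bound is symmetric):

```latex
f_i(x) - f_i(y) - \langle \nabla f_i(y),\, x - y \rangle
  = \int_0^1 \bigl\langle \nabla f_i\bigl(y + t(x - y)\bigr) - \nabla f_i(y),\, x - y \bigr\rangle \, dt
  \le \int_0^1 \bigl\| \nabla f_i\bigl(y + t(x - y)\bigr) - \nabla f_i(y) \bigr\| \, \| x - y \| \, dt
  \le \int_0^1 L\, t\, \| x - y \|^2 \, dt
  = \frac{L}{2} \| x - y \|^2 ,
```

using the fundamental theorem of calculus, the Cauchy-Schwarz inequality, and Equation 4 applied to the pair $(y + t(x - y),\, y)$.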
Given two finite sets $P$ and $C$ at the initial time step $t = 0$, we let these sets denote the indices of previously learned data points and upcoming data points, respectively. Note that the task description for a continual learner is these two separate sets. In this section, we show a convergence analysis for a model parameter that has been trained on $P$ and starts to learn $C$. Thus, we simply denote a data stream of continual learning by the two consecutive sets $P$ and $C$. We consider our goal as a smooth nonconvex finite-sum optimization problem with two objectives: $\min_{x \in \mathbb{R}^d} F(x) = f(x) + g(x) = \frac{1}{n_f} \sum_{i \in P} f_i(x) + \frac{1}{n_g} \sum_{j \in C} g_j(x)$, (6) where $f_i(x)$ and $g_j(x)$ denote the objectives for data points $i \in P$ and $j \in C$, respectively. In addition, $n_f$ and $n_g$ are the numbers of elements of $P$ and $C$. To ease exposition, we use the distinct notation $g_j(x)$ for a data point $j \in C$, although it is usually the same objective function as for a data point $i \in P$. To formulate a theoretical convergence analysis of continual learning, we consider a replay memory based method whose memory is a subset of $P \cup C$. Let the random variable $M_t \subset P \cup C$ be the replay memory at time step $t \in [0, T]$, with union $M := \cup_t M_t$. We consider both episodic memories and replay memories with a dropping rule. Episodic memory based methods include GEM (Lopez-Paz & Ranzato, 2017), A-GEM (Chaudhry et al., 2019a), and ORTHOG-SUBSPACE (Chaudhry et al., 2020). ER-Reservoir (Chaudhry et al., 2019b) is a replay memory based method with a dropping rule, which replaces a dropped sample $d \in M_t$ with a sample from the stream for $C$. We now define the gradient update of continual learning: $x_{t+1} = x_t - \alpha_{H_t} \nabla f_{I_t}(x_t) - \beta_{H_t} \nabla g_{J_t}(x_t)$, (7) where $I_t \subset M_t$ and $J_t \subset C$ denote the mini-batches from the replay memory and the current data stream, respectively. Here, $H_t$ is the union of $I_t$ and $J_t$.
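The update in Equation 7 can be sketched with toy scalar objectives: one step descends the memory gradient and the current-task gradient with separate learning rates. The constant $\alpha$/$\beta$ values below are an illustrative choice, not the paper's adaptive rule, and the quadratic components are assumptions.

```python
# Sketch of Equation 7: x_{t+1} = x_t - alpha * grad f_It - beta * grad g_Jt,
# with toy components f_i(x) = (x - c_i)^2 / 2 and likewise for g_j.

def nccl_step(x, mem_batch, cur_batch, alpha, beta):
    grad_f = sum(x - c for c in mem_batch) / len(mem_batch)  # f_It, prev tasks
    grad_g = sum(x - c for c in cur_batch) / len(cur_batch)  # g_Jt, current task
    return x - alpha * grad_f - beta * grad_g                # Equation 7

x = 0.0
memory, stream = [0.0], [2.0]        # f pulls x toward 0, g pulls x toward 2
for _ in range(200):
    x = nccl_step(x, memory, stream, alpha=0.1, beta=0.1)
print(round(x, 3))  # 1.0: equal rates balance the previous and current tasks
```

Shrinking `alpha` relative to `beta` moves the fixed point toward the current task and vice versa, which is the trade-off the adaptive $\alpha_{H_t}$, $\beta_{H_t}$ are meant to control.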
The adaptive learning rates of $\nabla f_{I_t}(x_t)$ and $\nabla g_{J_t}(x_t)$ are denoted by $\alpha_{H_t}$ and $\beta_{H_t}$, which are functions of $H_t$. Strictly speaking, the mini-batch $I_t$ drawn from $M_t$ might contain a data point $d \in C$ for ER-Reservoir. We describe the details of this issue in Appendix B and assume, for convenience, that the notation $I_t$ indicates a subset of $P$. Equation 7 is a generalized form of replay-based continual learning updates, which we later use to prove convergence rates in the nonconvex setting for the proposed method, A-GEM, and ER-Reservoir. | The authors propose a stochastic gradient descent algorithm with adaptive learning rates for continual learning, providing an analysis of convergence. The main idea is to formulate the problem as a finite sum of nonconvex objective functions, with each component of the sum corresponding to a different task. The authors claim that the use of adaptive learning rates allows them to better control the relative importance of different tasks. | SP:c9f4cde64fa9183fc44259b8b1b6f18d9f5fe104 |
Given two finite sets P and C at the initial time step t = 0 , we let two sets denote the sets of indices for previously learned data points and upcoming data points , respectively . Note that the task description for a continual learner is two separate sets . In this section , we will show a convergence analysis of the model parameter that we have trained on P and starts to learn C. Thus , we simply denote a data stream of continual learning as two consecutive sets P and C. We consider our goal as a smooth nonconvex finite-sum optimization problem with two objectives min x∈Rd F ( x ) = f ( x ) + g ( x ) = 1 nf ∑ i∈P fi ( x ) + 1 ng ∑ j∈C gj ( x ) , ( 6 ) where fi ( x ) and gj ( x ) denote the objectives of data points i ∈ P and j ∈ C , respectively . In addition , nf and ng are the numbers of elements for P and C. To ease exposition , we use a different notation gj ( x ) for a data point j ∈ C , which is usually the same objective function for a data point i ∈ P . To formulate a theoretical convergence analysis of continual learning , we consider a replay memory based method of which memory is a subset of P ∪ C. Let a random variable Mt ⊂ P ∪ C be the replay memory at time step t ∈ [ 0 , T ] , whose union is of the form M : = ∪tMt . We focus both the episodic memory and the replay memory with dropping rule . The episodic memory based methods include GEM ( Aljundi et al. , 2019 ) , A-GEM ( Chaudhry et al. , 2019a ) , and ORTHOG-SUBSPACE ( Chaudhry et al. , 2020 ) . ER-Reservoir ( Chaudhry et al. , 2019b ) is a replay memory based method with dropping rule , which replaces the dropped sample d ∈Mt with a sample in the stream for C. We now define the gradient update of continual learning xt+1 = xt − αHt∇fIt ( xt ) − βHt∇gJt ( xt ) , ( 7 ) where It ⊂Mt and Jt ⊂ C denote the mini-batches from the replay memory and the current data stream , respectively . Here , Ht is the union of It and Jt . 
| In this paper, the authors analyze the convergence rate of episodic memory-based continual learning methods. The authors formulate the continual learning problem as a nonconvex finite-sum optimization problem. Based on the analysis, the authors propose an adaptive learning rate scheduling method to adjust the learning rates based on the gradients computed in each iteration. The results on several benchmarks show that the proposed method can achieve better results than the baselines. | SP:c9f4cde64fa9183fc44259b8b1b6f18d9f5fe104 |
On the Convergence of Nonconvex Continual Learning with Adaptive Learning Rate |

1 INTRODUCTION. Learning new tasks without forgetting previously learned ones is a key aspect of artificial intelligence systems that aim to be as versatile as humans. Unlike conventional deep learning, which observes tasks drawn from an i.i.d. distribution, continual learning trains a model sequentially on a non-stationary stream of data (Ring, 1995; Thrun, 1994). Continual learning systems struggle with catastrophic forgetting when access to the data of previously learned tasks is restricted (French & Chater, 2002). To overcome catastrophic forgetting, continual learning algorithms introduce novel methods such as replay memories that store and replay previously learned examples (Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019; Chaudhry et al., 2019a), regularization methods that penalize changes to the network (Kirkpatrick et al., 2017; Zenke et al., 2017), Bayesian methods that exploit the uncertainty of parameters or data points (Nguyen et al., 2018; Ebrahimi et al., 2020), and other recent approaches (Yoon et al., 2018; Lee et al., 2019). In this paper, we focus on online continual learning with a replay memory. The learner stores a small subset of the data from previous tasks in a memory and replays those samples to keep the model within a feasible region corresponding to a moderately suboptimal solution. Gradient episodic memory (GEM) (Lopez-Paz & Ranzato, 2017) first formulated replay-based continual learning as a constrained optimization problem. This formulation rephrases the constraints on the objectives of previous tasks as inequalities on the inner products between the loss gradients of the previous tasks and that of the current task.
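GEM's inner-product constraint described above can be illustrated with a minimal sketch; the toy vectors and names below are illustrative assumptions, and GEM's projection step (applied when the constraint is violated) is omitted:

```python
# Minimal sketch of the GEM-style inner-product constraint: a proposed
# gradient g_curr for the current task is safe for a previous task, to
# first order, when <g_curr, g_prev> >= 0; a negative inner product
# means the step -lr * g_curr would increase the previous-task loss.

def violates_constraint(g_curr, g_prev):
    """True iff the proposed update would increase the previous-task
    loss to first order, i.e. the inner product is negative."""
    return sum(a * b for a, b in zip(g_curr, g_prev)) < 0

g_prev = [1.0, 0.0]                                   # previous-task loss gradient
safe = violates_constraint([0.5, -2.0], g_prev)       # aligned directions
conflict = violates_constraint([-0.5, 3.0], g_prev)   # conflicting directions
```

In GEM itself, a violated constraint triggers a projection of the current gradient; the check above is only the feasibility test.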
However, a theoretical convergence analysis of the performance on previously learned tasks, which would provide a measure of catastrophic forgetting, has not been rigorously studied in the literature. Without a convergence analysis, this intuitive reformulation as constrained optimization does not provide a theoretical guarantee against catastrophic forgetting. Nonconvex finite-sum optimization offers a way to analyze catastrophic forgetting by measuring the convergence of previously learned tasks, which is related to their performance. Most deep learning problems are nonconvex optimization problems whose target objective is a sum of objectives over individual data points. We express our continual learning problem in the form

$$\min_{x \in \mathbb{R}^d} f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x), \tag{1}$$

where we assume that each objective $f_i(x)$, for a model $x$ and a data point $i$, is nonconvex with Lipschitz gradient. Here, we expect a stochastic gradient descent based algorithm to reach a stationary point rather than the global minimum of the nonconvex objective. This generic form is well studied for establishing the convergence and complexity of stochastic gradient methods in the nonconvex setting (Zhou & Gu, 2019; Lei et al., 2017; Reddi et al., 2016a; Zaheer et al., 2018). Unlike the convex case, convergence is generally measured by the expected squared norm of the gradient $\mathbb{E}\|\nabla f(x)\|^2$. The theoretical complexity is derived from the $\epsilon$-accurate solution, also known as a stationary point, with $\mathbb{E}\|\nabla f(x)\|^2 \le \epsilon$. Suppose we split the sum of objectives into two terms, one for the previous tasks and one for the current task, and measure convergence on each term separately. Then we can observe how the convergence of the previous and current tasks evolves while learning sequentially from a data stream.
We regard this transition of convergence on the previous tasks as catastrophic forgetting if $\mathbb{E}\|\nabla f_P(x)\|^2$, where $P$ is the set of data points from the previous tasks, increases over iterations. In this work, we formulate the continual learning problem as a nonconvex finite-sum optimization solved by a stochastic gradient descent algorithm that simultaneously updates on previously learned tasks (via a replay memory) and on the current task, and we present a theoretical convergence analysis of continual learning by leveraging this replay-based update. This extends continual learning algorithms such as ER-Reservoir (experience replay with reservoir sampling), which use a single fixed learning rate for both tasks, to an adaptive method that controls the relative importance of the tasks at each step with a theoretical guarantee. In addition, replay-based continual learning has the critical limitation of overfitting to the memory, which, like the interference that causes catastrophic forgetting, degrades the performance on previously learned tasks. Choosing a perfect memory that prevents catastrophic forgetting is known to be an NP-hard problem (Knoblauch et al., 2020). We show that the inductive bias introduced by the replay memory, which prevents perfect continual learning, is inevitable from an optimization viewpoint.

2 BACKGROUND. A continual learning algorithm with a replay memory of size $m$ cannot access the whole dataset of the previously learned tasks, which contains $n_f$ samples, but only the limited samples in the memory while the learner trains on the current task. This limited access does not guarantee that catastrophic forgetting is completely prevented, and it causes an overfitting problem through the biased gradient computed on the memory. In Section 3, we provide a convergence analysis of the previously learned tasks $f(x)$, which are vulnerable to catastrophic forgetting.
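The forgetting criterion above, an increase in $\mathbb{E}\|\nabla f_P(x)\|^2$ over iterations, can be illustrated with a toy sketch. The scalar quadratic per-sample losses are an assumption for clarity (the paper's setting is nonconvex), and all names are ours:

```python
# Toy illustration of forgetting as divergence of the previous-task
# convergence measure: plain gradient descent on the current task C
# only, while we monitor the squared full-gradient norm on the
# previous tasks P. Losses are f_i(x) = (x - a_i)^2 for illustration.

def full_grad(x, data):
    """Gradient of f(x) = (1/n) * sum_i (x - a_i)^2."""
    return sum(2.0 * (x - a) for a in data) / len(data)

def sq_grad_norm(x, data):
    g = full_grad(x, data)
    return g * g

prev_data = [-1.0, 0.0, 1.0]   # P, optimum at x = 0
curr_data = [4.0, 5.0, 6.0]    # C, optimum at x = 5

x = 0.0                        # already converged on P (stationary point)
norm_before = sq_grad_norm(x, prev_data)
for _ in range(50):            # full-batch gradient steps on C only
    x -= 0.1 * full_grad(x, curr_data)
norm_after = sq_grad_norm(x, prev_data)
# norm_after >> norm_before: the measure on P has increased, i.e.
# catastrophic forgetting in the sense defined above.
```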
We denote by $f_i(x)$ the loss of sample $i$ from the previously learned tasks under model parameter $x$, and by $\nabla f_i(x)$ its gradient. We write $I_t, J_t$ for the mini-batches of samples at iteration $t$ and denote their sizes $|I_t|$ and $|J_t|$ by $b_f$ and $b_g$ throughout the paper. We also note that $g_j(x)$, the loss for the current task (defined in Section 3.1), satisfies the same assumptions as $f_i(x)$. To formalize convergence over iterations, we use the Incremental First-order Oracle (IFO) framework (Ghadimi & Lan, 2013), in which one unit of cost corresponds to sampling the pair $(\nabla f_i(x), f_i(x))$. For example, a stochastic gradient descent algorithm incurs a cost equal to the batch size $b_t$ at each step, so its total cost is the sum of batch sizes $\sum_{t=1}^{T} b_t$. Let $T(\epsilon)$ be the minimum number of iterations needed to guarantee an $\epsilon$-accurate solution. Then the average IFO complexity is bounded by $\sum_{t=1}^{T(\epsilon)} b_t$. To analyze convergence and compute the IFO complexity, we define the supremum of the loss gap between a local optimal point $x_0$ and the global optimum $x^*$ as

$$\Delta_f = \sup_{x_0} f(x_0) - f(x^*). \tag{2}$$

If $\sup f(x_0)$ equals $f(x^*)$, then $\Delta_f = 0$, which may be much smaller than the loss gap of general SGD. Outside the continual learning scenario, general nonconvex SGD updates the parameters from a randomly initialized point, which is highly likely to have a large loss $f(x_0)$; then $\Delta_f > 0$ is the key constant determining the IFO complexity for convergence, since $\Delta_f$ appears in the numerator of Equation 11. A continual learning algorithm, however, has already converged to a local optimal point $x_0$ for the previous tasks $f(x)$ and may therefore have a much smaller $\Delta_f$ than general SGD. This means that $\Delta_f$ for nonconvex continual learning in Equation 11 does not have a large impact on the IFO complexity.
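A minimal sketch of the IFO bookkeeping described above: each sampled pair $(\nabla f_i(x), f_i(x))$ costs one oracle call, so a step with batch size $b_t$ costs $b_t$. The quadratic toy losses, the cap on iterations, and the function name are illustrative assumptions:

```python
# Sketch of IFO-cost bookkeeping for SGD on a toy finite sum
# f(x) = (1/n) * sum_i (x - a_i)^2: each step adds its batch size to
# the running cost, and we stop once the (here exactly computed)
# squared gradient norm is at most epsilon.
import random

def ifo_complexity(data, x0, lr, batch, eps, max_steps=10000, seed=0):
    rng = random.Random(seed)
    x, cost = x0, 0
    for _ in range(max_steps):
        full_grad = sum(2.0 * (x - a) for a in data) / len(data)
        if full_grad ** 2 <= eps:        # epsilon-accurate solution
            break
        sample = [rng.choice(data) for _ in range(batch)]
        x -= lr * sum(2.0 * (x - a) for a in sample) / batch
        cost += batch                    # IFO cost of this step
    return cost

cost = ifo_complexity([1.0, 2.0, 3.0], x0=10.0, lr=0.1, batch=2, eps=0.25)
```

The returned `cost` is exactly $\sum_t b_t$ up to the stopping iteration, matching the accounting in the text.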
To generalize the theoretical result, we use the worst local minimum to explain the upper bound on the convergence rate in Equation 2. This implies that $\Delta_f$ is not the critical reason for moving away from stationary points of $f$ under catastrophic forgetting, which we explain in Section 3. We also define $\sigma_f$ and $\sigma_g$, for $f$ and $g$ respectively, as upper bounds on the variance of the stochastic gradients over a mini-batch. For brevity, we write only $\sigma_f$:

$$\sigma_f = \sup_{x} \frac{1}{n_f} \sum_{i=1}^{n_f} \|\nabla f_i(x) - \nabla f(x)\|^2. \tag{3}$$

Throughout the paper, we assume L-smoothness.

Assumption 1. $f_i$ is L-smooth; that is, there exists a constant $L > 0$ such that for any $x, y \in \mathbb{R}^d$,

$$\|\nabla f_i(x) - \nabla f_i(y)\| \le L \|x - y\|, \tag{4}$$

where $\|\cdot\|$ denotes the Euclidean norm. The following inequality then holds directly:

$$-\frac{L}{2}\|x - y\|^2 \le f_i(x) - f_i(y) - \langle \nabla f_i(y), x - y \rangle \le \frac{L}{2}\|x - y\|^2. \tag{5}$$

This inequality, derived in Appendix B, follows from (4) by writing $f_i(x) - f_i(y) - \langle \nabla f_i(y), x - y \rangle = \int_0^1 \langle \nabla f_i(y + t(x - y)) - \nabla f_i(y), x - y \rangle \, dt$ and bounding the integrand by $Lt\|x - y\|^2$ via Cauchy–Schwarz. With Assumption 1, we can handle the individual nonconvex objectives for each data point. In the next section, we investigate nonconvex continual learning with adaptive learning rates to overcome catastrophic forgetting.

3 NONCONVEX CONTINUAL LEARNING. We first present a theoretical convergence analysis of memory-based continual learning in the nonconvex setting. We use the convergence rate of stochastic gradient methods, expressed as the IFO complexity to reach an $\epsilon$-accurate solution of a smooth nonconvex finite-sum problem (Reddi et al., 2016a). This generic form enables both the deep learning and optimization communities to formulate various accelerated gradient methods with theoretical guarantees. We seek to understand why catastrophic forgetting happens in terms of the convergence rate, and propose nonconvex continual learning (NCCL) algorithms with a theoretical convergence analysis.

3.1 PROBLEM FORMULATION.
Given two finite sets $P$ and $C$ at the initial time step $t = 0$, we let them denote the sets of indices of previously learned data points and upcoming data points, respectively. Note that the task description for a continual learner consists of these two separate sets. In this section, we give a convergence analysis for a model that has been trained on $P$ and then starts to learn $C$; thus, we simply denote a continual learning data stream by the two consecutive sets $P$ and $C$. We state our goal as a smooth nonconvex finite-sum optimization problem with two objectives:

$$\min_{x \in \mathbb{R}^d} F(x) = f(x) + g(x) = \frac{1}{n_f} \sum_{i \in P} f_i(x) + \frac{1}{n_g} \sum_{j \in C} g_j(x), \tag{6}$$

where $f_i(x)$ and $g_j(x)$ denote the objectives of data points $i \in P$ and $j \in C$, respectively, and $n_f$ and $n_g$ are the numbers of elements of $P$ and $C$. To ease exposition, we use the separate notation $g_j(x)$ for a data point $j \in C$, although it is usually the same objective function as for a data point $i \in P$. To formulate a theoretical convergence analysis of continual learning, we consider replay-memory-based methods whose memory is a subset of $P \cup C$. Let the random variable $M_t \subset P \cup C$ be the replay memory at time step $t \in [0, T]$, and let $M := \cup_t M_t$ denote its union over time. We consider both episodic memories and replay memories with a dropping rule. Episodic-memory-based methods include GEM (Lopez-Paz & Ranzato, 2017), A-GEM (Chaudhry et al., 2019a), and ORTHOG-SUBSPACE (Chaudhry et al., 2020). ER-Reservoir (Chaudhry et al., 2019b) is a replay-memory-based method with a dropping rule, which replaces a dropped sample $d \in M_t$ with a sample from the stream of $C$. We now define the gradient update of continual learning as

$$x_{t+1} = x_t - \alpha_{H_t} \nabla f_{I_t}(x_t) - \beta_{H_t} \nabla g_{J_t}(x_t), \tag{7}$$

where $I_t \subset M_t$ and $J_t \subset C$ denote the mini-batches drawn from the replay memory and the current data stream, respectively. Here, $H_t$ is the union of $I_t$ and $J_t$.
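The update in Equation 7, combined with the ER-Reservoir-style dropping rule just described, can be sketched as follows; the scalar quadratic losses, the synthetic stream, and all names are illustrative assumptions rather than the paper's algorithm:

```python
# Toy sketch of Equation 7 with a reservoir-updated replay memory:
# each streamed current-task sample may replace a stored sample
# (reservoir sampling), and each step applies separate learning rates
# alpha/beta to the memory and stream mini-batch gradients.
import random

def reservoir_update(memory, item, t, m, rng):
    """Keep the memory an (approximately) uniform sample of the stream:
    the t-th streamed item replaces a stored sample with probability m/t."""
    if len(memory) < m:
        memory.append(item)
    else:
        j = rng.randrange(t)           # uniform in {0, ..., t-1}
        if j < m:
            memory[j] = item           # drop a stored sample, insert new

def mean_grad(x, batch):
    """Gradient of (1/|batch|) * sum_a (x - a)^2."""
    return sum(2.0 * (x - a) for a in batch) / len(batch)

rng = random.Random(0)
prev_task = [-1.0, 0.0, 1.0]                     # P: previously learned data
stream = [4.0 + 0.01 * k for k in range(100)]    # C: current-task stream
memory = list(prev_task)                         # replay memory, size m = 3
x, alpha, beta, b_f, b_g = 0.0, 0.05, 0.05, 2, 2

for t, sample in enumerate(stream, start=1):
    I_t = [rng.choice(memory) for _ in range(b_f)]  # from replay memory
    J_t = [sample] * b_g                            # simplified stream batch
    x = x - alpha * mean_grad(x, I_t) - beta * mean_grad(x, J_t)
    reservoir_update(memory, sample, t, m=len(prev_task), rng=rng)
```

With equal, fixed `alpha` and `beta` this reduces to the ER-Reservoir-style baseline; the adaptive method of the paper would instead choose the two rates as functions of $H_t$ at each step.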
The adaptive learning rates applied to $\nabla f_{I_t}(x_t)$ and $\nabla g_{J_t}(x_t)$ are denoted by $\alpha_{H_t}$ and $\beta_{H_t}$, which are functions of $H_t$. Strictly speaking, the mini-batch $I_t$ drawn from $M_t$ may contain a data point $d \in C$ under ER-Reservoir; we describe this detail in Appendix B and, for convenience, treat the notation $I_t$ as a subset of $P$. Equation 7 generalizes existing continual learning algorithms, and this generality is what later allows us to prove convergence rates in the nonconvex setting for the proposed method, A-GEM, and ER-Reservoir. | This paper provides a convergence rate characterization for a continual learning problem where the objective functions (for two consecutive tasks) are generally nonconvex and have a finite-sum form over data samples (which belong to the current and previous tasks). The analysis uses tools from nonconvex optimization, with the goal of finding a stationary point for the current and previous tasks. In the continual learning setting, the authors consider the standard replay-memory-based methods, focusing on both episodic memory and replay memory with sample dropping. For the analysis, the authors first address two important types of errors: the gradient estimation bias at time t and the catastrophic forgetting error. They show that for these two memory-based methods, with a good initialization of the replay memory, the gradient estimation error vanishes. They further show that in terms of the convergence rate of the previous task, i.e., $f$ in their context, the catastrophic forgetting error seems to be inevitable and can even be unbounded. To address this error, they propose adaptive learning rate schemes, which show some effectiveness in several experiments. | SP:c9f4cde64fa9183fc44259b8b1b6f18d9f5fe104 |
Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation | Ascertaining that a deep network does not rely on an unknown spurious signal as the basis for its output, prior to deployment, is crucial in high-stakes settings like healthcare. While many post hoc explanation methods have been shown to be useful for some end tasks, theoretical and empirical evidence has also accumulated showing that these methods may not be faithful or useful, leaving little guidance for a practitioner or researcher using them in their decision process. To address this gap, we investigate whether three classes of post hoc explanations (feature attribution, concept activation, and training point ranking) can alert a practitioner to a model's reliance on unknown spurious signals. We test them in two medical domains with plausible spurious signals. In a broad experimental sweep across datasets, models, and spurious signals, we find that the post hoc explanations tested can be used to identify a model's reliance on a visible spurious signal, provided the spurious signal is known ahead of time by the practitioner using the explanation method. Otherwise, a search over possible spurious signals and the available data is required. This finding casts doubt on the utility of these approaches, in the hands of a practitioner, for detecting a model's reliance on spurious signals. It is hard to find a needle in a haystack; it is much harder if you haven't seen a needle before. (Judea Pearl)

1 INTRODUCTION. A challenge that precludes the deployment of modern deep neural networks (DNNs) in high-stakes domains is their tendency to latch onto 'spurious signals' (shortcuts) in the training data (Geirhos et al., 2020). For example, Badgeley et al. (2019) showed that an Inception-V3 model trained to detect hip fractures relied on the scanner type for its classification decision.
Deep learning models readily base output predictions on object backgrounds (Xiao et al., 2020), image texture (Geirhos et al., 2018), and skin tone (Stock and Cisse, 2018). Post hoc model explanation methods, approaches that give insight into the associations a model has learned, are increasingly used to determine whether a model relies on spurious signals. Ribeiro et al. (2016) used LIME to show an Inception-V3 model's reliance on the snow background for identifying wolves. Such demonstrations, and others (Lapuschkin et al., 2019; Rieger et al., 2020), point to post hoc explanation methods as effective tools for the spurious signal detection task. However, these results conflict with evidence indicating that practitioners (and researchers) struggle to use explanations to identify spurious signals (Chen et al., 2021; Chu et al., 2020; Alqaraawi et al., 2020; Adebayo et al., 2020; Poursabzi-Sangdeh et al., 2018). We seek to resolve this conflict by answering a simple but important question: can post hoc explanations help detect a model's reliance on an unknown spurious training signal?

Motivating Example. Consider a machine learning (ML) engineer whose job is to train DNN models to detect knee arthritis from radiographs. She (the engineer) is handed a trained ResNet-50 model, to be deployed in a hospital, that relies on a hospital tag in the radiographs to detect knee arthritis. She has no prior knowledge of the model's reliance on the spurious tags. In this work, our key concern is whether the ML engineer can use post hoc explanations to identify that the model is defective.

1.1 OUR CONTRIBUTIONS. We address the motivating question above in a two-pronged manner. First, we develop an actionable methodology based on the ability to carefully craft datasets that induce spurious correlations in trained models. Second, we back up this experimental design with a human subject study.
Taken together, the takeaway of this work is that post hoc explanations can be used to identify a model's reliance on a visible spurious signal, provided the signal is known ahead of time by the practitioner. While this conclusion may seem unsurprising, it has important implications for how post hoc explanation methods should be used effectively.

Experimental Design & Performance Measures. We provide an end-to-end experimental design for assessing the effectiveness of an explanation method at detecting a model's reliance on spurious training signals. We define a spurious score that quantifies the strength of a model's dependence on a training signal. Using carefully crafted semi-synthetic datasets, we are able to train models for which the ground-truth expected explanation is known. Additionally, we develop three performance measures: i) a Known Spurious Signal Detection Measure (K-SSD), ii) a Cause-for-Concern Measure (CCM), and iii) a False Alarm Measure (FAM). These measures characterize different notions of reliability for the spurious signal detection task. We instantiate the proposed design on three classes of post hoc explanations (feature attribution, concept activation, and training point ranking), comprehensively assessing their performance across three datasets (two medical tasks and a dog species classification task) and different model architectures. When the spurious signal is known, we find that the feature attribution methods tested and the concept activation importance approach are able to detect visible spurious signals such as a text tag and a distinctive striped pattern. However, we find these approaches less effective for non-pronounced signals like background blur. The false alarm measure further indicates that feature attribution methods are susceptible to erroneously indicating dependence on spurious signals.
The cause-for-concern measure quantifies the similarity between explanations of 'normal' inputs derived from spurious and normal models when the spurious signal is unknown. Across the settings considered, we find that the methods tested are unable to conclusively detect model reliance on unknown spurious signals.

Blinded Study. The findings from our empirical assessment call into question the reliability of the methods tested; however, they might not correlate with utility in the hands of practitioners. To address this issue, we conduct a user study in which practitioners are randomly assigned to one of two groups: the first group is told explicitly about potential spurious signals, and the second is not. We consider three different kinds of explanation methods along with a control in which only model predictions are shown. We find that when participants are not provided with prior knowledge of the spurious signal, none of the methods tested are effective, in the hands of the participants, for detecting model reliance on spurious signals. More surprisingly, even when participants had prior knowledge of the spurious signal, we find evidence that only the concept activation approach, for visible spurious signals, is effective. These findings cast doubt on the reliability of current post hoc tools for spurious signal detection.

Guidance. On the basis of our analysis, we can provide the following guidance for using the approaches tested in this work to detect model reliance on spurious signals when the signal of interest is visible:

• Feature Attributions: to confirm that a model is relying on a 'visible' spurious signal, the practitioner needs to obtain attributions for inputs that contain the hypothesized spurious signal, and the attributions should be computed for the output class to which the spurious signal is aligned.
• Concept Activation: the spurious concept should be known ahead of time and tested against the output class to which the concept is aligned.
• Training Point Ranking: an input that contains the hypothesized spurious signal of interest should be used at test time when computing the training point ranking.

1.2 RELATED WORK. This paper belongs to a line of work on assessing the effectiveness of post hoc explanation methods (Alqaraawi et al., 2020; Adebayo et al., 2020; Chu et al., 2020; Hooker et al., 2019; Meng et al., 2018; Poursabzi-Sangdeh et al., 2018; Tomsett et al., 2020). Here we focus on directly relevant literature and defer an extensive discussion to Section A in the Appendix. This work departs from previous work in two ways: 1) we focus exclusively on whether these explanations can be used by a practitioner (or researcher) to detect spurious signals that are unknown to her at test time, and 2) we move beyond a sole focus on feature attribution to also test concept activation and training point ranking methods. Han et al. (2020) and Adebayo et al. (2020) find that certain kinds of feature attributions and training point ranking via influence functions are able to detect a model's reliance on spurious signals; in their settings, however, the spurious signal is known ahead of time. More recently, Zhou et al. (2021) conducted an extensive assessment of several feature attribution methods, also under the spurious correlation setting, for visible and non-visible artifacts, and found that this class of methods is not effective for non-visible artifacts. They also propose an experimental methodology for controlling model dependence on training set features, which allows them to quantify attribution effectiveness carefully and precisely. Overall, our findings align with theirs; however, we focus specifically on the setting where the spurious signal is not known ahead of time.
Further, we consider other kinds of post hoc explanation approaches in this work. Kim et al. (2021) assess several feature attribution methods using a synthetic evaluation framework where the ground-truth explanation is known for simple and complex reasoning tasks. They find that feature attribution methods often attribute irrelevant features even in simple settings, and that these methods show high variability across data modalities and tasks. Plumb et al. (2021) introduce a method that searches over a dataset to identify important associations a model might have learned, detects which of these associations are spurious, and proposes a data augmentation procedure to overcome the reliance. Nguyen et al. (2021) conduct a large-scale user study on lay and expert users to assess the effectiveness of feature attribution methods on image tasks; they find that feature attributions are no more effective than showing end users nearest-neighbor training points.

2 EXPERIMENTAL METHODOLOGY. In this section, we set up our experimental methodology. We discuss a quantitative analysis of post hoc explanations derived from models trained to rely on pre-defined spurious signals, and a blinded user study that measures the ability of users to use the tested post hoc explanation methods to detect model reliance on spurious signals. We discuss the types of spurious signals considered, define a spurious score that allows us to ascertain that a model indeed relies on a signal as the basis of its classification decision, and lay out performance measures that capture the reliability of the explanation methods. We conclude with an overview of the methods tested, the datasets, and the models.

2.1 EXPERIMENTAL DESIGN. Spurious Signals & Score. We consider a spurious signal to be input features that encode for the output but have 'no meaningful connection' to the 'data generating process' (DGP) of the task.
A hospital tag present in a hand radiograph is not clinically relevant to the age of the patient; if the tag encodes for the output, then it is a spurious signal. Domain expertise is ultimately required to adjudicate that a signal is spurious. We consider three kinds of spurious signals, two visible and one non-visible (see Figure 1): i) a localized tag; ii) a distinctive striped pattern; and iii) Gaussian blur applied to the image background. The signals are all spatially localized, so we can easily obtain ground-truth expected explanations. To induce reliance on spurious signals, we train models on 'contaminated' versions of the training set. Given input-label pairs $\{(x^i, y^i)\}_{i=1}^{n}$, where $x^i \in \mathcal{X}$ and $y^i \in \mathcal{Y}$, we can learn a classifier $f_\theta$ via empirical risk minimization (ERM), i.e., by minimizing a loss function $\ell$: $\arg\min_\theta \sum_{i=1}^{n} \ell(x^i, y^i; \theta)$. To contaminate the training set, we apply a spurious contamination function (SCF) to it, $\mathrm{SCF} : \mathcal{X} \times \mathcal{Y} \times \mathcal{C} \rightarrow \mathcal{S}$, where $\mathcal{C}$ is the set of spurious signals and $\mathcal{S}$ is the transformed set. An example of an SCF is a function that pastes a hospital tag onto the bone age radiographs of all pre-puberty individuals in the dataset. To derive models reliant on a spurious signal $c_i \in \mathcal{C}$, we simply learn a new classifier via ERM on the modified dataset, $\arg\min_\theta \sum_{i=1}^{n} \ell(\mathrm{SCF}(x^i, y^i, c_i))$, to obtain $\theta_{spu}$. Contemporary evidence suggests that this approach produces models that easily latch onto the spurious signal (Nagarajan et al., 2020). We focus on the classification setting and restrict spurious signals to encode for only a single class, the spurious-aligned class. We measure a model's reliance on the spurious signal via a score.

Definition 2.1 (Spurious Score).
Given a spurious signal, $c_i$, the index of its spurious aligned class, $j \in [k]$, and a model, $\theta_{spu}: \mathbb{R}^d \rightarrow \mathbb{R}^k$, where $\arg\max(\theta_{spu})$ indicates the classifier's predicted class, we define the spurious score as: $SC_{c_i,j}(\theta_{spu}) := \mathbb{P}_{\{x_i \,|\, \arg\max(\theta_{spu}(x_i)) \neq j\}}\left[\arg\max(\theta_{spu}(\mathrm{SCF}(x_i, y_i, c_i))) = j\right]$. Given an input that does not contain the spurious signal, and for which the model's prediction is not the spurious aligned class, the model's spurious score is the probability that the model assigns the input to the spurious aligned class once the spurious signal is added to the input. Model Conditions. We focus our analysis on two model conditions: i) a 'normal model', $f_{norm}$, for which we can rule out dependence on any of the spurious signals tested, across all classes, on the basis of the spurious score, and ii) a 'spurious model', $f_{spu}$, for which one of the spurious signals encodes for a particular output class. We empirically estimate the spurious score and term models that have a score above 0.85 for any of the pre-defined signals 'spurious models'. We term a model 'normal' if the spurious score is below 0.1 across all classes and the 3 pre-defined spurious signals. Spurious Signal Detection Reliability Measures. Equipped with spurious ($f_{spu}$) and normal ($f_{norm}$) models, we are now able to quantitatively assess the motivating question of this work. We do this by comparing explanations derived from spurious models, $f_{spu}$, to those derived from normal models, $f_{norm}$. We can partition the kinds of inputs used for deriving explanations into two: 1) spurious inputs ($x_{spu}$), inputs that include the spurious signal, and 2) normal inputs ($x_{norm}$), inputs that do not contain the spurious signal. Comparing the explanations produced by these two classes of inputs for normal and spurious models, we derive reliability performance measures.
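Definition 2.1 translates directly into an empirical estimate over a held-out set. The sketch below is a hypothetical implementation, not the authors' code: `predict` (a callable returning a class index) and `scf` are assumed interfaces.

```python
def spurious_score(predict, scf, inputs, labels, c, j):
    """Empirical spurious score SC_{c,j}: among inputs the model does NOT
    already assign to the spurious aligned class j, the fraction that is
    predicted as j once the spurious signal c is added via the SCF."""
    eligible = [(x, y) for x, y in zip(inputs, labels) if predict(x) != j]
    if not eligible:
        return 0.0
    flipped = sum(predict(scf(x, y, c)) == j for x, y in eligible)
    return flipped / len(eligible)

# toy check: a 'model' that predicts class 1 iff the signal marker is present
predict = lambda x: 1 if x.get("tag") else 0
scf = lambda x, y, c: {**x, "tag": c}
xs = [{"tag": False}] * 4
score = spurious_score(predict, scf, xs, [0] * 4, c=True, j=1)  # -> 1.0
```

A score of 1.0 here reproduces the intuition of the definition: every eligible input flips to the spurious aligned class as soon as the signal is pasted in.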
• Known Spurious Signal Detection Measure (K-SSD): measures the similarity of explanations derived from spurious models on spurious inputs to the ground-truth explanation. The ground-truth explanation is one that assigns relevance only to the spurious signal as the explanation of a spurious model's output on a spurious input. K-SSD measures method reliability when the spurious signal is known. Given a similarity metric, $S_d$, K-SSD corresponds to: $S_d(E_{f_{spu}}(x_{spu}), x_{gt})$, where $E_{f_{spu}}(x_{spu})$ are explanations derived from the spurious model for spurious inputs, and $x_{gt}$ is the ground-truth explanation. The similarity function, $S_d$, depends on the type of explanation considered; we make our choice of this function concrete shortly. • Cause-for-Concern Measure (CCM): measures the similarity of explanations derived from spurious models for normal inputs to explanations derived from normal models for normal inputs: $S_d(E_{f_{spu}}(x_{norm}), E_{f_{norm}}(x_{norm}))$. This measure simulates the setting where a practitioner does not know the spurious signal and can only inspect explanations for inputs without the signal. If this measure is high, then it is unlikely that such a method would alert a practitioner that a spurious model exhibits defects. • False Alarm Measure (FAM): measures the similarity of explanations derived from normal models for spurious inputs to explanations derived from spurious models for spurious inputs: $S_d(E_{f_{norm}}(x_{spu}), E_{f_{spu}}(x_{spu}))$. We also introduce a variant of this measure, FAM-GT, which measures the similarity of explanations derived from normal models for spurious inputs to the ground-truth explanation of a spurious model for those spurious inputs. If this measure is high, then the approach is more likely to signal to a practitioner that a model is relying on a spurious signal when the model is not. Having defined the metrics above, it remains to specify which similarity function to use.
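Once a similarity function $S_d$ is fixed, all three measures reduce to averaged pairwise similarities over matched sets of explanations. The helpers below are a schematic sketch under that assumption; the function names and the averaging choice are ours, not the paper's:

```python
def mean_sim(sd, exps_a, exps_b):
    """Average pairwise similarity S_d over two matched sets of explanations."""
    return sum(sd(a, b) for a, b in zip(exps_a, exps_b)) / len(exps_a)

def k_ssd(sd, e_spu_on_spu, e_gt):
    # high => the spurious model's explanations recover the known signal
    return mean_sim(sd, e_spu_on_spu, e_gt)

def ccm(sd, e_spu_on_norm, e_norm_on_norm):
    # high => spurious and normal models 'explain' normal inputs alike,
    # so the defect would go unnoticed without seeing the signal
    return mean_sim(sd, e_spu_on_norm, e_norm_on_norm)

def fam(sd, e_norm_on_spu, e_spu_on_spu):
    # high => a normal model's explanations on spurious inputs mimic a
    # spurious model's, inviting false alarms
    return mean_sim(sd, e_norm_on_spu, e_spu_on_spu)

# toy similarity: 1 if two explanations match exactly, else 0
sd = lambda a, b: float(a == b)
val = k_ssd(sd, ["tag", "tag", "bone"], ["tag", "tag", "tag"])  # 2/3
```

In the paper's setting $S_d$ is SSIM for attribution maps; the exact-match toy metric above is only for demonstration.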
Computing Metrics for Feature Attribution. For feature attribution methods, we follow prior literature (Adebayo et al., 2020) and use the Structural Similarity Index (SSIM). SSIM measures the visual similarity between two images, so we use this metric as the measure of similarity between two attribution maps. Concretely, given a set of normal inputs, we can obtain a corresponding spurious set of these inputs by applying the spurious contamination function, SCF, to them. We can then compute the K-SSD, CCM, and FAM metrics for these two sets of inputs using the SSIM metric. Computing Metrics for Concept Activation. A concept activation method provides a relevance score for a user-defined concept. Given a set of user-defined concepts, one can estimate and rank each concept by its relevance score. We compare two concept rankings using a Kolmogorov-Smirnov (KS) test, where the null hypothesis is that the two distributions are identical; we set the significance level to 0.05. Computing Metrics for Training Point Ranking. The training point ranking approach assigns a 'relevance' score to each training point based on the influence of that training point on the test loss of a particular sample. Recently, Hanawa et al. (2020) introduced the 'Identical Class Metric' (ICM), which is the fraction of the top-ranked training inputs, for a given test example, that belong to the same class as the true class of the test example in question. Here we also use the KS test to compare the ICM distributions for two different models, again at a significance level of 0.05. Taken together, these measures provide a comprehensive overview of an explanation method's performance for detecting spurious signals. | The authors present an analysis of post hoc explanation metrics that measure the reliance of a model on spurious signals.
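For the concept-ranking and ICM comparisons, the KS test at the 0.05 level can be applied as sketched below. `rankings_differ` is our hypothetical wrapper around SciPy's two-sample KS test, not code from the paper:

```python
import numpy as np
from scipy.stats import ks_2samp

def rankings_differ(scores_a, scores_b, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test on per-concept relevance scores
    (or per-example ICM fractions). Rejecting the null of identical
    distributions at level alpha flags a detectable difference between
    the two models being compared."""
    _, p_value = ks_2samp(scores_a, scores_b)
    return p_value < alpha

rng = np.random.default_rng(0)
base = rng.normal(0.0, 1.0, 500)
shifted = rng.normal(2.0, 1.0, 500)
flag = rankings_differ(base, shifted)  # clearly different distributions
same = rankings_differ(base, base)     # identical samples give p = 1.0
```

A high CCM in this setting would correspond to `rankings_differ` failing to reject when comparing spurious and normal models on normal inputs.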
The paper offers insights on three metrics, K-SSD, CCM, and FAM, by deploying them in the analysis of DNNs trained on medical image datasets. The authors also conduct a blinded study. | SP:aa123ac1ee777e2337675d073debe2e1ecd310ce
Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation | Ascertaining that a deep network does not rely on an unknown spurious signal as the basis for its output, prior to deployment, is crucial in high-stakes settings like healthcare. While many post hoc explanation methods have been shown to be useful for some end tasks, theoretical and empirical evidence has also accumulated showing that these methods may not be faithful or useful. This leaves little guidance for a practitioner or a researcher using these methods in their decision process. To address this gap, we investigate whether three classes of post hoc explanations (feature attribution, concept activation, and training point ranking) can alert a practitioner to a model's reliance on unknown spurious signals. We test them in two medical domains with plausible spurious signals. In a broad experimental sweep across datasets, models, and spurious signals, we find that the post hoc explanations tested can be used to identify a model's reliance on a visible spurious signal, provided the spurious signal is known ahead of time by the practitioner using the explanation method. Otherwise, a search over possible spurious signals and available data is required. This finding casts doubt on the utility of these approaches, in the hands of a practitioner, for detecting a model's reliance on spurious signals. It is hard to find a needle in a haystack; it is much harder if you haven't seen a needle before. —Judea Pearl 1 INTRODUCTION. A challenge that precludes the deployment of modern deep neural networks (DNNs) in high-stakes domains is their tendency to latch onto 'spurious signals', or shortcuts, in the training data (Geirhos et al., 2020). For example, Badgeley et al. (2019) showed that an Inception-V3 model trained to detect hip fracture relied on the scanner type for its classification decision.
Deep learning models readily base output predictions on object backgrounds (Xiao et al., 2020), image texture (Geirhos et al., 2018), and skin tone (Stock and Cisse, 2018). Post hoc model explanation methods, approaches that give insight into the associations a model has learned, are increasingly used to determine whether a model relies on spurious signals. Ribeiro et al. (2016) used LIME to show an Inception-V3 model's reliance on the snow background for identifying wolves. Such demonstrations and others (Lapuschkin et al., 2019; Rieger et al., 2020) point to post hoc explanation methods as effective tools for the spurious signal detection task. However, these results conflict with evidence indicating that practitioners (and researchers) struggle to use explanations to identify spurious signals (Chen et al., 2021; Chu et al., 2020; Alqaraawi et al., 2020; Adebayo et al., 2020; Poursabzi-Sangdeh et al., 2018). We seek to resolve this conflict by answering a simple but important question: Can post hoc explanations help detect a model's reliance on an unknown spurious training signal? Motivating Example. Consider a machine learning (ML) engineer whose job is to train DNN models to detect knee arthritis from radiographs. She (the engineer) is handed a trained ResNet-50 model, to be deployed in a hospital, that relies on a hospital tag in the radiographs to detect knee arthritis. She has no prior knowledge of the model's reliance on the spurious tags. In this work, our key concern is whether the ML engineer can use post hoc explanations to identify that the model is defective. 1.1 OUR CONTRIBUTIONS. We address the motivating question above in a two-pronged manner. First, we develop an actionable methodology based on the ability to carefully craft datasets that induce spurious correlation in trained models. Second, we back up this experimental design with a human subject study.
Taken together, the takeaway of the work is that post hoc explanations can be used to identify a model's reliance on a visible spurious signal, provided the signal is known ahead of time by the practitioner. While this conclusion may seem unsurprising, it has important implications for how post hoc explanation methods should be used effectively. Experimental Design & Performance Measures. We provide an end-to-end experimental design for assessing the effectiveness of an explanation method for detecting a model's reliance on spurious training signals. We define a spurious score that quantifies the strength of a model's dependence on a training signal. Using carefully crafted semi-synthetic datasets, we are able to train models for which the ground-truth expected explanation is known. Additionally, we develop 3 performance measures: i) a Known Spurious Signal Detection Measure (K-SSD), ii) a Cause-for-Concern Measure (CCM), and iii) a False Alarm Measure (FAM). These measures characterize different notions of reliability for the spurious signal detection task. We instantiate the proposed design on 3 classes of post hoc explanation types (feature attribution, concept activation, and training point ranking), comprehensively assessing the performance of these approaches across 3 datasets (2 medical tasks and a dog species classification task) and different model architectures. When the spurious signal is known, we find that the feature attribution methods tested and the concept activation importance approach are able to detect visible spurious signals like a text tag and distinctive striped patterns. However, we find these approaches less effective for non-pronounced signals like background blur. The false alarm measure further indicates that feature attribution methods are susceptible to erroneously indicating dependence on spurious signals.
The cause-for-concern measure quantifies the similarity between explanations of 'normal' inputs derived from spurious and normal models when the spurious signal is unknown. Across the settings considered, we find that the methods tested are unable to conclusively detect model reliance on unknown spurious signals. Blinded Study. The findings from our empirical assessment call into question the reliability of the methods tested; however, reliability might not correlate with utility in the hands of practitioners. To address this issue, we conduct a user study where practitioners are randomly assigned to one of two groups: the first group is told explicitly of potential spurious signals, and the second is not. We consider three different kinds of explanation methods along with a control where only model predictions are shown. We find that when participants are not provided with prior knowledge of the spurious signal, none of the methods tested are effective, in the hands of the participants, for detecting model reliance on spurious signals. More surprisingly, even when the participants had prior knowledge of the spurious signal, we find evidence that only the concept activation approach, for visible spurious signals, is effective. These findings cast doubt on the reliability of current post hoc tools for spurious signal detection. Guidance. On the basis of our analysis, we provide the following guidance for using the approaches tested in this work to detect model reliance on spurious signals when the signal of interest is visible: • Feature Attributions: to confirm that a model is relying on a 'visible' spurious signal, the practitioner needs to obtain attributions for inputs that contain the hypothesized spurious signal, and the attribution should be computed for the output class to which the spurious signal is aligned.
• Concept Activation: the spurious concept should be known ahead of time, and tested against the output class to which the concept is aligned. • Training Point Ranking: an input that contains the hypothesized spurious signal of interest should be used at test time in computing the training point ranking. 1.2 RELATED WORK. This paper belongs to a line of work on assessing the effectiveness of post hoc explanation methods (Alqaraawi et al., 2020; Adebayo et al., 2020; Chu et al., 2020; Hooker et al., 2019; Meng et al., 2018; Poursabzi-Sangdeh et al., 2018; Tomsett et al., 2020). Here we focus on directly relevant literature, and defer an extensive discussion to Section A in the Appendix. This work departs from previous work in two ways: 1) we focus exclusively on whether these explanations can be used by a practitioner (or researcher) to detect spurious signals that are unknown to her at test time, and 2) we move beyond a sole focus on feature attribution to also test concept activation and training point ranking methods. Han et al. (2020) and Adebayo et al. (2020) find that certain kinds of feature attributions, and training point ranking via influence functions, are able to detect a model's reliance on spurious signals. However, in their setting, the spurious signal is known ahead of time. More recently, Zhou et al. (2021) conduct an extensive assessment of several feature attribution methods, also under the spurious correlation setting, for visible and non-visible artifacts, and find that this class of methods is not effective for non-visible artifacts. Further, they propose an experimental methodology for controlling model dependence on training set features, which allows them to quantify attribution effectiveness carefully and precisely. Overall, our findings align with theirs; however, we focus specifically on the setting where the spurious signal is not known ahead of time.
| The authors present work that aims to test whether post hoc explanation methods can detect unknown spurious signals.
They perform an analysis on two different medical image datasets to which they introduce two types of synthetic spurious signals: pronounced spurious signals (stripes and hospital tags) and non-pronounced signals (blurs). When testing the ability of different post hoc explanation methods to identify spurious signals, they find that feature attribution and concept activation are able to identify pronounced, but not non-pronounced, spurious signals that are known. When faced with unknown signals, none of the methods tested appears able to identify them. They also provide results from an empirical study suggesting that none of the methods allowed practitioners to identify unknown spurious signals, and that only concept activation appears to enable practitioners to identify known spurious signals. The main contributions of this work are the empirical results and alerting researchers (and potentially practitioners) to very substantial problems with current post hoc explanation methods. The technical contributions are negligible. | SP:aa123ac1ee777e2337675d073debe2e1ecd310ce
Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation | Ascertaining that a deep network does not rely on an unknown spurious signal as basis for its output , prior to deployment , is crucial in high stakes settings like healthcare . While many post hoc explanation methods have been shown to be useful for some end tasks , theoretical and empirical evidence has also accumulated that show that these methods may not be faithful or useful . This leaves little guidance for a practitioner or a researcher using these methods in their decision process . To address this gap , we investigate whether three classes of post hoc explanations–feature attribution , concept activation , and training point ranking–can alert a practitioner to a model ’ s reliance on unknown spurious signals . We test them in two medical domains with plausible spurious signals . In a broad experimental sweep across datasets , models , and spurious signals , we find that the post hoc explanations tested can be used to identify a model ’ s reliance on a visible spurious signal provided the spurious signal is known ahead of time by the practitioner using the explanation method . Otherwise , a search over possible spurious signals and available data is required . This finding casts doubt on the utility of these approaches , in the hands of a practitioner , for detecting a model ’ s reliance on spurious signals . It is hard to find a needle in a haystack , it is much harder if you haven ’ t seen a needle before ( Pearl ) . —Judea Pearl 1 INTRODUCTION . A challenge that precludes the deployment of modern deep neural networks ( DNN ) in high stakes domains is their tendency to latch onto ‘ spurious signals ’ —shortcuts—in the training data ( Geirhos et al. , 2020 ) . For example , Badgeley et al . ( 2019 ) showed that an Inception-V3 model trained to detect hip fracture relied on the scanner type for its classification decision . 
Deep learning models easily base output predictions on object backgrounds ( Xiao et al. , 2020 ) , image texture ( Geirhos et al. , 2018 ) , and skin tone ( Stock and Cisse , 2018 ) . Post hoc model explanation methods—approaches that give insight into the associations that a model has learned—are increasingly used to determine whether a model relies on spurious signals . Ribeiro et al . ( 2016 ) used LIME to show an Inception-V3 model ’ s reliance on the snow background for identifying Wolves . Such demonstration and others ( Lapuschkin et al. , 2019 ; Rieger et al. , 2020 ) point to post hoc explanation methods as effective tools for the spurious signal detection task . However , these results conflict with evidence that indicates that practitioners ( and researchers ) struggle to use explanations to identify spurious signals ( Chen et al. , 2021 ; Chu et al. , 2020 ; Alqaraawi et al. , 2020 ; Adebayo et al. , 2020 ; Poursabzi-Sangdeh et al. , 2018 ) . We seek to resolve this conflict by answering the simple but important question : Can post hoc explanations help detect a model ’ s reliance on unknown spurious training signal ? Motivating Example . Consider a machine learning ( ML ) engineer whose job is to train DNN models to detect knee arthritis from radiographs . She—the engineer—is handed a trained ResNet-50 model , to be deployed in a hospital , that relies on a hospital tag in the radiographs to detect knee arthritis . She has no prior knowledge of the model ’ s reliance on the spurious tags . In this work , our key concern is whether the ML engineer can use post hoc explanations to identify that the model is defective . 1.1 OUR CONTRIBUTIONS . We address the motivating question above in a two-pronged manner . First , we develop an actionable methodology based on the ability to carefully craft datasets to induce spurious correlation in trained models . Second , we backup this experimental design with a human subject study . 
Taken together , the takeaway of the work is that : post hoc explanations can be used to identify a model ’ s reliance on a visible spurious signal , provided the signal is known ahead of time by the practitioner . While this conclusion may seem unsurprising , it has important implications for how post hoc explanation methods should be used effectively . Experimental Design & Performance Measures . We provide an end-to-end experimental design for assessing the effectiveness of an explanation method for detecting a model ’ s reliance on spurious training signals . We define a spurious score that helps quantify the strength of a model ’ s dependence on a training signal . Using carefully crafted semi-synthetic datasets , we are able to train models where the ground-truth expected explanation is known . Additionally , we develop 3 performance measures : i ) Known Spurious Signal Detection Measure ( K-SSD ) , ii ) Cause-for-Concern Measure ( CCM ) , and iii ) a False Alarm Measure ( FAM ) . These measures help characterize different notions of reliability for the spurious signal detection task . We instantiate the proposed design on 3 classes of post hoc explanation types—feature attribution , concept activation , and training point ranking—where we comprehensively assess the performance of these approaches across 3 datasets ( 2 medical tasks , and dog species classification task ) , and different model architectures . When the spurious signal is known , we find that the feature attribution methods tested , and the concept activation importance approach are able to detect visible spurious signals like a text tag and distinctive stripped patterns . However , we find these approaches less effective for non-pronounced signals like background blur . The false alarm measure further indicates that feature attribution methods are susceptible to erroneously indicating dependence on spurious signals . 
The cause-for-concern measure quantifies the similarity between explanations of 'normal' inputs derived from spurious and normal models when the spurious signal is unknown. Across the settings considered, we find that the methods tested are unable to conclusively detect model reliance on unknown spurious signals. Blinded Study. The findings from our empirical assessment question the reliability of the methods tested; however, this might not correlate with utility in the hands of practitioners. To address this issue, we conduct a user study where practitioners are randomly assigned to one of two groups: the first group is told explicitly of potential spurious signals, and the second is not. We consider three different kinds of explanation methods along with a control where only model predictions are shown. We find that when participants are not provided with prior knowledge of the spurious signal, none of the methods tested are effective, in the hands of the participants, for detecting model reliance on spurious signals. More surprisingly, even when the participants had prior knowledge of the spurious signal, we find evidence that only the concept activation approach, for visible spurious signals, is effective. These findings cast doubt on the reliability of current post hoc tools for spurious signal detection. Guidance. On the basis of our analysis, we can provide the following guidance for using the approaches tested in this work for detecting model reliance on spurious signals when the signal of interest is visible: • Feature Attributions: to confirm that a model is relying on a 'visible' spurious signal, the practitioner needs to obtain attributions for inputs that contain the hypothesized spurious signal, and the attribution should be computed for the output class to which the spurious signal is aligned.
• Concept Activation: the spurious concept should be known ahead of time, and tested against the output class to which the concept is aligned. • Training Point Ranking: an input that contains the hypothesized spurious signal of interest should be used at test-time in computing training point ranking. 1.2 RELATED WORK. This paper belongs to a line of work on assessing the effectiveness of post hoc explanation methods (Alqaraawi et al., 2020; Adebayo et al., 2020; Chu et al., 2020; Hooker et al., 2019; Meng et al., 2018; Poursabzi-Sangdeh et al., 2018; Tomsett et al., 2020). Here we focus on directly relevant literature, and defer an extensive discussion of the literature to Section A in the Appendix. This work departs from previous work in two ways: 1) we focus exclusively on whether these explanations can be used by a practitioner (or researcher) to detect spurious signals that are unknown to her at test-time, and 2) we move beyond a sole focus on the feature attribution setting to test concept activation and training point ranking methods. Han et al. (2020) and Adebayo et al. (2020) find that certain kinds of feature attributions and training point ranking via influence functions are able to detect a model's reliance on spurious signals. However, in their setting, the spurious signal is known ahead of time. More recently, Zhou et al. (2021) conduct an extensive assessment of several feature attribution methods, also under the spurious correlation setting, for visible and non-visible artifacts, and find that this class of methods is not effective for non-visible artifacts. Further, they also propose an experimental methodology for controlling model dependence on training set features, which allows them to quantify attribution effectiveness carefully and precisely. Overall, our findings align with theirs; however, we focus, specifically, on the setting where the spurious signal is not known ahead of time.
Further, we consider other kinds of post hoc explanation approaches in this work. Kim et al. (2021) conduct an assessment of several feature attribution methods using a synthetic evaluation framework where the ground-truth explanation is known for simple and complex reasoning tasks. They find that feature attribution methods often attribute irrelevant features even in simple settings, and that these methods show high variability across data modalities and tasks. Plumb et al. (2021) introduce a method that searches over a dataset to identify important associations that a model might have learned, detect which of these associations are spurious, and propose a data augmentation procedure to overcome the reliance. Nguyen et al. (2021) conduct a large-scale user study on lay and expert users to assess the effectiveness of feature attribution methods on image tasks. They find that feature attributions are not more effective than showing end users nearest neighbor training points. 2 EXPERIMENTAL METHODOLOGY In this section, we set up our experimental methodology. We discuss quantitative analysis of post hoc explanations derived from models trained to rely on pre-defined spurious signals, and a blinded user study that measures the ability of users to use the post hoc explanation methods tested to detect model reliance on spurious signals. We discuss the types of spurious signals considered, define a spurious score that allows us to ascertain that a model indeed relies on a signal as the basis of its classification decision, and lay out performance measures that capture the reliability of the explanation methods. We conclude with an overview of the methods tested, datasets, and models. 2.1 EXPERIMENTAL DESIGN. Spurious Signals & Score. We consider a spurious signal to be input features that encode for the output but have 'no meaningful connection' to the 'data generating process' (DGP) of the task.
A hospital tag present in a hand radiograph is not clinically relevant to the age of the patient. If the tag encodes for the output, then it is a spurious signal. Domain expertise is ultimately required for adjudicating that a signal is spurious. We consider 3 (2 visible and 1 non-visible) kinds of spurious signals (see Figure 1): i) a localized tag; ii) a distinctive striped pattern; and iii) Gaussian blur applied to the image background. The signals are all spatially localized, so we can easily obtain ground-truth expected explanations. To induce reliance on spurious signals, we train models on "contaminated" versions of the training set. Given input-label pairs $\{(x_i, y_i)\}_{i=1}^{n}$, where $x_i \in \mathcal{X}$ and $y_i \in \mathcal{Y}$, we can learn a classifier, $f_\theta$, via empirical risk minimization (ERM), which corresponds to minimizing a loss function $\ell$: $\arg\min_\theta \sum_{i=1}^{n} \ell(x_i, y_i; \theta)$. To contaminate the training set, we apply a spurious contamination function (SCF) to the training set; $\mathrm{SCF}: \mathcal{X} \times \mathcal{Y} \times \mathcal{C} \to \mathcal{S}$, where $\mathcal{C}$ is the spurious signal set and $\mathcal{S}$ is the transformed set. An example of an SCF is a function that pastes a hospital tag onto the bone age radiographs of all pre-puberty individuals in the dataset. To derive models reliant on a spurious signal, $c_i \in \mathcal{C}$, we simply learn a new classifier via ERM on the modified dataset as follows: $\arg\min_\theta \sum_{i=1}^{n} \ell(\mathrm{SCF}(x_i, y_i, c_i))$ to obtain $\theta_{\mathrm{spu}}$. Contemporary evidence suggests that this approach produces models that easily latch onto the spurious signal (Nagarajan et al., 2020). We focus on the classification setting, and restrict spurious signals to encode, only, for a single class—the spurious aligned class. We measure a model's reliance on the spurious signal via a score. Definition 2.1. (Spurious Score).
Given a spurious signal, $c_i$, the index of its spurious aligned class, $j \in [k]$, and a model, $\theta_{\mathrm{spu}}: \mathbb{R}^d \to \mathbb{R}^k$, where $\arg\max(\theta_{\mathrm{spu}})$ indicates the classifier's predicted class, we define the spurious score as: $\mathrm{SC}_{c_i, j}(\theta_{\mathrm{spu}}) := \mathbb{P}_{\{x_i \mid \theta_{\mathrm{spu}}(x_i) \neq j\}}\left[\arg\max(\theta_{\mathrm{spu}}(\mathrm{SCF}(x_i, y_i, c_i))) = j\right]$. Given an input that does not contain the spurious signal, and for which the model's prediction is not the spurious aligned class, the model's spurious score is the probability that the model assigns the input to the spurious aligned class if the spurious signal is added to the input. Model Conditions. We focus our analysis on two model conditions: i) a 'normal model', $f_{\mathrm{norm}}$, for which we can rule out dependence on any of the spurious signals tested across all classes on the basis of the spurious score, and ii) a 'spurious model', $f_{\mathrm{spu}}$, for which one of the spurious signals encodes for a particular output class. We empirically estimate the spurious score and term models that have a score above 0.85 for any of the pre-defined signals 'spurious models'. We term a model 'normal' if the spurious score is below 0.1 across all classes and the 3 pre-defined spurious signals. Spurious Signal Detection Reliability Measures. Equipped with spurious ($f_{\mathrm{spu}}$) and normal ($f_{\mathrm{norm}}$) models, we are now able to quantitatively assess the motivating question of this work. We do this by comparing explanations derived from spurious models, $f_{\mathrm{spu}}$, to those derived from normal models, $f_{\mathrm{norm}}$. We can partition the kinds of inputs used for deriving explanations into two: 1) spurious inputs ($x_{\mathrm{spu}}$)—inputs that include the spurious signal, and 2) normal inputs ($x_{\mathrm{norm}}$)—inputs that do not contain the spurious signal. Comparing the explanations produced by these two classes of inputs for normal and spurious models, we derive reliability performance measures.
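The spurious score above can be estimated empirically by counting prediction flips. The sketch below is illustrative only: `predict`, `scf`, and the dataset are hypothetical placeholders standing in for the model's predicted-class function, the spurious contamination function, and the held-out inputs.

```python
def spurious_score(predict, scf, inputs, labels, signal, aligned_class):
    """Empirically estimate the spurious score of Definition 2.1.

    predict       -- maps an input to a predicted class index (argmax of the model).
    scf           -- spurious contamination function pasting `signal` onto an input.
    aligned_class -- index j of the class the spurious signal encodes for.
    """
    hits, total = 0, 0
    for x, y in zip(inputs, labels):
        if predict(x) == aligned_class:
            continue  # condition on inputs not already predicted as class j
        total += 1
        if predict(scf(x, y, signal)) == aligned_class:
            hits += 1  # adding the signal flipped the prediction to class j
    return hits / max(total, 1)
```

A model would then be termed 'spurious' if this estimate exceeds 0.85 for some pre-defined signal, and 'normal' if it stays below 0.1 for all of them.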
• Known Spurious Signal Detection Measure (K-SSD) - measures the similarity of explanations derived from spurious models on spurious inputs to the ground truth explanation. The ground truth explanation is one that only assigns relevance to the spurious signal as the explanation of the output of a spurious model on a spurious input. K-SSD measures method reliability when the spurious signal is known. Given a similarity metric, $S_d$, K-SSD corresponds to: $S_d(E_{f_{\mathrm{spu}}}(x_{\mathrm{spu}}), x_{\mathrm{gt}})$, where $E_{f_{\mathrm{spu}}}(x_{\mathrm{spu}})$ are explanations derived from the spurious model for spurious inputs, and $x_{\mathrm{gt}}$ is the ground truth explanation. The similarity function, $S_d$, depends on the type of explanation considered—we will make our choice of this function concrete shortly. • Cause-for-Concern Measure (CCM) - measures the similarity of explanations derived from spurious models for normal inputs to explanations derived from normal models for normal inputs: $S_d(E_{f_{\mathrm{spu}}}(x_{\mathrm{norm}}), E_{f_{\mathrm{norm}}}(x_{\mathrm{norm}}))$. This measure simulates the setting where a practitioner does not know the spurious signal, and can only inspect explanations for inputs without the signal. If this measure is high, then it is unlikely that such a method would alert a practitioner that a spurious model exhibits defects. • False Alarm Measure (FAM) - measures the similarity of explanations derived from normal models for spurious inputs to explanations derived from spurious models for spurious inputs: $S_d(E_{f_{\mathrm{norm}}}(x_{\mathrm{spu}}), E_{f_{\mathrm{spu}}}(x_{\mathrm{spu}}))$. We also introduce a variant of this measure, FAM-GT, which measures the similarity of explanations derived from normal models for spurious inputs to the ground truth explanation of a spurious model for that spurious input. If this measure is high, then that approach is more likely to signal to a practitioner that a model is relying on a spurious signal when it is not. Having defined the metrics above, it remains to decide which similarity function to use.
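Given any choice of similarity function $S_d$, the four measures above reduce to a handful of comparisons. A minimal sketch, with `sim` standing in for $S_d$ and `e_spu`/`e_norm` for the explanation functions of the spurious and normal models (all placeholders, not part of any named library):

```python
def reliability_measures(sim, e_spu, e_norm, x_spu, x_norm, e_gt):
    """Compute K-SSD, CCM, FAM, and FAM-GT for one spurious/normal input pair.

    sim            -- similarity function S_d between two explanations.
    e_spu / e_norm -- explanation functions of the spurious / normal model.
    x_spu / x_norm -- an input with / without the spurious signal.
    e_gt           -- ground-truth explanation for x_spu (signal region only).
    """
    k_ssd  = sim(e_spu(x_spu), e_gt)            # known-signal detection
    ccm    = sim(e_spu(x_norm), e_norm(x_norm)) # cause for concern
    fam    = sim(e_norm(x_spu), e_spu(x_spu))   # false alarm
    fam_gt = sim(e_norm(x_spu), e_gt)           # false alarm vs. ground truth
    return k_ssd, ccm, fam, fam_gt
```

In practice these would be averaged over a set of inputs; the next paragraphs instantiate `sim` per explanation type (SSIM for attributions, KS tests for rankings).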
Computing Metrics for Feature Attribution. For feature attribution methods, we follow prior literature (Adebayo et al., 2020) and use the Structural Similarity Index (SSIM). SSIM measures the visual similarity between two images, so we use this metric as the measure of similarity between two attribution maps. Concretely, given a set of normal inputs, we can obtain a corresponding spurious set of these inputs by applying the spurious contamination function, SCF, to these inputs. Consequently, we can then compute the K-SSD, CCM, and FAM metrics given these two sets of inputs using the SSIM metric. Computing Metrics for Concept Activation. A concept activation method provides a relevance score for a user-defined concept. Given a set of user-defined concepts, one can estimate and rank each concept by its relevance score. We measure the comparison between two concept rankings using a Kolmogorov–Smirnov (KS) test comparing two distributions, where the null hypothesis is that the two distributions are identical; we set the significance level to be 0.05. Computing Metrics for Training Point Ranking. The training point ranking approach assigns a 'relevance' score to each training point based on the influence of each training point on the test loss of a particular sample. Recently, Hanawa et al. (2020) introduced the 'Identical Class Metric' (ICM), which is the fraction of the top training inputs, for a given test example, that belong to the same class as the true class of the test example in question. Here we also use the KS test to compare the ICM distributions for two different models and set the significance level to be 0.05. Taken together, these measures provide a comprehensive overview of an explanation method's performance for detecting spurious signals. | The paper aims to validate whether post hoc explanation methods are effective for detecting unknown spurious correlations.
The authors design 3 kinds of spurious signal detection reliability measures: Known Spurious Signal Detection Measure (K-SSD), Cause-for-Concern Measure (CCM), False Alarm Measure (FAM). Based on the 3 measurements, the authors conduct extensive experiments to validate the reliability of 3 kinds of post hoc explanation methods for detecting spurious correlations. | SP:aa123ac1ee777e2337675d073debe2e1ecd310ce |
LEAN: graph-based pruning for convolutional neural networks by extracting longest chains | 1 INTRODUCTION. In recent years, convolutional neural networks (CNNs) have become state-of-the-art for many image-to-image translation tasks (LeCun et al., 2015), including image segmentation (Ronneberger et al., 2015) and denoising (Tian et al., 2020). They are increasingly used as a subcomponent of a larger system, e.g., visual odometry (Yang et al., 2020), as well as in energy-limited and real-time applications (Yang et al., 2017). In these situations, the applicability of high-accuracy CNNs may be limited by large computational resource requirements. Small networks may be more applicable in such settings, but may lack accuracy. Neural network pruning (Mozer & Smolensky, 1989; Karnin, 1990) has recently gained popularity as a technique to reduce the size of neural networks (Blalock et al., 2020). Neural networks consist of learnable parameters, including the scalar components of the convolutional filters. When pruning, the neural network is reduced in size by removing such scalar parameters while trying to maintain high accuracy. We distinguish between individual parameter pruning (Han et al., 2016), where each parameter of an operation is ranked and pruned separately, and structured pruning (Li et al., 2017; Luo et al., 2017), where entire convolutional filters are ranked and pruned. As convolution operators can only be removed once all scalar parameters of the filter kernel have been pruned, structured pruning is favored over individual pruning when aiming to improve computational performance (Park et al., 2017). In the remainder of this paper, we focus on structured pruning. Although structured pruning methods take into account the division of a neural network into operations, they do not take into account the fact that the output of the network is formed by a sequence of such operations.
This has two drawbacks. First, since the relative scaling of individual convolutions may vary without changing the output of the whole chain, pruning methods that prune individual operators could potentially prune a suboptimal set of operators from the chain. Second, to significantly reduce evaluation time, a severe pruning regime must be considered, i.e., a pruning ratio (percentage of remaining parameters after pruning) of 1–10%. In this regime, pruning can result in network disjointness, i.e., the network contains sections that are not part of some path from the input to the network output. Some existing pruning methods take into account network structure to a limited degree (Salehinejad & Valaee, 2021). In practice, however, these methods do not contain safeguards to avoid network disjointness. In this paper, we present a novel pruning method called LongEst-chAiN (LEAN) pruning, which, as opposed to conventional pruning approaches, uses graph-based algorithms to keep or prune chains of operations collectively. In LEAN, a CNN is represented as a graph that contains all the CNN operators, with the operator norm of each operator as edge weights. We argue that strong subnetworks in a CNN can be discovered by extracting the longest (multiplicative) paths, using computationally efficient graph algorithms. The main focus of this work is to show how LEAN pruning can significantly improve the computation speed of CNNs for real-world image-to-image applications, and obtain high accuracy in the severe pruning regime that is difficult to achieve with existing approaches. This paper is structured as follows. In Section 2, we explore existing pruning approaches. In Section 3, we outline the preliminaries on CNNs, pruning filters, and the operator norm. Next, in Section 4, we introduce LEAN pruning and describe how to calculate the operator norm of various convolutional operators.
We discuss the setup of our experiments in Section 5. In Section 6, we demonstrate the results of the proposed pruning approach on a series of image segmentation problems and report practically realized wall time speedup. Our final conclusions are presented in Section 7. 2 RELATED WORK. Reducing the size of neural networks by removing parameters has been studied for decades (Mozer & Smolensky, 1989; Karnin, 1990; Hassibi et al., 1993). Several works take into account the structure of the network to some degree. In Lin et al. (2017), filters are pruned at runtime based on the feature maps. Alternatively, one can prune entire channels (He et al., 2017), or decide which channels to keep so that the feature maps approximate the output of the unpruned network over several training examples (Luo et al., 2017). In recent work, a graph is built for each convolutional layer, and filters are pruned based on the properties of this graph (Wang et al., 2021). In Salehinejad & Valaee (2021), a neural network is represented as a graph and interdependencies are determined using the Ising model. Many pruning approaches are aimed at reducing neural network size with little accuracy drop (Dong & Yang, 2019; He et al., 2019; Molchanov et al., 2019; Zhao et al., 2019), as opposed to sacrificing accuracy in favor of computation speed. These approaches rarely exceed a pruning ratio of 12–50% (Blalock et al., 2020; Luo et al., 2017; Lin et al., 2019). When a high pruning ratio is used, e.g., a range of 5–10% (Lin et al., 2017; Liu et al., 2019), a significant drop in accuracy is observed. Pruning ratios of 2–10% can be achieved with an accuracy drop of 1–3% by learning-rate rewinding (Renda et al., 2020). However, the reduction in FLOPs was less substantial (1.5–4.8 times). In Yeom et al.
(2021), severe pruning ratios of up to 1% have been considered, but the approach achieved limited improvements in terms of FLOPs reduction compared with existing pruning methods. Criteria for deciding which elements of a neural network to prune have been extensively studied. A parameter's importance is commonly scored using its absolute value. Whether this is a reasonable metric has been questioned (LeCun et al., 1990). Singular values (which determine certain operator norms) have been used to compress network layers (Denton et al., 2014) and to prune feed-forward networks (Abid et al., 2002). Efficient methods for the computation of singular values have been developed for convolutional layers (Sedghi et al., 2019). Furthermore, a definition of ReLU singular values was proposed recently with an accompanying upper bound (Dittmer et al., 2019). 3 PRELIMINARIES. 3.1 CNNS FOR SEGMENTATION. A common image-to-image translation task is semantic image segmentation. The goal of semantic image segmentation is to assign a class label to each pixel in an image. A segmentation CNN computes a function $f: \mathbb{R}^{m \times n} \to [0, 1]^{k \times m \times n}$, which specifies the probability of each pixel being in one of the $k$ classes for an $m \times n$ image. CNNs are composed of layers of operations which pass images from one layer to the next. Every operation, e.g., convolution, has an input $x$ and output $y$. The input and output consist of one or more images, called channels. For clarity, we distinguish throughout this paper between an operation, which may have several input and output channels, and an operator, which computes the relation between a single input channel and a single output channel. For instance, in a convolutional operation with input channels $x_1, \ldots, x_N$, an output channel $y_j$ is computed by convolving the input images with learned filters: $y_j = \left( \sum_{i=1}^{N} h_{ij} * x_i \right) + b_j$.
(1) Here $h_{ij}$ is the filter related to the convolution operator that acts between channel $x_i$ and $y_j$, and $b_j$ is an additive bias parameter. In a similar way, every CNN operation produces an output which consists of a number of channels. The exact arrangement of operations, and the connections between them, depends on the architecture. A common operator to downsample images is the strided convolution. The stride defines the step size of the convolution kernel. A convolution with stride $s$ defines a map $h: \mathbb{R}^{m \times n} \to \mathbb{R}^{\frac{m}{s} \times \frac{n}{s}}$. Upsampling images can be done by transposed convolutions. Transposed convolutions intersperse the input image pixels with zeroes so that the output image has larger dimensions. In addition to convolution operators, other common operators such as pooling and batch normalization are often used. A batch normalization operator (Ioffe & Szegedy, 2015) normalizes the input images for convolutional layers. A batch normalization operator scales and shifts an image $x_i$ by $y_i = \gamma \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}} + \beta$. (2) Here, $\gamma$ and $\beta$ are scaling and bias parameters which are learned during training, $\epsilon$ is a small constant for numerical stability, and $\mu_B$ and $\sigma_B^2$ are the running mean and variance of the mini-batch, i.e., the set of images used for the current training step. For an overview of CNN components we refer to Goodfellow et al. (2016).
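The batch normalization operator of Eq. (2) can be written directly as a short NumPy function. This is a minimal sketch using a small stabilizing constant (the exact constant value is an assumption, not taken from the paper):

```python
import numpy as np

def batch_norm(x, gamma, beta, mu_b, var_b, eps=1e-5):
    """Batch normalization, Eq. (2): normalize x with the mini-batch mean
    mu_b and variance var_b, then scale by the learned gamma and shift by
    the learned beta. eps guards against division by zero."""
    return gamma * (x - mu_b) / np.sqrt(var_b + eps) + beta
```

For a mini-batch, `mu_b` and `var_b` would be the per-channel statistics of the batch; at inference time, running estimates are substituted.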
Generic pruning algorithm: All pruning methods used in this work make use of the fine-tuning pruning algorithm outlined in Algorithm 1. The selection criteria for determining which filters to keep at each step define the different pruning methods. The pruning ratio pRatio is the fraction of remaining convolutions we ultimately want to keep, and stepRatio is the fraction of convolutions that is pruned at each step.

Algorithm 1 Fine-tuning pruning algorithm
1: procedure PRUNE(model, pRatio, nSteps, epochs)
2:   stepRatio ← e^(ln(pRatio)/nSteps)
3:   for step ← 0 to nSteps do
4:     pruneParams ← selectPrunePars(model, stepRatio)
5:     model ← removePars(model, pruneParams)
6:     for k ← 0 to epochs do
7:       model ← trainOneEpoch(model, trainData)
8:   return model

Here, we focus on structured pruning. In structured pruning, a common approach to decide which filters to remove is structured magnitude pruning. When using structured magnitude pruning, a convolution filter $h \in \mathbb{R}^{k \times k}$ is scored by its $L_1$ vector norm $\|h\|_1$. Filters with norms below a threshold are pruned. The threshold is determined by sorting a group of filters, and removing a percentage based on the pruning ratio. Thresholds can be set per layer or globally. Setting thresholds globally can give higher accuracy than setting thresholds per layer (Blalock et al., 2020). | This paper proposes a structured pruning approach by building a graph according to the structure and weights of the CNN to be pruned. The operators with the longest chain in the graph are preserved while others are pruned. The proposed approach is evaluated with several network structures (MS-D, U-Net, ResNet) and datasets. | SP:91b4b789edbce2bfadde4f936a3843c2c6a1bedf |
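Algorithm 1 from the paper text above can be sketched as a framework-agnostic Python loop. The callables `select_prune_pars`, `remove_pars`, and `train_one_epoch` are hypothetical placeholders for the method-specific selection criterion, the parameter-removal step, and one epoch of retraining; only the per-step ratio computation follows the algorithm's stated formula:

```python
import math

def prune(model, p_ratio, n_steps, epochs,
          select_prune_pars, remove_pars, train_one_epoch, train_data):
    """Sketch of Algorithm 1 (fine-tuning pruning). The per-step ratio is
    chosen so that n_steps multiplicative steps reach the overall target:
    step_ratio = exp(ln(p_ratio) / n_steps), i.e. step_ratio**n_steps == p_ratio."""
    step_ratio = math.exp(math.log(p_ratio) / n_steps)
    for _ in range(n_steps):
        prune_params = select_prune_pars(model, step_ratio)  # method-specific
        model = remove_pars(model, prune_params)
        for _ in range(epochs):  # retrain to recover accuracy after each step
            model = train_one_epoch(model, train_data)
    return model
```

The multiplicative schedule is the notable design choice: pruning the same fraction at every step compounds to the final pruning ratio, instead of removing a fixed count per step.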
LEAN: graph-based pruning for convolutional neural networks by extracting longest chains | 1 INTRODUCTION. In recent years, convolutional neural networks (CNNs) have become state-of-the-art for many image-to-image translation tasks (LeCun et al., 2015), including image segmentation (Ronneberger et al., 2015) and denoising (Tian et al., 2020). They are increasingly used as a subcomponent of a larger system, e.g., visual odometry (Yang et al., 2020), as well as in energy-limited and real-time applications (Yang et al., 2017). In these situations, the applicability of high-accuracy CNNs may be limited by large computational resource requirements. Small networks may be more applicable in such settings, but may lack accuracy. Neural network pruning (Mozer & Smolensky, 1989; Karnin, 1990) has recently gained popularity as a technique to reduce the size of neural networks (Blalock et al., 2020). Neural networks consist of learnable parameters, including the scalar components of the convolutional filters. When pruning, the neural network is reduced in size by removing such scalar parameters while trying to maintain high accuracy. We distinguish between individual parameter pruning (Han et al., 2016), where each parameter of an operation is ranked and pruned separately, and structured pruning (Li et al., 2017; Luo et al., 2017), where entire convolutional filters are ranked and pruned. As convolution operators can only be removed once all scalar parameters of the filter kernel have been pruned, structured pruning is favored over individual pruning when aiming to improve computational performance (Park et al., 2017). In the remainder of this paper, we focus on structured pruning. Although structured pruning methods take into account the division of a neural network into operations, they do not take into account the fact that the output of the network is formed by a sequence of such operations.
This has two drawbacks. First, since the relative scaling of individual convolutions may vary without changing the output of the whole chain, pruning methods that prune individual operators could potentially prune a suboptimal set of operators from the chain. Second, to significantly reduce evaluation time, a severe pruning regime must be considered, i.e., a pruning ratio (percentage of remaining parameters after pruning) of 1–10%. In this regime, pruning can result in network disjointness, i.e., the network contains sections that are not part of some path from the input to the network output. Some existing pruning methods take into account network structure to a limited degree (Salehinejad & Valaee, 2021). In practice, however, these methods do not contain safeguards to avoid network disjointness. In this paper, we present a novel pruning method called LongEst-chAiN (LEAN) pruning, which, as opposed to conventional pruning approaches, uses graph-based algorithms to keep or prune chains of operations collectively. In LEAN, a CNN is represented as a graph that contains all the CNN operators, with the operator norm of each operator as edge weights. We argue that strong subnetworks in a CNN can be discovered by extracting the longest (multiplicative) paths, using computationally efficient graph algorithms. The main focus of this work is to show how LEAN pruning can significantly improve the computation speed of CNNs for real-world image-to-image applications, and obtain high accuracy in the severe pruning regime that is difficult to achieve with existing approaches. This paper is structured as follows. In Section 2, we explore existing pruning approaches. In Section 3, we outline the preliminaries on CNNs, pruning filters, and the operator norm. Next, in Section 4, we introduce LEAN pruning and describe how to calculate the operator norm of various convolutional operators.
We discuss the setup of our experiments in Section 5. In Section 6, we demonstrate the results of the proposed pruning approach on a series of image segmentation problems and report practically realized wall time speedup. Our final conclusions are presented in Section 7. 2 RELATED WORK. Reducing the size of neural networks by removing parameters has been studied for decades (Mozer & Smolensky, 1989; Karnin, 1990; Hassibi et al., 1993). Several works take into account the structure of the network to some degree. In Lin et al. (2017), filters are pruned at runtime based on the feature maps. Alternatively, one can prune entire channels (He et al., 2017), or decide which channels to keep so that the feature maps approximate the output of the unpruned network over several training examples (Luo et al., 2017). In recent work, a graph is built for each convolutional layer, and filters are pruned based on the properties of this graph (Wang et al., 2021). In Salehinejad & Valaee (2021), a neural network is represented as a graph and interdependencies are determined using the Ising model. Many pruning approaches are aimed at reducing neural network size with little accuracy drop (Dong & Yang, 2019; He et al., 2019; Molchanov et al., 2019; Zhao et al., 2019), as opposed to sacrificing accuracy in favor of computation speed. These approaches rarely exceed a pruning ratio of 12–50% (Blalock et al., 2020; Luo et al., 2017; Lin et al., 2019). When a high pruning ratio is used, e.g., a range of 5–10% (Lin et al., 2017; Liu et al., 2019), a significant drop in accuracy is observed. Pruning ratios of 2–10% can be achieved with an accuracy drop of 1–3% by learning-rate rewinding (Renda et al., 2020). However, the reduction in FLOPs was less substantial (1.5–4.8 times). In Yeom et al.
(2021), severe pruning ratios of up to 1% have been considered, but the approach achieved limited improvements in terms of FLOPs reduction compared with existing pruning methods. Criteria for deciding which elements of a neural network to prune have been extensively studied. A parameter's importance is commonly scored using its absolute value. Whether this is a reasonable metric has been questioned (LeCun et al., 1990). Singular values (which determine certain operator norms) have been used to compress network layers (Denton et al., 2014) and to prune feed-forward networks (Abid et al., 2002). Efficient methods for the computation of singular values have been developed for convolutional layers (Sedghi et al., 2019). Furthermore, a definition of ReLU singular values was proposed recently with an accompanying upper bound (Dittmer et al., 2019). 3 PRELIMINARIES. 3.1 CNNS FOR SEGMENTATION. A common image-to-image translation task is semantic image segmentation. The goal of semantic image segmentation is to assign a class label to each pixel in an image. A segmentation CNN computes a function $f: \mathbb{R}^{m \times n} \to [0, 1]^{k \times m \times n}$, which specifies the probability of each pixel being in one of the $k$ classes for an $m \times n$ image. CNNs are composed of layers of operations which pass images from one layer to the next. Every operation, e.g., convolution, has an input $x$ and output $y$. The input and output consist of one or more images, called channels. For clarity, we distinguish throughout this paper between an operation, which may have several input and output channels, and an operator, which computes the relation between a single input channel and a single output channel. For instance, in a convolutional operation with input channels $x_1, \ldots, x_N$, an output channel $y_j$ is computed by convolving the input images with learned filters: $y_j = \left( \sum_{i=1}^{N} h_{ij} * x_i \right) + b_j$.
( 1 ) Here h_{ij} is the filter related to the convolution operator that acts between channel x_i and y_j , and b_j is an additive bias parameter . In a similar way , every CNN operation produces an output which consists of a number of channels . The exact arrangement of operations , and connections between them , depends on the architecture . A common operator to downsample images is the strided convolution . The stride defines the step size of the convolution kernel . A convolution with stride s defines a map h : R^{m×n} → R^{(m/s)×(n/s)} . Upsampling images can be done by transposed convolutions . Transposed convolutions intersperse the input image pixels with zeroes so that the output image has larger dimensions . In addition to convolution operators , other common operators such as pooling and batch normalization are often used . A batch normalization operator ( Ioffe & Szegedy , 2015 ) normalizes the input images for convolutional layers . A batch normalization operator scales and shifts an image x_i by y_i = γ ( x_i − µ_B ) / √( σ_B^2 + ε ) + β . ( 2 ) Here , γ and β are scaling and bias parameters which are learned during training , µ_B and σ_B^2 are the running mean and variance of the mini-batch , i.e. , the set of images used for the current training step , and ε is a small constant for numerical stability . For an overview of CNN components we refer to Goodfellow et al . ( 2016 ) . 3.2 PRUNING CONVOLUTION FILTERS . Pruning techniques aim to remove extraneous parameters from a neural network . Several schemes exist to prune parameters from a network , but retraining the network after pruning is critical to avoid significantly impacting accuracy ( Han et al. , 2015 ) . Pruning a network once after training is called one-shot pruning . Alternatively , a network can be fine-tuned , where the network is repeatedly pruned by a certain percentage and is retrained for a few epochs after every pruning step . Fine-tuning typically gives better results than one-shot pruning ( Renda et al. , 2020 ) .
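The multi-channel convolution in ( 1 ) can be made concrete with a short sketch. The following NumPy code is a minimal illustration; the function names and the direct "valid" convolution loop are our own choices, not part of the paper:

```python
import numpy as np

def conv2d_valid(x, h):
    """Direct 2-D 'valid' convolution of an image x with a k x k filter h."""
    k = h.shape[0]
    out = np.zeros((x.shape[0] - k + 1, x.shape[1] - k + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(x[r:r + k, c:c + k] * h)
    return out

def conv_operation(xs, filters, biases):
    """Equation (1): y_j = (sum_i h_ij * x_i) + b_j.

    xs      -- list of N input channels (m x n arrays)
    filters -- filters[i][j] is the filter h_ij acting from x_i to y_j
    biases  -- bias b_j for each output channel
    Returns the list of output channels.
    """
    return [
        sum(conv2d_valid(xs[i], filters[i][j]) for i in range(len(xs))) + biases[j]
        for j in range(len(biases))
    ]
```

Strictly speaking the inner loop computes cross-correlation, which is the convention most deep-learning libraries use for "convolution"; each (input channel, output channel) pair corresponds to one operator in the terminology above.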
Generic pruning algorithm : All pruning methods used in this work make use of the fine-tuning pruning algorithm outlined in Algorithm 1 . The selection criteria for determining which filters to keep for each step define the different pruning methods . The pruning ratio pRatio is the fraction of remaining convolutions we ultimately want to keep , and stepRatio is the fraction of convolutions that is kept at each step , so that stepRatio^nSteps = pRatio .
Algorithm 1 Fine-tuning pruning algorithm
1 : procedure PRUNE ( model , pRatio , nSteps , epochs )
2 : stepRatio ← e^{ ln ( pRatio ) / nSteps }
3 : for step ← 0 to nSteps do
4 : pruneParams ← selectPrunePars ( model , stepRatio )
5 : model ← removePars ( model , pruneParams )
6 : for k ← 0 to epochs do
7 : model ← trainOneEpoch ( model , trainData )
8 : return model
Here , we focus on structured pruning . In structured pruning , a common approach to decide which filters to remove is structured magnitude pruning . When using structured magnitude pruning , a convolution filter h ∈ R^{k×k} is scored by its L1 vector norm ‖ h ‖_1 . Filters with norms below a threshold are pruned . The threshold is determined by sorting a group of filters , and removing a percentage based on the pruning ratio . Thresholds can be set per layer or globally . Setting thresholds globally can give higher accuracy than setting thresholds per layer ( Blalock et al. , 2020 ) . | The authors propose a structured pruning method which turns structure pruning into a graph pruning problem. The authors represent each input, output pair as an operator node, with edges measured by operator norms between operators. The authors then propose an iterative structured pruning algorithm which prunes layers based on the longest path in the graph. They then evaluate their method on three image segmentation tasks. | SP:91b4b789edbce2bfadde4f936a3843c2c6a1bedf |
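The fine-tuning pruning loop of Algorithm 1 above can be rendered as a short Python sketch. The toy model below is just a dict of named filters, the selection criterion is structured magnitude pruning, and the training step is left to the caller; all names are illustrative, not from the paper's code:

```python
import numpy as np

def prune(model, p_ratio, n_steps, epochs, train_one_epoch):
    """Algorithm 1: iteratively prune and retrain.

    model: dict mapping filter names to k x k arrays (a toy stand-in).
    p_ratio: overall fraction of filters to keep; the per-step keep
             fraction is e^(ln(p_ratio)/n_steps) = p_ratio**(1/n_steps).
    """
    step_ratio = np.exp(np.log(p_ratio) / n_steps)
    for _ in range(n_steps):
        # Structured magnitude selection: keep the step_ratio fraction of
        # filters with the largest L1 norms (a single global threshold).
        scores = {name: np.abs(h).sum() for name, h in model.items()}
        n_keep = max(1, round(step_ratio * len(model)))
        keep = sorted(scores, key=scores.get, reverse=True)[:n_keep]
        model = {name: model[name] for name in keep}
        # Retrain for a few epochs after every pruning step.
        for _ in range(epochs):
            model = train_one_epoch(model)
    return model
```

With p_ratio = 0.1 and n_steps = 2, each step keeps roughly sqrt(0.1) ≈ 32 % of the remaining filters, so two steps land at the target 10 %.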
LEAN: graph-based pruning for convolutional neural networks by extracting longest chains | 1 INTRODUCTION . In recent years , convolutional neural networks ( CNNs ) have become state-of-the-art for many image-to-image translation tasks ( LeCun et al. , 2015 ) , including image segmentation ( Ronneberger et al. , 2015 ) , and denoising ( Tian et al. , 2020 ) . They are increasingly used as a subcomponent of a larger system , e.g. , visual odometry ( Yang et al. , 2020 ) , as well as in energy-limited and real-time applications ( Yang et al. , 2017 ) . In these situations , the applicability of high-accuracy CNNs may be limited by large computational resource requirements . Small networks may be more applicable in such settings , but may lack accuracy . Neural network pruning ( Mozer & Smolensky , 1989 ; Karnin , 1990 ) has recently gained popularity as a technique to reduce the size of neural networks ( Blalock et al. , 2020 ) . Neural networks consist of learnable parameters , including the scalar components of the convolutional filters . When pruning , the neural network is reduced in size by removing such scalar parameters while trying to maintain high accuracy . We distinguish between individual parameter pruning ( Han et al. , 2016 ) , where each parameter of an operation is ranked and pruned separately , and structured pruning ( Li et al. , 2017 ; Luo et al. , 2017 ) , where entire convolutional filters are ranked and pruned . As convolution operators can only be removed once all scalar parameters of the filter kernel have been pruned , structured pruning is favored over individual pruning when aiming to improve computational performance ( Park et al. , 2017 ) . In the remainder of this paper , we focus on structured pruning . Although structured pruning methods take into account the division of a neural network into operations , they do not take into account the fact that the output of the network is formed by a sequence of such operations .
This has two drawbacks . First , since the relative scaling of individual convolutions may vary without changing the output of the whole chain , pruning methods that prune individual operators could potentially prune a suboptimal set of operators from the chain . Second , to significantly reduce evaluation time , a severe pruning regime must be considered , i.e. , a pruning ratio ( percentage of remaining parameters after pruning ) of 1–10 % . In this regime , pruning can result in network disjointness , i.e. , the network contains sections that are not part of some path from the input to the network output . Some existing pruning methods take into account network structure to a limited degree ( Salehinejad & Valaee , 2021 ) . In practice , however , these methods do not contain safeguards to avoid network disjointness . In this paper , we present a novel pruning method called LongEst-chAiN ( LEAN ) pruning , which as opposed to conventional pruning approaches uses graph-based algorithms to keep or prune chains of operations collectively . In LEAN , a CNN is represented as a graph that contains all the CNN operators , with the operator norm of each operator as edge weights . We argue that strong subnetworks in a CNN can be discovered by extracting the longest ( multiplicative ) paths , using computationally efficient graph algorithms . The main focus of this work is to show how LEAN pruning can significantly improve the computation speed of CNNs for real-world image-to-image applications , and obtain high accuracy in the severe pruning regime that is difficult to achieve with existing approaches . This paper is structured as follows . In Section 2 , we explore existing pruning approaches . In Section 3 , we outline the preliminaries on CNNs , pruning filters , and the operator norm . Next , in Section 4 , we introduce LEAN pruning and describe how to calculate the operator norm of various convolutional operators . 
We discuss the setup of our experiments in Section 5 . In Section 6 , we demonstrate the results of the proposed pruning approach on a series of image segmentation problems and report practically realized wall time speedup . Our final conclusions are presented in Section 7 . 2 RELATED WORK . Reducing the size of neural networks by removing parameters has been studied for decades ( Mozer & Smolensky , 1989 ; Karnin , 1990 ; Hassibi et al. , 1993 ) . Several works take into account the structure of the network to some degree . In Lin et al . ( 2017 ) , filters are pruned at runtime based on the feature maps . Alternatively , one can prune entire channels ( He et al. , 2017 ) , or decide which channels to keep so that the feature maps approximate the output of the unpruned network over several training examples ( Luo et al. , 2017 ) . In recent work , a graph is built for each convolutional layer , and filters are pruned based on the properties of this graph ( Wang et al. , 2021 ) . In Salehinejad & Valaee ( 2021 ) , a neural network is represented as a graph and interdependencies are determined using the Ising model . Many pruning approaches are aimed at reducing neural network size with little accuracy drop ( Dong & Yang , 2019 ; He et al. , 2019 ; Molchanov et al. , 2019 ; Zhao et al. , 2019 ) , as opposed to sacrificing accuracy in favor of computation speed . These approaches rarely exceed a pruning ratio of 12–50 % ( Blalock et al. , 2020 ; Luo et al. , 2017 ; Lin et al. , 2019 ) . When a high pruning ratio is used , e.g. , a range of 5–10 % ( Lin et al. , 2017 ; Liu et al. , 2019 ) , a significant drop in accuracy is observed . Pruning ratios of 2–10 % can be achieved with an accuracy drop of 1–3 % by learning-rate rewinding ( Renda et al. , 2020 ) . However , the reduction in FLOPs was less substantial ( 1.5–4.8 times ) . In Yeom et al .
( 2021 ) severe pruning ratios of up to 1 % have been considered , but the approach achieved limited improvements in terms of FLOPs reduction compared with existing pruning methods . Criteria for deciding which elements of a neural network to prune have been extensively studied . A parameter ' s importance is commonly scored using its absolute value . Whether this is a reasonable metric has been questioned ( LeCun et al. , 1990 ) . Singular values ( which determine certain operator norms ) have been used to compress network layers ( Denton et al. , 2014 ) and to prune feed-forward networks ( Abid et al. , 2002 ) . Efficient methods for the computation of singular values have been developed for convolutional layers ( Sedghi et al. , 2019 ) . Furthermore , a definition of ReLU singular values was proposed recently with an accompanying upper bound ( Dittmer et al. , 2019 ) . 3 PRELIMINARIES . 3.1 CNNS FOR SEGMENTATION . A common image-to-image translation task is semantic image segmentation . The goal of semantic image segmentation is to assign a class label to each pixel in an image . A segmentation CNN computes a function f : R^{m×n} → [ 0 , 1 ]^{k×m×n} , which specifies the probability of each pixel being in one of the k classes for an m × n image . CNNs are composed of layers of operations which pass images from one layer to the next . Every operation , e.g. , convolution , has an input x and output y . The input and output consist of one or more images , called channels . For clarity , we distinguish throughout this paper between an operation , which may have several input and output channels , and an operator , which computes the relation between a single input channel and a single output channel . For instance , in a convolutional operation with input channels x_1 , . . . , x_N , an output channel y_j is computed by convolving input images with learned filters y_j = ( ∑_{i=1}^{N} h_{ij} ∗ x_i ) + b_j .
( 1 ) Here h_{ij} is the filter related to the convolution operator that acts between channel x_i and y_j , and b_j is an additive bias parameter . In a similar way , every CNN operation produces an output which consists of a number of channels . The exact arrangement of operations , and connections between them , depends on the architecture . A common operator to downsample images is the strided convolution . The stride defines the step size of the convolution kernel . A convolution with stride s defines a map h : R^{m×n} → R^{(m/s)×(n/s)} . Upsampling images can be done by transposed convolutions . Transposed convolutions intersperse the input image pixels with zeroes so that the output image has larger dimensions . In addition to convolution operators , other common operators such as pooling and batch normalization are often used . A batch normalization operator ( Ioffe & Szegedy , 2015 ) normalizes the input images for convolutional layers . A batch normalization operator scales and shifts an image x_i by y_i = γ ( x_i − µ_B ) / √( σ_B^2 + ε ) + β . ( 2 ) Here , γ and β are scaling and bias parameters which are learned during training , µ_B and σ_B^2 are the running mean and variance of the mini-batch , i.e. , the set of images used for the current training step , and ε is a small constant for numerical stability . For an overview of CNN components we refer to Goodfellow et al . ( 2016 ) . 3.2 PRUNING CONVOLUTION FILTERS . Pruning techniques aim to remove extraneous parameters from a neural network . Several schemes exist to prune parameters from a network , but retraining the network after pruning is critical to avoid significantly impacting accuracy ( Han et al. , 2015 ) . Pruning a network once after training is called one-shot pruning . Alternatively , a network can be fine-tuned , where the network is repeatedly pruned by a certain percentage and is retrained for a few epochs after every pruning step . Fine-tuning typically gives better results than one-shot pruning ( Renda et al. , 2020 ) .
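The batch-normalization operator in ( 2 ) is easy to sketch. The snippet below follows the formula directly, with ε as the usual small stability constant; the function name and default ε value are our own choices:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Equation (2): y = gamma * (x - mu_B) / sqrt(sigma_B^2 + eps) + beta,
    where mu_B and sigma_B^2 are the mean and variance over the mini-batch x."""
    mu = x.mean()
    var = x.var()
    return gamma * (x - mu) / np.sqrt(var + eps) + beta
```

After normalization the batch has approximately zero mean and unit variance, before the learned affine transform (gamma, beta) is applied.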
Generic pruning algorithm : All pruning methods used in this work make use of the fine-tuning pruning algorithm outlined in Algorithm 1 . The selection criteria for determining which filters to keep for each step define the different pruning methods . The pruning ratio pRatio is the fraction of remaining convolutions we ultimately want to keep , and stepRatio is the fraction of convolutions that is kept at each step , so that stepRatio^nSteps = pRatio .
Algorithm 1 Fine-tuning pruning algorithm
1 : procedure PRUNE ( model , pRatio , nSteps , epochs )
2 : stepRatio ← e^{ ln ( pRatio ) / nSteps }
3 : for step ← 0 to nSteps do
4 : pruneParams ← selectPrunePars ( model , stepRatio )
5 : model ← removePars ( model , pruneParams )
6 : for k ← 0 to epochs do
7 : model ← trainOneEpoch ( model , trainData )
8 : return model
Here , we focus on structured pruning . In structured pruning , a common approach to decide which filters to remove is structured magnitude pruning . When using structured magnitude pruning , a convolution filter h ∈ R^{k×k} is scored by its L1 vector norm ‖ h ‖_1 . Filters with norms below a threshold are pruned . The threshold is determined by sorting a group of filters , and removing a percentage based on the pruning ratio . Thresholds can be set per layer or globally . Setting thresholds globally can give higher accuracy than setting thresholds per layer ( Blalock et al. , 2020 ) . | This paper proposes the LongEst-chAiN (LEAN) method to perform structured pruning of CNN networks. LEAN maps a CNN network to a pruning graph, where every channel of input/output is a node, and every operator is an edge connecting input and output nodes. It uses the operator norms as the weights of edges. Then it prunes the network by keeping the longest path in the graph iteratively until it reaches the target pruning ratio .
This paper demonstrates the effectiveness of LEAN pruning by comparing it to two structured pruning methods (structured magnitude pruning, operator norm pruning) across three image segmentation datasets (Simulated Circle-Square dataset, CamVid, Real-world dynamic CT dataset) and three CNN architectures (MS-D, U-Net4, and ResNet50). Experiment results show that LEAN outperforms the other two structured pruning methods as it achieves similar model qualities with much smaller pruning ratios in most cases. Also, the paper shows that a MS-D model pruned with LEAN is 10.9X faster than the unpruned network in practice. | SP:91b4b789edbce2bfadde4f936a3843c2c6a1bedf |
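The global-threshold variant of structured magnitude pruning described in Section 3.2 above can be sketched in a few lines: score every filter by its L1 norm and keep the top fraction under a single threshold shared across layers. The function and variable names below are illustrative, not from the paper:

```python
import numpy as np

def magnitude_keep_mask(filters, p_ratio):
    """Score every filter by its L1 norm ||h||_1 and keep the p_ratio
    fraction with the largest scores, using one global threshold
    across all layers instead of a per-layer threshold."""
    scores = np.array([np.abs(h).sum() for h in filters])
    n_keep = max(1, round(p_ratio * len(filters)))
    threshold = np.sort(scores)[-n_keep]  # the n_keep-th largest L1 norm
    return scores >= threshold            # True = keep, False = prune
```

With a per-layer threshold the same scoring would be applied separately to each layer's filter list; the global variant lets strongly weighted layers retain more filters.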
NASI: Label- and Data-agnostic Neural Architecture Search at Initialization | 1 INTRODUCTION . The past decade has witnessed the wide success of deep neural networks ( DNNs ) in computer vision and natural language processing . These DNNs , e.g. , VGG ( Simonyan & Zisserman , 2015 ) , ResNet ( He et al. , 2016 ) , and MobileNet ( Howard et al. , 2017 ) , are typically handcrafted by human experts with considerable trial and error . The human effort devoted to the design of these DNNs is , however , neither affordable nor scalable due to the increasing demand for customizing DNNs for different tasks . To reduce such human effort , Neural Architecture Search ( NAS ) ( Zoph & Le , 2017 ) has recently been introduced to automate the design of DNNs . As summarized in ( Elsken et al. , 2019 ) , NAS conventionally consists of a search space , a search algorithm , and a performance evaluation . Specifically , the search algorithm aims to select the best-performing neural architecture from the search space based on its evaluated performance via performance evaluation . In the literature , various search algorithms ( Luo et al. , 2018 ; Zoph et al. , 2018 ; Real et al. , 2019 ) have been proposed to search for architectures with comparable or even better performance than the handcrafted ones . However , these NAS algorithms are inefficient due to the requirement of model training for numerous candidate architectures during the search process . To improve search efficiency , one-shot NAS algorithms ( Dong & Yang , 2019 ; Pham et al. , 2018 ; Liu et al. , 2019 ; Xie et al. , 2019 ) train a single one-shot architecture and then evaluate the performance of candidate architectures with model parameters inherited from this fine-tuned one-shot architecture . So , these algorithms can considerably reduce the cost of model training , but still require the training of the one-shot architecture .
This naturally leads to the question of whether NAS is realizable at initialization such that model training can be completely avoided during the search process . To the best of our knowledge , only a few efforts to date have been devoted to developing NAS algorithms without model training empirically ( Mellor et al. , 2020 ; Park et al. , 2020 ; Abdelfattah et al. , 2021 ; Chen et al. , 2021 ) . This paper presents a novel NAS algorithm called NAS at Initialization ( NASI ) that can completely avoid model training to boost search efficiency . To achieve this , NASI exploits the capability of a Neural Tangent Kernel ( NTK ) ( Jacot et al. , 2018 ; Lee et al. , 2019a ) to formally characterize the performance of infinite-width DNNs at initialization , hence allowing the performance of candidate architectures to be estimated and realizing NAS at initialization . Specifically , given the estimated performance of candidate architectures by NTK , NAS can be reformulated into an optimization problem without model training ( Sec . 3.1 ) . However , NTK is prohibitively costly to evaluate . Fortunately , we can approximate it ( see footnote 1 ) with a similar form to gradient flow ( Wang et al. , 2020 ) ( Sec . 3.2 ) . This results in a reformulated NAS problem that can be solved efficiently by a gradient-based algorithm via additional relaxation with Gumbel-Softmax ( Jang et al. , 2017 ; Maddison et al. , 2017 ) ( Sec . 3.3 ) . Interestingly , NASI is shown to be label- and data-agnostic under mild conditions , which thus implies the transferability of architectures selected by NASI over different datasets ( Sec . 4 ) . We first empirically demonstrate the improved search efficiency and the competitive search effectiveness achieved by NASI in NAS-Bench-1Shot1 ( Zela et al. , 2020b ) ( Sec . 5.1 ) . Compared with other NAS algorithms , NASI incurs the smallest search cost while preserving the competitive performance of its selected architectures .
Meanwhile , the architectures selected by NASI from the DARTS ( Liu et al. , 2019 ) search space over CIFAR-10 consistently achieve competitive or even superior performance when evaluated on different benchmark datasets , e.g. , CIFAR-10/100 and ImageNet ( Sec . 5.2 ) , indicating the guaranteed transferability of architectures selected by our NASI . In Sec . 5.3 , NASI is further demonstrated to be able to select well-performing architectures on CIFAR-10 even with randomly generated labels or data , which strongly supports the label- and data-agnostic search and also the guaranteed transferability achieved by our NASI . 2 RELATED WORKS AND BACKGROUND . 2.1 NEURAL ARCHITECTURE SEARCH . A growing body of NAS algorithms has been proposed in the literature ( Zoph & Le , 2017 ; Liu et al. , 2018 ; Luo et al. , 2018 ; Zoph et al. , 2018 ; Real et al. , 2019 ) to automate the design of neural architectures . However , scaling existing NAS algorithms to large datasets is notoriously hard . Recently , attention has thus been shifted to improving the search efficiency of NAS without sacrificing the generalization performance of its selected architectures . In particular , a one-shot architecture is introduced by Pham et al . ( 2018 ) to share model parameters among candidate architectures , thereby reducing the cost of model training substantially . Recent works ( Chen et al. , 2019 ; Dong & Yang , 2019 ; Liu et al. , 2019 ; Xie et al. , 2019 ; Chen & Hsieh , 2020 ; Chu et al. , 2020 ) along this line have further formulated NAS as a continuous and differentiable optimization problem to yield efficient gradient-based solutions . These one-shot NAS algorithms have achieved considerable improvement in search efficiency . However , the model training of the one-shot architecture is still needed . More recently , a number of algorithms have been proposed to estimate the performance of candidate architectures without model training . For example , Mellor et al .
( 2020 ) have explored the correlation between the divergence of linear maps induced by data points at initialization and the performance of candidate architectures heuristically . Meanwhile , Park et al . ( 2020 ) have approximated the performance of candidate architectures by the performance of their corresponding Neural Network Gaussian Process ( NNGP ) with only initialized model parameters , which is yet computationally costly . Abdelfattah et al . ( 2021 ) have investigated several training-free proxies to rank candidate architectures in the search space , while Chen et al . ( 2021 ) intuitively adopt theoretical aspects in deep networks ( e.g. , NTK ( Jacot et al. , 2018 ) and linear regions of deep networks ( Raghu et al. , 2017 ) ) to select architectures with a good trade-off between their trainability and expressivity . Our NASI significantly advances this line of work in ( a ) providing theoretically grounded performance estimation by NTK ( compared with Mellor et al. , 2020 ; Abdelfattah et al. , 2021 ; Chen et al. , 2021 ) , ( b ) guaranteeing the transferability of its selected architectures with its provable label- and data-agnostic search under mild conditions ( compared with Mellor et al. , 2020 ; Park et al. , 2020 ; Abdelfattah et al. , 2021 ; Chen et al. , 2021 ) , and ( c ) achieving SOTA performance in a large search space over various benchmark datasets ( compared with Mellor et al. , 2020 ; Park et al. , 2020 ; Abdelfattah et al. , 2021 ) . 2.2 NEURAL TANGENT KERNEL ( NTK ) . Let a dataset ( X , Y ) denote a pair comprising a set X of m n_0-dimensional vectors of input features and a vector Y ∈ R^{mn×1} concatenating the m n-dimensional vectors of corresponding output values , respectively . Let a DNN be parameterized by θ_t ∈ R^p at time t and output a vector f ( X ; θ_t ) ∈ R^{mn×1} ( abbreviated to f_t ) of the predicted values of Y . ( Footnote 1 : More precisely , we approximate the trace norm of NTK . ) Jacot et al .
( 2018 ) have revealed that the training dynamics of DNNs with gradient descent can be characterized by an NTK . Formally , define the NTK Θ_t ( X , X ) ∈ R^{mn×mn} ( abbreviated to Θ_t ) as Θ_t ( X , X ) ≜ ∇_{θ_t} f ( X ; θ_t ) ∇_{θ_t} f ( X ; θ_t )^⊤ . ( 1 ) Given a loss function L_t at time t and a learning rate η , the training dynamics of the DNN can then be characterized as ∂_t f_t = −η Θ_t ( X , X ) ∇_{f_t} L_t , ∂_t L_t = −η ∇_{f_t} L_t^⊤ Θ_t ( X , X ) ∇_{f_t} L_t . ( 2 ) Interestingly , as proven in ( Jacot et al. , 2018 ) , the NTK stays asymptotically constant during the course of training as the width of DNNs goes to infinity . NTK at initialization ( i.e. , Θ_0 ) can thus characterize the training dynamics and also the performance of infinite-width DNNs . Lee et al . ( 2019a ) have further revealed that , for DNNs with over-parameterization , the aforementioned training dynamics can be governed by their first-order Taylor expansion ( or linearization ) at initialization . In particular , define f^lin ( x ; θ_t ) ≜ f ( x ; θ_0 ) + ∇_{θ_0} f ( x ; θ_0 )^⊤ ( θ_t − θ_0 ) ( 3 ) for all x ∈ X . Then , f ( x ; θ_t ) and f^lin ( x ; θ_t ) share similar training dynamics over time , as described formally in Appendix A.2 . Besides , following the definition of NTK in ( 1 ) , this linearization f^lin achieves a constant NTK over time . Given the mean squared error ( MSE ) loss defined as L_t ≜ m^{−1} ‖ Y − f ( X ; θ_t ) ‖_2^2 and the constant NTK Θ_t = Θ_0 , the loss dynamics in ( 2 ) above can be analyzed in a closed form while applying gradient descent with learning rate η ( Arora et al. , 2019 ) : L_t = m^{−1} ∑_{i=1}^{mn} ( 1 − η λ_i )^{2t} ( u_i^⊤ Y )^2 , ( 4 ) where Θ_0 = ∑_{i=1}^{mn} λ_i ( Θ_0 ) u_i u_i^⊤ , and λ_i ( Θ_0 ) and u_i denote the i-th largest eigenvalue and the corresponding eigenvector of Θ_0 , respectively . 3 NEURAL ARCHITECTURE SEARCH AT INITIALIZATION . 3.1 REFORMULATING NAS VIA NTK . Given a loss function L and model parameters θ ( A ) of architecture A , we denote the training and validation loss as L_train and L_val , respectively .
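As a sanity check before proceeding, the closed form in ( 4 ) can be verified numerically in the constant-NTK setting. The snippet below iterates a discrete function-space gradient step f_{t+1} = f_t + η Θ_0 ( Y − f_t ) from f_0 = 0, so the residual contracts as r_{t+1} = ( I − η Θ_0 ) r_t; the random PSD matrix standing in for Θ_0 and the exact scaling conventions are our illustrative choices, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
mn = 8   # total output dimension (m examples x n outputs per example)
m = 4    # number of examples

# A random PSD matrix standing in for the constant NTK Theta_0 = J J^T.
J = rng.standard_normal((mn, mn))
theta0 = J @ J.T
Y = rng.uniform(0.0, 1.0, size=mn)

lams, U = np.linalg.eigh(theta0)   # Theta_0 = sum_i lam_i u_i u_i^T
eta = 0.5 / lams.max()             # learning rate below 1 / lambda_max

def loss_closed_form(t):
    """Equation (4): L_t = m^{-1} sum_i (1 - eta*lam_i)^(2t) (u_i^T Y)^2."""
    return np.sum((1 - eta * lams) ** (2 * t) * (U.T @ Y) ** 2) / m

# Discrete gradient descent in function space, starting from f_0 = 0.
f = np.zeros(mn)
for t in range(11):
    assert np.isclose(np.sum((Y - f) ** 2) / m, loss_closed_form(t))
    f = f + eta * theta0 @ (Y - f)
```

The assertions confirm that the iterated residual matches the eigendecomposition formula at every step, and that the loss decays monotonically when η < λ_max^{−1}.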
NAS is conventionally formulated as a bi-level optimization problem ( Liu et al. , 2019 ) : min_A L_val ( θ^∗ ( A ) ; A ) s.t . θ^∗ ( A ) ≜ arg min_{θ ( A )} L_train ( θ ( A ) ; A ) . ( 5 ) Notably , model training is required to evaluate the validation performance of each candidate architecture in ( 5 ) . The search efficiency of NAS algorithms ( Real et al. , 2019 ; Zoph et al. , 2018 ) based on ( 5 ) is thus severely limited by the cost of model training for each candidate architecture . Though recent works ( Pham et al. , 2018 ) have considerably reduced this training cost by introducing a one-shot architecture for model parameter sharing , such a one-shot architecture requires training and hence incurs the training cost . To completely avoid this training cost , we exploit the capability of NTK for characterizing the performance of DNNs at initialization . Specifically , Sec . 2.2 has revealed that the training dynamics of an over-parameterized DNN can be governed by its linearization at initialization . With the MSE loss , the training dynamics of such linearization are further determined by its constant NTK . Therefore , the training dynamics and hence the performance of a DNN can be characterized by the constant NTK of its linearization . However , this constant NTK is computationally costly to evaluate . To this end , we instead characterize the training dynamics ( i.e. , MSE ) of DNNs in Proposition 1 using the trace norm of NTK at initialization , which can be efficiently approximated . For simplicity , we use this MSE loss in our analysis . Other widely adopted loss functions ( e.g. , cross entropy with softmax ) can also be applied , as supported in our experiments . Note that throughout this paper , the parameterization and initialization of DNNs follow that of Jacot et al . ( 2018 ) . For an L-layer DNN , we denote the output dimension of its hidden layers and the last layer as n_1 = · · · = n_{L−1} = k and n_L = n , respectively . Proposition 1 .
Suppose that ‖ x ‖_2 ≤ 1 for all x ∈ X and Y ∈ [ 0 , 1 ]^{mn} for a given dataset ( X , Y ) of size |X| = m , a given L-layer neural architecture A outputs f_t ∈ [ 0 , 1 ]^{mn} as predicted labels of Y with the corresponding MSE loss L_t , λ_min ( Θ_0 ) > 0 for the given NTK Θ_0 w.r.t . f_t at initialization , and gradient descent ( or gradient flow ) is applied with learning rate η < λ_max^{−1} ( Θ_0 ) . Then , for any t ≥ 0 , there exists a constant c_0 > 0 such that as k → ∞ , L_t ≤ ( mn / 2 ) ( 1 − η λ̄ ( Θ_0 ) )^q + ε ( 6 ) with probability arbitrarily close to 1 , where q is set to 2t if t < 0.5 , and 1 otherwise , λ̄ ( Θ_0 ) ≜ ( mn )^{−1} ∑_{i=1}^{mn} λ_i ( Θ_0 ) , and ε ≜ 2 c_0 √( n / ( mk ) ) ( 1 + c_0 √( 1/k ) ) . Its proof is in Appendix A.3 . Proposition 1 implies that NAS can be realized at initialization . Specifically , given a fixed and sufficiently large training budget t , in order to select the best-performing architecture , we can simply minimize the upper bound of L_t in ( 6 ) over all the candidate architectures in the search space . Here , L_t can be applied to approximate L_val since both strong theoretical ( Mohri et al. , 2018 ) and empirical ( Hardt et al. , 2016 ) justifications in the literature have shown that training and validation loss are generally highly related . Hence , ( 5 ) can be reformulated as min_A ( mn / 2 ) ( 1 − η λ̄ ( Θ_0 ( A ) ) ) + ε s.t . λ̄ ( Θ_0 ( A ) ) < η^{−1} . ( 7 ) Note that the constraint in ( 7 ) is derived from the condition η < λ_max^{−1} ( Θ_0 ( A ) ) in Proposition 1 , and η and ε are typically constants during the search process . Following the definition of trace norm , ( 7 ) can be further reduced into max_A ‖ Θ_0 ( A ) ‖_tr s.t . ‖ Θ_0 ( A ) ‖_tr < mn η^{−1} . ( 8 ) Notably , Θ_0 ( A ) only relies on the initialization of A . So , no model training is required in optimizing ( 8 ) , which achieves our objective of realizing NAS at initialization .
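A practical note on why the objective in ( 8 ) is cheap to evaluate: Θ_0 = ∇_θ f ∇_θ f^⊤ is positive semi-definite, so its trace norm equals its trace, which is simply the squared Frobenius norm of the Jacobian, so no eigendecomposition of Θ_0 is needed. A small NumPy check of this identity follows; the random Jacobian is an illustrative stand-in for ∇_θ f at initialization:

```python
import numpy as np

rng = np.random.default_rng(1)
mn, p = 6, 20
grad_f = rng.standard_normal((mn, p))  # stand-in for the Jacobian of f w.r.t. theta

theta0 = grad_f @ grad_f.T             # NTK at initialization, as in Equation (1)

# Trace norm = sum of singular values; for a PSD matrix this equals the trace,
# which in turn equals the squared Frobenius norm of the Jacobian.
trace_norm = np.linalg.svd(theta0, compute_uv=False).sum()
assert np.isclose(trace_norm, np.trace(theta0))
assert np.isclose(trace_norm, np.sum(grad_f ** 2))
```

This identity is what allows ‖ Θ_0 ( A ) ‖_tr to be approximated from gradient norms alone, without ever materializing the mn × mn kernel matrix.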
Furthermore , ( 8 ) suggests an interesting interpretation of NAS : NAS intends to select architectures with a good trade-off between their model complexity and the optimization behavior in their model training . Particularly , architectures containing more model parameters will usually achieve a larger ‖ Θ_0 ( A ) ‖_tr according to the definition in ( 1 ) , which hence provides an alternative to measuring the complexity of architectures . So , maximizing ‖ Θ_0 ( A ) ‖_tr leads to architectures with large complexity and therefore strong representation power . On the other hand , the complexity of the selected architectures is limited by the constraint in ( 8 ) to ensure a well-behaved optimization with a large learning rate η in their model training . By combining these two effects , the optimization of ( 8 ) naturally trades off between the complexity of the selected architectures and the optimization behavior in their model training for the best performance . Appendix C.1 will validate such a trade-off . Interestingly , Chen et al . ( 2021 ) have revealed a similar insight into NAS to ours . | This paper proposes a new training-free NAS method, where it is not necessary to optimize the weight parameters of target networks for architecture search. To achieve this, the paper exploits the capability of NTK for estimating the performance of candidate architectures at weight initialization. Thus, the proposed method can avoid network training during the search and achieve a much more efficient architecture search. The experimental results show that the proposed method achieves competitive performance with existing methods and also can adapt to the label- and data-agnostic scenarios. | SP:41184d03d925e31be2aeb90ff442124504671785 |
NASI: Label- and Data-agnostic Neural Architecture Search at Initialization | 1 INTRODUCTION . The past decade has witnessed the wide success of deep neural networks ( DNNs ) in computer vision and natural language processing . These DNNs , e.g. , VGG ( Simonyan & Zisserman , 2015 ) , ResNet ( He et al. , 2016 ) , and MobileNet ( Howard et al. , 2017 ) , are typically handcrafted by human experts with considerable trial and error . The human effort devoted to the design of these DNNs is , however , neither affordable nor scalable due to the increasing demand for customizing DNNs for different tasks . To reduce such human effort , Neural Architecture Search ( NAS ) ( Zoph & Le , 2017 ) has recently been introduced to automate the design of DNNs . As summarized in ( Elsken et al. , 2019 ) , NAS conventionally consists of a search space , a search algorithm , and a performance evaluation . Specifically , the search algorithm aims to select the best-performing neural architecture from the search space based on its evaluated performance via performance evaluation . In the literature , various search algorithms ( Luo et al. , 2018 ; Zoph et al. , 2018 ; Real et al. , 2019 ) have been proposed to search for architectures with comparable or even better performance than the handcrafted ones . However , these NAS algorithms are inefficient due to the requirement of model training for numerous candidate architectures during the search process . To improve search efficiency , one-shot NAS algorithms ( Dong & Yang , 2019 ; Pham et al. , 2018 ; Liu et al. , 2019 ; Xie et al. , 2019 ) train a single one-shot architecture and then evaluate the performance of candidate architectures with model parameters inherited from this fine-tuned one-shot architecture . So , these algorithms can considerably reduce the cost of model training , but still require the training of the one-shot architecture .
This naturally leads to the question of whether NAS is realizable at initialization such that model training can be completely avoided during the search process. To the best of our knowledge, only a few efforts to date have been devoted to developing NAS algorithms without model training, and these are mainly empirical (Mellor et al., 2020; Park et al., 2020; Abdelfattah et al., 2021; Chen et al., 2021). This paper presents a novel NAS algorithm called NAS at Initialization (NASI) that can completely avoid model training to boost search efficiency. To achieve this, NASI exploits the capability of the Neural Tangent Kernel (NTK) (Jacot et al., 2018; Lee et al., 2019a) to formally characterize the performance of infinitely wide DNNs at initialization, hence allowing the performance of candidate architectures to be estimated and realizing NAS at initialization. Specifically, given the estimated performance of candidate architectures by NTK, NAS can be reformulated into an optimization problem without model training (Sec. 3.1). However, the NTK is prohibitively costly to evaluate. Fortunately, we can approximate it with a form similar to gradient flow (Wang et al., 2020) (Sec. 3.2). This results in a reformulated NAS problem that can be solved efficiently by a gradient-based algorithm via an additional relaxation with Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) (Sec. 3.3). Interestingly, NASI is shown to be label- and data-agnostic under mild conditions, which implies the transferability of architectures selected by NASI over different datasets (Sec. 4). We first empirically demonstrate the improved search efficiency and the competitive search effectiveness achieved by NASI on NAS-Bench-1Shot1 (Zela et al., 2020b) (Sec. 5.1). Compared with other NAS algorithms, NASI incurs the smallest search cost while preserving the competitive performance of its selected architectures.
Meanwhile, the architectures selected by NASI from the DARTS (Liu et al., 2019) search space over CIFAR-10 consistently achieve competitive or even superior performance when evaluated on different benchmark datasets, e.g., CIFAR-10/100 and ImageNet (Sec. 5.2), indicating the guaranteed transferability of architectures selected by our NASI. In Sec. 5.3, NASI is further demonstrated to be able to select well-performing architectures on CIFAR-10 even with randomly generated labels or data, which strongly supports the label- and data-agnostic search and also the guaranteed transferability achieved by our NASI. 2 RELATED WORKS AND BACKGROUND . 2.1 NEURAL ARCHITECTURE SEARCH . A growing body of NAS algorithms has been proposed in the literature (Zoph & Le, 2017; Liu et al., 2018; Luo et al., 2018; Zoph et al., 2018; Real et al., 2019) to automate the design of neural architectures. However, scaling existing NAS algorithms to large datasets is notoriously hard. Recently, attention has thus shifted to improving the search efficiency of NAS without sacrificing the generalization performance of its selected architectures. In particular, a one-shot architecture was introduced by Pham et al. (2018) to share model parameters among candidate architectures, thereby reducing the cost of model training substantially. Recent works (Chen et al., 2019; Dong & Yang, 2019; Liu et al., 2019; Xie et al., 2019; Chen & Hsieh, 2020; Chu et al., 2020) along this line have further formulated NAS as a continuous and differentiable optimization problem to yield efficient gradient-based solutions. These one-shot NAS algorithms have achieved considerable improvements in search efficiency. However, the model training of the one-shot architecture is still needed. More recently, a number of algorithms have been proposed to estimate the performance of candidate architectures without model training. For example, Mellor et al.
(2020) have heuristically explored the correlation between the divergence of linear maps induced by data points at initialization and the performance of candidate architectures. Meanwhile, Park et al. (2020) have approximated the performance of candidate architectures by the performance of their corresponding Neural Network Gaussian Process (NNGP) with only initialized model parameters, which is, however, computationally costly. Abdelfattah et al. (2021) have investigated several training-free proxies to rank candidate architectures in the search space, while Chen et al. (2021) intuitively adopt theoretical aspects of deep networks (e.g., NTK (Jacot et al., 2018) and linear regions of deep networks (Raghu et al., 2017)) to select architectures with a good trade-off between their trainability and expressivity. Our NASI significantly advances this line of work in (a) providing theoretically grounded performance estimation by NTK (compared with (Mellor et al., 2020; Abdelfattah et al., 2021; Chen et al., 2021)), (b) guaranteeing the transferability of its selected architectures with its provable label- and data-agnostic search under mild conditions (compared with (Mellor et al., 2020; Park et al., 2020; Abdelfattah et al., 2021; Chen et al., 2021)), and (c) achieving SOTA performance in a large search space over various benchmark datasets (compared with (Mellor et al., 2020; Park et al., 2020; Abdelfattah et al., 2021)). 2.2 NEURAL TANGENT KERNEL ( NTK ) . Let a dataset $(\mathcal{X}, Y)$ denote a pair comprising a set $\mathcal{X}$ of $m$ $n_0$-dimensional vectors of input features and a vector $Y \in \mathbb{R}^{mn \times 1}$ concatenating the $m$ corresponding $n$-dimensional output vectors. Let a DNN be parameterized by $\theta_t \in \mathbb{R}^p$ at time $t$ and output a vector $f(\mathcal{X}; \theta_t) \in \mathbb{R}^{mn \times 1}$ (abbreviated to $f_t$) of the predicted values of $Y$. (Footnote 1: More precisely, we approximate the trace norm of the NTK.) Jacot et al.
(2018) have revealed that the training dynamics of DNNs with gradient descent can be characterized by an NTK. Formally, the NTK $\Theta_t(\mathcal{X}, \mathcal{X}) \in \mathbb{R}^{mn \times mn}$ (abbreviated to $\Theta_t$) is defined as $$\Theta_t(\mathcal{X}, \mathcal{X}) \triangleq \nabla_{\theta_t} f(\mathcal{X}; \theta_t)\, \nabla_{\theta_t} f(\mathcal{X}; \theta_t)^{\top}. \quad (1)$$ Given a loss function $\mathcal{L}_t$ at time $t$ and a learning rate $\eta$, the training dynamics of the DNN can then be characterized as $$\nabla_t f_t = -\eta\, \Theta_t(\mathcal{X}, \mathcal{X})\, \nabla_{f_t} \mathcal{L}_t, \qquad \nabla_t \mathcal{L}_t = -\eta\, \nabla_{f_t} \mathcal{L}_t^{\top}\, \Theta_t(\mathcal{X}, \mathcal{X})\, \nabla_{f_t} \mathcal{L}_t. \quad (2)$$ Interestingly, as proven in (Jacot et al., 2018), the NTK stays asymptotically constant during the course of training as the width of the DNN goes to infinity. The NTK at initialization (i.e., $\Theta_0$) can thus characterize the training dynamics and also the performance of infinite-width DNNs. Lee et al. (2019a) have further revealed that, for over-parameterized DNNs, the aforementioned training dynamics can be governed by their first-order Taylor expansion (or linearization) at initialization. In particular, define $$f^{\mathrm{lin}}(x; \theta_t) \triangleq f(x; \theta_0) + \nabla_{\theta_0} f(x; \theta_0)^{\top} (\theta_t - \theta_0) \quad (3)$$ for all $x \in \mathcal{X}$. Then, $f(x; \theta_t)$ and $f^{\mathrm{lin}}(x; \theta_t)$ share similar training dynamics over time, as described formally in Appendix A.2. Besides, following the definition of NTK in (1), this linearization $f^{\mathrm{lin}}$ achieves a constant NTK over time. Given the mean squared error (MSE) loss defined as $\mathcal{L}_t \triangleq m^{-1} \|Y - f(\mathcal{X}; \theta_t)\|_2^2$ and the constant NTK $\Theta_t = \Theta_0$, the loss dynamics in (2) above can be analyzed in closed form while applying gradient descent with learning rate $\eta$ (Arora et al., 2019): $$\mathcal{L}_t = m^{-1} \sum_{i=1}^{mn} (1 - \eta \lambda_i)^{2t} (u_i^{\top} Y)^2, \quad (4)$$ where $\Theta_0 = \sum_{i=1}^{mn} \lambda_i(\Theta_0)\, u_i u_i^{\top}$, and $\lambda_i(\Theta_0)$ and $u_i$ denote the $i$-th largest eigenvalue and the corresponding eigenvector of $\Theta_0$, respectively. 3 NEURAL ARCHITECTURE SEARCH AT INITIALIZATION . 3.1 REFORMULATING NAS VIA NTK . Given a loss function $\mathcal{L}$ and model parameters $\theta(\mathcal{A})$ of architecture $\mathcal{A}$, we denote the training and validation loss as $\mathcal{L}_{\mathrm{train}}$ and $\mathcal{L}_{\mathrm{val}}$, respectively.
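To make these objects concrete, the following small sketch (an illustration, not code from the paper) checks the closed-form loss dynamics of Eq. (4) numerically. It uses a linear model $f(x; \theta) = \theta^{\top} x$, whose Jacobian is constant, so the empirical NTK of Eq. (1) is exactly constant during training; as an illustrative normalization choice, the gradient step is taken on $\frac{1}{2}\|Y - f\|_2^2$ while the reported loss carries the paper's $m^{-1}$ factor:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 4, 3                      # m examples, p parameters, scalar outputs (n = 1)
X = rng.normal(size=(m, p))
Y = rng.normal(size=m)

# For the linear model f(X; theta) = X @ theta the Jacobian is X itself,
# so the empirical NTK of Eq. (1) is constant over training: Theta = X X^T.
Theta = X @ X.T
lam, U = np.linalg.eigh(Theta)           # Theta = U diag(lam) U^T

eta = 0.5 / lam.max()                    # learning rate below 1/lambda_max
theta = np.zeros(p)                      # initialization gives f_0 = 0

t = 40
for _ in range(t):                       # gradient descent on (1/2)||Y - X theta||^2
    residual = Y - X @ theta
    theta += eta * X.T @ residual        # residual evolves as (I - eta*Theta) r_t

L_t = np.sum((Y - X @ theta) ** 2) / m   # reported MSE loss with the m^{-1} factor

# Closed form of Eq. (4): L_t = m^{-1} sum_i (1 - eta*lam_i)^{2t} (u_i^T Y)^2
L_closed = np.sum((1 - eta * lam) ** (2 * t) * (U.T @ Y) ** 2) / m
print(L_t, L_closed)                     # the two values agree
```

Note that with $m > p$ the kernel is rank-deficient, so one eigenvalue is zero and the corresponding component of $Y$ is never fitted, exactly as the $(1 - \eta \lambda_i)^{2t}$ factor predicts.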
NAS is conventionally formulated as a bi-level optimization problem (Liu et al., 2019): $$\min_{\mathcal{A}} \; \mathcal{L}_{\mathrm{val}}(\theta^*(\mathcal{A}); \mathcal{A}) \quad \text{s.t.} \quad \theta^*(\mathcal{A}) \triangleq \arg\min_{\theta(\mathcal{A})} \mathcal{L}_{\mathrm{train}}(\theta(\mathcal{A}); \mathcal{A}). \quad (5)$$ Notably, model training is required to evaluate the validation performance of each candidate architecture in (5). The search efficiency of NAS algorithms (Real et al., 2019; Zoph et al., 2018) based on (5) is thus severely limited by the cost of model training for each candidate architecture. Though recent works (Pham et al., 2018) have considerably reduced this training cost by introducing a one-shot architecture for model parameter sharing, such a one-shot architecture still requires training and hence incurs the training cost. To completely avoid this training cost, we exploit the capability of the NTK to characterize the performance of DNNs at initialization. Specifically, Sec. 2.2 has revealed that the training dynamics of an over-parameterized DNN can be governed by its linearization at initialization. With the MSE loss, the training dynamics of such a linearization are further determined by its constant NTK. Therefore, the training dynamics, and hence the performance, of a DNN can be characterized by the constant NTK of its linearization. However, this constant NTK is computationally costly to evaluate. To this end, we instead characterize the training dynamics (i.e., the MSE) of DNNs in Proposition 1 using the trace norm of the NTK at initialization, which can be efficiently approximated. For simplicity, we use the MSE loss in our analysis. Other widely adopted loss functions (e.g., cross entropy with softmax) can also be applied, as supported in our experiments. Note that throughout this paper, the parameterization and initialization of DNNs follow those of Jacot et al. (2018). For an $L$-layer DNN, we denote the output dimensions of its hidden layers and of the last layer as $n_1 = \cdots = n_{L-1} = k$ and $n_L = n$, respectively. Proposition 1 .
Suppose that $\|x\|_2 \le 1$ for all $x \in \mathcal{X}$ and $Y \in [0,1]^{mn}$ for a given dataset $(\mathcal{X}, Y)$ of size $|\mathcal{X}| = m$, a given $L$-layer neural architecture $\mathcal{A}$ outputs $f_t \in [0,1]^{mn}$ as predicted labels of $Y$ with the corresponding MSE loss $\mathcal{L}_t$, $\lambda_{\min}(\Theta_0) > 0$ for the given NTK $\Theta_0$ w.r.t. $f_t$ at initialization, and gradient descent (or gradient flow) is applied with learning rate $\eta < \lambda_{\max}^{-1}(\Theta_0)$. Then, for any $t \ge 0$, there exists a constant $c_0 > 0$ such that as $k \to \infty$, $$\mathcal{L}_t \le \frac{mn}{2}\left(1 - \eta\bar{\lambda}(\Theta_0)\right)^{q} + \epsilon \quad (6)$$ with probability arbitrarily close to 1, where $q$ is set to $2t$ if $t < 0.5$ and to 1 otherwise, $\bar{\lambda}(\Theta_0) \triangleq (mn)^{-1} \sum_{i=1}^{mn} \lambda_i(\Theta_0)$, and $\epsilon \triangleq 2c_0 \sqrt{n/(mk)}\,(1 + c_0 \sqrt{1/k})$. Its proof is in Appendix A.3. Proposition 1 implies that NAS is realizable at initialization. Specifically, given a fixed and sufficiently large training budget $t$, in order to select the best-performing architecture, we can simply minimize the upper bound on $\mathcal{L}_t$ in (6) over all the candidate architectures in the search space. Here, $\mathcal{L}_t$ can be used to approximate $\mathcal{L}_{\mathrm{val}}$ since both strong theoretical (Mohri et al., 2018) and empirical (Hardt et al., 2016) justifications in the literature have shown that training and validation loss are generally highly related. Hence, (5) can be reformulated as $$\min_{\mathcal{A}} \; \frac{mn}{2}\left(1 - \eta\bar{\lambda}(\Theta_0(\mathcal{A}))\right) + \epsilon \quad \text{s.t.} \quad \bar{\lambda}(\Theta_0(\mathcal{A})) < \eta^{-1}. \quad (7)$$ Note that the constraint in (7) is derived from the condition $\eta < \lambda_{\max}^{-1}(\Theta_0(\mathcal{A}))$ in Proposition 1, and $\eta$ and $\epsilon$ are typically constants during the search process. Following the definition of the trace norm, (7) can be further reduced to $$\max_{\mathcal{A}} \; \|\Theta_0(\mathcal{A})\|_{\mathrm{tr}} \quad \text{s.t.} \quad \|\Theta_0(\mathcal{A})\|_{\mathrm{tr}} < mn\,\eta^{-1}. \quad (8)$$ Notably, $\Theta_0(\mathcal{A})$ only relies on the initialization of $\mathcal{A}$. So, no model training is required in optimizing (8), which achieves our objective of realizing NAS at initialization.
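Since $\Theta_0 = \nabla_\theta f\, \nabla_\theta f^{\top}$ is positive semi-definite, the trace norm in (8) equals the trace, i.e. the sum of squared per-example gradient norms, so it can be scored without ever materializing the $mn \times mn$ kernel. The sketch below (an illustration, not the authors' implementation; the two hidden widths standing in for candidate "architectures" are arbitrary choices) computes this training-free score for a two-layer ReLU network with scalar output under an NTK-style $1/\sqrt{\text{fan-in}}$ scaling:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def ntk_trace_norm(W1, w2, X):
    """Since Theta_0 = J J^T is PSD, its trace norm equals its trace,
    i.e. the sum over examples of ||grad_theta f(x_i; theta_0)||^2,
    so the mn x mn kernel matrix never needs to be formed."""
    total = 0.0
    for x in X:
        h = relu(W1 @ x)                      # hidden activations
        g_w2 = h                              # df/dw2 for the scalar output
        g_W1 = np.outer(w2 * (h > 0), x)      # df/dW1 via the ReLU mask
        total += np.sum(g_w2 ** 2) + np.sum(g_W1 ** 2)
    return total

rng = np.random.default_rng(1)
d, m = 8, 16
X = rng.normal(size=(m, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # ||x||_2 <= 1 as in Prop. 1

# Score two candidate widths at initialization only, mimicking the
# training-free selection objective of (8).
for k in (16, 64):
    W1 = rng.normal(size=(k, d)) / np.sqrt(d)
    w2 = rng.normal(size=k) / np.sqrt(k)
    print(k, ntk_trace_norm(W1, w2, X))
```

In a full NAS loop this score would be maximized over the search space subject to the budget $mn\,\eta^{-1}$; here it only illustrates that the quantity is computable from initialized parameters alone.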
Furthermore, (8) suggests an interesting interpretation of NAS: NAS intends to select architectures with a good trade-off between their model complexity and their optimization behavior during model training. In particular, architectures containing more model parameters will usually achieve a larger $\|\Theta_0(\mathcal{A})\|_{\mathrm{tr}}$ according to the definition in (1), which hence provides an alternative way to measure the complexity of architectures. So, maximizing $\|\Theta_0(\mathcal{A})\|_{\mathrm{tr}}$ leads to architectures with large complexity and therefore strong representation power. On the other hand, the complexity of the selected architectures is limited by the constraint in (8) to ensure well-behaved optimization with a large learning rate $\eta$ during model training. By combining these two effects, the optimization of (8) naturally trades off between the complexity of the selected architectures and their optimization behavior for the best performance. Appendix C.1 will validate such a trade-off. Interestingly, Chen et al. (2021) have revealed a similar insight into NAS. | This paper proposes a training-free NAS method called NASI, which exploits the Neural Tangent Kernel (NTK) to characterize the performance of candidate architectures at initialization. To alleviate the costly evaluation of the NTK, the authors apply a form similar to gradient flow to approximate the NTK. Moreover, they combine their NTK trick with a gradient-based NAS algorithm via Gumbel-Softmax to solve the NAS problem efficiently. The experimental results on various benchmarks illustrate the effectiveness of NASI. | SP:41184d03d925e31be2aeb90ff442124504671785
NASI: Label- and Data-agnostic Neural Architecture Search at Initialization | 1 INTRODUCTION . The past decade has witnessed the wide success of deep neural networks (DNNs) in computer vision and natural language processing. These DNNs, e.g., VGG (Simonyan & Zisserman, 2015), ResNet (He et al., 2016), and MobileNet (Howard et al., 2017), are typically handcrafted by human experts with considerable trial and error. The human effort devoted to the design of these DNNs is, however, neither affordable nor scalable given the increasing demand for customizing DNNs for different tasks. To reduce such human effort, Neural Architecture Search (NAS) (Zoph & Le, 2017) has recently been introduced to automate the design of DNNs. As summarized in (Elsken et al., 2019), NAS conventionally consists of a search space, a search algorithm, and a performance evaluation. Specifically, the search algorithm aims to select the best-performing neural architecture from the search space based on its evaluated performance via performance evaluation. In the literature, various search algorithms (Luo et al., 2018; Zoph et al., 2018; Real et al., 2019) have been proposed to search for architectures with comparable or even better performance than the handcrafted ones. However, these NAS algorithms are inefficient due to the requirement of model training for numerous candidate architectures during the search process. To address this inefficiency, one-shot NAS algorithms (Dong & Yang, 2019; Pham et al., 2018; Liu et al., 2019; Xie et al., 2019) train a single one-shot architecture and then evaluate the performance of candidate architectures with model parameters inherited from this fine-tuned one-shot architecture. These algorithms can thus considerably reduce the cost of model training, but they still require the training of the one-shot architecture.
This naturally leads to the question of whether NAS is realizable at initialization such that model training can be completely avoided during the search process. To the best of our knowledge, only a few efforts to date have been devoted to developing NAS algorithms without model training, and these are mainly empirical (Mellor et al., 2020; Park et al., 2020; Abdelfattah et al., 2021; Chen et al., 2021). This paper presents a novel NAS algorithm called NAS at Initialization (NASI) that can completely avoid model training to boost search efficiency. To achieve this, NASI exploits the capability of the Neural Tangent Kernel (NTK) (Jacot et al., 2018; Lee et al., 2019a) to formally characterize the performance of infinitely wide DNNs at initialization, hence allowing the performance of candidate architectures to be estimated and realizing NAS at initialization. Specifically, given the estimated performance of candidate architectures by NTK, NAS can be reformulated into an optimization problem without model training (Sec. 3.1). However, the NTK is prohibitively costly to evaluate. Fortunately, we can approximate it with a form similar to gradient flow (Wang et al., 2020) (Sec. 3.2). This results in a reformulated NAS problem that can be solved efficiently by a gradient-based algorithm via an additional relaxation with Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) (Sec. 3.3). Interestingly, NASI is shown to be label- and data-agnostic under mild conditions, which implies the transferability of architectures selected by NASI over different datasets (Sec. 4). We first empirically demonstrate the improved search efficiency and the competitive search effectiveness achieved by NASI on NAS-Bench-1Shot1 (Zela et al., 2020b) (Sec. 5.1). Compared with other NAS algorithms, NASI incurs the smallest search cost while preserving the competitive performance of its selected architectures.
Meanwhile, the architectures selected by NASI from the DARTS (Liu et al., 2019) search space over CIFAR-10 consistently achieve competitive or even superior performance when evaluated on different benchmark datasets, e.g., CIFAR-10/100 and ImageNet (Sec. 5.2), indicating the guaranteed transferability of architectures selected by our NASI. In Sec. 5.3, NASI is further demonstrated to be able to select well-performing architectures on CIFAR-10 even with randomly generated labels or data, which strongly supports the label- and data-agnostic search and also the guaranteed transferability achieved by our NASI. 2 RELATED WORKS AND BACKGROUND . 2.1 NEURAL ARCHITECTURE SEARCH . A growing body of NAS algorithms has been proposed in the literature (Zoph & Le, 2017; Liu et al., 2018; Luo et al., 2018; Zoph et al., 2018; Real et al., 2019) to automate the design of neural architectures. However, scaling existing NAS algorithms to large datasets is notoriously hard. Recently, attention has thus shifted to improving the search efficiency of NAS without sacrificing the generalization performance of its selected architectures. In particular, a one-shot architecture was introduced by Pham et al. (2018) to share model parameters among candidate architectures, thereby reducing the cost of model training substantially. Recent works (Chen et al., 2019; Dong & Yang, 2019; Liu et al., 2019; Xie et al., 2019; Chen & Hsieh, 2020; Chu et al., 2020) along this line have further formulated NAS as a continuous and differentiable optimization problem to yield efficient gradient-based solutions. These one-shot NAS algorithms have achieved considerable improvements in search efficiency. However, the model training of the one-shot architecture is still needed. More recently, a number of algorithms have been proposed to estimate the performance of candidate architectures without model training. For example, Mellor et al.
(2020) have heuristically explored the correlation between the divergence of linear maps induced by data points at initialization and the performance of candidate architectures. Meanwhile, Park et al. (2020) have approximated the performance of candidate architectures by the performance of their corresponding Neural Network Gaussian Process (NNGP) with only initialized model parameters, which is, however, computationally costly. Abdelfattah et al. (2021) have investigated several training-free proxies to rank candidate architectures in the search space, while Chen et al. (2021) intuitively adopt theoretical aspects of deep networks (e.g., NTK (Jacot et al., 2018) and linear regions of deep networks (Raghu et al., 2017)) to select architectures with a good trade-off between their trainability and expressivity. Our NASI significantly advances this line of work in (a) providing theoretically grounded performance estimation by NTK (compared with (Mellor et al., 2020; Abdelfattah et al., 2021; Chen et al., 2021)), (b) guaranteeing the transferability of its selected architectures with its provable label- and data-agnostic search under mild conditions (compared with (Mellor et al., 2020; Park et al., 2020; Abdelfattah et al., 2021; Chen et al., 2021)), and (c) achieving SOTA performance in a large search space over various benchmark datasets (compared with (Mellor et al., 2020; Park et al., 2020; Abdelfattah et al., 2021)). 2.2 NEURAL TANGENT KERNEL ( NTK ) . Let a dataset $(\mathcal{X}, Y)$ denote a pair comprising a set $\mathcal{X}$ of $m$ $n_0$-dimensional vectors of input features and a vector $Y \in \mathbb{R}^{mn \times 1}$ concatenating the $m$ corresponding $n$-dimensional output vectors. Let a DNN be parameterized by $\theta_t \in \mathbb{R}^p$ at time $t$ and output a vector $f(\mathcal{X}; \theta_t) \in \mathbb{R}^{mn \times 1}$ (abbreviated to $f_t$) of the predicted values of $Y$. (Footnote 1: More precisely, we approximate the trace norm of the NTK.) Jacot et al.
(2018) have revealed that the training dynamics of DNNs with gradient descent can be characterized by an NTK. Formally, the NTK $\Theta_t(\mathcal{X}, \mathcal{X}) \in \mathbb{R}^{mn \times mn}$ (abbreviated to $\Theta_t$) is defined as $$\Theta_t(\mathcal{X}, \mathcal{X}) \triangleq \nabla_{\theta_t} f(\mathcal{X}; \theta_t)\, \nabla_{\theta_t} f(\mathcal{X}; \theta_t)^{\top}. \quad (1)$$ Given a loss function $\mathcal{L}_t$ at time $t$ and a learning rate $\eta$, the training dynamics of the DNN can then be characterized as $$\nabla_t f_t = -\eta\, \Theta_t(\mathcal{X}, \mathcal{X})\, \nabla_{f_t} \mathcal{L}_t, \qquad \nabla_t \mathcal{L}_t = -\eta\, \nabla_{f_t} \mathcal{L}_t^{\top}\, \Theta_t(\mathcal{X}, \mathcal{X})\, \nabla_{f_t} \mathcal{L}_t. \quad (2)$$ Interestingly, as proven in (Jacot et al., 2018), the NTK stays asymptotically constant during the course of training as the width of the DNN goes to infinity. The NTK at initialization (i.e., $\Theta_0$) can thus characterize the training dynamics and also the performance of infinite-width DNNs. Lee et al. (2019a) have further revealed that, for over-parameterized DNNs, the aforementioned training dynamics can be governed by their first-order Taylor expansion (or linearization) at initialization. In particular, define $$f^{\mathrm{lin}}(x; \theta_t) \triangleq f(x; \theta_0) + \nabla_{\theta_0} f(x; \theta_0)^{\top} (\theta_t - \theta_0) \quad (3)$$ for all $x \in \mathcal{X}$. Then, $f(x; \theta_t)$ and $f^{\mathrm{lin}}(x; \theta_t)$ share similar training dynamics over time, as described formally in Appendix A.2. Besides, following the definition of NTK in (1), this linearization $f^{\mathrm{lin}}$ achieves a constant NTK over time. Given the mean squared error (MSE) loss defined as $\mathcal{L}_t \triangleq m^{-1} \|Y - f(\mathcal{X}; \theta_t)\|_2^2$ and the constant NTK $\Theta_t = \Theta_0$, the loss dynamics in (2) above can be analyzed in closed form while applying gradient descent with learning rate $\eta$ (Arora et al., 2019): $$\mathcal{L}_t = m^{-1} \sum_{i=1}^{mn} (1 - \eta \lambda_i)^{2t} (u_i^{\top} Y)^2, \quad (4)$$ where $\Theta_0 = \sum_{i=1}^{mn} \lambda_i(\Theta_0)\, u_i u_i^{\top}$, and $\lambda_i(\Theta_0)$ and $u_i$ denote the $i$-th largest eigenvalue and the corresponding eigenvector of $\Theta_0$, respectively. 3 NEURAL ARCHITECTURE SEARCH AT INITIALIZATION . 3.1 REFORMULATING NAS VIA NTK . Given a loss function $\mathcal{L}$ and model parameters $\theta(\mathcal{A})$ of architecture $\mathcal{A}$, we denote the training and validation loss as $\mathcal{L}_{\mathrm{train}}$ and $\mathcal{L}_{\mathrm{val}}$, respectively.
NAS is conventionally formulated as a bi-level optimization problem (Liu et al., 2019): $$\min_{\mathcal{A}} \; \mathcal{L}_{\mathrm{val}}(\theta^*(\mathcal{A}); \mathcal{A}) \quad \text{s.t.} \quad \theta^*(\mathcal{A}) \triangleq \arg\min_{\theta(\mathcal{A})} \mathcal{L}_{\mathrm{train}}(\theta(\mathcal{A}); \mathcal{A}). \quad (5)$$ Notably, model training is required to evaluate the validation performance of each candidate architecture in (5). The search efficiency of NAS algorithms (Real et al., 2019; Zoph et al., 2018) based on (5) is thus severely limited by the cost of model training for each candidate architecture. Though recent works (Pham et al., 2018) have considerably reduced this training cost by introducing a one-shot architecture for model parameter sharing, such a one-shot architecture still requires training and hence incurs the training cost. To completely avoid this training cost, we exploit the capability of the NTK to characterize the performance of DNNs at initialization. Specifically, Sec. 2.2 has revealed that the training dynamics of an over-parameterized DNN can be governed by its linearization at initialization. With the MSE loss, the training dynamics of such a linearization are further determined by its constant NTK. Therefore, the training dynamics, and hence the performance, of a DNN can be characterized by the constant NTK of its linearization. However, this constant NTK is computationally costly to evaluate. To this end, we instead characterize the training dynamics (i.e., the MSE) of DNNs in Proposition 1 using the trace norm of the NTK at initialization, which can be efficiently approximated. For simplicity, we use the MSE loss in our analysis. Other widely adopted loss functions (e.g., cross entropy with softmax) can also be applied, as supported in our experiments. Note that throughout this paper, the parameterization and initialization of DNNs follow those of Jacot et al. (2018). For an $L$-layer DNN, we denote the output dimensions of its hidden layers and of the last layer as $n_1 = \cdots = n_{L-1} = k$ and $n_L = n$, respectively. Proposition 1 .
Suppose that $\|x\|_2 \le 1$ for all $x \in \mathcal{X}$ and $Y \in [0,1]^{mn}$ for a given dataset $(\mathcal{X}, Y)$ of size $|\mathcal{X}| = m$, a given $L$-layer neural architecture $\mathcal{A}$ outputs $f_t \in [0,1]^{mn}$ as predicted labels of $Y$ with the corresponding MSE loss $\mathcal{L}_t$, $\lambda_{\min}(\Theta_0) > 0$ for the given NTK $\Theta_0$ w.r.t. $f_t$ at initialization, and gradient descent (or gradient flow) is applied with learning rate $\eta < \lambda_{\max}^{-1}(\Theta_0)$. Then, for any $t \ge 0$, there exists a constant $c_0 > 0$ such that as $k \to \infty$, $$\mathcal{L}_t \le \frac{mn}{2}\left(1 - \eta\bar{\lambda}(\Theta_0)\right)^{q} + \epsilon \quad (6)$$ with probability arbitrarily close to 1, where $q$ is set to $2t$ if $t < 0.5$ and to 1 otherwise, $\bar{\lambda}(\Theta_0) \triangleq (mn)^{-1} \sum_{i=1}^{mn} \lambda_i(\Theta_0)$, and $\epsilon \triangleq 2c_0 \sqrt{n/(mk)}\,(1 + c_0 \sqrt{1/k})$. Its proof is in Appendix A.3. Proposition 1 implies that NAS is realizable at initialization. Specifically, given a fixed and sufficiently large training budget $t$, in order to select the best-performing architecture, we can simply minimize the upper bound on $\mathcal{L}_t$ in (6) over all the candidate architectures in the search space. Here, $\mathcal{L}_t$ can be used to approximate $\mathcal{L}_{\mathrm{val}}$ since both strong theoretical (Mohri et al., 2018) and empirical (Hardt et al., 2016) justifications in the literature have shown that training and validation loss are generally highly related. Hence, (5) can be reformulated as $$\min_{\mathcal{A}} \; \frac{mn}{2}\left(1 - \eta\bar{\lambda}(\Theta_0(\mathcal{A}))\right) + \epsilon \quad \text{s.t.} \quad \bar{\lambda}(\Theta_0(\mathcal{A})) < \eta^{-1}. \quad (7)$$ Note that the constraint in (7) is derived from the condition $\eta < \lambda_{\max}^{-1}(\Theta_0(\mathcal{A}))$ in Proposition 1, and $\eta$ and $\epsilon$ are typically constants during the search process. Following the definition of the trace norm, (7) can be further reduced to $$\max_{\mathcal{A}} \; \|\Theta_0(\mathcal{A})\|_{\mathrm{tr}} \quad \text{s.t.} \quad \|\Theta_0(\mathcal{A})\|_{\mathrm{tr}} < mn\,\eta^{-1}. \quad (8)$$ Notably, $\Theta_0(\mathcal{A})$ only relies on the initialization of $\mathcal{A}$. So, no model training is required in optimizing (8), which achieves our objective of realizing NAS at initialization.
Furthermore, (8) suggests an interesting interpretation of NAS: NAS intends to select architectures with a good trade-off between their model complexity and their optimization behavior during model training. In particular, architectures containing more model parameters will usually achieve a larger $\|\Theta_0(\mathcal{A})\|_{\mathrm{tr}}$ according to the definition in (1), which hence provides an alternative way to measure the complexity of architectures. So, maximizing $\|\Theta_0(\mathcal{A})\|_{\mathrm{tr}}$ leads to architectures with large complexity and therefore strong representation power. On the other hand, the complexity of the selected architectures is limited by the constraint in (8) to ensure well-behaved optimization with a large learning rate $\eta$ during model training. By combining these two effects, the optimization of (8) naturally trades off between the complexity of the selected architectures and their optimization behavior for the best performance. Appendix C.1 will validate such a trade-off. Interestingly, Chen et al. (2021) have revealed a similar insight into NAS. | This paper casts the problem of NAS into a training-free evaluation process by using the neural tangent kernel (NTK). Specifically, the paper argues that the training dynamics and the performance of a DNN can be determined by the constant NTK of its linearization. Moreover, to efficiently evaluate the constant NTK of any network architecture, the paper proposes to use the trace norm of the NTK at initialization as an approximation. Using the NAS method proposed in this paper, one can search for high-quality architectures within few GPU-hours. Interestingly, the proposed method is robust when applied in a data-/label-free search setting. Extensive experiments show that the searched networks have good performance and can be well transferred to other datasets. | SP:41184d03d925e31be2aeb90ff442124504671785
Improving the Accuracy of Learning Example Weights for Imbalance Classification | 1 INTRODUCTION . Classification is a fundamental task in machine learning, but in practical classification applications, the numbers of examples among classes may differ greatly, even by several orders of magnitude. Standard learning methods train the classification model on such an imbalanced data set, which biases the trained model: the model prefers the majority class and easily misclassifies minority-class examples. This class-imbalance problem exists in many domains, such as Twitter spam detection (Li & Liu, 2018) and named entity recognition (Grancharova et al., 2020) in text classification, and object detection (Oksuz et al., 2020) and video surveillance (Wu & Chang, 2003) in image classification. There is a rich line of research on weighting examples to solve the class-imbalance problem. In general, the weight of the minority class is set higher than that of the majority class, so that the bias towards the majority class is alleviated. Typically, the example weight of each class is set to the inverse class frequency (Wang et al., 2017) or the inverse square root of the class frequency (Mahajan et al., 2018). However, the example weights in these methods are designed empirically, so they cannot adapt to different datasets and may perform poorly. Recent work has studied methods that use learning mechanisms to adaptively calculate the example weights. Ren et al. (2018) propose to use a meta-learning paradigm (Hospedales et al., 2020) to learn the weights. In this method, the example weights can be regarded as a meta-learner and the classification model as a learner. The meta-learner guides the learner by weighting the example losses in the model optimization objective.
More specifically, the model objective is to find the optimal model that minimizes the example-weighted loss on the imbalanced training set. Obviously, different weights affect the performance of the optimal model. Which weight values make the corresponding optimal model perform best? This method collects a small balanced validation set and evaluates the weight values through the validation performance of the model. Therefore, the meta-learner objective, namely the meta-objective, seeks the weights that make the optimal model minimize the loss on the balanced validation set. This optimization problem is challenging. The key difficulty is that, in the meta-objective, the weights affect the loss only indirectly through the optimal model, so it is necessary to clearly define the dependence of the optimal model on the weights in the model objective in order to optimize the weights. However, it is expensive to obtain this dependence through multiple gradient descent steps in the model objective. Ren et al. (2018) propose an online approximation method to estimate this dependence: the method trains the model with a single gradient descent step in the model objective and then determines the relationship between the weights and the trained model in that step. Hu et al. (2019) propose to update the example weights iteratively to replace the re-estimation proposed by Ren et al. (2018), but also adopt this local approximation to optimize the weights. However, this approximation only considers the influence of the weights on the trained model in the short term (within one descent step), resulting in inaccurate learning of the weights. In this paper, we first propose a novel learning mechanism that can obtain the precise relationship between the weights and the trained model in the model objective, so that the weights and the model can be optimized more accurately.
In this mechanism, we convert the model objective into an equation relating the current model and the weights. We derive their relationship from this equation, and then use this relationship to optimize the weights in the meta-objective and to update the corresponding model. Since this optimization process always satisfies the equation, we call it learning with a constraint. However, the mechanism only uses the model objective to compute the relationship; it does not optimize the model for the model objective. To solve this problem, we integrate the method of Hu et al. (2019) into our learning mechanism and propose a combined algorithm, in which the method of Hu et al. further optimizes the model on the model objective while our mechanism makes the weights and model learn more accurately. Finally, we conduct extensive experiments to validate the effectiveness of this algorithm. The experimental settings include (1) different domains, namely text and image classification; (2) different scenarios, namely binary and multi-class classification; and (3) different imbalance ratios. The results show that our algorithm not only outperforms the state-of-the-art (SOTA) method in data weighting but also performs best among the other comparison methods across a variety of settings. The remainder of this paper is organized as follows. Section 2 introduces preliminaries of the two objectives and the main idea of Hu et al. (2019). Section 3 presents our mechanism of learning with a constraint and the combined algorithm. Section 4 presents the experimental settings and evaluation results. Section 5 summarizes the related work and Section 6 concludes this paper. 2 PRELIMINARIES AND NOTATIONS. Let $(x, y)$ be an input-target pair; for example, in image classification $x$ is the image and $y$ is its label. Let $D_{train}$ denote the training set, $D_{train} = \{(x_i, y_i), 1 \le i \le N\}$.
Let $D_{val}$ be a small balanced validation set, $D_{val} = \{(x_i, y_i), 1 \le i \le M\}$ where $M \ll N$. We denote the neural network model as $\Phi(x, \theta)$, where $\theta \in \mathbb{R}^K$ is the model parameter, and the predicted value is $\hat{y} = \Phi(x, \theta)$. We use a loss function $f(\hat{y}, y)$ to measure the difference between the predicted value $\hat{y}$ and the target value $y$, and for clarity we write the loss of example $x_i$ as $f_i(\theta)$. The standard training method minimizes the unweighted loss on the training set, $\sum_{i=1}^{N} f_i(\theta)$, so each example has the same weight. However, on an imbalanced data set the model obtained this way is biased towards the majority class. Here, we aim to learn a model parameter $\theta$ that is fair to both the minority and the majority classes by minimizing the weighted loss of the training examples:
$$\theta^*(w) = \arg\min_\theta \sum_{i=1}^{N} w_i f_i(\theta) \quad (1)$$
where $w = (w_1, \dots, w_N)^T$ collects the weights of all training examples. We use $L_{train}$ to denote the weighted loss on the training set $D_{train}$. For a given $w$, we can obtain the corresponding optimal $\theta^*$ from Eq. 1; there is thus a dependence between $\theta^*$ and $w$, which we write as $\theta^* = \theta^*(w)$. Learning to Weight Examples. Ren et al. (2018) proposed a method for learning the weights of the training examples: the optimal $w$ is the one whose model parameter $\theta^*$ from Eq. 1 minimizes the loss on a balanced validation set, meaning the model performs well on the balanced validation set and can fairly distinguish examples from different classes. Formally, the optimal $w$ is given by
$$w^* = \arg\min_w \frac{1}{M} \sum_{i=1}^{M} f_i^v(\theta^*(w)) \quad (2)$$
where the superscript $v$ stands for the validation set. Let $L_{val}$ be the loss on the validation set $D_{val}$. Learning the Parameters. Hu et al. (2019) introduced an algorithm for solving for the model parameter $\theta^*$ and the weights $w^*$ that optimizes $\theta$ and $w$ alternately until convergence.
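To make the two objectives concrete, here is a minimal NumPy sketch of the weighted training loss $L_{train}$ of Eq. 1; the linear model and squared loss are hypothetical stand-ins for $\Phi$ and $f$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: linear model Phi(x, theta) = x @ theta with squared
# loss f_i(theta) = 0.5 * (x_i @ theta - y_i)^2.
N, K = 8, 3
X = rng.normal(size=(N, K))
y = rng.normal(size=N)
theta = np.zeros(K)
w = np.full(N, 1.0 / N)          # example weights, to be learned by the meta-objective

def per_example_loss(theta):
    """f_i(theta) for every training example, as a length-N vector."""
    return 0.5 * (X @ theta - y) ** 2

def L_train(theta, w):
    """Weighted training loss of Eq. 1: sum_i w_i * f_i(theta)."""
    return w @ per_example_loss(theta)
```

Eq. 2 would then be the outer problem of choosing `w` so that the minimizer of `L_train` also minimizes the average loss on a balanced validation set.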
In each iteration, the algorithm uses a gradient descent step on Eq. 1 to approximate the relationship between $\theta$ and $w$, and then computes the gradients $\nabla_w L_{val}$ and $\nabla_\theta L_{train}$ to update $w$ and $\theta$, respectively. More specifically, at the $t$-th iteration the algorithm first obtains the approximate relationship between $\theta$ and $w$ from the $t$-th gradient descent step on Eq. 1. Define the matrix $F(\theta) = (\nabla f_1(\theta), \dots, \nabla f_N(\theta))$, whose $i$-th column is the derivative of $f_i(\theta)$ with respect to $\theta$, so that $\nabla_\theta L_{train} = F(\theta) w$. The $t$-th gradient descent step on $\theta$ is then
$$\hat{\theta}_{t+1} = \theta_t - \eta_\theta F(\theta_t) w_t \quad (3)$$
where $\eta_\theta$ is the step size on $\theta$. To avoid very expensive computations, the algorithm ignores the influence of $w$ on $\theta_t$; therefore, within this single gradient descent step, $\hat{\theta}_{t+1}$ depends linearly on $w$. Based on this linear dependence, the algorithm computes the gradient $\nabla_w L_{val}$, updates $w$ by gradient descent, and then updates $\theta$ again so that it performs better on the validation set. Substituting the updated $\hat{\theta}_{t+1}$ into Eq. 2 gives $L_{val} = \frac{1}{M} \sum_{i=1}^{M} f_i^v(\hat{\theta}_{t+1}(w))$: the weights $w$ act on $\hat{\theta}_{t+1}$ and thereby affect $L_{val}$. Combining this with Eq. 3, we can compute $\nabla_w L_{val} = (\nabla_w \hat{\theta}_{t+1})^T \nabla_{\hat{\theta}_{t+1}} L_{val} = -\eta_\theta F(\theta_t)^T \nabla_{\hat{\theta}_{t+1}} L_{val}$, so the update of $w$ at step $t$ is
$$w_{t+1} = w_t + \eta_w \eta_\theta F(\theta_t)^T \nabla_{\hat{\theta}_{t+1}} L_{val} \quad (4)$$
where $\eta_w$ is the step size on $w$. By standard gradient descent theory, when $\eta_w$ is appropriately small, $L_{val}(w_{t+1}) \le L_{val}(w_t)$; that is, updating $\theta$ with $w_{t+1}$ performs better than with $w_t$. The algorithm therefore substitutes the updated $w_{t+1}$ into Eq. 3 and obtains the new update of $\theta$:
$$\theta_{t+1} = \theta_t - \eta_\theta F(\theta_t) w_{t+1} \quad (5)$$
where $\theta_{t+1}$ satisfies $L_{val}(\theta_{t+1}) \le L_{val}(\hat{\theta}_{t+1})$; that is, $\theta_{t+1}$ has better validation performance than $\hat{\theta}_{t+1}$.
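One iteration of the alternating scheme of Eqs. 3-5 can be sketched in a few lines of NumPy; the linear model with squared loss is a hypothetical stand-in, chosen so that $F(\theta)$ has a closed form (column $i$ is the residual of example $i$ times $x_i$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear model and squared loss; F(theta) stacks the
# per-example gradients grad f_i(theta) as columns, as in Section 2.
N, M, K = 8, 4, 3
X, y = rng.normal(size=(N, K)), rng.normal(size=N)
Xv, yv = rng.normal(size=(M, K)), rng.normal(size=M)
eta_theta, eta_w = 0.1, 0.1

def F(theta):
    """K x N matrix of per-example gradients: column i is (x_i@theta - y_i) x_i."""
    return X.T * (X @ theta - y)

def grad_val(theta):
    """Gradient of L_val = (1/M) sum_i f_i^v(theta) w.r.t. theta."""
    return Xv.T @ (Xv @ theta - yv) / M

theta = np.zeros(K)
w = np.full(N, 1.0 / N)

# One iteration of the alternating scheme:
theta_hat = theta - eta_theta * F(theta) @ w          # Eq. 3: tentative model step
g = grad_val(theta_hat)
w = w + eta_w * eta_theta * F(theta).T @ g            # Eq. 4: weight update
theta = theta - eta_theta * F(theta) @ w              # Eq. 5: model step with new w
```

The key approximation is visible in the code: only the single step from `theta` to `theta_hat` carries information about how `w` influences the model.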
Finally, the algorithm repeatedly evaluates Eqs. 3, 4 and 5, alternately optimizing $\theta$ and $w$ until convergence. 3 NEW METHOD OF LEARNING THE PARAMETERS. In this section, we introduce a new method to learn the model parameter $\theta^*$ and the weights $w^*$ of Eq. 1 and Eq. 2. First, in Section 3.1, we propose to learn $\theta$ and $w$ under a constraint, which optimizes them accurately. Then, in Section 3.2, we propose a combined method for training $\theta$ and $w$ that gives the model parameter $\theta$ better performance. 3.1 LEARNING WITH A CONSTRAINT. In this section, we first analyze the difficulty of solving for $\theta^*$ and $w^*$. Gradient-based optimization is the standard tool in machine learning, so we first need the gradients $\nabla_\theta L_{train}$ and $\nabla_w L_{val}$. From Eq. 2, $\nabla_w L_{val} = (\nabla_w \theta^*)^T \nabla_{\theta^*} L_{val}$. However, the function $\theta^*(w)$ cannot be written explicitly, so $\nabla_w L_{val}$ cannot be computed directly. Previous work obtained the relationship between $\theta$ and $w$ from the gradient descent process of $\theta$, considering the influence of $w$ on $\theta$ only within a single descent step; gradients and updates computed from this relationship are imprecise. Here, we obtain the relationship between $\theta$ and $w$ from a new perspective. First, consider the gradient $\nabla_\theta L_{train}$, that is,
$$\nabla_\theta L_{train} = F(\theta) w = c \quad (6)$$
where $c$ is the gradient value.

Algorithm 1: Learning to Weight Examples Using a Combination Method
Input: the network model parameter $\theta$; the weights of the training examples $w$; training set $D_{train}$; validation set $D_{val}$; the number of iterations of the combination method $T$; the number of iterations of our method $T'$
1 Initialize the model parameter $\theta$ and the weights $w$
2 for $t = 0 \dots T-1$ do
3   Calculate the relationship between $\theta$ and $w$ on $D_{train}$ through Eq. 3
4   Optimize $w$ on $D_{val}$ through Eq. 4
5   Update $\theta$ through Eq. 5
6   for $t' = 0 \dots T'-1$ do
7     Calculate the derivative $\nabla_w \theta$ on $D_{train}$ through Eq. 7
8     Optimize $w$ on $D_{val}$ through Eq. 8
9     Update $\theta$ through Eq. 9
Output: trained model parameter $\theta^*$ and weights $w^*$

We can see that changing the value of $w$ yields a corresponding $\theta$ satisfying Eq. 6; that is, Eq. 6 defines a functional relationship between $\theta$ and $w$. Because all $\theta$ and $w$ satisfying this equation share the same value of $\nabla_\theta L_{train}$, we also call Eq. 6 a constraint on $\theta$ and $w$. In particular, the optimal model parameter $\theta^*$ and $w$ satisfy the constraint $F(\theta^*) w = 0$. We can then use the constraint to derive a precise relationship between $\theta$ and $w$. The network model may be very complex, so we cannot write the functional form of $\theta(w)$ explicitly from the constraint. However, by the implicit function theorem, the derivative of $\theta$ with respect to $w$ in Eq. 6 can be obtained as
$$\nabla_w \theta = -[\nabla_\theta (F(\theta) w)]^{-1} F(\theta) = -H^{-1} F(\theta)$$
where $H \in \mathbb{R}^{K \times K}$ is the Hessian matrix, namely the second derivative of $L_{train}$ with respect to $\theta$. Computing an exact Hessian is very expensive, especially for modern networks with huge numbers of parameters; moreover, we need the inverse of $H$ rather than $H$ itself. We therefore adopt a diagonal approximation of $H$ (Bishop, 2006): we only compute the diagonal elements of $H$, and inverting the resulting diagonal matrix is trivial, amounting to taking the reciprocal of each element. Let $h \in \mathbb{R}^K$ be the reciprocal of the diagonal elements of $H$. The derivative is then evaluated as
$$\nabla_w \theta = -\mathrm{diag}(h) F(\theta) \quad (7)$$
Next, we use this derivative to compute the gradient $\nabla_w L_{val}$ and then update $w$ and $\theta$. The update process always satisfies the constraint of Eq. 6, so we call it learning with a constraint. Combining Eq. 7, we have $\nabla_w L_{val} = -F(\theta)^T \mathrm{diag}(h) \nabla_\theta L_{val}$.
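Under the same kind of hypothetical linear-model setup as before (an illustrative stand-in, not the paper's experimental model), Eq. 7 and the resulting gradient $\nabla_w L_{val}$ reduce to a few array operations; the diagonal Hessian approximation makes the inverse an elementwise reciprocal:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear model with squared loss, so that F(theta) and the
# Hessian of L_train have closed forms.
N, M, K = 8, 4, 3
X, y = rng.normal(size=(N, K)), rng.normal(size=N)
Xv, yv = rng.normal(size=(M, K)), rng.normal(size=M)

def F(theta):
    """K x N matrix of per-example gradients: column i is (x_i@theta - y_i) x_i."""
    return X.T * (X @ theta - y)

def grad_val(theta):
    """Gradient of the balanced validation loss L_val w.r.t. theta."""
    return Xv.T @ (Xv @ theta - yv) / M

theta = rng.normal(size=K)
w = np.full(N, 1.0 / N)

# Diagonal of H (Hessian of L_train): for squared loss it is sum_i w_i * x_ij^2.
H_diag = (X ** 2).T @ w
h = 1.0 / H_diag                                 # reciprocal of the diagonal elements

dw_theta = -h[:, None] * F(theta)                # Eq. 7: grad_w theta = -diag(h) F(theta)
grad_w_Lval = -F(theta).T @ (h * grad_val(theta))  # = -F(theta)^T diag(h) grad_theta L_val
```

Note that `diag(h)` never needs to be materialized as a K x K matrix; elementwise multiplication by `h` is equivalent and cheap.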
Thus, the update of $w$ is
$$w' = w + \eta_w' F(\theta)^T \mathrm{diag}(h) \nabla_\theta L_{val} \quad (8)$$
where $\eta_w'$ is the step size. We then use the updated $w'$ to compute the corresponding $\theta'$ under the constraint. Since the explicit functional form of $\theta(w)$ in Eq. 6 is unknown, we approximate $\theta'$ using the first-order derivative. Combining Eq. 7, $\theta'$ is evaluated as
$$\theta' \approx \theta + \nabla_w \theta \, (w' - w) = \theta - \mathrm{diag}(h) F(\theta) (w' - w) \quad (9)$$
Finally, while satisfying the constraint, we repeatedly optimize $w$ and $\theta$ via Eq. 8 and Eq. 9 until convergence. A detailed proof of this convergence can be found in Theorem 1 in Appendix A.2. | The paper presents a new method for learning example weights together with the parameters of a deep neural network. The difference between the proposed method and previous work is that they use a constraint to tie together the values of the parameters and the weights as they do the joint optimization through gradient descent. They do extensive experiments using both text and image datasets, with different imbalance ratios and show that the proposed method outperforms the state-of-the-art in terms of accuracy on a (balanced) test set. | SP:75dac9a02cdf66c38e8873c2e2c7bbd79be25770 |
This paper proposes a new approach to learn sample weights aiming to solve the imbalance classification problem. The problem is challenging because the learnable model parameters and sample weights are coupled and cannot be directly optimized together. The previous method learns the parameters and weights in an alternating fashion, which solves the problem approximately. The authors argue that there exists a constraint allowing both to be learned together, so the paper proposes a new algorithm, combined with the previous method, to learn the sample weights. Experimental results show that the proposed method outperforms other competing methods, especially in the extreme imbalance cases (1:100). | SP:75dac9a02cdf66c38e8873c2e2c7bbd79be25770 |
Improving the Accuracy of Learning Example Weights for Imbalance Classification | 1 INTRODUCTION . Classification is a fundamental task in machine learning , but in practical classification applications , the number of examples among classes may differ greatly , even by several orders of magnitude . Standard learning methods train the classification model on such an imbalanced data set , which makes the trained model biased . This bias is that the model will prefer the majority class and easily misclassify the minority class examples . This class-imbalance problem exists in many domains , such as Twitter spam detection ( Li & Liu , 2018 ) , named entity recognition ( Grancharova et al. , 2020 ) in text classification , and object detection ( Oksuz et al. , 2020 ) , video surveillance ( Wu & Chang , 2003 ) in image classification . There are very rich research lines on using the methods of weighting examples to solve the class imbalance problem . In general , the weight of the minority class is higher than that of the majority class , so that the bias towards the majority class is alleviated . Typically , the example weight value of each class is often set to inverse class frequency ( Wang et al. , 2017 ) or inverse square root of class frequency ( Mahajan et al. , 2018 ) . However , the example weights in these methods are designed empirically , hence they can not be adapted to different datasets and may perform poorly . Recent work has studied the methods of using learning mechanisms to adaptively calculate the example weights . Ren et al . ( 2018 ) propose to use a meta-learning paradigm ( Hospedales et al. , 2020 ) to learn the weights . In this method , the example weights can be regarded as a meta-learner and the classification model is a learner . The meta-learner guides the learner to learn by weighting the example loss in the model optimization objective . 
More specifically , the model objective is to get the optimal model that minimizes the example-weighted loss of the imbalanced training set . Obviously , different weights will affect the performance of the optimal model . Which weight values make the corresponding optimal model the best ? This method collects a small balanced validation set and evaluates the weight values through the validation performance of the model . Therefore , the meta-learner objective , namely meta-objective , gives the best weights that make the optimal model minimize the loss of the balanced validation set . This optimization problem is challenging . The key is that , in the meta-objective , the weights indirectly affect the loss through the optimal model , so it ∗Corresponding author is necessary to clearly define the dependence of the weights and the optimal model in the model objective for optimizing the weights . However , it is expensive to get this dependence through multiple gradient descent steps in the model objective . Ren et al . ( 2018 ) propose an online approximation method to estimate this dependence , that is , the method trains the model using a gradient descent step in the model objective and then can determine the relationship between the weights and the trained model in this step . Hu et al . ( 2019 ) propose to update the example weights iteratively to replace the re-estimation proposed by Ren et al . ( 2018 ) , but also adopt the local approximation to optimize the weights . However , this approximation only considers the influence of the weights on the trained model in a short term ( in a descent step ) , resulting in inaccurate learning of the weights . In this paper , we firstly propose a novel learning mechanism that can obtain the precise relationship between the weights and the trained model in the model objective , so that the weights and model can be optimized more accurately . 
In this mechanism , we convert the model objective into an equation of the current model and weights . Then , we derive their relationship from this equation , and then we use this relationship to optimize the weights in the meta-objective and update the corresponding model . Since this optimization process always satisfies this equation , we call it learning with a constraint . However , the mechanism only uses the model objective to calculate the relationship but does not optimize the model for the model objective . To solve this problem , we integrate the method proposed by Hu et al . ( 2019 ) into our learning mechanism and propose a combined algorithm . In this algorithm , the method of Hu et al . can help to further optimize the model in the model objective , and our learning mechanism can make the weights and model learn more accurately . Finally , we conduct a lot of experiments to validate the effectiveness of this algorithm . The experimental settings include ( 1 ) different domains , namely text and image classification ; ( 2 ) different scenarios , namely binary and multi-class classification , ( 3 ) different imbalance ratios . The results show that our algorithm not only outperforms the state-of-the-art ( SOTA ) method in data weighting but also performs best among other comparison methods in varieties of settings . The remainder of this paper is organized as follows . Section 2 introduces preliminaries of the two objectives and the main idea of Hu et al . ( 2019 ) . Section 3 presents our mechanism of learning with a constraint and the combined algorithm . Section 4 presents the experimental settings and evaluation results . Section 5 summarizes the related work and Section 6 concludes this paper . 2 PRELIMINARIES AND NOTATIONS . Let ( x , y ) be the input and target pair . For example , in image classification , x is the image and y is the image label . Let Dtrain denote the train set , and Dtrain = { ( xi , yi ) , 1 6 i 6 N } . 
Let Dval be a small balanced validation set , andDval = { ( xi , yi ) , 1 6 i 6M } whereM N . We denote neural network model as Φ ( x , θ ) , where θ ∈ RK is the model parameter . The predicted value ŷ = Φ ( x , θ ) . We use loss function f ( ŷ , y ) to measure the difference between predicted value ŷ and target value y , and the loss function of data xi is defined as fi ( θ ) for clarity . Standard training method is to minimize the expected loss on the training set : ∑N i=1 fi ( θ ) , and each example has same weight . However , for an imbalanced data set , the model obtained by this method will be biased towards the majority class . Here , we aim to learn a model parameter θ that is fair to the minority class and the majority class by minimizing the weighted loss of training examples : θ∗ ( w ) = arg minθ N∑ i=1 wifi ( θ ) ( 1 ) where w = ( w1 , ... , wN ) T is the weights of all training examples . We use Ltrain to represent the weighted loss on the training set Dtrain . For a given w , we can obtain the corresponding optimal θ∗ from Eq.1 . Thus , there is a dependence between θ∗ and w and we write it as θ∗ = θ∗ ( w ) . Learning to Weight Examples The recent work ( Ren et al. , 2018 ) proposed a method of learning the weights of training examples . In this method , the optimal w is to make the model parameter θ∗ obtained from Eq.1 minimize the loss on a balanced validation set . It means that this model performs well on a balanced validation set , and it can fairly distinguish examples from different classes . Formally , the optimal w is given as w∗ = arg minw 1 M M∑ i=1 fvi ( θ ∗ ( w ) ) ( 2 ) where the superscript v stands for validation set . Let Lval be the loss on the validation set Dval . Learning the Parameters The recent work ( Hu et al. , 2019 ) introduced an algorithm of solving the model parameter θ∗ and weight w∗ . The algorithm optimizes θ and w alternately until convergence . 
In each iteration, the algorithm uses a gradient descent step on Eq. 1 to approximate the relationship between θ and w, and then calculates the gradients ∇_w Lval and ∇_θ Ltrain to update w and θ, respectively. More specifically, at the t-th iteration, the algorithm first approximates the relationship between θ and w through the t-th gradient descent step on Eq. 1. Define the matrix F(θ) = (∇f_1(θ), ..., ∇f_N(θ)), whose i-th column is the derivative of f_i(θ) with respect to θ; the gradient of Ltrain with respect to θ is then ∇_θ Ltrain = F(θ)w. The t-th gradient descent step on θ is

θ̂_{t+1} = θ_t − η_θ F(θ_t) w_t    (3)

where η_θ is the step size on θ. To avoid very expensive computation, the algorithm ignores the influence of w on θ_t. Therefore, within a single gradient descent step, θ̂_{t+1} depends linearly on w. Based on this linear dependence, the algorithm calculates the gradient ∇_w Lval, uses gradient descent to update w, and then updates θ again to make it perform better on the validation set. Substituting the updated θ̂_{t+1} into Eq. 2 gives Lval = (1/M) ∑_{i=1}^M f_i^v(θ̂_{t+1}(w)). We can observe that w acts on θ̂_{t+1} and thereby affects Lval. Thus, combining Eq. 3, we can calculate the gradient ∇_w Lval = (∇_w θ̂_{t+1})^T ∇_{θ̂_{t+1}} Lval = −η_θ F(θ_t)^T ∇_{θ̂_{t+1}} Lval, so the update of w at step t is

w_{t+1} = w_t + η_w η_θ F(θ_t)^T ∇_{θ̂_{t+1}} Lval    (4)

where η_w is the step size on w. By standard gradient descent arguments, when η_w is appropriately small, Lval(w_{t+1}) ≤ Lval(w_t); this means that updating θ with w_{t+1} performs better than with w_t. Therefore, the algorithm substitutes the updated w_{t+1} into Eq. 3 and obtains the new update on θ:

θ_{t+1} = θ_t − η_θ F(θ_t) w_{t+1}    (5)

where θ_{t+1} satisfies Lval(θ_{t+1}) ≤ Lval(θ̂_{t+1}), that is, θ_{t+1} has better validation performance than θ̂_{t+1}.
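As a concrete illustration, one pass of this alternating scheme (Eqs. 3-5) can be sketched for a toy linear least-squares model. All names here are invented for the sketch, and the non-negativity clip and normalization of w are a common practical detail (used, e.g., by Ren et al., 2018) rather than part of Eqs. 3-5.

```python
import numpy as np

# Toy sketch of the alternating update (Eqs. 3-5) on a linear least-squares
# model; names, the clip, and the normalization of w are illustrative
# assumptions, not the paper's implementation.
rng = np.random.default_rng(0)
N, M, K = 8, 4, 3                                   # train size, val size, params
X_tr, y_tr = rng.normal(size=(N, K)), rng.normal(size=N)
X_va, y_va = rng.normal(size=(M, K)), rng.normal(size=M)

def grad_f(theta, x, y):
    """Gradient of the squared loss f(theta) = 0.5 * (x @ theta - y)**2."""
    return (x @ theta - y) * x

theta = np.zeros(K)
w = np.full(N, 1.0 / N)
eta_th, eta_w = 0.1, 0.05

for _ in range(20):
    # Columns of F are the per-example gradients grad f_i(theta)   (K x N)
    F = np.stack([grad_f(theta, x, y) for x, y in zip(X_tr, y_tr)], axis=1)
    theta_hat = theta - eta_th * F @ w                      # Eq. 3: tentative step
    grad_val = np.mean([grad_f(theta_hat, x, y)
                        for x, y in zip(X_va, y_va)], axis=0)
    w = np.clip(w + eta_w * eta_th * F.T @ grad_val, 0.0, None)  # Eq. 4 (+ clip)
    w = w / max(w.sum(), 1e-8)                              # normalize (practical detail)
    theta = theta - eta_th * F @ w                          # Eq. 5: step with new w
```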
Finally, the algorithm repeatedly applies Eqs. 3, 4 and 5, alternately optimizing θ and w until convergence. 3 NEW METHOD OF LEARNING THE PARAMETERS. In this section, we introduce a new method to learn the model parameter θ∗ and the weight w∗ in Eq. 1 and Eq. 2. First, in Section 3.1, we propose to learn θ and w under a constraint, which allows θ and w to be optimized accurately. Then, in Section 3.2, we propose a combined method to train θ and w so that the model parameter θ achieves better performance. 3.1 LEARNING WITH A CONSTRAINT. In this section, we first analyze the difficulty of solving for θ∗ and w∗. Gradient-based optimization is a commonly used method in machine learning, so we first need to calculate the gradients ∇_θ Ltrain and ∇_w Lval. Based on Eq. 2, we have ∇_w Lval = (∇_w θ∗)^T ∇_{θ∗} Lval. However, it is difficult to give the form of the function θ∗(w) explicitly, so ∇_w Lval cannot be calculated directly. The previous work obtained the relationship between θ and w through the gradient descent process on θ, and only considered the influence of w on θ within a single gradient descent step. Calculating the gradient and updating the parameters based on this relationship is therefore imprecise. Here, we obtain the relationship between θ and w from a new perspective. First, we consider the gradient ∇_θ Ltrain, that is,

∇_θ Ltrain = F(θ) w = c    (6)

where c is the value of the gradient. We can see that, as w changes, a corresponding θ can be found so that Eq. 6 still holds; in other words, Eq. 6 defines a functional relationship between θ and w. Because all pairs (θ, w) satisfying this equation share the same value of ∇_θ Ltrain, we also call Eq. 6 a constraint on θ and w. In particular, the optimal model parameter θ∗ and w satisfy the constraint F(θ∗) w = 0.

Algorithm 1: Learning to Weight Examples Using a Combination Method
Input: the network model parameter θ; the weights of the training examples w; training set Dtrain; validation set Dval; the number of iterations of the combination method T; the number of iterations of our method T′
1  Initialize the model parameter θ and the weight w
2  for t = 0 ... T − 1 do
3    Calculate the relationship between θ and w on Dtrain through Eq. 3
4    Optimize w on Dval through Eq. 4
5    Update θ through Eq. 5
6    for t′ = 0 ... T′ − 1 do
7      Calculate the derivative ∇_w θ on Dtrain through Eq. 7
8      Optimize w on Dval through Eq. 8
9      Update θ through Eq. 9
Output: trained model parameter θ∗ and weight w∗

Then, we can make use of the constraint to derive a precise relationship between θ and w. Our network model may be very complex, so we cannot explicitly give the functional form relating θ and w under the constraint. However, by the implicit function theorem, the derivative of θ with respect to w in Eq. 6 is

∇_w θ = −[∇_θ(F(θ) w)]^{-1} F(θ) = −H^{-1} F(θ)

where H ∈ R^{K×K} is the Hessian matrix, namely the second derivative of Ltrain with respect to θ. However, computing an exact Hessian is very expensive, especially since modern network models have a huge number of parameters; moreover, we require the inverse of H rather than H itself. Therefore, we adopt a diagonal approximation of H (Bishop, 2006): we only compute the diagonal elements of H, and the inverse is then obtained trivially by taking the reciprocals of these diagonal elements. Let h ∈ R^K be the vector of reciprocals of the diagonal elements of H. The derivative is then evaluated as

∇_w θ = −diag(h) F(θ)    (7)

Next, we use this derivative to calculate the gradient ∇_w Lval, and then update w and θ. The update process always satisfies the constraint in Eq. 6, so we call it learning with a constraint. Combining Eq. 7, we have ∇_w Lval = −F(θ)^T diag(h) ∇_θ Lval.
Thus, the update of w is

w′ = w + η′_w F(θ)^T diag(h) ∇_θ Lval    (8)

where η′_w is the step size. Then, we use the updated w′ to find the corresponding θ′ under the constraint. Since we do not know the explicit functional form relating θ and w in Eq. 6, we use the first-order derivative to approximate θ′. Combining Eq. 7, θ′ is evaluated as

θ′ ≈ θ + ∇_w θ (w′ − w) = θ − diag(h) F(θ) (w′ − w)    (9)

Finally, while always satisfying the constraint, we repeatedly optimize w and θ according to Eq. 8 and Eq. 9 until convergence. A detailed proof of this convergence can be found in Theorem 1 in Appendix A.2. | The authors propose an approach to tackle the class imbalance problem widely present in the machine learning domain. For this purpose, they first propose a mechanism to precisely learn the relationship between the weights and the trained model in the model objective. This allows the weights and models to be optimized more accurately. They then combine this process with the mechanism proposed by Hu et al., which helps the model learn the model objective better. Finally, they show the efficacy of their proposed method through experiments. | SP:75dac9a02cdf66c38e8873c2e2c7bbd79be25770 |
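A single constrained update (Eqs. 7-9) can likewise be sketched for the weighted linear least-squares case, where the Hessian of the weighted loss is Σ_i w_i x_i x_i^T, so its diagonal is cheap to form. The variable names and the damping floor on the diagonal are assumptions made for this sketch, not details from the paper.

```python
import numpy as np

# Illustrative single step of the constrained update (Eqs. 7-9) for weighted
# linear least squares: f_i(theta) = 0.5 * (x_i @ theta - y_i)**2, so
# grad f_i = (x_i @ theta - y_i) * x_i and H = sum_i w_i * x_i x_i^T.
# Names and the damping floor on diag(H) are assumptions for the sketch.
rng = np.random.default_rng(1)
N, M, K = 8, 4, 3
X_tr, y_tr = rng.normal(size=(N, K)), rng.normal(size=N)
X_va, y_va = rng.normal(size=(M, K)), rng.normal(size=M)
theta = 0.1 * rng.normal(size=K)
w = np.full(N, 1.0 / N)
eta_w = 0.05

resid = X_tr @ theta - y_tr
F = (resid[:, None] * X_tr).T                  # K x N, columns are grad f_i(theta)
diag_H = (X_tr**2).T @ w                       # diagonal of H = sum_i w_i x_i x_i^T
h = 1.0 / np.maximum(diag_H, 1e-3)             # reciprocal diagonal (damped)
grad_val = X_va.T @ (X_va @ theta - y_va) / M  # gradient of L_val at theta

# Eq. 8: w' = w + eta'_w * F(theta)^T diag(h) grad L_val
w_new = w + eta_w * (F.T @ (h * grad_val))
# Eq. 9: theta' ~= theta - diag(h) F(theta) (w' - w)
theta_new = theta - h * (F @ (w_new - w))
```

Repeating this pair of updates until convergence gives the "learning with a constraint" loop described above.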
Offline Reinforcement Learning with In-sample Q-Learning | Offline reinforcement learning requires reconciling two conflicting aims: learning a policy that improves over the behavior policy that collected the dataset, while at the same time minimizing the deviation from the behavior policy so as to avoid errors due to distributional shift. This trade-off is critical, because most current offline reinforcement learning methods need to query the value of unseen actions during training to improve the policy, and therefore need to either constrain these actions to be in-distribution, or else regularize their values. We propose a new offline RL method that never needs to evaluate actions outside of the dataset, but still enables the learned policy to improve substantially over the best behavior in the data through generalization. The main insight in our work is that, instead of evaluating unseen actions from the latest policy, we can approximate the policy improvement step implicitly by treating the state value function as a random variable, with randomness determined by the action (while still integrating over the dynamics to avoid excessive optimism), and then taking a state-conditional upper expectile of this random variable to estimate the value of the best actions in that state. This leverages the generalization capacity of the function approximator to estimate the value of the best available action at a given state without ever directly querying a Q-function with this unseen action. Our algorithm alternates between fitting this upper expectile value function and backing it up into a Q-function, without any explicit policy. Then, we extract the policy via advantage-weighted behavioral cloning, which also avoids querying out-of-sample actions. We dub our method in-sample Q-learning (IQL). IQL is easy to implement, computationally efficient, and only requires fitting an additional critic with an asymmetric L2 loss.
IQL demonstrates state-of-the-art performance on D4RL, a standard benchmark for offline reinforcement learning. We also demonstrate that IQL achieves strong performance when fine-tuned with online interaction after offline initialization. 1 INTRODUCTION. Offline reinforcement learning (RL) addresses the problem of learning effective policies entirely from previously collected data, without online interaction (Fujimoto et al., 2019; Lange et al., 2012). This is very appealing in a range of real-world domains, from robotics to logistics and operations research, where real-world exploration with untrained policies is costly or dangerous, but prior data is available. However, this also carries with it major challenges: improving the policy beyond the level of the behavior policy that collected the data requires estimating values for actions other than those seen in the dataset, and this, in turn, requires trading off policy improvement against distributional shift, since the values of actions that are too different from those in the data are unlikely to be estimated accurately. Prior methods generally address this by either constraining the policy to limit how far it deviates from the behavior policy (Fujimoto et al., 2019; Wu et al., 2019; Fujimoto & Gu, 2021; Kumar et al., 2019; Nair et al., 2020; Wang et al., 2020), or by regularizing the learned value functions to assign low values to out-of-distribution actions (Kumar et al., 2020; Kostrikov et al., 2021). Nevertheless, this imposes a trade-off between how much the policy improves and how vulnerable it is to misestimation due to distributional shift. Can we devise an offline RL method that avoids this issue by never needing to directly query or estimate values for actions that were not seen in the data?
In this work, we start from the observation that the in-distribution constraints widely used in prior work might not be sufficient to avoid value function extrapolation, and we ask whether it is possible to learn an optimal policy with in-sample learning, without ever querying the values of any unseen actions. The key idea in our method is to approximate an upper expectile of the distribution over values with respect to the distribution of dataset actions for each state. We alternate between fitting this value function with expectile regression and using it to compute Bellman backups for training the Q-function. We show that we can do this simply by modifying the loss function in a SARSA-style TD backup, without ever using out-of-sample actions in the target value. Once this Q-function has converged, we extract the corresponding policy using advantage-weighted behavioral cloning. This approach does not require explicit constraints or explicit regularization of out-of-distribution actions during value function training, though our policy extraction step does implicitly enforce a constraint, as discussed in prior work on advantage-weighted regression (Peters & Schaal, 2007; Peng et al., 2019; Nair et al., 2020; Wang et al., 2020). Our main contribution is in-sample Q-learning (IQL), a new offline RL algorithm that avoids ever querying values of unseen actions while still being able to perform multi-step dynamic programming updates. Our method is easy to implement by making a small change to the loss function in a simple SARSA-like TD update, and is computationally very efficient. Furthermore, our approach demonstrates state-of-the-art performance on D4RL, a popular benchmark for offline reinforcement learning. In particular, our approach significantly improves over the prior state of the art on the challenging Ant Maze tasks, which require "stitching" several sub-optimal trajectories.
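For concreteness, the asymmetric L2 (expectile) loss at the heart of this value-fitting step can be sketched as follows; the helper name and the toy fitting loop are illustrative, not the paper's code.

```python
import numpy as np

# Sketch of the asymmetric L2 (expectile) loss: for tau > 0.5, positive
# residuals are up-weighted, so regression targets an upper expectile.
# The function name and the toy fitting loop below are illustrative.
def expectile_loss(diff, tau):
    weight = np.where(diff > 0, tau, 1.0 - tau)
    return weight * diff**2

# Fit a scalar upper expectile of some sampled "values" by gradient descent.
samples = np.array([0.0, 1.0, 2.0, 10.0])
tau, v = 0.7, 0.0
for _ in range(2000):
    diff = samples - v
    grad_v = -2.0 * np.mean(np.where(diff > 0, tau, 1.0 - tau) * diff)
    v -= 0.05 * grad_v
# v converges above the mean (3.25) but below the max, acting as a soft maximum.
```

At tau = 0.5 this reduces to ordinary mean-squared regression; as tau approaches 1 the fitted value approaches the largest sample, which is exactly the "best available action" effect described above.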
Finally, we demonstrate that our approach is suitable for fine-tuning: after initialization from offline RL, IQL is capable of improving policy performance using additional online interactions. 2 RELATED WORK. A significant portion of recently proposed offline RL methods are based on either constrained or regularized approximate dynamic programming (e.g., Q-learning or actor-critic methods), with the constraint or regularizer serving to limit deviation from the behavior policy. We will refer to these methods as "multi-step dynamic programming" algorithms, since they perform true dynamic programming for multiple iterations and can therefore, in principle, recover the optimal policy if provided with high-coverage data. The constraints can be implemented via an explicit density model (Wu et al., 2019; Fujimoto et al., 2019; Kumar et al., 2019; Ghasemipour et al., 2021), implicit divergence constraints (Nair et al., 2020; Wang et al., 2020; Peters & Schaal, 2007; Peng et al., 2019; Siegel et al., 2020), or by adding a supervised learning term to the policy improvement objective (Fujimoto & Gu, 2021). Several works have also proposed to directly regularize the Q-function to produce low values for out-of-distribution actions (Kostrikov et al., 2021; Kumar et al., 2020; Fakoor et al., 2021). Our method is also a multi-step dynamic programming algorithm. However, in contrast to prior works, our method completely avoids directly querying the learned Q-function with unseen actions during training, removing the need for any constraint during this stage, though the subsequent policy extraction, which is based on advantage-weighted regression (Peng et al., 2019; Nair et al., 2020), does apply an implicit constraint. Importantly, this policy does not actually influence value function training.
In contrast to multi-step dynamic programming methods, several recent works have proposed methods that either rely on a single step of policy iteration, fitting the value function or Q-function of the behavior policy and then extracting the corresponding greedy policy (Peng et al., 2019; Brandfonbrener et al., 2021; Gulcehre et al., 2021), or avoid value functions completely and utilize behavioral cloning-style objectives (Chen et al., 2021). We collectively refer to these as "single-step" approaches. These methods avoid needing to query unseen actions as well, since they either use no value function at all or learn the value function of the behavior policy. Although these methods are simple to implement and effective on the MuJoCo locomotion tasks in D4RL, we show that such single-step methods perform very poorly on more complex datasets in D4RL, which require combining parts of suboptimal trajectories ("stitching"). Prior multi-step dynamic programming methods perform much better in such settings, as does our method. We discuss this distinction in more detail in Section 5.1. Our method also shares the simplicity and computational efficiency of single-step approaches, providing an appealing combination of the strengths of both types of methods. Our method is based on estimating the characteristics of a random variable. Several recent works involve approximating statistical quantities of the value function distribution. In particular, quantile regression (Koenker & Hallock, 2001) has been previously used in reinforcement learning to estimate the quantile function of a state-action value function (Dabney et al., 2018a;b; Kuznetsov et al., 2020). Although our method is related, in that we perform expectile regression, our aim is not to estimate the distribution of values that results from stochastic transitions, but rather to estimate expectiles of the state value function with respect to random actions.
This is a very different statistic: our aim is not to determine how the Q-value can vary with different future outcomes, but how the Q-value can vary with different actions while averaging together future outcomes due to stochastic dynamics. While prior work on distributional RL can also be used for offline RL, it would suffer from the same action extrapolation issues as other methods and would require similar constraints or regularization, while our method does not. 3 PRELIMINARIES. The RL problem is formulated in the context of a Markov decision process (MDP) (S, A, p0(s), p(s′|s, a), r(s, a), γ), where S is a state space, A is an action space, p0(s) is a distribution over initial states, p(s′|s, a) is the environment dynamics, r(s, a) is a reward function, and γ is a discount factor. The agent interacts with the MDP according to a policy π(a|s). The goal is to obtain a policy that maximizes the cumulative discounted return:

π∗ = argmax_π E_π[ ∑_{t=0}^∞ γ^t r(s_t, a_t) | s_0 ∼ p0(·), a_t ∼ π(·|s_t), s_{t+1} ∼ p(·|s_t, a_t) ].

Off-policy RL methods based on approximate dynamic programming typically utilize a state-action value function (Q-function), denoted Q(s, a), which corresponds to the discounted return obtained by starting from state s and action a and then following the policy π. Offline reinforcement learning. In contrast to online (on-policy or off-policy) RL methods, offline RL uses previously collected data without any additional data collection. Like many recent offline RL methods, our work builds on approximate dynamic programming methods that minimize the temporal difference error, according to the following loss:

L_TD(θ) = E_{(s,a,s′)∼D}[ ( r(s, a) + γ max_{a′} Q_θ̂(s′, a′) − Q_θ(s, a) )^2 ],    (1)

where D is the dataset, Q_θ(s, a) is a parameterized Q-function, and Q_θ̂(s, a) is a target network (e.g.
, with soft parameter updates defined via Polyak averaging), and the policy is defined as π(s) = argmax_a Q_θ(s, a). Most recent offline RL methods either modify the value function loss above, regularizing the value function in a way that keeps the resulting policy close to the data, or constrain the argmax policy directly. This is important because out-of-distribution actions a′ can produce erroneous values for Q_θ̂(s′, a′) in the above objective, often leading to overestimation, since the policy is defined to maximize the (estimated) Q-value. | This paper proposes a new offline RL algorithm that uses in-sample policy evaluation and advantage-weighted regression during policy improvement. In particular, it utilizes SARSA-like TD updates for the Q-function and avoids querying the value of unseen actions during training. To evaluate the proposed method, the authors use the D4RL benchmark to compare against previous offline RL methods. In addition, they show results for fine-tuning the learned offline policies during online deployment. | SP:2895826eaac831a99c9a5f33921a3bcf1b89ec6e |
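To make the distinction concrete, a tabular toy contrasting the max-backup in Eq. 1 with a SARSA-style in-sample backup might look like this; the tiny MDP and all names are invented for illustration and are not the paper's code.

```python
import numpy as np

# Tiny tabular illustration (invented example): the target in Eq. 1 maximizes
# over ALL actions at s', which may query (s', a') pairs never seen in the
# data, while a SARSA-style in-sample target only ever uses the dataset's
# own next action a'.
n_states, n_actions, gamma = 3, 2, 0.9
Q = np.zeros((n_states, n_actions))
dataset = [(0, 0, 1.0, 1, 1),   # (s, a, r, s', a') transitions
           (1, 1, 0.0, 2, 0),
           (2, 0, 5.0, 0, 0)]

for _ in range(300):
    for s, a, r, s2, a2 in dataset:
        target_eq1 = r + gamma * Q[s2].max()      # may query unseen (s', a')
        target_insample = r + gamma * Q[s2, a2]   # restricted to dataset actions
        # Only the in-sample target drives the update; target_eq1 is shown
        # purely for contrast with Eq. 1.
        Q[s, a] += 0.5 * (target_insample - Q[s, a])
```

On this loop of transitions the in-sample backup converges to the Q-values of the behavior data; the Eq. 1 target would additionally read entries like Q[1, 0] that no transition ever visits, which is exactly where extrapolation errors arise with function approximation.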
Offline Reinforcement Learning with In-sample Q-Learning | Offline reinforcement learning requires reconciling two conflicting aims: learning a policy that improves over the behavior policy that collected the dataset, while at the same time minimizing the deviation from the behavior policy so as to avoid errors due to distributional shift. This trade-off is critical, because most current offline reinforcement learning methods need to query the value of unseen actions during training to improve the policy, and therefore need to either constrain these actions to be in-distribution, or else regularize their values. We propose a new offline RL method that never needs to evaluate actions outside of the dataset, but still enables the learned policy to improve substantially over the best behavior in the data through generalization. The main insight in our work is that, instead of evaluating unseen actions from the latest policy, we can approximate the policy improvement step implicitly by treating the state value function as a random variable, with randomness determined by the action (while still integrating over the dynamics to avoid excessive optimism), and then taking a state-conditional upper expectile of this random variable to estimate the value of the best actions in that state. This leverages the generalization capacity of the function approximator to estimate the value of the best available action at a given state without ever directly querying a Q-function with this unseen action. Our algorithm alternates between fitting this upper expectile value function and backing it up into a Q-function, without any explicit policy. Then, we extract the policy via advantage-weighted behavioral cloning, which also avoids querying out-of-sample actions. We dub our method in-sample Q-learning (IQL). IQL is easy to implement, computationally efficient, and only requires fitting an additional critic with an asymmetric L2 loss.
IQL demonstrates state-of-the-art performance on D4RL, a standard benchmark for offline reinforcement learning. We also demonstrate that IQL achieves strong performance when fine-tuned with online interaction after offline initialization. 1 INTRODUCTION. Offline reinforcement learning (RL) addresses the problem of learning effective policies entirely from previously collected data, without online interaction (Fujimoto et al., 2019; Lange et al., 2012). This is very appealing in a range of real-world domains, from robotics to logistics and operations research, where real-world exploration with untrained policies is costly or dangerous, but prior data is available. However, this also carries with it major challenges: improving the policy beyond the level of the behavior policy that collected the data requires estimating values for actions other than those seen in the dataset, and this, in turn, requires trading off policy improvement against distributional shift, since the values of actions that are too different from those in the data are unlikely to be estimated accurately. Prior methods generally address this by either constraining the policy to limit how far it deviates from the behavior policy (Fujimoto et al., 2019; Wu et al., 2019; Fujimoto & Gu, 2021; Kumar et al., 2019; Nair et al., 2020; Wang et al., 2020), or by regularizing the learned value functions to assign low values to out-of-distribution actions (Kumar et al., 2020; Kostrikov et al., 2021). Nevertheless, this imposes a trade-off between how much the policy improves and how vulnerable it is to misestimation due to distributional shift. Can we devise an offline RL method that avoids this issue by never needing to directly query or estimate values for actions that were not seen in the data?
In this work, we start from the observation that the in-distribution constraints widely used in prior work might not be sufficient to avoid value function extrapolation, and we ask whether it is possible to learn an optimal policy with in-sample learning, without ever querying the values of any unseen actions. The key idea in our method is to approximate an upper expectile of the distribution over values with respect to the distribution of dataset actions for each state. We alternate between fitting this value function with expectile regression and using it to compute Bellman backups for training the Q-function. We show that we can do this simply by modifying the loss function in a SARSA-style TD backup, without ever using out-of-sample actions in the target value. Once this Q-function has converged, we extract the corresponding policy using advantage-weighted behavioral cloning. This approach does not require explicit constraints or explicit regularization of out-of-distribution actions during value function training, though our policy extraction step does implicitly enforce a constraint, as discussed in prior work on advantage-weighted regression (Peters & Schaal, 2007; Peng et al., 2019; Nair et al., 2020; Wang et al., 2020). Our main contribution is in-sample Q-learning (IQL), a new offline RL algorithm that avoids ever querying values of unseen actions while still being able to perform multi-step dynamic programming updates. Our method is easy to implement by making a small change to the loss function in a simple SARSA-like TD update, and is computationally very efficient. Furthermore, our approach demonstrates state-of-the-art performance on D4RL, a popular benchmark for offline reinforcement learning. In particular, our approach significantly improves over the prior state of the art on the challenging Ant Maze tasks, which require "stitching" several sub-optimal trajectories.
Finally, we demonstrate that our approach is suitable for fine-tuning: after initialization from offline RL, IQL is capable of improving policy performance using additional online interactions. 2 RELATED WORK. A significant portion of recently proposed offline RL methods are based on either constrained or regularized approximate dynamic programming (e.g., Q-learning or actor-critic methods), with the constraint or regularizer serving to limit deviation from the behavior policy. We will refer to these methods as "multi-step dynamic programming" algorithms, since they perform true dynamic programming for multiple iterations and can therefore, in principle, recover the optimal policy if provided with high-coverage data. The constraints can be implemented via an explicit density model (Wu et al., 2019; Fujimoto et al., 2019; Kumar et al., 2019; Ghasemipour et al., 2021), implicit divergence constraints (Nair et al., 2020; Wang et al., 2020; Peters & Schaal, 2007; Peng et al., 2019; Siegel et al., 2020), or by adding a supervised learning term to the policy improvement objective (Fujimoto & Gu, 2021). Several works have also proposed to directly regularize the Q-function to produce low values for out-of-distribution actions (Kostrikov et al., 2021; Kumar et al., 2020; Fakoor et al., 2021). Our method is also a multi-step dynamic programming algorithm. However, in contrast to prior works, our method completely avoids directly querying the learned Q-function with unseen actions during training, removing the need for any constraint during this stage, though the subsequent policy extraction, which is based on advantage-weighted regression (Peng et al., 2019; Nair et al., 2020), does apply an implicit constraint. Importantly, this policy does not actually influence value function training.
In contrast to multi-step dynamic programming methods, several recent works have proposed methods that either rely on a single step of policy iteration, fitting the value function or Q-function of the behavior policy and then extracting the corresponding greedy policy (Peng et al., 2019; Brandfonbrener et al., 2021; Gulcehre et al., 2021), or avoid value functions completely and utilize behavioral cloning-style objectives (Chen et al., 2021). We collectively refer to these as "single-step" approaches. These methods avoid needing to query unseen actions as well, since they either use no value function at all or learn the value function of the behavior policy. Although these methods are simple to implement and effective on the MuJoCo locomotion tasks in D4RL, we show that such single-step methods perform very poorly on more complex datasets in D4RL, which require combining parts of suboptimal trajectories ("stitching"). Prior multi-step dynamic programming methods perform much better in such settings, as does our method. We discuss this distinction in more detail in Section 5.1. Our method also shares the simplicity and computational efficiency of single-step approaches, providing an appealing combination of the strengths of both types of methods. Our method is based on estimating the characteristics of a random variable. Several recent works involve approximating statistical quantities of the value function distribution. In particular, quantile regression (Koenker & Hallock, 2001) has been previously used in reinforcement learning to estimate the quantile function of a state-action value function (Dabney et al., 2018a;b; Kuznetsov et al., 2020). Although our method is related, in that we perform expectile regression, our aim is not to estimate the distribution of values that results from stochastic transitions, but rather to estimate expectiles of the state value function with respect to random actions.
This is a very different statistic: our aim is not to determine how the Q-value can vary with different future outcomes, but how the Q-value can vary with different actions while averaging together future outcomes due to stochastic dynamics. While prior work on distributional RL can also be used for offline RL, it would suffer from the same action extrapolation issues as other methods and would require similar constraints or regularization, while our method does not. 3 PRELIMINARIES. The RL problem is formulated in the context of a Markov decision process (MDP) (S, A, p0(s), p(s′|s, a), r(s, a), γ), where S is a state space, A is an action space, p0(s) is a distribution over initial states, p(s′|s, a) is the environment dynamics, r(s, a) is a reward function, and γ is a discount factor. The agent interacts with the MDP according to a policy π(a|s). The goal is to obtain a policy that maximizes the cumulative discounted return:

π∗ = argmax_π E_π[ ∑_{t=0}^∞ γ^t r(s_t, a_t) | s_0 ∼ p0(·), a_t ∼ π(·|s_t), s_{t+1} ∼ p(·|s_t, a_t) ].

Off-policy RL methods based on approximate dynamic programming typically utilize a state-action value function (Q-function), denoted Q(s, a), which corresponds to the discounted return obtained by starting from state s and action a and then following the policy π. Offline reinforcement learning. In contrast to online (on-policy or off-policy) RL methods, offline RL uses previously collected data without any additional data collection. Like many recent offline RL methods, our work builds on approximate dynamic programming methods that minimize the temporal difference error, according to the following loss:

L_TD(θ) = E_{(s,a,s′)∼D}[ ( r(s, a) + γ max_{a′} Q_θ̂(s′, a′) − Q_θ(s, a) )^2 ],    (1)

where D is the dataset, Q_θ(s, a) is a parameterized Q-function, and Q_θ̂(s, a) is a target network (e.g.
, with soft parameter updates defined via Polyak averaging), and the policy is defined as π(s) = argmax_a Q_θ(s, a). Most recent offline RL methods either modify the value function loss above, regularizing the value function in a way that keeps the resulting policy close to the data, or constrain the argmax policy directly. This is important because out-of-distribution actions a′ can produce erroneous values for Q_θ̂(s′, a′) in the above objective, often leading to overestimation, since the policy is defined to maximize the (estimated) Q-value. | This paper proposes In-Sample Q-Learning (IQL), an algorithm that learns a Q-function and a value function under the behavior policy and applies advantage-weighted behavior cloning for policy learning. In one sentence, the main idea of the paper is to learn an approximately optimal Q/value function for better behavior cloning. Expectile regression, instead of MSE, is applied to find the optimal Q-function of the behavior policy. | SP:2895826eaac831a99c9a5f33921a3bcf1b89ec6e |
Offline Reinforcement Learning with In-sample Q-Learning

Offline reinforcement learning requires reconciling two conflicting aims: learning a policy that improves over the behavior policy that collected the dataset, while at the same time minimizing the deviation from the behavior policy so as to avoid errors due to distributional shift. This trade-off is critical, because most current offline reinforcement learning methods need to query the value of unseen actions during training to improve the policy, and therefore need to either constrain these actions to be in-distribution, or else regularize their values. We propose a new offline RL method that never needs to evaluate actions outside of the dataset, but still enables the learned policy to improve substantially over the best behavior in the data through generalization. The main insight in our work is that, instead of evaluating unseen actions from the latest policy, we can approximate the policy improvement step implicitly by treating the state value function as a random variable, with randomness determined by the action (while still integrating over the dynamics to avoid excessive optimism), and then taking a state-conditional upper expectile of this random variable to estimate the value of the best actions in that state. This leverages the generalization capacity of the function approximator to estimate the value of the best available action at a given state without ever directly querying a Q-function with this unseen action. Our algorithm alternates between fitting this upper expectile value function and backing it up into a Q-function, without any explicit policy. Then, we extract the policy via advantage-weighted behavioral cloning, which also avoids querying out-of-sample actions. We dub our method in-sample Q-learning (IQL). IQL is easy to implement, computationally efficient, and only requires fitting an additional critic with an asymmetric L2 loss.
IQL demonstrates state-of-the-art performance on D4RL, a standard benchmark for offline reinforcement learning. We also demonstrate that IQL achieves strong performance when fine-tuned with online interaction after offline initialization.

1 INTRODUCTION

Offline reinforcement learning (RL) addresses the problem of learning effective policies entirely from previously collected data, without online interaction (Fujimoto et al., 2019; Lange et al., 2012). This is very appealing in a range of real-world domains, from robotics to logistics and operations research, where real-world exploration with untrained policies is costly or dangerous, but prior data is available. However, this also carries with it major challenges: improving the policy beyond the level of the behavior policy that collected the data requires estimating values for actions other than those that were seen in the dataset, and this, in turn, requires trading off policy improvement against distributional shift, since the values of actions that are too different from those in the data are unlikely to be estimated accurately. Prior methods generally address this by either constraining the policy to limit how far it deviates from the behavior policy (Fujimoto et al., 2019; Wu et al., 2019; Fujimoto & Gu, 2021; Kumar et al., 2019; Nair et al., 2020; Wang et al., 2020), or by regularizing the learned value functions to assign low values to out-of-distribution actions (Kumar et al., 2020; Kostrikov et al., 2021). Nevertheless, this imposes a trade-off between how much the policy improves and how vulnerable it is to misestimation due to distributional shift. Can we devise an offline RL method that avoids this issue by never needing to directly query or estimate values for actions that were not seen in the data?
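Before detailing the method, the core statistical tool can be shown in isolation: expectile regression, i.e., regression under an asymmetric L2 loss. The following is a minimal NumPy sketch; the scalar gradient-descent fit, step size, iteration count, and sample values are all illustrative choices, not the paper's training setup:

```python
import numpy as np

def expectile_loss(diff, tau):
    """Asymmetric L2 loss, with diff = target - prediction.

    Positive errors are weighted by tau and negative errors by 1 - tau,
    so for tau > 0.5 the minimizer is an *upper* expectile of the target
    distribution; tau = 0.5 recovers ordinary squared error (scaled by 0.5).
    """
    weight = np.where(diff > 0, tau, 1.0 - tau)
    return float(np.mean(weight * diff ** 2))

def fit_expectile(samples, tau, iters=5000, lr=0.2):
    """Fit a scalar tau-expectile of `samples` by gradient descent on the loss."""
    samples = np.asarray(samples, dtype=float)
    v = float(samples.mean())
    for _ in range(iters):
        diff = samples - v
        weight = np.where(diff > 0, tau, 1.0 - tau)
        v += lr * float(np.mean(weight * diff))  # negative-gradient step
    return v
```

With tau = 0.5 the fit recovers the mean; as tau approaches 1 it approaches the maximum of the samples, which is what lets an upper expectile stand in for a maximum over in-distribution actions.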
In this work, we start from the observation that the in-distribution constraints widely used in prior work might not be sufficient to avoid value function extrapolation, and we ask whether it is possible to learn an optimal policy with in-sample learning, without ever querying the values of any unseen actions. The key idea in our method is to approximate an upper expectile of the distribution over values with respect to the distribution of dataset actions for each state. We alternate between fitting this value function with expectile regression, and then using it to compute Bellman backups for training the Q-function. We show that we can do this simply by modifying the loss function in a SARSA-style TD backup, without ever using out-of-sample actions in the target value. Once this Q-function has converged, we extract the corresponding policy using advantage-weighted behavioral cloning. This approach does not require explicit constraints or explicit regularization of out-of-distribution actions during value function training, though our policy extraction step does implicitly enforce a constraint, as discussed in prior work on advantage-weighted regression (Peters & Schaal, 2007; Peng et al., 2019; Nair et al., 2020; Wang et al., 2020). Our main contribution is in-sample Q-learning (IQL), a new offline RL algorithm that avoids ever querying values of unseen actions while still being able to perform multi-step dynamic programming updates. Our method is easy to implement by making a small change to the loss function in a simple SARSA-like TD update, and it is computationally very efficient. Furthermore, our approach demonstrates state-of-the-art performance on D4RL, a popular benchmark for offline reinforcement learning. In particular, our approach significantly improves over the prior state of the art on the challenging AntMaze tasks, which require "stitching" together several sub-optimal trajectories.
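The alternating procedure described above can be sketched in a hypothetical tabular setting. This is a minimal illustration under assumed hyperparameters (tau, beta, the learning rate, and the tiny transition format are all invented for the example), not the paper's function-approximation implementation:

```python
import math
from collections import defaultdict

def iql_tabular(transitions, tau=0.9, beta=3.0, gamma=0.99, lr=0.5, iters=200):
    """Minimal tabular sketch of the alternating IQL-style updates.

    transitions: list of (s, a, r, s_next, done) drawn from a fixed dataset.
    Only state-action pairs present in the data are ever queried, so no
    out-of-sample action evaluation occurs.
    """
    Q = defaultdict(float)  # Q(s, a), updated only on dataset pairs
    V = defaultdict(float)  # V(s), fit to a tau-expectile of Q over dataset actions
    for _ in range(iters):
        # (1) Expectile regression: nudge V(s) toward an upper expectile of
        # Q(s, a) under the dataset's action distribution at s.
        for s, a, r, s_next, done in transitions:
            diff = Q[(s, a)] - V[s]
            weight = tau if diff > 0 else 1.0 - tau
            V[s] += lr * weight * diff
        # (2) SARSA-style backup: Q(s, a) <- r + gamma * V(s'), using only
        # observed transitions -- no max over unseen actions.
        for s, a, r, s_next, done in transitions:
            target = r + (0.0 if done else gamma * V[s_next])
            Q[(s, a)] += lr * (target - Q[(s, a)])
    # (3) Policy extraction: advantage-weighted behavioral cloning weights
    # exp(beta * (Q - V)) favor dataset actions with high advantage.
    weights = {(s, a): math.exp(beta * (Q[(s, a)] - V[s]))
               for (s, a, *_rest) in transitions}
    return Q, V, weights
```

Because V is regressed toward an upper expectile of Q over the dataset's actions, the backup behaves like a "max over in-sample actions"; the exponentiated advantages then weight behavioral cloning toward the better dataset actions.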
Finally, we demonstrate that our approach is suitable for fine-tuning: after initialization from offline RL, IQL is capable of improving policy performance utilizing additional interactions.

2 RELATED WORK

A significant portion of recently proposed offline RL methods are based on either constrained or regularized approximate dynamic programming (e.g., Q-learning or actor-critic methods), with the constraint or regularizer serving to limit deviation from the behavior policy. We will refer to these methods as "multi-step dynamic programming" algorithms, since they perform true dynamic programming for multiple iterations, and therefore can in principle recover the optimal policy if provided with high-coverage data. The constraints can be implemented via an explicit density model (Wu et al., 2019; Fujimoto et al., 2019; Kumar et al., 2019; Ghasemipour et al., 2021), implicit divergence constraints (Nair et al., 2020; Wang et al., 2020; Peters & Schaal, 2007; Peng et al., 2019; Siegel et al., 2020), or by adding a supervised learning term to the policy improvement objective (Fujimoto & Gu, 2021). Several works have also proposed to directly regularize the Q-function to produce low values for out-of-distribution actions (Kostrikov et al., 2021; Kumar et al., 2020; Fakoor et al., 2021). Our method is also a multi-step dynamic programming algorithm. However, in contrast to prior works, our method completely avoids directly querying the learned Q-function with unseen actions during training, removing the need for any constraint during this stage, though the subsequent policy extraction, which is based on advantage-weighted regression (Peng et al., 2019; Nair et al., 2020), does apply an implicit constraint. However, this policy does not actually influence value function training.
In contrast to multi-step dynamic programming methods, several recent works have proposed methods that rely either on a single step of policy iteration, fitting the value function or Q-function of the behavior policy and then extracting the corresponding greedy policy (Peng et al., 2019; Brandfonbrener et al., 2021; Gulcehre et al., 2021), or else avoid value functions completely and utilize behavioral cloning-style objectives (Chen et al., 2021). We collectively refer to these as "single-step" approaches. These methods avoid needing to query unseen actions as well, since they either use no value function at all, or learn the value function of the behavior policy. Although these methods are simple to implement and effective on the MuJoCo locomotion tasks in D4RL, we show that such single-step methods perform very poorly on more complex datasets in D4RL, which require combining parts of suboptimal trajectories ("stitching"). Prior multi-step dynamic programming methods perform much better in such settings, as does our method. We discuss this distinction in more detail in Section 5.1. Our method also shares the simplicity and computational efficiency of single-step approaches, providing an appealing combination of the strengths of both types of methods. Our method is based on estimating the characteristics of a random variable. Several recent works involve approximating statistical quantities of the value function distribution. In particular, quantile regression (Koenker & Hallock, 2001) has been previously used in reinforcement learning to estimate the quantile function of a state-action value function (Dabney et al., 2018b;a; Kuznetsov et al., 2020). Although our method is related, in that we perform expectile regression, our aim is not to estimate the distribution of values that results from stochastic transitions, but rather to estimate expectiles of the state value function with respect to random actions.
This is a very different statistic: our aim is not to determine how the Q-value can vary with different future outcomes, but how the Q-value can vary with different actions while averaging together future outcomes due to stochastic dynamics. While prior work on distributional RL can also be used for offline RL, it would suffer from the same action extrapolation issues as other methods, and would require similar constraints or regularization, while our method does not.

3 PRELIMINARIES

The RL problem is formulated in the context of a Markov decision process (MDP) $(\mathcal{S}, \mathcal{A}, p_0(s), p(s'|s,a), r(s,a), \gamma)$, where $\mathcal{S}$ is a state space, $\mathcal{A}$ is an action space, $p_0(s)$ is a distribution of initial states, $p(s'|s,a)$ is the environment dynamics, $r(s,a)$ is a reward function, and $\gamma$ is a discount factor. The agent interacts with the MDP according to a policy $\pi(a|s)$. The goal is to obtain a policy that maximizes the cumulative discounted returns:

$$\pi^* = \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\middle|\, s_0 \sim p_0(\cdot),\; a_t \sim \pi(\cdot|s_t),\; s_{t+1} \sim p(\cdot|s_t, a_t)\right].$$

Off-policy RL methods based on approximate dynamic programming typically utilize a state-action value function (Q-function), referred to as $Q(s, a)$, which corresponds to the discounted returns obtained by starting from the state $s$ and action $a$, and then following the policy $\pi$.

Offline reinforcement learning. In contrast to online (on-policy or off-policy) RL methods, offline RL uses previously collected data without any additional data collection. Like many recent offline RL methods, our work builds on approximate dynamic programming methods that minimize the temporal difference error, according to the following loss:

$$L_{TD}(\theta) = \mathbb{E}_{(s,a,s') \sim \mathcal{D}}\!\left[\left(r(s,a) + \gamma \max_{a'} Q_{\hat{\theta}}(s', a') - Q_{\theta}(s, a)\right)^2\right], \quad (1)$$

where $\mathcal{D}$ is the dataset, $Q_{\theta}(s, a)$ is a parameterized Q-function, $Q_{\hat{\theta}}(s, a)$ is a target network (e.g.
, with soft parameter updates defined via Polyak averaging), and the policy is defined as $\pi(s) = \arg\max_a Q_{\theta}(s, a)$. Most recent offline RL methods either modify the value function loss (above) to regularize the value function in a way that keeps the resulting policy close to the data, or constrain the argmax policy directly. This is important because out-of-distribution actions $a'$ can produce erroneous values for $Q_{\hat{\theta}}(s', a')$ in the above objective, often leading to overestimation, as the policy is defined to maximize the (estimated) Q-value.

This paper proposes an offline RL algorithm named IQL, which interpolates between the Bellman expectation equation and the Bellman optimality equation via expectile regression. Expectile regression assigns low weights to low-performing samples and high weights to high-performing samples when learning the expectile. By performing $\tau$-expectile regression over the randomness of actions in the dataset (arising from the sampling process of the behavior policy), IQL obtains a value function that is $\tau$-optimal. IQL generalizes both SARSA and Q-learning, in the sense that SARSA corresponds to IQL with $\tau=0.5$ and Q-learning corresponds to IQL with $\tau=1.0$. It offers very stable learning of a near-optimal value function, since it uses only in-distribution samples, and the effective dataset size can be controlled through $\tau$. In the experiments, the paper shows that IQL obtains state-of-the-art performance.
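The TD objective in Equation (1) above can be spelled out for a tabular Q-function. A minimal NumPy sketch (the array shapes, batch format, and discount value are illustrative assumptions):

```python
import numpy as np

def td_loss(Q, Q_target, batch, gamma=0.99):
    """Mean squared TD error of Eq. (1) for array-valued Q-functions.

    Q, Q_target: arrays of shape (n_states, n_actions); Q_target plays
    the role of the slowly updated target network Q_theta_hat.
    batch: tuples (s, a, r, s_next) of integer state/action indices.
    """
    errors = []
    for s, a, r, s_next in batch:
        # The max ranges over *all* actions a', including ones that never
        # appear in the dataset at s_next.
        target = r + gamma * np.max(Q_target[s_next])
        errors.append((target - Q[s, a]) ** 2)
    return float(np.mean(errors))
```

Note that the max ranges over every action, seen or not; this is precisely the out-of-distribution query that an expectile-based in-sample backup avoids.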