diff --git a/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/02332191-3640-4bf2-b0ec-5ad2e22699ee_content_list.json b/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/02332191-3640-4bf2-b0ec-5ad2e22699ee_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e35e13ce6ea41a2947268c9566f541ee9cfed8c6
--- /dev/null
+++ b/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/02332191-3640-4bf2-b0ec-5ad2e22699ee_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c7474035f114460dfd4b5becd9cabff4c6f6f1649fea4ee8fc765fec5ebe4cf7
+size 126266
diff --git a/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/02332191-3640-4bf2-b0ec-5ad2e22699ee_model.json b/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/02332191-3640-4bf2-b0ec-5ad2e22699ee_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5cb7b439d129dbf982e6f7a641f3fbfc62c165e6
--- /dev/null
+++ b/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/02332191-3640-4bf2-b0ec-5ad2e22699ee_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f6afd9d34b73c9f50e3abd491feb8dfb6907ff0a8b9378ffd4406f390a361e88
+size 145618
diff --git a/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/02332191-3640-4bf2-b0ec-5ad2e22699ee_origin.pdf b/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/02332191-3640-4bf2-b0ec-5ad2e22699ee_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3b44c236ba2248abf553f1bfba222b2df4dbeef5
--- /dev/null
+++ b/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/02332191-3640-4bf2-b0ec-5ad2e22699ee_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8bc5817f0348c06c11867ec616ca1ccd860bc0ddf6f2d6b00c319373f4fef81
+size 3521633
diff --git a/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/full.md b/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d226f9a8c7ef821f49100db7bd7760c6695d94a3
--- /dev/null
+++ b/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/full.md
@@ -0,0 +1,668 @@
+# ASK YOUR HUMANS: USING HUMAN INSTRUCTIONS TO IMPROVE GENERALIZATION IN REINFORCEMENT LEARNING
+
+Valerie Chen, Abhinav Gupta, & Kenneth Marino
+
+Carnegie Mellon University
+
+{vchen2, abhinavg, kdmarino}@cs.cmu.edu
+
+# ABSTRACT
+
+Complex, multi-task problems have proven to be difficult to solve efficiently in a sparse-reward reinforcement learning setting. In order to be sample efficient, multi-task learning requires reuse and sharing of low-level policies. To facilitate the automatic decomposition of hierarchical tasks, we propose the use of step-by-step human demonstrations in the form of natural language instructions and action trajectories. We introduce a dataset of such demonstrations in a crafting-based grid world. Our model consists of a high-level language generator and low-level policy, conditioned on language. We find that human demonstrations help solve the most complex tasks. We also find that incorporating natural language allows the model to generalize to unseen tasks in a zero-shot setting and to learn quickly from a few demonstrations. Generalization is not only reflected in the actions of the agent, but also in the generated natural language instructions in unseen tasks. Our approach also gives our trained agent interpretable behaviors because it is able to generate a sequence of high-level descriptions of its actions.
+
+# 1 INTRODUCTION
+
+One of the most remarkable aspects of human intelligence is the ability to quickly adapt to new tasks and environments. From a young age, children are able to acquire new skills and solve new tasks through imitation and instruction (Council et al., 2000; Meltzoff, 1988; Hunt, 1965). The key is our ability to use language to learn abstract concepts and then reapply them in new settings. Inspired by this, one of the long term goals in AI is to build agents that can learn to accomplish new tasks and goals in an open-world setting using just a few examples or few instructions from humans. For example, if we had a health-care assistant robot, we might want to teach it how to bring us our favorite drink or make us a meal in just the way we like it, perhaps by showing it how to do this a few times and explaining the steps involved. However, the ability to adapt to new environments and tasks remains a distant dream.
+
+Previous work has considered using language as a high-level representation for RL (Andreas et al., 2017; Jiang et al., 2019). However, these approaches typically use language generated from templates that are hard-coded into the simulators the agents are tested in, allowing the agents to receive virtually unlimited training data to learn language abstractions. In practice, however, instructions are a limited resource. If we want to build agents that can quickly adapt in open-world settings, they need to be able to learn from limited, real instruction data (Luketina et al., 2019). And unlike the clean ontologies generated in these previous approaches, human language is noisy and diverse; there are many ways to say the same thing. Approaches that aim to learn new tasks from humans must be able to use human-generated instructions.
+
+In this work, we take a step towards agents that can learn from limited human instruction and demonstration by collecting a new dataset with natural language annotated tasks and corresponding gameplay. The environment and dataset are designed to directly test multi-task and sub-task learning, as they consist of nearly 50 diverse crafting tasks. Crafts are designed to share similar features and
+
+
+Figure 1: From state observation at time step $t$ , the agent generates a natural language instruction "go to key and press grab," which guides the agent to grab the key. After the instruction is fulfilled and the agent grabs the key, the agent generates a new instruction at $t + 1$ .
+
+sub-steps so we would be able to test whether the method is able to learn these shared features and reuse existing knowledge to solve new, but related tasks more efficiently. Our dataset is collected in a crafting-based environment and contains over 6,000 game traces on 14 unique crafting tasks which serve as the training set. The other 35 crafting tasks will act as zero-shot tasks. The goal is for an agent to be able to learn one policy that is able to solve both tasks it was trained on as well as a variety of unseen tasks which contain similar sub-tasks as the training tasks.
+
+To do this, we train a neural network system to generate natural language instructions as a high-level representation of the sub-task, and then a policy to achieve the goal condition given these instructions. Figure 1 shows how our agent takes in the given state of the environment and a goal (Iron Ore), generates a language representation of the next instruction, and then uses the policy to select an action conditioned on the language representation - in this case, to grab the key. We incorporate both imitation learning (IL), using the language and human demonstrations, and reinforcement learning (RL) rewards to train our agent to solve complicated multi-step tasks.
+
+Our approach, which learns from human demonstrations and language, outperforms or matches baseline methods in the standard RL setting. We demonstrate that language can be used to better generalize to new tasks without reward signals and outperforms baselines on average over 35 zero-shot crafting tasks. Our method uses language as a high-level representation to help decompose a larger, complex task into sub-tasks and to identify the correct sub-tasks to utilize in a zero-shot task setting. We also show that the agent can learn few-shot tasks with only a few additional demos and instructions. Finally, training with human-generated instructions gives us an interpretable explanation of the agent's behavior in cases of success and failure. Generalization is further demonstrated in the agent's ability to explain how the task is decomposed in both train and evaluation settings, in a way that reflects the actual recipes that describe the crafting task. With our dataset collection procedure and language-conditioned method, we demonstrate that natural human language can be practically applied to solving difficult RL problems, and we begin to address the generalization problem in RL. We hope that this will inspire future work that incorporates human annotation, specifically language annotation, to solve more difficult and diverse tasks.
+
+# 2 RELATED WORK
+
+Previous works on language descriptions of tasks and sub-tasks have generally relied on what Andreas et al. (2017) calls "sketches." A sketch specifies the necessary sub-tasks for a final task and is manually constructed for every task. The agent then relies on reward signals from the sketches in order to learn these predefined sub-tasks. However, in our setup, we want to infer such "sketches" from a limited number of instructions given by human demonstrations. This setting is not only more difficult but also more realistic for practical applications of RL where we might not have a predefined ontology and simulator, just a limited number of human-generated instructions. In addition, at test time, their true zero-shot task requires the sketch, whereas our method is able to generate the "sketches" in the form of high-level language with no additional training and supervision.
+
+Similarly, other works have used synthetically generated sub-goals and descriptions to train their methods and suffer from similar problems of impracticality. Shu et al. (2018) introduces a Stochastic Temporal Grammar to enable interpretable multi-task RL in the Minecraft environment. Similarly, the BabyAI platform (Chevalier-Boisvert et al., 2019a) presents a synthetic language which models commands inside a grid-based environment. They utilize curriculum training to approach learning complex skills and demonstrate through experimentation in their environment that existing approaches of pure IL or pure RL are extremely sample inefficient. Cideron et al. (2019) extend Hindsight Experience Replay (HER) to language goals in the BabyAI platform to solve a single instruction generated from a hand-crafted language. The BabyAI environment is extended by Cao et al. (2020) to include descriptive texts of the environment to improve the generalization of RL agents. Jiang et al. (2019) also uses procedurally generated language, built on the MuJoCo physics engine and the CLEVR engine, to learn a hierarchical representation for multi-task RL. Oh et al. (2017) also tackles zero-shot generalization, but like the others considers only procedurally generated instructions, using analogies to learn correspondences between similar sub-tasks.
+
+The main work that also investigates using a limited number of human-generated instructions in RL environments is Hu et al. (2019). This paper also uses natural language instructions in hierarchical decision making to play a real-time strategy game involving moving troop units across long time scales. This work uses only behavioral cloning with natural language instructions, whereas we use a mixture of RL and imitation learning. They also do not investigate the benefits of language in zero-shot or few-shot settings and do not demonstrate cross-task generalization as we do.
+
+Hierarchical approaches as a way of learning abstractions are well studied in the Hierarchical Reinforcement Learning (HRL) literature (Dayan & Hinton, 1993; Parr & Russell, 1998; Stolle & Precup, 2002). This is typically done by predefining the low-level policies by hand, by using some proxy reward to learn a diverse set of useful low-level policies (Heess et al., 2016; Florensa et al., 2017; Eysenbach et al., 2018; Hausman et al., 2018; Marino et al., 2019), or more generally by learning options (Sutton et al., 1999). Our approach differs in that, unlike in options and other frameworks, we generate language as a high-level state which conditions the agent's policy rather than handing control over to low-level policies directly.
+
+Other works have shown the effectiveness of using a combination of reinforcement learning and imitation learning. Le et al. (2018) presents a hybrid hierarchical reinforcement learning and imitation learning algorithm for the game Montezuma's Revenge, leveraging IL for the high-level controller and RL for the low-level controller, demonstrating the potential of combining IL and RL to achieve the benefits of both. By learning meta-actions, the agent is able to learn to solve the complex game. However, their meta-actions were also hand-specified.
+
+Others have utilized natural language for other tasks, including Williams et al. (2018), Co-Reyes et al. (2018), and Andreas et al. (2018), but have not focused on the multi-task learning setting. Matthews et al. (2019) demonstrates the use of word embeddings to inform robotic motor control as evidence of particular promise for exploiting the relationship between language and control. Narasimhan et al. (2018) uses language descriptions of the environment to aid domain transfer. The sub-field of language and vision navigation specifically has investigated how to train agents to navigate to a particular location in an environment given templated or natural language (Chaplot et al., 2018; Anderson et al., 2018; Tellex et al., 2011; Mei et al., 2016; Chen & Mooney, 2011; Yu et al., 2018) or to navigate to a particular location to answer a question (Das et al., 2018). Similarly to this work, Nguyen et al. (2019) uses human-generated language to find objects in a simulated environment, Zhong et al. (2020) and Branavan et al. (2012) read a document (i.e., a player's manual) to play a variety of games, and Lynch & Sermanet (2020) trains agents to follow both image and language-based goals. All of these works require the agent to read some text at both train and test time and follow those instructions to achieve some goal. In contrast, at test time, our agent only receives a high-level goal, which is what item to craft. Our agent must take the high-level goal as input and generate its own instructions to solve the task. In other words, our task is both instruction following and instruction generation. Related to instruction generation, some works have explored goal generation via intrinsic motivation more generally (Florensa et al., 2018; Forestier et al., 2017). In our work, however, we learn the goals via the human language instructions.
+
+# 3 HUMAN ANNOTATION COLLECTION
+
+The first step of our approach requires human demonstrations and instructions. To meet this requirement, we built an interface to collect human-annotated data to guide the learning model.
+
+
+Figure 2: (Left) Example view of game interface that the worker would see on AMT. On the left the worker is given the goal and recipes; the board is in the middle; the worker provides annotations on the right. (Right) Example sequence of instructions provided by the Turker for the given task of Stone Pickaxe.
+
+Crafting Environment: As shown in Figure 2, our environment is a Minecraft-inspired 5-by-5 gridworld. The Crafting Agent navigates the grid by moving up, down, left, and right. The agent can grab certain objects, like tools, if it is next to them and use the tools to mine resources. The agent must also use a key or switch to open doors blocking its path. Finally, the agent can also go to a crafting table to build final items. The agent can choose from 8 actions to execute: up, down, left, right, toggle, grab, mine, and craft. The environment is fully observable. Our crafting environment extends the crafting environment of Andreas et al. (2017) to include obstacles and crafts that are specified by material, introducing compositionally complex tasks (e.g., instead of 'Make Axe' we have 'Make Iron Axe'). In total, we consider about 50 crafting tasks, 14 of which we collect annotations for and 35 of which are used at test time. At the start of each game, all object/resource locations are fully randomized in the environment.
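The gridworld interface described above can be sketched as follows. The class and method names (`CraftingEnv`, `reset`, `step`) and the movement logic are illustrative assumptions for exposition, not the authors' released code; interactions such as toggle, grab, mine, and craft are stubbed out.

```python
import random

# The 8 discrete actions available to the agent.
ACTIONS = ["up", "down", "left", "right", "toggle", "grab", "mine", "craft"]
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}


class CraftingEnv:
    """Minimal 5-by-5 crafting gridworld skeleton (hypothetical API)."""

    def __init__(self, size=5, max_steps=100, seed=None):
        self.size = size
        self.max_steps = max_steps  # episodes are capped at 100 steps
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # Object/resource locations are fully randomized each game;
        # only the agent position is randomized here for brevity.
        self.agent = (self.rng.randrange(self.size), self.rng.randrange(self.size))
        self.inventory = []
        self.steps = 0
        return self.agent

    def step(self, action):
        assert action in ACTIONS
        self.steps += 1
        if action in MOVES:
            dr, dc = MOVES[action]
            # Movement is clamped to the 5-by-5 grid.
            r = min(max(self.agent[0] + dr, 0), self.size - 1)
            c = min(max(self.agent[1] + dc, 0), self.size - 1)
            self.agent = (r, c)
        # toggle/grab/mine/craft would act on adjacent objects here.
        done = self.steps >= self.max_steps
        return self.agent, done
```

The 100-step cap is what makes unrecoverable detours costly: an episode ends regardless of progress once the budget is spent.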
+
+Crafting Task: The goal of the agent in our world is to complete crafts. By design, a crafting-based world allows for complexity and hierarchy in how the agent interacts with items in the gridworld. To craft an item, the agent must generally first pick up a tool, go to a resource, mine the resource, and then go to a table to craft the item. To make Iron Ore, for example, the agent must use the pickaxe at the Iron Ore Vein to mine the Iron Ore. The Iron Ore recipe is an example of a 1-step task because it creates one item. A 5-step task, like Diamond Pickaxe, involves the mining and/or crafting of 5 items. We capped the tasks at a maximum length of 5 recipe steps to limit the amount of time a worker would have to spend on the task. Note that each recipe step requires multiple time-steps to complete. Crafts are designed to share similar features and sub-steps to test whether the agent is able to learn these shared features and reuse existing knowledge to solve new, but related tasks more efficiently (these relations between tasks are detailed in Table 3 and in Figure 10). While the task may seem simple for human annotators to solve, such compositional tasks still pose difficulties for sparse-reward RL. We further increase the difficulty of this task by restricting the agent to a limited number of steps (100) to complete the task, leaving little room for unrecoverable mistakes such as spending time collecting or using unnecessary resources.
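The n-step structure above can be made concrete with a toy recipe graph. The recipes and item names below are invented for illustration (they are not the dataset's actual recipe encoding); the point is that an n-step task is one whose dependency closure mines or crafts n items, with tools contributing no steps.

```python
# Hypothetical recipe graph: each item lists what must be obtained first.
RECIPES = {
    "Iron Ore": ["Pickaxe"],       # mined items list the tool they require
    "Iron Ingot": ["Iron Ore"],
    "Stick": ["Wood"],
    "Wood": ["Axe"],
    "Iron Pickaxe": ["Iron Ingot", "Stick"],
}
TOOLS = {"Pickaxe", "Axe"}  # tools are picked up, not mined or crafted


def recipe_steps(item):
    # Count the items mined/crafted in the dependency closure of `item`.
    if item in TOOLS:
        return 0
    return 1 + sum(recipe_steps(req) for req in RECIPES.get(item, []))
```

Under this toy graph, "Iron Ore" is a 1-step task, while the hypothetical "Iron Pickaxe" (ingot plus stick plus the final craft) comes out at 5 steps, mirroring the hardest tasks in the dataset.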
+
+Data Collection Process: Figure 2 shows our interface. Given the goal craft, relevant recipes, and the initial board configuration, the worker provides step-by-step instructions accompanied by execution on the actual game board of each instruction. The workflow would be to type one instruction, execute the instruction, then type the next instruction, and execute until the goal was completed. The data collection interface and a corresponding example set of natural language instructions provided by a Turker is illustrated on the rightmost side of Figure 2. This is but one way that a Turker might choose to break down the 5-step crafting task. The appendix has more details on the collection process in Section A.1. We will release the environment and dataset.
+
+Dataset Analysis: Across the 14 crafts, we collected 6,322 games on AMT. In total, this dataset contains 195,405 state-action pairs and 35,901 total instructions. In the supplementary material we present relevant summary statistics about the data, including the number of instructions provided for each $n$ -step task. The number of instructions, and consequently actions, required increases with the number of steps, as shown in Table 4.
+
+# 4 METHODS
+
+Our proposed approach to solving these multi-step crafting tasks is to learn from human-generated natural language instructions and demonstrations. The model is first pre-trained using imitation learning (IL) and then fine-tuned using sparse-reward in reinforcement learning (RL). The goal of the agent is to learn one policy that is able to solve a variety of tasks (around 50) in the environment including ones it has not seen when only trained on a subset of the total tasks.
+
+
+Figure 3: (Left) High-level language generator. (Right) Low-level policy conditioned on language.
+
+Architecture: As outlined in Figure 3, we factor the agent into a hierarchical set-up with a language generator at the high level and a policy conditioned on the language at the low level. At each time step, the state encoder produces a vector representation that is then used as input to both the language generator and the language-conditioned policy. Relevant information about the state, including the grid, inventory, and goal, is encoded. Items which are relevant for crafting are embedded using a 300-dimensional GloVe embedding, summing the embeddings for multi-word items (e.g., Iron Ore Vein). Non-crafting items, such as door, wall, or key, are represented using a one-hot vector. Further details are provided in Section B.
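The item-encoding scheme can be illustrated with a small sketch. The 4-dimensional vectors below stand in for the 300-dimensional GloVe embeddings, and the vocabulary and item lists are invented for the example:

```python
import numpy as np

# Toy 4-d vectors standing in for 300-d GloVe word embeddings.
GLOVE = {
    "iron": np.array([0.1, 0.2, 0.0, 0.5]),
    "ore": np.array([0.3, 0.0, 0.1, 0.2]),
    "vein": np.array([0.0, 0.1, 0.4, 0.1]),
}
# Non-crafting items get a one-hot code instead of a word embedding.
NON_CRAFT_ITEMS = ["door", "wall", "key", "switch"]


def embed_craft_item(name):
    # Multi-word items (e.g. "Iron Ore Vein") sum their per-word embeddings.
    return sum(GLOVE[w] for w in name.lower().split())


def embed_non_craft_item(name):
    one_hot = np.zeros(len(NON_CRAFT_ITEMS))
    one_hot[NON_CRAFT_ITEMS.index(name)] = 1.0
    return one_hot
```

Summing per-word vectors keeps every crafting item in the same embedding space, which is what lets unseen material/item combinations land near their seen components.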
+
+Imitation Learning Pre-training: We warm-start the model using the human demonstrations. Language is generated at the high level with an encoder-decoder framework. The encoding from the state encoder is decoded by an LSTM, which generates a natural language instruction. The target language instruction is the AMT worker's provided instruction. In our dataset, the vocabulary size was 212 after filtering for words that appeared at least 5 times. At test time, we do not have access to the ground truth instructions, so instead the LSTM decoder feeds back the previously generated word as the next input and terminates when the stop token is generated. From the language generator module, we extract the last hidden state of the generated instruction. The hidden state is concatenated with the encoded state and passed through a series of fully connected layers. The final layer outputs the action. In the supervised training phase, the full model is trained by backpropagating through a language loss and an action loss (both cross-entropy losses).
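The joint supervised objective, a per-token cross-entropy on the instruction plus a cross-entropy on the action, can be sketched numerically. The function names, logit values, and the unweighted sum of the two losses are assumptions for illustration:

```python
import numpy as np


def cross_entropy(logits, target_idx):
    # Numerically stable softmax cross-entropy for a single prediction.
    z = logits - np.max(logits)
    log_probs = z - np.log(np.sum(np.exp(z)))
    return -log_probs[target_idx]


def joint_il_loss(word_logits, target_words, action_logits, target_action):
    # Language loss: summed over the tokens of the target instruction.
    lang_loss = sum(cross_entropy(l, t) for l, t in zip(word_logits, target_words))
    # Action loss: a single cross-entropy over the 8 actions.
    act_loss = cross_entropy(action_logits, target_action)
    return lang_loss + act_loss
```

In the real model both terms are backpropagated through the shared state encoder, so the encoder learns features useful for generating instructions and choosing actions at once.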
+
+Reinforcement Learning Fine-tuning: We use the proximal policy optimization (PPO) algorithm (Schulman et al., 2017) with the reward defined below to learn an optimal policy mapping from state encoding to output action. The maximum number of steps in an episode is set to 100. We utilize a training set-up which samples from all tasks (1-step through 5-step tasks). In preliminary experiments, we observe that sampling from 3-step tasks alone, for example, poses too complex an exploration problem for the model to receive any reward. We define a sparse reward, where the agent only receives a reward when it has completed the full craft. In RL fine-tuning, we freeze the language generator component because no further language supervision is provided in the simulated environment. We also find empirically that backpropagating the loss through the language module distorts the output language, as there is no constraint for it to remain similar to human language. All training hyperparameters and details are provided in the supplementary material.
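PPO fine-tuning optimizes the clipped surrogate objective of Schulman et al. (2017); a minimal NumPy sketch of that objective follows. The clip range ε = 0.2 and the array shapes are illustrative, and in the paper's setup gradients of this objective would update only the policy, with the language generator's parameters excluded from the optimizer (frozen):

```python
import numpy as np


def ppo_clip_objective(new_logp, old_logp, advantages, eps=0.2):
    # PPO's clipped surrogate: take the pessimistic (minimum) of the
    # unclipped and clipped policy-ratio terms, averaged over samples.
    ratio = np.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))
```

The clip keeps each update close to the behavior policy, which matters here because the IL warm-start would otherwise be quickly destroyed by large early policy-gradient steps under sparse reward.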
+
+# 5 EXPERIMENTS
+
+We compare our method against five baselines (1-5) which are reduced forms of our method to evaluate the necessity of each component. We also consider two baselines (6-7), which swap out the language generator for alternative high-level tasks, to evaluate the usefulness of language as a selected high-level task. These baselines have the additional training that our method received, as well as the implicit compositionality, but without language. In both baselines (6-7), we perform the same training steps as with our method. Implementation details are presented in Section B.
+
+1. IL: The IL baseline uses the same low-level architecture as our method, without a high-level hidden state. The model learns to map state encoding to an output action.
+
+2. IL w/ Generative Language: IL w/ Generative Language is the supervised baseline of our method, which does not include RL reward. This baseline allows us to observe and compare the benefit of having a reward to train in simulation when the model has access to both actions and language instructions.
+
+3. IL w/ Discriminative Language: We compare our method to a closely adapted version of the method proposed in Hu et al. (2019), which similarly uses language at the high level. Rather than generate language, their high-level language is selected from a set of instructions drawn from the collected user annotations. We discuss this adaptation in the Appendix. They consider instruction sets of sizes $N = \{50, 250, 500\}$ and find the best performance on the largest instruction set $N = 500$ , which is the size we use in our implementation.
+
+4. RL: Another baseline we consider is the reinforcement learning (RL) setting where the agent is provided no demonstrations but has access to sparse-reward in RL. The architecture we use here is the same as the IL architecture. This baseline demonstrates the capacity to learn the crafting tasks without any human demonstrations and allows us to see whether human demonstrations are useful.
+
+5. IL + RL: We also consider a baseline that does not incorporate language, which is IL+RL. In IL+RL, we pretrain the same IL architecture using the human demonstrations as a warm-start to RL. It is important to note that this baseline does not include the natural language instructions as a part of training. We extract all of the state-action pairs and train a supervised model on the data, as in the IL model, and then utilize the RL sparse reward to fine-tune.
+
+6. State Reconstruction (SR): We train an autoencoder to perform state reconstruction. The autoencoder reconstructs the state encoding and the vector at the bottleneck of the autoencoder is used as the hidden layer for the policy. SR as a baseline allows us to consider latent representations in the state encoding as a signal for the policy.
+
+7. State Prediction (SP): We train a recurrent network, with the same architecture as our language generator, to perform state prediction. The model stores the past 3 states from time $T$ to predict the $T + 1$ state. So at time $T$ , the states $T - 2$ , $T - 1$ , and $T$ are used to predict state $T + 1$ . From the LSTM, the hidden state is extracted in the same manner as our IL + RL w/ Lang model. SP as a baseline allows us to compare against another recurrent high-level method with the additional computation power.
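The SP baseline's sliding window over the last three states can be sketched as follows. The class name is illustrative, and a simple window average stands in for the LSTM predictor whose hidden state would feed the policy:

```python
from collections import deque

import numpy as np


class StatePredictionBaseline:
    """Keeps states T-2, T-1, T and predicts state T+1 (sketch)."""

    def __init__(self, window=3):
        self.buffer = deque(maxlen=window)

    def observe(self, state_encoding):
        # deque(maxlen=3) drops the oldest state automatically.
        self.buffer.append(np.asarray(state_encoding, dtype=float))

    def predict_next(self):
        # A real implementation would run the window through an LSTM and
        # reuse its hidden state as the policy's high-level input; the
        # window average is a placeholder predictor.
        if len(self.buffer) < self.buffer.maxlen:
            return None  # not enough history yet
        return np.mean(list(self.buffer), axis=0)
```
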
+
+# 5.1 RESULTS
+
+Standard setting: We evaluate the various methods on crafts for which we have collected human demonstrations to benchmark comparative performance in our environment. An initial analysis is to first consider how much the IL model is able to learn from human demonstrations alone, so we consider IL, IL with Generative Language, and IL with Discriminative Language (results are in Section C). None of these approaches is able to solve the most difficult 5-step tasks, or even the simpler tasks consistently, with an average success rate of about $18 - 19\%$ for 1-step tasks. We believe the 3- and 5-step tasks are difficult enough that annotations alone cannot capture the diversity of board configurations across crafts, given that the board is randomly initialized each time. However, based on an analysis of the language selected (see Table 11 vs. Table 12), the generated language is more interpretable and makes more sense in a zero-shot setting. Given this, all remaining experiments use generative language.
+
+As shown in Figure 4, our method performs well against baselines. We find that human demonstrations are necessary to guide learning because the behavior learned by RL alone is essentially to arbitrarily
+
+
+Figure 4: Comparing baselines with our method on accuracy. Human demonstrations are necessary to complete tasks with 3 or more steps. Averaged over 3 runs.
+
+walk around the grid and interact with items. For simple 1- and 2-step tasks, this is a feasible strategy within the allotted steps of an episode. However, there is little room for error in the most difficult 5-step tasks, as even human demonstrations take on average 40 steps to solve. We also find that for the standard setting, incorporating a high-level network allows the model to achieve good results when comparing our method to SP and SR.
+
+In Figure 5 we show the result of our method when we ablate the number of demonstrations we use. This lets us see how many demonstrations we would feasibly need for the model to learn how to solve the crafting tasks. As we decrease the amount of data provided, we find that there is greater variance in the policy's ability to complete the task, but the performance only significantly degrades when we start using only $25\%$ of the data on the hardest tasks.
+
+
+Figure 5: Ablation of our method with varying amounts of human annotations (25%, 50%, 75% and 100%). For each fraction, we sample that number of demonstrations from the dataset for each type of task. Averaged over 3 runs.
+
+Zero Shot: Our method is able to use natural language instructions to improve performance on difficult tasks in the standard setting. But how well is our method able to do on completely new tasks not seen during training? We investigate our performance on zero-shot tasks, where the agent receives no human demonstrations or instructions, and no rewards on these tasks. The agent has to try to complete these tasks that it has never seen before and cannot train on at all. These unseen tasks do share sub-task structure with tasks which were seen in the training process, so the desired behavior is for the model to reuse subpolicies seen in other contexts for this new context. For example, in training the agent might have seen demonstrations or received rewards for a task like "Cobblestone Stairs" and "Iron Ingot." At test time, we can evaluate the agent on an item like "Cobblestone Ingot", which has never been seen by the agent. The agent should be able to infer the sub-task breakdown given prior knowledge of similar tasks.
+
+We present results on 35 unseen tasks in Table 1. We find that overall our method outperforms all other baselines. While SR and SP were able to match our method's performance in the standard setting, they are not able to generalize. SR and SP are viable solutions for learning complex tasks in the standard RL setting, but the representations these models learned do not aid in generalizing to unseen tasks. Here, we believe, language is key because it creates a representation that abstracts better to new tasks. In the supplementary material, we show that in the case of unseen tasks, the model is indeed able to generate language that properly corresponds to these new combinations of materials and items, in particular decomposing the complex item into sub-tasks that were previously seen in the training phase.
+
+**Demonstration Only and Few-Shot:** In the demonstration-only setting, we assume that we have access to human demonstrations for only some subset of tasks. From the entire pool of 14 tasks we collected demonstrations for, we withhold 3 tasks (around $20\%$ of total tasks) for testing. These 3 tasks consist of a one-, two-, and three-step task. We run results on 3 permutations of withholding 3
+
+Table 1: Accuracy evaluated on 100 games for 35 unseen crafts. Our method outperforms baselines. We do not list IL or IL w/ Language results which are $0\%$ for all tasks.
+
+
+| Steps | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | Overall |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| RL | 93 | 91 | 95 | 90 | 92 | 92 | 91 | 81 | 0 | 0 | 0 | 0 | 0 | 0 | 13 | 0 | 72 | 28 | 46 |
+| IL+RL | 92 | 98 | 85 | 94 | 91 | 83 | 95 | 97 | 21 | 96 | 37 | 18 | 97 | 82 | 97 | 93 | 94 | 91 | 81 |
+| SP | 0 | 20 | 0 | 33 | 37 | 2 | 69 | 2 | 90 | 0 | 98 | 0 | 89 | 1 | 78 | 1 | 74 | 2 | 33 |
+| SR | 96 | 64 | 0 | 0 | 67 | 70 | 60 | 99 | 0 | 79 | 88 | 74 | 69 | 37 | 16 | 98 | 70 | 0 | 55 |
+| Ours | 99 | 99 | 100 | 100 | 93 | 99 | 100 | 100 | 97 | 98 | 99 | 99 | 99 | 100 | 97 | 100 | 99 | 97 | 98 |
+
+| Steps | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 5 | 5 | 5 | 5 | 5 | Overall (all) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| RL | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 23 |
+| IL+RL | 90 | 87 | 0 | 0 | 0 | 0 | 0 | 0 | 85 | 29 | 86 | 87 | 39 | 0 | 0 | 0 | 0 | 0 | 55 |
+| SP | 89 | 0 | 12 | 0 | 4 | 10 | 0 | 47 | 0 | 1 | 26 | 0 | 16 | 0 | 0 | 0 | 0 | 0 | 22 |
+| SR | 1 | 0 | 2 | 0 | 12 | 0 | 0 | 0 | 0 | 38 | 6 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 30 |
+| Ours | 97 | 98 | 2 | 3 | 18 | 0 | 40 | 0 | 96 | 39 | 95 | 98 | 49 | 36 | 0 | 0 | 0 | 14 | 69 |
+
+tasks. For each of the 3 withheld tasks, we include their demonstrations in the supervised training phase but do not provide reward in RL fine-tuning. We vary the amount of demonstrations provided: $5\%$, $10\%$, and $100\%$. The most generous case assumes that the model has access to all demonstrations collected in the dataset; per task, the total number of demonstrations was about 300-500. Additionally, we considered a stricter few-shot case where we reduce the number of demonstrations to 20-40, which is about $5 - 10\%$ of the original number. We do not include 5-step tasks because we only collected demonstrations for two 5-step tasks. From the results in Table 2, we can see that our method outperforms baselines in its ability to utilize the few demonstrations to improve performance.
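The demonstration-withholding scheme described above can be sketched as follows. This is a minimal illustration under assumed data structures: `make_fewshot_split`, the task names, and the `trace_*` placeholders are all hypothetical, not the authors' actual code.

```python
import random

def make_fewshot_split(demos_by_task, heldout_tasks, fraction, seed=0):
    """Keep all demonstrations for ordinary tasks, but only `fraction`
    of the demonstrations for withheld (no-RL-reward) tasks."""
    rng = random.Random(seed)
    train = {}
    for task, demos in demos_by_task.items():
        if task in heldout_tasks:
            k = max(1, int(len(demos) * fraction))
            train[task] = rng.sample(demos, k)
        else:
            train[task] = list(demos)
    return train

# e.g. keeping 5% of ~400 collected demos leaves ~20 for a withheld task
demos = {"Iron Ingot": [f"trace_{i}" for i in range(400)],
         "Gold Ore": [f"trace_{i}" for i in range(300)]}
split = make_fewshot_split(demos, heldout_tasks={"Iron Ingot"}, fraction=0.05)
```

The withheld tasks still appear in supervised training (via the subsampled demonstrations) but receive no reward signal during RL fine-tuning.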
+
+Table 2: Evaluation of few-shot tasks for our method against baseline comparisons. We consider three settings for how many demonstrations are given to the model: $5\%$ (20 demos), $10\%$ (40 demos), $100\%$ . Variance results are included in supplementary material. Results are averaged across 3 seeds.
+
+ | IL | IL w/Lang | IL+RL | SP | SR | Ours |
| Steps | 5% | 10% | 100% | 5% | 10% | 100% | 5% | 10% | 100% | 5% | 10% | 100% | 5% | 10% | 100% | 5% | 10% | 100% |
| 1-step | 16% | 18% | 18% | 17% | 19% | 19% | 96% | 91% | 98% | 96% | 98% | 97% | 53% | 84% | 94% | 97% | 90% | 95% |
| 2-step | 4% | 3% | 0% | 5% | 5% | 9% | 66% | 64% | 66% | 53% | 64% | 71% | 10% | 40% | 63% | 87% | 73% | 82% |
| 3-step | 1% | 2% | 0% | 1% | 3% | 4% | 1% | 23% | 22% | 10% | 27% | 46% | 0% | 31% | 50% | 5% | 47% | 74% |
+
+**Interpretability:** One key benefit of incorporating natural language into the model is that it lets humans interpret how the model makes decisions. We observe that the generated instructions closely match the recipes that we provide to the annotators in the data collection phase in both train (Table 12) and test (Tables 13, 14) settings. However, the discriminative language did not break down the task into sensible steps (Table 11). Figure 6 presents example instructions generated by our model.
+
+# 6 CONCLUSION
+
+In this paper, we present a dataset of human demonstrations and natural language instructions for solving hierarchical tasks in a crafting-based world. We also describe a hierarchical model that enables efficient learning from this data through a combined supervised and reinforcement learning approach. In general, we find that leveraging human demonstrations allows the model to drastically outperform RL baselines. Additionally, our results demonstrate that natural language not only allows the model to explain its decisions but also improves performance on the most difficult crafting tasks and enables generalization to unseen tasks. We also demonstrate the model's ability to expand its skill set with a few additional human demonstrations. While we demonstrate our approach's success in a grid-based crafting environment, we believe our method can be adapted toward generalizable, multi-task learning in a variety of other environments.
+
+
+Figure 6: Generated language at test time for a 2-step craft. We display only the key frames of the trajectory that led to changes in the language. These key frames match changes in the inventory to the object mentioned in the generated instruction. Qualitatively, the generated instructions are consistent during what we would describe as a sub-task. Quantitatively, the network spends an average of 4.8 environment steps per generated language output.
+
+# REFERENCES
+
+Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3674-3683, 2018.
+Jacob Andreas, Dan Klein, and Sergey Levine. Modular multitask reinforcement learning with policy sketches. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 166-175. JMLR.org, 2017.
+Jacob Andreas, Dan Klein, and Sergey Levine. Learning with latent language. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 2166-2179, 2018.
+SRK Branavan, David Silver, and Regina Barzilay. Learning to win by reading manuals in a monte-carlo framework. Journal of Artificial Intelligence Research, 43:661-704, 2012.
+Tianshi Cao, Jingkang Wang, Yining Zhang, and Sivabalan Manivasagam. Babyai++: Towards grounded-language learning beyond memorization. arXiv preprint arXiv:2004.07200, 2020.
+Devendra Singh Chaplot, Kanhashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, and Ruslan Salakhutdinov. Gated-attention architectures for task-oriented language grounding. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
+David L Chen and Raymond J Mooney. Learning to interpret natural language navigation instructions from observations. In Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011.
+Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. BabyAI: First steps towards grounded language learning with a human in the loop. In International Conference on Learning Representations, 2019a. URL https://openreview.net/forum?id=rJeXCo0cYX.
+Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. BabyAI: First steps towards grounded language learning with a human in the loop. In International Conference on Learning Representations, 2019b. URL https://openreview.net/forum?id=rJeXCo0cYX.
+Geoffrey Cideron, Mathieu Seurin, Florian Strub, and Olivier Pietquin. Self-educated language agent with hindsight experience replay for instruction following. arXiv preprint arXiv:1910.09451, 2019.
+John D Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, Jacob Andreas, John DeNero, Pieter Abbeel, and Sergey Levine. Guiding policies with language via meta-learning. In International Conference on Learning Representations, 2018.
+
+National Research Council et al. How people learn: Brain, mind, experience, and school: Expanded edition. National Academies Press, 2000.
+Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied Question Answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
+Peter Dayan and Geoffrey E Hinton. Feudal reinforcement learning. In Advances in neural information processing systems, pp. 271-278, 1993.
+Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. In International Conference on Learning Representations, 2018.
+Carlos Florensa, Yan Duan, and Pieter Abbeel. Stochastic neural networks for hierarchical reinforcement learning. In International Conference on Learning Representations, 2017.
+Carlos Florensa, David Held, Xinyang Geng, and Pieter Abbeel. Automatic goal generation for reinforcement learning agents. In International conference on machine learning, pp. 1515-1528, 2018.
+Sebastien Forestier, Rémy Portelas, Yoan Mollard, and Pierre-Yves Oudeyer. Intrinsically motivated goal exploration processes with automatic curriculum learning. arXiv preprint arXiv:1708.02190, 2017.
+Karol Hausman, Jost Tobias Springenberg, Ziyu Wang, Nicolas Heess, and Martin Riedmiller. Learning an embedding space for transferable robot skills. 2018.
+Nicolas Heess, Greg Wayne, Yuval Tassa, Timothy Lillicrap, Martin Riedmiller, and David Silver. Learning and transfer of modulated locomotor controllers. arXiv preprint arXiv:1610.05182, 2016.
+Hengyuan Hu, Denis Yarats, Qucheng Gong, Yuandong Tian, and Mike Lewis. Hierarchical decision making by generating and following natural language instructions. In Advances in neural information processing systems, pp. 10025-10034, 2019.
+J. McVicker Hunt. Intrinsic motivation and its role in psychological development. In Nebraska symposium on motivation, volume 13, pp. 189-282. University of Nebraska Press, 1965.
+Yiding Jiang, Shixiang Shane Gu, Kevin P Murphy, and Chelsea Finn. Language as an abstraction for hierarchical deep reinforcement learning. In Advances in Neural Information Processing Systems, pp. 9414-9426, 2019.
+Hoang Le, Nan Jiang, Alekh Agarwal, Miroslav Dudik, Yisong Yue, and Hal Daumé. Hierarchical imitation and reinforcement learning. In International Conference on Machine Learning, pp. 2917-2926, 2018.
+Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Edward Grefenstette, Shimon Whiteson, and Tim Rocktäschel. A survey of reinforcement learning informed by natural language. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pp. 6309-6317. International Joint Conferences on Artificial Intelligence Organization, 7 2019. doi: 10.24963/ijcai.2019/880. URL https://doi.org/10.24963/ijcai.2019/880.
+Corey Lynch and Pierre Sermanet. Grounding language in play. arXiv preprint arXiv:2005.07648, 2020.
+Kenneth Marino, Abhinav Gupta, Rob Fergus, and Arthur Szlam. Hierarchical rl using an ensemble of proprioceptive periodic policies. ICLR, 2019.
+David Matthews, Sam Kriegman, Collin Cappelle, and Josh Bongard. Word2vec to behavior: morphology facilitates the grounding of language in machines. 2019.
+
+Hongyuan Mei, Mohit Bansal, and Matthew R Walter. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
+Andrew N Meltzoff. Imitation, objects, tools, and the rudiments of language in human ontogeny. Human evolution, 3(1-2):45-64, 1988.
+Karthik Narasimhan, Regina Barzilay, and Tommi Jaakkola. Grounding language for transfer in deep reinforcement learning. Journal of Artificial Intelligence Research, 63:849-874, 2018.
+Khanh Nguyen, Debadeepta Dey, Chris Brockett, and Bill Dolan. Vision-based navigation with language-based assistance via imitation learning with indirect intervention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12527-12537, 2019.
+Junhyuk Oh, Satinder Singh, Honglak Lee, and Pushmeet Kohli. Zero-shot task generalization with multi-task deep reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2661-2670. JMLR.org, 2017.
+Ronald Parr and Stuart J Russell. Reinforcement learning with hierarchies of machines. In Advances in neural information processing systems, pp. 1043-1049, 1998.
+John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+Tianmin Shu, Caiming Xiong, and Richard Socher. Hierarchical and interpretable skill acquisition in multi-task reinforcement learning. In International Conference on Learning Representations, 2018.
+Martin Stolle and Doina Precup. Learning options in reinforcement learning. In International Symposium on abstraction, reformulation, and approximation, pp. 212-223. Springer, 2002.
+Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1-2):181-211, 1999.
+Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R Walter, Ashis Gopal Banerjee, Seth Teller, and Nicholas Roy. Understanding natural language commands for robotic navigation and mobile manipulation. In Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011.
+Edward C Williams, Nakul Gopalan, Mine Rhee, and Stefanie Tellex. Learning to parse natural language to grounded reward functions with weak supervision. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1-7. IEEE, 2018.
+Haonan Yu, Haichao Zhang, and Wei Xu. Interactive grounded language acquisition and generalization in a 2d world. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1UOm4gA-.
+Victor Zhong, Tim Rocktäschel, and Edward Grefenstette. Rtfm: Generalising to new environment dynamics via reading. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SJgob6NKvH.
+
+# A DATASET
+
+# A.1 COLLECTION PROCESS
+
+In this section, we provide additional details for our data collection process on Amazon Mechanical Turk (AMT). First, we filter workers by the following criteria: a) HIT approval rate greater than 95%, b) location in an English-speaking country, c) no soft block has been granted. These criteria help ensure the quality of the collected data, particularly in terms of natural language.
+
+
+Figure 7: Workflow to collect human demonstrations for our dataset.
+
+We collected the dataset over the course of a few weeks. For each HIT, we paid the Turker $0.65; on average the task took about 3-4 minutes. For each HIT, we generate a unique entrance code that is provided to the Turker on the AMT website. The Turker is also provided with a unique exit code once the HIT is complete. Given the entrance and exit codes, we are able to pay workers accordingly for their demonstrations.
+
+The entrance and exit codes ensure that we do not have workers doing extra HITs that we are unable to pay for, and that workers who submit HITs have indeed completed our task. We also wrote a parsing script to quickly verify all submitted HITs before payment. Specifically, we used this script to manually review the list of instructions generated for each task to ensure that the instructions provided were indeed pertinent to the task at hand.
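The entrance/exit-code bookkeeping described above can be sketched as a small ledger. Everything here is a hypothetical illustration (`HitLedger` and its methods are not the authors' actual tooling), showing how matching code pairs let a requester pay only verified submissions.

```python
import secrets

class HitLedger:
    """Track issued entrance/exit code pairs so only completed,
    matching HITs are approved for payment."""

    def __init__(self):
        self.pairs = {}       # entrance code -> exit code
        self.completed = set()

    def issue(self):
        """Create a new HIT: return the entrance code shown on AMT."""
        entrance, exit_code = secrets.token_hex(4), secrets.token_hex(4)
        self.pairs[entrance] = exit_code
        return entrance

    def complete(self, entrance):
        """Reveal the exit code only once the task is finished."""
        self.completed.add(entrance)
        return self.pairs[entrance]

    def verify(self, entrance, exit_code):
        """True only for a finished HIT whose codes match -> pay."""
        return (entrance in self.completed
                and self.pairs.get(entrance) == exit_code)

ledger = HitLedger()
e = ledger.issue()
x = ledger.complete(e)
valid = ledger.verify(e, x)          # matching pair -> approve payment
invalid = ledger.verify(e, "bogus")  # mismatched exit code -> reject
```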
+
+We had many returning workers who completed our task. A few even emailed our requester address to let us know that they really enjoyed the HIT and found it quite interesting to work on. This suggests that providing demonstrations of this kind is relatively engaging for workers.
+
+# A.2 HIT INSTRUCTIONS
+
+Prior to starting the task, each worker was shown an instructions page that uses cooking stir-fry as an analogy for the types of instructions we expected them to provide. The demo page is shown in Figure 8.
+
+Conveying the desired level of specificity was difficult without explicitly providing example instructions. We deliberately chose not to provide examples so as not to prime workers toward a particular format. New workers were given two short games to complete to familiarize themselves with the environment. Returning workers were given one longer game to complete, as they already had experience with the task. Workers who completed the task as described were fully compensated.
+
+Initially, some workers provided too few instructions, suggesting they wanted to finish the task as quickly as possible, while others provided instructions that were too granular, writing low-level commands such as "press left" or "go up 1" rather than abstracting the task into sub-tasks. We checked compliance before the worker submitted the HIT and built in precautions so that workers had to redo a level if they did not comply with the instructions clearly delineated in the demo.
+
+# Demo
+
+Welcome to the HIT! Read instructions carefully. If you do not follow the instructions below, we reserve the right to not pay for the HIT.
+
+Below is a snippet of a demo from Wikihow of the type of annotation we will be looking for, but in our game setting.
+
+These are their steps to make vegetable stir-fry.
+
+Notice how the tutorial gives you a one sentence description of each high level step, but it is not too specific about each action.
+
+1. Select vegetables to use.
+
+
+
+2. Wash and dry the vegetables...
+
+
+
+3. Slice the vegetables into thin pieces.
+
+
+
+Credits to https://m.wikihow.com/Stir-Fry-Vegetables.
+
+Similarly, in our task, you will be given a goal for the agent to accomplish.
+
+
+
+And we want you to write the instruction steps to achieve this goal.
+
+1. Keep it high level and break the task down, just like Wikihow.
+2. Do not write just the specific action (e.g. up, down, left, craft, mine, etc) or something like "press up" or "press left 2 times" which are not meaningful steps as your instruction.
+3. Since we want high level steps, this means you will execute more than 1 action per instruction.
+
+
+
+Then, demonstrate how to do this step by executing it using the UP, LEFT, DOWN, RIGHT buttons (or arrow keys).
+
+Use the other buttons (or keys denoted in the parentheses), explained below to complete the task.
+
+
+Figure 8: Demo instructions for AMT workers.
+
+After you have completed the first instruction, press the "DONE" button (or hit enter).
+
+
+
+Then return to write the next instruction, then execute, and do this step-by-step until goal is completed.
+
+
+
+By the end of this game, you should have typed MULTIPLE instructions, each followed by a sequence of button/ key presses.
+
+You should be writing meaningful high level instructions which are not "up", "press up", or "go left 2 times".
+
+If you do not follow these instructions, you will not be paid.
+
+# A.3 ADDITIONAL ENVIRONMENT AND CRAFTING TASK DETAILS
+
+Figure 9 gives an example of the type of task (Make Iron Ore) that would be presented to the worker, which includes the goal, recipes, and the current board. Table 3 shows how the tasks are related in terms of sub-tasks. This might be because of a similar material (i.e. Iron) or a similar craft (i.e. Stairs).
+
+[Figure content: GOAL — Make Iron Ore (Iron Ore=1); RECIPES — Ore Vein, Pickaxe, Iron Ore.]
+
+Figure 9: Example board and goal configuration where the goal is to make an iron ore. The worker uses the recipes provided to give appropriate instructions and execute accordingly.
+
+Table 3: List of recipes for which we have collected annotations, labeled by the number of steps needed to complete it and other recipes which may share sub-tasks of underlying structure.
+
+| ID | Recipe Name | Steps | Related Crafts by ID |
| 1 | Gold Ore | 1 | 2 |
| 2 | Iron Ore | 1 | 1,8 |
| 3 | Diamond Boots | 2 | 12,14 |
| 4 | Brick Stairs | 2 | 5,7 |
| 5 | Cobblestone Stairs | 2 | 4,7,13 |
| 6 | Wooden Door | 3 | 7 |
| 7 | Wood Stairs | 3 | 4,5,6 |
| 8 | Iron Ingot | 3 | 2 |
| 9 | Leather Leggings | 3 | 10,11,12 |
| 10 | Leather Chestplate | 3 | 9,11,12 |
| 11 | Leather Helmet | 3 | 9,10,12 |
| 12 | Leather Boots | 3 | 3,9,10,11 |
| 13 | Stone Pickaxe | 5 | 5,14 |
| 14 | Diamond Pickaxe | 5 | 3,13 |
+
+
+Figure 10: A more in-depth example of 3 out of the 14 training tasks to show how the subtasks are related (red boxes = final craft, blue boxes = raw material).
+
+
+
+
+
+Table 4: Summary statistics for tasks of varying difficulty.
+
+| Steps | Average # of Instructions | Average # of Actions |
| 1-step | 3.7 | 15.4 |
| 2-step | 4.9 | 21.5 |
| 3-step | 6.1 | 27.6 |
| 5-step | 8.8 | 40.1 |
+
+# A.4 EXAMPLE DATA COLLECTED
+
+Table 5 gives examples of instructions randomly sampled from our dataset. Even with a limited number of crafts, we were able to collect language instructions with diversity in sentence construction. There are inconsistencies in capitalization and spelling which we handled in the preprocessing of the data. Table 6 shows the most frequently used instructions. Figure 11 gives summary statistics of the instruction side of the dataset.
+
+
+Figure 11: (Left) Instruction frequency (Middle) Word frequency (Right) Histogram of instruction lengths.
+
+
+
+
+
+Table 5: Examples of randomly sampled instructions.
+
+| Grab the pickaxe. |
+| Make a stone pickaxe from stick and cobblestone. |
+| Go to leather boots bench and Craft Leather Boots. |
+| Move to Tree. |
+| Craft at the leather helmet bench |
+| Make a leather chestplate from the leather. |
+| Unlock the door. |
+| Move up two squares and grab the key. |
+| Go to switich and open door with Toggle Switch. |
+| Chop down the tree to get its wood. |
+| Go to the stone pickaxe crafting bench. |
+| Mine diamond ore. |
+| Go to stock and click Mine to harvest Cobblestone. |
+| Toggle the switch to open the door. |
+| Grab the pickaxe. |
+| Go back through door and mine gold ore vein. |
+| Walk to brick factory and acquire bricks. |
+| Go to wood bench and Craft Wood Plank. |
+| Craft Diamond Pickaxe. |
+| Move to ingot. |
+| Next step, use axe on tree to get wood. |
+| Go to iron ore vein and mine for iron ore. |
+| Unlock the door with the key and enter the room. |
+| Pick up axe and put into inventory. |
+| Craft at the stick bench. |
+| Open the door to your right. |
+| Mine the cobblestone stash. |
+| Get eht pickaxe. |
+| Go to tools and click Grab to take each one. |
+| Craft diamond axe with its bench, stick, diamond. |
+| Pick up key using "GRAB". |
+| Grab the key; this may be useful. |
+| Collect Tool and Pass Through Door. |
+| Go to brick stairs bench and craft. |
+| Move up to the axe and pick it up. |
+| Use Mine on Iron Ore Vein to collect Iron. |
+| Use pickaxe to mine diamond ore vein. |
+| Move to Stick bench and Craft Stick. |
+| Craft Wood Plank. |
+| Harvest wood from tree using "MINE". |
+| 2. "Open Door", then move to Pickax and "grab". |
+| Craft brick stairs with its bench and brick. |
+| Go "Mine" both the iron ore vein and the coal vein. |
+| Mine the wood. |
+| Mine Diamond Ore Vein. |
+| Chop down the tree to get its wood. |
+| Move to Rabbit. |
+| Go to tools and click Grab to take each one. |
+| Grab key. |
+| Go to stick workbench and press craft. |
+| Go to TOOL (Pickaxe). |
+| Go to the stone pickaxe table. |
+| Move to Wood Plank bench and Craft. |
+| Mine cobblestone stash, then go to tree. |
+| Craft Wooden Door. |
+| Craft diamond boots. |
+| Use pickaxe to mine iron ore vein. |
+| Flip the switch. |
+
+# B METHODS DETAILS
+
+Given the dataset of human demonstrations, we convert traces of each game into state-action pairs for training. In the subsequent sections are additional details of training parameters for both the supervised IL training and RL fine-tuning. The computing infrastructure for each experiment was on 1 GeForce GTX 1080 Ti GPU. Each experiment took between a few hours to a few days to run.
+
+# B.1 DATA PREPROCESSING
+
+From the dataset, which had 6,322 game traces, we extracted 195,405 state-action pairs and 35,901 total instructions. This is done by matching each action to its corresponding state within a trace, as well as to the high-level natural language instruction under which it was executed. Each instruction was edited using a spell checker package to reduce the size of the final vocabulary. In the link above, we provide the cleaned version of the dataset.
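The trace-flattening step above can be sketched as follows; the segment/step field names are assumptions about the trace layout, not the dataset's actual schema.

```python
def extract_pairs(trace):
    """Flatten one annotated game trace into (state, instruction, action)
    tuples: every low-level step inherits the high-level instruction
    under which it was demonstrated."""
    pairs = []
    for segment in trace:
        instruction = segment["instruction"]
        for state, action in segment["steps"]:
            pairs.append((state, instruction, action))
    return pairs

# toy trace: two instructions, three total actions
trace = [
    {"instruction": "grab the pickaxe", "steps": [("s0", "up"), ("s1", "grab")]},
    {"instruction": "mine iron ore vein", "steps": [("s2", "mine")]},
]
pairs = extract_pairs(trace)
```

Applying this over all 6,322 traces is what yields many more state-action pairs (195,405) than instructions (35,901), since each instruction typically spans several actions.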
+
+# B.2 IL PARAMETERS
+
+Both the language generation and language-conditioned policy networks use a cross entropy loss and the Adam optimizer (learning rate 0.001). In addition, the language loss includes doubly stochastic regularization based on the attention mechanism, and we clip the gradient norm to a max norm of 3. We train for 15-20 epochs with a batch size of 64 during supervised training. By evaluating after each epoch on 100 randomly spawned games, we find that performance plateaus after that number of epochs. As in the RL reward, we only consider a game to be complete if the final craft is completed within the given number of steps. We use the entire dataset for training, since validation/testing were performed on randomly generated new games.
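One plausible form of the language loss above, with the doubly stochastic attention regularizer in the style of Show, Attend and Tell, is sketched below. The function signature and the regularization weight `lam` are assumptions; the term encourages each encoder slot to receive roughly one unit of attention summed over decoding steps.

```python
import numpy as np

def language_loss(logits, targets, attn, lam=1.0):
    """Token cross entropy plus doubly stochastic attention regularization.

    logits: (T, V) unnormalized token scores for T decoding steps
    targets: (T,) ground-truth token ids
    attn: (T, S) attention weights over S encoder slots (rows sum to 1)
    """
    # numerically stable softmax cross entropy
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -logp[np.arange(len(targets)), targets].mean()
    # doubly stochastic term: penalize slots whose total attention != 1
    reg = lam * ((1.0 - attn.sum(axis=0)) ** 2).sum()
    return ce + reg

# uniform logits over 3 tokens, attention columns summing exactly to 1
logits = np.zeros((2, 3))
targets = np.array([0, 1])
attn = np.full((2, 2), 0.5)
loss = language_loss(logits, targets, attn)
```

With perfectly balanced attention the regularizer vanishes, so the loss reduces to the plain cross entropy term.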
+
+Table 6: Instructions sorted by usage frequency.
+
+| Grab pickaxe. |
+| Grab the pickaxe. |
+| Open the door. |
+| Open door. |
+| Grab axe. |
+| Craft wood plank. |
+| Toggle the switch. |
+| Grab the key. |
+| Mine tree. |
+| Grab key. |
+| Craft stick. |
+| Toggle switch to open door. |
+| Grab the axe. |
+| Toggle switch. |
+| Go to pickaxe. |
+| Go to wood bench and craft wood plank. |
+| Grab key to open door. |
+| Get the pickaxe. |
+| Go to tools and click grab to take each one. |
+| Craft leather. |
+| Go to key grab it go to door and open door. |
+| Go to switch and open door with toggle switch. |
+| Go to tree. |
+| Grab the sword. |
+| Go to stick bench and craft stick. |
+| Mine the tree. |
+| Grab sword. |
+| Pick up pickaxe using grab. |
+| Mine rabbit. |
+| Mine cobblestones stash. |
+| Pick up the pickaxe. |
+| Go to switch. |
+| Go to pickaxe and click grab to take it. |
+| Go to wood plank and press craft. |
+| Go to axe. |
+| Get the key. |
+| Mine diamond ore vein. |
+| Mine cobblestones. |
+| Go to stick and press craft. |
+| Go to pickaxe and select "grab". |
+| Make planks. |
+| Use key to open door. |
+| Craft diamond pickaxe. |
+| Go to pickaxe and press grab. |
+| Craft stone pickaxe. |
+| Unlock the door. |
+| Make sticks. |
+| Go to stone bench and craft stone pickaxe. |
+| Mine the diamond ore vein. |
+| Go to key. |
+| Go to door. |
+| Mine cobblestone stash. |
+| Grab axe to mine tree. |
+| Pick up axe using grab. |
+| Get the axe. |
+| Go to stocks click mine to harvest wood/stone. |
+| Craft wood plank with its bench and wood. |
+| Use the switch to open the door. |
+
+Table 7: We compare to some related datasets/environments (Chevalier-Boisvert et al., 2019b; Jiang et al., 2019; Hu et al., 2019; Anderson et al., 2018). We do not report dataset size for environments that generate synthetic language. Note that $\sim$ means limited evaluation (they demonstrate unseen evaluation in one setting only). Most notably, our work focuses on developing a method that performs well on unseen tasks. We want to clarify that unseen means tasks/environments for which the agent has never received supervised reward. This is not the same as generating a new configuration of a task that the agent received reward for in training.
+
+| Dataset | Language | Dataset Size | Task | Unseen Tasks |
| BabyAI | Synthetic | - | Navigation/object placement | No |
| HAL/CLEVR | Synthetic | - | Object sorting/arrangement | ~ |
| R2R | Natural | 10,800 Views | Vision+language navigation | Yes |
| MiniRTS | Natural | 5,392 Games | Real-time strategy game | No |
| Ours | Natural | 6,322 Games | Compositional crafting tasks | Yes |
+
+# B.3 RL PARAMETERS
+
+To fine-tune with RL, we first created a gym environment for the Maze game, which at reset time spawns a new Mazebase game in the backend. In the parameters of the environment, we define the maximum episode length to be 100 steps. The action space is a Discrete space of size 8 (up, down, left, right, toggle switch, grab, craft, mine) and the observation space is a flat vector that concatenates all state observations. For the PPO algorithm, we use a learning rate of 2.5e-4, clip parameter of 0.1, value loss coefficient of 0.5, 8 processes, 128 steps, 4 mini-batches, linear learning rate decay, an entropy coefficient of 0.01, and 100,000,000 environment steps.
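The environment interface and hyperparameters above can be sketched as follows. This is an illustrative stand-in, not the authors' actual code: `CraftingEnv` mimics the Gym `reset`/`step` API without depending on the gym package, and `_DemoGame` is a fake backend used only to exercise the wrapper.

```python
class CraftingEnv:
    """Gym-style wrapper sketch around a backend game factory."""
    MAX_EPISODE_STEPS = 100
    ACTIONS = ["up", "down", "left", "right",
               "toggle switch", "grab", "craft", "mine"]  # Discrete(8)

    def __init__(self, game_factory):
        self.game_factory = game_factory
        self.game = None
        self.t = 0

    def reset(self):
        self.game = self.game_factory()  # spawn a new game in the backend
        self.t = 0
        return self.game.observe()       # flat concatenated state vector

    def step(self, action_idx):
        self.t += 1
        obs, reward, solved = self.game.act(self.ACTIONS[action_idx])
        done = solved or self.t >= self.MAX_EPISODE_STEPS
        return obs, reward, done, {}

class _DemoGame:
    """Tiny stand-in backend: rewards 1 when 'craft' is chosen."""
    def observe(self):
        return [0.0] * 4
    def act(self, a):
        return [0.0] * 4, float(a == "craft"), a == "craft"

# hyperparameters as listed in the text (names are illustrative)
PPO_CONFIG = dict(lr=2.5e-4, clip=0.1, value_coef=0.5, entropy_coef=0.01,
                  num_processes=8, num_steps=128, num_mini_batch=4,
                  linear_lr_decay=True, total_env_steps=100_000_000)

env = CraftingEnv(_DemoGame)
obs = env.reset()
obs, r, done, _ = env.step(CraftingEnv.ACTIONS.index("craft"))
```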
+
+# B.4 ARCHITECTURE DETAILS
+
+# B.4.1 STATE ENCODING
+
+As shown in Figure 12, the relevant state information that is encoded includes the $5 \times 5$ grid, the inventory, and the goal. We have two representations of the $5 \times 5$ grid: one with items relevant for crafting and another with a one-hot representation of non-crafting-related items, such as a door, wall, or key. All crafting-related items on the board, in the inventory, and in the goal are embedded using 300-dimensional GloVe embeddings, summing the embeddings for multi-word items (e.g. Iron Ore Vein). The intuition for this distinction is that for generalization, crafting items should be associated in terms of compositionality, whereas non-crafting items are standalone.
+
+To compute the state encoding, we first pass the two grids, the inventory, and the goal through separate fully connected layers to reduce them to the same dimension and then concatenate the resulting vectors. The final size of the state encoding tensor is (27, 128), where 25 rows correspond to the grid, 1 to the inventory, and 1 to the goal.
+
+
+Figure 12: At each time step we encode state-relevant observations, including the goal, inventory, and grid. This encoding is utilized by both the language generator and the language-conditioned policy. The boxes in green denote the observations that were encoded using GloVe embeddings.
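The embed-sum-project scheme described above can be sketched as follows. This is a minimal numerical illustration: the random vectors stand in for actual GloVe embeddings, the projection matrices for the learned fully connected layers, and it covers only the crafting-item pathway (not the one-hot non-crafting grid).

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical stand-ins: random 300-d vectors in place of GloVe embeddings
glove = {w: rng.normal(size=300) for w in
         ["iron", "ore", "vein", "pickaxe", "wood", "plank"]}

def embed(item):
    """Sum per-word vectors for multi-word items, e.g. 'iron ore vein'."""
    return sum(glove[w] for w in item.split())

# separate projections into a shared 128-d space (weights are illustrative)
W_grid, W_inv, W_goal = (rng.normal(size=(300, 128)) * 0.01 for _ in range(3))

def encode_state(grid_items, inventory_item, goal_item):
    """25 grid cells + 1 inventory slot + 1 goal slot -> (27, 128) tensor."""
    rows = [embed(it) @ W_grid for it in grid_items]  # 25 grid rows
    rows.append(embed(inventory_item) @ W_inv)        # inventory row
    rows.append(embed(goal_item) @ W_goal)            # goal row
    return np.stack(rows)

grid = ["iron ore vein"] + ["wood"] * 24              # flattened 5x5 grid
enc = encode_state(grid, "pickaxe", "iron ore")
```

Summing per-word embeddings is what lets an unseen composite item such as "iron ore" land near its constituent words in embedding space.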
+
+# B.4.2 IL W/ DISCRIMINATIVE LANGUAGE
+
+Since Hu et al. (2019) is closely related to our work, we wanted to compare our method against theirs. Their method only uses behavioral cloning, which corresponds to our IL w/ Generative Language but with discriminative language instead. We modify our high-level language generator to discriminate amongst the most frequent $N = 500$ instructions by adapting the LSTM-based language model code released by Hu et al. (2019). We plugged in our own state encoding instead of theirs, which is tailored to their environment. In summary, the high-level language module is largely the same as our model, both LSTMs, except for the modification of the output layer, which predicts over the set of possible instructions. We similarly extract the hidden state on which to condition the low-level policy. The low-level policy is kept the same as our IL w/ Generative Language model for fair comparison. The training parameters are the same as for the other baselines.
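Building the $N = 500$ discriminative label set amounts to frequency-ranking the collected instructions. A sketch under assumed normalization (the lowercasing/stripping choices are ours, not necessarily the authors'):

```python
from collections import Counter

def build_instruction_set(all_instructions, n=500):
    """Keep the N most frequent instructions as the classifier's label
    set; anything outside it cannot be predicted by the discriminative
    head."""
    counts = Counter(i.strip().lower() for i in all_instructions)
    return [instr for instr, _ in counts.most_common(n)]

# toy corpus: frequencies 3, 2, and 1
data = ["Grab pickaxe."] * 3 + ["Open the door."] * 2 + ["Mine tree."]
labels = build_instruction_set(data, n=2)
```

The discriminative output layer then predicts an index into this fixed list, in contrast to the generative model, which emits instructions token by token.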
+
+# B.4.3 OUR METHOD
+
+The state encoder, which is used in both the high- and low-level models, is covered in the section above. The language generator LSTM takes an input of size 128 and has a hidden size of 32. The policy network has layers of size 48 and 8 and uses ReLU activations.
+
+# B.4.4 STATE RECONSTRUCTION
+
+The state reconstruction architecture was instantiated as an autoencoder with 4 hidden layers, taking the state encoding tensor as input. The low-dimensional representation after 2 hidden layers is used as the hidden state for the policy. The autoencoder is trained with MSE loss on the state encoding. This model was trained for a total of 25 epochs in the IL phase.
+
+# B.4.5 STATE PREDICTION
+
+The state prediction architecture largely resembles the language generator; however, we removed the GloVe embedding layer. In IL training, the dataset was modified to include the past $T$ states. In RL training, the environment was modified to record the previous $T$ states. If there were $< T$ states in the current trajectory, the same state is repeated as input and subsequently replaced as new states arrive. The recurrent network is trained with MSE loss on the state encoding. This model was trained for a total of 20 epochs in the IL phase.
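The windowing-and-padding rule above admits a simple sketch. This is one plausible reading of "the same state is used as input and subsequently replaced": short trajectories are left-padded by repeating the earliest available state.

```python
def past_window(states, T):
    """Return the last T states of a trajectory; if fewer than T exist,
    left-pad by repeating the earliest state (padding is displaced as
    new states arrive)."""
    if not states:
        raise ValueError("need at least one state")
    recent = states[-T:]
    return [recent[0]] * (T - len(recent)) + recent

window = past_window(["s0", "s1"], 3)  # ["s0", "s0", "s1"]
```

The recurrent state-prediction network then consumes this fixed-length window at every time step, in both the modified IL dataset and the modified RL environment.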
+
+# B.4.6 UNI-MODAL INPUT
+
+A baseline we considered, but did not include in the main text, evaluates the necessity of the state encoding in multi-modal datasets. In other works, language instructions are sufficient to solve the task without a state encoding or other representation of the current state. This ablation verifies that the generated instructions are sufficiently high level that they alone do not provide the agent with all the information necessary to complete the task. We considered a baseline where the agent only sees language instructions without the state encoding; that is, the state encoding is used to generate language but is not provided as additional input to the policy network. This performs poorly and is not able to solve even the simplest 1-step task. We believe this is because a representation of the current state is critical to completing the task and is not captured by the high-level instructions.
+
+# C SUPPLEMENTARY RESULTS
+
+# C.1 IL RESULTS IN STANDARD SETTING
+
+As shown in Table 8, having access to natural language instructions is only marginally beneficial. While the two environments capture different tasks, we found empirically that the model proposed by Hu et al. (2019), which is most similar to our IL+Lang baseline, was able to solve their MiniRTS environment using a similarly sized dataset to ours, whereas IL+Lang is not sufficient to complete the most difficult tasks in our environment.
+
+Table 8: Accuracy of IL (with and without language) evaluated over 100 games with 3 different seeds.
+
+| Steps | IL no language | IL Gen. Language | IL Disc. Language |
+| --- | --- | --- | --- |
+| 1-step | 18.00% ± 3.55% | 19.33% ± 1.89% | 20.00% ± 1.35% |
+| 2-step | 0.00% ± 0.00% | 9.33% ± 2.05% | 8.33% ± 0.98% |
+| 3-step | 0.00% ± 0.00% | 4.33% ± 0.47% | 0.00% ± 0.00% |
+| 5-step | 0.00% ± 0.00% | 0.00% ± 0.00% | 0.00% ± 0.00% |
+
+# C.2 DEMONSTRATION ONLY AND FEW-SHOT
+
+As shown in Table 9, we find low deviation across our multiple runs. However, in some cases, such as IL+RL at $10\%$ and $100\%$, we observe higher variance because in some trials the model was able to solve the 3-step tasks and in others it solved none of them. In the low-variance cases, the model either consistently solved the tasks or consistently failed.
+
+Table 9: Variance results from Table 3 in the main paper, which presents accuracy.
+
+| Steps | IL 5% | IL 10% | IL 100% | IL w/Lang 5% | IL w/Lang 10% | IL w/Lang 100% | IL+RL 5% | IL+RL 10% | IL+RL 100% | SP 5% | SP 10% | SP 100% | SR 5% | SR 10% | SR 100% | Ours 5% | Ours 10% | Ours 100% |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1-step | 2% | 1% | 3% | 5% | 3% | 2% | 2% | 5% | 8% | 3% | 1% | 2% | 4% | 3% | 1% | 1% | 2% | 2% |
+| 2-step | 1% | 1% | 0% | 1% | 1% | 1% | 9% | 10% | 17% | 19% | 9% | 13% | 8% | 40% | 15% | 1% | 3% | 7% |
+| 3-step | 1% | 2% | 0% | 0% | 0% | 0% | 1% | 33% | 31% | 14% | 24% | 18% | 0% | 7% | 15% | 4% | 27% | 10% |
+
+# C.3 REWARD ONLY
+
+Finally, for completeness, we consider the scenario where the agent receives a reward but no demonstrations. The tasks we select for this setting are sampled from the unseen-tasks list: we choose three crafts of 2, 3, and 5 steps. We evaluate our method in this scenario against other baselines that train using a reward signal. In Table 10, we evaluate on tasks for which we do not have demonstrations and fine-tune a trained model with the reward signal for these tasks. This setting is less interesting from a generalization perspective, since rewards are a far more expensive resource than demonstrations and instructions. We do not include 1-step tasks, since they can be solved easily by RL alone (see the 1-step results in Figure 4). IL and IL w/ Language are not included because this setting reduces to the zero-shot setting.
+
+Table 10: Comparison of 2-5 step tasks where only reward is provided to the agent. We believe IL+RL is not able to adapt to these new tasks, given reward only, since it has overfit to the original training tasks. We find that our method outperforms baselines in this setting.
+
+| Steps | RL | IL+RL | Ours |
+| --- | --- | --- | --- |
+| 2-step | 92.00% ± 0.81% | 0% | 95.33% ± 0.94% |
+| 3-step | 71.67% ± 0.47% | 0% | 88.00% ± 1.41% |
+| 5-step | 0.00% ± 0.00% | 0% | 65.00% ± 5.67% |
+
+# C.4 INTERPRETABILITY
+
+Table 11: Step-by-step discriminated high-level instructions for seen crafts.
+
+| Goal: Iron Ore
+grab the pickaxe
+mine iron ore
+mine the iron ore vein
+unknown | Goal: Gold Ore
+unknown
+go to the key
+unknown
+go to gold ore vein and mine |
+| Goal: Brick Stairs
+grab key and open door
+mine bricks
+unknown | Goal: Cobblestone Stairs
+take the pickaxe
+go to cobblestone stash and mine
+use pickaxe to mine cobblestone stash
+go to cobblestone stash and mine
+got to stock and click mine to harvest cobblestones
+unknown
+craft cobblestone stairs |
+| Goal: Diamond Boots
+unknown
+go to pickaxe
+unknown | Goal: Iron Ore
+toggle switch to open door
+take the pickaxe
+toggle switch to open door
+unknown |
+
+We present more examples of generated language for both seen and unseen tasks (Table 12 and Table 13). The tables show a complete set of instructions for tasks that were successfully completed. We observe that when a task was not completed, the language generator would get stuck on a particular instruction. The language generated for tasks on which the model received supervised data is, as expected, more consistent in using the correct wording. However, the instructions generated for tasks that are new to the model still indicate the use of the correct items. We do observe some mistakes, such as generating "leather boots" instead of "leather stairs" or emitting an unknown token. Notably, in the Gold/Brick Pickaxe examples, the model generates fairly consistent language for a challenging, new 5-step task. In the Gold Pickaxe example, the model was not able to predict the use of the item gold, so it uses the word "stocks"; in the Brick Pickaxe example, the model correctly inferred the use of brick. These inconsistencies can perhaps be explained by the number of training examples in which brick and gold appeared, the former being much more frequent than the latter.
+
+Natural language allows us to understand why a model fails to complete a task simply by looking at the generated instructions, which can facilitate future model development and refinement. We identify two failure cases: (1) failure of language generation and (2) failure of policy. In (1), we
+
+Table 12: Step-by-step generated high-level instructions for seen crafts.
+
+| Goal: Gold Ore
+go to key and press grab.
+go to pickaxe and grab.
+go to gold ore vein and mine. | Goal: Brick Stairs
+go to pickaxe and press grab.
+go to the brick factor and mine brick.
+go to brick stairs and press craft. |
+| Goal: Diamond Pickaxe
+go to axe and press grab.
+go to key grab it go to door and open door.
+go to tools and click grab to take each one.
+go to tree and press mine.
+go to stocks click mine to harvest.
+go to tree and mine.
+go to wood plank and press craft.
+go to stick bench and craft stick. | Goal: Wooden Door
+go to the axe and grab it.
+go to the switch and open door.
+go to the axe and grab it.
+go to the tree.
+go to the tree and press mine.
+go to wood plank and press craft.
+go to wood plank bench and craft wooden door. |
+| Goal: Leather Helmet
+go to sword and click grab to take it.
+go to key and press grab.
+go to sword and click grab to take it.
+go to rabbit and press mine.
+go to leather and press craft.
+go to leather boots bench and craft leather. | Goal: Diamond Boots
+go to key and press grab.
+go to pickaxe and press grab.
+go to diamond ore vein and mine.
+go to diamond boots and press craft.
+go to diamond bench and craft diamond boots. |
+| Goal: Iron Ore
+go to key and press grab.
+go to pickaxe and press grab.
+go to iron ore vein and press mine. | Goal: Cobblestone Stairs
+go to key and press grab.
+go to pickaxe and press grab.
+go to cobblestone stash and press mine.
+go to cobblestone stairs and press craft. |
+| Goal: Wood Stairs
+go to axe and press grab.
+go to tree and mine.
+go to wood plank and press craft.
+go to wood stairs and press craft. | Goal: Leather Chestplate
+go to sword and press grab.
+go to rabbit and mine.
+go to leather and craft.
+go to leather chestplate and craft. |
+| Goal: Leather Leggings
+go to sword and click grab to take it.
+go to rabbit and press mine.
+go to leather and press craft.
+go to leather bench and craft leather | Goal: Iron Ingot
+go to key and press grab.
+go to pickaxe and press grab.
+go to iron ore vein and mine.
+go to iron ingot and craft. |
+
+observe occasional instructions generated for unseen tasks that contain incorrect crafting materials. For example, "Go to iron ingot and press craft" was generated as an instruction for Gold Ingot. In (2), the generated language is sensible for the task, but the policy fails to execute it correctly.
+
+Table 13: Step-by-step generated high-level instructions for unseen crafts.
+
+| Goal: Cobblestone Boots
+go to key and press grab.
+go to pickaxe and press grab.
+go to cobblestone stash and mine.
+go to workbench and press craft. | Goal: Diamond Leggings
+go to pickaxe and press grab.
+go to diamond ore vein and mine.
+go to diamond boots and press craft. |
+| Goal: Leather Stairs
+go to sword and press grab.
+go to rabbit and mine the rabbit.
+go to leather and press craft. | Goal: Stone Helmet
+go to pickaxe and press grab.
+go to the cobblestones stash and mine.
+go to the workbench and craft. |
+| Goal: Diamond Ingot
+go to pickaxe and press grab.
+go to diamond ore vein.
+go to the workbench and craft. | Goal: Brick Door
+go to pickaxe and press grab.
+go to the brick factory and mine the brick.
+go to the brick stairs and craft. |
+| Goal: Brick Pickaxe
+go to the pickaxe and grab it
+go to the axe and press grab.
+go to the tree.
+go to the tree and mine.
+go to the brick factory and mine.
+go to the wood plank and craft.
+go to the stick bench and craft stick.
+go to stick and craft. | Goal: Gold Pickaxe
+go to the pickaxe and press grab.
+go to the axe and grab it.
+go to the tree.
+go to stocks and click mine to harvest <unk>.
+go to the tree and mine the tree.
+go to wood plank and press craft.
+go to stick and press craft. |
+| Goal: Diamond Stairs
+go to key and press grab.
+go to pickaxe and press grab.
+go to the diamond ore vein and mine.
+go to the bench and craft. | Goal: Wood Chestplate
+go to key and grab it.
+go to axe and grab it.
+go to the tree.
+go to tree and mine.
+go to wood plank and craft. |
+
+Table 14: Example of instruction and inventory side-by-side for 3 unseen tasks. As in Figure 6 from the main paper, the inventory changes when a subtask, given by the instruction, is completed.
+
+| Goal: Leather Door |
+| Instruction | Inventory |
+| --- | --- |
+| go to the sword and grab it | {'Sword': 1} |
+| go to the rabbit and mine | {'Sword': 1, 'Rabbit Hide': 1} |
+| go to the leather and press craft | {'Sword': 1, 'Rabbit Hide': 0, 'Leather': 1} |
+| go to the leather boots bench and craft leather | {'Sword': 1, 'Rabbit Hide': 0, 'Leather': 0, 'Leather Door': 1} |
+
+| Goal: Stone Boots |
+| Instruction | Inventory |
+| --- | --- |
+| go to key and press grab | {'key': 1} |
+| go to pickaxe and press grab | {'key': 1, 'Pickaxe': 1} |
+| go to the cobblestone stash and mine the <unk> | {'key': 1, 'Pickaxe': 1, 'Cobblestone': 1} |
+| go to the bench and craft | {'key': 1, 'Pickaxe': 1, 'Cobblestone': 0, 'Stone Boots': 1} |
+
+| Goal: Diamond Stairs |
+| Instruction | Inventory |
+| --- | --- |
+| go to key and press grab | {'key': 1} |
+| go to pickaxe and press grab | {'key': 1, 'Pickaxe': 1} |
+| go to the diamond ore vein | {'key': 1, 'Pickaxe': 1} |
+| go to diamond ore vein and mine | {'key': 1, 'Pickaxe': 1, 'Diamond': 1} |
+| go to the bench and craft | {'key': 1, 'Pickaxe': 1, 'Diamond': 0, 'Diamond Stairs': 1} |
\ No newline at end of file
diff --git a/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/images.zip b/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..786f387b13dde7e15375b9df29ddd9203290f3a4
--- /dev/null
+++ b/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e346401a13b901396384fdf729d03e89f02a8109c06806f3fde8783f0a5f7264
+size 1583792
diff --git a/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/layout.json b/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..09c6a9676af22a919e21cb415f32ff76f04efc63
--- /dev/null
+++ b/askyourhumansusinghumaninstructionstoimprovegeneralizationinreinforcementlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:760d2158e69ef233ffdc93d222134a58b2e14c9d7d2c229acab69b89fcc4c00f
+size 494733
diff --git a/attentionalconstellationnetsforfewshotlearning/8d357406-6e29-47af-b7d2-458a8aaa9f4c_content_list.json b/attentionalconstellationnetsforfewshotlearning/8d357406-6e29-47af-b7d2-458a8aaa9f4c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4f541b9913dd3394338c992fbe69780b4c739ca7
--- /dev/null
+++ b/attentionalconstellationnetsforfewshotlearning/8d357406-6e29-47af-b7d2-458a8aaa9f4c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e72f0dafe3d223eca9c522e8875088c6ea35662a823d42db4d4254d6a54d40d
+size 110791
diff --git a/attentionalconstellationnetsforfewshotlearning/8d357406-6e29-47af-b7d2-458a8aaa9f4c_model.json b/attentionalconstellationnetsforfewshotlearning/8d357406-6e29-47af-b7d2-458a8aaa9f4c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6e6dff1c89d950ad1f31f17c0ea452486e3502b0
--- /dev/null
+++ b/attentionalconstellationnetsforfewshotlearning/8d357406-6e29-47af-b7d2-458a8aaa9f4c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9686f7134be3d15e24baa19d03be1d4cc25476c3167ad366c289d5eb5dc48036
+size 132853
diff --git a/attentionalconstellationnetsforfewshotlearning/8d357406-6e29-47af-b7d2-458a8aaa9f4c_origin.pdf b/attentionalconstellationnetsforfewshotlearning/8d357406-6e29-47af-b7d2-458a8aaa9f4c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a830aa96fe21c279385287d78c33eeffe0d60297
--- /dev/null
+++ b/attentionalconstellationnetsforfewshotlearning/8d357406-6e29-47af-b7d2-458a8aaa9f4c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0decbce91088f71525fc77ad7c0d9b5e5c8ae8af5236dfda244ee286674cb01c
+size 3961693
diff --git a/attentionalconstellationnetsforfewshotlearning/full.md b/attentionalconstellationnetsforfewshotlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..cc33a909c78f0cdea7023be1b3a4e1d4235a0953
--- /dev/null
+++ b/attentionalconstellationnetsforfewshotlearning/full.md
@@ -0,0 +1,419 @@
+# ATTENTIONAL CONSTELLATION NETS FOR FEW-SHOT LEARNING
+
+Weijian Xu$^{*1}$, Yifan Xu$^{*1}$, Huajin Wang$^{*1}$ & Zhuowen Tu$^{1,2}$
+
+University of California San Diego$^{1}$, Amazon Web Services$^{2}$
+
+{wex041,yix081,huw011,ztu}@ucsd.edu
+
+# ABSTRACT
+
+The success of deep convolutional neural networks builds on top of the learning of effective convolution operations, capturing a hierarchy of structured features via filtering, activation, and pooling. However, the explicit structured features, e.g. object parts, are not expressive in the existing CNN frameworks. In this paper, we tackle the few-shot learning problem and make an effort to enhance structured features by expanding CNNs with a constellation model, which performs cell feature clustering and encoding with a dense part representation; the relationships among the cell features are further modeled by an attention mechanism. With the additional constellation branch to increase the awareness of object parts, our method is able to attain the advantages of the CNNs while making the overall internal representations more robust in the few-shot learning setting. Our approach attains a significant improvement over the existing methods in few-shot learning on the CIFAR-FS, FC100, and mini-ImageNet benchmarks.
+
+# 1 INTRODUCTION
+
+Tremendous progress has been made in both the development and the applications of the deep convolutional neural networks (CNNs) (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Szegedy et al., 2015; He et al., 2016; Xie et al., 2017). Visualization of the internal CNN structure trained on e.g. ImageNet (Deng et al., 2009) has revealed the increasing level of semantic relevance for the learned convolution kernels/filters to the semantics of the object classes, displaying bar/edge like patterns in the early layers, object parts in the middle layers, and face/object like patterns in the higher layers (Zeiler & Fergus, 2014). In general, we consider the learned convolution kernels being somewhat implicit about the underlying objects since they represent projections/mappings for the input but without the explicit knowledge about the parts in terms of their numbers, distributions, and spatial configurations.
+
+On the other hand, there has been a rich history of explicit object representations, starting from deformable templates (Yuille et al., 1992), pictorial structure (Felzenszwalb & Huttenlocher, 2005), constellation models (Weber et al., 2000; Fergus et al., 2003; Sudderth et al., 2005; Fei-Fei et al., 2006), and grammar-based models (Zhu & Mumford, 2007). These part-based models (Weber et al., 2000; Felzenszwalb & Huttenlocher, 2005; Fergus et al., 2003; Sudderth et al., 2005; Zhu & Mumford, 2007) share three common properties in the algorithm design: (1) unsupervised learning, (2) explicit clustering to obtain the parts, and (3) modeling to characterize the spatial configuration of the parts. Compared to the CNN architectures, these methods are expressive with explicit part-based representations. They point to a promising direction for object recognition, albeit with a lack of strong practical performance on modern datasets. Another line of object recognition systems with the part concept, but trained discriminatively, includes the discriminatively trained part-based model (DPM) (Felzenszwalb et al., 2009) and the spatial pyramid matching method (SPM) (Lazebnik et al., 2006). In the context of deep learning, efforts exist to bring the explicit part representation into deep hierarchical structures (Salakhutdinov et al., 2012).
+
+The implicit and explicit feature representations could share mutual benefits, especially in few-shot learning where training data is scarce: CNNs may face difficulty in learning a generalized representation due to lack of sufficient training data, whereas clustering and dictionary learning
+
+provide a direct means for data abstraction. In general, end-to-end learning of both the implicit and explicit part-based representations is a viable and valuable means in machine learning. We view convolutional features as an implicit part-based representation since they are learned through back-propagation via filtering processes. On the other hand, an explicit representation can be attained by introducing feature clustering that captures the data abstraction/distribution under a mixture model.
+
+In this paper, we develop an end-to-end framework that combines the implicit and explicit part-based representations for the few-shot classification task by seamlessly integrating constellation models with convolution operations. In addition to keeping a standard CNN architecture, we also employ a cell feature clustering module to encode the potential object parts. This procedure is similar to the clustering/codebook learning for appearance in the constellation model (Weber et al., 2000). The cell feature clustering process generates a dense distance map. We further model the relations among the cells using a self-attention mechanism, resembling the spatial configuration design in the constellation model (Weber et al., 2000). Thus, we name our method constellation networks (ConstellationNet). We demonstrate the effectiveness of our approach on standard few-shot benchmarks, including FC100 (Oreshkin et al., 2018), CIFAR-FS (Bertinetto et al., 2018) and mini-ImageNet (Vinyals et al., 2016), by showing a significant improvement over the existing methods. An ablation study also demonstrates that the effectiveness of ConstellationNet is not achieved by simply increasing the model complexity, e.g. by using more convolution channels or deeper and wider convolution layers (WRN-28-10 (Zagoruyko & Komodakis, 2016)) (see ablation study in Table 3 and Figure 2 (e)).
+
+# 2 RELATED WORK
+
+Few-Shot Learning. Recently, few-shot learning has attracted much attention in the deep learning community (Snell et al., 2017; Lee et al., 2019). Current few-shot learning is typically formulated as a meta-learning problem (Finn et al., 2017), in which an effective feature embedding is learned for generalization across novel tasks. We broadly divide the existing few-shot learning approaches into three categories: (1) Gradient-based methods optimize the feature embedding with gradient descent during the meta-test stage (Finn et al., 2017; Bertinetto et al., 2018; Lee et al., 2019). (2) Metric-based methods learn a fixed optimal embedding with a distance-based prediction rule (Vinyals et al., 2016; Snell et al., 2017). (3) Model-based methods obtain a conditional feature embedding via a weight predictor (Mishra et al., 2017; Munkhdalai et al., 2017). Here we adopt ProtoNet (Snell et al., 2017), a popular metric-based framework, in our approach and boost the generalization ability of the feature embeddings with explicit structured representations from the constellation model. Recently, Tokmakov et al. (2019) proposed a compositional regularization based on an image's attribute annotations, which is different from our unsupervised part-discovery strategy.
+
+Part-Based Constellation/Discriminative Models. The constellation model family (Weber et al., 2000; Felzenszwalb & Huttenlocher, 2005; Fergus et al., 2003; Sudderth et al., 2005; Fei-Fei et al., 2006; Zhu & Mumford, 2007) is mostly generative/expressive and shares two commonalities in the representation: (1) clustering/codebook learning for the appearance and (2) modeling of the spatial configurations. The key difference among these approaches lies in how the spatial configuration is modeled: Gaussian distributions (Weber et al., 2000); pictorial structure (Felzenszwalb & Huttenlocher, 2005); joint shape model (Fergus et al., 2003); hierarchical graphical model (Sudderth et al., 2005); grammar-based (Zhu & Mumford, 2007). These constellation models represent a promising direction for object recognition but are not practically competitive with deep-learning-based approaches. There are also discriminative models: The discriminatively trained part-based model (DPM) (Felzenszwalb et al., 2009) is a typical method in this vein, where object parts (as HOG features (Dalal & Triggs, 2005)) and their configurations (a star model) are learned jointly in a discriminative way. The spatial pyramid matching method (SPM) (Lazebnik et al., 2006) has no explicit parts but instead builds on different levels of grids with a codebook learned on top of SIFT features (Lowe, 2004). DPM and SPM are of practical significance for object detection and recognition. In our approach, we implement the constellation model with cell feature clustering and attention-based cell relation modeling to capture the appearance learning and spatial configuration, respectively.
+
+Part-based models are extensively studied in fine-grained image classification and object detection to provide spatial guidance for filtering uninformative object proposals (Simon & Rodner, 2015; Peng et al., 2017; Zhu et al., 2017; Ge et al., 2019; Qi et al., 2019). Related to our work, Neural Activation Constellations (NAC) (Simon & Rodner, 2015) introduces the constellation model to perform unsupervised part-model discovery with convolutional networks. Our work is different from NAC in three aspects: (1) The algorithmic mechanisms behind Simon & Rodner (2015) and ours are
+
+
+Figure 1: Illustration of our ConstellationNet pipeline where the bottom part is the network architecture based on Conv-4 backbone, and the top part shows the constellation model. Our proposed ConstellationNet consists of "Constell." modules that perform explicit cell feature clustering with self-attention for joint relation modeling.
+
+different. Simon & Rodner (2015) implements a traditional Gaussian-based constellation module to model the spatial configuration and part selection on top of a fixed pre-trained CNN. In our ConstellationNet, by contrast, the part representation and spatial configuration are modeled by cell feature clustering and a self-attention based cell relation module, which is general-purpose, modularized and recursive. (2) In Simon & Rodner (2015), the constellation module is optimized with an EM-like algorithm, separate from the CNN optimization. Our constellation modules are seamlessly integrated into current CNNs and jointly optimized with them. (3) Our ConstellationNet uses the dense cell features from the CNN feature maps, which considers all positions in the images as potential parts and models their relations. In contrast, Simon & Rodner (2015) extracts sparse part representations (i.e. at most one part proposal per channel, with even fewer parts selected later), which may not fully utilize the rich information in the CNN feature maps.
+
+# 3 FEW-SHOT LEARNING
+
+In a standard classification problem, we aim to learn a model trained on a dataset $\mathcal{D}^{\mathrm{base}}$ that generalizes its classification ability to an unseen test set $\mathcal{D}^{\mathrm{novel}}$ drawn from the same categories. In the few-shot classification problem, $\mathcal{D}^{\mathrm{base}}$ and $\mathcal{D}^{\mathrm{novel}}$ are instead formed from different categories to emphasize the model's generalization ability on novel categories: we denote the training categories as $\mathcal{C}_{\mathrm{base}}$ and the test categories as $\mathcal{C}_{\mathrm{novel}}$, with $\mathcal{C}_{\mathrm{base}} \cap \mathcal{C}_{\mathrm{novel}} = \emptyset$ to ensure fairness.
+
+In the training stage (a.k.a. meta-train stage), metric-based few-shot learning approaches (Snell et al., 2017; Vinyals et al., 2016; Oreshkin et al., 2018) usually learn a feature extractor $\phi(\mathbf{x})$ on the dataset $\mathcal{D}^{\mathrm{base}}$ to obtain generic feature embedding by optimizing the loss $\mathcal{L}(\phi)$ :
+
+$$
+\mathcal{L}(\phi) = \mathbb{E}_{\{(\mathbf{x}, y)\} \sim \mathcal{D}^{\mathrm{base}}}\, \ell\left(\left\{\left(\phi(\mathbf{x}), y\right)\right\}\right) \tag{1}
+$$
+
+where $\{(\mathbf{x},y)\}$ is a sampled mini-batch of data points and $\ell (\cdot)$ is usually an episodic few-shot loss (Vinyals et al., 2016) or a standard cross-entropy loss (Chen et al., 2020).
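As a concrete instance of Eq. 1 with the standard cross-entropy choice of $\ell(\cdot)$, the per-batch loss can be sketched as follows (a minimal sketch; the toy logits stand in for class scores computed from $\phi(\mathbf{x})$):

```python
import numpy as np

def cross_entropy_loss(logits, labels):
    """Mean cross-entropy over a sampled mini-batch {(phi(x), y)}, i.e. the
    inner loss of Eq. 1 when the standard cross-entropy is used."""
    z = logits - logits.max(axis=1, keepdims=True)       # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

logits = np.array([[2.0, 0.0], [0.0, 2.0]])  # class scores for 2 examples
labels = np.array([0, 1])
print(round(cross_entropy_loss(logits, labels), 4))  # → 0.1269
```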
+
+In the inference stage (a.k.a. meta-test stage), a typical few-shot benchmark evaluates the model on $K$ -way, $N$ -shot classification tasks $\mathcal{T}$ drawn from $\mathcal{D}^{\mathrm{novel}}$ , where each task has a support set and a query set, i.e. $\mathcal{T} = (\mathcal{T}^{\mathrm{supp}},\mathcal{T}^{\mathrm{query}})$ . The support set $\mathcal{T}^{\mathrm{supp}}$ contains $K$ classes and each class has $N$ images (e.g. $K = 5$ , $N\in \{1,5\}$ ). Following Snell et al. (2017), the prediction $\hat{y}^\prime$ of a query image $\mathbf{x}'\in \mathcal{T}^{\mathrm{query}}$ is given by the label of nearest prototype $\mathbf{c}_k$ from $\mathcal{T}^{\mathrm{supp}}$ under a cosine similarity $d(\cdot ,\cdot)$ :
+
+$$
+\hat{y}^{\prime} = \arg\max_{k} d\left(\phi\left(\mathbf{x}^{\prime}\right), \mathbf{c}_{k}\right), \quad \mathbf{c}_{k} = \frac{1}{N} \sum_{(\mathbf{x}, y) \in \mathcal{T}^{\mathrm{supp}},\, y = k} \phi(\mathbf{x}). \tag{2}
+$$
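Numerically, the prediction rule in Eq. 2 amounts to computing class-mean prototypes over the support set and taking the most cosine-similar one. A minimal sketch with toy, precomputed embeddings $\phi(\mathbf{x})$:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def prototypes(feats, labels, K):
    # c_k = mean of the embedded support examples with label k (Eq. 2)
    return [np.mean([f for f, y in zip(feats, labels) if y == k], axis=0)
            for k in range(K)]

def predict(query_feat, protos):
    # argmax_k d(phi(x'), c_k) under cosine similarity (Eq. 2)
    return int(np.argmax([cosine(query_feat, c) for c in protos]))

# Toy 2-way, 2-shot support set with precomputed embeddings phi(x)
feats = [np.array([1.0, 0.0]), np.array([0.9, 0.1]),
         np.array([0.0, 1.0]), np.array([0.1, 0.9])]
labels = [0, 0, 1, 1]
protos = prototypes(feats, labels, K=2)
print(predict(np.array([0.95, 0.05]), protos))  # → 0
```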
+
+An extended description of the few-shot learning framework can be found from Appendix A.1. The generalization ability of the feature extractor $\phi (\mathbf{x})$ is improved in terms of training scheme (e.g.
+
+episodic learning (Vinyals et al., 2016)), network design (e.g. task condition (Oreshkin et al., 2018)) or objective function (e.g. learnable distance (Sung et al., 2018)). In our method, we propose a novel network design by inserting constellation models into CNNs and strengthen the intermediate features.
+
+# 4 CONSTELLATION MODEL
+
+The concept of a constellation was introduced to the few-shot learning scenario in early years (Fei-Fei et al., 2006), in which the appearance and the shape are learned independently in a mixture model. In our work, we revisit the constellation model in an end-to-end learning framework: First, we define a cell feature as the individual local feature at a position in the feature map (see Figure 1). We then employ cell feature clustering to model the underlying distribution of input cell features, implying a part-discovery procedure. We further obtain the distance map of the cell features from clustering and then perform cell relation modeling to build spatial relationships.
+
+# 4.1 CELL FEATURE CLUSTERING
+
+In convolutional neural networks (CNNs), the convolutional filters are learned to detect the discriminative patterns from low-level to high-level through back-propagation (Zeiler & Fergus, 2014). In fact, the backward signal in the back-propagation is not necessarily needed to obtain a pattern detector. With the feature map in the forward step of the CNN, we are able to cluster the individual features at each location of the feature map (a.k.a. cell features) into multiple centers and employ the cluster centers as filters (Coates & Ng, 2012; Krähenbuhl et al., 2015). Assume we obtain a convolutional feature map $\mathbf{U}$ with batch size $B$ , spatial size $H\times W$ and channels $C$ . We disassemble the feature map $\mathbf{U}\in \mathbb{R}^{B\times H\times W\times C}$ into a cell features set $\mathcal{U} = \{\mathbf{u}_1,\mathbf{u}_2,\dots,\mathbf{u}_n\}$ where $n = BHW$ and $\mathbf{u}_i\in \mathbb{R}^C$ is a cell feature. Naively, we can conduct a $k$ -means algorithm on input cell features $\mathcal{U}$ to solve the clustering objective:
+
+$$
+\min \sum_{i} \sum_{k} m_{ik} \left\| \mathbf{u}_{i} - \mathbf{v}_{k} \right\|_{2}^{2} \quad \text{s.t.} \quad m_{ik} \in \{0, 1\}, \quad \sum_{k} m_{ik} = 1 \tag{3}
+$$
+
+where $\mathcal{V} = \{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_K\}$ is a set of cluster centers and $m_{ik}$ indicates if the input cell feature $\mathbf{u}_i$ is assigned to cluster center $\mathbf{v}_k$ . The clustering-based filters $\mathcal{V}$ can model the underlying cell feature distributions and capture the most frequent features, which can be explicitly interpreted as meaningful part patterns/part types. The hard assignment map $\mathbf{m}_i = (m_{i1}, m_{i2}, \dots, m_{iK})$ of input cell feature $\mathbf{u}_i$ onto the cluster centers can be used as a part-based representation, providing alternative information to the next layer in the CNN.
+
+However, two issues remain unsolved in this naive design: First, CNNs are typically optimized in a stochastic gradient descent (SGD) manner, so in each forward step only a mini-batch of images is processed to provide cell features, which implies that the cluster centers cannot capture the global feature distribution across the whole dataset. Second, the hard assignment map carries limited information due to its discrete representation. Therefore, inspired by Sculley (2010), we design a mini-batch soft $k$-means algorithm to cluster the cell features approximately:
+
+- Initialization. Randomly initialize global cluster centers $\mathcal{V} = \{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_K\}$ and a counter $\mathbf{s} = (s_1, s_2, \dots, s_K) = \mathbf{0}$ .
+- Cluster Assignment. In the forward step, given input cell features $\mathcal{U} = \{\mathbf{u}_1,\mathbf{u}_2,\dots,\mathbf{u}_n\}$, we compute the distance vector $\mathbf{d}_i = (d_{i1},d_{i2},\ldots,d_{iK})$ between input cell feature $\mathbf{u}_i$ and all cluster centers $\mathcal{V}$. We then compute the soft assignment $m_{ik}\in \mathbb{R}$ and generate the current mini-batch centers $\mathbf{v}_k^{\prime}$:
+
+$$
+d_{ik} = \left\| \mathbf{u}_{i} - \mathbf{v}_{k} \right\|_{2}^{2}, \quad m_{ik} = \frac{e^{-\beta d_{ik}}}{\sum_{j} e^{-\beta d_{ij}}}, \quad \mathbf{v}_{k}^{\prime} = \frac{\sum_{i} m_{ik} \mathbf{u}_{i}}{\sum_{i} m_{ik}} \tag{4}
+$$
+
+where $\beta > 0$ is an inverse temperature.
+
+- Centroid Movement. We formulate a count update $\Delta \mathbf{s} = \sum_{i}\mathbf{m}_{i}$ by summing all assignment maps $\mathbf{m}_i = (m_{i1},m_{i2},\dots m_{iK})$ . The current mini-batch centers $\mathbf{v}_k^{\prime}$ are then updated to the global centers $\mathbf{v}_k$ with a momentum coefficient $\eta$ :
+
+$$
+\mathbf{v}_{k} \leftarrow (1 - \eta)\, \mathbf{v}_{k} + \eta\, \mathbf{v}_{k}^{\prime}, \quad \eta = \frac{\lambda}{s_{k} + \Delta s_{k}} \tag{5}
+$$
+
+- Counter Update. The counter $\mathbf{s}$ is updated, and the distance vectors $\{\mathbf{d}_i\}$ are reshaped and returned:
+
+$$
+\mathbf{s} \leftarrow \mathbf{s} + \Delta \mathbf{s} \tag{6}
+$$
+
+By gradually updating the global cluster centers, the above algorithm addresses the issue of limited data in a mini-batch. In addition, we reshape the distance vectors $\{\mathbf{d}_i\}$ of all input cell features into a distance map $\mathbf{D} \in \mathbb{R}^{B \times H \times W \times K}$. Each distance vector $\mathbf{d}_i$ can be seen as a learned cell code in codebook (dictionary) learning, which encodes a soft assignment of the visual word (i.e. cell feature) onto the codewords (i.e. cluster centers) and implies a part representation. The distance map $\mathbf{D}$ can then be viewed as a cell code map that represents a spatial distribution of identified parts, which is passed to the following layers. Empirically, we observe that training is more stable when $\mathbf{u}_i$ and $\mathbf{v}_k$ are $L_2$ normalized, in which case the Euclidean distance $d_{ik}$ is equivalent to a cosine similarity up to an affine transformation. Details of the cell feature clustering can be found in Appendix A.9.
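
The three steps above (cluster assignment, centroid movement, counter update) can be sketched in NumPy as follows. This is a minimal sketch, not the paper's implementation: the function and variable names, the small epsilon added for numerical safety, and the toy feature shapes are our own assumptions.

```python
import numpy as np

def soft_kmeans_step(U, V, s, beta=1.0, lam=1.0):
    """One mini-batch soft k-means step (sketch of Eqs. 4-6).

    U: (n, C) mini-batch of L2-normalized cell features.
    V: (K, C) global cluster centers.
    s: (K,) per-cluster soft counts.
    Returns (D, V_new, s_new), where D is the (n, K) block of the distance map.
    """
    # Eq. 4: squared Euclidean distances and soft assignments.
    D = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)        # (n, K)
    logits = -beta * D
    logits -= logits.max(axis=1, keepdims=True)               # numerical stability
    M = np.exp(logits)
    M /= M.sum(axis=1, keepdims=True)                         # soft assignments m_ik
    V_batch = (M.T @ U) / (M.sum(axis=0)[:, None] + 1e-8)     # mini-batch centers v'_k

    # Eq. 5: momentum update with per-cluster rate eta = lam / (s_k + delta_s_k).
    delta_s = M.sum(axis=0)                                   # (K,)
    eta = lam / (s + delta_s + 1e-8)
    V_new = (1 - eta[:, None]) * V + eta[:, None] * V_batch

    # Eq. 6: counter update.
    s_new = s + delta_s
    return D, V_new, s_new

# Toy usage on L2-normalized random features (shapes are illustrative).
rng = np.random.default_rng(0)
U = rng.normal(size=(32, 8)); U /= np.linalg.norm(U, axis=1, keepdims=True)
V = rng.normal(size=(4, 8));  V /= np.linalg.norm(V, axis=1, keepdims=True)
D, V, s = soft_kmeans_step(U, V, np.zeros(4))
```

In a full model, this step would run once per forward pass, with the $n = B \times H \times W$ cell features of the batch and the distances `D` reshaped into the $B \times H \times W \times K$ distance map described above.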
+
+# 4.2 CELL RELATION AND SPATIAL CONFIGURATION MODELING
+
+Before the deep learning era, traditional constellation models (Fei-Fei et al., 2006) decomposed visual information into appearance and shape representations: the appearance of different parts in the image is treated independently, while the shape of the parts is assumed to have spatial connections. Our constellation model likewise establishes spatial relationships among the individual part-based representations at different locations of the distance map. Specifically, instead of the probabilistic graphical models used in prior work (Fei-Fei et al., 2006), we apply the self-attention mechanism (Vaswani et al., 2017) to build the spatial relationships and enhance the representation.
+
+In cell relation modeling, following Carion et al. (2020), we add a positional encoding $\mathbf{P} \in \mathbb{R}^{B \times H \times W \times K}$ for spatial locations to the distance map $\mathbf{D}$ and obtain the input feature map $\mathbf{F}_{\mathrm{I}}$ for the query and key layers. For the value layer, we directly flatten the distance map $\mathbf{D}$ into another input feature map $\mathbf{F}_{\mathrm{I}}'$:
+
+$$
+\mathbf{F}_{\mathrm{I}} = \mathrm{SpatialFlatten}(\mathbf{D} + \mathbf{P}) \in \mathbb{R}^{B \times HW \times K}, \quad \mathbf{F}_{\mathrm{I}}^{\prime} = \mathrm{SpatialFlatten}(\mathbf{D}) \in \mathbb{R}^{B \times HW \times K} \tag{7}
+$$
+
+The input feature maps $\mathbf{F}_{\mathrm{I}}, \mathbf{F}_{\mathrm{I}}'$ are transformed into the query, key and value $\{\mathbf{F}^q, \mathbf{F}^k, \mathbf{F}^v\} \subset \mathbb{R}^{B \times HW \times K}$ by three linear layers $\{\mathbf{W}^q, \mathbf{W}^k, \mathbf{W}^v\} \subset \mathbb{R}^{K \times K}$, from which the output feature $\mathbf{F}_{\mathrm{A}}$ is computed:
+
+$$
+\left[ \mathbf{F}^q, \mathbf{F}^k, \mathbf{F}^v \right] = \left[ \mathbf{F}_{\mathrm{I}} \mathbf{W}^q, \mathbf{F}_{\mathrm{I}} \mathbf{W}^k, \mathbf{F}_{\mathrm{I}}^{\prime} \mathbf{W}^v \right] \tag{8}
+$$
+
+$$
+\mathbf{F}_{\mathrm{A}} = \operatorname{Att}\left(\mathbf{F}^q, \mathbf{F}^k, \mathbf{F}^v\right) = \operatorname{softmax}\left(\frac{\mathbf{F}^q \left(\mathbf{F}^k\right)^{\top}}{\sqrt{K}}\right) \mathbf{F}^v \tag{9}
+$$
+
+The softmax of the dot product between the query and key matrices, $\mathbf{F}^q (\mathbf{F}^k)^\top \in \mathbb{R}^{B\times HW\times HW}$, calculates similarity scores in the embedding space among features across the spatial dimension. This encodes the spatial relationships of the input features and leads to an enhanced output feature representation $\mathbf{F}_{\mathrm{A}}$. The $\sqrt{K}$ in the denominator stabilizes the gradient. In practice, we adopt multi-head attention to model the feature relations in embedding subspaces:
+
+$$
+\mathbf{F}_{\mathrm{MHA}} = \operatorname{MultiHeadAtt}\left(\mathbf{F}^q, \mathbf{F}^k, \mathbf{F}^v\right) = \left[ \mathbf{F}_1, \dots, \mathbf{F}_J \right] \mathbf{W}, \quad \mathbf{F}_j = \operatorname{Att}\left(\mathbf{F}_j^q, \mathbf{F}_j^k, \mathbf{F}_j^v\right) \tag{10}
+$$
+
+In $J$-head attention, the aforementioned similarity scores are calculated in a $K' = \frac{K}{J}$ dimensional embedding subspace using the query, key and value of the $j$-th head, i.e. $\{\mathbf{F}_j^q, \mathbf{F}_j^k, \mathbf{F}_j^v\} \subset \mathbb{R}^{B \times HW \times K'}$. The output features $\mathbf{F}_j$ of each head are computed following Eq. 9. All the output features $\{\mathbf{F}_1, \dots, \mathbf{F}_J\}$ are concatenated back into a $K$-dimensional embedding and further processed with a linear layer $\mathbf{W} \in \mathbb{R}^{K \times K}$ to generate the multi-head output features $\mathbf{F}_{\mathrm{MHA}}$. Such a multi-head attention setting provides more diverse feature relations without introducing extra parameters.
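
Eqs. 7-10 can be sketched in NumPy as follows. This is an illustrative sketch under our own assumptions: all names and toy shapes are ours, and inside each head we scale by $\sqrt{K'}$ (the per-head subspace dimension, as is standard for multi-head attention) rather than the $\sqrt{K}$ of the single-head Eq. 9.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cell_relation(D, P, Wq, Wk, Wv, Wo, J):
    """Multi-head self-attention over the distance map (sketch of Eqs. 7-10).

    D: (B, H, W, K) distance map; P: (B, H, W, K) positional encoding.
    Wq, Wk, Wv, Wo: (K, K) projection matrices; J: number of heads (J | K).
    """
    B, H, W, K = D.shape
    Kp = K // J
    F_I  = (D + P).reshape(B, H * W, K)          # Eq. 7: query/key input
    F_Iv = D.reshape(B, H * W, K)                # Eq. 7: value input (no positions)
    Q, Kmat, V = F_I @ Wq, F_I @ Wk, F_Iv @ Wv   # Eq. 8

    def split_heads(X):                          # (B, HW, K) -> (B, J, HW, K')
        return X.reshape(B, H * W, J, Kp).transpose(0, 2, 1, 3)

    Qh, Kh, Vh = split_heads(Q), split_heads(Kmat), split_heads(V)
    A = softmax(Qh @ Kh.transpose(0, 1, 3, 2) / np.sqrt(Kp))   # (B, J, HW, HW)
    Fh = A @ Vh                                                # Eq. 9 per head
    F_cat = Fh.transpose(0, 2, 1, 3).reshape(B, H * W, K)      # concat heads
    return F_cat @ Wo                                          # Eq. 10

# Toy usage with random weights (shapes are illustrative).
rng = np.random.default_rng(1)
B, H, W, K, J = 2, 3, 3, 8, 2
D = rng.normal(size=(B, H, W, K)) ** 2           # distances are nonnegative
P = rng.normal(size=(B, H, W, K))
Wq, Wk, Wv, Wo = (0.1 * rng.normal(size=(K, K)) for _ in range(4))
F_MHA = cell_relation(D, P, Wq, Wk, Wv, Wo, J)
```

Note how the value path deliberately omits the positional encoding, matching Eq. 7: positions influence where attention looks, not what is aggregated.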
+
+# 4.3 INTEGRATE CONSTELLATION MODEL WITH CNNS
+
+Our constellation model captures explicit structured features and encodes spatial relations among the cell features. The output features yield informative visual cues that strengthen the convolutional features. Thus, as shown in Figure 1, we place the constellation model after the convolution operation to extract its unique explicit features and concatenate them with the original convolutional feature map. A subsequent $1 \times 1$ convolutional layer on the concatenated features restores the channel count of the convolutional feature map. In Table 3, we provide evidence that merging features from the constellation model into the CNN backbone significantly improves the representation ability; in contrast, merely increasing the channels in the CNN to double the parameters (second row in Table 3) improves performance only marginally. Optionally, we find it useful to adopt an auxiliary loss when training the constellation model in deeper networks (e.g. ResNet-12): on top of each constellation model, we apply a standard classification loss to obtain additional regularization.
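
The concatenate-and-restore step might look like the following sketch, using channel-last tensors where a $1 \times 1$ convolution reduces to a per-cell linear map. The function name, shapes, and weight layout are our own illustrative assumptions, not the paper's code.

```python
import numpy as np

def merge_constellation(conv_feat, const_feat, W1x1):
    """Concatenate the constellation output with the convolutional feature map
    and restore the channel count with a 1x1 convolution (sketch, channel-last).

    conv_feat:  (B, H, W, C) convolutional features
    const_feat: (B, H, W, K) constellation output (e.g. attended cell codes)
    W1x1:       (C + K, C)   1x1 conv weights, i.e. a linear map applied per cell
    """
    merged = np.concatenate([conv_feat, const_feat], axis=-1)  # (B, H, W, C+K)
    return merged @ W1x1                                       # (B, H, W, C)

# Toy usage: 16 conv channels merged with 8 constellation channels.
rng = np.random.default_rng(2)
conv = rng.normal(size=(2, 5, 5, 16))
const = rng.normal(size=(2, 5, 5, 8))
W1x1 = 0.1 * rng.normal(size=(16 + 8, 16))
out = merge_constellation(conv, const, W1x1)
```

Restoring the original channel count lets the block drop into an existing backbone without changing the shapes any downstream layer expects.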
+
+# 4.4 WHY CLUSTERING AND SELF-ATTENTION (CLUSTERING MAP + POSITIONAL ENCODING)?
+
+As described in Sections 1 and 2, classical constellation models (Fergus et al., 2003; Felzenszwalb & Huttenlocher, 2005) extract parts together with their spatial relationships; they are expressive but do not produce competitive results on modern image benchmarks. CNN models (Krizhevsky et al., 2012; He et al., 2016) attain remarkable results on large-scale image benchmarks (Deng et al., 2009) but are limited when training data is scarce. We take inspiration from the traditional constellation models, with a realization that overcomes their previous modeling limitations.
+
+The main contribution of our work is a constellation module/block that performs cell-wise clustering, followed by self-attention on the clustering distance map plus positional encoding. This separates our work from previous attempts, e.g. the non-local block (Wang et al., 2018), in which long-range non-linear averaging is performed on the convolutional features (with neither clustering nor positional encoding for the spatial configuration). The main properties of our constellation block are: (1) a cell-based dense representation, as opposed to the sparse part representation in Weber et al. (2000), so that the cells can be recursively modeled in the self-attention unit in a modularized and general-purpose way; (2) clustering to generate the cell code (codebook learning), which attains abstraction and is not dependent on the CNN feature dimensions; (3) positional encoding (as in Carion et al. (2020)) for cells to encode their spatial locations; (4) a tokenized representation of the cells as expressive parts (code/clustering distance map plus positional encoding); and (5) self-attention to jointly model the cell code and positional encoding, capturing the relationships between the parts together with their spatial configurations.
+
+# 5 EXPERIMENT
+
+# 5.1 DATASETS
+
+We adopt three standard benchmark datasets that are widely used in few-shot learning, CIFAR-FS dataset (Bertinetto et al., 2018), FC100 dataset (Oreshkin et al., 2018), and mini-ImageNet dataset (Vinyals et al., 2016). Details about dataset settings in few-shot learning are in Appendix A.2.
+
+# 5.2 NETWORK WITH MULTI-BRANCH
+
+We build ConstellationNet on two ProtoNet variants, namely Conv-4 and ResNet-12, which are commonly used in few-shot learning. Details of the networks and the optimization are in the Appendix.
+
+We develop a new technique, Multi-Branch, to optimize the standard classification loss and the prototypical loss simultaneously. We find the two training schemes, the standard classification scheme and the prototypical scheme, to be complementary rather than conflicting. Details of these two schemes can be found in Appendix A.1. Different from the standard network backbones used in prior work, our embedding $\phi(\mathbf{x})$ is separated into two branches after a shared stem (a Y-shape). Details of our multi-branch design are elaborated in Appendix A.10, and the corresponding ablation study is described in Table 3.
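
The two branch objectives can be sketched as follows. This is a minimal sketch under our assumptions (details are in Appendix A.1): labels are assumed to be integers `0..N-1`, the embeddings stand in for the outputs of the two branches, and the equal loss weighting is our choice for illustration.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def prototypical_loss(z_support, y_support, z_query, y_query):
    """Prototypical branch: classify queries by distance to class prototypes."""
    classes = np.unique(y_support)                      # assumed to be 0..N-1
    protos = np.stack([z_support[y_support == c].mean(axis=0) for c in classes])
    d = ((z_query[:, None, :] - protos[None]) ** 2).sum(-1)   # (Q, N) distances
    return -log_softmax(-d)[np.arange(len(y_query)), y_query].mean()

def classification_loss(logits, y):
    """Standard classification branch: cross-entropy over base classes."""
    return -log_softmax(logits)[np.arange(len(y)), y].mean()

# Toy episode: 3 well-separated classes, 5 support / 4 query shots each.
rng = np.random.default_rng(3)
z_s = np.concatenate([rng.normal(loc=3.0 * c, size=(5, 4)) for c in range(3)])
y_s = np.repeat(np.arange(3), 5)
z_q = np.concatenate([rng.normal(loc=3.0 * c, size=(4, 4)) for c in range(3)])
y_q = np.repeat(np.arange(3), 4)
proto = prototypical_loss(z_s, y_s, z_q, y_q)
ce = classification_loss(rng.normal(size=(6, 10)), np.arange(6))
total = ce + proto   # joint objective; equal weighting is an assumption
```

In the Y-shaped design, each branch would receive its own head's embedding after the shared stem, and both losses would be backpropagated through the stem jointly.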
+
+Feature Augmentation. During the meta-testing stage, we discover that concatenating the features before average pooling to the final output improves classification accuracy. The advantage of this technique is that it introduces no additional training or model parameters.
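
One plausible reading of this augmentation, sketched below under our own assumptions (the exact concatenation point is not reproduced here, and all names are ours): pool the penultimate feature map spatially and concatenate it with the pooled final output to form a larger, parameter-free embedding.

```python
import numpy as np

def augmented_embedding(final_map, penult_map):
    """Meta-test feature augmentation (sketch): average-pool both the final and
    the penultimate feature maps and concatenate them into one embedding.
    No training and no extra parameters are involved."""
    pooled_final = final_map.mean(axis=(1, 2))     # (B, C_final)
    pooled_penult = penult_map.mean(axis=(1, 2))   # (B, C_penult)
    return np.concatenate([pooled_final, pooled_penult], axis=-1)

# Toy usage: a 64-channel final map and a 32-channel penultimate map.
rng = np.random.default_rng(4)
emb = augmented_embedding(rng.normal(size=(2, 5, 5, 64)),
                          rng.normal(size=(2, 10, 10, 32)))
```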
+
+# 5.3 RESULTS ON STANDARD BENCHMARKS
+
+Tables 1 and 2 summarize the results of the few-shot classification tasks on mini-ImageNet, CIFAR-FS, and FC100. Our method shows a notable improvement over several strong baselines in various settings. ConstellationNet significantly improves the performance of shallow networks (Conv-4): in Table 2, our model outperforms SIB (Hu et al., 2020) by $0.6\%$ in 1-shot and $5.6\%$ in 5-shot, and in Table 1 it outperforms MetaOptNet (Lee et al., 2019) by $5.95\%$ in 1-shot and $6.24\%$ in 5-shot. For deep networks with rich features, the constellation module still contributes to the performance, showing its complementary advantage to convolution: our ResNet-12 model beats the 1-shot result of MetaOptNet (Lee et al., 2019) by $2.7\%$ on FC100, $3.4\%$ on CIFAR-FS, and $1.72\%$ on mini-ImageNet. The consistent improvement over both shallow and deep networks across all three datasets shows the generality of our method. Our ConstellationNet is orthogonal to margin-loss-based methods (Liu et al., 2020; Li et al., 2020), and we do not use extra cross-modal information (Xing et al., 2019; Li et al., 2020); instead, our model enhances the generalization ability of the embedding by incorporating its own part-based representation. Additionally, to verify this orthogonality, we adapt the negative margin loss following Liu et al. (2020) to our Conv-4 models in
+
+Table 1: Comparison to prior work on mini-ImageNet. Average 5-way classification accuracies (%) on mini-ImageNet meta-test split are reported with $95\%$ confidence intervals. Results of prior works are adopted from Lee et al. (2019) and original papers. $\dagger$ used extra cross-modal information.
+
+| Model | Backbone | 1-shot | 5-shot |
+|---|---|---|---|
+| Meta-Learning LSTM (Ravi & Larochelle, 2016) | Conv-4 | 43.44 ± 0.77 | 60.60 ± 0.71 |
+| Matching Networks (Vinyals et al., 2016) | Conv-4 | 43.56 ± 0.84 | 55.31 ± 0.73 |
+| Prototypical Networks (Snell et al., 2017) | Conv-4 | 49.42 ± 0.78 | 68.20 ± 0.66 |
+| Transductive Prop Nets (Liu et al., 2018) | Conv-4 | 55.51 ± 0.86 | 69.86 ± 0.65 |
+| MetaOptNet (Lee et al., 2019) | Conv-4 | 52.87 ± 0.57 | 68.76 ± 0.48 |
+| Negative Margin (Liu et al., 2020) | Conv-4 | 52.84 ± 0.76 | 70.41 ± 0.66 |
+| ConstellationNet (ours) | Conv-4 | 58.82 ± 0.23 | 75.00 ± 0.18 |
+| SNAIL (Mishra et al., 2018) | ResNet-12 | 55.71 ± 0.99 | 68.88 ± 0.92 |
+| TADAM (Oreshkin et al., 2018) | ResNet-12 | 58.50 ± 0.30 | 76.70 ± 0.30 |
+| TapNet (Yoon et al., 2019) | ResNet-12 | 61.65 ± 0.15 | 76.36 ± 0.10 |
+| Variational FSL (Zhang et al., 2019) | ResNet-12 | 61.23 ± 0.26 | 77.69 ± 0.17 |
+| MetaOptNet (Lee et al., 2019) | ResNet-12 | 62.64 ± 0.61 | 78.63 ± 0.46 |
+| CAN (Hou et al., 2019) | ResNet-12 | 63.85 ± 0.48 | 79.44 ± 0.34 |
+| SLA-AG (Lee et al., 2020) | ResNet-12 | 62.93 ± 0.63 | 79.63 ± 0.47 |
+| Meta-Baseline (Chen et al., 2020) | ResNet-12 | 63.17 ± 0.23 | 79.26 ± 0.17 |
+| AM3 (Xing et al., 2019)† | ResNet-12 | 65.21 ± 0.30 | 75.20 ± 0.27 |
+| ProtoNets + TRAML (Li et al., 2020) | ResNet-12 | 60.31 ± 0.48 | 77.94 ± 0.57 |
+| AM3 + TRAML (Li et al., 2020)† | ResNet-12 | 67.10 ± 0.52 | 79.54 ± 0.60 |
+| Negative Margin (Liu et al., 2020) | ResNet-12 | 63.85 ± 0.81 | 81.57 ± 0.56 |
+| ConstellationNet (ours) | ResNet-12 | 64.89 ± 0.23 | 79.95 ± 0.17 |
+
+Table 2: Comparison to prior work on FC100 and CIFAR-FS. Average 5-way classification accuracies (%) on CIFAR-FS and FC100 meta-test split are reported with $95\%$ confidence intervals. Results of prior works are adopted from Lee et al. (2019) and original papers.
+
+| Model | Backbone | CIFAR-FS 1-shot | CIFAR-FS 5-shot | FC100 1-shot | FC100 5-shot |
+|---|---|---|---|---|---|
+| MAML (Finn et al., 2017) | Conv-4 | 58.9 ± 1.9 | 71.5 ± 1.0 | - | - |
+| Prototypical Networks (Snell et al., 2017) | Conv-4 | 55.5 ± 0.7 | 72.0 ± 0.6 | - | - |
+| Relation Networks (Sung et al., 2018) | Conv-4 | 55.0 ± 1.0 | 69.3 ± 0.8 | - | - |
+| R2D2 (Bertinetto et al., 2018) | Conv-4 | 65.3 ± 0.2 | 79.4 ± 0.1 | - | - |
+| SIB (Hu et al., 2020) | Conv-4 | 68.7 ± 0.6 | 77.1 ± 0.4 | - | - |
+| ConstellationNet (ours) | Conv-4 | 69.3 ± 0.3 | 82.7 ± 0.2 | - | - |
+| Prototypical Networks (Snell et al., 2017) | ResNet-12 | 72.2 ± 0.7 | 83.5 ± 0.5 | 37.5 ± 0.6 | 52.5 ± 0.6 |
+| TADAM (Oreshkin et al., 2018) | ResNet-12 | - | - | 40.1 ± 0.4 | 56.1 ± 0.4 |
+| MetaOptNet-RR (Lee et al., 2019) | ResNet-12 | 72.6 ± 0.7 | 84.3 ± 0.5 | 40.5 ± 0.6 | 55.3 ± 0.6 |
+| MetaOptNet-SVM (Lee et al., 2019) | ResNet-12 | 72.0 ± 0.7 | 84.2 ± 0.5 | 41.1 ± 0.6 | 55.5 ± 0.6 |
+| ConstellationNet (ours) | ResNet-12 | 75.4 ± 0.2 | 86.8 ± 0.2 | 43.8 ± 0.2 | 59.7 ± 0.2 |
+
+Appendix A.8. We observe that ConstellationNet with the negative margin brings a $0.52\%$ improvement over ConstellationNet alone, and obtains a $6.93\%$ gain over the baseline on mini-ImageNet.
+
+# 6 MODEL ANALYSIS
+
+# 6.1 ARCHITECTURE ALTERNATIVES
+
+In Table 3, we first study the role of each module in ConstellationNet, where the number of parameters is controlled to be approximately equivalent to the baseline's. Our constellation model brings $6.41\%$ and $2.59\%$ improvements over the baseline on the 1-shot Conv-4 and ResNet-12 results. Combined with our multi-branch training procedure, the model improves by an additional $1.34\%$ and $1.26\%$ on 1-shot Conv-4 and ResNet-12, respectively. Finally, feature augmentation from the penultimate layer to the final output embedding brings additional $0.45\%$ and $0.27\%$ improvements on the two variants.
+
+We also test the baseline model with extra channels in Table 3. The new model shows only slight improvements over the original baseline and is outperformed by our ConstellationNet by a large margin. We also obtain WRN-28-10 baseline results to validate our improvement: even with deeper and wider ResNet baselines, our ConstellationNet still outperforms this strong baseline. In Figure 2 (e), we further study whether the performance gap between ConstellationNet and the baseline can be reduced by simply increasing the baseline's model complexity, e.g. with more convolution channels. Although the baseline accuracy increases as the number of parameters grows, the performance gap remains significant. This validates our premise that modeling hierarchical part structures can greatly benefit the features learned by convolution and yields a more robust feature representation. In addition, applying self-attention on the distance map (6th
+
+Table 3: Effectiveness of modules. Average classification accuracies $(\%)$ on mini-ImageNet meta-test split. We compare our ConstellationNet with alternative architectures including the baseline and the modified baseline with extra channels based on Conv-4 and ResNet-12. We also include a baseline with WideResNet-28-10 (Zagoruyko & Komodakis, 2016) backbone for comparison.
+
+| Baseline | Cell Feature Clustering | Cell Relation Modeling | Multi Branch | Feature Augment | Extra Channels | 1×1 Convolution | #Params (Conv-4/Res-12) | Conv-4 1-shot | Conv-4 5-shot | ResNet-12 1-shot |
+|---|---|---|---|---|---|---|---|---|---|---|
+| ✓ | | | | | | | 117K/8.0M | 50.62 ± 0.23 | 68.40 ± 0.19 | 60.77 ± 0.22 |
+| ✓ | | | | | ✓ | | 222K/16M | 51.76 ± 0.22 | 69.54 ± 0.18 | 61.45 ± 0.22 |
+| ✓ | ✓ | | | | | | 146K/8.3M | 53.34 ± 0.23 | 70.61 ± 0.19 | 62.24 ± 0.23 |
+| ✓ | | ✓ | | | | | 184K/9.7M | 55.92 ± 0.23 | 73.02 ± 0.18 | 62.75 ± 0.23 |
+| ✓ | | ✓ | | | | ✓ | 192K/8.4M | 55.46 ± 0.23 | 72.52 ± 0.18 | 61.54 ± 0.24 |
+| ✓ | ✓ | ✓ | | | | | 200K/8.4M | 57.03 ± 0.23 | 74.09 ± 0.18 | 63.36 ± 0.23 |
+| ✓ | ✓ | ✓ | ✓ | | | | 200K/8.4M | 58.37 ± 0.23 | 74.52 ± 0.18 | 64.62 ± 0.23 |
+| ✓ | ✓ | ✓ | ✓ | ✓ | | | 200K/8.4M | 58.82 ± 0.23 | 75.00 ± 0.18 | 64.89 ± 0.23 |
+| ✓ (WideResNet-28-10 backbone) | | | | | ✓ | | 36.5M | — | — | 61.54 ± 0.25 (1-shot) / 79.41 ± 0.23 (5-shot) |
+
+row: $57.03\%$ on Conv-4, 1-shot) achieves better performance than applying it directly to the original cell features (i.e. the convolutional feature map) (4th row: $55.92\%$ on Conv-4, 1-shot). We also tried replacing the cell feature clustering module with a $1 \times 1$ convolution layer whose output dimension equals the number of clusters (5th row: $55.46\%$ on Conv-4, 1-shot); this is also worse than our result (6th row). We observe that the $1 \times 1$ convolution layer is less expressive than the cell feature clustering module, making it difficult to extract enough contextual information during cell relation modeling.
+
+# 6.2 MODULES ANALYSIS
+
+
+Figure 2: Modules analysis. (a, b, c, d) We study the effect of varying the number of clusters, the number of heads in the attention layer, and the layer indices equipped with constellation modules, based on Conv-4. (e) We demonstrate that the performance gain of our ConstellationNet is unmatched by increasing the model complexity of our baselines. All experiments are done on mini-ImageNet.
+
+In Figure 2 (a), we vary the number of clusters adopted in all layers and observe the performance change. We find that increasing the number of clusters improves the accuracy in general, and that setting the number of clusters to 64 is optimal in terms of both model size and classification performance. Figure 2 (b) shows that the number of attention heads affects performance less than the number of clusters; 8-head attention obtains a $1.80\%$ performance gain in the 1-shot setting compared to 1-head attention. In Figure 2 (c, d), we also study the effect of applying the clustering algorithm to different layers. The results show that both early and high-level features benefit from introducing the clustering algorithm into the original CNN architecture.
+
+# 6.3 VISUALIZATION
+
+Figure 3 visualizes the cluster centers in each layer of the Conv-4 model on mini-ImageNet. In the upper part of the figure, each image shows patches corresponding to the cell features nearest to a cluster center (i.e. with the lowest Euclidean distance). We observe that clusters in early layers (e.g. layers 1, 2) represent simple low-level patterns, while clusters in later layers (e.g. layers 3, 4) indicate more complex structures and parts. In the lower part of the figure, we choose two cluster centers from layer 4 for further interpretation: the left one (green box) could plausibly represent legs, since it consists of various types of legs from humans, dogs and other animals; the right one (red box) shows that most of the nearest cell features to this cluster center are parts with birds' heads or beetles, which share a dotted structure (i.e. black dots on beetles / eyes on birds' heads).
+
+The left side of Figure 4 visualizes the cell features assigned to different clusters. For each image, we extract the assignment maps corresponding to three cluster centers generated in the last constellation module of Conv-4 and find the cell features with the highest assignments within each assignment map. The locations of these cell features are projected back into the original image space, marked by dots ("$\cdot$") in three different colors to show three different feature clusters. For a given class of images, the same cluster centers are selected for comparison across 6 samples. As shown in Figure 4, part information of each class is explicitly discovered. For the bird
+
+
+Figure 3: Visualization of cluster centers. (Upper) We visualize four cluster centers in each layer (layers 1-4) by showing patches associated with the cell features nearest to each cluster center. (Lower) Identifying parts from two cluster centers in layer 4: the left one (green box) represents various types of legs (unicycle wheels with human legs, dog's legs, human legs, other legs); the right one (red box) mostly shows beetles and birds' heads, which share a dotted structure.
+
+Figure 4: Visualization of the cell assignments and attention maps. (Left) Each color represents a cluster, and each point, marked as "$\cdot$", represents a cell assigned to a cluster center. We demonstrate 6 samples for each class (bird, dog and tank). (Right) We visualize the attention maps of one query feature (at the location of the red point in the left part) with all key features. The middle part shows the attention maps corresponding to the 8 heads in the multi-head attention. The right part shows an overlay of all attention maps.
+
+category, we can see different parts in each image, including the head (cyan "$\cdot$"), body (purple "$\cdot$") and tail (yellow "$\cdot$"). For the dog category, we see parts including heads (red "$\cdot$"), legs (green "$\cdot$") and body (blue "$\cdot$"). For the tank category, we see parts like the track (light blue "$\cdot$") and turret (pink "$\cdot$").
+
+The right side of Figure 4 visualizes the attention maps in the cell relation model. We use the last constellation module in the ResNet-12 model for visualization, since it captures high-level features that better represent parts. We choose one query feature at the center of the object and show its attention map over all key features. The middle part of the figure shows the attention maps corresponding to the 8 heads in the multi-head attention. We observe that some parts are identified, such as the head (second map in the first row), legs (first two maps in the second row), buttock (first map in the first row) and body (second map in the second row). A merged attention map, obtained by overlaying all 8 attention maps, is presented at the right of the figure. It indicates that all the attention heads together can extract the features of the whole object, which is useful for the final classification.
+
+# 7 CONCLUSION
+
+In this paper, we present ConstellationNet by introducing an explicit feature clustering procedure with relation learning via self-attention. We implement a mini-batch soft $k$ -means algorithm to capture the cell feature distribution. With integrated implicit (standard CNN modules) and explicit (cell feature clustering + cell relation modeling) representations, our proposed ConstellationNet achieves significant improvement over the competing methods on few-shot classification benchmarks.
+
+# ACKNOWLEDGMENTS
+
+This work is funded by NSF IIS-1618477 and NSF IIS-1717431. We thank Qualcomm Inc. for award support. We thank Kwonjoon Lee, Tiange Luo and Hao Su for valuable feedback.
+
+# REFERENCES
+
+Luca Bertinetto, Joao F Henriques, Philip HS Torr, and Andrea Vedaldi. Meta-learning with differentiable closed-form solvers. arXiv preprint arXiv:1805.08136, 2018.
+Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020.
+Yinbo Chen, Xiaolong Wang, Zhuang Liu, Huijuan Xu, and Trevor Darrell. A new meta-baseline for few-shot learning. arXiv preprint arXiv:2003.04390, 2020.
+Adam Coates and Andrew Y Ng. Learning feature representations with k-means. In Neural networks: Tricks of the trade, pp. 561-580. Springer, 2012.
+Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248-255. IEEE, 2009.
+Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. IEEE transactions on pattern analysis and machine intelligence, 28(4):594-611, 2006.
+Pedro F Felzenszwalb and Daniel P Huttenlocher. Pictorial structures for object recognition. International journal of computer vision, 61(1):55-79, 2005.
+Pedro F Felzenszwalb, Ross B Girshick, David McAllester, and Deva Ramanan. Object detection with discriminatively trained part-based models. IEEE transactions on pattern analysis and machine intelligence, 32(9):1627-1645, 2009.
+Robert Fergus, Pietro Perona, and Andrew Zisserman. Object class recognition by unsupervised scale-invariant learning. In CVPR, 2003.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400, 2017.
+Weifeng Ge, Xiangru Lin, and Yizhou Yu. Weakly supervised complementary parts models for fine-grained image classification from the bottom up. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3034-3043, 2019.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
+Geoffrey E Hinton, Sara Sabour, and Nicholas Frosst. Matrix capsules with EM routing. In ICLR, 2018.
+Ruiming Hou, Hong Chang, MA Bingpeng, Shiguang Shan, and Xilin Chen. Cross attention network for few-shot classification. In Advances in Neural Information Processing Systems, pp. 4005-4016, 2019.
+Shell Xu Hu, Pablo G Moreno, Yang Xiao, Xi Shen, Guillaume Obozinski, Neil D Lawrence, and Andreas Damianou. Empirical bayes transductive meta-learning with synthetic gradients. arXiv preprint arXiv:2004.12696, 2020.
+Adam Kosiorek, Sara Sabour, Yee Whye Teh, and Geoffrey E Hinton. Stacked capsule autoencoders. In Advances in Neural Information Processing Systems, pp. 15486-15496, 2019.
+Philipp Krahenbuhl, Carl Doersch, Jeff Donahue, and Trevor Darrell. Data-dependent initializations of convolutional neural networks. arXiv preprint arXiv:1511.06856, 2015.
+Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, 2012.
+
+Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
+Hankook Lee, Sung Ju Hwang, and Jinwoo Shin. Self-supervised label augmentation via input transformations. 37th International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020, 2020.
+Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10657-10665, 2019.
+Aoxue Li, Weiran Huang, Xu Lan, Jiashi Feng, Zhenguo Li, and Liwei Wang. Boosting few-shot learning with adaptive margin loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12576-12584, 2020.
+Bin Liu, Yue Cao, Yutong Lin, Qi Li, Zheng Zhang, Mingsheng Long, and Han Hu. Negative margin matters: Understanding margin in few-shot classification. arXiv preprint arXiv:2003.12060, 2020.
+Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sung Ju Hwang, and Yi Yang. Learning to propagate labels: Transductive propagation network for few-shot learning. arXiv preprint arXiv:1805.10002, 2018.
+David G Lowe. Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60(2):91-110, 2004.
+Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In ICLR, 2018.
+Tsendsuren Munkhdalai, Xingdi Yuan, Soroush Mehri, and Adam Trischler. Rapid adaptation with conditionally shifted neurons. arXiv preprint arXiv:1712.09926, 2017.
+Boris N Oreshkin, Alexandre Lacoste, and Pau Rodriguez. Tadam: Task dependent adaptive metric for improved few-shot learning. arXiv preprint arXiv:1805.10123, 2018.
+Yuxin Peng, Xiangteng He, and Junjie Zhao. Object-part attention model for fine-grained image classification. IEEE Transactions on Image Processing, 27(3):1487-1500, 2017.
+Lei Qi, Xiaoqiang Lu, and Xuelong Li. Exploiting spatial relation for fine-grained image classification. Pattern Recognition, 91:47-55, 2019.
+Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016.
+Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between capsules. In Advances in neural information processing systems, pp. 3856-3866, 2017.
+Ruslan Salakhutdinov, Joshua B Tenenbaum, and Antonio Torralba. Learning with hierarchical-deep models. IEEE transactions on pattern analysis and machine intelligence, 35(8):1958-1971, 2012.
+David Sculley. Web-scale k-means clustering. In Proceedings of the 19th international conference on World wide web, pp. 1177-1178, 2010.
+Marcel Simon and Erik Rodner. Neural activation constellations: Unsupervised part model discovery with convolutional networks. In Proceedings of the IEEE international conference on computer vision, pp. 1143-1151, 2015.
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
+Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077-4087, 2017.
+
+Erik B Sudderth, Antonio Torralba, William T Freeman, and Alan S Willsky. Learning hierarchical models of scenes, objects, and parts. In ICCV, volume 2, 2005.
+Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1199-1208, 2018.
+Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015.
+Pavel Tokmakov, Yu-Xiong Wang, and Martial Hebert. Learning compositional representations for few-shot recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6372-6381, 2019.
+Yao-Hung Hubert Tsai, Nitish Srivastava, Hanlin Goh, and Ruslan Salakhutdinov. Capsules with inverted dot-product attention routing. arXiv preprint arXiv:2002.04764, 2020.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
+Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in neural information processing systems, pp. 3630-3638, 2016.
+Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7794-7803, 2018.
+Markus Weber, Max Welling, and Pietro Perona. Unsupervised learning of models for recognition. In ECCV, 2000.
+Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In CVPR, 2017.
+Chen Xing, Negar Rostamzadeh, Boris Oreshkin, and Pedro O O Pinheiro. Adaptive cross-modal few-shot learning. Advances in Neural Information Processing Systems, 32:4847-4857, 2019.
+Sung Whan Yoon, Jun Seo, and Jaekyun Moon. Tapnet: Neural network augmented with task-adaptive projection for few-shot learning. arXiv preprint arXiv:1905.06549, 2019.
+Alan L Yuille, Peter W Hallinan, and David S Cohen. Feature extraction from faces using deformable templates. International journal of computer vision, 8(2):99-111, 1992.
+Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
+Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
+Jian Zhang, Chenglong Zhao, Bingbing Ni, Minghao Xu, and Xiaokang Yang. Variational few-shot learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1685-1694, 2019.
+Song-Chun Zhu and David Mumford. A stochastic grammar of images. Now Publishers Inc, 2007.
+Yousong Zhu, Chaoyang Zhao, Jinqiao Wang, Xu Zhao, Yi Wu, and Hanqing Lu. Coupling global structure with local parts for object detection. In Proceedings of the IEEE international conference on computer vision, pp. 4126-4134, 2017.
+
+# A APPENDIX
+
+# A.1 FEW-SHOT LEARNING FRAMEWORK
+
+In this section, we introduce background concepts of meta-learning and elaborate on the few-shot learning framework used in our ConstellationNet.
+
+Meta-Learning in Few-Shot Classification. Current few-shot learning is typically formulated as a meta-learning task (Finn et al., 2017), in which a dataset $\mathcal{D}^{\mathrm{base}}$ provides commonsense knowledge and a dataset $\mathcal{D}^{\mathrm{novel}}$ is used for few-shot classification. $\mathcal{D}^{\mathrm{base}}$ has classes $\mathcal{C}_{\mathrm{base}}$ that are disjoint from the classes $\mathcal{C}_{\mathrm{novel}}$ in $\mathcal{D}^{\mathrm{novel}}$ to ensure fairness. The meta-learning framework has two stages, meta-training and meta-test: in the meta-training stage, we train a model to learn generic features from $\mathcal{D}^{\mathrm{base}}$; in the meta-test stage, we adapt the model on the limited training split of $\mathcal{D}^{\mathrm{novel}}$ and evaluate its performance on the test split.
+
+ProtoNet-Based Framework. In our ConstellationNet, we adopt ProtoNet (Snell et al., 2017) as the base few-shot learning framework. In ProtoNet, the dataset $\mathcal{D}^{\mathrm{novel}}$ is represented by a series of $K$-way $N$-shot tasks $\{\mathcal{T}\}$, where each task consists of a support set and a query set, i.e. $\mathcal{T} = (\mathcal{T}^{\mathrm{supp}}, \mathcal{T}^{\mathrm{query}})$. The support set $\mathcal{T}^{\mathrm{supp}}$ contains $K$ classes with $N$ examples per class from the training split of $\mathcal{D}^{\mathrm{novel}}$, which are used to adapt the model in the meta-test stage. The query set $\mathcal{T}^{\mathrm{query}}$, drawn from the test split of $\mathcal{D}^{\mathrm{novel}}$, is then used to evaluate the model.
+
+ProtoNet attempts to learn a generic feature extractor $\phi(\mathbf{x})$ on image $\mathbf{x}$ and represents each class $k$ by a prototype $\mathbf{c}_k$, the average feature of the support-set examples in $\mathcal{T}^{\mathrm{supp}}$ belonging to that class:
+
+$$
+\mathbf{c}_k = \frac{1}{N} \sum_{(\mathbf{x}, y) \in \mathcal{T}^{\mathrm{supp}},\; y = k} \phi(\mathbf{x}) \tag{11}
+$$
+
+During the meta-test stage, we use the prototypes to compute the probability $p_k$ of a query example $\mathbf{x}' \in \mathcal{T}^{\text{query}}$ on class $k$ and predict its label $y'$ :
+
+$$
+p_k = p\left(y = k \mid \mathbf{x}^{\prime}, \mathcal{T}^{\mathrm{supp}}\right) = \frac{\exp\left(d\left(\mathbf{x}^{\prime}, \mathbf{c}_k\right)\right)}{\sum_{k^{\prime}} \exp\left(d\left(\mathbf{x}^{\prime}, \mathbf{c}_{k^{\prime}}\right)\right)}, \quad y^{\prime} = \arg\max_{k} p_k, \tag{12}
+$$
+
+where $d(\cdot, \cdot)$ is a cosine similarity function (different from the Euclidean distance in Snell et al. (2017)).
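
As a concrete illustration, the prototype computation of Eq. (11) and the cosine-similarity prediction of Eq. (12) can be sketched in NumPy. This is a minimal sketch rather than the paper's implementation: the feature extractor $\phi$ is abstracted away as precomputed feature vectors, and the function names are illustrative.

```python
import numpy as np

def prototypes(feats, labels, num_classes):
    # Eq. (11): the prototype of class k is the mean support feature of class k.
    return np.stack([feats[labels == k].mean(axis=0) for k in range(num_classes)])

def predict(query_feat, protos):
    # Eq. (12): softmax over cosine similarities d(x', c_k); predict the argmax class.
    q = query_feat / np.linalg.norm(query_feat)
    c = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = c @ q
    p = np.exp(logits) / np.exp(logits).sum()
    return p, int(np.argmax(p))
```

For a 2-way 2-shot task whose class-0 support features cluster near `[1, 0]` and class-1 features near `[0, 1]`, a query close to the first cluster is assigned to class 0.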
+
+During the meta-training stage, there are two different training schemes: The prototypical scheme from ProtoNet uses an episodic learning strategy that also formulates the dataset $\mathcal{D}^{\mathrm{base}}$ as a series of tasks $\{\mathcal{T}\}$ . The negative log-likelihood loss $\mathcal{L}(\phi)$ is optimized:
+
+$$
+\ell\left(\mathcal{T}^{\mathrm{supp}}, \mathcal{T}^{\mathrm{query}}\right) = \mathbb{E}_{\left(\mathbf{x}^{\prime}, y^{\prime}\right) \in \mathcal{T}^{\mathrm{query}}}\left[-\log p\left(y = y^{\prime} \mid \mathbf{x}^{\prime}, \mathcal{T}^{\mathrm{supp}}\right)\right], \tag{13}
+$$
+
+$$
+\mathcal{L}(\phi) = \mathbb{E}_{\mathcal{T} = \left(\mathcal{T}^{\mathrm{supp}}, \mathcal{T}^{\mathrm{query}}\right) \sim \mathcal{D}^{\mathrm{base}}}\, \ell\left(\mathcal{T}^{\mathrm{supp}}, \mathcal{T}^{\mathrm{query}}\right). \tag{14}
+$$
+
+Another way is the standard classification scheme (Chen et al., 2020): It simply uses $\mathcal{D}^{\mathrm{base}}$ as a standard classification dataset $\{(\mathbf{x},y)\}$ consisting of $Q$ classes in total. Thus, a cross-entropy loss $\mathcal{L}(\phi)$ is optimized:
+
+$$
+\mathcal{L}(\phi) = \mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}^{\mathrm{base}}}\left[-\log \frac{\exp\left(\mathbf{w}_y \cdot \phi(\mathbf{x})\right)}{\sum_{q} \exp\left(\mathbf{w}_q \cdot \phi(\mathbf{x})\right)}\right] \tag{15}
+$$
+
+where $\mathbf{w}_q$ is the linear weight for class $q$. In our ConstellationNet, we use the standard classification scheme by default. For the experiment with the multi-branch network, we use the prototypical scheme and the standard classification scheme for the two separate branches.
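
The standard classification scheme of Eq. (15) reduces to an ordinary softmax cross-entropy over the $Q$ base classes. Below is a minimal NumPy sketch, assuming a precomputed feature $\phi(\mathbf{x})$ and a weight matrix whose rows are the $\mathbf{w}_q$; the names are illustrative, not the paper's code.

```python
import numpy as np

def classification_loss(feat, label, W):
    # Eq. (15): cross-entropy of the linear class scores w_q . phi(x).
    logits = W @ feat
    logits = logits - logits.max()  # subtract max for numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label]
```

The loss is smallest when `label` is the class whose weight vector best aligns with the feature, which is what drives $\phi$ toward class-discriminative features during meta-training.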
+
+# A.2 DATASETS
+
+The CIFAR-FS dataset (Bertinetto et al., 2018) is a few-shot classification benchmark containing 100 classes from CIFAR-100 (Krizhevsky et al., 2009). The classes are randomly split into 64, 16 and 20 classes as the meta-training, meta-validation and meta-testing sets, respectively. Each class contains 600 images of size $32 \times 32$. We adopt the split from Lee et al. (2019). The FC100 dataset (Oreshkin et al., 2018) is another benchmark based on CIFAR-100 where classes are grouped into 20 superclasses to avoid overlap between the splits. The mini-ImageNet dataset (Vinyals et al., 2016) is a common benchmark for few-shot classification containing 100 classes from ILSVRC-2012 (Deng et al., 2009). The classes are randomly split into 64, 16 and 20 classes as the meta-training, meta-validation and meta-testing sets, respectively. Each class contains 600 images of size $84 \times 84$. We follow the commonly used split of Ravi & Larochelle (2016), Lee et al. (2019) and Chen et al. (2020). In all experiments, we apply data augmentation to the meta-training set of all datasets to match the implementation of Lee et al. (2019).
+
+# A.3 NETWORK BACKBONE
+
+Conv-4. Following Lee et al. (2019), we adopt the same network with 4 convolutional blocks. Each block sequentially applies a $3 \times 3$ convolutional layer, a batch normalization layer, a ReLU activation and a $2 \times 2$ max-pooling layer. All 4 convolutional layers have 64 filters.
+
+ResNet-12. Following Chen et al. (2020), we construct each residual block from 3 consecutive convolutional blocks, where each convolutional block has a $3 \times 3$ convolutional layer, a batch normalization layer and a leaky ReLU activation; a max-pooling layer follows each residual block, and an additional average-pooling layer follows the last one. The ResNet-12 network has 4 residual blocks, with the number of filters set to 64, 128, 256 and 512, respectively.
+
+WRN-28-10. WideResNet expands the residual blocks by increasing the convolutional channels and layers (Zagoruyko & Komodakis, 2016). WRN-28-10 uses 28 convolutional layers with a widening factor of 10.
+
+# A.4 CONSTELLATION MODULE CONFIGURATION
+
+To achieve the best performance with constellation modules, we do not always enable them after every convolutional layer. For Conv-4, we use constellation modules after all four convolutional layers, but the cell relation modeling module is disabled in the first two constellation modules due to its high memory consumption. For ResNet-12, we enable constellation modules after convolutional layers 1, 7, 8 and 9, and disable the relation modeling module in the first constellation module. We use deep supervision in ResNet-12 to stabilize the training of the constellation modules.
+
+# A.5 SELF-ATTENTION SETTINGS
+
+We follow the common practice of Vaswani et al. (2017) and equip the attention layer with residual connections, dropout and layer normalization. The sine positional encoding follows the settings of Carion et al. (2020).
+
+# A.6 TRAINING DETAILS
+
+Optimization Settings. We follow the implementation of Lee et al. (2019) and use the SGD optimizer with an initial learning rate of 1, a momentum of 0.9 and a weight decay of $5 \times 10^{-4}$. The learning rate is reduced to 0.06, 0.012 and 0.0024 at epochs 20, 40 and 50, respectively. The inverse temperature $\beta$ is set to 100.0 in the cluster assignment step, and $\lambda$ is set to 1.0 in the centroid movement step.
+
+# A.7 ABLATION STUDY ON THE NUMBER OF CLUSTERS
+
+Table 4 studies the number of clusters needed for random and similar classes. The results show that accuracy is relatively insensitive to the number of clusters, while the optimal number of clusters depends on the similarity between classes. Fewer clusters are needed for a dataset whose classes are highly similar, which aligns with our intuition: only a limited number of patterns exists in such a dataset, so a small number of clusters is enough to represent its part-based information.
+
+The FC100 training dataset consists of 60 classes that are grouped evenly into 12 superclasses. In the similar classes group, the training dataset includes 6 randomly selected superclasses (i.e., 30 semantically similar classes), and models are trained with 8, 16, 32, 64 and 128 clusters. The highest accuracy occurs at 16 clusters (1-shot: $39.12\%$ in ResNet-12). In the random classes group, 30 classes are randomly
+
+Table 4: Ablation study on the number of clusters for random and similar classes. We investigate how the similarity of images in the training dataset affects the optimal number of clusters. The first group of experiments uses a training dataset with 30 similar classes, while the second group uses 30 random classes from the FC100 dataset; all experiments are performed on ResNet-12 with constellation modules.
+
+| # Clusters | Similar Classes 1-shot | Similar Classes 5-shot | Random Classes 1-shot | Random Classes 5-shot |
+| --- | --- | --- | --- | --- |
+| 8 | 38.9 ± 0.2 | 52.8 ± 0.2 | 40.9 ± 0.2 | 54.5 ± 0.2 |
+| 16 | 39.1 ± 0.2 | 51.8 ± 0.2 | 40.9 ± 0.2 | 54.9 ± 0.2 |
+| 32 | 38.7 ± 0.2 | 52.3 ± 0.2 | 40.9 ± 0.2 | 54.7 ± 0.2 |
+| 64 | 38.8 ± 0.2 | 52.3 ± 0.2 | 41.2 ± 0.2 | 54.9 ± 0.2 |
+| 128 | 38.8 ± 0.2 | 52.1 ± 0.2 | 40.8 ± 0.2 | 54.7 ± 0.2 |
+
+sampled from the original training dataset, and we repeat the same experiments as above. The highest accuracy occurs at 64 clusters (1-shot: $41.22\%$ in ResNet-12), far more than the 16 clusters needed for images from similar classes.
+
+# A.8 ADDITIONAL EXPERIMENTS WITH NEGATIVE MARGIN
+
+Table 5: Additional experiments with the negative margin loss. Average classification accuracies (%) on the mini-ImageNet meta-test split. We compare our ConstellationNet and the baseline with and without the negative margin loss, based on Conv-4.
+
+| Baseline | Cell Feature Clustering | Cell Relation Modeling | Negative Margin | Conv-4 1-shot | Conv-4 5-shot |
+| --- | --- | --- | --- | --- | --- |
+| ✓ | | | | 50.62 ± 0.23 | 68.40 ± 0.19 |
+| ✓ | | | ✓ | 51.42 ± 0.23 | 68.84 ± 0.19 |
+| ✓ | ✓ | ✓ | | 57.03 ± 0.23 | 74.09 ± 0.18 |
+| ✓ | ✓ | ✓ | ✓ | 57.55 ± 0.23 | 74.49 ± 0.18 |
+
+Table 5 studies the use of the negative margin loss (Liu et al., 2020) on our Conv-4 models. In the negative margin loss, we use the inner-product similarity, a temperature coefficient $\beta = 1.0$ and a negative margin $m = -0.5$, which attain the best performance improvement on our models. In addition, we do not perform a fine-tuning step during meta-test. Our baseline with the negative margin loss obtains a $0.80\%$ improvement on 1-shot and a $0.44\%$ improvement on 5-shot over the plain baseline. Similarly, our ConstellationNet with the negative margin loss achieves a $0.52\%$ improvement on 1-shot and a $0.40\%$ improvement on 5-shot. The consistent improvement on both the baseline and our ConstellationNet indicates that our constellation module is orthogonal to the negative margin loss, and both can boost performance on few-shot classification.
+
+# A.9 CLARIFICATION ON CLUSTERING PROCEDURE
+
+In this section, we further clarify the cell feature clustering procedure of Sec. 4.1. During the training stage, the global cluster centers $\mathcal{V} = \{\mathbf{v}_k\}$ are updated by the cluster centers $\{\mathbf{v}_k^{\prime}\}$ computed on the current mini-batch. Each update to a cluster center $\mathbf{v}_k$ is weighted by a momentum coefficient $\eta$ determined by the value of an associated counter $s_k$, since we want to avoid large adjustments from the current mini-batch in order to stabilize the global cluster centers. Moreover, the mini-batches of examples are randomly drawn from the dataset following Sculley (2010), without any specialized design to optimize the clustering. During the evaluation stage, we fix the global cluster centers $\mathcal{V}$ in the forward step of our model, avoiding potential information leak or transduction from the test mini-batches.
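
A minimal sketch of such a counter-based update, in the style of mini-batch $k$-means (Sculley, 2010), is shown below. The specific schedule $\eta = 1/s_k$ is an assumption for illustration; the exact momentum coefficient used in the paper may differ.

```python
import numpy as np

def update_centers(V, counts, batch_feats):
    # Assign each cell feature in the mini-batch to its nearest global center,
    # then move that center toward the feature with a per-center step
    # eta = 1 / s_k that shrinks as the counter s_k grows, so later batches
    # perturb the global centers less and less.
    assign = np.argmin(((batch_feats[:, None, :] - V[None, :, :]) ** 2).sum(-1), axis=1)
    for x, k in zip(batch_feats, assign):
        counts[k] += 1
        eta = 1.0 / counts[k]
        V[k] = (1.0 - eta) * V[k] + eta * x  # momentum-weighted move toward x
    return V, counts
```

At evaluation time one would simply stop calling `update_centers` and use the frozen `V`, mirroring the fixed global centers described above.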
+
+# A.10 MULTI-BRANCH DETAILS
+
+Our embedding $\phi(\mathbf{x})$ is separated into two branches after a shared stem (a Y-shape), defined as $\phi(\mathbf{x}) = \{\phi^{\mathrm{cls}}(\mathbf{x}), \phi^{\mathrm{proto}}(\mathbf{x})\}$ with $\phi^{\mathrm{cls}}(\mathbf{x}) = g^{\mathrm{cls}}(f^{\mathrm{stem}}(\mathbf{x}))$ and $\phi^{\mathrm{proto}}(\mathbf{x}) = g^{\mathrm{proto}}(f^{\mathrm{stem}}(\mathbf{x}))$. The two branches $\phi^{\mathrm{cls}}(\mathbf{x})$ and $\phi^{\mathrm{proto}}(\mathbf{x})$ are trained with the standard classification and prototypical schemes, respectively, in a multi-task learning fashion. At test time, $\phi^{\mathrm{cls}}(\mathbf{x})$ and $\phi^{\mathrm{proto}}(\mathbf{x})$ are concatenated to compute distances between support prototypes and query images.
+
+For our ConstellationNet, we split the network into two branches after the second convolutional block (Conv-4) or the second residual block (ResNet-12). We keep the shared stem identical to the network backbone and reduce the channels of the two separate branches to match the parameter count of the model without multi-branch.
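
Schematically, the Y-shaped embedding can be sketched as follows, with `f_stem`, `g_cls` and `g_proto` as placeholder callables standing in for the actual stem and branch sub-networks (illustrative names, not the real backbone):

```python
import numpy as np

def multi_branch_embed(x, f_stem, g_cls, g_proto):
    # Shared stem followed by two branch heads; at test time the two branch
    # features are concatenated into a single embedding phi(x).
    h = f_stem(x)
    return np.concatenate([g_cls(h), g_proto(h)])
```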
+
+# A.11 CONNECTION WITH CAPSULE NETWORKS
+
+A notable development in learning explicit structured representations within an end-to-end framework is capsule networks (CapsNets) (Sabour et al., 2017). This line of work (Sabour et al., 2017; Hinton et al., 2018; Kosiorek et al., 2019; Tsai et al., 2020) aims to parse a visual scene in an interpretable and hierarchical way. Sabour et al. (2017) represent parts and objects with vector-based capsules and a dynamic routing mechanism. Tsai et al. (2020) use a stacked autoencoder architecture to model the hierarchical relations among parts, objects and scenes. Our ConstellationNet instead maintains part modeling by jointly learning the convolution and constellation modules to simultaneously attain implicit and explicit representations.
\ No newline at end of file
diff --git a/attentionalconstellationnetsforfewshotlearning/images.zip b/attentionalconstellationnetsforfewshotlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..86be7a35805e607ac9dfd3148e56660c122978f2
--- /dev/null
+++ b/attentionalconstellationnetsforfewshotlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6fc255e44791e375ec2afaf73db3f920975f7fc567daf497bd6aa0a9e0a2255c
+size 645376
diff --git a/attentionalconstellationnetsforfewshotlearning/layout.json b/attentionalconstellationnetsforfewshotlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..38ee4f8551f0b5b9f2a1376c28bc359f540a0b2b
--- /dev/null
+++ b/attentionalconstellationnetsforfewshotlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2bcf0126fe772b8fe01599090a97e7ea523da1a63661032811403e1af44cca77
+size 586788
diff --git a/auctionlearningasatwoplayergame/c67499e2-b68c-4964-a190-4aa8387dbeeb_content_list.json b/auctionlearningasatwoplayergame/c67499e2-b68c-4964-a190-4aa8387dbeeb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..069545f29d896264496e3bab62d9c4e96366cb94
--- /dev/null
+++ b/auctionlearningasatwoplayergame/c67499e2-b68c-4964-a190-4aa8387dbeeb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c113146f834c76558e9c99a9a0f1cd30819a278c1cfeae7faadff6277145ba8d
+size 95095
diff --git a/auctionlearningasatwoplayergame/c67499e2-b68c-4964-a190-4aa8387dbeeb_model.json b/auctionlearningasatwoplayergame/c67499e2-b68c-4964-a190-4aa8387dbeeb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c2a69dc6b5e2f513ddf3f131137b74a24e5c68ba
--- /dev/null
+++ b/auctionlearningasatwoplayergame/c67499e2-b68c-4964-a190-4aa8387dbeeb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb220d7670271b49cd7be4f5ba5f8435199ab5965aaafbd78794cd925446e684
+size 118710
diff --git a/auctionlearningasatwoplayergame/c67499e2-b68c-4964-a190-4aa8387dbeeb_origin.pdf b/auctionlearningasatwoplayergame/c67499e2-b68c-4964-a190-4aa8387dbeeb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5d7f61f34ba07203bbe18c36acb83eecec333759
--- /dev/null
+++ b/auctionlearningasatwoplayergame/c67499e2-b68c-4964-a190-4aa8387dbeeb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6866ec8d18e8f36340e4553a488ecdb1463c85bf87114a55321dc158e2318eea
+size 1086404
diff --git a/auctionlearningasatwoplayergame/full.md b/auctionlearningasatwoplayergame/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9eff35a0d37616b79221a3aef4af6b3a2149bbf1
--- /dev/null
+++ b/auctionlearningasatwoplayergame/full.md
@@ -0,0 +1,446 @@
+# Auction Learning as a Two-Player Game
+
+Jad Rahme*, Samy Jelassi, S. Matthew Weinberg
+
+Princeton University
+
+Princeton, NJ 08540, USA
+
+{jrahme, sjelassi, smweinberg}@princeton.edu
+
+# ABSTRACT
+
+Designing an incentive compatible auction that maximizes expected revenue is a central problem in Auction Design. While theoretical approaches to the problem have hit some limits, a recent research direction initiated by Duetting et al. (2019) consists in building neural network architectures to find optimal auctions. We propose two conceptual deviations from their approach which result in enhanced performance. First, we use recent results in theoretical auction design to introduce a time-independent Lagrangian. This not only circumvents the need for an expensive hyper-parameter search (as in prior work), but also provides a single metric to compare the performance of two auctions (absent from prior work). Second, the optimization procedure in previous work uses an inner maximization loop to compute optimal misreports. We amortize this process through the introduction of an additional neural network. We demonstrate the effectiveness of our approach by learning competitive or strictly improved auctions compared to prior work. Both results together further imply a novel formulation of Auction Design as a two-player game with stationary utility functions.
+
+# 1 INTRODUCTION
+
+Efficiently designing truthful auctions is a core problem in Mathematical Economics. Concrete examples include the sponsored search auctions conducted by companies such as Google or auctions run on platforms such as eBay. Following seminal work of Vickrey (Vickrey, 1961) and Myerson (Myerson, 1981), auctions are typically studied in the independent private valuations model: each bidder has a valuation function over items, and their payoff depends only on the items they receive. Moreover, the auctioneer knows aggregate information about the population that each bidder comes from, modeled as a distribution over valuation functions, but does not know precisely each bidder's valuation (outside of any information in this Bayesian prior). A major difficulty in designing auctions is that valuations are private and bidders need to be incentivized to report their valuations truthfully. The goal of the auctioneer is to design an incentive compatible auction which maximizes expected revenue.
+
+Auction Design has existed as a rigorous mathematical field for several decades and yet, complete characterizations of the optimal auction only exist for a few settings. While Myerson's Nobel prize-winning work provides a clean characterization of the single-item optimum (Myerson, 1981), optimal multi-item auctions provably suffer from numerous formal measures of intractability (including computational intractability, high description complexity, non-monotonicity, and others) (Daskalakis et al., 2014; Chen et al., 2014; 2015; 2018; Hart & Reny, 2015; Thanassoulis, 2004).
+
+An orthogonal line of work instead develops deep learning architectures to find the optimal auction. Duetting et al. (2019) initiated this direction by proposing RegretNet, a feed-forward architecture. They frame the auction design problem as a constrained learning problem and lift the constraints into the objective via the augmented Lagrangian method. Training RegretNet involves optimizing this Lagrangian-penalized objective, while simultaneously updating network parameters and the Lagrangian multipliers themselves. This architecture produces impressive results: recovering near-optimal auctions in several known multi-item settings, and discovering new mechanisms when a theoretical optimum is unknown.
+
+Yet, this approach presents several limitations. On the conceptual front, our main insight is a connection to an exciting line of recent works (Hartline & Lucier, 2010; Hartline et al., 2011; Bei & Huang, 2011; Daskalakis & Weinberg, 2012; Rubinstein & Weinberg, 2018; Dughmi et al., 2017; Cai et al., 2019) on $\varepsilon$-truthful-to-truthful reductions. On the technical front, we identify three areas for improvement. First, their architecture is difficult to train in practice as the objective is nonstationary. Specifically, the Lagrangian multipliers are time-dependent and they increase following a pre-defined schedule, which requires careful hyperparameter tuning (see §3.1 for experiments illustrating this). Leveraging the aforementioned works in Auction Theory, we propose a stationary Lagrangian objective. Second, all prior work inevitably finds auctions which are not precisely incentive compatible, and does not provide a metric to compare, say, an auction with revenue 1.01 which is 0.002-truthful, or one with revenue 1 which is 0.001-truthful. We argue that our stationary Lagrangian objective serves as a good metric (and that the second auction of our short example is "better" for our metric). Finally, their training procedure requires an inner-loop optimization (essentially, this inner loop is the bidders trying to maximize utility in the current auction), which is itself computationally expensive. We use amortized optimization to make this process more efficient.
+
+# CONTRIBUTIONS
+
+This paper leverages recent work in Auction Theory to formulate the learning of revenue-optimal auctions as a two-player game. We develop a new algorithm ALGnet (Auction Learning Game network) that produces competitive or better results compared to Duetting et al. (2019)'s RegretNet. In addition to the conceptual contributions, our approach yields the following improvements (as RegretNet is already learning near-optimal auctions, our improvement over RegretNet is not due to significantly higher optimal revenues).
+
+- Easier hyper-parameter tuning: By constructing a time-independent loss function, we circumvent the need to search for an adequate parameter scheduling. Our formulation also involves less hyperparameters, which makes it more robust.
+- A metric to compare auctions: We propose a metric to compare the quality of two auctions which are not incentive compatible.
+- More efficient training: We replace the inner-loop optimization of prior work with a neural network, which makes training more efficient.
+- Online auctions: Since the learning formulation is time-invariant, ALGnet is able to quickly adapt in auctions where the bidders' valuation distributions vary over time. Such a setting appears, for instance, in the online posted pricing problem studied in Bubeck et al. (2017).
+
+Furthermore, these technical contributions together now imply a novel formulation of auction learning as a two-player game (not zero-sum) between an auctioneer and a misreporter. The auctioneer is trying to design an incentive compatible auction that maximizes revenue while the misreporter is trying to identify breaches in the truthfulness of these auctions. The paper decomposes as follows. Section 2 introduces the standard notions of auction design. Section 3 presents our game formulation for auction learning. Section 4 provides a description of ALGnet and its training procedure. Finally, Section 5 presents numerical evidence for the effectiveness of our approach.
+
+# RELATED WORK
+
+Auction design and machine learning. Machine learning and computational learning theory have been used in several ways to design auctions from samples of bidder valuations. Machine learning has been used to analyze the sample complexity of designing optimal revenue-maximizing auctions. This includes the framework of single-parameter settings (Morgenstern & Roughgarden, 2015; Huang et al., 2018; Hartline & Taggart, 2019; Roughgarden & Schrijvers, 2016; Gonczarowski & Nisan, 2017; Guo et al., 2019), multi-item auctions (Dughmi et al., 2014; Gonczarowski & Weinberg, 2018), combinatorial auctions (Balcan et al., 2016; Morgenstern & Roughgarden, 2016; Syrgkanis, 2017) and allocation mechanisms (Narasimhan & Parkes, 2016). Other works have leveraged machine learning to optimize different aspects of mechanisms (Lahaie, 2011; Dütting et al., 2015). Our approach is different as we build a deep learning architecture for auction design.
+
+Auction design and deep learning. While Duetting et al. (2019) is the first paper to design auctions through deep learning, several other papers have followed up on this work. Feng et al. (2018) extended it to budget-constrained bidders, and Golowich et al. (2018) to the facility location problem. Tacchetti et al. (2019) built architectures based on the Vickrey-Clarke-Groves mechanism. Rahme et al. (2021) used permutation-equivariant networks to design symmetric auctions. Shen et al. (2019) and Duetting et al. (2019) proposed architectures that exactly satisfy incentive compatibility but are specific to single-bidder settings. While all the previously mentioned papers consider a non-stationary objective function, we formulate a time-invariant objective that is easier to train and that makes comparisons between mechanisms possible.
+
+# 2 AUCTION DESIGN AS A TIME-VARYING LEARNING PROBLEM
+
+We first review the framework of auction design and the problem of finding truthful mechanisms. We then recall the learning problem proposed by Duetting et al. (2019) to find optimal auctions.
+
+# 2.1 AUCTION DESIGN AND LINEAR PROGRAM
+
+Auction design. We consider an auction with $n$ bidders and $m$ items. We will denote by $N = \{1, \dots, n\}$ and $M = \{1, \dots, m\}$ the sets of bidders and items. Each bidder $i$ values item $j$ at a valuation denoted $v_{ij}$. We will focus on additive auctions. These are auctions where the value of a set $S$ of items is the sum of the values of its elements: $\sum_{j \in S} v_{ij}$. Additive auctions are perhaps the most well-studied setting in multi-item auction design (Hart & Nisan, 2012; Li & Yao, 2013; Daskalakis et al., 2014; Cai et al., 2016; Daskalakis et al., 2017).
+
+The auctioneer does not know the exact valuation profile $V = (v_{ij})_{i \in N, j \in M}$ of the bidders in advance, but he does know the distribution from which they are drawn: the valuation vector of bidder $i$, $\vec{v}_i = (v_{i1}, \ldots, v_{im})$, is drawn from a distribution $D_i$ over $\mathbb{R}^m$. We will further assume that all bidders are independent and that $D_1 = \dots = D_n$. As a result, $V$ is drawn from $D := \otimes_{i=1}^{n} D_i = D_1^{\otimes n}$.
+
+Definition 1. An auction is defined by a randomized allocation rule $g = (g_{1},\ldots ,g_{n})$ and a payment rule $p = (p_{1},\dots ,p_{n})$ where $g_{i}\colon \mathbb{R}^{n\times m}\to [0,1]^{m}$ and $p_i\colon \mathbb{R}^{n\times m}\to \mathbb{R}_{\geqslant 0}$ . Additionally for all items $j$ and valuation profiles $V$ , the $g_{i}$ must satisfy $\sum_{i}[g_{i}(V)]_{j}\leqslant 1$ .
+
+Given a bid matrix $B = (b_{ij})_{i \in N, j \in M}$, $[g_i(B)]_j$ is the probability that bidder $i$ receives item $j$ and $p_i(B)$ is the price bidder $i$ has to pay to the auctioneer. The condition $\sum_i [g_i(V)]_j \leqslant 1$ allows an item to remain unallocated.
+
+Definition 2. The utility of bidder $i$ is defined by $u_{i}(\vec{v}_{i},B) = \sum_{j = 1}^{m}[g_{i}(B)]_{j}v_{ij} - p_{i}(B)$ .
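
Definition 2 can be made concrete with a small NumPy sketch. The allocation and payment rules below (each item goes to its highest bidder, who pays the second-highest bid on that item) are a hypothetical example mechanism used only for illustration, not the learned auction:

```python
import numpy as np

def alloc_highest(B):
    # Deterministic allocation: give each item to its highest bidder.
    A = np.zeros_like(B)
    A[np.argmax(B, axis=0), np.arange(B.shape[1])] = 1.0
    return A

def pay_second_price(B):
    # Each item's winner pays the second-highest bid on that item.
    pay = np.zeros(B.shape[0])
    for j in range(B.shape[1]):
        order = np.argsort(B[:, j])
        pay[order[-1]] += B[order[-2], j]
    return pay

def utility(i, v_i, B, g, p):
    # Definition 2: u_i = sum_j [g_i(B)]_j * v_ij - p_i(B).
    return float(g(B)[i] @ v_i - p(B)[i])
```

For instance, with two bidders and two items, a bidder who wins one item at a second price of 2 while valuing it at 3 obtains utility 1.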
+
+Bidders seek to maximize their utility and may report bids that are different from their true valuations. In the following, we will denote by $B_{-i}$ the $(n - 1) \times m$ bid matrix without bidder $i$ , and by $(\vec{b}_i', B_{-i})$ the $n \times m$ bid matrix that inserts $\vec{b}_i'$ into row $i$ of $B_{-i}$ (for example: $B := (\vec{b}_i, B_{-i})$ ). We aim at auctions that incentivize bidders to bid their true valuations.
+
+Definition 3. An auction $(g, p)$ is dominant strategy incentive compatible (DSIC) if each bidder's utility is maximized by reporting truthfully no matter what the other bidders report. For every bidder $i$ , valuation $\vec{v}_i \in D_i$ , bid $\vec{b}_i'\in D_i$ and bids $B_{-i} \in D_{-i}$ , $u_i(\vec{v}_i, (\vec{v}_i, B_{-i})) \geqslant u_i(\vec{v}_i, (\vec{b}_i', B_{-i}))$ .
+
+Definition 4. An auction is individually rational (IR) if for all $i \in N$ , $\vec{v}_i \in D_i$ and $B_{-i} \in D_{-i}$ ,
+
+$$
+u _ {i} \left(\vec {v} _ {i}, \left(\vec {v} _ {i}, B _ {- i}\right)\right) \geqslant 0. \tag {IR}
+$$
+
+In a DSIC auction, the bidders have the incentive to truthfully report their valuations and therefore, the revenue on valuation profile $V$ is $\sum_{i=1}^{n} p_i(V)$ . Optimal auction design aims at finding a DSIC and IR auction that maximizes the expected revenue $\text{rev} \coloneqq \mathbb{E}_{V \sim D}[\sum_{i=1}^{n} p_i(V)]$ . Since there is no known characterization of DSIC mechanisms in the multi-item setting, we resort to the relaxed notion of ex-post regret. It measures the extent to which an auction violates DSIC.
+
+Definition 5. The ex-post regret for a bidder $i$ is the maximum increase in his utility when considering all his possible bids and fixing the bids of others. For a valuation profile $V$ , it is given by $r_i(V) = \max_{\vec{b}_i' \in \mathbb{R}^m} u_i(\vec{v}_i, (\vec{b}_i', V_{-i})) - u_i(\vec{v}_i, (\vec{v}_i, V_{-i}))$ . In particular, DSIC is equivalent to
+
+$$
r_{i}(V) = 0, \quad \forall i \in N, \ \forall V \in D. \tag{IC}
+$$
+
+The bid $\vec{b}_i^\prime$ that achieves $r_i(V)$ is called the optimal misreport of bidder $i$ for valuation profile $V$ . Therefore, finding an optimal auction is equivalent to the following linear program:
+
+$$
\min_{(g, p) \in \mathcal{M}} \ -\mathbb{E}_{V \sim D}\left[\sum_{i=1}^{n} p_{i}(V)\right] \quad \text{s.t.} \quad r_{i}(V) = 0, \quad \forall i \in N, \ \forall V \in D. \tag{LP}
+$$
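To make Definition 5 concrete, the sketch below approximates the ex-post regret of a deliberately non-DSIC toy mechanism by brute force over a grid of misreports: a single bidder, single item, first-price rule with a reserve of 0.5 (the bidder wins iff her bid clears the reserve and pays her bid). The mechanism and the reserve value are illustrative assumptions, not part of the paper's setting.

```python
def utility(v, b, reserve=0.5):
    """Utility at true value v and reported bid b: win iff b >= reserve, pay b."""
    return (v - b) if b >= reserve else 0.0

def regret(v, grid_size=10001):
    """r(v) = max_b u(v, b) - u(v, v), with the max approximated on a grid."""
    best = max(utility(v, k / (grid_size - 1)) for k in range(grid_size))
    return best - utility(v, v)

# Truthful bidding pays the full value, so the bidder regrets not shading
# down to the reserve: regret(v) = v - 0.5 whenever v >= 0.5, and 0 otherwise.
```

A DSIC rule such as a posted price at 0.5 (pay the reserve, not the bid) would make `regret` vanish for every valuation, which is exactly the (IC) condition above.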
+
+# 2.2 AUCTION DESIGN AS A LEARNING PROBLEM
+
As the space of auctions $\mathcal{M}$ may be large, we restrict attention to a parametric model. In what follows, we consider the class of auctions $(g^w,p^w)$ encoded by a neural network with parameters $w\in \mathbb{R}^d$ . The corresponding utility and regret functions will be denoted by $u_{i}^{w}$ and $r_i^w$ .
+
+Following Duetting et al. (2019), the formulation (LP) is relaxed: the IC constraint for all $V \in D$ is replaced by the expected constraint $\mathbb{E}_{V \sim D} [r_i^w(V)] = 0$ for all $i \in N$ . The justification for this relaxation can be found in Duetting et al. (2019). By replacing expectations with empirical averages, the learning problem becomes:
+
+$$
\min_{w \in \mathbb{R}^{d}} \ -\frac{1}{L} \sum_{\ell=1}^{L} \sum_{i=1}^{n} p_{i}^{w}(V^{(\ell)}) \quad \text{s.t.} \quad \widehat{r}_{i}^{\,w} := \frac{1}{L} \sum_{\ell=1}^{L} r_{i}^{w}(V^{(\ell)}) = 0, \quad \forall i \in N. \tag{$\widehat{\mathrm{LP}}$}
+$$
+
The learning problem $(\widehat{\mathrm{LP}})$ does not enforce (IR). However, this constraint is usually built into the parametrization (architecture) of the model: by design, the only auction mechanisms considered satisfy (IR). Implementation details can be found in Duetting et al. (2019); Rahme et al. (2021) or in Sec 4.
+
+# 3 AUCTION LEARNING AS A TWO-PLAYER GAME
+
We first present the optimization and training procedures for (LP) proposed by Duetting et al. (2019). We then demonstrate with numerical evidence that this approach has two limitations: hyperparameter sensitivity and lack of interpretability. Using the concept of $\varepsilon$ -truthful to truthful reductions, we construct a new loss function that circumvents both issues. Lastly, we resort to amortized optimization and reframe the auction learning problem as a two-player game.
+
+# 3.1 THE AUGMENTED LAGRANGIAN METHOD AND ITS SHORTCOMINGS
+
+Optimization and training. We briefly review the training procedure proposed by Duetting et al. (2019) to learn optimal auctions. The authors apply the augmented Lagrangian method to solve the constrained problem $(\widehat{\mathrm{LP}})$ and consider the loss:
+
+$$
\mathcal{L}(w; \lambda; \rho) = -\frac{1}{L} \sum_{\ell=1}^{L} \sum_{i \in N} p_{i}^{w}(V^{(\ell)}) + \sum_{i \in N} \lambda_{i}\, \widehat{r}_{i}^{\,w} + \frac{\rho}{2} \left(\sum_{i \in N} \widehat{r}_{i}^{\,w}\right)^{2},
+$$
+
+where $\lambda \in \mathbb{R}^n$ is a vector of Lagrange multipliers and $\rho > 0$ is a parameter controlling the weight of the quadratic penalty. More details about the training procedure can be found in Appendix A.
+
Scheduling consistency problem. The parameters $\lambda$ and $\rho$ are time-varying: their values change according to a pre-defined schedule of the following form: 1) initialize $\lambda$ and $\rho$ with $\lambda^0$ and $\rho^0$ respectively; 2) update $\rho$ every $T_{\rho}$ iterations: $\rho^{t + 1} \gets \rho^t + c$ , where $c$ is a pre-defined constant; 3) update $\lambda$ every $T_{\lambda}$ iterations according to $\lambda_i^{t+1} \gets \lambda_i^t + \rho^t\, \widehat{r}_i^{\,w^t}$ .
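A minimal sketch of this schedule, with the per-step gradient update elided and the empirical regrets $\widehat{r}_i^{\,w^t}$ supplied by a caller-provided stub (both are placeholders for the actual training computation):

```python
def run_schedule(lam0, rho0, c, T_lam, T_rho, n_bidders, n_iters, regret_fn):
    """Replay the (lambda, rho) schedule: initialize, grow rho, update lambda."""
    lam = [lam0] * n_bidders            # step 1: initialize lambda and rho
    rho = rho0
    for t in range(1, n_iters + 1):
        # ... one gradient step on L(w; lambda; rho) would happen here ...
        if t % T_rho == 0:              # step 2: increase the quadratic penalty
            rho += c
        if t % T_lam == 0:              # step 3: lambda_i += rho * r_hat_i
            lam = [l + rho * r for l, r in zip(lam, regret_fn(t))]
    return lam, rho
```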
+
This schedule therefore requires setting five hyperparameters $(\lambda^0,\rho^0,c,T_\lambda ,T_\rho)$ . Some of the experiments in Duetting et al. (2019) learn an optimal mechanism for an $n$ -bidder, $m$ -item auction $(n\times m)$ where the valuations are i.i.d. $\mathcal{U}[0,1]$ . Different scheduling parameters were used for different values of $n$ and $m$ ; we report the values used for the $1\times 2$ , $3\times 10$ and $5\times 10$ settings in Table 1(a). A natural question is whether the choice of parameters heavily affects performance. We investigate this question numerically by trying different schedulings (columns) on different settings (rows) and report the results in Table 1(b).
+
Table 1: (a): Scheduling parameter values used in Duetting et al. (2019) to reach optimal auctions in $n \times m$ settings with $n$ bidders, $m$ objects and i.i.d. valuations sampled from $\mathcal{U}[0,1]$ . (b): Revenue $rev := \mathbb{E}_{V \sim D}[\sum_{i=1}^{n} p_i(V)]$ and average regret per bidder $rgt := \frac{1}{n} \mathbb{E}_{V \sim D}[\sum_{i=1}^{n} r_i(V)]$ for $n \times m$ settings under the parameter values reported in (a).
+
| | 1 × 2 | 3 × 10 | 5 × 10 |
| --- | --- | --- | --- |
| $\lambda^0$ | 5 | 5 | 1 |
| $\rho^0$ | 1 | 1 | 0.25 |
| $c$ | 50 | 1 | 0.25 |
| $T_\lambda$ | $10^2$ | $10^2$ | $10^2$ |
| $T_\rho$ | $10^4$ | $10^4$ | $10^5$ |

(a)

| Setting | Scheduling 1 × 2 (rev, rgt) | Scheduling 3 × 10 (rev, rgt) | Scheduling 5 × 10 (rev, rgt) |
| --- | --- | --- | --- |
| 1 × 2 | 0.552, 0.0001 | 0.573, 0.0012 | 0.332, 0.0179 |
| | 5.880, 0.0047 | 6.749 | |

(b)
+
The auction returned by the network varies dramatically with the choice of scheduling parameters. When applying the parameters of $1 \times 2$ to $5 \times 10$ , we obtain a revenue that is lower by $30\%$ . The performance of the learning algorithm thus strongly depends on the specific values of the hyperparameters, and finding an adequate schedule requires an extensive and time-consuming hyperparameter search.
+
Lack of interpretability. How should one compare two mechanisms with different expected revenue and regret? Is a mechanism $M_1$ with revenue $P_1 = 1.01$ and an average total regret $R_1 = 0.02$ better than a mechanism $M_2$ with $P_2 = 1.0$ and $R_2 = 0.01$ ? The approach in Duetting et al. (2019) cannot answer this question. To see this, notice that when $\lambda_1 = \dots = \lambda_n = \lambda$ we can rewrite $\mathcal{L}(w; \lambda; \rho) = -P + \lambda R + \frac{\rho}{2} R^2$ . Which mechanism is better depends on the values of $\lambda$ and $\rho$ : for example, if $\rho = 1$ and $\lambda = 0.1$ we find that $M_1$ is better, but if $\rho = 1$ and $\lambda = 10$ then $M_2$ is better. Since the values of $\lambda$ and $\rho$ change over time, the Lagrangian approach in Duetting et al. (2019) cannot provide a consistent metric to compare two mechanisms.
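This ranking flip is easy to reproduce numerically with the rewritten loss $\mathcal{L} = -P + \lambda R + \frac{\rho}{2}R^2$ (a lower value means a "better" mechanism):

```python
def lagrangian(P, R, lam, rho):
    """Augmented-Lagrangian value -P + lam*R + (rho/2)*R^2 (lower is better)."""
    return -P + lam * R + 0.5 * rho * R ** 2

# M1: revenue 1.01, regret 0.02;  M2: revenue 1.00, regret 0.01.
m1_low, m2_low = lagrangian(1.01, 0.02, 0.1, 1.0), lagrangian(1.00, 0.01, 0.1, 1.0)
m1_high, m2_high = lagrangian(1.01, 0.02, 10.0, 1.0), lagrangian(1.00, 0.01, 10.0, 1.0)

# With lam = 0.1 the loss prefers M1; with lam = 10 it prefers M2.
```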
+
+# 3.2 A TIME-INDEPENDENT AND INTERPRETABLE LOSS FUNCTION FOR AUCTION LEARNING
+
Our first contribution is a new loss function for auction learning that addresses the two limitations of Duetting et al. (2019) identified in Section 3.1. We first motivate this loss in the one-bidder case and then extend it to auctions with many bidders.
+
+# 3.2.1 MECHANISMS WITH ONE BIDDER
+
Proposition 1. [Balcan et al. (2005), attributed to Nisan] Let $\mathcal{M}$ be an additive auction with 1 bidder and $m$ items. Let $P$ and $R$ denote the expected revenue and regret, $P = \mathbb{E}_{V\sim D}[p(V)]$ and $R = \mathbb{E}_{V\sim D}[r(V)]$ . There exists a mechanism $\mathcal{M}^*$ with expected revenue $P^{*} = (\sqrt{P} -\sqrt{R})^{2}$ and zero regret $R^{*} = 0$ .
+
A proof of this proposition can be found in Appendix C. Comparing two mechanisms is straightforward when both of them have zero regret: the better one achieves the higher revenue. Prop. 1 allows a natural and simple extension of this criterion to nonzero-regret mechanisms with one bidder: we say that $M_1$ is better than $M_2$ if and only if $M_1^*$ is better than $M_2^*$ :
+
+$$
M_{1} \geqslant M_{2} \iff P^{*}(M_{1}) \geqslant P^{*}(M_{2}) \iff \sqrt{P_{1}} - \sqrt{R_{1}} \geqslant \sqrt{P_{2}} - \sqrt{R_{2}}.
+$$
+
+Using our metric, we find that a one bidder mechanism with revenue of 1.00 and regret of 0.01 is "better" than one with revenue 1.01 and regret 0.02.
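The comparison rule above is a two-liner; the numbers below reproduce the example from the text:

```python
import math

def score(P, R):
    """sqrt(P) - sqrt(R): mechanisms are ranked by this quantity."""
    return math.sqrt(P) - math.sqrt(R)

def p_star(P, R):
    """Zero-regret revenue (sqrt(P) - sqrt(R))^2 guaranteed by Prop. 1."""
    return score(P, R) ** 2

# Revenue 1.00 with regret 0.01 beats revenue 1.01 with regret 0.02,
# and the corresponding zero-regret mechanism extracts (1 - 0.1)^2 = 0.81.
```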
+
+# 3.2.2 MECHANISMS WITH MULTIPLE BIDDERS
+
Let $M_1$ and $M_2$ be two mechanisms with $n$ bidders and $m$ objects, and let $P_i$ and $R_i$ denote their total expected revenue and regret, $P_i = \mathbb{E}_{V \sim D} \left[ \sum_{j=1}^{n} p_j(V) \right]$ and $R_i = \mathbb{E}_{V \sim D} \left[ \sum_{j=1}^{n} r_j(V) \right]$ . We extend the metric derived in Section 3.2.1 to the multi-bidder setting as follows:
+
+$$
M_{1} \text{ is ``better'' than } M_{2} \iff M_{1} \geqslant M_{2} \iff \sqrt{P_{1}} - \sqrt{R_{1}} \geqslant \sqrt{P_{2}} - \sqrt{R_{2}}.
+$$
+
When $n = 1$ we recover the criterion from Section 3.2.1 that is backed by Prop. 1. When $n > 1$ , whether the extension of Prop. 1 still holds is considered a major open problem. Note that a multi-bidder variant of Prop. 1 does hold under a different solution concept termed "Bayesian Incentive Compatible" (Rubinstein & Weinberg, 2018; Cai et al., 2019), supporting the conjecture that Prop. 1 indeed extends. Independently of whether or not Prop. 1 holds, this reasoning yields a candidate loss function for the multi-bidder setting which we can evaluate empirically.
+
This way of comparing mechanisms motivates the use of the loss function $\mathcal{L}(P,R) = -(\sqrt{P} -\sqrt{R})$ instead of the Lagrangian from Section 3.1, and indeed this loss function works well in practice. We empirically find that the loss function $\mathcal{L}_m(P,R) = -(\sqrt{P} -\sqrt{R}) + R$ further accelerates training, as it (slightly) biases the search towards mechanisms with low regret. Both loss functions are time-independent and hyperparameter-free.
+
+# 3.3 AMORTIZED MISREPORT OPTIMIZATION
+
To compute the regret $r_i^w (V)$ one has to solve the optimization problem $\max_{\vec{v}_i' \in \mathbb{R}^m} u_i^w (\vec{v}_i, (\vec{v}_i', V_{-i})) - u_i^w (\vec{v}_i, (\vec{v}_i, V_{-i}))$ . In Duetting et al. (2019), this problem is solved with an inner optimization loop for each valuation profile: the regret of each profile is computed separately and independently, from scratch. Yet if two valuation profiles are close to each other, one should expect the resulting optimization problems to have close solutions. We leverage this observation to improve training efficiency.
+
We propose to amortize this inner loop optimization. Instead of solving all these optimization problems independently, we learn one neural network $M^{\varphi}$ that tries to predict the solution of all of them. $M^{\varphi}$ takes a valuation profile as input and maps it to the optimal misreports:
+
+$$
M^{\varphi}\colon \left\{ \begin{array}{ll} \mathbb{R}^{n \times m} & \to \mathbb{R}^{n \times m} \\ V = [\vec{v}_i]_{i \in N} & \mapsto \big[ \operatorname{argmax}_{\vec{v}' \in D} u_i(\vec{v}_i, (\vec{v}', V_{-i})) \big]_{i \in N} \end{array} \right.
+$$
+
The loss $\mathcal{L}_r$ that $M^{\varphi}$ is trying to minimize follows naturally from this definition: $\mathcal{L}_r(\varphi ,w) = -\mathbb{E}_{V\sim D}\left[\sum_{i = 1}^{n}u_i^w (\vec{v}_i,([M^\varphi (V)]_i,V_{-i}))\right]$ .
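The following toy sketch illustrates the amortization idea on a stand-in utility $u(v,b) = -(b - v/2)^2$ , whose optimal misreport is $b^* = v/2$ ; this utility and the one-parameter predictor $M_\varphi(v) = \varphi v$ are illustrative assumptions, not the paper's networks. Rather than running a separate inner optimization per profile, a single shared parameter $\varphi$ is trained on $\mathcal{L}_r$ over a batch of profiles:

```python
def u(v, b):
    """Stand-in smooth utility whose best misreport is b* = v / 2."""
    return -(b - v / 2) ** 2

profiles = [0.1 * k for k in range(1, 11)]   # batch of valuation profiles

# Minimize L_r(phi) = -mean_v u(v, phi * v) = mean_v (phi*v - v/2)^2 over phi.
phi, lr = 0.0, 0.1
for _ in range(500):
    grad = sum(2 * (phi * v - v / 2) * v for v in profiles) / len(profiles)
    phi -= lr * grad

# phi converges to 0.5, i.e. one forward pass now yields the optimal
# misreport for every profile, including profiles never optimized directly.
```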
+
+# 3.4 AUCTION LEARNING AS A TWO-PLAYER GAME
+
+In this section, we combine the ideas from Sections 3.2 and 3.3 to obtain a new formulation for the auction learning problem as a two-player game between an Auctioneer with parameter $w$ and a Misreporter with parameter $\varphi$ . The optimal parameters for the auction learning problem $(w^{*},\varphi^{*})$ are a Nash Equilibrium for this game.
+
The Auctioneer is trying to design a truthful (IC) and rational (IR) auction that maximizes revenue. The Misreporter is trying to maximize the bidders' utility for the current auction selected by the Auctioneer, $w$ . This is achieved by minimizing the loss function $\mathcal{L}_r(\varphi, w)$ with respect to $\varphi$ (as discussed in Sec 3.3). The Auctioneer in turn maximizes expected revenue for the current misreports chosen by the Misreporter. This is achieved by minimizing $\mathcal{L}_m(w, \varphi) = -\left( \sqrt{P^w} - \sqrt{R^{w,\varphi}} \right) + R^{w,\varphi}$ with respect to $w$ (as discussed in Sec 3.2). Here, $R^{w,\varphi}$ is an estimate of the total regret that the Auctioneer computes for the current Misreporter $\varphi$ : $R^{w,\varphi} = \frac{1}{L} \sum_{\ell=1}^{L} \sum_{i \in N} \left( u_i^w(\vec{v}_i, ([M^\varphi(V^{(\ell)})]_i, V_{-i}^{(\ell)})) - u_i^w(\vec{v}_i, (\vec{v}_i, V_{-i}^{(\ell)})) \right)$ . This game formulation can be summarized as follows:
+
$$
\left\{ \begin{array}{ll} \text{Auctioneer:} & \min_{w} \ \mathcal{L}_m(w, \varphi) \\ \text{Misreporter:} & \min_{\varphi} \ \mathcal{L}_r(\varphi, w) \end{array} \right. \tag{G}
$$
+
Remark 1. The game formulation (G) is reminiscent of Generative Adversarial Networks (Goodfellow et al., 2014). Contrary to GANs, however, it is not a zero-sum game.
+
+# 4 ARCHITECTURE AND TRAINING PROCEDURE
+
We describe ALGnet, a feed-forward architecture that solves the game formulation (G), and then provide a training procedure. ALGnet consists of two modules: the Auctioneer's module and the Misreporter's module. Both take as input a bid matrix $B = (b_{i,j}) \in \mathbb{R}^{n \times m}$ and are trained jointly. Their outputs are used to compute the regret and revenue of the auction.
+
+Notation. We use $\mathrm{MLP}(d_{\mathrm{in}},n_l,h,d_{\mathrm{out}})$ to refer to a fully-connected neural network with input dimension $d_{\mathrm{in}}$ , output dimension $d_{\mathrm{out}}$ and $n_l$ hidden layers of width $h$ and tanh activation function. sig denotes the sigmoid activation function. Given a matrix $B = [\vec{b}_1,\dots ,\vec{b}_n]^\top \in \mathbb{R}^{n\times m}$ , we define for a fixed $i\in N$ , the matrix $B_{(i)}\coloneqq [\vec{b}_i,\vec{b}_1,\dots ,\vec{b}_{i - 1},\vec{b}_{i + 1},\dots ,\vec{b}_n]$ .
+
# 4.1 THE AUCTIONEER'S MODULE
+
+It is composed of an allocation network that encodes a randomized allocation $g^w \colon \mathbb{R}^{nm} \to [0,1]^{nm}$ and a payment network that encodes a payment rule $p^w \colon \mathbb{R}^{nm} \to \mathbb{R}^n$ .
+
Allocation network. It computes the probability $[g^{w}(B)]_{ij}$ that item $j$ is allocated to bidder $i$ as $[g^{w}(B)]_{ij} = [f_{1}(B)]_{j} \cdot [f_{2}(B)]_{ij}$ , where $f_{1} \colon \mathbb{R}^{n \times m} \to [0,1]^{m}$ and $f_{2} \colon \mathbb{R}^{n \times m} \to [0,1]^{n \times m}$ are functions computed by two feed-forward neural networks.
+
- $[f_1(B)]_j$ is the probability that object $j \in M$ is allocated; it is given by $[f_1(B)]_j = [\mathrm{sig}(\mathrm{MLP}(nm, n_a, h_a, m)(B))]_j$ .
- $[f_2(B)]_{ij}$ is the probability that item $j \in M$ is allocated to bidder $i \in N$ conditioned on object $j$ being allocated. A first MLP computes $l_i := \mathrm{MLP}(nm, n_a, h_a, m)(B_{(i)})$ for all $i \in N$ . The network then stacks these vectors $l_i$ into a matrix $L \in \mathbb{R}^{n \times m}$ . A column-wise softmax activation is finally applied to $L$ to ensure feasibility, i.e. for all $j \in M$ , $\sum_{i \in N} [f_2(B)]_{ij} = 1$ .
+
Payment network. It computes the payment $[p^w (B)]_i$ for bidder $i$ as $[p^w (B)]_i = \tilde{p}_i\sum_{j = 1}^m B_{ij}[g^w (B)]_{ij}$ , where $\tilde{p}\colon \mathbb{R}^{n\times m}\to [0,1]^n$ and $\tilde{p}_i$ is the fraction of bidder $i$ 's reported utility that she has to pay to the mechanism. We compute $\tilde{p}_i = \mathrm{sig}(\mathrm{MLP}(nm,n_p,h_p,1)(B_{(i)}))$ . Finally, notice that by construction $[p^w (B)]_i\leqslant \sum_{j = 1}^m B_{ij}[g^w (B)]_{ij}$ , which ensures that (IR) is respected.
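The two structural guarantees of this module, feasibility of the allocation and (IR) by construction, can be checked mechanically. The sketch below replaces the MLPs with random scores (an assumption purely for illustration); only the column-wise softmax and the $[0,1]$-valued payment fraction matter:

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
n, m_items = 3, 2
B = [[random.random() for _ in range(m_items)] for _ in range(n)]

# Stand-ins for the MLP outputs:
f1 = [sigmoid(random.gauss(0, 1)) for _ in range(m_items)]        # P(item j sold)
scores = [[random.gauss(0, 1) for _ in range(m_items)] for _ in range(n)]

# Softmax over bidders for each item, then scale by f1: feasible by design.
cols = [softmax([scores[i][j] for i in range(n)]) for j in range(m_items)]
g = [[f1[j] * cols[j][i] for j in range(m_items)] for i in range(n)]

# Payment: a sigmoid fraction of the allocated reported value, hence below it.
p_tilde = [sigmoid(random.gauss(0, 1)) for _ in range(n)]
payments = [p_tilde[i] * sum(B[i][j] * g[i][j] for j in range(m_items))
            for i in range(n)]
```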
+
+# 4.2 THE MISREPORTER'S MODULE
+
The module consists of an $\mathrm{MLP}(nm, n_M, h_M, m)$ followed by a projection layer $\mathrm{Proj}$ that ensures that the output of the network lies in the domain $D$ of the valuations. For example, when the valuations are restricted to $[0,1]$ we can take $\mathrm{Proj} = \mathrm{sig}$ ; when they are non-negative, we can take $\mathrm{Proj} = \mathrm{SoftPlus}$ . The misreport for bidder $i$ is then given by $\mathrm{Proj} \circ \mathrm{MLP}(nm, n_M, h_M, m)(B_{(i)}) \in \mathbb{R}^m$ . Stacking these vectors gives the misreport matrix $M^\varphi(B)$ .
+
+# 4.3 TRAINING PROCEDURE AND OPTIMIZATION
+
+We optimize the game (G) over the space of neural networks parameters $(w, \varphi)$ . The algorithm is easy to implement (Alg. 1).
+
At each step $t$ , we sample a batch of valuation profiles of size $B$ . The algorithm performs $\tau$ updates of the Misreporter's network (line 9) for every update of the Auctioneer's network (line 10). Moreover, we reinitialize the Misreporter's network every $T_{init}$ steps in the early phase of training ( $t \leqslant T_{limit}$ ). This step is not necessary, but we found empirically that it speeds up training.
+
# Algorithm 1 ALGnet training

1: Input: number of agents $n$ , number of objects $m$ .
2: Parameters: $\gamma > 0$ ; $B, T, T_{init}, T_{limit}, \tau \in \mathbb{N}$ .
3: Initialize the Misreporter's and the Auctioneer's nets $\varphi^1$ and $w^1$ .
4: for $t = 1, \dots, T$ do
5: if $t \equiv 0 \mod T_{init}$ and $t \leqslant T_{limit}$ then
6: Reinitialize the Misreporter's network.
7: Sample a valuation batch $S$ of size $B$ .
8: for $s = 1,\dots ,\tau$ do
9: $\varphi^{s + 1}\gets \varphi^s -\gamma \nabla_\varphi \mathcal{L}_r(\varphi^s,w^t)(S)$
10: $w^{t + 1}\gets w^t -\gamma \nabla_w\mathcal{L}_m(w^t,\varphi^{\tau+1})(S)$
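The control flow of Algorithm 1 can be exercised end to end on toy quadratic stand-ins for $\mathcal{L}_m$ and $\mathcal{L}_r$ ; the losses, step sizes and horizons below are illustrative assumptions, and only the alternation, the $\tau$ inner steps, and the early reinitialization mirror the algorithm:

```python
def train(T=200, tau=5, T_init=50, T_limit=100, gamma=0.1):
    """Alternating descent on surrogate losses
    L_r(phi, w) = (phi - w)**2 and L_m(w, phi) = w**2 + 0.1*(w - phi)**2."""
    w, phi = 5.0, 5.0
    for t in range(1, T + 1):
        if t % T_init == 0 and t <= T_limit:
            phi = 5.0                      # reinitialize the Misreporter early on
        for _ in range(tau):               # tau Misreporter updates (line 9)
            phi -= gamma * 2 * (phi - w)
        # one Auctioneer update (line 10)
        w -= gamma * (2 * w + 0.2 * (w - phi))
    return w, phi

w, phi = train()
# both parameters settle near the equilibrium w = phi = 0
```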
+
+# 5 EXPERIMENTAL RESULTS
+
We show that ALGnet can recover near-optimal auctions for settings where the optimal solution is known and that it can find new auctions for settings where analytical solutions are not known. Since RegretNet is already capable of discovering near-optimal auctions, one cannot expect ALGnet to achieve significantly higher revenue than RegretNet. Our results are competitive with or better than those obtained in Duetting et al. (2019) while requiring far fewer hyperparameters (Section 3). We also evaluate ALGnet in online auctions and compare it to RegretNet.
+
+For each experiment, we compute the total revenue $rev := \mathbb{E}_{V \sim D}[\sum_{i \in N} p_i^w(V)]$ and average regret $rgt := \frac{1}{n} \mathbb{E}_{V \sim D}[\sum_{i \in N} r_i^w(V)]$ on a test set of 10,000 valuation profiles. We run each experiment 5 times with different random seeds and report the average and standard deviation of these runs. In our comparisons we make sure that ALGnet and RegretNet have similar sizes for fairness (Appendix D).
+
+# 5.1 AUCTIONS WITH KNOWN AND UNKNOWN OPTIMA
+
Known settings. We show that ALGnet is capable of recovering near-optimal auctions in several well-studied settings that have an analytical solution. These are one-bidder, two-item auctions where the valuations $v_{1}$ and $v_{2}$ of the two items are independent. We consider the following settings. (A): $v_{1}$ and $v_{2}$ are i.i.d. from $\mathcal{U}[0,1]$ ; (B): $v_{1} \sim \mathcal{U}[4,16]$ and $v_{2} \sim \mathcal{U}[4,7]$ ; (C): $v_{1}$ has density $f_{1}(x) = 5 / (1 + x)^{6}$ and $v_{2}$ has density $f_{2}(y) = 6 / (1 + y)^{7}$ .
+
(A) is the celebrated Manelli-Vincent auction (Manelli & Vincent, 2006); (B) is a non-i.i.d. auction and (C) is a non-i.i.d. heavy-tailed auction, both studied in Daskalakis et al. (2017). We compare our results to the theoretical optimal auction (Table 2); Duetting et al. (2019) does not evaluate RegretNet on settings (B) and (C). During training, $rgt$ decreases to 0 while $rev$ and $P^*$ converge to the optimal revenue. For (A), we also plot $rev$ , $rgt$ and $P^*$ as functions of the number of epochs and compare to RegretNet (Fig. 1).
+
Contrary to ALGnet, RegretNet overestimates the revenue in the early stages of training at the expense of a higher regret. As a consequence, ALGnet learns the optimal auction faster than RegretNet while being schedule-free and requiring fewer hyperparameters.
+
+Table 2: Revenue & regret of ALGnet for settings (A)-(C).
+
| Setting | Optimal rev | Optimal rgt | ALGnet (Ours) rev | ALGnet (Ours) rgt ($\times 10^{-3}$) |
| --- | --- | --- | --- | --- |
| (A) | 0.550 | 0 | 0.555 (±0.0019) | 0.55 (±0.14) |
| (B) | 9.781 | 0 | 9.737 (±0.0443) | 0.75 (±0.17) |
| (C) | 0.1706 | 0 | 0.1712 (±0.0012) | 0.14 (±0.07) |
+
+
Figure 1: (a-b-c) compare the evolution of the revenue, the regret and $P^{*}$ as a function of the number of epochs for RegretNet and ALGnet in setting (A). (d-e-f) plot the revenue, the regret and $P^{*}$ as a function of time for ALGnet and (offline & online) RegretNet in an online auction (Section 5.2).
+
Unknown and large-scale auctions. We now consider settings where the optimal auction is unknown. We look at $n$ -bidder, $m$ -item additive settings where the valuations are sampled i.i.d. from $\mathcal{U}[0,1]$ , which we denote by $n \times m$ . In addition to "reasonable"-scale auctions ( $1 \times 10$ and $2 \times 2$ ), we investigate large-scale auctions ( $3 \times 10$ and $5 \times 10$ ) that are much more complex; only deep learning methods can solve them efficiently. Table 3 shows that ALGnet discovers auctions that yield comparable or better results than RegretNet.
+
+# 5.2 ONLINE AUCTIONS
+
ALGnet is an online algorithm with a time-independent loss function, so we would expect it to perform well in settings where the underlying distribution of the valuations changes over time. We consider a one-bidder, two-item additive auction with valuations $v_{1}$ and $v_{2}$ sampled i.i.d. from $\mathcal{U}[0,1 + t]$ , where $t$ is increased from 0 to 1 at a steady rate. The optimal auction at time $t$ has revenue
+
+Table 3: Comparison of RegretNet and ALGnet. The values reported for RegretNet are found in Duetting et al. (2019), the numerical values for rgt and standard deviations are not available.
+
| Setting | RegretNet rev | RegretNet rgt | ALGnet (Ours) rev | ALGnet (Ours) rgt |
| --- | --- | --- | --- | --- |
| 1 × 2 | 0.554 | $< 1.0 \cdot 10^{-3}$ | 0.555 (±0.0019) | $0.55 \cdot 10^{-3}$ (± $0.14 \cdot 10^{-3}$ ) |
| 1 × 10 | 3.461 | $< 3.0 \cdot 10^{-3}$ | 3.487 (±0.0135) | $1.65 \cdot 10^{-3}$ (± $0.57 \cdot 10^{-3}$ ) |
| 2 × 2 | 0.878 | $< 1.0 \cdot 10^{-3}$ | 0.879 (±0.0024) | $0.58 \cdot 10^{-3}$ (± $0.23 \cdot 10^{-3}$ ) |
| 3 × 10 | 5.541 | $< 2.0 \cdot 10^{-3}$ | 5.562 (±0.0308) | $1.93 \cdot 10^{-3}$ (± $0.33 \cdot 10^{-3}$ ) |
| 5 × 10 | 6.778 | $< 5.0 \cdot 10^{-3}$ | 6.781 (±0.0504) | $3.85 \cdot 10^{-3}$ (± $0.43 \cdot 10^{-3}$ ) |
+
$0.55 \times (1 + t)$ . We use ALGnet and two versions of RegretNet, the original offline version (Appendix A) and our own online version (Appendix B), and plot $rev(t)$ , $rgt(t)$ and $P^{*}(t)$ (Fig. 1). The offline version learns from a fixed dataset of valuations sampled at $t = 0$ (i.e. with $V \sim \mathcal{U}[0,1]^{nm}$ ) while the online versions (like ALGnet) learn from a stream of data at each time $t$ . Overall, ALGnet performs better than the other methods: it learns an optimal auction faster in the initial phase (especially compared to online RegretNet) and keeps adapting to the distributional shift (contrary to vanilla RegretNet).
+
+# 6 CONCLUSION
+
We identified two inefficiencies in previous approaches to deep auction design and proposed solutions building upon recent trends and results from machine learning (amortization) and theoretical auction design (a stationary Lagrangian). This resulted in a novel formulation of auction learning as a two-player game between an Auctioneer and a Misreporter, and in a new architecture, ALGnet, which requires significantly fewer hyperparameters than previous Lagrangian approaches. We demonstrated the effectiveness of ALGnet on a variety of examples by comparing it to the theoretical optimal auction when it is known, and to RegretNet when the optimal solution is not known.
+
+Acknowledgements. Jad Rahme would like to thank Ryan P. Adams for helpful discussions and feedback on the manuscript. Samy Jelassi thanks Arthur Mensch for fruitful discussions on the subject and feedback on the manuscript. The work of Jad Rahme was funded by a Princeton SEAS Innovation Grant. The work of Samy Jelassi is supported by the NSF CAREER CIF 1845360. The work of S. Matthew Weinberg was supported by NSF CCF-1717899.
+
+# REFERENCES
+
+Maria-Florina Balcan, Avrim Blum, Jason D. Hartline, and Yishay Mansour. Mechanism design via machine learning. In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2005), 23-25 October 2005, Pittsburgh, PA, USA, Proceedings, pp. 605-614. IEEE Computer Society, 2005. doi: 10.1109/SFCS.2005.50. URL https://doi.org/10.1109/SFCS.2005.50.
+Maria-Florina F Balcan, Tuomas Sandholm, and Ellen Vitercik. Sample complexity of automated mechanism design. In Advances in Neural Information Processing Systems, pp. 2083-2091, 2016.
+Xiaohui Bei and Zhiyi Huang. Bayesian Incentive Compatibility via Fractional Assignments. In the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2011.
+Sebastien Bubeck, Nikhil R Devanur, Zhiyi Huang, and Rad Niazadeh. Online auctions and multiscale online learning. In Proceedings of the 2017 ACM Conference on Economics and Computation, pp. 497-514, 2017.
+Yang Cai, Nikhil Devanur, and S. Matthew Weinberg. A duality based unified approach to bayesian mechanism design. In Proceedings of the 48th ACM Conference on Theory of Computation(STOC), 2016.
+Yang Cai, Argyris Oikonomou, Grigoris Velegkas, and Mingfei Zhao. An efficient $\varepsilon$ -bic to BIC transformation and its application to black-box reduction in revenue maximization. CoRR, abs/1911.10172, 2019. URL http://arxiv.org/abs/1911.10172.
+
+Xi Chen, Ilias Diakonikolas, Dimitris Paparas, Xiaorui Sun, and Mihalis Yannakakis. The complexity of optimal multidimensional pricing. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pp. 1319-1328, 2014. doi: 10.1137/1.9781611973402.97. URL http://dx.doi.org/10.1137/1.9781611973402.97.
+Xi Chen, Ilias Diakonikolas, Anthi Orfanou, Dimitris Paparas, Xiaorui Sun, and Mihalis Yannakakis. On the complexity of optimal lottery pricing and randomized mechanisms. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pp. 1464-1479. IEEE, 2015.
+Xi Chen, George Matikas, Dimitris Paparas, and Mihalis Yannakakis. On the complexity of simple and optimal deterministic mechanisms for an additive buyer. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 2036-2049. SIAM, 2018.
+Constantinos Daskalakis and Seth Matthew Weinberg. Symmetries and optimal multi-dimensional mechanism design. In Proceedings of the 13th ACM Conference on Electronic Commerce, pp. 370-387, 2012.
+Constantinos Daskalakis, Alan Deckelbaum, and Christos Tzamos. The complexity of optimal mechanism design. In Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms, pp. 1302-1318. SIAM, 2014.
+Constantinos Daskalakis, Alan Deckelbaum, and Christos Tzamos. Strong duality for a multiple-good monopolist. *Econometrica*, 85(3):735-767, 2017.
+Paul Duetting, Zhe Feng, Harikrishna Narasimhan, David Parkes, and Sai Srivatsa Ravindranath. Optimal auctions through deep learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 1706-1715, Long Beach, California, USA, 09-15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/duetting19a.html.
+Shaddin Dughmi, Li Han, and Noam Nisan. Sampling and representation complexity of revenue maximization. In International Conference on Web and Internet Economics, pp. 277-291. Springer, 2014.
+Shaddin Dughmi, Jason D. Hartline, Robert Kleinberg, and Rad Niazadeh. Bernoulli factories and black-box reductions in mechanism design. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pp. 158-169, 2017. doi: 10.1145/3055399.3055492. URL http://doi.acm.org/10.1145/3055399.3055492.
+Paul Dütting, Felix Fischer, Pichayut Jirapinyo, John K Lai, Benjamin Lubin, and David C Parkes. Payment rules through discriminant-based classifiers. ACM Transactions on Economics and Computation (TEAC), 3(1):1-41, 2015.
+Zhe Feng, Harikrishna Narasimhan, and David C Parkes. Deep learning for revenue-optimal auctions with budgets. In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems, pp. 354-362. International Foundation for Autonomous Agents and Multiagent Systems, 2018.
+Noah Golowich, Harikrishna Narasimhan, and David C. Parkes. Deep learning for multi-facility location mechanism design. In Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI 2018), pp. 261-267, 2018. URL https://econcs.seas.harvard.edu/files/econcs/files/golowich_ijcai18.pdf.
+Yannai A. Gonczarowski and Noam Nisan. Efficient empirical revenue maximization in single-parameter auction environments. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pp. 856-868, 2017. doi: 10.1145/3055399.3055427. URL http://doi.acm.org/10.1145/3055399.3055427.
+Yannai A. Gonczarowski and S. Matthew Weinberg. The sample complexity of up-to- $\varepsilon$ multidimensional revenue maximization. In 59th IEEE Annual Symposium on Foundations of Computer Science, FOCS, 2018.
+
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
+Chenghao Guo, Zhiyi Huang, and Xinzhi Zhang. Settling the sample complexity of single-parameter revenue maximization. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019., pp. 662-673, 2019. doi: 10.1145/3313276.3316325. URL https://doi.org/10.1145/3313276.3316325.
+Sergiu Hart and Noam Nisan. Approximate Revenue Maximization with Multiple Items. In the 13th ACM Conference on Electronic Commerce (EC), 2012.
+Sergiu Hart and Philip J. Reny. Maximizing Revenue with Multiple Goods: Nonmonotonicity and Other Observations. Theoretical Economics, 10(3):893-922, 2015.
+Jason D. Hartline and Brendan Lucier. Bayesian Algorithmic Mechanism Design. In the 42nd ACM Symposium on Theory of Computing (STOC), 2010.
+Jason D. Hartline and Samuel Taggart. Sample complexity for non-truthful mechanisms. In Proceedings of the 2019 ACM Conference on Economics and Computation, EC 2019, Phoenix, AZ, USA, June 24-28, 2019., pp. 399-416, 2019. doi: 10.1145/3328526.3329632. URL https://doi.org/10.1145/3328526.3329632.
+Jason D. Hartline, Robert Kleinberg, and Azarakhsh Malekian. Bayesian Incentive Compatibility via Matchings. In the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2011.
+Zhiyi Huang, Yishay Mansour, and Tim Roughgarden. Making the most of your samples. SIAM Journal on Computing, 47(3):651-674, 2018.
+Sébastien Lahaie. A kernel-based iterative combinatorial auction. In Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011.
+Xinye Li and Andrew Chi-Chih Yao. On revenue maximization for selling multiple independently distributed items. Proceedings of the National Academy of Sciences, 110(28):11232-11237, 2013.
+Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
+Alejandro Manelli and Daniel Vincent. Bundling as an optimal selling mechanism for a multiple-good monopolist. Journal of Economic Theory, 127(1):1-35, 2006.
+Jamie Morgenstern and Tim Roughgarden. Learning simple auctions. In Conference on Learning Theory, pp. 1298-1318, 2016.
+Jamie H Morgenstern and Tim Roughgarden. On the pseudo-dimension of nearly optimal auctions. In Advances in Neural Information Processing Systems, pp. 136-144, 2015.
+Roger B Myerson. Optimal auction design. Mathematics of operations research, 6(1):58-73, 1981.
+Harikrishna Narasimhan and David C Parkes. A general statistical framework for designing strategy-proof assignment mechanisms. In UAI'16 Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, 2016.
+Jad Rahme, Samy Jelassi, Joan Bruna, and S. Matthew Weinberg. A permutation-equivariant neural network architecture for auction design. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 2021.
+Tim Roughgarden and Okke Schrijvers. Ironing in the dark. In Proceedings of the 2016 ACM Conference on Economics and Computation (EC), pp. 1-18, 2016.
+Aviad Rubinstein and S Matthew Weinberg. Simple mechanisms for a subadditive buyer and applications to revenue monotonicity. ACM Transactions on Economics and Computation (TEAC), 6(3-4):1-25, 2018.
+Weiran Shen, Pingzhong Tang, and Song Zuo. Automated mechanism design via neural networks. In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems, pp. 215-223. International Foundation for Autonomous Agents and Multiagent Systems, 2019.
+Vasilis Syrgkanis. A sample complexity measure with applications to learning optimal auctions. In Advances in Neural Information Processing Systems, pp. 5352-5359, 2017.
+Andrea Tacchetti, DJ Strouse, Marta Garnelo, Thore Graepel, and Yoram Bachrach. A neural architecture for designing truthful and efficient auctions. arXiv preprint arXiv:1907.05181, 2019.
+John Thanassoulis. Haggling over substitutes. Journal of Economic Theory, 117:217-245, 2004.
+William Vickrey. Counterspeculation, auctions, and competitive sealed tenders. The Journal of finance, 16(1):8-37, 1961.
+
+# A TRAINING ALGORITHM FOR REGRET NET
+
+We present the training algorithm for RegretNet; more details can be found in Duetting et al. (2019).
+
+Algorithm 2 Training Algorithm.
+1: Input: Minibatches $S_{1},\ldots ,S_{T}$ of size $B$
+2: Parameters: $\gamma >0,\eta >0,c > 0,R\in \mathbb{N},T\in \mathbb{N},T_{\rho}\in \mathbb{N},T_{\lambda}\in \mathbb{N}.$
+3: Initialize Parameters: $\rho^0\in \mathbb{R},w^0\in \mathbb{R}^d,\lambda^0\in \mathbb{R}^n,$
+4: Initialize Misreports: $v_{i}^{\prime (\ell)}\in \mathcal{D}_{i},\forall \ell \in [B],i\in N.$
+5:
+6: for $t = 0,\dots ,T$ do
+7: Receive minibatch $S_{t} = \{V^{(1)},\dots ,V^{(B)}\}$
+8: for $r = 0,\dots ,R$ do
+9: $\forall \ell \in [B],i\in N:$ $v_{i}^{\prime (\ell)}\gets v_{i}^{\prime (\ell)} + \gamma \nabla_{v_{i}^{\prime}}u_{i}^{w^{t}}(v_{i}^{(\ell)};(v_{i}^{\prime (\ell)},V_{-i}^{(\ell)}))$
+10:
+11: Get Lagrangian gradient and update $w^{t}$ .
+12: $w^{t + 1}\gets w^t -\eta \nabla_w\mathcal{L}(w^t;\lambda^t;\rho^t).$
+13:
+14: Update $\rho$ once in $T_{\rho}$ iterations:
+15: if $t$ is a multiple of $T_{\rho}$ then
+16: $\rho^{t + 1}\leftarrow \rho^t +c$
+17: else
+18: $\rho^{t + 1}\leftarrow \rho^t$
+19:
+20: Update Lagrange multipliers once in $T_{\lambda}$ iterations:
+21: if $t$ is a multiple of $T_{\lambda}$ then
+22: $\lambda_i^{t + 1}\gets \lambda_i^t +\rho^t\hat{r}_i(w^t),\forall i\in N$
+23: else
+24: $\lambda^{t + 1}\gets \lambda^t$
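As a minimal stand-alone sketch (ours, not the authors' code), the multiplier schedule in steps 14-24 can be written as a plain update rule; `regret_hat` stands in for the empirical regret estimates $\hat{r}_i(w^t)$, and the default values of `c`, `T_rho`, and `T_lambda` are illustrative only:

```python
def update_multipliers(t, rho, lam, regret_hat, c=1.0, T_rho=2, T_lambda=100):
    """Augmented-Lagrangian multiplier schedule from Algorithm 2 (steps 14-24).

    rho grows by c once every T_rho iterations; each Lagrange multiplier
    lambda_i grows by rho^t * r_hat_i once every T_lambda iterations.
    """
    new_rho = rho + c if t % T_rho == 0 else rho
    if t % T_lambda == 0:
        # note: the lambda update uses the *current* rho^t, not new_rho
        new_lam = [l + rho * r for l, r in zip(lam, regret_hat)]
    else:
        new_lam = list(lam)
    return new_rho, new_lam
```

In the full algorithm this rule is interleaved with the misreport ascent and the Lagrangian descent step on $w$; it is shown in isolation here only to make the update schedule explicit.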
+
+# B TRAINING ALGORITHM FOR ONLINE REGRET NET
+
+We present an online version of the training algorithm for RegretNet; more details can be found in Duetting et al. (2019). This version is mentioned in the original paper, but the algorithm is not explicitly written there. The following is our own adaptation of the original RegretNet algorithm to the online setting.
+
+Algorithm 3 Training Algorithm.
+1: Input: Valuation distribution $\mathcal{D}$
+2: Parameters: $\gamma > 0$ , $\eta > 0$ , $c > 0$ , $R \in \mathbb{N}$ , $T \in \mathbb{N}$ , $T_{\rho} \in \mathbb{N}$ , $T_{\lambda} \in \mathbb{N}$ , $B \in \mathbb{N}$
+3: Initialize Parameters: $\rho^0 \in \mathbb{R}$ , $w^0 \in \mathbb{R}^d$ , $\lambda^0 \in \mathbb{R}^n$
+4: for $t = 0, \ldots, T$ do
+5: Sample minibatch $S_t = \{V^{(1)}, \ldots, V^{(B)}\}$ from distribution $\mathcal{D}$ .
+6: Initialize Misreports: $v_i^{\prime (\ell)} \in \mathcal{D}_i$ , $\forall \ell \in [B]$ , $i \in N$ .
+7:
+8: for $r = 0, \ldots, R$ do
+9: $\forall \ell \in [B]$ , $i \in N : v_i^{\prime (\ell)} \gets v_i^{\prime (\ell)} + \gamma \nabla_{v_i'} u_i^{w^t}(v_i^{(\ell)}; (v_i'^{(\ell)}, V_{-i}^{(\ell)}))$
+10:
+11: Get Lagrangian gradient and update $w^t$ : $w^{t+1} \gets w^t - \eta \nabla_w \mathcal{L}(w^t; \lambda^t; \rho^t)$ .
+12: Update $\rho$ once in $T_{\rho}$ iterations:
+13: if $t$ is a multiple of $T_{\rho}$ then
+14: $\rho^{t+1} \leftarrow \rho^t + c$
+15: else
+16: $\rho^{t+1} \leftarrow \rho^t$
+17:
+18: Update Lagrange multipliers once in $T_{\lambda}$ iterations:
+19: if $t$ is a multiple of $T_{\lambda}$ then
+20: $\lambda_i^{t+1} \gets \lambda_i^t + \rho^t \widehat{r}_i(w^t)$ , $\forall i \in N$
+21: else
+22: $\lambda^{t+1} \gets \lambda^t$
+
+# C PROOF OF PROP. 1
+
+Lemma 1. Let $M$ be a one-bidder, $m$-item mechanism with expected revenue $P$ and expected regret $R$. Then for every $\varepsilon > 0$, there exists a mechanism $M'$ with zero expected regret, $R' = 0$, and expected revenue $P' \geqslant (1 - \varepsilon)P - \frac{1 - \varepsilon}{\varepsilon}R$.
+
+Proof. For every valuation vector $v \in D$ , let $g(v)$ and $p(v)$ denote the allocation vector and price that $M$ assigns to $v$ .
+
+We now consider the mechanism $M^{\prime}$ that does the following:
+
+$g^{\prime}(v) = g(v^{\prime})$
+$p^{\prime}(v) = (1 - \varepsilon)p(v^{\prime})$
+
+where $v'$ is given by $v' = \operatorname{argmax}_{\tilde{v} \in D} \langle v, g(\tilde{v}) \rangle - (1 - \varepsilon)p(\tilde{v})$ . By construction, the mechanism $M'$ has zero regret; all that remains is to bound its revenue. Denoting by $R(v)$ the regret of the profile $v$ in the mechanism $M$ , $R(v) = \max_{\tilde{v} \in D} \langle v, g(\tilde{v}) - g(v) \rangle - (p(\tilde{v}) - p(v))$ , we have:
+
+$$
+\begin{array}{l} \langle v, g \left(v ^ {\prime}\right) \rangle - p \left(v ^ {\prime}\right) = \langle v, g (v) \rangle - p (v) + \langle v, g \left(v ^ {\prime}\right) - g (v) \rangle - \left(p \left(v ^ {\prime}\right) - p (v)\right) \\ \leqslant \langle v, g (v) \rangle - p (v) + R (v) \\ \end{array}
+$$
+
+Which we will write as:
+
+$$
+\langle v, g (v) \rangle - p (v) \geqslant \langle v, g \left(v ^ {\prime}\right) \rangle - p \left(v ^ {\prime}\right) - R (v)
+$$
+
+Second, we have by construction:
+
+$$
+\langle v, g \left(v ^ {\prime}\right) \rangle - (1 - \varepsilon) p \left(v ^ {\prime}\right) \geqslant \langle v, g (v) \rangle - (1 - \varepsilon) p (v)
+$$
+
+Summing these two relations, we find:
+
+$$
+p \left(v ^ {\prime}\right) \geqslant p (v) - \frac {R (v)}{\varepsilon}
+$$
+
+Finally we get that:
+
+$$
+p ^ {\prime} (v) \geqslant (1 - \varepsilon) p (v) - \frac {1 - \varepsilon}{\varepsilon} R (v)
+$$
+
+Taking the expectation we get:
+
+$$
+P ^ {\prime} \geqslant (1 - \varepsilon) P - \frac {1 - \varepsilon}{\varepsilon} R
+$$
+
+Proposition 1. Let $\mathcal{M}$ be an additive auction with 1 bidder and $m$ items. Let $P$ and $R$ denote the total expected revenue and regret, $P = \mathbb{E}_{V\in D}[p(V)]$ and $R = \mathbb{E}_{V\in D}[r(V)]$ . There exists a mechanism $\mathcal{M}^*$ with zero regret, $R^{*} = 0$ , and expected revenue $P^{*} \geqslant \left(\sqrt{P} -\sqrt{R}\right)^{2}$ .
+
+Proof. From Lemma 1 we know that for every $\varepsilon > 0$ we can find a zero-regret mechanism with revenue $P' \geqslant (1 - \varepsilon)P - \frac{1 - \varepsilon}{\varepsilon}R$ . Optimizing over $\varepsilon$ , the best bound is attained at $\varepsilon = \sqrt{\frac{R}{P}}$ . The resulting revenue guarantee is:
+
+$$
+P ^ {*} = \left(1 - \sqrt {\frac {R}{P}}\right) P - \frac {1 - \sqrt {\frac {R}{P}}}{\sqrt {\frac {R}{P}}} R = P - 2 \sqrt {P R} + R = \left(\sqrt {P} - \sqrt {R}\right) ^ {2}
+$$
+
+□
+
+# D IMPLEMENTATION AND SETUP
+
+We implemented ALGnet in PyTorch and all our experiments can be run on Google's Colab platform (with GPU). In Alg. 1, we used batches of valuation profiles of size $B = 500$ and set $T \in \{160000, 240000\}$ , $T_{limit} \in \{40000, 60000\}$ , $T_{init} \in \{800, 1600\}$ and $\tau = 100$ .
+
+We used the AdamW optimizer (Loshchilov & Hutter, 2017) to train the Auctioneer's and the Misreporter's networks with learning rate $\gamma \in \{0.0005, 0.001\}$ . Typical values for the architecture's parameters are $n_a = n_p = n_m \in [3, 7]$ and $h_p = h_n = h_m \in \{50, 100, 200\}$ . These networks are similar in size to the ones used for RegretNet in Duetting et al. (2019).
+
+For each experiment, we compute the total revenue $rev \coloneqq \mathbb{E}_{V \sim D}[\sum_{i \in N} p_i^w(V)]$ and average regret $rgt \coloneqq 1 / n \mathbb{E}_{V \sim D}[\sum_{i \in N} r_i^w(V)]$ using a test set of 10,000 valuation profiles. We run each experiment 5 times with different random seeds and report the average and standard deviation of these runs.
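For concreteness, the two test-set estimates can be computed as below (a hypothetical sketch; `payments[k][i]` and `regrets[k][i]` stand in for $p_i^w(V^{(k)})$ and $r_i^w(V^{(k)})$ on the $k$-th test profile):

```python
from statistics import mean

def revenue_and_regret(payments, regrets):
    """Empirical rev = E[sum_i p_i(V)] and rgt = (1/n) E[sum_i r_i(V)]
    over a finite test set of valuation profiles."""
    n = len(payments[0])  # number of bidders
    rev = mean(sum(p) for p in payments)
    rgt = mean(sum(r) for r in regrets) / n
    return rev, rgt
```

With the paper's setup, `payments` and `regrets` would each have 10,000 rows, and the per-seed `(rev, rgt)` pairs would then be averaged across the 5 runs.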
\ No newline at end of file
diff --git a/auctionlearningasatwoplayergame/images.zip b/auctionlearningasatwoplayergame/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f56143758fa202604b3cb6da6f5f1c70f5a74e57
--- /dev/null
+++ b/auctionlearningasatwoplayergame/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2bf216e00249350b5484b35514a1c01632fff47621fb1d35d21cdffa28bb8c0b
+size 323682
diff --git a/auctionlearningasatwoplayergame/layout.json b/auctionlearningasatwoplayergame/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e40b242ee26d95f0754deb4ba6f832ac18cee646
--- /dev/null
+++ b/auctionlearningasatwoplayergame/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e195497ee438399e4f7cb3f5627c2e93a77dd00c8630c2b45d2920509d54dfc9
+size 729036
diff --git a/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/d6d44d12-39ed-4868-9a0f-b1c48452e21c_content_list.json b/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/d6d44d12-39ed-4868-9a0f-b1c48452e21c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b0c72454c0dadfaace3e2475c474def769219bb8
--- /dev/null
+++ b/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/d6d44d12-39ed-4868-9a0f-b1c48452e21c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3eb12319fe9d13006fec7b69995f20a6c4370338143ead0b9bfc2b1ba5cb02e5
+size 138980
diff --git a/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/d6d44d12-39ed-4868-9a0f-b1c48452e21c_model.json b/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/d6d44d12-39ed-4868-9a0f-b1c48452e21c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..79a4ea3c3a38aa312aeb6bd287d5a06d2dffc9e1
--- /dev/null
+++ b/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/d6d44d12-39ed-4868-9a0f-b1c48452e21c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b1d0cb3ccb96f46a391e98699fb0b04d3022a83b1ecfa9493972aba1985a5a5
+size 167318
diff --git a/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/d6d44d12-39ed-4868-9a0f-b1c48452e21c_origin.pdf b/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/d6d44d12-39ed-4868-9a0f-b1c48452e21c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2ce7cc2ac97fa0b658d25e55a245da5e3b0bbcd4
--- /dev/null
+++ b/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/d6d44d12-39ed-4868-9a0f-b1c48452e21c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2cf5361655e7979bf36cd5eb8aa21e530a404b009ac519d6077081f26dbecb3a
+size 1033829
diff --git a/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/full.md b/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3634eb497eee799127b0c9679730721eee8cd2cc
--- /dev/null
+++ b/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/full.md
@@ -0,0 +1,463 @@
+# AUTOLRS: AUTOMATIC LEARNING-RATE SCHEDULE BY BAYESIAN OPTIMIZATION ON THE FLY
+
+Yuchen Jin, Tianyi Zhou, Liangyu Zhao
+
+University of Washington
+
+{yuchenj, tianyizh, liangyu}@cs.washington.edu
+
+Yibo Zhu, Chuanxiong Guo
+
+ByteDance Inc.
+
+{zhuyibo, guochuanxiong}@bytedance.com
+
+Marco Canini
+
+KAUST
+
+marco@kaust.edu.sa
+
+Arvind Krishnamurthy
+
+University of Washington
+
+arvind@cs.washington.edu
+
+# ABSTRACT
+
+The learning rate (LR) schedule is one of the most important hyperparameters needing careful tuning in training DNNs. However, it is also one of the least automated parts of machine learning systems and usually costs significant manual effort and computing resources. Though there are pre-defined LR schedules and optimizers with adaptive LR, they introduce new hyperparameters that need to be tuned separately for different tasks/datasets. In this paper, we consider the question: Can we automatically tune the LR over the course of training without human involvement? We propose an efficient method, AutoLRS, which automatically optimizes the LR for each training stage by modeling training dynamics. AutoLRS aims to find an LR applied to every $\tau$ steps that minimizes the resulting validation loss. We solve this black-box optimization on the fly by Bayesian optimization (BO). However, collecting training instances for BO requires a system to evaluate each LR queried by BO's acquisition function for $\tau$ steps, which is prohibitively expensive in practice. Instead, we apply each candidate LR for only $\tau' \ll \tau$ steps and train an exponential model to predict the validation loss after $\tau$ steps. This mutual-training process between BO and the loss-prediction model allows us to limit the training steps invested in the BO search. We demonstrate the advantages and the generality of AutoLRS through extensive experiments of training DNNs for tasks from diverse domains using different optimizers. The LR schedules auto-generated by AutoLRS lead to a speedup of $1.22 \times$ , $1.43 \times$ , and $1.5 \times$ when training ResNet-50, Transformer, and BERT, respectively, compared to the LR schedules in their original papers, and an average speedup of $1.31 \times$ over state-of-the-art heavily-tuned LR schedules.
+
+# 1 INTRODUCTION
+
+In the regime of deep learning, the success of training largely depends on the choice of the learning rate (LR) schedule, since most optimizers will have difficulty traversing a non-smooth and non-convex loss landscape with multiple local minima and possibly saddle points (Kawaguchi, 2016; Jin et al., 2017; Goodfellow et al., 2016; Li et al., 2018a). To achieve stable and fast convergence towards a solution with good generalization performance, one has to tune the LR schedule carefully for different tasks (Nar & Sastry, 2018; Jastrzebski et al., 2017). This tuning is usually non-trivial and requires many trial-and-error iterations that are computationally expensive. Moreover, the randomness of the widely-used mini-batch stochastic gradient descent (SGD) may introduce more uncertainty and difficulty into the tuning process. For the same reasons, it is also hard to directly formulate the search for the LR schedule as a well-posed optimization problem and address it through standard optimization.
+
+The broadly-adopted strategy is to either pick one from a family of pre-defined LR schedules or apply an optimizer that has a built-in mechanism for changing the LR adaptively. However, we have a limited number of choices for pre-defined LR schedules, most of which are simple functions such as exponentials or cosines and thus cannot perfectly align with the non-smooth loss landscape. The latter set of adaptive optimizers, e.g., Adam (Kingma & Ba, 2015) and Adadelta (Zeiler, 2012), are extended from convex optimization and rely on strong assumptions for their convergence properties to hold. Moreover, the methods in both categories introduce new hyperparameters that have to be tuned separately for different tasks or datasets, requiring significant human involvement.
+
+In this paper, we study the question: can we automatically tune the LR over the course of training without human involvement? At the beginning of every $\tau$ steps (i.e., a "stage" in our method), we seek to identify an LR that optimizes the validation loss (i.e., an empirical estimate of the generalization error) at the end of the stage. To do so, we employ Bayesian optimization (BO) that treats the validation loss as a black-box function of LR. BO simultaneously updates a posterior estimation of the black-box function and searches for the best LR with respect to the posterior. This approach is, however, computationally expensive since estimating the posterior needs many (input, output) instances of the function, and acquiring each instance costs $\tau$ steps of training. We, therefore, develop a simple yet efficient approximation: for every LR that BO decides to evaluate, we train the model by using the LR for only $\tau' \ll \tau$ steps and use the validation loss over the $\tau'$ steps to train a time-series forecasting model that provides a prediction of the validation loss after $\tau$ steps. As we will show later, an exponential model suffices to produce accurate predictions when using a small $\tau' = \tau / 10$ . Then, AutoLRS can allow BO to explore ten different LRs in each stage and still bound the total running time to approximately twice the training cost associated with the generated schedule, i.e., the time spent to find the stage-specific LRs is roughly equal to the time spent training the model with the identified LRs.
+
+AutoLRS does not depend on a pre-defined LR schedule, dataset, or a specified task and is compatible with almost all optimizers. Hence, it can be generally deployed across a broad range of ML tasks without much human involvement or expensive tuning over choices of LR schedules and their hyperparameters. Moreover, since it directly minimizes the validation loss, it not only accelerates convergence but also improves generalization compared to just minimizing the training loss. Furthermore, AutoLRS only needs to update two extremely light-weight models, i.e., the BO posterior and the exponential forecasting model, and it is efficient in exploring the loss landscape. Hence, it does not result in notable extra costs in either memory or computation. Note that AutoLRS searches for better LRs based on the training dynamics, which can be seen as a form of self-supervision. The interaction between BO and the forecasting model is an example of mutual learning, where one produces training data for the other.
+
+In experiments, we apply AutoLRS to train three representative DNNs widely used in practice, i.e., ResNet-50 (He et al., 2016a) on ImageNet classification (Russakovsky et al., 2015), and Transformer (Vaswani et al., 2017) and BERT (Devlin et al., 2019) for NLP tasks. Though these models have been extensively studied and have hand-tuned LR schedules, the LR schedules computed by AutoLRS are faster than the original hand-tuned LR schedules by $1.22 \times$ , $1.43 \times$ , and $1.5 \times$ for training ResNet-50, Transformer, and BERT, respectively, in terms of the training steps used to update the DNN (i.e., excluding the costs of the LR/hyperparameter search). Meanwhile, it achieves test-set performance better than or on par with state-of-the-art results. We also carefully hand-tuned two state-of-the-art learning rate schedules, CLR (Smith, 2017) and SGDR (Loshchilov & Hutter, 2017), and conducted more than ten experiments with different CLR/SGDR hyperparameters on each model. AutoLRS still has an average speedup of $1.29 \times$ and $1.34 \times$ across the three models, in terms of training steps, compared to the best CLR and SGDR LR schedules, respectively. The AutoLRS implementation is available at https://github.com/YuchenJin/autolrs.
+
+# 2 RELATED WORK
+
+Learning rate scheduling: In contrast to traditional LR schedules with a monotone decreasing sequence of LRs and multi-step LR schedules, a recent class of LR schedules proposes to apply multiple cycles of LR decay. Cyclical Learning Rate (CLR) changes LR from a maximal LR $(\eta_{\mathrm{max}})$ to a minimal LR $(\eta_{\mathrm{min}})$ at a pre-defined frequency and achieves faster convergence for some DNNs (Smith, 2017). The approach requires an "LR range test" to estimate the minimal and maximal LR. The LR range test trains the model with a linearly-increasing LR between a low LR and a high LR, and finds the LR range $([\eta_{\mathrm{min}},\eta_{\mathrm{max}}])$ over which the training loss decreases. The authors proposed three variants of CLR: triangular2, which halves the maximum LR bound after each cycle; $exp\_range$, which exponentially reduces the maximum LR bound after each cycle; and 1cycle, containing only one triangular cycle (Smith, 2018). Similar to CLR, Stochastic Gradient Descent with Warm Restarts (SGDR) restarts the LR and then applies cosine annealing/decay at a pre-defined frequency (Loshchilov & Hutter, 2017). Neither CLR nor SGDR is automatic, because both are quite sensitive to their hyperparameters, which require careful hand-tuning. CLR and SGDR may even cause undesirable divergence in loss during training with suboptimal hyperparameters (see §5).
+
+Learning rate adaptation with hypergradient descent: Aiming for the same goal of automatically tuning the LR, the hypergradient based technique (Almeida et al., 1998; Franceschi et al., 2017; Baydin et al., 2018; Donini et al., 2020) optimizes the LR schedule by applying gradient descent of the objective function w.r.t. the LR during training. In addition to the initial value of the regular LR, it introduces an additional hypergradient LR whose initial value is another hyperparameter to be specified. We experimentally show that this technique is subject to overfitting, it is quite sensitive to its two hyperparameters, and it is unable to match the state-of-the-art test-set performance on the models we test (§A.5.1). We also compare its performance against AutoLRS (§A.5.2).
+
+DNN hyperparameter optimization: Automatic hyperparameter search for DNNs has been broadly studied in recent years. When applied to learning rates, these methods determine an optimized value for the LR that is kept constant (or constrained to a pre-defined shape) through the entire training process, as opposed to determining an LR schedule. They can be primarily categorized into Bayesian optimization based approaches (Hutter et al., 2011; Snoek et al., 2012; Bergstra et al., 2013), bandit-based solutions (Li et al., 2017; 2018b), hybrid approaches that combine bandit-based and Bayesian optimization based approaches (Falkner et al., 2018; Zela et al., 2018), and population-based methods (Jaderberg et al., 2017; Parker-Holder et al., 2020). It might be possible to extend these techniques to determine an LR schedule with an optimized LR for each training stage, but doing so is neither sample-efficient nor time-efficient, since the LR schedule would correspond to hundreds or thousands of hyperparameters.
+
+Optimization methods with adaptive LR: These optimizers can adaptively adjust the LR for each training step by maintaining an estimate of a better learning rate separately for each parameter in the DNN. Adagrad (Duchi et al., 2011) applies lower LRs to parameters with larger accumulated gradients and higher LRs to the ones with smaller accumulated gradients. RMSprop (Tieleman & Hinton, 2012), AdaDelta (Zeiler, 2012), and Adam (Kingma & Ba, 2015) were later proposed to address the issue in Adagrad that the model stops learning due to the continual decay of the LR. These optimizers with adaptive LR are orthogonal to our automatic LR scheduler, and they still require a global learning rate schedule, which can be obtained from our AutoLRS. In particular, their default hyperparameters do not always work well and need careful tuning, e.g., Adam's default LR 0.001 performs poorly in training BERT and Transformer, and a better-tuned LR schedule can significantly reduce the training time (§5). Recent optimization methods (Schaul et al., 2013; Mahsereci & Hennig, 2015) proposed to remove the need for LR tuning in SGD altogether, but they are not widely used, potentially due to their limited applicability and sub-optimal performance (Baydin et al., 2018).
+
+# 3 PROBLEM FORMULATION
+
+Training of DNNs can be written in a general form of minimizing a loss function $L(x; \theta)$ over training samples $x \in D_{train}$ , where $\theta$ represents the model weights being optimized. The minimization is conducted by applying an optimizer that updates $\theta$ iteratively. For example, at each step $t$ , mini-batch SGD updates $\theta$ using the gradient computed on a mini-batch of samples $B_{train} \subseteq D_{train}$ :
+
+$$
+\theta_ {t + 1} = \theta_ {t} - \frac {\eta_ {t}}{\left| B _ {\text {t r a i n}} \right|} \sum_ {x \in B _ {\text {t r a i n}}} \nabla_ {\theta} L (x; \theta_ {t}), \tag {1}
+$$
+
+where $\eta_t$ is the learning rate (LR) at step $t$ and $\nabla_{\theta}L(x;\theta_t)$ denotes the gradient of the loss $L(x;\theta)$ w.r.t. $\theta_t$ at step $t$ . Given $B_{train}$ and $\theta_t, \theta_{t+1}$ can be represented as a function of LR $\eta_t$ , i.e., $\theta_{t+1}(\eta_t)$ .
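Eq. (1) amounts to the following update (a minimal sketch with Python lists standing in for parameter vectors; `grad_fn(x, theta)` is a hypothetical per-sample gradient oracle for $\nabla_{\theta}L(x;\theta)$):

```python
def sgd_step(theta, minibatch, grad_fn, eta):
    """One mini-batch SGD step, Eq. (1):
    theta <- theta - (eta / |B|) * sum_{x in B} grad L(x; theta)."""
    n = len(minibatch)
    grads = [grad_fn(x, theta) for x in minibatch]  # per-sample gradients
    avg = [sum(g) / n for g in zip(*grads)]         # mini-batch average gradient
    return [p - eta * g for p, g in zip(theta, avg)]

# Toy example with L(x; theta) = 0.5 * (theta - x)^2, so grad = theta - x.
theta_next = sgd_step([0.0], [1.0, 3.0], lambda x, th: [th[0] - x], eta=0.5)
```

Viewed this way, $\theta_{t+1}$ is a deterministic function of $\eta_t$ once the mini-batch and $\theta_t$ are fixed, which is exactly the dependence $\theta_{t+1}(\eta_t)$ exploited below.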
+
+Our ultimate goal is to search for an optimal schedule of LR, i.e., a sequence of LRs $\eta_{1:T} \triangleq (\eta_1, \eta_2, \dots, \eta_T)$ applied to the total $T$ training steps, such that the generalization error can be minimized. Ideally, we would optimize the entire sequence of LRs. This, however, is intractable in practice given the enormous number of possible LR schedules and the fact that evaluating each one requires a full training run of $T$ steps. Hence, we break down the LR schedule optimization into a dynamic optimization of a constant LR for every $\tau$ steps, which we refer to as a "training stage". Since most tasks prefer a relatively small LR due to the non-smoothness of DNNs' loss landscapes, when $\tau$ is also small, the change in validation loss induced by the LR might be too small and overwhelmed by the randomness of mini-batch SGD. Hence, in this case, we need to increase $\tau$ , so the effect of LR $\eta$ on the validation loss can accumulate over more steps to overcome noise. A large $\tau$ also reduces the frequency of applying the LR search and saves computation. On the other hand, setting $\tau$ too large might lose some optimality of the induced LR schedule. Therefore, we need to trade off these two issues to find an appropriate $\tau$ . In our final algorithm, we propose a curriculum for $\tau$ , i.e., we start from a small $\tau$ , in line with the greater volatility during early stages, and gradually increase $\tau$ as training proceeds (as described in §4.4). Since we mainly focus on the LR search within a stage, for simplicity, we will use $\tau$ instead of $\tau_{t}$ in the exposition below.
+
+We study a greedy approach and split the whole training process into multiple stages of $\tau$ steps each. We choose an LR at the beginning of each stage and apply $\tau$ steps of optimization using this LR, i.e., at step- $t = 0, \tau, 2\tau, \dots, T - \tau$ , we aim to find the LR $\eta_{t:t + \tau}$ that minimizes the validation loss on $D_{val}$ (i.e., an estimate of the generalization error) after step- $(t + \tau)$ . This can be formulated as:
+
+$$
+\min _ {\eta} \sum_ {x \in D _ {v a l}} L (x; \theta_ {t + \tau} (\eta)), t = 0, \tau , 2 \tau , \dots , T - \tau . \tag {2}
+$$
+
+We try to sequentially solve $\lfloor T / \tau \rfloor$ sub-problems of the above form. However, we cannot apply standard optimization to solve each sub-problem in practice because: (i) it is a high-order optimization of $\eta$ since we need to unroll $\theta_{t + \tau}$ in Eq. (2) backward for $\tau$ steps using Eq. (1), which requires prohibitive memory and is unstable for DNNs; (ii) one step of optimizing $\eta$ needs to apply $\tau$ steps of optimization on $\theta$ , which is costly and weakens the advantage of searching LR for better efficiency. To avoid these issues, we treat the objective function in Eq. (2) for $t:t + \tau$ as a black-box function $f_{t}(\eta)$ and study how to optimize it based on the observed training dynamics through Bayesian optimization (BO).
+
+# 4 AUTOMATIC LEARNING RATE SCHEDULE SEARCH
+
+We first elaborate on the details of our BO algorithm (§4.1) that identifies the LR for each stage $^1$ . However, collecting even one data point $(\eta, f(\eta))$ for BO requires us to train the model for $\tau$ steps, which is costly and impractical since the LR computed by the entire BO process is used for only $\tau$ steps. To reduce the cost of generating instances of $(\eta, f(\eta))$ , in §4.2 and §A.3, we propose to train a light-weight time-series forecasting model to predict $f(\eta)$ based on the validation loss observed during the first $\tau'$ ( $\tau' \ll \tau$ ) steps of applying LR $\eta$ . We find that a simple exponential model suffices to produce accurate predictions. Our LR search then reduces to a mutual-training process between BO and the forecasting model, where one produces training instances for the other. The resulting algorithm can automatically find an LR schedule without introducing significant extra computation.
+
+# 4.1 BAYESIAN OPTIMIZATION
+
+BO (Shahriari et al., 2016) is one of the state-of-the-art techniques for black-box optimization. It applies exploration and exploitation to the objective by sequentially and actively querying the function values of some input instances. Specifically, BO uses a Gaussian process as a surrogate model (prior) to fit the black-box objective function $f(\eta)$ . It sequentially updates a posterior of $f(\eta)$ by using its likelihood on newly evaluated $(\eta_i', y_i = f(\eta_i') + \epsilon)$ pairs $^2$ , where $y_i$ is a noisy observation of $f(\eta_i')$ , namely the validation loss after $\tau$ steps. Then, it finds the next $\eta_{i+1}'$ to evaluate based on an acquisition function $u_i(\eta)$ defined by the posterior mean $\mu_i(\eta)$ and standard deviation $\sigma_i(\eta)$ . $u_i(\eta)$ performs a trade-off between exploration (i.e., large $\sigma_i(\eta)$ ) and exploitation (i.e., small $\mu_i(\eta)$ ). In AutoLRS, we use the Lower Confidence Bound (LCB) (Cox & John, 1992; Auer, 2002) as $u_i(\eta)$ . Given $\eta_{1:i}^{\prime}$ and their corresponding validation losses $y_{1:i}$ , we determine the next LR $\eta_{i+1}'$ by minimizing the LCB, i.e.,
+
+$$
+\eta_ {i + 1} ^ {\prime} = \arg \min _ {\eta} u _ {i} (\eta), u _ {i} (\eta) \triangleq \mu_ {i} (\eta) - \kappa \sigma_ {i} (\eta), \tag {3}
+$$
+
+where $\mu_i(\eta)$ and $\sigma_i(\eta)$ are defined in Eq. (7) in §A.1, $\kappa$ is a positive hyper-parameter to balance exploration and exploitation. In experiments, $\kappa = 1000$ works consistently well. BO repeats the above process until it achieves a precise posterior distribution of $f(\eta)$ . See §A.1 for more details.
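Given the GP posterior, the acquisition step of Eq. (3) reduces to minimizing $\mu_i(\eta) - \kappa\sigma_i(\eta)$; a schematic version over a finite candidate grid (our own illustration, with `mu` and `sigma` standing in for the posterior mean and standard deviation):

```python
def lcb_next_lr(candidates, mu, sigma, kappa=1000.0):
    """Select the next LR to evaluate by minimizing the LCB acquisition
    u(eta) = mu(eta) - kappa * sigma(eta), Eq. (3).  A large kappa favors
    exploration (high posterior uncertainty); a small kappa favors
    exploitation (low posterior mean)."""
    return min(candidates, key=lambda eta: mu(eta) - kappa * sigma(eta))
```

With `sigma` identically zero the rule degenerates to pure exploitation, picking the candidate with the lowest predicted loss; a large `kappa` instead steers the search toward LRs the posterior is most uncertain about.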
+
+Algorithm 1: AutoLRS
+Input: (1) Number of steps in each training stage, $\tau$ (2) Learning-rate search interval $(\eta_{\mathrm{min}},\eta_{\mathrm{max}})$ (3) Number of LRs to evaluate by BO in each training stage, k (4) Number of training steps to evaluate each LR in BO, $\tau^{\prime}$ (5) Trade-off weight in the acquisition function of BO, $\kappa$
+1 while not converge do
+2 initialize a GP prior: $\mu_0(\eta) = 0,\sigma_0^2 (\eta) = K(\eta ,\eta)$ defined in Eq. (4) in $\S \mathrm{A}.1$ .
+3 $c\gets$ checkpoint of model parameters and optimizer states;
+4 for $i\gets 1$ to k do /\* mutual-training loop between BO and loss forecasting model \*/
+5 choose the next LR to explore: $\eta_i^\prime = \arg \min_\eta \mu_{i - 1}(\eta) - \kappa \sigma_{i - 1}(\eta);$
+6 $y_{1:\tau '}\gets$ train the DNN with LR $\eta_i^\prime$ for $\tau '$ steps and record the corresponding validation loss series;
+7 $y_i\gets$ train an exponential forecasting model on $y_{1:\tau '}$ and predict the validation loss after $\tau$ steps;
+8 update the GP posterior by $(\eta_i^\prime ,y_i)$ and update new $\mu_{i}(\eta)$ and $\sigma_{i}(\eta)$ using Eq. (7) in $\S \mathrm{A}.1$
+9 restore the checkpoint $c$ of model parameters and optimizer states;
+10 end
+11 $\eta^{*}\gets$ the LR with the minimal predicted validation loss $\mu_k(\eta)$ among the k explored LRs $\eta_{1:k}^{\prime}$ above;
+12 train the DNN using LR $\eta^{*}$ for $\tau$ steps; /* train the model using the BO-searched best LR */
+13 end
+
+# 4.2 TIME-SERIES FORECASTING MODEL OF LOSS
+
+Typically, BO would require $\tau$ training steps to measure the validation loss associated with every LR $\eta$ that it considers during a stage. This is computationally expensive. We now introduce a simple yet effective approach that substantially reduces the number of training steps required to evaluate each LR candidate: for each LR $\eta$ that is evaluated, we only apply it for $\tau'\ll \tau$ steps and use the validation loss observed in the $\tau'$ steps to train a short-term time-series forecasting model. We then use the resulting forecasting model to predict the validation loss after $\tau$ steps.
+
+In numerous experiments, we observed that when a DNN is trained with a reasonable LR, the validation loss typically decreases exponentially and converges to a small value. We show examples of practical loss time series and their exponential-model fits in Figure 3. Moreover, recent deep learning theory (Allen-Zhu et al., 2019b) also proves the linear convergence of training DNNs. In addition, a simple model fit to the observed loss time series can filter out noise and avoid possible overfitting. Hence, we propose to train an exponential model of the form $L(t) = a\exp (bt) + c$ , with parameters $a$ , $c$ , and $b < 0$ , for $t = 1,\dots ,\tau$ , as the forecasting model for the validation-loss time series in a training stage of $\tau$ steps with a given LR $\eta$ . §A.2 describes how we estimate $a$ , $b$ , and $c$ from the validation loss observed in the first $\tau^{\prime}$ steps, and §A.3 describes how we filter out noise and outliers.
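
As a sketch of this forecasting step: since $L(t) = a\exp(bt) + c$ is linear in $(a, c)$ once $b$ is fixed, one simple way to fit it is a grid search over the decay rate $b$ with a least-squares solve per candidate. This is our illustration, not the paper's estimation procedure, which is described in §A.2.

```python
import numpy as np

def fit_exponential(t, y, b_grid=None):
    """Fit L(t) = a*exp(b*t) + c with b < 0 by grid search over b plus a
    linear least-squares solve for (a, c) at each candidate b."""
    if b_grid is None:
        b_grid = -np.logspace(-4, 0, 200)          # candidate decay rates
    best = None
    for b in b_grid:
        X = np.column_stack([np.exp(b * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((X @ coef - y) ** 2)          # residual on the observed prefix
        if best is None or sse < best[0]:
            best = (sse, coef[0], b, coef[1])
    _, a, b, c = best
    return a, b, c

def forecast(t_future, a, b, c):
    """Predicted validation loss after t_future steps."""
    return a * np.exp(b * t_future) + c
```

Fitting on the first $\tau' \ll \tau$ observed losses and extrapolating to $t = \tau$ is exactly what makes each BO evaluation cheap: only $\tau'$ real training steps are spent per candidate LR.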
+
+# 4.3 MUTUAL TRAINING BETWEEN BO AND EXPONENTIAL PREDICTION
+
+We present the complete AutoLRS procedure in Algorithm 1. It sequentially optimizes the LR for every training stage during the training of a DNN model, solely based on the observed training dynamics, and can be seen as a form of self-supervision. For each training stage, it searches for the LR that leads to the largest improvement in the validation loss via an efficient black-box function optimization, conducted by a mutual-training loop between Bayesian optimization and a short-term forecasting model for each loss series. It then applies the best LR among the explored ones for $\tau$ steps and repeats the above process until convergence.
+
+In line 5, the algorithm solves a constrained optimization problem over $\eta$ , in the range of $[\eta_{min}, \eta_{max}]$ . In practice, we prefer a large learning-rate search interval $(\eta_{\mathrm{min}}, \eta_{\mathrm{max}})$ , across orders of magnitude, but also need fine-grained optimization over small LRs. Hence, we operate on $\eta$ in its log-scale space, i.e., we replace $\eta$ by $\log \eta$ in Algorithm 1, except in lines 6 and 12 when we use the original LR (rather than $\log \eta$ ) to train the DNN.
+
+At the end of each iteration in the mutual-training loop (line 9), we restore the checkpoint $c$ of model parameters and optimizer states to the one saved at the beginning of the training stage. By doing so, we guarantee that the $k$ different LRs all start from the same model, so their losses are comparable. $\S A.4$ illustrates how BO learns the underlying function in practice for early and late stages of training.
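
The checkpoint-and-restore pattern of lines 3 and 9 can be sketched with a toy trainer; the `Trainer` class below is a stand-in with made-up dynamics, not the paper's implementation.

```python
import copy

class Trainer:
    """Minimal stand-in for a DNN trainer: `state` bundles model parameters
    and optimizer statistics (here, SGD momentum buffers)."""
    def __init__(self):
        self.state = {"weights": [0.0, 0.0], "momentum": [0.0, 0.0]}

    def train_steps(self, lr, steps):
        # Toy momentum-SGD update; only the state-mutation pattern matters.
        for _ in range(steps):
            for i in range(len(self.state["weights"])):
                self.state["momentum"][i] = 0.9 * self.state["momentum"][i] + lr
                self.state["weights"][i] -= self.state["momentum"][i]

def evaluate_lrs_from_common_checkpoint(trainer, lrs, steps):
    """Lines 3 and 9 of Algorithm 1: snapshot the state once, then restore it
    after probing each LR so all k candidates start from the same model."""
    checkpoint = copy.deepcopy(trainer.state)          # line 3
    losses = []
    for lr in lrs:
        trainer.train_steps(lr, steps)                 # line 6 (tau' probe steps)
        losses.append(sum(abs(w) for w in trainer.state["weights"]))
        trainer.state = copy.deepcopy(checkpoint)      # line 9
    return losses
```

Restoring optimizer state, not just weights, is the important detail: momentum or Adam statistics accumulated under one probed LR would otherwise leak into the next probe.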
+
+Hyperparameters: AutoLRS substantially reduces the number of hyperparameters that need to be hand-tuned in existing LR schedules or policies. However, as Algorithm 1 shows, some hyperparameters remain. First, we need to set a search interval $(\eta_{\mathrm{min}}, \eta_{\mathrm{max}})$ for the LR. This interval can be reasonably wide by using an LR range test (Smith, 2017), as we show in §5. Second, our default settings of $k$ , $\tau'$ , $\tau$ , and $\kappa$ work well for a diverse set of DNN models from different domains and tasks, though further improvements are possible by fine-tuning them.
+
+# 4.4 PRACTICAL IMPROVEMENTS
+
+We found the following modifications can further improve the performance of AutoLRS in practice.
+
+Gradually increase $\tau$ over the course of training: Often, in DNN training, the loss and the model parameters experience rapid changes only during the first few epochs before they enter a phase of stable improvement. Our approach can adapt to this phenomenon. For the early stages, when the loss is less predictable for the time-series forecasting model, we use a small $\tau$ (and $\tau'$ ). As training proceeds and the model becomes stable, we gradually increase $\tau$ (and $\tau'$ ) and adjust the LR more lazily. This curriculum of increasing $\tau$ places more exploration in earlier stages and more exploitation in later stages. In practice, we start with $\tau = 1000$ and $\tau' = 100$ , and double them after every stage until $\tau$ reaches $\tau_{\mathrm{max}}$ , a hyperparameter that limits the maximum number of steps in a stage. We discuss $\tau_{\mathrm{max}}$ further in §5. This gradual increase of $\tau$ provides stability to the LR-schedule search. Similar strategies have been widely used in previous pre-defined LR schedules, e.g., the multi-stage schedule with increasing epochs within each stage, and some recent cyclical LR schedules (Loshchilov & Hutter, 2017).
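
The doubling curriculum can be sketched as below; truncating the final stage to an overall step budget is our assumption for the sketch, not something the paper specifies.

```python
def stage_lengths(tau0=1000, tau_max=8000, total_steps=30000):
    """Doubling curriculum for tau: 1000, 2000, 4000, then capped at tau_max,
    until the step budget is spent. tau' is kept at tau / 10 throughout."""
    stages, tau, used = [], tau0, 0
    while used < total_steps:
        tau_i = min(tau, tau_max, total_steps - used)  # cap and budget-truncate
        stages.append((tau_i, tau_i // 10))            # (tau, tau') per stage
        used += tau_i
        tau = min(tau * 2, tau_max)
    return stages
```

For example, with $\tau_0 = 1000$ and $\tau_{\mathrm{max}} = 8000$, the stage lengths grow as 1000, 2000, 4000, 8000, 8000, ... so exploration is concentrated early and later stages commit to an LR for longer.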
+
+Minimizing training loss in early stages: Computing the validation-loss series for a candidate $\eta^{\prime}$ would require considerable computation if we used the entire validation dataset at each step of mutual training. Recall, however, that the primary purpose of minimizing the validation loss instead of the training loss is to avoid overfitting the training set, where the training loss deviates from the generalization error. A variety of empirical evidence and recent theory (Allen-Zhu et al., 2019a) show that overfitting is unlikely while training over-parameterized DNNs, due to the inductive bias of random initialization and SGD, especially during the early phase of training. Hence, in practice, for the first several training stages we can safely approximate the validation loss in our method by the corresponding training loss, which is a by-product of forward propagation and free to obtain. In later stages (i.e., once $\tau$ reaches $\tau_{\mathrm{max}}$ ), since the model is stable and the loss changes smoothly, we can evaluate the validation loss on a small subset of the validation set without compromising robustness. In our experiments, this set is composed of merely 10 mini-batches, and we evaluate the validation loss on them every 50 training steps (as opposed to every step). Therefore, the evaluation of the validation loss in our approach does not introduce notable extra computation.
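
The cheap loss signal described above might be organized as follows; the class and parameter names are ours, for illustration only.

```python
class LossProbe:
    """Cheap loss signal for BO: use the training loss (a free by-product of
    the forward pass) until tau reaches tau_max, then switch to the loss on a
    small fixed subset of validation batches, refreshed only every
    `eval_every` training steps."""
    def __init__(self, val_batches, loss_fn, num_batches=10, eval_every=50):
        self.subset = val_batches[:num_batches]   # e.g. 10 mini-batches
        self.loss_fn = loss_fn
        self.eval_every = eval_every
        self.cached = None

    def __call__(self, step, tau, tau_max, train_loss):
        if tau < tau_max:                 # early stages: training-loss proxy
            return train_loss
        if self.cached is None or step % self.eval_every == 0:
            self.cached = sum(self.loss_fn(b) for b in self.subset) / len(self.subset)
        return self.cached                # late stages: cached subset loss
```

Between refreshes the cached value is returned, so the validation pass adds only one small evaluation every 50 steps rather than one per step.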
+
+# 5 EXPERIMENTS
+
+We now evaluate AutoLRS by applying it to three widely-used and representative DNNs: ResNet-50, Transformer, and BERT. Here are some highlights:
+
+- The LR schedules computed by AutoLRS are $1.22 \times$ , $1.43 \times$ , and $1.5 \times$ faster, in terms of training steps, than the original, hand-tuned LR schedules for ResNet-50, Transformer, and BERT, respectively. Meanwhile, it improves or matches the test-set performance.
+- For each model, we carefully hand-tuned CLR and SGDR using more than ten experiments with different CLR/SGDR hyperparameters. Across the three models, the LR schedules computed by AutoLRS achieve an average speedup of $1.29 \times$ and $1.34 \times$ , in terms of training steps, over the best tuned LR schedules under CLR and SGDR, respectively. While CLR and SGDR had to be run for at least 10 trials to find a good LR schedule, AutoLRS only costs slightly over $2 \times$ the training time associated with the computed LR schedule, even after accounting for the BO search cost.
+
+- AutoLRS is robust to the change of hyperparameters and consistently finds better LR schedules than other baselines. In contrast, CLR and SGDR are sensitive to the choices of hyperparameters.
+- We perform ablation studies in §A.5.4 to demonstrate that both BO and the exponential forecasting model are essential for AutoLRS to find good LR schedules.
+- Hypergradient descent is subject to overfitting, and it is unable to match the state-of-the-art test-set performance using all the guideline values of its two hyperparameters on VGG-16 (Simonyan & Zisserman, 2015) and ResNet-50 ( $\S$ A.5.1). In contrast, AutoLRS can consistently improve or match the state-of-the-art test-set performance with different $\tau_{\mathrm{max}}$ values using fewer training steps than the hand-tuned LR schedules ( $\S$ A.5.2).
+- Using Hyperband (Li et al., 2017) for LR schedule search incurs a high computational overhead. Moreover, it cannot find an LR schedule that matches the state-of-the-art accuracy (§A.5.3).
+
+Baseline Setup: ML practitioners typically need to hand-tune LR schedules carefully for a long time to achieve satisfying performance, so the LR schedule adopted in each model's original paper is a presumably tough-to-beat baseline. For CLR and SGDR, we hand-tune their hyperparameters separately for each DNN. Hyperparameters in CLR include the low/high LRs for the LR range test to sweep, the number of steps to perform the test, the number of steps in each triangular cycle, and the choice of variant (triangular2, exp_range, 1cycle) introduced in §2. Hyperparameters in SGDR include the number of steps/epochs in each cycle and the initial LR at the beginning of each cycle. We carefully tuned these hyperparameters for each DNN and chose the LR schedule producing the best validation-set performance among $\geq 10$ trials of different hyperparameters.
+
+Hyperparameters in AutoLRS: In our default setting, we set $k = 10$ and $\tau' = \tau / 10$ so that the number of training steps spent on BO equals the number spent on updating the DNN model. We start from $\tau = 1000$ and $\tau' = 100$ and double $\tau$ and $\tau'$ after each stage until $\tau$ reaches $\tau_{\max}$ . We use $\tau_{\max} = 8000$ for ResNet-50 and Transformer, and $\tau_{\max} = 32000$ for BERT. We also tried $\tau_{\max} = 8000$ , 16000, and 32000 for each DNN and found that the resulting LR schedules are not very sensitive to $\tau_{\max}$ . (An analysis of the sensitivity to $\tau_{\max}$ is in §A.5.2.) The LR search intervals $(\eta_{\min}, \eta_{\max})$ for ResNet-50, Transformer, and BERT are $(10^{-3}, 1)$ , $(10^{-4}, 10^{-2})$ , and $(10^{-6}, 10^{-3})$ , respectively. These are easily found by an LR range test (Smith, 2017).
+
+ResNet-50: ResNet (He et al., 2016a;b) is one of the most popular DNNs for computer vision tasks. We train ResNet-50 on ImageNet (Russakovsky et al., 2015) using SGD with momentum on 32 NVIDIA Tesla V100 GPUs with data parallelism and a mini-batch size of 1024. The LR schedule in the original paper adopts a warmup phase of 5 epochs at the beginning and then performs a 3-step decay as in (Goyal et al., 2017). Figure 1a presents different LR schedules for training ResNet-50 on ImageNet. We report how their top-1 accuracy on the validation set changes during training in Figure 1b. AutoLRS achieves a speedup of $1.19 \times$ and $1.22 \times$ over SGDR and the original LR schedule, respectively, but is slightly (i.e., $5.4\%$ ) slower than CLR. Note that the best CLR result is achieved after 10 trials of heavy hand-tuning of hyperparameters. (In fact, 7 out of 10 CLR trials failed to achieve the best possible test-set accuracy, and the second-best and third-best trials are $5.4\%$ and $7.9\%$ slower than AutoLRS.) AutoLRS achieves competitive speed even though its search cost is significantly lower, comparable to the overall model-update time associated with the identified LR schedule.
+
+Transformer: Transformer (Vaswani et al., 2017) is a neural machine translation (NMT) model built upon a multi-head self-attention mechanism to capture contextual dependencies, and it achieves promising translation performance. We train the Transformer base model on a standard benchmark, the WMT 2014 English-German dataset, using 8 NVIDIA Tesla V100 GPUs. Following (Vaswani et al., 2017), we use Adam (Kingma & Ba, 2015) with $\beta_{1} = 0.9$ , $\beta_{2} = 0.98$ , and $\epsilon = 10^{-9}$ . The LR schedule in the original paper starts with a linear warmup of 4,000 steps from 0 to $7\times 10^{-4}$ , followed by 96,000 steps of decaying the LR proportionally to $1/\sqrt{t}$ at step $t$ . In AutoLRS, we use the same linear warmup. The current AutoLRS does not search LR for warmup steps since warmup
+
+
+[Figure 1 panels: (a) LR on ResNet-50; (b) validation accuracy on ResNet-50; (c) LR for Transformer; (d) BLEU of Transformer.]
+
+Figure 1: Comparison of different LR schedules in training ResNet-50 on ImageNet (a, b), and the Transformer base model (c, d). When training ResNet-50, AutoLRS, CLR, SGDR, and the original LR schedule achieve $75.9\%$ top-1 accuracy at epoch 74, 70, 88, and 90, respectively. When training Transformer base, AutoLRS, SGDR, and the original schedule achieve a 27.3 BLEU score (uncased) at step 69,000, 91,000, and 98,000, respectively. CLR (the best we were able to find) achieves a 27.2 BLEU score at step 99,000.
+
+[Figure 2 panels: (a) LR schedules (Phase 1 + 2); (b) training loss in Phase 1; (c) training loss in Phase 2.]
+
+Figure 2: Comparison of different LR schedules and training loss in pre-training $\mathrm{BERT}_{\mathrm{BASE}}$ .
+
+does not have an explicit optimization objective, such as minimizing the validation loss. Warmup usually takes very few steps, and its main purpose is to prevent deeper layers in a DNN from creating training instability (Gotmare et al., 2019). Figure 1c visualizes different LR schedules in training the Transformer model. Their BLEU scores on the test set during training are reported in Figure 1d. Overall, the LR schedule searched by AutoLRS yields a $1.32 - 1.43 \times$ speedup over the hand-tuned LR schedules. AutoLRS consistently achieves a similar amount of speedup over three trials – they achieve 27.3 BLEU score (uncased) at step 69,000, 69,000, and 70,000, respectively. Interestingly, if we continue the LR search of AutoLRS, we can get 27.4 BLEU score (uncased) at step 99,000.
+
+BERT Pre-training: BERT (Devlin et al., 2019) is a recent model that achieved state-of-the-art results on 11 NLP tasks. It first pre-trains a language representation model on a large text corpus by unsupervised learning and then fine-tunes it for downstream NLP tasks. The $\mathrm{BERT}_{\mathrm{BASE}}$ model has 110M parameters, which makes the pre-training phase expensive, and hand-tuning the LR schedule might be impractical. We pre-train $\mathrm{BERT}_{\mathrm{BASE}}$ with mixed precision (Micikevicius et al., 2018) on the English Wikipedia and BooksCorpus datasets (Zhu et al., 2015a). Following the original paper, we use Adam with an L2 weight decay of 0.01 and $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ . The pre-training is divided into two phases: Phase 1 includes $90\%$ of the total training steps and uses a sequence length of 128, while Phase 2 uses a sequence length of 512 for the remaining $10\%$ of training steps. We apply this two-phase training in the experiments of all LR schedules. We pre-train $\mathrm{BERT}_{\mathrm{BASE}}$ on 32 NVIDIA Tesla V100 GPUs using a mini-batch size of 1024 sequences, which is $4\times$ the batch size in the original paper. To adapt the original LR schedule to our batch size, we tried both the linear scaling rule (Goyal et al., 2017) and the square root scaling rule (Krizhevsky, 2014), and found that the square root scaling rule works better, while the linear scaling rule made the loss diverge.
+
+As shown in Figure 2, Phase 1 contains 150,000/225,000 steps and Phase 2 contains 16,000/25,000 steps for AutoLRS/all baselines, since AutoLRS requires far fewer total steps. In both AutoLRS and SGDR, we apply a linear warmup in the first 2,500 steps to stabilize the deeper layers of BERT. In Figures 2b and 2c, we report the training loss achieved by the different schemes.
+
+We fine-tune the pre-trained models on four downstream NLP tasks: Microsoft Research Paraphrase Corpus (MRPC) for identifying semantic textual similarity (Dolan & Brockett, 2005); Multi-Genre Natural Language Inference (MNLI) for entailment classification (Williams et al., 2018); Corpus of Linguistic Acceptability (CoLA) for predicting whether an English sentence is linguistically acceptable (Warstadt et al., 2019); and Stanford Question Answering Dataset (SQuAD) v1.1 (Rajpurkar et al., 2016). Table 1 reports the after-fine-tuning performance on the four tasks. Since fine-tuning performance is unstable on small datasets like MRPC, we fine-tuned on each task several times and report the best Dev-set performance. It shows that the model pre-trained by AutoLRS outperforms
+
+Table 1: Fine-tuning $\mathrm{BERT}_{\mathrm{BASE}}$ pre-trained using different LR schedules on 4 downstream tasks. We report the accuracy on the Dev set of MRPC, MNLI, and CoLA, and F1 scores on the Dev set of SQuAD v1.1.
+
+| LR schedule (Phase 1/Phase 2) | MRPC | MNLI | CoLA | SQuAD v1.1 |
+| --- | --- | --- | --- | --- |
+| Original (225,000/25,000) | 86.5 | 82.2 | 47.8 | 87.0 |
+| CLR (225,000/25,000) | 86.0 | 80.7 | 44.4 | 86.5 |
+| SGDR (225,000/25,000) | 84.8 | 81.6 | 38.7 | 86.2 |
+| AutoLRS (150,000/16,000) | 88.0 | 82.5 | 47.6 | 87.1 |
+
+Table 2: Performance comparison with LR schedules searched by prior solutions on CIFAR-10 training with VGG-16 (batch size = 128). Note that the hand-tuned LR schedule can achieve 93.70% top-1 test accuracy in 350 epochs. The Runtime column shows how long each method takes on one NVIDIA Titan RTX GPU to find the LR schedule shown in the previous column. The runtime of HD and MARTHE include trying the guideline values of their hyperparameters to get a decent LR schedule.
+
+| Method | Best top-1 accuracy achieved in 350 epochs | Runtime (seconds) |
+| --- | --- | --- |
+| HD | 91.31% | 187,110 |
+| MARTHE | 92.99% | 67,578 |
+| Hyperband | 93.24% | 109,454 |
+| AutoLRS | 94.13% | 6,538 |
+
+those using other LR schedules in most downstream tasks and meanwhile achieves a speedup of $1.5 \times$ . Note AutoLRS consistently achieves this speedup over 3 trials (details in §A.5.5). We also tried pre-training using other LR schedules for fewer steps but the fine-tuning performances were worse. Notably, when we use CLR and SGDR for pre-training $\mathrm{BERT}_{\mathrm{BASE}}$ , the training loss diverged after 100,000 steps in several trials, even as we decreased the maximal LR and increased the number of steps per cycle. This illustrates how difficult and computationally intensive it is to hand-tune the hyperparameters of existing LR schedules on complicated models and tasks. In contrast, AutoLRS significantly simplifies the process and saves human effort.
+
+Experimental comparison to prior methods: Hypergradient descent (HD) (Baydin et al., 2018) is a hypergradient-based method that adjusts the learning rate in an online fashion by deriving the derivative of the training loss with respect to the learning rate and performing gradient descent on the learning rate during training. MARTHE (Donini et al., 2020) is a generalization of two hypergradient-based methods, HD and RTHO (Franceschi et al., 2017). One distinction between MARTHE and HD is that MARTHE computes the gradient of the validation loss, instead of the training loss, with respect to the learning rate. Hyperband (Li et al., 2017) is a multi-armed bandit approach to DNN hyperparameter optimization. We use HD, MARTHE, and Hyperband to tune the LR schedules for CIFAR-10 training with VGG-16 and compare their performance with AutoLRS in Table 2. AutoLRS achieves a higher best top-1 test accuracy than the other methods as well as the hand-tuned LR schedule, with much less overhead. Detailed descriptions of these methods and the experimental results are in §A.5.1 and §A.5.3.
+
+# 6 CONCLUSION
+
+We propose an automatic learning-rate scheduling method, AutoLRS, as a more efficient and versatile alternative to hand-tuning that can be broadly applied to train different DNNs for tasks in diverse application domains. We break the LR-schedule optimization down into a sequence of LR searches, each minimizing the validation loss of one training stage, and solve each sub-problem by Bayesian optimization (BO). To reduce the cost of BO exploration, we train a light-weight loss-forecasting model on the early-stage training dynamics of each BO evaluation. AutoLRS achieves speedups of $1.22\times$ , $1.43\times$ , and $1.5\times$ in training ResNet-50, Transformer, and BERT, respectively, compared to their highly hand-tuned schedules.
+
+# ACKNOWLEDGMENTS
+
+We would like to thank the anonymous ICLR reviewers for their valuable feedback. We would also like to thank Damien Fay for his suggestions on time series analysis. This work was partially supported by DARPA. For computer time, this research used the resources at ByteDance and the Supercomputing Laboratory at KAUST.
+
+# REFERENCES
+
+Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overparameterized neural networks, going beyond two layers. In Advances in neural information processing systems, pp. 6158-6169, 2019a.
+Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pp. 242-252, 2019b.
+Luís B Almeida, Thibault Langlois, José D Amaral, and Alexander Plakhov. Parameter adaptation in stochastic optimization. On-Line Learning in Neural Networks, Publications of the Newton Institute, pp. 111-134, 1998.
+Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3(Nov):397-422, 2002.
+Atılım Güneş Baydin, Robert Cornish, David Martínez Rubio, Mark Schmidt, and Frank Wood. Online learning rate adaptation with hypergradient descent. In International Conference on Learning Representations, 2018.
+James Bergstra, Dan Yamins, and David D Cox. Hyperopt: A python library for optimizing the hyperparameters of machine learning algorithms. In Proceedings of the 12th Python in science conference, pp. 13-20. CiteSeer, 2013.
+Dennis D Cox and Susan John. A statistical method for global optimization. In Proceedings of the 1992 IEEE International Conference on Systems, Man, and Cybernetics, pp. 1241-1246. IEEE, 1992.
+Zhongxiang Dai, Haibin Yu, Bryan Kian Hsiang Low, and Patrick Jaillet. Bayesian optimization meets Bayesian optimal stopping. In International Conference on Machine Learning, pp. 1496-1506, 2019.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.
+William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005.
+Tobias Domhan, Jost Tobias Springenberg, and Frank Hutter. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In *IJCAI*, pp. 3460-3468, 2015.
+Michele Donini, Luca Franceschi, Orchid Majumder, Massimiliano Pontil, and Paolo Frasconi. MARTHE: Scheduling the learning rate via online hypergradients. In *IJCAI-20*, pp. 2119–2125, 2020.
+John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(61):2121-2159, 2011.
+Stefan Falkner, Aaron Klein, and Frank Hutter. BOHB: Robust and efficient hyperparameter optimization at scale. arXiv preprint arXiv:1807.01774, 2018.
+Luca Franceschi, Michele Donini, Paolo Frasconi, and Massimiliano Pontil. Forward and reverse gradient-based hyperparameter optimization. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1165-1173. PMLR, 2017.
+Marc G Genton. Classes of kernels for machine learning: a statistics perspective. Journal of machine learning research, 2(Dec):299-312, 2001.
+Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT press, 2016.
+
+Akhilesh Gotmare, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. A closer look at deep learning heuristics: Learning rate restarts, warmup and distillation. In International Conference on Learning Representations, 2019.
+Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016a.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630-645. Springer, 2016b.
+Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In International conference on learning and intelligent optimization, pp. 507-523. Springer, 2011.
+Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, et al. Population based training of neural networks. arXiv preprint arXiv:1711.09846, 2017.
+Kevin Jamieson and Ameet Talwalkar. Non-stochastic best arm identification and hyperparameter optimization. In Artificial Intelligence and Statistics, pp. 240-248, 2016.
+Stanisław Jastrzębski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. Three factors influencing minima in SGD. arXiv preprint arXiv:1711.04623, 2017.
+Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M Kakade, and Michael I Jordan. How to escape saddle points efficiently. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1724-1732. JMLR.org, 2017.
+Kenji Kawaguchi. Deep learning without poor local minima. In Advances in neural information processing systems, pp. 586-594, 2016.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
+A. Klein, Stefan Falkner, Jost Tobias Springenberg, and F. Hutter. Learning curve prediction with Bayesian neural networks. In International Conference on Learning Representations, 2017.
+A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
+Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.
+Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. CIFAR-100 (Canadian Institute for Advanced Research). URL http://www.cs.toronto.edu/~kriz/cifar.html.
+Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. In Advances in Neural Information Processing Systems, pp. 6389-6399, 2018a.
+Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Jonathan Ben-Tzur, Moritz Hardt, Benjamin Recht, and Ameet Talwalkar. A system for massively parallel hyperparameter tuning. In Proceedings of Machine Learning and Systems, 2020.
+Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. The Journal of Machine Learning Research, 18(1):6765-6816, 2017.
+Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2017.
+
+Maren Mahsereci and Philipp Hennig. Probabilistic line searches for stochastic optimization. In Advances in Neural Information Processing Systems, volume 28, 2015.
+Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. Mixed precision training. In International Conference on Learning Representations, 2018.
+Kamil Nar and Shankar Sastry. Step size matters in deep learning. In Advances in Neural Information Processing Systems, pp. 3436-3444, 2018.
+Jack Parker-Holder, Vu Nguyen, and Stephen Roberts. Provably efficient online hyperparameter optimization with population-based bandits. Advances in Neural Information Processing Systems, 2020.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383-2392, Austin, Texas, November 2016. Association for Computational Linguistics.
+C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA, USA, January 2006.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115 (3):211-252, 2015.
+Tom Schaul, Sixin Zhang, and Yann LeCun. No more pesky learning rates. In International Conference on Machine Learning, pp. 343-351, 2013.
+Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando De Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1): 148-175, January 2016. ISSN 0018-9219.
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
+Leslie N Smith. Cyclical learning rates for training neural networks. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 464-472. IEEE, 2017.
+Leslie N Smith. A disciplined approach to neural network hyper-parameters: Part 1-learning rate, batch size, momentum, and weight decay. arXiv preprint arXiv:1803.09820, 2018.
+Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pp. 2951-2959, 2012.
+Kevin Swersky, Jasper Snoek, and Ryan Prescott Adams. Freeze-thaw bayesian optimization. arXiv preprint arXiv:1406.3896, 2014.
+T. Tieleman and G. Hinton. Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
+Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641, 2019.
+Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, and Roger Grosse. Flipout: Efficient pseudo-independent weight perturbations on mini-batches. In International Conference on Learning Representations, 2018.
+
+Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112–1122, New Orleans, Louisiana, June 2018. Association for Computational Linguistics.
+Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. Understanding short-horizon bias in stochastic meta-optimization. In International Conference on Learning Representations, 2018.
+Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
+Arber Zela, Aaron Klein, Stefan Falkner, and Frank Hutter. Towards automated deep learning: Efficient joint neural architecture and hyperparameter search. In ICML 2018 AutoML Workshop, July 2018.
+Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV '15, pp. 19-27, USA, 2015a. IEEE Computer Society. ISBN 9781467383912.
+Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. BookCorpus website. https://yknzhu.wixsite.com/web, 2015b.
+
+# A APPENDIX
+
+# A.1 BAYESIAN OPTIMIZATION (MORE DETAILS)
+
+BO (Shahriari et al., 2016) is one of the state-of-the-art techniques for black-box optimization. It trades off exploration and exploitation on the black-box objective by sequentially and actively querying the function values of selected inputs. Specifically, BO uses a Gaussian process as a surrogate model to fit the black-box objective function $f(\eta)$. It updates a posterior distribution of $f(\eta)$ using its likelihood on newly evaluated $(\eta, y = f(\eta) + \epsilon)$ pairs, where $y$ is a noisy observation of $f(\eta)$ and, in our case, is the validation loss after $\tau$ steps. Then, it determines the next LR $\eta$ to evaluate as the one maximizing an acquisition function computed from the updated posterior. The acquisition function trades off exploration against exploitation when evaluating candidate LRs. BO repeats this process until it achieves a precise posterior predictive distribution of $f(\eta)$.
+
+Surrogate model (prior): We apply a commonly used surrogate model, a Gaussian process (GP) (Rasmussen & Williams, 2006), as the prior of the black-box objective function in Eq. (2). A GP prior is specified by its mean function $\mu(\cdot)$ and its covariance (i.e., kernel) function $K(\cdot, \cdot)$. We adopt the common choice $\mu(\cdot) = 0$ and set $K(\cdot, \cdot)$ to be the Matern kernel (Genton, 2001) with smoothness factor $\nu = 2.5$ and length scale $l = 1$, which is defined as
+
+$$
+K \left(\eta_ {i}, \eta_ {j}\right) = \frac {1}{\Gamma (\nu) 2 ^ {\nu - 1}} \left(\frac {\sqrt {2 \nu} \| \eta_ {i} - \eta_ {j} \| _ {2}}{l}\right) ^ {\nu} K _ {\nu} \left(\frac {\sqrt {2 \nu} \| \eta_ {i} - \eta_ {j} \| _ {2}}{l}\right), \tag {4}
+$$
+
+where $K_{\nu}(\cdot)$ is a modified Bessel function, $\Gamma (\cdot)$ is the gamma function, and $K(\eta_i,\eta_j)$ performs a convolution of the unit ball. Compared to the radial basis function (RBF) kernel, which always generates infinitely differentiable functions that might be overly smooth, a GP with the Matern kernel can control the smoothness of the generated functions to be $\lceil \nu \rceil -1$ times differentiable (Rasmussen & Williams, 2006). This helps capture less-smooth local changes. In our case, $\nu = 2.5$ leads to twice-differentiable functions.
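For scalar LRs, the Matern kernel with $\nu = 2.5$ in Eq. (4) simplifies to a well-known closed form that avoids evaluating the Bessel function directly. A minimal sketch (the function name and the default length scale $l = 1$ are ours, following the choices in the text):

```python
import math

def matern_25(eta_i, eta_j, length_scale=1.0):
    # Closed form of the Matern kernel for nu = 2.5:
    #   K(r) = (1 + sqrt(5) r/l + 5 r^2 / (3 l^2)) * exp(-sqrt(5) r/l)
    # For scalar learning rates, ||eta_i - eta_j||_2 is just |eta_i - eta_j|.
    s = math.sqrt(5.0) * abs(eta_i - eta_j) / length_scale
    return (1.0 + s + s * s / 3.0) * math.exp(-s)
```

As expected for a stationary kernel, the value is 1 at zero distance and decays as the two LRs move apart.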
+
+Posterior prediction: In the following, we use the simplified notations $\eta_{1:k}^{\prime}$ and $f(\eta_{1:k}^{\prime})$ for the vectors composed of $\{\eta_i'\}_{i=1}^k$ and $\{f(\eta_i')\}_{i=1}^k$ , respectively. The GP prior induces a Gaussian distribution over function values, i.e., $f(\eta_{1:k}') \sim \mathcal{N}(\mathbf{0}, \mathbf{K})$ where $\mathbf{K}_{i,j} = K(\eta_i', \eta_j')$ , $\forall i, j \in [k]$ . After $\tau$ training steps using LR $\eta_i'$ , we evaluate the validation loss, denoted by $y_i$ , as a noisy observation of $f(\eta_i')$ , i.e., $y_i = f(\eta_i') + \epsilon$ , where $\epsilon \sim \mathcal{N}(0, \sigma^2)$ is Gaussian white noise. Given the noisy observations $y_{1:k}$ , we can update the GP posterior of the black-box function $f(\cdot)$ as
+
+$$
+f \left(\eta_ {1: k} ^ {\prime}\right) \mid \eta_ {1: k} ^ {\prime}, y _ {1: k} \sim \mathcal {N} \left(y _ {1: k}, \mathbf {K} + \sigma^ {2} \mathbf {I}\right). \tag {5}
+$$
+
+Given a new LR $\eta$ , we can now use the above GP posterior to predict the distribution of $f(\eta)$ by the following reasoning based on Bayes' theorem, i.e.,
+
+$$
+P (f (\eta) \mid \eta_ {1: k} ^ {\prime}, y _ {1: k}) = \int P (f (\eta) \mid f \left(\eta_ {1: k} ^ {\prime}\right)) P (f \left(\eta_ {1: k} ^ {\prime}\right) \mid \eta_ {1: k} ^ {\prime}, y _ {1: k}) d f \left(\eta_ {1: k} ^ {\prime}\right), \tag {6}
+$$
+
+which yields the posterior predictive distribution of $f(\eta)$ as
+
+$$
+f (\eta) | \eta_ {1: k} ^ {\prime}, y _ {1: k} \sim \mathcal {N} \left(\mu_ {n} (\eta), \sigma_ {n} ^ {2} (\eta)\right),
+$$
+
+$$
+\mu_ {n} (\eta) \triangleq \mathbf {k} ^ {T} (\mathbf {K} + \sigma^ {2} \mathbf {I}) ^ {- 1} y _ {1: k}, \tag {7}
+$$
+
+$$
+\sigma_ {n} ^ {2} (\eta) \triangleq K (\eta , \eta) - \mathbf {k} ^ {T} (\mathbf {K} + \sigma^ {2} \mathbf {I}) ^ {- 1} \mathbf {k}.
+$$
+
+where $\mathbf{k}_i = K(\eta, \eta_i')$ . The above result for a single LR $\eta$ extends trivially to multiple LRs.
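Eq. (7) can be computed directly from a kernel matrix and two linear solves. The sketch below is a pure-Python illustration under the assumptions of the text (Matern-2.5 kernel, scalar LRs); the `solve` helper and the example noise level are ours, not part of AutoLRS:

```python
import math

def matern_25(x, y, l=1.0):
    # Matern-2.5 kernel (closed form of Eq. (4) for scalar LRs).
    s = math.sqrt(5.0) * abs(x - y) / l
    return (1.0 + s + s * s / 3.0) * math.exp(-s)

def solve(A, b):
    # Tiny Gaussian elimination with partial pivoting; k is small here,
    # so no numerical library is needed.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(eta, etas, ys, sigma2=1e-2):
    # Eq. (7): mu  = k^T (K + sigma^2 I)^{-1} y,
    #          var = K(eta, eta) - k^T (K + sigma^2 I)^{-1} k.
    K = [[matern_25(a, b) + (sigma2 if i == j else 0.0)
          for j, b in enumerate(etas)] for i, a in enumerate(etas)]
    k = [matern_25(eta, a) for a in etas]
    alpha = solve(K, ys)   # (K + sigma^2 I)^{-1} y
    beta = solve(K, k)     # (K + sigma^2 I)^{-1} k
    mu = sum(ki * ai for ki, ai in zip(k, alpha))
    var = matern_25(eta, eta) - sum(ki * bi for ki, bi in zip(k, beta))
    return mu, var
```

Querying at an already-evaluated LR returns a mean pulled toward that observation and a small residual variance, exactly the shrinkage behavior Eq. (7) encodes.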
+
+Acquisition function: Given the posterior predictive distribution of $f(\eta)$ in Eq. (7), BO finds the next $\eta_{i+1}'$ to evaluate based on an acquisition function $u_i(\eta)$ defined by the posterior mean $\mu_i(\eta)$ and standard deviation $\sigma_i(\eta)$ . A good acquisition function should balance exploration (i.e., large $\sigma_i(\eta)$ ) against exploitation (i.e., small $\mu_i(\eta)$ ). In AutoLRS, we use the Lower Confidence Bound (LCB) (Cox & John, 1992; Auer, 2002) as our acquisition function. In particular, given $\eta_{1:k}'$ and their corresponding validation losses $y_{1:k}$ , we determine the next LR $\eta_{i+1}'$ by minimizing the LCB, i.e.,
+
+$$
+\eta_ {i + 1} ^ {\prime} = \underset {\eta} {\arg \min } u _ {i} (\eta), u _ {i} (\eta) \triangleq \mu_ {i} (\eta) - \kappa \sigma_ {i} (\eta), \tag {8}
+$$
+
+where $\mu_{i}(\eta)$ and $\sigma_{i}(\eta)$ are defined in Eq. (7), and $\kappa$ is a positive hyperparameter that balances the trade-off between exploration and exploitation. In our experiments, we set $\kappa = 1000$ , which works consistently well.
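Minimizing the LCB in Eq. (8) over a continuous LR interval can be approximated by evaluating the acquisition on a log-spaced grid of candidates. A sketch, where `mu` and `sigma` stand for the posterior mean and standard deviation of the fitted GP (the grid size and interval endpoints are illustrative):

```python
import math

def next_lr_by_lcb(mu, sigma, eta_min=1e-3, eta_max=1e-1, kappa=1000.0, n=64):
    # Minimize the LCB acquisition u(eta) = mu(eta) - kappa * sigma(eta)
    # of Eq. (8) over a log-spaced grid of candidate LRs.
    lo, hi = math.log(eta_min), math.log(eta_max)
    grid = [math.exp(lo + (hi - lo) * i / (n - 1)) for i in range(n)]
    return min(grid, key=lambda e: mu(e) - kappa * sigma(e))
```

With zero uncertainty everywhere, this reduces to exploiting the posterior mean; with a flat mean, the large $\kappa$ drives it toward the most uncertain candidate.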
+
+
+(a) Training loss during 100 training steps and fitting it by an exponential time-series forecasting model.
+
+
+(b) Validation loss during 800 training steps and fitting it by an exponential time-series forecasting model.
+
+
+(c) A corner case when exponential model cannot fully capture the non-monotone change of the loss during the first 50 steps.
+Figure 3: Fitting the time-series of loss by exponential model when training ResNet-50 on ImageNet.
+
+# A.2 EXPONENTIAL MODEL (MORE DETAILS)
+
+We take two steps to estimate $a$ , $b$ , and $c$ when fitting the exponential model $L(t) = a\exp (bt) + c$ to the validation loss observed in the first $\tau^{\prime}$ steps, denoted by $y_{t}, t = 1,\dots ,\tau^{\prime}$ . First, we reduce the fitting problem to a one-dimensional optimization problem: define $g(b)$ as the minimum squared error between predictions and observations over $a$ and $c$ . The original fitting problem can then be written in the following two-stage form.
+
+$$
+\min _ {b < 0} g (b), \quad g (b) \triangleq \min _ {a, c} \sum_ {t = 1} ^ {\tau^ {\prime}} \left(a \exp (b t) + c - y _ {t}\right) ^ {2} \tag {9}
+$$
+
+This is a one-dimensional optimization problem. Moreover, with $b$ fixed, the inner minimization over $a, c$ is a linear regression problem with a closed-form solution. Hence, we apply a simple gradient descent method that starts from an initial $b$ , computes the linear least-squares solution for $a, c$ under that $b$ , updates $b$ by a gradient descent step, and repeats the last two steps. In practice, this achieves a fast decrease in the regression error. In addition, to enforce the negativity constraint on $b$ , we re-parameterize it as $b \gets -\exp(b')$ . The problem then reduces to
+
+$$
+\min _ {b ^ {\prime}} \min _ {a, c} \sum_ {t = 1} ^ {\tau^ {\prime}} \left(a \exp \left(- \exp \left(b ^ {\prime}\right) t\right) + c - y _ {t}\right) ^ {2} \tag {10}
+$$
+
+Although other strategies might exist for optimizing Eq. (9), we find the above method stable and fast in reducing the regression error, which keeps the fitting process highly efficient.
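The two-stage fit of Eqs. (9)-(10) can be sketched as follows. The inner problem is solved in closed form by linear regression on $x_t = \exp(bt)$; for the outer one-dimensional problem, we substitute a coarse grid search over $b'$ (with an assumed range) for the gradient descent described above, purely to keep the sketch short:

```python
import math

def fit_ac(b, ys):
    # Inner problem of Eq. (9): with b fixed, (a, c) is plain linear
    # regression of y_t on x_t = exp(b * t), solved in closed form.
    xs = [math.exp(b * t) for t in range(1, len(ys) + 1)]
    n = len(ys)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - a * sx) / n
    return a, c

def fit_exponential(ys, n_grid=200):
    # Outer 1-D problem of Eq. (10) over b', with b = -exp(b').  The paper
    # uses gradient descent on b'; a coarse grid search over an assumed
    # range of b' keeps this sketch simple.
    best = None
    for i in range(n_grid):
        bp = -8.0 + 10.0 * i / (n_grid - 1)  # b' in [-8, 2]
        b = -math.exp(bp)
        a, c = fit_ac(b, ys)
        err = sum((a * math.exp(b * t) + c - y) ** 2
                  for t, y in enumerate(ys, start=1))
        if best is None or err < best[0]:
            best = (err, a, b, c)
    return best[1], best[2], best[3]
```

On noiseless data generated from the model itself, this recovers the decay rate $b$ up to the grid resolution.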
+
+We empirically test whether the exponential model obtained by our method fits the loss time-series well in different cases. Figure 3a and Figure 3b show two typical examples of fitting the time series of training loss and validation loss with the proposed model. They show that the model can precisely predict the main trends of the time-varying loss while filtering out less informative noise.
+
+In Figure 3c, we also show a rare corner case in which the model fails to fit the increasing loss in the early steps. However, the loss-increasing stage usually does not last long, so the inaccuracy does little harm to the prediction of the later-stage loss, which is our main goal since $\tau$ is usually larger than the length of the loss-increasing stage. To handle such corner cases and outliers in the observed validation loss series, we present a pre-processing strategy in §A.3 that stabilizes the exponential fitting. Every time we predict the validation loss after $\tau$ steps, we first pre-process the loss observed in the $\tau'$ steps, and then fit the pre-processed loss series with the exponential model.
+
+In our empirical study, we also tried other, more sophisticated time-series forecasting models, including Holt-Winters, autoregressive integrated moving average (ARIMA), and singular spectrum analysis (SSA). Figure 4 shows two examples comparing their performance with our simple exponential prediction model. Some prior works also fit and predict learning curves (Swersky et al., 2014; Domhan et al., 2015; Klein et al., 2017; Dai et al., 2019) to terminate evaluations of poorly performing hyperparameters early during DNN hyperparameter optimization, but they need non-negligible time to train their models and perform inference. They are much more computationally intensive than our lightweight exponential prediction model, which makes them less practical for automatic LR schedule tuning.
+
+
+(a) Predict the training loss after 2000 steps.
+
+
+(b) Predict the training loss after 4000 steps.
+
+
+Figure 4: Examples of forecasting the loss series by various time-series forecasting models when training ResNet-50 on ImageNet. Our simple exponential prediction model yields the least mean squared error (MSE) among all the models.
+
+
+Figure 5: The loss sequence in Figure 3c and its quadratic spline smoothing result after 1, 5, and 10 iterations of our spline smoothing.
+
+
+
+# A.3 PRE-PROCESS LOSS SERIES BY ITERATIVE SPLINE SMOOTHING
+
+We show a corner case in Figure 3c where the loss decreases rapidly at first, then increases for a while, but finally decreases stably. It might result from a large LR, or happen when escaping from a possibly poor local minimum or a saddle point (Goodfellow et al., 2016). Our exponential model cannot fully capture the loss change in the early steps of this case. However, we also consistently observe in our experiments that the early instability of the loss lasts for at most hundreds of steps after we switch to a new LR.
+
+Nevertheless, we find that adding a pre-processing step to eliminate the noise, anomalies, and corner cases in the observed validation loss series makes the exponential fitting easier and more stable. Hence, we apply an iterative spline smoothing to the validation loss observed in the $\tau'$ steps before training the forecasting model. In particular, when evaluating an LR $\eta$ for a training stage, we first run $\tau'$ training steps and fit the observed sequence of validation losses with a quadratic spline. We run this spline smoothing for multiple iterations. At the end of each iteration, we remove the loss values that are among the farthest $3\%$ from the spline-smoothing result, provided they were collected in the first $\tau'/2$ steps (when corner cases like the one in Figure 3c might happen), so the next iteration's spline smoothing only fits the remaining loss values. After a fixed number of iterations, we use the final spline-smoothing values to train the exponential forecasting model.
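The iterative smoothing loop can be sketched as below. For brevity, a single least-squares quadratic stands in for the quadratic spline of the paper; the part being illustrated is the outlier-removal logic (drop the farthest 3% of points, restricted to the first $\tau'/2$ steps, then refit). All names and the helper solver are ours:

```python
def gauss(A, b):
    # Small dense linear solve (Gaussian elimination, partial pivoting).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def quad_fit(ts, ys):
    # Least-squares quadratic via normal equations.  A single quadratic
    # stands in here for the quadratic *spline* used in the paper.
    S = [[sum(float(t) ** (i + j) for t in ts) for j in range(3)] for i in range(3)]
    r = [sum(y * float(t) ** i for t, y in zip(ts, ys)) for i in range(3)]
    p = gauss(S, r)
    return lambda t: p[0] + p[1] * t + p[2] * t * t

def iterative_smooth(ys, iters=10, drop_frac=0.03):
    # Repeatedly fit, then drop the loss values farthest from the fit,
    # but only among those observed in the first half of the window.
    pts = list(enumerate(ys, start=1))
    half = len(ys) // 2
    for _ in range(iters):
        f = quad_fit([t for t, _ in pts], [y for _, y in pts])
        early = sorted(((abs(y - f(t)), t) for t, y in pts if t <= half),
                       reverse=True)
        drop = {t for _, t in early[:max(1, int(drop_frac * len(pts)))]}
        pts = [(t, y) for t, y in pts if t not in drop]
    f = quad_fit([t for t, _ in pts], [y for _, y in pts])
    return [f(t) for t in range(1, len(ys) + 1)]
```

An early spike like the one in Figure 3c has the largest residual in the first iteration, so it is removed before the final fit, leaving the smoothed sequence close to the underlying trend.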
+
+Empirically, we find that 10 iterations of the above spline smoothing are needed before training the exponential forecasting model. Figure 5 shows the loss sequence from Figure 3c before smoothing and after 1, 5, and 10 iterations of smoothing. As the plots show, the iterative spline smoothing effectively removes the unnecessary noise and unstable changes during the early phase.
+
+# A.4 POSTERIOR LEARNED BY BO
+
+Figure 6 shows how the BO posterior gradually becomes an increasingly accurate estimate of the underlying black-box objective. We visualize the "learning progress" of BO at an earlier stage and a later stage of training. In both stages, by exploring more LRs, BO achieves a more accurate posterior estimate of the objective function, and $k = 10$ suffices to obtain a satisfactory estimate. Moreover, the posteriors in the later stage have much smaller variance/uncertainty than those in the earlier stage.
+
+
+Figure 6: BO's posterior of the black-box objective function after exploring $i$ LRs (red dots $\bullet$ ) determined by Eq. (3) at an early stage and a late stage during the training of ResNet-50 on ImageNet. The dashed lines show the mean function $\mu_i(\eta)$ (indicating the predicted validation loss of applying LR $\eta$ for $\tau$ steps) and the shaded areas show the standard deviation $\sigma_i(\eta)$ (indicating the prediction uncertainty) in the form of $\mu_i(\eta) \pm \sigma_i(\eta)$ .
+
+# A.5 EXPERIMENTS (MORE DETAILS)
+
+# A.5.1 ANALYSIS OF ONLINE LEARNING RATE ADAPTATION WITH HYPERGRADIENT-BASED METHODS
+
+Hypergradient descent (HD) (Baydin et al., 2018) is a method to adjust the learning rate in an online fashion by performing gradient descent on the learning rate at the same time as the underlying DNN is optimized. For simplicity, we rewrite Eq. (1), which performs mini-batch SGD updates on model weights $\theta$ at each step $t$ , as:
+
+$$
+\theta_ {t + 1} = \theta_ {t} - \eta_ {t} \nabla L (\theta_ {t}), \tag {11}
+$$
+
+where $\eta_{t}$ is the learning rate (LR) at step $t$ and $\nabla L(\theta_t)$ denotes the gradient of the loss function $L$ w.r.t. the model weights $\theta_{t}$ at step $t$ . By making the assumption that the optimal value of LR does not change much between two consecutive iterations, HD derives the partial derivative of the loss function $L$ with respect to the learning rate $\eta$ :
+
+$$
+\frac {\partial L \left(\theta_ {t}\right)}{\partial \eta} = \nabla L \left(\theta_ {t}\right) \frac {\partial \left(\theta_ {t - 1} - \eta \nabla L \left(\theta_ {t - 1}\right)\right)}{\partial \eta} = \nabla L \left(\theta_ {t}\right) (- \nabla L \left(\theta_ {t - 1}\right)) \tag {12}
+$$
+
+An update rule for the learning rate is constructed as:
+
+$$
+\eta_ {t + 1} = \eta_ {t} - \beta \frac {\partial L \left(\theta_ {t}\right)}{\partial \eta} = \eta_ {t} + \beta \nabla L \left(\theta_ {t}\right) \nabla L \left(\theta_ {t - 1}\right), \tag {13}
+$$
+
+which introduces a hyperparameter $\beta$ , the hypergradient learning rate. Updating the learning rate costs a single inner product between the gradient of the model weights at the previous step and the one at the current step. By updating both the learning rate $\eta_t$ and the model weights $\theta_t$ using Eq. (13) and Eq. (11) at each step, the HD algorithm performs gradient descent on both the learning rate and the model weights during training.
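The per-step coupling of Eq. (11) and Eq. (13) can be sketched on a scalar toy problem, here $L(\theta) = \theta^2$ with gradient $2\theta$ (the function, hyperparameter values, and names are illustrative, not from the paper):

```python
def sgd_hd(grad, theta, eta=0.01, beta=1e-6, steps=100):
    # SGD with hypergradient descent on a scalar problem: before each
    # weight update (Eq. (11)), the LR is adjusted by the product of the
    # current and previous gradients (Eq. (13)).
    g_prev = grad(theta)
    theta -= eta * g_prev
    for _ in range(steps - 1):
        g = grad(theta)
        eta += beta * g * g_prev   # Eq. (13); a dot product for vectors
        theta -= eta * g           # Eq. (11)
        g_prev = g
    return theta, eta
```

While consecutive gradients stay aligned, the LR grows and the iterate keeps shrinking toward the minimizer; once gradients start to disagree, the same rule shrinks the LR.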
+
+HD can be applied to optimizers including SGD, SGD with Nesterov momentum, and Adam. The original paper empirically shows that these optimizers equipped with HD are much less sensitive to the choice of the initial regular learning rate, and that the convergence rate improves on a set of tasks. However, the paper only compares HD with constant-LR baselines on small models and small datasets. To study how HD compares to hand-tuned LR schedules on larger models and datasets, we train VGG-16 (Simonyan & Zisserman, 2015) and ResNet-50 neural networks on the CIFAR-10 image recognition dataset (Krizhevsky & Hinton, 2009) with a mini-batch size of 128 using a PyTorch implementation. A hand-tuned LR schedule consists of a total of 350 epochs, starting at 0.1 and multiplying the learning rate by 0.1 at epochs 150 and 250. This hand-tuned LR schedule achieves around $93.70\%$ and $95.56\%$ top-1 accuracy on the test set for VGG-16 and ResNet-50, respectively, when we train the models on one NVIDIA Titan RTX GPU. We apply SGD with HD (SGD-HD) to train the two models, sweep all the suggested values of the two hyperparameters (regular LR and
+
+hypergradient LR) in SGD-HD, and report the best top-1 accuracy that SGD-HD can achieve for VGG-16 and ResNet-50 within 500 epochs in Table 4 and Table 5. We make three observations: (1) Hypergradient descent is very sensitive to the selection of the regular LR and the hypergradient LR: across all suggested values of the two hyperparameters, the top-1 accuracy ranges from $10.00\%$ to $91.80\%$ for VGG-16 and from $10.00\%$ to $92.52\%$ for ResNet-50. (2) It cannot match the top-1 accuracy achieved with hand-tuned LR schedules: the best top-1 accuracy it achieves among all hyperparameter settings is $1.90\%$ and $3.04\%$ behind the hand-tuned accuracy for VGG-16 and ResNet-50, respectively, even though we ran each setting for 150 more epochs than the hand-tuned LR schedule. (3) It is prone to overfitting. For example, when using regular $\mathrm{LR} = 10^{-3}$ and hypergradient $\mathrm{LR} = 10^{-4}$ to train VGG-16, the top-1 accuracy is only $90.74\%$ while the training accuracy is already $99.98\%$ .
+
+MARTHE (Donini et al., 2020) adaptively interpolates between two hypergradient-based methods, HD and RTHO (Franceschi et al., 2017), and computes the gradient of the loss function on the validation set, instead of the training set, w.r.t. the learning rate. Besides the two hyperparameters of HD, MARTHE introduces another hyperparameter $\mu$ that controls how quickly past history is forgotten. We sample $\mu$ between 0.9 and 0.999, sample the hypergradient LR log-uniformly in $[10^{-6}, 10^{-3}]$ , and set the initial LR to 0.1, following how the MARTHE paper sets its hyperparameters for training VGG-11 on CIFAR-10. We apply SGD with MARTHE to train VGG-16 on CIFAR-10. The best top-1 accuracy MARTHE achieves among all hyperparameter settings in 350 epochs is $92.99\%$ , which is $0.71\%$ lower than the accuracy achieved with the hand-tuned LR schedule.
+
+# A.5.2 SENSITIVITY TEST OF $\tau_{\mathrm{max}}$ IN AutoLRS AND MEASURE OF VARIABILITY
+
+Recall from §4.4 that AutoLRS starts with $\tau = 1000$ and $\tau' = 100$ , and doubles them after every stage until $\tau$ reaches $\tau_{\mathrm{max}}$ . We test the sensitivity of AutoLRS to this hyperparameter, $\tau_{\mathrm{max}}$ , by comparing the LR schedules generated with different $\tau_{\mathrm{max}}$ values for the VGG-16 neural network on CIFAR-10, as in §A.5.1. The LR search interval $(\eta_{\mathrm{min}}, \eta_{\mathrm{max}})$ we use is $(10^{-3}, 10^{-1})$ . Table 6 reports, for each $\tau_{\mathrm{max}}$ value, the training epochs needed to reach the target $93.70\%$ top-1 accuracy using the generated LR schedules over 5 trials. AutoLRS with different $\tau_{\mathrm{max}}$ values consistently reaches the target top-1 accuracy of the hand-tuned LR schedule (i.e., $93.70\%$ ) in fewer training epochs. We also see that the best AutoLRS-generated LR schedule achieves $94.13\%$ top-1 accuracy within 350 training epochs (excluding the costs of the LR search).
+
+In the last column of Table 6, we report the mean and standard deviation of the top-1 accuracy achieved by AutoLRS over 5 trials for each $\tau_{\mathrm{max}}$ . To further measure the variability of AutoLRS, we train VGG-16 on CIFAR-100 (Krizhevsky et al.) with a mini-batch size of 128. A carefully hand-tuned LR schedule consists of a total of 200 epochs, starting at 0.1 and dividing the learning rate by 5 at epochs 60, 120, and 160. This hand-tuned LR schedule achieves $72.93\%$ top-1 accuracy. We train VGG-16 on CIFAR-100 for 200 epochs with AutoLRS for 10 trials using different random seeds, and report the top-1 accuracy in Table 9. The LR search interval $(\eta_{\mathrm{min}}, \eta_{\mathrm{max}})$ we use is $(10^{-3}, 10^{-1})$ , and $\tau_{\mathrm{max}}$ is set to 8000. The top-1 accuracies achieved by the AutoLRS-generated LR schedules over 10 trials have a mean of $73.05\%$ and a standard deviation of $0.14\%$ . The best AutoLRS-generated LR schedule achieves $73.30\%$ top-1 accuracy, which is $0.37\%$ higher than the accuracy achieved with the hand-tuned LR schedule.
+
+# A.5.3 LEARNING RATE SCHEDULE SEARCH WITH HYPERBAND
+
+Hyperband is a multi-armed bandit approach to DNN hyperparameter optimization. It dynamically allocates resources to randomly sampled configurations and uses successive halving (Jamieson & Talwalkar, 2016) to early-stop poorly performing configurations. We attempt to use Hyperband to optimize the LR schedule for CIFAR-10 training with VGG-16 by searching for an exponential-decay LR schedule, which can be parameterized by an initial learning rate and a decay factor. The learning rate is multiplied by the decay factor every epoch. Exponential decay is a commonly used LR schedule and is also used in other DNN hyperparameter optimization methods (Falkner et al., 2018). We use the search space $(10^{-3}, 10^{-1})$ for the initial LR and $(0.9, 1)$ for the decay rate. The decay rate is sampled uniformly at random, and the initial LR is sampled uniformly at random in log scale. We use the default setting of Hyperband, which sets the maximum number of epochs that can be
+
+allocated to a single configuration to 350 and discards two-thirds of the configurations in each round of successive halving. This results in evaluating 384 configurations with different numbers of epochs with a total of 12600 epochs, which has a computational overhead of $36 \times$ compared to a single run of training with the hand-tuned LR schedule. The best configuration found by Hyperband achieves $93.24\%$ top-1 accuracy, which is $0.46\%$ lower than the accuracy achieved with the hand-tuned LR schedule.
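The search space above can be sketched as follows; `sample_config` draws one Hyperband candidate and `lr_at_epoch` evaluates the resulting exponential-decay schedule (the function names are ours):

```python
import math
import random

def sample_config(rng=random):
    # One Hyperband candidate: initial LR log-uniform in (1e-3, 1e-1),
    # decay factor uniform in (0.9, 1), matching the search space above.
    init_lr = math.exp(rng.uniform(math.log(1e-3), math.log(1e-1)))
    decay = rng.uniform(0.9, 1.0)
    return init_lr, decay

def lr_at_epoch(init_lr, decay, epoch):
    # The LR is multiplied by the decay factor once per epoch.
    return init_lr * decay ** epoch
```

Log-uniform sampling for the initial LR keeps candidates spread evenly across orders of magnitude rather than clustering near $10^{-1}$.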
+
+# A.5.4 ABLATION STUDY
+
+To illustrate the effects of the exponential model and BO of AutoLRS, we perform ablation studies using the VGG-16 neural network on CIFAR-10 as in §A.5.1.
+
+Exponential model: What if we remove the exponential forecasting model and simply use the validation loss at step $\tau'$ to update the BO posterior? Will the LR schedules generated by AutoLRS be significantly worse? We apply AutoLRS without the exponential forecasting to find LR schedules for VGG-16 on CIFAR-10. With $\tau_{\mathrm{max}}$ chosen from $\{4000, 8000, 16000\}$ , the best top-1 test accuracy that AutoLRS achieves within 350 training epochs is $91.73\%$ , $92.59\%$ , and $92.24\%$ , respectively. It is therefore unable to match the target accuracy in a reasonable number of training steps without the exponential forecasting model. The reason is that the objective of BO becomes minimizing the validation loss at step $\tau'$ , which leads to short-horizon bias (Wu et al., 2018). As a consequence, it tends to select a conservative LR, often a small LR around $\eta_{\mathrm{min}}$ in the late stages. In contrast, with the exponential forecasting model, the goal of BO is to find the LR that minimizes the predicted validation loss after $\tau$ steps. This allows the LR selected in the current stage to be higher than in past stages, and the loss to even increase for a short period, as long as the predicted loss after $\tau$ steps is low. This phenomenon can be seen in Figure 1 and Figure 2.
+
+Bayesian Optimization: What if we replace BO in AutoLRS with random search or grid search? Will the LR schedules generated by AutoLRS get worse? We replace the BO part of AutoLRS with random search and grid search while keeping the exponential forecasting part, and apply it to find LR schedules for VGG-16 on CIFAR-10. The LR search interval is $(10^{-3}, 10^{-1})$ , the same as in §A.5.2. Table 7 and Table 8 show the results of random search and grid search with different $\tau_{\mathrm{max}}$ values, respectively. We observe that both random search and grid search have at least one trial that fails to match the hand-tuned LR schedule's $93.70\%$ top-1 test accuracy within 350 epochs (denoted by N/A in the tables). The top-1 accuracy achieved on average across trials within 350 epochs by random search and grid search is $0.09\%$ and $0.24\%$ behind AutoLRS with BO, respectively. We also replace BO with grid search for VGG-16 on CIFAR-100. The top-1 accuracies achieved over 10 trials have a mean of $72.63\%$ and a standard deviation of $0.56\%$ . Compared to the AutoLRS-generated LR schedules in Table 9, the mean grid-search accuracy is more than two standard deviations below the BO accuracy of $73.05\% \pm 0.14\%$ .
+
+# A.5.5 AutoLRS FINE-TUNING RESULTS OF BERTBASE ACROSS 3 TRIALS
+
+We pre-trained $\mathrm{BERT}_{\mathrm{BASE}}$ with AutoLRS for 3 trials, and report their fine-tuning results in Table 3.
+
+Table 3: Fine-tuning results of $\mathrm{{BERT}}{}_{\mathrm{{BASE}}}$ models pre-trained with AutoLRS for 3 trials. Accuracy scores on the Dev set are reported for MRPC, MNLI, and CoLA. F1 scores on the Dev set are reported for SQuAD v1.1.
+
+| | MRPC | MNLI | CoLA | SQuAD v1.1 |
| --- | --- | --- | --- | --- |
| Trial 1 | 88.0 | 82.5 | 47.6 | 87.1 |
| Trial 2 | 88.0 | 82.7 | 46.5 | 87.0 |
| Trial 3 | 87.8 | 82.3 | 47.0 | 86.6 |
+
+Table 4: The accuracy information of tuning the regular LR and the hypergradient LR of SGD-HD for CIFAR-10 training with VGG-16 (batch size = 128). We train the model for 500 epochs using SGD-HD with each suggested value for the regular LR and the hypergradient LR, and report the best top-1 test accuracy it can achieve, its corresponding training accuracy, and the epoch number. Note that a hand-tuned LR schedule can achieve $93.70\%$ top-1 test accuracy in 350 epochs.
+
+| regular LR | hypergradient LR | Top-1 Test Accuracy | Training Accuracy | Epoch |
| --- | --- | --- | --- | --- |
| 10-6 | 10-6 | 86.13% | 99.77% | 438 |
| 10-6 | 10-5 | 88.79% | 99.98% | 480 |
| 10-6 | 10-4 | 86.31% | 98.10% | 494 |
| 10-6 | 10-3 | 90.70% | 99.95% | 499 |
| 10-6 | 10-2 | 10.30% | 9.90% | 40 |
| 10-6 | 10-1 | 10.00% | 10.00% | 1 |
| 10-5 | 10-6 | 86.14% | 99.73% | 394 |
| 10-5 | 10-5 | 88.49% | 99.95% | 448 |
| 10-5 | 10-4 | 87.67% | 98.78% | 483 |
| 10-5 | 10-3 | 88.70% | 99.49% | 469 |
| 10-5 | 10-2 | 10.22% | 9.92% | 170 |
| 10-5 | 10-1 | 10.00% | 10.00% | 1 |
| 10-4 | 10-6 | 86.09% | 99.84% | 481 |
| 10-4 | 10-5 | 88.82% | 99.94% | 304 |
| 10-4 | 10-4 | 86.63% | 95.37% | 479 |
| 10-4 | 10-3 | 10.22% | 10.13% | 1 |
| 10-4 | 10-2 | 10.02% | 10.00% | 1 |
| 10-4 | 10-1 | 10.00% | 10.00% | 1 |
| 10-3 | 10-6 | 86.13% | 99.73% | 406 |
| 10-3 | 10-5 | 88.78% | 99.94% | 346 |
| 10-3 | 10-4 | 90.74% | 99.98% | 484 |
| 10-3 | 10-3 | 44.12% | 43.02% | 500 |
| 10-3 | 10-2 | 88.48% | 99.55% | 467 |
| 10-3 | 10-1 | 10.00% | 10.00% | 1 |
| 10-2 | 10-6 | 91.69% | 99.97% | 389 |
| 10-2 | 10-5 | 88.53% | 99.89% | 397 |
| 10-2 | 10-4 | 89.11% | 99.92% | 484 |
| 10-2 | 10-3 | 10.07% | 9.90% | 265 |
| 10-2 | 10-2 | 10.00% | 10.02% | 1 |
| 10-2 | 10-1 | 10.00% | 9.99% | 1 |
| 10-1 | 10-6 | 91.80% | 99.93% | 476 |
| 10-1 | 10-5 | 91.48% | 99.85% | 317 |
| 10-1 | 10-4 | 88.81% | 99.57% | 499 |
| 10-1 | 10-3 | 90.42% | 99.80% | 393 |
| 10-1 | 10-2 | 11.24% | 10.45% | 1 |
| 10-1 | 10-1 | 10.00% | 10.02% | 1 |
+
+Table 5: The accuracy information of tuning the regular LR and the hypergradient LR of SGD-HD for CIFAR-10 training with ResNet-50 (batch size = 128). We train the model for 500 epochs using SGD-HD with each suggested value for the regular LR and the hypergradient LR, and report the best top-1 test accuracy it can achieve, its corresponding training accuracy, and the epoch number. Note that a hand-tuned LR schedule can achieve $95.56\%$ top-1 test accuracy in 350 epochs.
+
+| regular LR | hypergradient LR | Top-1 Test Accuracy | Training Accuracy | Epoch |
| --- | --- | --- | --- | --- |
| 10-6 | 10-6 | 83.67% | 99.71% | 410 |
| 10-6 | 10-5 | 88.75% | 99.44% | 490 |
| 10-6 | 10-4 | 83.77% | 99.68% | 494 |
| 10-6 | 10-3 | 71.03% | 72.14% | 491 |
| 10-6 | 10-2 | 10.11% | 10.03% | 261 |
| 10-6 | 10-1 | 10.0% | 10.0% | 1 |
| 10-5 | 10-6 | 83.99% | 99.64% | 420 |
| 10-5 | 10-5 | 89.15% | 99.97% | 460 |
| 10-5 | 10-4 | 10.12% | 9.95% | 206 |
| 10-5 | 10-3 | 19.73% | 18.53% | 13 |
| 10-5 | 10-2 | 10.03% | 9.98% | 137 |
| 10-5 | 10-1 | 10.0% | 10.0% | 1 |
| 10-4 | 10-6 | 84.98% | 99.85% | 488 |
| 10-4 | 10-5 | 89.27% | 99.94% | 482 |
| 10-4 | 10-4 | 84.36% | 97.78% | 424 |
| 10-4 | 10-3 | 88.72% | 99.84% | 484 |
| 10-4 | 10-2 | 10.00% | 10.00% | 1 |
| 10-4 | 10-1 | 10.00% | 10.00% | 1 |
| 10-3 | 10-6 | 83.22% | 99.81% | 487 |
| 10-3 | 10-5 | 88.56% | 99.98% | 492 |
| 10-3 | 10-4 | 86.00% | 97.32% | 440 |
| 10-3 | 10-3 | 10.10% | 9.76% | 367 |
| 10-3 | 10-2 | 42.80% | 40.11% | 497 |
| 10-3 | 10-1 | 10.00% | 10.00% | 1 |
| 10-2 | 10-6 | 92.40% | 99.99% | 459 |
| 10-2 | 10-5 | 88.51% | 99.98% | 440 |
| 10-2 | 10-4 | 90.72% | 99.91% | 452 |
| 10-2 | 10-3 | 10.19% | 9.64% | 315 |
| 10-2 | 10-2 | 10.05% | 9.99% | 8 |
| 10-2 | 10-1 | 10.00% | 10.00% | 1 |
| 10-1 | 10-6 | 92.18% | 99.97% | 487 |
| 10-1 | 10-5 | 92.52% | 99.97% | 494 |
| 10-1 | 10-4 | 87.74% | 99.86% | 492 |
| 10-1 | 10-3 | 84.32% | 97.23% | 477 |
| 10-1 | 10-2 | 10.00% | 10.11% | 1 |
| 10-1 | 10-1 | 10.00% | 10.00% | 1 |
+
+Table 6: Performance of AutoLRS with different $\tau_{\mathrm{max}}$ values for CIFAR-10 training with VGG-16 (batch size = 128). Note that a hand-tuned LR schedule can achieve 93.70% top-1 test accuracy in 350 epochs. We report the top-1 accuracy achieved within 350 epochs for each trial, and the mean and standard deviation of the top-1 accuracy achieved by AutoLRS over 5 trials for each $\tau_{\mathrm{max}}$ .
+
+| τmax | Trial Number | Epoch to 93.70% Top-1 Accuracy | Top-1 Accuracy Achieved | Mean ± std |
| --- | --- | --- | --- | --- |
| 4000 | Trial 1 | 108 | 94.13% | 94.01% ± 0.13% |
| | Trial 2 | 181 | 93.96% | |
| | Trial 3 | 223 | 94.07% | |
| | Trial 4 | 315 | 93.82% | |
| | Trial 5 | 287 | 94.09% | |
| 8000 | Trial 1 | 115 | 94.03% | 93.96% ± 0.07% |
| | Trial 2 | 265 | 93.92% | |
| | Trial 3 | 203 | 93.94% | |
| | Trial 4 | 194 | 94.02% | |
| | Trial 5 | 305 | 93.87% | |
| 16000 | Trial 1 | 229 | 93.77% | 93.80% ± 0.10% |
| | Trial 2 | 250 | 93.95% | |
| | Trial 3 | 267 | 93.73% | |
| | Trial 4 | 313 | 93.71% | |
| | Trial 5 | 330 | 93.82% | |
+
+Table 7: Experimental results after replacing BO in AutoLRS with random search for CIFAR-10 training with VGG-16 (batch size = 128). We also report the top-1 accuracy achieved within 350 epochs for each trial.
+
+| τmax | Trial Number | Epoch to 93.70% Top-1 Accuracy | Top-1 Accuracy Achieved |
+| 4000 | Trial 1 | 199 | 93.80% |
+| 4000 | Trial 2 | 209 | 93.97% |
+| 4000 | Trial 3 | 298 | 93.84% |
+| 8000 | Trial 1 | 344 | 93.71% |
+| 8000 | Trial 2 | 225 | 93.98% |
+| 8000 | Trial 3 | 175 | 93.91% |
+| 16000 | Trial 1 | N/A | 93.64% |
+| 16000 | Trial 2 | 316 | 93.96% |
+| 16000 | Trial 3 | 310 | 93.86% |
+
+Table 8: Experimental results after replacing BO in AutoLRS with grid search for CIFAR-10 training with VGG-16 (batch size = 128). We also report the top-1 accuracy achieved within 350 epochs for each trial.
+
+| τmax | Trial Number | Epoch to 93.70% Top-1 Accuracy | Top-1 Accuracy Achieved |
+| 4000 | Trial 1 | 304 | 93.88% |
+| 4000 | Trial 2 | 233 | 93.88% |
+| 4000 | Trial 3 | 180 | 93.91% |
+| 8000 | Trial 1 | 239 | 93.72% |
+| 8000 | Trial 2 | 296 | 93.95% |
+| 8000 | Trial 3 | N/A | 93.32% |
+| 16000 | Trial 1 | N/A | 93.02% |
+| 16000 | Trial 2 | 153 | 93.78% |
+| 16000 | Trial 3 | 288 | 93.70% |
+
+Table 9: Top-1 test accuracy achieved by AutoLRS-generated LR schedules for CIFAR-100 training with VGG-16 over 10 trials.
+
+| 73.12% | 73.20% | 72.90% | 72.93% | 73.03% |
| 73.16% | 73.30% | 72.85% | 73.00% | 72.97% |
\ No newline at end of file
diff --git a/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/images.zip b/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fb19c7c8f194d0f3278027776629082cb70c3a6a
--- /dev/null
+++ b/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:652ce0c331319726ea7da92f484ff681a7c9b0297fc18c29265a6f80e4841010
+size 795429
diff --git a/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/layout.json b/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..aeac8a8c0acabc5bb6c62753afc3a87fa771f8a8
--- /dev/null
+++ b/autolrsautomaticlearningrateschedulebybayesianoptimizationonthefly/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0238290dbc21347a0cbc3ca65358fdea4fad95fbe48eb1e67d0e96e0e93cf47e
+size 857101
diff --git a/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/f7793aa3-ff60-4924-84b7-8a987c0c05b6_content_list.json b/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/f7793aa3-ff60-4924-84b7-8a987c0c05b6_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0e4dfdfd145d940e702f204ff0d3dd45c83baebe
--- /dev/null
+++ b/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/f7793aa3-ff60-4924-84b7-8a987c0c05b6_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1a8392dd2d7d546528dae1da5efe0e6f223885ce5d1d7a3675d28b2a3a55353
+size 104819
diff --git a/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/f7793aa3-ff60-4924-84b7-8a987c0c05b6_model.json b/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/f7793aa3-ff60-4924-84b7-8a987c0c05b6_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ff90ddb6add73951269af69e7bf1e0dd58e122ed
--- /dev/null
+++ b/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/f7793aa3-ff60-4924-84b7-8a987c0c05b6_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf11e08fc7895d809a2c30a79282ffc8938073bd5d5dad41c0165e144f506c17
+size 121617
diff --git a/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/f7793aa3-ff60-4924-84b7-8a987c0c05b6_origin.pdf b/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/f7793aa3-ff60-4924-84b7-8a987c0c05b6_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..88a876e73d9138cefd8d8f213d4606a08ddbf6f8
--- /dev/null
+++ b/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/f7793aa3-ff60-4924-84b7-8a987c0c05b6_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5260f0ef1dea6fa5fb249798e9b95c56037b7dc101f94cf906a324ea0b5294da
+size 1878434
diff --git a/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/full.md b/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..20d591ae4c240f912fc71dd7398e3d3f4ceb1601
--- /dev/null
+++ b/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/full.md
@@ -0,0 +1,409 @@
+# AUTOREGRESSIVE DYNAMICS MODELS FOR OFFLINE POLICY EVALUATION AND OPTIMIZATION
+
+Michael R. Zhang\*1 Tom Le Paine2 Ofir Nachum3 Cosmin Paduraru2 George Tucker3 Ziyu Wang3 Mohammad Norouzi3
+
+1University of Toronto 2DeepMind 3Google Brain
+michael@cs.toronto.edu, mnorouzi@google.com
+
+# ABSTRACT
+
+Standard dynamics models for continuous control make use of feedforward computation to predict the conditional distribution of next state and reward given current state and action using a multivariate Gaussian with a diagonal covariance structure. This modeling choice assumes that different dimensions of the next state and reward are conditionally independent given the current state and action and may be driven by the fact that fully observable physics-based simulation environments entail deterministic transition dynamics. In this paper, we challenge this conditional independence assumption and propose a family of expressive autoregressive dynamics models that generate different dimensions of the next state and reward sequentially conditioned on previous dimensions. We demonstrate that autoregressive dynamics models indeed outperform standard feedforward models in log-likelihood on held-out transitions. Furthermore, we compare different model-based and model-free off-policy evaluation (OPE) methods on RL Unplugged, a suite of offline MuJoCo datasets, and find that autoregressive dynamics models consistently outperform all baselines, achieving a new state-of-the-art. Finally, we show that autoregressive dynamics models are useful for offline policy optimization by serving as a way to enrich the replay buffer through data augmentation and improving performance using model-based planning.
+
+# 1 INTRODUCTION
+
+Model-based Reinforcement Learning (RL) aims to learn an approximate model of the environment's dynamics from existing logged interactions to facilitate efficient policy evaluation and optimization. Early work on Model-based RL uses simple tabular (Sutton, 1990; Moore and Atkeson, 1993; Peng and Williams, 1993) and locally linear (Atkeson et al., 1997) dynamics models, which often result in a large degree of model bias (Deisenroth and Rasmussen, 2011). Recent work adopts feedforward neural networks to model complex transition dynamics and improve generalization to unseen states and actions, achieving a high level of performance on standard RL benchmarks (Chua et al., 2018; Wang et al., 2019). However, standard feedforward dynamics models assume that different dimensions of the next state and reward are conditionally independent given the current state and action, which may lead to a poor estimation of uncertainty and unclear effects on RL applications.
+
+In this work, we propose a new family of autoregressive dynamics models and study their effectiveness for off-policy evaluation (OPE) and offline policy optimization on continuous control. Autoregressive dynamics models generate each dimension of the next state conditioned on previous dimensions of the next state, in addition to the current state and action (see Figure 1). This means that to sample the next state from an autoregressive dynamics model, one needs $n$ sequential steps, where $n$ is the number of state dimensions, and one more step to generate the reward. By contrast, standard feedforward dynamics models take current state and action as input and predict the distribution of the next state and reward as a multivariate Gaussian with a diagonal covariance structure (e.g., Chua et al. (2018); Janner et al. (2019)). This modeling choice assumes that different state dimensions are conditionally independent.
+
+Autoregressive generative models have seen success in generating natural images (Parmar et al., 2018), text (Brown et al., 2020), and speech (Oord et al., 2016), but they have not seen use in Model-based RL for continuous control.
+
+We find that autoregressive dynamics models achieve higher log-likelihood compared to their feedforward counterparts on heldout validation transitions of all DM continuous control tasks (Tassa et al., 2018) from the RL Unplugged dataset (Gulcehre et al., 2020). To determine the impact of improved transition dynamics models, we primarily focus on OPE because it allows us to isolate contributions of the dynamics model in value estimation vs. the many other factors of variation in policy optimization and data collection. We find that autoregressive dynamics models consistently outperform existing Model-based and Model-free OPE baselines on continuous control in both ranking and value estimation metrics. We expect that our advances in model-based OPE will improve offline policy selection for offline RL (Paine et al., 2020). Finally, we show that our autoregressive dynamics models can help improve offline policy optimization by model predictive control, achieving a new state-of-the-art on cheetah-run and fish-swim from RL Unplugged (Gulcehre et al., 2020).
+
+Key contributions of this paper include:
+
+- We propose autoregressive dynamics models to capture dependencies between state dimensions in forward prediction. We show that autoregressive models improve log-likelihood over non-autoregressive models for continuous control tasks from the DM Control Suite (Tassa et al., 2018).
+- We apply autoregressive dynamics models to Off-Policy Evaluation (OPE), surpassing the performance of state-of-the-art baselines in median absolute error, rank correlation, and normalized top-5 regret across 9 control tasks.
+- We show that autoregressive dynamics models are more useful than feedforward models for offline policy optimization, serving as a way to enrich experience replay by data augmentation and improving performance via model-based planning.
+
+# 2 PRELIMINARIES
+
+Here we introduce relevant notation and discuss off-policy (offline) policy evaluation (OPE). We refer the reader to Lange et al. (2012) and Levine et al. (2020) for background on offline RL, which is also known as batch RL in the literature.
+
+A finite-horizon Markov Decision Process (MDP) is defined by a tuple $\mathcal{M} = (\mathcal{S},\mathcal{A},\mathcal{T},d_0,r,\gamma)$ , where $\mathcal{S}$ is a set of states $s\in S$ , $\mathcal{A}$ is a set of actions $a\in \mathcal{A}$ , $\mathcal{T}$ defines transition probability distributions $p(s_{t + 1}|s_t,a_t)$ , $d_0$ defines the initial state distribution $d_0\equiv p(s_0)$ , $r$ defines a reward function $r:\mathcal{S}\times \mathcal{A}\to \mathbb{R}$ , and $\gamma$ is a scalar discount factor. A policy $\pi (a\mid s)$ defines a conditional distribution over actions conditioned on states. A trajectory consists of a sequence of states and actions $\tau = (s_0,a_0,s_1,a_1,\dots ,s_H)$ of horizon length $H$ . We use $s_{t,i}$ to denote the $i$ -th dimension of the state at time step $t$ (and similarly for actions). In reinforcement learning, the objective is to maximize the expected sum of discounted rewards over the trajectory distribution induced by the policy:
+
+$$
+V _ {\gamma} (\pi) = \mathbb {E} _ {\tau \sim p _ {\pi} (\tau)} \left[ \sum_ {t = 0} ^ {H} \gamma^ {t} r \left(s _ {t}, a _ {t}\right) \right]. \tag {1}
+$$
+
+The trajectory distribution is characterized by the initial state distribution, policy, and transition probability distribution:
+
+$$
+p _ {\pi} (\tau) = d _ {0} \left(s _ {0}\right) \prod_ {t = 0} ^ {H - 1} \pi \left(a _ {t} \mid s _ {t}\right) p \left(s _ {t + 1} \mid s _ {t}, a _ {t}\right). \tag {2}
+$$
+
+In offline RL, we are given access to a dataset of transitions $\mathcal{D} = \{(s_t^i,a_t^i,r_{t + 1}^i,s_{t + 1}^i)\}_{i = 1}^N$ and a set of initial states $S_0$ . Offline RL is inherently a data-driven approach since the agent needs to optimize the same objective as in Eq. (1) but is not allowed additional interactions with the environment. Even though offline RL offers the promise of leveraging existing logged datasets, current offline RL algorithms (Fujimoto et al., 2019; Agarwal et al., 2020; Kumar et al., 2019) are typically evaluated using online interaction, which limits their applicability in the real world.
+
+
+Standard Feedforward Dynamics Models
+
+
+Proposed Autoregressive Dynamics Model
+Figure 1: Standard probabilistic dynamics models (e.g., Chua et al. (2018)) use a neural network to predict the mean and standard deviation of all dimensions of the next state and reward simultaneously. By contrast, we use the same neural network architecture with several additional inputs and predict the mean and standard deviation of each dimension of the next state conditioned on the previous dimensions of the next state. As our empirical results indicate, this small change makes a big difference in the expressive power of dynamics models. Note that reward prediction is not shown on the right to reduce clutter, but it can be thought of as the $(n+1)$-th state dimension.
+
+The problem of off-policy (offline) policy evaluation (OPE) entails estimating $V_{\gamma}(\pi)$ , the value of a target policy $\pi$ , based on a fixed dataset of transitions denoted $\mathcal{D}$ , without access to the environment's dynamics. Some OPE methods assume that $\mathcal{D}$ is generated from a known behavior (logging) policy $\mu$ and assume access to $\mu$ in addition to $\mathcal{D}$ . In practice, the logged dataset $\mathcal{D}$ may be the result of following some existing system that does not have a probabilistic form. Hence, in our work, we will assume no access to the original behavior policy $\mu$ for OPE. That said, for methods that require access to $\mu$ , we train a behavior cloning policy on $\mathcal{D}$ .
+
+# 3 PROBABILISTIC DYNAMICS MODELS
+
+Feedforward dynamics model. In the context of our paper, we use the term "model" to jointly refer to the forward dynamics model $p_{s}(s_{t + 1}|s_{t},a_{t})$ and reward model $p_{r}(r_{t + 1}|s_{t},a_{t})$ . We use neural nets to parameterize both distributions since they are powerful function approximators that have been effective for model-based RL (Chua et al., 2018; Nagabandi et al., 2018; Janner et al., 2019).
+
+Let $\theta$ denote the parameters of a fully connected network used to model $p_{\theta}(s_{t + 1},r_{t + 1}\mid s_t,a_t)$ . We expect joint modeling of the next state and reward to benefit from sharing intermediate network features. Similar to prior work (Janner et al., 2019), our baseline feedforward model outputs the mean and log variance of all state dimensions and reward simultaneously, as follows:
+
+$$
+p _ {\theta} \left(s _ {t + 1}, r _ {t + 1} \mid s _ {t}, a _ {t}\right) = \mathcal {N} \left(\mu \left(s _ {t}, a _ {t}\right), \operatorname {D i a g} \left(\exp \left\{l \left(s _ {t}, a _ {t}\right) \right\}\right)\right), \tag {3}
+$$
+
where $\mu (s_t,a_t)\in \mathbb{R}^{n + 1}$ denotes the mean for the concatenation of the next state and reward, $l(s_{t},a_{t})\in \mathbb{R}^{n + 1}$ denotes the log variance, and $\operatorname {Diag}(v)$ is an operator that creates a diagonal matrix with the main diagonal specified by the vector $v$. During training, we seek to minimize the negative log-likelihood of the parameters given the observed transitions in the dataset $\mathcal{D}$:
+
+$$
+\ell (\theta \mid \mathcal {D}) = - \sum_ {(s, a, r ^ {\prime}, s ^ {\prime}) \in \mathcal {D}} \log p _ {\theta} \left(s ^ {\prime}, r ^ {\prime} \mid s, a\right). \tag {4}
+$$
+
+While it is possible to place different weights on the loss for next state and reward prediction, we did not apply any special weighting and treated the reward as an additional state dimension in all of our experiments. This is straightforward to implement and does not require tuning an additional hyperparameter, which is challenging for OPE. Note that the input has $|s| + |a|$ dimensions.
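For concreteness, the training loss in Eq. (4) for this feedforward model is an ordinary diagonal-Gaussian negative log-likelihood. A minimal NumPy sketch (the function name and toy values are ours, not from the paper's code):

```python
import numpy as np

def diag_gaussian_nll(target, mean, log_var):
    """Negative log-likelihood of `target` under N(mean, Diag(exp(log_var))),
    summed over the dimensions of the concatenated next state and reward."""
    return 0.5 * float(np.sum(np.log(2.0 * np.pi) + log_var
                              + (target - mean) ** 2 / np.exp(log_var)))

# Sanity check: with mean == target and unit variance (log_var == 0),
# the NLL reduces to 0.5 * d * log(2*pi) for a d-dimensional target.
nll = diag_gaussian_nll(np.zeros(4), np.zeros(4), np.zeros(4))
```

Summing the per-dimension terms is exactly what treating the reward as an extra state dimension amounts to: no special weighting, one shared loss.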
+
+Autoregressive dynamics model. We now describe our autoregressive model. We seek to demonstrate the utility of predicting state dimensions in an autoregressive way. Therefore, rather than using a complex neural network architecture, where improvements in log-likelihood and policy evaluation are confounded by architectural differences, we opt to make simple modifications to the feedforward model described above. This allows us to isolate the source of performance improvements.
+
+The autoregressive model we use is a fully connected model that predicts the mean and log variance of a single state dimension. We augment the input space of the baseline with the previously predicted state dimensions and a one-hot encoding to indicate which dimension to predict. This is illustrated in Figure 1. The autoregressive model therefore has $3|s| + |a|$ input dimensions. Hence, the autoregressive model has a small number of additional weights in the first fully connected layer, but as will be shown in our experiments, these extra parameters are not the reason for its performance gain.
+
+At training time, the autoregressive model has a computational cost similar to the fully connected model, as we can mask ground-truth states and use data parallelism to compute all state dimensions simultaneously. At inference, the autoregressive model requires additional forward passes, on the order of the number of state dimensions in the environment. We use the default ordering of the state dimensions in each environment, though exploring different orderings is an interesting direction for future work. The negative log-likelihood for an autoregressive model takes the form:
+
+$$
+\ell (\theta \mid \mathcal {D}) = - \sum_ {(s, a, r ^ {\prime}, s ^ {\prime}) \in \mathcal {D}} \left[ \log p _ {\theta} \left(r ^ {\prime} \mid s, a, s ^ {\prime}\right) + \sum_ {i = 1} ^ {n} \log p _ {\theta} \left(s _ {i} ^ {\prime} \mid s, a, s _ {1} ^ {\prime}, \dots , s _ {i - 1} ^ {\prime}\right) \right], \tag {5}
+$$
+
+where we use the chain rule to factorize the joint probability $p(s', r' \mid s, a)$.
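A sketch of what sampling from such a model looks like, under the input layout described above (current state, action, previously sampled dimensions zero-padded, and a one-hot dimension indicator). The `net` interface and the toy network are our own illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_state(net, s, a, n):
    """Sample s' one dimension at a time (n sequential forward passes).

    `net(x)` is assumed to return (mean, log_var) for the dimension flagged
    by the one-hot part of its input; the input has 3|s| + |a| entries:
    current state, action, previously sampled dimensions (zero-padded),
    and a one-hot indicator of the dimension to predict.
    """
    prev = np.zeros(n)      # dimensions sampled so far, zero-padded
    s_next = np.zeros(n)
    for i in range(n):
        one_hot = np.zeros(n)
        one_hot[i] = 1.0
        mean, log_var = net(np.concatenate([s, a, prev, one_hot]))
        s_next[i] = rng.normal(mean, np.exp(0.5 * log_var))
        prev[i] = s_next[i]
    return s_next

# Toy "network" for a 3-dim state and 1-dim action: it predicts the i-th
# entry of the current state with near-zero variance, so s' copies s.
def toy_net(x):
    s, one_hot = x[:3], x[-3:]
    i = int(np.argmax(one_hot))
    return s[i], -20.0  # (mean, log variance)

s_next = sample_next_state(toy_net, s=np.array([1.0, 2.0, 3.0]),
                           a=np.array([0.5]), n=3)
```

At training time the loop disappears: all dimensions can be evaluated in parallel against the ground-truth next state, which is why training cost stays close to the feedforward model.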
+
+The main advantage of the autoregressive model is that it makes no conditional independence assumption between next-state dimensions. This class of models can therefore capture non-unimodal dependencies, e.g., between different joint angles of a robot. Paduraru (2007) demonstrates this increased expressivity in the tabular setting, constructing an example on which a model assuming conditional independence fails. While the expressive power of autoregressive models has been shown in various generative models (Parmar et al., 2018; Oord et al., 2016), autoregressive dynamics models have not seen much use in Model-based RL for continuous control before this work.
+
+Model-based OPE. Once a dynamics model is trained from offline data, OPE can be performed in a direct and primitive way. We let the policy and model interact—the policy generates the next action, the model plays the role of the environment and generates the next state and reward. Due to the stochasticity in the model and the policy, we estimate the return for a policy with Monte-Carlo sampling and monitor standard error. See Algorithm 1 for pseudocode.
+
+# 4 RELATED WORK
+
+Our work follows a long line of OPE research, which is especially relevant to many practical domains such as medicine (Murphy et al., 2001), recommendation systems (Li et al., 2011), and education (Mandel et al., 2014) in order to avoid the costs and risks associated
+
+# Algorithm 1 Model-based OPE
+
+| Require: Number of rollouts n, discount factor γ, horizon length H, policy π, dynamics model p, set of initial states S_0 |
+| for i = 1, 2, ..., n do |
+| R_i ← 0 |
+| sample initial state s_0 ~ S_0 |
+| for t = 0, 1, 2, ..., H−1 do |
+| sample from policy: a_t ∼ π(· | s_t) |
+| sample from the dynamics model: s_{t+1}, r_{t+1} ∼ p(·, · | s_t, a_t) |
+| R_i ← R_i + γ^t r_{t+1} |
+| end for |
+| end for |
+| return (1/n) ∑_{i=1}^{n} R_i |
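Algorithm 1 can be sketched in a few lines of Python; the `policy` and `model` call signatures below are hypothetical stand-ins for the trained objects:

```python
import numpy as np

def model_based_ope(policy, model, init_states, n_rollouts, horizon, gamma, seed=0):
    """Monte-Carlo value estimate following Algorithm 1.

    `policy(s)` samples an action and `model(s, a)` samples (s', r').
    """
    rng = np.random.default_rng(seed)
    returns = []
    for _ in range(n_rollouts):
        s = init_states[rng.integers(len(init_states))]  # s_0 ~ S_0
        R = 0.0
        for t in range(horizon):
            a = policy(s)          # a_t ~ pi(. | s_t)
            s, r = model(s, a)     # s_{t+1}, r_{t+1} ~ p(., . | s_t, a_t)
            R += gamma ** t * r
        returns.append(R)
    return float(np.mean(returns))  # also monitor np.std(returns) / sqrt(n)

# Deterministic toy example: reward is always 1, so the estimate equals
# the geometric sum of discounts over the horizon.
est = model_based_ope(policy=lambda s: 0.0,
                      model=lambda s, a: (s, 1.0),
                      init_states=[0.0], n_rollouts=4, horizon=10, gamma=0.9)
```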
+
+with online evaluation. There exists a large body of work on OPE, including methods based on importance weighting (Precup, 2000; Li et al., 2014) and Lagrangian duality (Nachum et al., 2019; Yang et al., 2020; Uehara and Jiang, 2019). The model-based approach that we focus on in this paper lies within the class of algorithms referred to as the direct method (Kostrikov and Nachum, 2020; Dudík et al., 2011; Voloshin et al., 2019), which approximate the value of a new policy by either explicitly or implicitly estimating the transition and reward functions of the environment. While model-based policy evaluation has been considered by previous works (Paduraru, 2007; Thomas and Brunskill, 2016a; Hanna et al., 2017), it has largely been confined to simple domains with finite state and action spaces where function approximation is not necessary. By contrast, our work provides an extensive demonstration of model-based OPE in challenging continuous control benchmark domains. Previous instances of the use of function approximation for model-based OPE (Hallak et al., 2015) impose strong assumptions on the probabilistic dynamics models, such as factorability of the MDP. Our results indicate that even seemingly benign assumptions about the independence of different state dimensions can have detrimental consequences for the effectiveness of a model-based OPE estimate.
+
+While the use of model-based principles in OPE has been relatively rare, it has been more commonly used for policy optimization. The field of model-based RL has matured in recent years to yield impressive results for both online (Nagabandi et al., 2018; Chua et al., 2018; Kurutach et al., 2018; Janner et al., 2019) and offline (Matsushima et al., 2020; Kidambi et al., 2020; Yu et al., 2020; Argenson and Dulac-Arnold, 2020) policy optimization. Several of the techniques we employ, such
+
+Table 1: Summary of the offline datasets used. Dataset size indicates the number of $(s, a, r', s')$ tuples.
+
+| | cartpole swingup | cheetah run | finger turn hard | fish swim | humanoid run | walker stand | walker walk | manipulator insert ball | manipulator insert peg |
+| State dim. | 5 | 17 | 12 | 24 | 67 | 24 | 24 | 44 | 44 |
+| Action dim. | 1 | 6 | 2 | 5 | 21 | 6 | 6 | 5 | 5 |
+| Dataset size | 40K | 300K | 500K | 200K | 3M | 200K | 200K | 1.5M | 1.5M |
+
+Table 2: Negative log-likelihood on heldout validation sets for different RL Unplugged tasks (lower is better). For each family of dynamics models, we train 48 models with different hyperparameters. We report the Top-1 NLL on top and the average of the Top-5 models on the bottom. On all of the tasks, autoregressive dynamics models significantly outperform feedforward models in terms of NLL for both Top-1 and Top-5.
+
+| Dynamics model architecture | cartpole swingup | cheetah run | finger turn hard | fish swim | humanoid run | walker stand | walker walk | manipulator insert ball | manipulator insert peg |
+| Top1 | Feedforward | -6.81 | -4.90 | -5.58 | -4.91 | -3.42 | -4.52 | -3.84 | -4.74 |
+| Top1 | Autoregressive | -7.21 | -6.36 | -6.14 | -5.21 | -4.18 | -4.73 | -4.17 | -5.62 |
+| Top5 | Feedforward | -6.75 | -4.85 | -5.50 | -4.90 | -3.40 | -4.49 | -3.81 | -4.64 |
+| Top5 | Autoregressive | -7.14 | -6.32 | -5.94 | -5.18 | -4.15 | -4.71 | -4.15 | -5.58 |
+
+as the normalization of the observation space, are borrowed from this previous literature (Nagabandi et al., 2018; Chua et al., 2018). Conversely, we present strong empirical evidence that the benefits of our introduced autoregressive generative models of state observations do carry over to model-based policy optimization, at least in the offline setting, and this is an interesting avenue for future work.
+
+# 5 RESULTS
+
+We conduct our experiments on the DeepMind control suite (Tassa et al., 2018), a set of control tasks implemented in MuJoCo (Todorov et al., 2012). We use the offline datasets from RL Unplugged (Gulcehre et al., 2020), the details of which are provided in Table 1. These environments capture a wide range of complexity, from 40K transitions in a 5-dimensional cartpole environment to 1.5 million transitions on complex manipulation tasks. We follow the evaluation protocol in the Deep OPE (Fu et al., 2021) benchmark and use policies generated by four different algorithms: behavioral cloning (Bain, 1995), D4PG (Barth-Maron et al., 2018), Critic Regularized Regression (Wang et al., 2020), and ABM (Siegel et al., 2019). With varied hyperparameters, these form a diverse set of policies of varying quality.
+
+We perform a thorough hyperparameter sweep in the experiments and use standard practice from generative modeling to improve the quality of the models. We allocate $80\%$ of the data for training and $20\%$ for model selection. We vary the depth and width of the neural networks (number of layers $\in \{3,4\}$, layer size $\in \{512,1024\}$), add different amounts of noise to input states and actions (input noise $\in \{0, 10^{-6}, 10^{-7}\}$), and consider two levels of weight decay for regularization (weight decay $\in \{0, 10^{-6}\}$). For the choice of optimizer, we consider both Adam (Kingma and Ba, 2014) and SGD with momentum, and find Adam to be more effective at maximizing log-likelihood across all tasks in preliminary experiments. We thus use Adam in all of our experiments, with two learning rates $\in \{10^{-3}, 3\times 10^{-4}\}$. We decay the optimizer's learning rate linearly to zero throughout training, finding this choice to outperform a constant learning rate. Lastly, we find that longer training often improves log-likelihood, so we train final models for 500 epochs.
+
+For each task we consider in total 48 hyperparameter combinations (listed above) for both models and pick the best model in each model family based on validation log-likelihood. This model is then used for model-based OPE and policy optimization. Note that, in our experiments, $20\%$ of the transitions are used only for validation, but we believe one can re-train the models with the best hyperparameter configuration on the full transition datasets to improve the results even further.
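The 48 configurations per model family follow directly from the grid described above; a sketch of its enumeration (the hyperparameter names are ours):

```python
from itertools import product

# Sweep described in the text: 2 depths x 2 widths x 3 input-noise levels
# x 2 weight decays x 2 learning rates = 48 configurations per model family.
grid = product([3, 4],              # number of layers
               [512, 1024],        # layer size
               [0.0, 1e-6, 1e-7],  # input noise on states and actions
               [0.0, 1e-6],        # weight decay
               [1e-3, 3e-4])       # Adam learning rate, decayed linearly to zero
configs = [dict(zip(("layers", "width", "noise", "weight_decay", "lr"), g))
           for g in grid]
```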
+
+# 5.1 AUTOREGRESSIVE DYNAMICS MODELS OUTPERFORM FEEDFORWARD MODELS IN NLL
+
+To evaluate the effectiveness of autoregressive dynamics models compared to feedforward counterparts, Table 2 reports negative log-likelihood (NLL) on the heldout validation set for the best-performing models from our hyperparameter sweep. For each environment, we report the NLL for the best-performing model (Top-1) and the average NLL across the Top-5 models. The autoregressive model has lower NLL on all environments, indicating that it generalizes better to unseen data.
+
+Figure 2: Network parameter count vs. validation negative log-likelihood for autoregressive and feedforward models. Autoregressive models often have a lower validation NLL irrespective of parameter count.
+
+Figure 3: Validation negative log-likelihood vs. OPE correlation coefficients on different tasks. On 4 RL Unplugged tasks, we conduct an extensive experiment in which 48 autoregressive and 48 feedforward dynamics models are used for OPE. For each dynamics model, we calculate the correlation coefficient between model-based value estimates and ground truth values at a discount factor of 0.995. We find that low validation NLL numbers generally correspond to accurate policy evaluation, while higher NLL numbers are less meaningful.
+
+To study the impact of model size on NLL, Figure 2 shows validation NLL as a function of parameter count. We find that on small datasets large models hurt, but more importantly autoregressive models outperform feedforward models regardless of the parameter count regime, i.e., even small autoregressive models attain a lower validation NLL compared to big feedforward models. This indicates that autoregressive models have a better inductive bias in modeling the transition dynamics than feedforward models that make a conditional independence assumption.
+
+# 5.2 ARE DYNAMICS MODELS WITH LOWER NLL BETTER FOR MODEL-BASED OPE?
+
+We ultimately care not just about the log-likelihood numbers, but also whether or not the dynamics models are useful in policy evaluation and optimization. To study the relationship of NLL and OPE performance for model-based methods, we compute OPE estimates via Algorithm 1 and compute the Pearson correlation between the OPE estimates and the true discounted returns. This serves as a measure of the effectiveness of the model for OPE. We repeat this for all 96 dynamics models we trained on a given environment and plot the correlation coefficients against validation NLL in Figure 3.
+
+Models with low NLL are generally more accurate in OPE. Lambert et al. (2020) have previously demonstrated that in Model-based RL, "training cost does not hold a strong correlation to maximization of episode reward." We use validation NLL instead, and our results on policy evaluation decouple the model from policy optimization, suggesting a more nuanced picture: low validation NLL numbers generally correspond to accurate policy evaluation, while higher NLL numbers are generally less meaningful. In other words, if the dynamics model does not capture the transition dynamics accurately enough, then it is very hard to predict its performance on OPE. However, once the model starts to capture the dynamics faithfully, we conjecture that NLL starts to become a reasonable metric for model selection. For instance, validation NLL does not seem to be a great metric for ranking feedforward models, whereas it is more reasonable for autoregressive models.
+
+Figure 4: Comparison of model-based OPE using autoregressive and feedforward dynamics models with state-of-the-art FQE methods based on L2 and distributional Bellman error. We plot OPE estimates on the y-axis against ground-truth returns at a discount of 0.995 on the x-axis. We report the Pearson correlation coefficient $(r)$ in each title. While feedforward models fall behind FQE on most tasks, autoregressive dynamics models are often superior. See Figure B.4 for additional scatter plots on the other environments.
+
+# 5.3 COMPARISON WITH OTHER OPE METHODS
+
+We adopt a recently proposed benchmark for OPE (Fu et al., 2021) and compare our model-based approaches with state-of-the-art OPE baselines therein. Figures 4 and B.4 compare OPE estimates from two Fitted-Q Evaluation (FQE) baselines (Le et al., 2019; Kostrikov and Nachum, 2020; Paine et al., 2020), our feedforward models, and the autoregressive approach. Each plot reports the Pearson correlation between the OPE estimates and the true returns. The autoregressive model consistently outperforms the feedforward model and FQE methods on most environments. We report ensembling results in the appendix, but compare single models for fairness in the rest of the paper.
+
+We compute summary statistics for OPE methods in Table 3, Table A.1, and Table A.2. These tables report the Spearman's rank correlation, regret, and absolute error, respectively. These metrics capture different desirable properties of OPE methods (Fu et al., 2021); more details about how they are computed are in the appendix. In all three metrics, the autoregressive model achieves the best median performance across nine environments, whereas the baseline model is not as good as FQE. The only environment in which the autoregressive model has negative rank correlation is manipulator insert ball. In addition, a major advantage of our model-based approach over FQE is that the model only needs to be trained once per environment—we do not need to perform additional policy-specific optimization, whereas FQE needs to optimize a separate Q-function approximator per policy.
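Spearman's rank correlation used in Table 3 is simply the Pearson correlation of the ranks of the OPE estimates and the ground-truth values; a small self-contained sketch (assuming no ties):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: the Pearson correlation of the ranks.
    Assumes no ties, which suffices for continuous OPE estimates."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# A perfectly monotone set of OPE estimates ranks policies correctly
# (rho = 1) even when the estimated values themselves are far off,
# which is why rank correlation complements absolute error.
rho = spearman_rho(np.array([0.1, 0.4, 0.2, 0.9]),
                   np.array([10.0, 40.0, 20.0, 90.0]))
```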
+
+| Rank Correlation btw. OPE and ground truth | Cartpole swingup | Cheetah run | Finger turn hard | Fish swim | Humanoid run |
+| --- | --- | --- | --- | --- | --- |
+| Importance Sampling | -0.23±0.11 | -0.01±0.12 | -0.45±0.08 | -0.17±0.11 | 0.91±0.02 |
+| Best DICE | -0.16±0.11 | 0.07±0.11 | -0.22±0.11 | 0.44±0.09 | -0.10±0.10 |
+| Variational power method | 0.01±0.11 | 0.01±0.12 | -0.25±0.11 | 0.56±0.08 | 0.36±0.09 |
+| Doubly Robust (IS, FQE) | 0.55±0.09 | 0.56±0.08 | 0.67±0.05 | 0.11±0.12 | -0.03±0.12 |
+| Feedforward Model | 0.83±0.05 | 0.64±0.08 | 0.08±0.11 | 0.95±0.02 | 0.35±0.10 |
+| FQE (distributional) | 0.69±0.07 | 0.67±0.06 | 0.94±0.01 | 0.59±0.10 | 0.74±0.06 |
+| FQE (L2) | 0.70±0.07 | 0.56±0.08 | 0.83±0.04 | 0.10±0.12 | -0.02±0.12 |
+| Autoregressive Model | 0.91±0.02 | 0.74±0.07 | 0.57±0.09 | 0.96±0.01 | 0.90±0.02 |
+
+| | Walker stand | Walker walk | Manipulator insert ball | Manipulator insert peg | Median ↑ |
+| --- | --- | --- | --- | --- | --- |
+| Importance Sampling | 0.59±0.08 | 0.38±0.10 | -0.72±0.05 | -0.25±0.08 | -0.17 |
+| Best DICE | -0.11±0.12 | -0.58±0.08 | 0.19±0.11 | -0.35±0.10 | -0.11 |
+| Variational power method | -0.35±0.10 | -0.10±0.11 | 0.61±0.08 | 0.41±0.09 | 0.01 |
+| Doubly Robust (IS, FQE) | 0.88±0.03 | 0.85±0.04 | 0.42±0.10 | -0.47±0.09 | 0.55 |
+| Feedforward Model | 0.82±0.04 | 0.80±0.05 | 0.06±0.10 | -0.56±0.08 | 0.64 |
+| FQE (distributional) | 0.87±0.02 | 0.89±0.03 | 0.63±0.08 | -0.23±0.10 | 0.69 |
+| FQE (L2) | 0.96±0.01 | 0.94±0.02 | 0.70±0.07 | -0.48±0.08 | 0.70 |
+| Autoregressive Model | 0.96±0.01 | 0.98±0.00 | -0.33±0.09 | 0.47±0.09 | 0.90 |
+
+Table 3: Spearman's rank correlation $(\rho)$ coefficient (bootstrap mean $\pm$ standard deviation) between different OPE metrics and ground truth values at a discount factor of 0.995. In each column, rank correlation coefficients that are not significantly different from the best $(p > 0.05)$ are bold faced. Methods are ordered by median. Also see Table A.1 and Table A.2 for Normalized Regret@5 and Average Absolute Error results.
+
+
+Figure 5: Model-based offline policy optimization results. With planning and data augmentation, we improve performance over CRR exp (our baseline algorithm). When using autoregressive dynamics models (CRR-planning AR), we outperform the state of the art on Cheetah run and Fish swim. Previous SOTA results (Gulcehre et al., 2020; Wang et al., 2020) are obtained using different offline RL algorithms: Cheetah run - CRR exp, Fish swim - CRR binary max, Finger turn hard - CRR binary max, Cartpole swingup - BRAC (Wu et al., 2019).
+
+# 5.4 AUTOREGRESSIVE DYNAMICS MODELS FOR OFFLINE POLICY OPTIMIZATION
+
+Policy evaluation is an integral part of reinforcement learning, so improvements in policy evaluation can be leveraged for policy optimization. In this section, we explore two ways of using models to improve offline reinforcement learning. In all experiments, we use Critic Regularized Regression (CRR) as the base offline reinforcement learning algorithm (Wang et al., 2020).
+
+First, we utilize the model at test time for planning with a modified version of Model Predictive Path Integral (MPPI) control (Williams et al., 2015). Unlike standard MPPI, we truncate the planning process after 10 steps of rollout and use the CRR critic to estimate future discounted returns. We provide additional details in the appendix. Second, we use the model to augment the transition dataset to learn a better critic for CRR. More precisely, given $s_t^i \sim \mathcal{D}$ and the current policy $\pi$, we can generate additional data using the following process: $\hat{a}_t^i \sim \pi(\cdot | s_t^i)$, $\hat{s}_{t+1}^i, \hat{r}_{t+1}^i \sim p(\cdot, \cdot | s_t^i, \hat{a}_t^i)$.
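As a concrete illustration, the augmentation step above can be sketched as follows. `policy_sample` and `model_sample` are hypothetical stand-ins for the learned policy and dynamics model, and the toy lambdas at the bottom are purely illustrative, not the networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_transitions(states, policy_sample, model_sample):
    """Create synthetic transitions (s, a_hat, r_hat, s_hat') for critic training.

    policy_sample(s): samples a_hat ~ pi(.|s) from the current policy.
    model_sample(s, a): samples (s', r) from the learned dynamics model p.
    Both callables are stand-ins for the learned networks.
    """
    augmented = []
    for s in states:
        a_hat = policy_sample(s)
        s_next_hat, r_hat = model_sample(s, a_hat)
        augmented.append((s, a_hat, r_hat, s_next_hat))
    return augmented

# Toy stand-ins: a noisy linear policy and deterministic linear dynamics.
toy_policy = lambda s: 0.5 * s + rng.normal(scale=0.1, size=s.shape)
toy_model = lambda s, a: (s + a, float(np.sum(s * a)))

batch = augment_transitions([np.zeros(3), np.ones(3)], toy_policy, toy_model)
```

In the paper the augmented transitions are mixed with the original dataset at a 1-to-1 ratio (see Appendix B).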
+
+These two options are orthogonal and can be applied jointly. We implemented both techniques on top of the CRR exp variant (Wang et al., 2020) and show their combined effect in Figure 5. The figure shows that autoregressive dynamics models also outperform feedforward ones in the policy optimization context. Notably, on cheetah run and fish swim, using autoregressive models for both planning and data augmentation enables us to outperform the previous state of the art on these offline datasets. Additionally, when using autoregressive dynamics models, both techniques improve performance. In the appendix, we show this result as well as further ablations.
+
+# 6 CONCLUSION
+
+This paper demonstrates the promise of autoregressive models in learning transition dynamics for continuous control, with strong results on off-policy policy evaluation and offline policy optimization. Our contributions to offline model-based policy optimization are orthogonal to prior work that uses ensembles to lower the values when ensemble components disagree (Kidambi et al., 2020). Incorporating conservative value estimation into our method is an interesting avenue for future research. We use relatively primitive autoregressive neural architectures in this paper to enable a fair comparison with existing feedforward dynamics models. That said, it will be exciting to apply more sophisticated autoregressive neural network architectures with cross-attention (Bahdanau et al., 2014) and self-attention (Vaswani et al., 2017) to model-based RL for continuous control.
+
+Acknowledgements We thank Jimmy Ba, William Chan, Rishabh Agarwal, Dale Schuurmans, and Silviu Pitis for fruitful discussions on our work. We are also grateful for the helpful comments from Lihong Li, Jenny Liu, Harris Chan, Keiran Paster, Sheng Jia, and Tingwu Wang on earlier drafts.
+
+# REFERENCES
+
+Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. An optimistic perspective on offline reinforcement learning. International Conference on Machine Learning, 2020.
+Arthur Argenson and Gabriel Dulac-Arnold. Model-based offline planning. arXiv:2008.05556, 2020.
+Christopher G Atkeson, Andrew W Moore, and Stefan Schaal. Locally weighted learning. In Lazy learning, pages 11-73. Springer, 1997.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
+Michael Bain. A framework for behavioural cloning. In Machine Intelligence 15, pages 103-129, 1995.
+Gabriel Barth-Maron, Matthew W Hoffman, David Budden, Will Dabney, Dan Horgan, Dhruva Tb, Alistair Muldal, Nicolas Heess, and Timothy Lillicrap. Distributed distributional deterministic policy gradients. arXiv:1804.08617, 2018.
+Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv:2005.14165, 2020.
+Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, 2018.
+Marc Deisenroth and Carl E Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 465-472, 2011.
+Miroslav Dudík, John Langford, and Lihong Li. Doubly robust policy evaluation and learning. arXiv:1103.4601, 2011.
+Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R. Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, and Thomas Paine. Benchmarks for deep off-policy evaluation. In International Conference on Learning Representations, 2021.
+
+Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pages 2052-2062, 2019.
+Caglar Gulcehre, Ziyu Wang, Alexander Novikov, Tom Le Paine, Sergio Gomez Colmenarejo, Konrad Zolna, Rishabh Agarwal, Josh Merel, Daniel Mankowitz, Cosmin Paduraru, et al. RL Unplugged: Benchmarks for offline reinforcement learning. arXiv:2006.13888, 2020.
+Assaf Hallak, François Schnitzler, Timothy Mann, and Shie Mannor. Off-policy model-based learning under unknown factored dynamics. In International Conference on Machine Learning, pages 711-719, 2015.
+Josiah Hanna, Scott Niekum, and Peter Stone. Importance sampling policy evaluation with an estimated behavior policy. In International Conference on Machine Learning, pages 2605-2613. PMLR, 2019.
+Josiah P Hanna, Peter Stone, and Scott Niekum. Bootstrapping with models: Confidence intervals for off-policy evaluation. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
+Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, 2019.
+Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. MOReL: Model-based offline reinforcement learning. arXiv:2005.05951, 2020.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
+Ilya Kostrikov and Ofir Nachum. Statistical bootstrapping for uncertainty estimation in off-policy evaluation, 2020.
+Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy q-learning via bootstrapping error reduction. In Advances in Neural Information Processing Systems, pages 11784-11794, 2019.
+Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel. Model-Ensemble Trust-Region Policy Optimization. In International Conference on Learning Representations, 2018.
+Nathan Lambert, Brandon Amos, Omry Yadan, and Roberto Calandra. Objective mismatch in model-based reinforcement learning. arXiv:2002.04523, 2020.
+Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. In Reinforcement learning, pages 45-73. Springer, 2012.
+Hoang M Le, Cameron Voloshin, and Yisong Yue. Batch policy learning under constraints. arXiv preprint arXiv:1903.08738, 2019.
+Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv:2005.01643, 2020.
+Lihong Li, Wei Chu, John Langford, and Xuanhui Wang. Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. In Proceedings of the fourth ACM international conference on Web search and data mining, pages 297-306. ACM, 2011.
+Lihong Li, Remi Munos, and Csaba Szepesvári. On minimax optimal offline policy evaluation. arXiv:1409.3653, 2014.
+Qiang Liu, Lihong Li, Ziyang Tang, and Dengyong Zhou. Breaking the curse of horizon: Infinite-horizon off-policy estimation. In Advances in Neural Information Processing Systems, pages 5356-5366, 2018.
+Travis Mandel, Yun-En Liu, Sergey Levine, Emma Brunskill, and Zoran Popovic. Offline policy evaluation across representations with applications to educational games. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems, pages 1077-1084. International Foundation for Autonomous Agents and Multiagent Systems, 2014.
+
+Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, Ofir Nachum, and Shixiang Gu. Deployment-efficient reinforcement learning via model-based offline optimization. arXiv:2006.03647, 2020.
+Andrew W Moore and Christopher G Atkeson. Memory-based reinforcement learning: Efficient computation with prioritized sweeping. In Advances in neural information processing systems, pages 263-270, 1993.
+Susan A Murphy, Mark J van der Laan, James M Robins, and Conduct Problems Prevention Research Group. Marginal mean models for dynamic regimes. Journal of the American Statistical Association, 96(456):1410-1423, 2001.
+Ofir Nachum, Yinlam Chow, Bo Dai, and Lihong Li. Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections. In Advances in Neural Information Processing Systems, pages 2318-2328, 2019.
+Anusha Nagabandi, Gregory Kahn, Ronald S Fearing, and Sergey Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 7559-7566. IEEE, 2018.
+Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv:1609.03499, 2016.
+Cosmin Paduraru. Planning with approximate and learned models of Markov decision processes. MSc thesis, University of Alberta, 2007.
+Tom Le Paine, Cosmin Paduraru, Andrea Michi, Caglar Gulcehre, Konrad Zolna, Alexander Novikov, Ziyu Wang, and Nando de Freitas. Hyperparameter selection for offline reinforcement learning. arXiv:2007.09055, 2020.
+Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. arXiv:1802.05751, 2018.
+Jing Peng and Ronald J Williams. Efficient learning and planning within the dyna framework. Adaptive behavior, 1(4):437-454, 1993.
+Doina Precup. Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, page 80, 2000.
+Noah Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdelmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, and Martin Riedmiller. Keep doing what worked: Behavior modelling priors for offline reinforcement learning. In International Conference on Learning Representations, 2019.
+Richard S Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Machine learning proceedings 1990, pages 216-224. Elsevier, 1990.
+Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdelmaleki, Josh Merel, Andrew Lefrancq, et al. Deepmind control suite. arXiv:1801.00690, 2018.
+P. Thomas and E. Brunskill. Data-efficient off-policy policy evaluation for reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, pages 2139-2148, 2016a.
+Philip Thomas and Emma Brunskill. Data-efficient off-policy policy evaluation for reinforcement learning. In International Conference on Machine Learning, pages 2139-2148, 2016b.
+Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026-5033. IEEE, 2012.
+Masatoshi Uehara and Nan Jiang. Minimax weight and q-function learning for off-policy evaluation. arXiv:1910.12809, 2019.
+
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008, 2017.
+Cameron Voloshin, Hoang M Le, Nan Jiang, and Yisong Yue. Empirical study of off-policy policy evaluation for reinforcement learning. arXiv:1911.06854, 2019.
+Tingwu Wang, Xuchan Bao, Ignasi Clavera, Jerrick Hoang, Yeming Wen, Eric Langlois, Shunshi Zhang, Guodong Zhang, Pieter Abbeel, and Jimmy Ba. Benchmarking model-based reinforcement learning. arXiv:1907.02057, 2019.
+Ziyu Wang, Alexander Novikov, Konrad Zolna, Jost Tobias Springenberg, Scott Reed, Bobak Shahriari, Noah Siegel, Josh Merel, Caglar Gulcehre, Nicolas Heess, et al. Critic regularized regression. arXiv:2006.15134, 2020.
+Junfeng Wen, Bo Dai, Lihong Li, and Dale Schuurmans. Batch stationary distribution estimation. arXiv preprint arXiv:2003.00722, 2020.
+Grady Williams, Andrew Aldrich, and Evangelos Theodorou. Model predictive path integral control using covariance variable importance sampling. arXiv:1509.01149, 2015.
+Yifan Wu, George Tucker, and Ofir Nachum. Behavior regularized offline reinforcement learning. arXiv:1911.11361, 2019.
+Mengjiao Yang, Ofir Nachum, Bo Dai, Lihong Li, and Dale Schuurmans. Off-policy evaluation via the regularized lagrangian, 2020.
+Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Zou, Sergey Levine, Chelsea Finn, and Tengyu Ma. MOPO: Model-based offline policy optimization. arXiv:2005.13239, 2020.
+
+# A OFFLINE POLICY EVALUATION
+
+We use the baseline results in Fu et al. (2021). For convenience, we replicate their description of the OPE baselines and metrics.
+
+# A.1 OPE METRICS
+
+To evaluate the OPE algorithms, we compute three different metrics between the estimated returns and the ground truth returns:
+
+1. Rank correlation: This metric assesses how well the estimated values rank policies. It is equal to the correlation between the ranking (sorted order) by the OPE estimates and the ranking by the ground truth values.
+2. Absolute error: This metric measures the deviation of the estimates from the ground truth and does not directly assess their usefulness for ranking.
+3. Regret@k: This metric measures how much worse the best policies identified by the estimates are than the best policy in the entire set. Regret@k is the difference between the actual expected return of the best policy in the entire set and the actual value of the best policy in the top-k set.
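For concreteness, the three metrics might be computed as in the sketch below. The argsort-based ranking and the toy inputs are our own assumptions for illustration, not the benchmark's reference code.

```python
import numpy as np

def _ranks(x):
    """Rank of each entry (0 = smallest); ties broken arbitrarily."""
    order = np.argsort(x)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(x))
    return ranks

def ope_metrics(estimates, ground_truth, k=5):
    """Rank correlation, mean absolute error, and regret@k between OPE
    estimates and ground-truth returns (one entry per evaluated policy)."""
    est = np.asarray(estimates, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    # Spearman rank correlation = Pearson correlation of the ranks.
    rank_corr = float(np.corrcoef(_ranks(est), _ranks(gt))[0, 1])
    abs_err = float(np.mean(np.abs(est - gt)))
    # Regret@k: best true value overall minus the best true value among
    # the k policies ranked highest by the estimates.
    top_k = np.argsort(est)[::-1][:k]
    regret = float(gt.max() - gt[top_k].max())
    return rank_corr, abs_err, regret

# Estimates that rank the policies perfectly give correlation 1 and zero regret.
rc, ae, rg = ope_metrics([10.0, 20.0, 30.0], [100.0, 200.0, 300.0], k=1)
```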
+
+# A.2 OPE BASELINES
+
+Fitted Q-Evaluation (FQE) As in Le et al. (2019), we train a neural network to estimate the value of the evaluation policy $\pi_{e}$ by bootstrapping from $Q(s^{\prime},\pi_{e}(s^{\prime}))$ . We tried two different implementations, one from Kostrikov and Nachum (2020) and another from Paine et al. (2020).
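As a rough illustration of the bootstrapping involved, here is a minimal FQE sketch with a linear Q-function fit by least squares. The feature map `phi` and the deterministic `pi_e` are illustrative assumptions; the actual baselines use neural network function approximators.

```python
import numpy as np

def fqe_linear(transitions, pi_e, phi, gamma=0.99, iters=100):
    """Fitted Q-Evaluation with a linear Q(s, a) = phi(s, a) @ w.

    transitions: list of (s, a, r, s') tuples from the offline dataset.
    pi_e(s): action chosen by the (deterministic) evaluation policy.
    phi(s, a): feature vector; phi and pi_e here are illustrative stand-ins.
    Each iteration regresses Q(s, a) onto the bootstrapped target
    r + gamma * Q(s', pi_e(s')) computed with the previous weights.
    """
    X = np.stack([phi(s, a) for s, a, _, _ in transitions])
    R = np.array([r for _, _, r, _ in transitions])
    Xn = np.stack([phi(s2, pi_e(s2)) for _, _, _, s2 in transitions])
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        target = R + gamma * (Xn @ w)
        w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return w

# One-state MDP with constant reward 1: the value approaches 1 / (1 - gamma).
w = fqe_linear([(0, 0, 1.0, 0)], pi_e=lambda s: 0,
               phi=lambda s, a: np.array([1.0]), gamma=0.9)
```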
+
+Importance Sampling (IS) We perform importance sampling with a learned behavior policy. We use the implementation from Kostrikov and Nachum (2020), which uses self-normalized (also known as weighted) step-wise importance sampling (Liu et al., 2018; Nachum et al., 2019). Since the behavior policy is not known explicitly, we learn an estimate of it via a maximum-likelihood objective over the dataset $\mathcal{D}$, as advocated by Hanna et al. (2019). To be able to compute log-probabilities when the target policy is deterministic, we add artificial Gaussian noise with standard deviation 0.01 to all deterministic target policies.
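A minimal sketch of the self-normalized step-wise estimator follows; the per-step likelihood ratios are assumed to be given (in practice they come from the learned behavior policy), and the toy trajectories at the bottom are purely illustrative.

```python
import numpy as np

def snsw_is(trajectories, gamma=0.995):
    """Self-normalized step-wise importance sampling estimate of return.

    Each trajectory is a list of (rho_t, r_t) pairs, where rho_t is the
    per-step likelihood ratio pi_e(a_t | s_t) / pi_b(a_t | s_t) (with pi_b
    learned by maximum likelihood in practice). The weight at step t is
    the product of ratios up to t, normalized across trajectories.
    """
    T = max(len(traj) for traj in trajectories)
    n = len(trajectories)
    W = np.zeros((n, T))  # cumulative importance weights
    R = np.zeros((n, T))  # per-step rewards
    for i, traj in enumerate(trajectories):
        w = 1.0
        for t, (rho, r) in enumerate(traj):
            w *= rho
            W[i, t] = w
            R[i, t] = r
    norm = W.sum(axis=0, keepdims=True)
    norm[norm == 0] = 1.0  # guard against empty tail steps
    Wn = W / norm
    discounts = gamma ** np.arange(T)
    return float(np.sum(discounts * (Wn * R).sum(axis=0)))

# With all ratios equal to 1, the estimate reduces to the average return.
est = snsw_is([[(1.0, 1.0), (1.0, 1.0)], [(1.0, 1.0), (1.0, 1.0)]], gamma=1.0)
```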
+
+Doubly-Robust (DR) We perform weighted doubly-robust policy evaluation based on Thomas and Brunskill (2016b) and using the implementation of Kostrikov and Nachum (2020). Specifically, this method combines the IS technique above with a value estimator for variance reduction. The value estimator is learned according to Kostrikov and Nachum (2020), using deep FQE with an L2 loss function.
+
+DICE This method uses a saddle-point objective to estimate marginalized importance weights $d^{\pi}(s,a) / d^{\pi_B}(s,a)$; these weights are then used to compute a weighted average of rewards over the offline dataset, which serves as an estimate of the policy's value in the MDP. We use the implementation from Yang et al. (2020) corresponding to the BestDICE algorithm.
+
+Variational Power Method (VPM) This method runs a variational power iteration algorithm to estimate the importance weights $d^{\pi}(s,a) / d^{\pi_B}(s,a)$ without knowledge of the behavior policy. It then estimates the target policy value using a weighted average of rewards, similar to the DICE method. Our implementation uses the same network and hyperparameters for the OPE setting as Wen et al. (2020). We further tune the hyperparameters, including the regularization parameter $\lambda$, the learning rates $\alpha_{\theta}$ and $\alpha_{v}$, and the number of iterations, on the Cartpole swingup task using the ground-truth policy value, and then fix them for all other tasks.
+
+# A.3 ENSEMBLING
+
+As in Chua et al. (2018); Janner et al. (2019), we can form an ensemble using our best-performing models. We generate rollouts using the procedure detailed in Janner et al. (2019), forming an ensemble with 4 models. We see some improvement in policy evaluation results, as shown in Figure A.1. Ensembling could likely be further improved by forcing unique hyperparameter settings and seeds.
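The per-step model selection in such a rollout can be sketched as follows; `policy` and the entries of `models` are stand-in callables, not the trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_rollout(s0, policy, models, horizon, gamma=0.995):
    """Model rollout in which each step is predicted by a randomly chosen
    ensemble member (following Janner et al., 2019). `models` holds callables
    (s, a) -> (s', r) and `policy` maps s -> a; all are stand-ins here."""
    s, ret = s0, 0.0
    for t in range(horizon):
        a = policy(s)
        step_model = models[rng.integers(len(models))]
        s, r = step_model(s, a)
        ret += (gamma ** t) * r
    return ret

# With constant-reward toy models and gamma = 1, a 3-step rollout returns 3.
ret = ensemble_rollout(0.0, lambda s: 0.0,
                       [lambda s, a: (s, 1.0), lambda s, a: (s, 1.0)],
                       horizon=3, gamma=1.0)
```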
+
+| Regret@5 for OPE vs. ground truth | Cartpole swingup | Cheetah run | Finger turn hard | Fish swim | Humanoid run |
+| --- | --- | --- | --- | --- | --- |
+| Importance Sampling | 0.73±0.16 | 0.40±0.21 | 0.64±0.05 | 0.12±0.05 | 0.31±0.09 |
+| Best DICE | 0.68±0.41 | 0.27±0.05 | 0.44±0.04 | 0.35±0.24 | 0.84±0.22 |
+| Variational power method | 0.50±0.13 | 0.37±0.04 | 0.45±0.13 | 0.02±0.02 | 0.56±0.08 |
+| Doubly Robust (IS, FQE) | 0.28±0.05 | 0.09±0.05 | 0.56±0.12 | 0.61±0.12 | 0.99±0.00 |
+| FQE (L2) | 0.06±0.04 | 0.17±0.05 | 0.30±0.11 | 0.50±0.03 | 0.99±0.00 |
+| Feedforward Model | 0.02±0.02 | 0.24±0.12 | 0.43±0.04 | 0.00±0.00 | 0.44±0.02 |
+| FQE (distributional) | 0.03±0.09 | 0.11±0.09 | 0.10±0.12 | 0.49±0.06 | 0.24±0.15 |
+| Autoregressive Model | 0.00±0.02 | 0.01±0.02 | 0.63±0.11 | 0.03±0.02 | 0.32±0.06 |
+
+| | Walker stand | Walker walk | Manipulator insert ball | Manipulator insert peg | Median ↓ |
+| --- | --- | --- | --- | --- | --- |
+| Importance Sampling | 0.54±0.11 | 0.54±0.23 | 0.83±0.05 | 0.22±0.03 | 0.54 |
+| Best DICE | 0.24±0.07 | 0.55±0.06 | 0.44±0.07 | 0.75±0.04 | 0.44 |
+| Variational power method | 0.41±0.02 | 0.39±0.02 | 0.52±0.20 | 0.32±0.02 | 0.41 |
+| Doubly Robust (IS, FQE) | 0.02±0.01 | 0.05±0.07 | 0.30±0.10 | 0.73±0.01 | 0.30 |
+| FQE (L2) | 0.04±0.02 | 0.00±0.02 | 0.37±0.07 | 0.74±0.01 | 0.30 |
+| Feedforward Model | 0.18±0.10 | 0.03±0.05 | 0.83±0.06 | 0.74±0.01 | 0.24 |
+| FQE (distributional) | 0.03±0.03 | 0.01±0.02 | 0.50±0.30 | 0.73±0.01 | 0.11 |
+| Autoregressive Model | 0.04±0.02 | 0.04±0.02 | 0.85±0.02 | 0.30±0.04 | 0.04 |
+
+Table A.1: Normalized Regret@5 (bootstrap mean ± standard deviation) for OPE methods vs. ground truth values at a discount factor of 0.995. In each column, normalized regret values that are not significantly different from the best $(p > 0.05)$ are bold faced. Methods are ordered by median.
+
+| Absolute Error btw. OPE and ground truth | Cartpole swingup | Cheetah run | Finger turn hard | Fish swim | Humanoid run |
+| --- | --- | --- | --- | --- | --- |
+| Variational power method | 37.53±3.50 | 61.89±4.25 | 46.22±3.93 | 31.27±0.99 | 35.29±3.03 |
+| Importance Sampling | 68.75±2.39 | 44.29±1.91 | 90.10±4.68 | 34.82±1.93 | 27.89±1.98 |
+| Best DICE | 22.73±1.65 | 23.35±1.32 | 33.52±3.48 | 59.48±2.47 | 31.42±2.04 |
+| Feedforward Model | 6.80±0.85 | 13.64±0.59 | 35.99±3.00 | 4.75±0.23 | 30.12±2.40 |
+| FQE (L2) | 19.02±1.34 | 48.26±1.78 | 27.91±1.18 | 19.82±1.57 | 56.28±3.52 |
+| Doubly Robust (IS, FQE) | 24.38±2.51 | 40.27±2.05 | 25.26±2.48 | 20.28±1.90 | 53.64±3.68 |
+| FQE (distributional) | 12.63±1.21 | 36.50±1.62 | 10.23±0.93 | 7.76±0.95 | 32.36±2.27 |
+| Autoregressive Model | 5.32±0.54 | 4.64±0.46 | 22.93±1.72 | 4.31±0.22 | 20.95±1.61 |
+
+| | Walker stand | Walker walk | Manipulator insert ball | Manipulator insert peg | Median ↓ |
+| --- | --- | --- | --- | --- | --- |
+| Variational power method | 96.76±3.59 | 87.24±4.25 | 79.25±6.19 | 21.95±1.17 | 46.22 |
+| Importance Sampling | 66.50±1.90 | 67.24±2.70 | 29.93±1.10 | 12.78±0.66 | 44.29 |
+| Best DICE | 27.58±3.01 | 47.28±3.13 | 103.45±5.21 | 22.75±3.00 | 31.42 |
+| Feedforward Model | 23.34±2.41 | 52.23±2.34 | 34.30±2.55 | 121.12±1.58 | 30.12 |
+| FQE (L2) | 6.51±0.71 | 18.34±0.95 | 36.32±1.07 | 31.12±2.37 | 27.91 |
+| Doubly Robust (IS, FQE) | 26.82±2.66 | 24.63±1.69 | 13.33±1.16 | 22.28±2.34 | 24.63 |
+| FQE (distributional) | 21.49±1.41 | 27.57±1.54 | 9.75±1.10 | 12.66±1.39 | 12.66 |
+| Autoregressive Model | 19.12±1.23 | 5.14±0.49 | 17.13±1.34 | 9.71±0.70 | 9.71 |
+
+Table A.2: Average absolute error between OPE metrics and ground truth values at a discount factor of 0.995. In each column, absolute error values that are not significantly different from the best $(p > 0.05)$ are bold faced. Methods are ordered by median.
+
+Figure A.1: Estimates of returns using the top model versus estimates of returns using an ensemble of the top-4 models.
+
+Algorithm 2 Model Predictive Path Integral Planning
+Require: state $s$, policy $\pi$, dynamics model $p$, critic $Q$, temperature $\beta$, and noise variance $\sigma^2$.
+for $m = 1, \dots, M$ do
+  for $n = 1, \dots, N$ do
+    $s_n^0 \gets s$; $R_n \gets 0$
+    for $\tau = 0, \dots, H - 1$ do
+      $a_n^\tau \sim \pi(\cdot | s_n^\tau)$; $s_n^{\tau + 1}, r_n^{\tau + 1} \sim p(\cdot, \cdot | s_n^\tau, a_n^\tau)$; $R_n \gets R_n + \gamma^\tau r_n^{\tau + 1}$
+    end for
+    $a_n^H \sim \pi(\cdot | s_n^H)$; $R_n \gets R_n + \gamma^H Q(s_n^H, a_n^H)$
+  end for
+  Re-define $\pi$ such that $\pi(\cdot | \hat{s}^\tau) = \sum_{n} \frac{\exp(R_n / \beta)}{\sum_{m} \exp(R_m / \beta)} \mathcal{N}(\cdot | a_n^\tau, \sigma^2 I)$. (The new $\pi$ depends on $\tau$ rather than on $\hat{s}$.)
+end for
+Sample the final action $a \sim \sum_{n} \frac{\exp(R_n / \beta)}{\sum_{m} \exp(R_m / \beta)} \delta(a_n^0)$
+return $a$
+
+# B ADDITIONAL DETAILS REGARDING POLICY OPTIMIZATION
+
+To test dynamics models for policy optimization, we implement the two methods discussed in Section 5.4 on top of CRR exp, one of the CRR variants (Wang et al., 2020). We use the RL Unplugged datasets (Gulcehre et al., 2020) for all environments studied in this section. When using data augmentation, we adopt a 1-to-1 ratio between the original dataset and the augmented dataset.
+
+To take advantage of the dynamics models at test time, we use a variant of Model Predictive Path Integral (MPPI) control for planning. To reduce the planning horizon, we truncate the model rollout and bootstrap with the CRR critics. The details of the planning procedure are summarized in Algorithm 2. All hyperparameter tuning for the planning process is conducted on the cartpole swingup task. The hyperparameters used in the planning process are $M = 3$, $N = 16$, $H = 10$, $\beta = 0.1$, and $\sigma^2 = 0.01$. To match the temperature used in the planning component, we choose $\beta = 0.1$ for the CWP component of CRR. This change, however, does not noticeably affect the baseline CRR agent's performance. With the exception of $\beta$ and the planning component, all hyperparameters are kept the same as in CRR exp.
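A minimal sketch of this truncated planning procedure, in the spirit of Algorithm 2, might look as follows. One-dimensional actions are assumed for simplicity, `policy`, `model`, and `critic` are stand-in callables for the learned networks, and the default hyperparameters mirror the values above.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

def mppi_action(s, policy, model, critic, M=3, N=16, H=10,
                beta=0.1, sigma2=0.01, gamma=0.995):
    """Truncated MPPI planning: roll out N action sequences for H steps
    with the learned model, bootstrap with the critic Q at the horizon,
    and refine the sampling distribution for M rounds."""
    actions, weights = None, None  # trajectories and weights of the last round
    for _ in range(M):
        new_actions = np.zeros((N, H))
        returns = np.zeros(N)
        for n in range(N):
            sn = s
            for tau in range(H):
                if actions is None:
                    a = policy(sn)  # first round: sample from the policy
                else:
                    # Sample from the exp(R / beta)-reweighted mixture of
                    # the previous round's actions, with Gaussian noise.
                    idx = rng.choice(N, p=weights)
                    a = actions[idx, tau] + rng.normal(scale=np.sqrt(sigma2))
                sn, r = model(sn, a)
                new_actions[n, tau] = a
                returns[n] += gamma ** tau * r
            returns[n] += gamma ** H * critic(sn, policy(sn))  # bootstrap
        actions, weights = new_actions, softmax(returns / beta)
    return actions[rng.choice(N, p=weights), 0]

# Toy check with constant dynamics and a zero policy.
a0 = mppi_action(0.0, lambda s: 0.0, lambda s, a: (s, 1.0),
                 lambda s, a: 0.0, M=2, N=4, H=3, gamma=1.0)
```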
+
+
+Figure B.2: Effects of the planning procedure. Here we compare using planning (CRR-planning AR) vs. not (CRR AR), both using augmented data generated by the autoregressive model. Planning with autoregressive models helps in all environments tested.
+
+Figure B.3: Effects of data augmentation on cheetah run. [LEFT] In the absence of planning, data augmentation significantly increases the performance of the CRR agent. [RIGHT] With the planning procedure, data augmentation is still effective, albeit to a lesser extent.
+
+We compare the agents' performance with and without the planning procedure to test its effects. As shown in Figure B.2, planning using an autoregressive model significantly increases performance.
+
+Data augmentation does not change the agents' performance on cartpole swingup, fish swim, or finger turn hard. It, however, boosts performance considerably on cheetah run. In Figure B.3, we show the effects of data augmentation on cheetah run.
+
+Figure B.4: Comparison of model-based OPE using autoregressive and feedforward dynamics models with state-of-the-art FQE methods based on L2 and distributional Bellman error. We plot ground truth returns on the x-axis against estimates of returns from various OPE methods on the y-axis. While feedforward models fall behind FQE on most tasks, autoregressive dynamics models are often superior. The remaining environments are plotted in Figure 4.
+
\ No newline at end of file
diff --git a/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/images.zip b/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0d7bb103dd6b06aaa69c6ff178211060d75d5d93
--- /dev/null
+++ b/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc1e044f1834adb1bf0191b85208ee55dbd22189863a9663e94185d1f7046368
+size 1321975
diff --git a/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/layout.json b/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2feb05a3aa943d560a9e4e9a58c385a3c46712ec
--- /dev/null
+++ b/autoregressivedynamicsmodelsforofflinepolicyevaluationandoptimization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c51b6db0dfb110e5cc4e227e782e59fcb08f40a8be0ee4fd5dbe28cd9a95442
+size 523913
diff --git a/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/2802d650-3a45-4a26-97d7-c37c1d6ee49e_content_list.json b/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/2802d650-3a45-4a26-97d7-c37c1d6ee49e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9d3f1d0708ffa7ff553ef96c3a166c46ff9640db
--- /dev/null
+++ b/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/2802d650-3a45-4a26-97d7-c37c1d6ee49e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:93d27ff0e079a88798e11aac93b71bff9e6c506d682e2b63fe0fe5a7409795be
+size 79211
diff --git a/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/2802d650-3a45-4a26-97d7-c37c1d6ee49e_model.json b/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/2802d650-3a45-4a26-97d7-c37c1d6ee49e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e4f3ae47e758ce6c2ea4957d6e3cea54b84db103
--- /dev/null
+++ b/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/2802d650-3a45-4a26-97d7-c37c1d6ee49e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:394d1324a1e893457fc4a3a01cf3deec564f0279ed8fe0a6470321fc26d0c4f4
+size 94414
diff --git a/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/2802d650-3a45-4a26-97d7-c37c1d6ee49e_origin.pdf b/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/2802d650-3a45-4a26-97d7-c37c1d6ee49e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b502062417def563eb848e5080e73764a48316e6
--- /dev/null
+++ b/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/2802d650-3a45-4a26-97d7-c37c1d6ee49e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef08494e796d6dab6efc56d326468a7ce54ee2feac99a1be8e00608ba5858c6f
+size 1617393
diff --git a/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/full.md b/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a8d5a3816c5fcf3ccb92909c5f7d3f735e975455
--- /dev/null
+++ b/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/full.md
@@ -0,0 +1,331 @@
+# AUTO SEG-LOSS: SEARCHING METRIC SURROGATES FOR SEMANTIC SEGMENTATION
+
+Hao Li $^{1*†}$ , Chenxin Tao $^{2*†}$ , Xizhou Zhu $^{3}$ , Xiaogang Wang $^{1,3}$ , Gao Huang $^{2}$ , Jifeng Dai $^{3,4‡}$
+
+1The Chinese University of Hong Kong 2Tsinghua University
+
+$^{3}$ SenseTime Research $^{4}$ Qing Yuan Research Institute, Shanghai Jiao Tong University
+
+haoli@link.cuhk.edu.hk, tcx20@mails.tsinghua.edu.cn
+
+{zhuwalter, daijifeng}@sensetime.com
+
+xgwang@ee.cuhk.edu.hk, gaohuang@tsinghua.edu.cn
+
+# ABSTRACT
+
+Designing proper loss functions is essential in training deep networks. Especially in the field of semantic segmentation, various evaluation metrics have been proposed for diverse scenarios. Despite the success of the widely adopted cross-entropy loss and its variants, the mis-alignment between the loss functions and evaluation metrics degrades the network performance. Meanwhile, manually designing loss functions for each specific metric requires expertise and significant manpower. In this paper, we propose to automate the design of metric-specific loss functions by searching differentiable surrogate losses for each metric. We substitute the non-differentiable operations in the metrics with parameterized functions, and conduct parameter search to optimize the shape of loss surfaces. Two constraints are introduced to regularize the search space and make the search efficient. Extensive experiments on PASCAL VOC and Cityscapes demonstrate that the searched surrogate losses outperform the manually designed loss functions consistently. The searched losses can generalize well to other datasets and networks. Code shall be released at https://github.com/fundamentalvision/ Auto-Seg-Loss.
+
+# 1 INTRODUCTION
+
+Loss functions are indispensable components in training deep networks, as they drive the feature learning process for various applications with specific evaluation metrics. However, most metrics, like the commonly used 0-1 classification error, are non-differentiable in their original forms and cannot be directly optimized via gradient-based methods. Empirically, the cross-entropy loss serves well as an effective surrogate objective function for a variety of tasks concerning categorization. This phenomenon is especially prevalent in image semantic segmentation, where various evaluation metrics have been designed to address diverse tasks focusing on different scenarios. Some metrics measure the accuracy on the whole image, while others focus more on the segmentation boundaries. Although cross-entropy and its variants work well for many metrics, the mis-alignment between network training and evaluation still exists and inevitably leads to performance degradation.
+
+Typically, there are two ways for designing metric-specific loss functions in semantic segmentation. The first is to modify the standard cross-entropy loss to meet the target metric (Ronneberger et al., 2015; Wu et al., 2016). The other is to design other clever surrogate losses for specific evaluation metrics (Rahman & Wang, 2016; Milletari et al., 2016). Despite the improvements, these handcrafted losses need expertise and are non-trivial to extend to other evaluation metrics.
+
+In contrast to designing loss functions manually, an alternative approach is to find a framework that can design proper loss functions for different evaluation metrics in an automated manner, motivated by recent progress in AutoML (Zoph & Le, 2017; Pham et al., 2018; Liu et al., 2018; Li et al., 2019). Although automating the design process for loss functions is attractive, it is non-trivial to apply an
+
+AutoML framework to loss functions. Typical AutoML algorithms require a proper search space, in which some search algorithm is conducted. Previous search spaces are either unsuitable for loss design, or too general to be searched efficiently. Recently, Li et al. (2019) and Wang et al. (2020) proposed search spaces based on existing handcrafted loss functions, in which the algorithm searches for the best combination. However, these search spaces are still limited to variants of the cross-entropy loss, and thus do not address the mis-alignment problem well.
+
+In this paper, we propose a general framework for searching surrogate losses for mainstream non-differentiable segmentation metrics. The key idea is that we can build the search space according to the form of the evaluation metrics. In this way, the training criteria and evaluation metrics are unified. Meanwhile, the search space is compact enough for efficient search. Specifically, the metrics are first relaxed to the continuous domain by substituting the one-hot prediction and logical operations, which are the non-differentiable parts in most metrics, with their differentiable approximations. Parameterized functions are introduced to approximate the logical operations, ensuring that the loss surfaces are smooth while effective for training. The loss parameterization functions can be of arbitrary families defined on $[0,1]$ . Parameter search is further conducted on the chosen family so as to optimize the network performance on the validation set with the given evaluation metric. Two essential constraints are introduced to regularize the parameter search space. We find that the searched surrogate losses effectively generalize to different networks and datasets. Extensive experiments on PASCAL VOC (Everingham et al., 2015) and Cityscapes (Cordts et al., 2016) show that our approach delivers accuracy superior to the existing losses specifically designed for individual segmentation metrics, with mild computational overhead.
+
+Our contributions can be summarized as follows: 1) Our approach is the first general framework of surrogate loss search for mainstream segmentation metrics. 2) We propose an effective parameter regularization and parameter search algorithm, which finds loss surrogates optimizing the target metric performance with mild computational overhead. 3) The surrogate losses obtained via the proposed search framework promote our understanding of loss function design and are novel contributions by themselves, because they differ from existing loss functions specifically designed for individual metrics, and are transferable across different datasets and networks.
+
+# 2 RELATED WORK
+
+Loss function design is an active topic in deep network training (Ma, 2020). In the area of image semantic segmentation, cross-entropy loss is widely used (Ronneberger et al., 2015; Chen et al., 2018). But the cross-entropy loss is designed for optimizing the global accuracy measure (Rahman & Wang, 2016; Patel et al., 2020), which is not aligned with many other metrics. Numerous studies have been conducted to design proper loss functions for the prevalent evaluation metrics. For the mIoU metric, many works (Ronneberger et al., 2015; Wu et al., 2016) incorporate class frequency to mitigate the class imbalance problem. For the boundary F1 score, the losses at boundary regions are up-weighted (Caliva et al., 2019; Qin et al., 2019), so as to deliver more accurate boundaries. These works carefully analyze the properties of specific evaluation metrics, and design the loss functions in a fully handcrafted way, which requires expertise. By contrast, we propose a unified framework for deriving parameterized surrogate losses for various evaluation metrics, in which the parameters are searched automatically via reinforcement learning. The networks trained with the searched surrogate losses deliver accuracy on par with or even superior to those trained with the best handcrafted losses.
+
+Direct loss optimization for non-differentiable evaluation metrics has long been studied for structural SVM models (Joachims, 2005; Yue et al., 2007; Ranjbar et al., 2012). However, the gradients w.r.t. features cannot be derived from these approaches. Therefore, they cannot drive the training of deep networks through back-propagation. Hazan et al. (2010) proposes to optimize structural SVM with gradient descent, where loss-augmented inference is applied to get the gradients of the expectation of evaluation metrics. Song et al. (2016) further extends this approach to non-linear models (e.g., deep neural networks). However, the computational complexity is very high during each step in gradient descent. Although Song et al. (2016) and Mohapatra et al. (2018) have designed efficient algorithms for the Average Precision (AP) metric, other metrics still need specially designed efficient algorithms. Our method, by contrast, is general for the mainstream segmentation metrics. Thanks to the good generalizability, our method only needs to perform the search process
+
+once for a specific metric, and the searched surrogate loss can be directly used henceforth. Applying the searched loss for training networks brings very little additional computational cost.
+
+Surrogate loss is introduced to derive loss gradients for the non-differentiable evaluation metrics. There are usually two ways for designing surrogate losses. The first is to handcraft an approximated differentiable metric function. For the IoU measure, Rahman & Wang (2016) propose to approximate the intersection and union separately using the softmax probabilities in a differentiable form, and show its effectiveness on binary segmentation tasks. Berman et al. (2018) further deal with multi-class segmentation problems by extending mIoU from binary inputs to the continuous domain with the convex Lovász extension, and their method outperforms standard cross entropy loss in multi-class segmentation tasks. For the F1 measure, dice loss is proposed by Milletari et al. (2016) as a direct objective by substituting the binary prediction with the softmax probability. In spite of the success, they do not apply for other metrics.
+
+The second solution is to train a network to approximate the target metric. Nagendar et al. (2018) train a network to approximate mIoU. Patel et al. (2020) design a neural network to learn embeddings for predictions and ground truths for tasks other than segmentation. This line of research focuses on minimizing the approximation error w.r.t. the target metrics. But there is no guarantee that their approximations provide good loss signals for training. These approximated losses are just employed in a post-tuning setup, still relying on cross-entropy pre-trained models. Our method significantly differs in that we search surrogate losses to directly optimize the evaluation metrics in applications.
+
+AutoML is a long-pursued target of machine learning (He et al., 2019). Recently a sub-field of AutoML, neural architecture search (NAS), has attracted much attention due to its success in automating the process of neural network architecture design (Zoph & Le, 2017; Pham et al., 2018; Liu et al., 2018). As an essential element, the loss function has also raised the interest of researchers to automate its design process. Li et al. (2019) and Wang et al. (2020) design search spaces based on existing human-designed loss functions and search for the best combination parameters. There are two issues: a) the search process outputs whole network models rather than loss functions, so for every new network or dataset the expensive search procedure must be conducted again; and b) the search space is filled with variants of cross-entropy, which cannot resolve the mis-alignment between the cross-entropy loss and many target metrics. By contrast, our method outputs searched surrogate loss functions whose forms closely follow the target metrics, and which are transferable between networks and datasets.
+
+# 3 REVISITING EVALUATION METRICS FOR SEMANTIC SEGMENTATION
+
+Various evaluation metrics are defined for semantic segmentation, addressing diverse tasks focusing on different scenarios. Most of them fall into three typical classes: Acc-based, IoU-based, and F1-score-based. This section revisits the evaluation metrics under a unified notation set.
+
+Table 1 summarizes the mainstream evaluation metrics. The notations are as follows: suppose the validation set is composed of $N$ images, labeled with categories from $C$ classes (background included). Let $I_{n}, n \in \{1, \ldots, N\}$ be the $n$ -th image, and $Y_{n}$ be the corresponding ground-truth segmentation mask. Here $Y_{n} = \{y_{n,c,h,w}\}_{c,h,w}$ is a one-hot vector, where $y_{n,c,h,w} \in \{0,1\}$ indicates whether the pixel at spatial location $(h,w)$ belongs to the $c$ -th category $(c \in \{1, \ldots, C\})$ . In evaluation, the ground-truth segmentation mask $Y_{n}$ is compared to the network prediction $\hat{Y}_{n} = \{\hat{y}_{n,c,h,w}\}_{c,h,w}$ , where $\hat{y}_{n,c,h,w} \in \{0,1\}$ . $\hat{y}_{n,c,h,w}$ is quantized from the continuous scores produced by the network (by argmax operation).
+
+Acc-based metrics. The global accuracy measure (gAcc) counts the number of pixels correctly classified. It can be written with logical operator AND as Eq. (1). The gAcc metric counts each pixel equally, so the results of the long-tailed categories have little impact on the metric number. The mean accuracy (mAcc) metric mitigates this by normalizing within each category as in Eq. (2).
+
+IoU-based metrics. The evaluation is on set similarity rather than pixel accuracy. The intersection-over-union (IoU) score is evaluated between the prediction and the ground-truth mask of each category. The mean IoU (mIoU) metric averages the IoU scores of all categories, as in Eq. (3).
+
+In the variants, the frequency weighted IoU (FWIoU) metric weighs each category IoU score by the category pixel number, as in Eq. (4). The boundary IoU (BIoU) (Kohli et al., 2009) metric only cares about the segmentation quality around the boundary, so it picks the boundary pixels out in evaluation
+
+Table 1: Revisiting mainstream metrics for semantic segmentation. The metrics with $\dagger$ measure the segmentation accuracy on the whole image. The metrics with * focus on the boundary quality.
+
+| Type | Name | Formula |
+| --- | --- | --- |
+| Acc-based | Global Accuracy† | $\mathrm{gAcc} = \frac{\sum_{n,c,h,w} \hat{y}_{n,c,h,w} \text{ AND } y_{n,c,h,w}}{\sum_{n,c,h,w} y_{n,c,h,w}}$ (1) |
+| Acc-based | Mean Accuracy† | $\mathrm{mAcc} = \frac{1}{C}\sum_{c} \frac{\sum_{n,h,w} \hat{y}_{n,c,h,w} \text{ AND } y_{n,c,h,w}}{\sum_{n,h,w} y_{n,c,h,w}}$ (2) |
+| IoU-based | Mean IoU† | $\mathrm{mIoU} = \frac{1}{C}\sum_{c} \frac{\sum_{n,h,w} \hat{y}_{n,c,h,w} \text{ AND } y_{n,c,h,w}}{\sum_{n,h,w} \hat{y}_{n,c,h,w} \text{ OR } y_{n,c,h,w}}$ (3) |
+| IoU-based | Frequency Weighted IoU† | $\mathrm{FWIoU} = \frac{1}{\sum_{n,c,h,w} y_{n,c,h,w}} \sum_{c} \left( \sum_{n,h,w} y_{n,c,h,w} \right) \frac{\sum_{n,h,w} \hat{y}_{n,c,h,w} \text{ AND } y_{n,c,h,w}}{\sum_{n,h,w} \hat{y}_{n,c,h,w} \text{ OR } y_{n,c,h,w}}$ (4) |
+| IoU-based | Boundary IoU* | $\mathrm{BIoU} = \frac{1}{C}\sum_{c} \frac{\sum_{n,(h,w)\in \mathrm{BD}(y_n)} \hat{y}_{n,c,h,w} \text{ AND } y_{n,c,h,w}}{\sum_{n,(h,w)\in \mathrm{BD}(y_n)} \hat{y}_{n,c,h,w} \text{ OR } y_{n,c,h,w}}$, where $\mathrm{BD}(y) = y \text{ XOR } \text{Min-Pooling}(y)$ (5) |
+| F1-score-based | Boundary F1 Score* | $\text{BF1-score} = \frac{1}{C}\sum_{c} \frac{2 \times \mathrm{prec}_c \times \mathrm{recall}_c}{\mathrm{prec}_c + \mathrm{recall}_c}$, where $\mathrm{prec}_c = \frac{\sum_{n,h,w} \mathrm{BD}(\hat{y}_n)_{c,h,w} \text{ AND } \text{Max-Pooling}(\mathrm{BD}(y_n))_{c,h,w}}{\sum_{n,h,w} \mathrm{BD}(\hat{y}_n)_{c,h,w}}$, $\mathrm{recall}_c = \frac{\sum_{n,h,w} \text{Max-Pooling}(\mathrm{BD}(\hat{y}_n))_{c,h,w} \text{ AND } \mathrm{BD}(y_n)_{c,h,w}}{\sum_{n,h,w} \mathrm{BD}(y_n)_{c,h,w}}$ (6) |
+
+and ignores the rest. It can be calculated with Eq. (5), in which $\mathrm{BD}(y_n)$ denotes the boundary region in map $y_{n}$ . $\mathrm{BD}(y_n)$ is derived by applying the XOR operation between the ground-truth mask and its min-pooled counterpart. The stride of Min-Pooling(·) is 1.
+
+F1-score-based metrics. F1-score is a criterion that takes both precision and recall into consideration. A well-known metric of this type is boundary F1-score (BF1-score) (Csurka et al., 2013), which is widely used for evaluating boundary segmentation accuracy. The computation of precision and recall in BF1-score is as in Eq. (6), where $\mathrm{BD}(\hat{y}_n)$ and $\mathrm{BD}(y_n)$ are derived from Eq. (5). Max pooling with stride 1, Max-Pooling( $\cdot$ ), is applied on the boundary regions to allow error tolerance.
+
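The quantities in this table can be computed directly from one-hot masks. The following minimal NumPy sketch (the array shapes and toy data are our own illustration, not from the paper) evaluates gAcc (Eq. 1) and mIoU (Eq. 3):

```python
import numpy as np

def global_accuracy(pred, gt):
    # Eq. (1): correctly classified pixels (AND) over all labeled pixels.
    # pred, gt: one-hot arrays of shape (N, C, H, W) with entries in {0, 1}.
    return np.logical_and(pred, gt).sum() / gt.sum()

def mean_iou(pred, gt):
    # Eq. (3): per-category intersection over union, averaged over C classes.
    inter = np.logical_and(pred, gt).sum(axis=(0, 2, 3))
    union = np.logical_or(pred, gt).sum(axis=(0, 2, 3))
    return np.mean(inter / union)

# Toy example: one 2x2 image, two categories (C = 2).
gt = np.zeros((1, 2, 2, 2), dtype=int)
pred = np.zeros((1, 2, 2, 2), dtype=int)
gt[0, 0] = [[1, 1], [0, 0]]          # ground truth: top row is class 0
gt[0, 1] = 1 - gt[0, 0]
pred[0, 0] = [[1, 0], [0, 0]]        # prediction misses one class-0 pixel
pred[0, 1] = 1 - pred[0, 0]

print(global_accuracy(pred, gt))     # 3 of 4 pixels correct -> 0.75
print(mean_iou(pred, gt))            # (1/2 + 2/3) / 2
```

Both functions take the hard, quantized predictions as input; this is precisely the non-differentiable step that the surrogate construction in Section 4 relaxes.
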
+# 4 AUTO SEG-LOSS FRAMEWORK
+
+In the Auto Seg-Loss framework, the evaluation metrics are transferred into continuous surrogate losses with learnable parameters, which are further optimized. Fig. 1 illustrates our approach.
+
+# 4.1 EXTENDING METRICS TO SURROGATES
+
+As shown in Section 3, most segmentation metrics are non-differentiable because they take one-hot prediction maps as input, and contain binary logical operations. We extend these metrics to be continuous loss surrogates by smoothing the non-differentiable operations within.
+
+Extending One-hot Operation. The one-hot prediction map, $\hat{Y}_n = \{\hat{y}_{n,c,h,w}\}_{c,h,w}$ , is derived by picking the highest scoring category at each pixel, which is further turned into one-hot form. Here, we approximate the one-hot predictions with softmax probabilities, as,
+
+$$
+\hat{y}_{n,c,h,w} \approx \widetilde{y}_{n,c,h,w} = \operatorname{Softmax}_c\left(z_{n,c,h,w}\right), \tag{7}
+$$
+
+where $z_{n,c,h,w} \in \mathbb{R}$ is the category score output by the network (without normalization). The approximated one-hot prediction is denoted by $\widetilde{y}_{n,c,h,w}$ .
+
+Extending Logical Operations. As shown in Table 1, the non-differentiable logical operations, $f_{\mathrm{AND}}(y_1,y_2)$ , $f_{\mathrm{OR}}(y_1,y_2)$ , and $f_{\mathrm{XOR}}(y_1,y_2)$ , are indispensable components of these metrics. Because the XOR operation can be constructed from AND and OR, $f_{\mathrm{XOR}}(y_1,y_2) = f_{\mathrm{OR}}(y_1,y_2) - f_{\mathrm{AND}}(y_1,y_2)$ , we focus on extending $f_{\mathrm{AND}}(y_1,y_2)$ and $f_{\mathrm{OR}}(y_1,y_2)$ to the continuous domain.
+
+Following the common practice, the logical operators are substituted with arithmetic operators,
+
+$$
+f_{\mathrm{AND}}(y_1, y_2) = y_1 y_2, \quad f_{\mathrm{OR}}(y_1, y_2) = y_1 + y_2 - y_1 y_2. \tag{8}
+$$
+
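These arithmetic substitutions, together with $f_{\mathrm{XOR}} = f_{\mathrm{OR}} - f_{\mathrm{AND}}$, can be written down in a few lines (a minimal sketch; the function names are ours):

```python
def f_and(y1, y2):
    # Eq. (8): the product extends AND from {0, 1} to [0, 1].
    return y1 * y2

def f_or(y1, y2):
    # Eq. (8): inclusion-exclusion extends OR.
    return y1 + y2 - y1 * y2

def f_xor(y1, y2):
    # XOR decomposes as OR minus AND.
    return f_or(y1, y2) - f_and(y1, y2)

# At binary inputs the extensions recover the logical truth tables.
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert f_and(a, b) == float(bool(a) and bool(b))
        assert f_or(a, b) == float(bool(a) or bool(b))
        assert f_xor(a, b) == float(bool(a) != bool(b))
```

On interior points the extensions behave like probabilities of independent events, e.g. `f_and(0.5, 0.5)` gives 0.25.
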
+
+Figure 1: Overview of the proposed Auto Seg-Loss framework. The surfaces of $h_{\mathrm{AND}}$ and $h_{\mathrm{OR}}$ shown in the "Optimal Parameterization" panel illustrate the searched optimal parameterization for mIoU.
+
+Eq. (8) is defined on binary inputs $y_1, y_2 \in \{0,1\}$ , and can be directly extended to take continuous $y_1, y_2 \in [0,1]$ as inputs. By such an extension, together with the approximated one-hot operation, a naive version of differentiable surrogate losses can be obtained. The strength of such surrogates is that they are directly derived from the metrics, which significantly reduces the gap between training and evaluation. However, there is no guarantee that the loss surfaces formed by naively extending Eq. (8) provide accurate loss signals. To adjust the loss surfaces, we parameterize the AND and OR functions as
+
+$$
+\begin{array}{l} h_{\mathrm{AND}}(y_1, y_2; \theta_{\mathrm{AND}}) = g(y_1; \theta_{\mathrm{AND}})\, g(y_2; \theta_{\mathrm{AND}}), \\ h_{\mathrm{OR}}(y_1, y_2; \theta_{\mathrm{OR}}) = g(y_1; \theta_{\mathrm{OR}}) + g(y_2; \theta_{\mathrm{OR}}) - g(y_1; \theta_{\mathrm{OR}})\, g(y_2; \theta_{\mathrm{OR}}), \tag{9} \end{array}
+$$
+
+where $g(y;\theta):[0,1]\to \mathbb{R}$ is a scalar function parameterized by $\theta$ .
+
+The parameterized function $g(y; \theta)$ can be from arbitrary function families defined on [0, 1], e.g., piecewise linear functions and piecewise Bézier curves. With a chosen function family, the parameters $\theta$ control the shape of loss surfaces. We seek to search for the optimal parameters $\theta$ so as to maximize the given evaluation metric.
+
+Meanwhile, optimal parameter search is non-trivial. With the introduced parameters, the loss surfaces become highly flexible: they may well be chaotic, or deviate from the target evaluation metric even at binary inputs. For more effective parameter search, we regularize the loss surfaces by introducing two constraints on $g(y; \theta)$ .
+
+Truth-table constraint is introduced to enforce the surrogate loss surfaces to take the same values as the evaluation metric score at binary inputs. This is achieved by enforcing
+
+$$
+g(0; \theta) = 0, \quad g(1; \theta) = 1. \tag{10}
+$$
+
+Thus, the parameterized functions $h(y_1, y_2; \theta)$ preserve the behavior of the corresponding logical operations $f(y_1, y_2)$ on binary inputs $y_1, y_2 \in \{0, 1\}$ .
+
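To see why Eq. (10) suffices, note that any $g$ satisfying it makes the parameterized operations of Eq. (9) reproduce the logical truth tables. A quick sketch (the particular choice $g(y) = y^{\gamma}$ is a hypothetical example for illustration, not the searched parameterization):

```python
def h_and(y1, y2, g):
    # Eq. (9): parameterized AND.
    return g(y1) * g(y2)

def h_or(y1, y2, g):
    # Eq. (9): parameterized OR.
    return g(y1) + g(y2) - g(y1) * g(y2)

# A hypothetical monotone g obeying the truth-table constraint (Eq. 10):
# g(0) = 0 and g(1) = 1 for any positive exponent gamma.
g = lambda y, gamma=2.0: y ** gamma

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert h_and(a, b, g) == float(bool(a) and bool(b))
        assert h_or(a, b, g) == float(bool(a) or bool(b))
```

Different choices of $g$ agree at the corners but reshape the loss surface in the interior, which is exactly the degree of freedom the parameter search exploits.
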
+Monotonicity constraint is introduced based on the observation of monotonicity tendency in the truth tables of AND and OR. It pushes the loss surfaces towards a benign landscape, avoiding dramatic non-smoothness. The monotonicity constraint is enforced on $h_{\mathrm{AND}}(y_1,y_2)$ and $h_{\mathrm{OR}}(y_1,y_2)$ , as
+
+$$
+\partial h_{\mathrm{AND}} / \partial y_i \geq 0, \quad \partial h_{\mathrm{OR}} / \partial y_i \geq 0, \quad \forall\, y_i \in [0, 1],\ i = 1, 2.
+$$
+
+Applying the chain rule and the truth-table constraint, the monotonicity constraint implies
+
+$$
+\partial g(y; \theta) / \partial y \geq 0, \quad \forall y \in [0, 1]. \tag{11}
+$$
+
+Empirically we find it important to enforce these two constraints in parameterization.
+
+Extending Evaluation Metrics. Now we can extend the metrics to surrogate losses by a) replacing the one-hot predictions with softmax probabilities, and b) substituting the logical operations with parameterized functions. Note that if a metric contains several logical operations, their parameters are not shared. The collection of parameters in one metric is denoted as $\Theta$ . For a segmentation network $\mathrm{N}$ and evaluation dataset $S$ , the score of the evaluation metric is denoted as $\xi(\mathrm{N}; S)$ , and the parameterized surrogate loss as $\widetilde{\xi}_{\Theta}(\mathrm{N}; S)$ .
+
+# 4.2 SURROGATE PARAMETERIZATION
+
+The parameterized function can be from any function family defined on $[0, 1]$ , such as piecewise Bézier curves and piecewise linear functions. Here we choose the piecewise Bézier curve for parameterizing $g(y; \theta)$ , as it is widely used in computer graphics and makes it easy to enforce the constraints via its control points. We also verify the effectiveness of parameterizing $g(y; \theta)$ with piecewise linear functions. See Fig. 2 for visualization and Appendix B for more details.
+
+A piecewise Bézier curve consists of a series of quadratic Bézier curves, where the last control point of one curve segment coincides with the first control point of the next curve segment. If there are $n$ segments in a piecewise Bézier curve, the $k$ -th segment is defined as
+
+$$
+B(k, s) = (1 - s)^2 B_{2k} + 2 s (1 - s) B_{2k+1} + s^2 B_{2k+2}, \quad 0 \leq s \leq 1, \tag{12}
+$$
+
+where $s$ traverses the $k$ -th segment, and $B_{2k + i} = (B_{(2k + i),u}, B_{(2k + i),v})$ $(i = 0,1,2)$ denotes the $i$ -th control point of the $k$ -th segment, in which $u, v$ index the axes of the 2-d plane. A piecewise Bézier curve with $n$ segments has $2n + 1$ control points in total. To parameterize $g(y;\theta)$ , we assign
+
+$$
+y = (1 - s)^2 B_{2k,u} + 2 s (1 - s) B_{(2k+1),u} + s^2 B_{(2k+2),u}, \tag{13a}
+$$
+
+$$
+g(y; \theta) = (1 - s)^2 B_{2k,v} + 2 s (1 - s) B_{(2k+1),v} + s^2 B_{(2k+2),v}, \tag{13b}
+$$
+
+$$
+\text{s.t.} \quad B_{2k,u} \leq y \leq B_{(2k+2),u}, \tag{13c}
+$$
+
+where $\theta$ is the control point set, $B_{2k,u} < B_{(2k + 1),u} < B_{(2k + 2),u}$ , $0 \leq k \leq n - 1$ . Given an input $y$ , the segment index $k$ and the transversal parameter $s$ are derived from Eq. (13c) and Eq. (13a), respectively. Then $g(y;\theta)$ is assigned as Eq. (13b). Because $g(y;\theta)$ is defined on $y \in [0,1]$ , we arrange the control points along the $u$ -axis such that $B_{0,u} = 0$ and $B_{2n,u} = 1$ , i.e., the $u$ -coordinates of the first and last control points are 0 and 1, respectively.
+
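Evaluating $g(y;\theta)$ thus amounts to locating the segment containing $y$ via Eq. (13c), solving the quadratic Eq. (13a) for the transversal parameter $s$, and plugging $s$ into Eq. (13b). A minimal sketch (the root selection assumes the monotone control-point ordering above; variable names are ours):

```python
import math

def bezier_g(y, ctrl):
    # ctrl: list of 2n+1 control points (u, v), with ctrl[0] = (0, 0),
    # ctrl[-1] = (1, 1) and u-coordinates increasing along the curve.
    n = (len(ctrl) - 1) // 2
    for k in range(n):
        (u0, v0), (u1, v1), (u2, v2) = ctrl[2 * k], ctrl[2 * k + 1], ctrl[2 * k + 2]
        if u0 <= y <= u2:                      # Eq. (13c): segment lookup
            # Eq. (13a) as a quadratic in s:
            # (u0 - 2 u1 + u2) s^2 + 2 (u1 - u0) s + (u0 - y) = 0
            a, b, c = u0 - 2 * u1 + u2, 2 * (u1 - u0), u0 - y
            if abs(a) < 1e-12:                 # u is linear in s on this segment
                s = -c / b
            else:                              # monotone u(s) => take the '+' root
                s = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
            # Eq. (13b): evaluate the v-coordinate at s.
            return (1 - s) ** 2 * v0 + 2 * s * (1 - s) * v1 + s ** 2 * v2
    raise ValueError("y must lie in [0, 1]")

# Two-segment curve with all control points on the diagonal:
# g then reduces to the identity, matching the identity-like initialization.
ctrl = [(0.0, 0.0), (0.25, 0.25), (0.5, 0.5), (0.75, 0.75), (1.0, 1.0)]
print(bezier_g(0.3, ctrl))   # 0.3 (up to floating point)
```

Moving the odd-indexed control points off the diagonal bends the curve while the endpoint and ordering constraints keep it a valid monotone map from $[0,1]$ to $[0,1]$.
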
+The strength of the piecewise Bézier curve is that the curve shape is defined explicitly via the control points. Here we enforce the truth-table and the monotonicity constraints on the control points via,
+
+$$
+B_{0,v} = 0, \quad B_{2n,v} = 1; \qquad \text{(truth-table constraint)}
+$$
+
+$$
+B_{2k,v} \leq B_{(2k+1),v} \leq B_{(2k+2),v}, \quad k = 0, 1, \dots, n-1. \qquad \text{(monotonicity constraint)}
+$$
+
+To fulfill the above restrictions in optimization, the specific form of the parameters is given by
+
+$$
+\theta = \left\{ \left( \frac{B_{i,u} - B_{(i-1),u}}{B_{2n,u} - B_{(i-1),u}},\ \frac{B_{i,v} - B_{(i-1),v}}{B_{2n,v} - B_{(i-1),v}} \right) \,\middle|\, i = 1, 2, \dots, 2n-1 \right\},
+$$
+
+with $B_0 = (0,0)$ and $B_{2n} = (1,1)$ fixed. So every $\theta_i = (\theta_{i,u},\theta_{i,v})$ is in the range $[0,1]^2$ , and it is straightforward to compute the actual coordinates of the control points from this parameterized form. Such parameterization makes each $\theta_i$ independent of the others, and thus simplifies the optimization. By default, we use a piecewise Bézier curve with two segments to parameterize $g(y;\theta)$ .
+
+
+Figure 2: Parameterization of $g(y;\theta)$ using Piecewise Bézier curve with four segments. The red points are control points. The purple point is on the curve, which shows the relationship among $y$ , $g(y;\theta)$ and the transversal parameter $s$ .
+
+# Algorithm 1: Auto Seg-Loss Parameter Search
+
+Input: initialized network $\mathrm{N}_{\omega_0}$ , initial distribution mean $\mu_{1}$ and variance $\sigma^2$ , target metric $\xi$ , training set $S_{\text{train}}$ and hold-out training set $S_{\text{hold-out}}$
+
+Result: obtained optimal parameters $\Theta^{*}$
+
+for $t = 1$ to $T$ do
+
+  for $i = 1$ to $M$ do
+
+    Sample parameters $\Theta_{i}^{(t)} \sim \mathcal{N}_{\mathrm{trunc}[0,1]}(\mu_t, \sigma^2 I)$ ;
+
+    Train the network with the surrogate loss, initializing $\omega$ from $\omega_0$ : $\omega^{*}(\Theta_{i}^{(t)}) = \arg\max_{\omega} \widetilde{\xi}_{\Theta_{i}^{(t)}}(\mathrm{N}_{\omega}; \mathcal{S}_{\text{train}})$ ;
+
+    Compute the evaluation metric score $\xi(\Theta_{i}^{(t)}) = \xi(\mathrm{N}_{\omega^{*}(\Theta_{i}^{(t)})}; \mathcal{S}_{\text{hold-out}})$ ;
+
+  end
+
+  Update $\mu_{t+1} = \arg\max_{\mu} \frac{1}{M}\sum_{i=1}^{M} R(\mu, \mu_t, \Theta_{i}^{(t)})$ ;
+
+end
+
+return $\Theta^{*} = \mu_{t^{*}}$ , where $t^{*} = \arg\max_{t \in \{1,\dots,T\}} \frac{1}{M}\sum_{i=1}^{M} \xi(\Theta_{i}^{(t)})$
+
+# 4.3 SURROGATE PARAMETER OPTIMIZATION
+
+Algorithm 1 describes our parameter search algorithm. The training set is split into two subsets, $S_{\text{train}}$ for training and $S_{\text{hold-out}}$ for evaluation in the search algorithm, respectively. Specifically, suppose we have a segmentation network $N_{\omega}$ with weights $\omega$ , our search target is the parameters that maximize the evaluation metric on the hold-out training set $\xi(N_{\omega}; S_{\text{hold-out}})$ .
+
+$$
+\max_{\Theta} \xi(\Theta) = \xi\left(\mathrm{N}_{\omega^{*}(\Theta)}; \mathcal{S}_{\text{hold-out}}\right), \quad \text{s.t.} \quad \omega^{*}(\Theta) = \arg\max_{\omega} \widetilde{\xi}_{\Theta}\left(\mathrm{N}_{\omega}; \mathcal{S}_{\text{train}}\right). \tag{14}
+$$
+
+To optimize Eq. (14), the segmentation network is trained with SGD as the inner-level problem. At the outer level, we use reinforcement learning as our search algorithm, following the common practice in AutoML (Zoph & Le, 2017; Pham et al., 2018). Other search algorithms, such as evolutionary algorithms, may also be employed. Specifically, the surrogate parameters are searched via the PPO2 algorithm (Schulman et al., 2017). The process consists of $T$ sampling steps. In the $t$ -th step, we aim to explore the search space around the solution from step $t - 1$ . Here $M$ sets of parameters $\{\Theta_i^{(t)}\}_{i = 1}^M$ are sampled independently from a truncated normal distribution (Burkardt, 2014), as $\Theta \sim \mathcal{N}_{\mathrm{trunc}[0,1]}(\mu_t,\sigma^2 I)$ , with each variable in the range $[0,1]$ . Here $\mu_t$ and $\sigma^2 I$ denote the mean and covariance of the parent normal distribution ( $\sigma$ is fixed as 0.2 in this paper), and $\mu_t$ summarizes the information from the $(t - 1)$ -th step. $M$ surrogate losses are constructed with the sampled parameters, which drive the training of $M$ segmentation networks separately. To optimize the outer-level problem, we evaluate these models with the target metric and take the evaluation scores as rewards for PPO2. Following the PPO2 algorithm, $\mu_{t + 1}$ is computed as $\mu_{t + 1} = \arg \max_{\mu}\frac{1}{M}\sum_{i = 1}^{M}R(\mu ,\mu_{t},\Theta_{i})$ , where the reward $R(\mu ,\mu_t,\Theta_i)$ is given by
+
+$$
+R(\mu, \mu_t, \Theta_i) = \min\left( \frac{p(\Theta_i; \mu, \sigma^2 I)}{p(\Theta_i; \mu_t, \sigma^2 I)}\, \xi(\Theta_i),\ \mathrm{CLIP}\left( \frac{p(\Theta_i; \mu, \sigma^2 I)}{p(\Theta_i; \mu_t, \sigma^2 I)},\ 1 - \epsilon,\ 1 + \epsilon \right) \xi(\Theta_i) \right),
+$$
+
+where $\min (\cdot ,\cdot)$ picks the smaller item from its inputs, $\mathrm{CLIP}(x,1 - \epsilon ,1 + \epsilon)$ clips $x$ to be within $1 - \epsilon$ and $1 + \epsilon$ , and $p(\Theta_i;\mu ,\sigma^2 I)$ is the PDF of the truncated normal distribution. Note that the mean reward of the $M$ samples is subtracted when computing $\xi (\Theta_{i})$ for better convergence. After $T$ steps, the mean $\mu_t$ with the highest average evaluation score is output as the final parameters $\Theta^{*}$ .
+
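The clipped reward above is the standard PPO2 surrogate objective applied per sample. A minimal scalar sketch (hedged: it takes an illustrative likelihood ratio as input rather than computing the truncated-normal PDFs):

```python
def ppo2_reward(ratio, score, eps=0.2):
    # ratio: p(Theta_i; mu, sigma^2 I) / p(Theta_i; mu_t, sigma^2 I)
    # score: mean-subtracted evaluation metric score xi(Theta_i)
    clipped = min(max(ratio, 1.0 - eps), 1.0 + eps)
    return min(ratio * score, clipped * score)

# The clip keeps the update conservative: a large likelihood ratio cannot
# inflate the reward for a positive score, while for a negative score the
# unclipped (more pessimistic) term is kept.
print(ppo2_reward(2.0, 0.1))    # positive score: ratio clipped to 1.2 -> 0.12
print(ppo2_reward(2.0, -0.1))   # negative score: unclipped term wins -> -0.2
```
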
+Empirically we find the searched losses have good transferability, i.e., they can be applied for different datasets and networks. Benefiting from this, we use a light proxy task for parameter search. In it, we utilize a smaller image size, a shorter learning schedule and a lightweight network. Thus, the whole search process is quite efficient (8 hours on PASCAL VOC with 8 NVIDIA Tesla V100 GPUs). More details are in Appendix A. In addition, the search process can be conducted only once for a specific metric and the resulting surrogate loss can be directly used for training henceforth.
+
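Putting the pieces of Algorithm 1 together, the outer search loop can be sketched as below. This is a schematic only: `train_and_eval` stands in for proxy-task training plus hold-out evaluation, and the mean update is a simple reward-weighted stand-in for the actual PPO2 step.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_trunc01(mu, sigma):
    # Rejection sampling from N(mu, sigma^2 I) truncated to [0, 1] per coordinate.
    theta = rng.normal(mu, sigma)
    while ((theta < 0) | (theta > 1)).any():
        bad = (theta < 0) | (theta > 1)
        theta[bad] = rng.normal(mu[bad], sigma)
    return theta

def search(train_and_eval, dim, T=5, M=8, sigma=0.2):
    mu = np.full(dim, 0.5)                 # mu_1: identity-like initialization
    best_mu, best_score = mu.copy(), -np.inf
    for _ in range(T):
        thetas = [sample_trunc01(mu, sigma) for _ in range(M)]
        scores = np.array([train_and_eval(th) for th in thetas])
        if scores.mean() > best_score:     # track the best-performing mu_t
            best_mu, best_score = mu.copy(), scores.mean()
        # Stand-in update: softmax-weighted average of the samples
        # (the paper uses the clipped PPO2 objective instead).
        w = np.exp((scores - scores.max()) * 10.0)
        mu = np.clip(np.average(thetas, axis=0, weights=w), 0.0, 1.0)
    return best_mu

# Toy objective: the closer theta is to 0.8, the higher the "metric".
theta_star = search(lambda th: -np.sum((th - 0.8) ** 2), dim=4)
```

In the real setting, each call to `train_and_eval` trains a lightweight proxy network with the surrogate loss built from the sampled parameters, which is why the light proxy task described above is essential for keeping the search affordable.
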
+# 5 EXPERIMENTS
+
+We evaluate on the PASCAL VOC 2012 (Everingham et al., 2015) and the Cityscapes (Cordts et al., 2016) datasets. We use Deeplabv3+ (Chen et al., 2018) with ResNet-50/101 (He et al., 2016) as the network model. During the surrogate parameter search, we randomly sample 1500 training images on PASCAL VOC and 500 training images on Cityscapes to form the hold-out set $S_{\mathrm{hold-out}}$ for each dataset. The remaining training images form the training set $S_{\mathrm{train}}$ in search. $\mu_1$ is set to make $g(y;\theta) = y$ . The backbone network is ResNet-50. The images are down-sampled to $128\times 128$ resolution. SGD lasts only 1000 iterations with a mini-batch size of 32. After the search procedure, we re-train the segmentation networks with ResNet-101 using the searched losses on the full training set and evaluate them on the actual validation set. The re-training settings are the same as those of Deeplabv3+ (Chen et al., 2018), except that the loss function is substituted by the obtained surrogate loss. The search time is counted on 8 NVIDIA Tesla V100 GPUs. More details are in Appendix A.
+
+# 5.1 SEARCHING FOR DIFFERENT METRICS
+
+In Table 2, we compare our searched surrogate losses against the widely used cross-entropy loss and its variants, as well as some other metric-specific surrogate losses. We also sought to compare with the AutoML-based method of Li et al. (2019), which was originally designed for other tasks, but could not obtain reasonable results due to convergence issues. The results show that our searched losses
+
+are on par with or better than the previous losses on their target metrics. It is interesting to note that the obtained surrogates for boundary metrics (such as BIoU and BF1) focus only on the boundary areas; see Appendix C for further discussion. We also tried training segmentation networks driven by both the searched mIoU and the BIoU/BF1 surrogate losses. Such combined losses refine the boundaries while keeping reasonable global performance.
+
+Table 2: Performance of different losses on PASCAL VOC (left six columns) and Cityscapes (right six columns) segmentation. The results of each loss function's target metrics are underlined. Scores whose difference with the highest is less than 0.3 are marked in bold.
+
+| Loss Function | mIoU | FWIoU | BIoU | BF1 | mAcc | gAcc | mIoU | FWIoU | BIoU | BF1 | mAcc | gAcc |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Cross Entropy | 78.69 | 91.31 | 70.61 | 65.30 | 87.31 | 95.17 | 79.97 | 93.33 | 62.07 | 62.24 | 87.01 | 96.44 |
+| WCE (Ronneberger et al., 2015) | 69.60 | 85.64 | 61.80 | 37.59 | 92.61 | 91.11 | 73.01 | 90.51 | 53.07 | 51.19 | 89.22 | 94.56 |
+| DPCE (Caliva et al., 2019) | 79.82 | 91.76 | 71.87 | 66.54 | 87.76 | 95.45 | 80.27 | 93.38 | 62.57 | 65.99 | 86.99 | 96.46 |
+| SSIM (Qin et al., 2019) | 79.26 | 91.68 | 71.54 | 66.35 | 87.87 | 95.38 | 80.65 | 93.22 | 63.04 | 72.20 | 86.88 | 96.39 |
+| DiceLoss (Milletari et al., 2016) | 77.78 | 91.34 | 69.85 | 64.38 | 87.47 | 95.11 | 79.30 | 93.25 | 60.93 | 59.94 | 86.38 | 96.39 |
+| Lovasz (Berman et al., 2018) | 79.72 | 91.78 | 72.47 | 66.65 | 88.64 | 95.42 | 77.67 | 92.51 | 56.71 | 53.48 | 82.05 | 96.03 |
+| Searched mIoU | 80.97 | 92.09 | 73.44 | 68.86 | 88.23 | 95.68 | 80.67 | 93.30 | 63.05 | 67.97 | 87.20 | 96.44 |
+| Searched FWIoU | 80.00 | 91.93 | 75.14 | 65.67 | 89.23 | 95.44 | 79.42 | 93.33 | 61.71 | 59.68 | 87.96 | 96.37 |
+| Searched BIoU | 48.97 | 69.89 | 79.27 | 38.99 | 81.28 | 62.64 | 45.89 | 39.80 | 63.89 | 38.29 | 62.80 | 58.15 |
+| Searched BF1 | 1.93 | 0.96 | 7.39 | 74.83 | 6.51 | 2.66 | 6.78 | 3.19 | 18.37 | 77.40 | 12.09 | 8.19 |
+| Searched mAcc | 69.80 | 85.86 | 72.85 | 35.62 | 92.66 | 91.28 | 74.10 | 90.79 | 54.62 | 53.45 | 89.22 | 94.75 |
+| Searched gAcc | 79.73 | 91.76 | 74.09 | 64.41 | 88.95 | 95.47 | 79.41 | 93.30 | 61.65 | 62.04 | 87.08 | 96.51 |
+| Searched mIoU + BIoU | 81.19 | 92.19 | 76.89 | 69.56 | 88.36 | 95.75 | 80.43 | 93.34 | 63.88 | 65.87 | 87.03 | 96.45 |
+| Searched mIoU + BF1 | 78.72 | 90.80 | 71.81 | 73.57 | 86.70 | 94.88 | 78.30 | 93.00 | 61.62 | 71.73 | 87.13 | 96.23 |
+
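For reference, the mIoU and FWIoU columns follow the standard confusion-matrix definitions. A minimal NumPy sketch of these two metrics (an illustrative re-implementation, not the paper's evaluation code):

```python
import numpy as np

def miou_fwiou(conf):
    """Compute mean IoU and frequency-weighted IoU from a confusion
    matrix, where conf[i, j] counts pixels of true class i predicted
    as class j (standard definitions)."""
    conf = conf.astype(np.float64)
    tp = np.diag(conf)                    # true positives per class
    fp = conf.sum(axis=0) - tp            # false positives per class
    fn = conf.sum(axis=1) - tp            # false negatives per class
    iou = tp / (tp + fp + fn)             # per-class IoU
    freq = conf.sum(axis=1) / conf.sum()  # class pixel frequencies
    return iou.mean(), (freq * iou).sum()

# Toy 2-class confusion matrix over 100 pixels
conf = np.array([[50, 5],
                 [10, 35]])
miou, fwiou = miou_fwiou(conf)
```

Roughly speaking, the boundary metrics (BIoU, BF1) restrict analogous computations to pixels within a narrow band around object boundaries.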
+# 5.2 GENERALIZATION OF THE LOSS
+
+Generalization among datasets. Table 3 evaluates the generalization ability of our searched loss surrogates across different datasets. Due to limited computational resources, we train networks only with the searched mIoU, BF1 and mAcc surrogate losses. The results show that our searched surrogate losses generalize well between these two datasets, which have quite different scenes and categories.
+
+Table 3: Generalization of our searched surrogate losses between PASCAL VOC and Cityscapes.
+
+| Datasets | Cityscapes → VOC | | | | | | VOC → Cityscapes | | | | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Loss Function | mIoU | FWIoU | BIoU | BF1 | mAcc | gAcc | mIoU | FWIoU | BIoU | BF1 | mAcc | gAcc |
+| Cross Entropy | 78.69 | 91.31 | 70.61 | 65.30 | 87.31 | 95.17 | 79.97 | 93.33 | 62.07 | 62.24 | 87.01 | 96.44 |
+| Searched mIoU | 80.05 | 91.72 | 73.97 | 67.61 | 88.01 | 95.45 | 80.67 | 93.31 | 62.96 | 66.48 | 87.36 | 96.44 |
+| Searched BF1 | 1.84 | 0.93 | 7.42 | 75.85 | 6.48 | 1.47 | 6.67 | 3.20 | 19.00 | 77.99 | 12.12 | 4.09 |
+| Searched mAcc | 70.90 | 86.29 | 73.43 | 37.18 | 93.19 | 91.43 | 73.50 | 90.68 | 54.34 | 54.04 | 88.66 | 94.68 |
+
+Generalization among segmentation networks. The surrogate losses are searched with ResNet-50 + DeepLabv3+ on PASCAL VOC, and then used to train ResNet-101 + DeepLabv3+, PSPNet (Zhao et al., 2017) and HRNet (Sun et al., 2019) on PASCAL VOC. The results in Table 4 demonstrate that our searched loss functions can be applied to various semantic segmentation networks.
+
+# 5.3 ABLATION
+
+Parameterization and constraints. Table 5 ablates the parameterization and the search space constraints. Here, a surrogate without parameters refers to Eq. (8) with its domain extended from the discrete points $\{0,1\}$ to the continuous interval $[0, 1]$. This naive surrogate delivers much lower accuracy, indicating that the parameterization is essential. Without the truth-table constraint, training diverges at the very beginning, with the loss gradients becoming "NaN". Without the monotonicity constraint, the performance drops. In short, without these constraints the performance degrades or the algorithm fails altogether.
+
+Table 4: Generalization of our searched surrogate losses among different network architectures on PASCAL VOC. The losses are searched with ResNet-50 + DeepLabv3+ on PASCAL VOC.
+
+| Network | R50-DeepLabv3+ | | | R101-DeepLabv3+ | | | R101-PSPNet | | | HRNetV2p-W48 | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Loss Function | mIoU | BF1 | mAcc | mIoU | BF1 | mAcc | mIoU | BF1 | mAcc | mIoU | BF1 | mAcc |
+| Cross Entropy | 76.22 | 61.75 | 85.43 | 78.69 | 65.30 | 87.31 | 77.91 | 64.70 | 85.71 | 76.35 | 61.19 | 85.12 |
+| Searched mIoU | 78.35 | 66.93 | 85.53 | 80.97 | 68.86 | 88.23 | 78.93 | 65.65 | 87.42 | 77.26 | 63.52 | 86.80 |
+| Searched BF1 | 1.35 | 70.81 | 6.05 | 1.93 | 74.83 | 6.51 | 1.62 | 71.84 | 6.33 | 1.34 | 68.41 | 5.99 |
+| Searched mAcc | 69.82 | 36.92 | 91.61 | 69.80 | 35.62 | 92.66 | 71.66 | 39.44 | 92.06 | 68.22 | 35.90 | 91.46 |
+
+Proxy tasks for parameter search. Table 6 ablates the proxy tasks. The bottom row is our default setting, with a light-weight backbone, down-sampled images and a shorter learning schedule. The default setting delivers accuracy on par with the heavier settings, which is consistent with the generalization ability of our surrogate losses. Thus, light proxy tasks improve the search efficiency.
+
+Parameter search algorithm. Fig. 3 compares the employed PPO2 (Schulman et al., 2017) algorithm with random search. The much better performance of PPO2 suggests that surrogate loss search is non-trivial and reinforcement learning helps to improve the search efficiency.
+
+Table 5: Ablation on search space constraints.
+
+| Parameter | Truth-table | Monotonicity | VOC mIoU |
+| --- | --- | --- | --- |
+| ✗ | ✗ | ✗ | 46.99 |
+| ✓ | ✗ | ✗ | Fail |
+| ✓ | ✓ | ✗ | 77.76 |
+| ✓ | ✓ | ✓ | 80.64 |
+
+Table 6: Ablation on search proxy tasks.
+
+| Backbone | Image Size | Iterations | Time (hours) | VOC mIoU |
+| --- | --- | --- | --- | --- |
+| R50 | 256 × 256 | 1000 | 33.0 | 81.15 |
+| R50 | 128 × 128 | 2000 | 17.1 | 80.56 |
+| R101 | 128 × 128 | 1000 | 13.3 | 80.75 |
+| R50 | 128 × 128 | 1000 | 8.5 | 80.97 |
+
+
+(a) search for mIoU; (b) search for BF1; (c) search for mAcc
+Figure 3: Ablation on loss parameter search. Each curve presents the highest average evaluation score up to the $t$-th step in one search process. The search process is repeated four times.
+
+# 6 CONCLUSION
+
+The introduced Auto Seg-Loss is a powerful framework to search for the parameterized surrogate losses for mainstream segmentation evaluation metrics. The non-differentiable operators are substituted by their parameterized continuous counterparts. The parameters are optimized to improve the final evaluation metrics with essential constraints. It would be interesting to extend the framework to more tasks, like object detection, pose estimation and machine translation problems.
+
+# ACKNOWLEDGMENTS
+
+The work is supported by the National Key R&D Program of China (2020AAA0105200), Beijing Academy of Artificial Intelligence and the Institute for Guo Qiang of Tsinghua University.
+
+# REFERENCES
+
+Maxim Berman, Amal Rannen Triki, and Matthew B Blaschko. The Lovász-softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4413-4421, 2018.
+John Burkardt. The truncated normal distribution. Department of Scientific Computing Website, Florida State University, pp. 1-35, 2014.
+Francesco Caliva, Claudia Iriondo, Alejandro Morales Martinez, Sharmila Majumdar, and Valentina Pedoia. Distance map loss penalty term for semantic segmentation. In International Conference on Medical Imaging with Deep Learning-Extended Abstract Track, 2019.
+Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834-848, 2017.
+Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 801-818, 2018.
+Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3213-3223, 2016.
+Gabriela Csurka, Diane Larlus, and Florent Perronnin. What is a good evaluation measure for semantic segmentation? In Proceedings of the British Machine Vision Conference (BMVC), 2013.
+Mark Everingham, SM Ali Eslami, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The Pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98-136, 2015.
+Bharath Hariharan, Pablo Arbeláez, Lubomir Bourdev, Subhransu Maji, and Jitendra Malik. Semantic contours from inverse detectors. In Proceedings of IEEE International Conference on Computer Vision (ICCV), pp. 991-998, 2011.
+Tamir Hazan, Joseph Keshet, and David A McAllester. Direct loss minimization for structured prediction. In Advances in Neural Information Processing Systems (NIPS), pp. 1594-1602, 2010.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
+Xin He, Kaiyong Zhao, and Xiaowen Chu. Automl: A survey of the state-of-the-art. arXiv preprint arXiv:1908.00709, 2019.
+Thorsten Joachims. A support vector method for multivariate performance measures. In Proceedings of the 22nd International Conference on Machine Learning (ICML), pp. 377-384. PMLR, 2005.
+Pushmeet Kohli, Philip HS Torr, et al. Robust higher order potentials for enforcing label consistency. International Journal of Computer Vision, 82(3):302-324, 2009.
+Chuming Li, Xin Yuan, Chen Lin, Minghao Guo, Wei Wu, Junjie Yan, and Wanli Ouyang. AM-LFS: AutoML for loss function search. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 8410-8419, 2019.
+Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In Proceedings of the 6th International Conference on Learning Representations (ICLR), 2018.
+Jun Ma. Segmentation loss odyssey. arXiv preprint arXiv:2005.13449, 2020.
+Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pp. 565-571. IEEE, 2016.
+
+Pritish Mohapatra, Michal Rolinek, CV Jawahar, Vladimir Kolmogorov, and M Pawan Kumar. Efficient optimization for rank-based loss functions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3693-3701, 2018.
+Gattigorla Nagendar, Digvijay Singh, Vineeth N Balasubramanian, and CV Jawahar. Neuro-IoU: Learning a surrogate loss for semantic segmentation. In Proceedings of the British Machine Vision Conference (BMVC), pp. 278, 2018.
+Yash Patel, Tomas Hodan, and Jiri Matas. Learning surrogates via deep embedding. In Proceedings of the European Conference on Computer Vision (ECCV), 2020.
+Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In Proceedings of the 35th International Conference on Machine Learning (ICML), pp. 4095-4104. PMLR, 2018.
+Xuebin Qin, Zichen Zhang, Chenyang Huang, Chao Gao, Masood Dehghan, and Martin Jagersand. Basnet: Boundary-aware salient object detection. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7479-7489, 2019.
+Md Atiqur Rahman and Yang Wang. Optimizing intersection-over-union in deep neural networks for image segmentation. In International Symposium on Visual Computing, pp. 234–244. Springer, 2016.
+Mani Ranjbar, Tian Lan, Yang Wang, Steven N Robinovitch, Ze-Nian Li, and Greg Mori. Optimizing nondecomposable loss functions in structured prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(4):911-924, 2012.
+Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241. Springer, 2015.
+John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+Yang Song, Alexander Schwing, Raquel Urtasun, et al. Training deep neural networks via direct loss minimization. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 2169-2177. PMLR, 2016.
+Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5693-5703, 2019.
+Xiaobo Wang, Shuo Wang, Cheng Chi, Shifeng Zhang, and Tao Mei. Loss function search for face recognition. In Proceedings of the 37th International Conference on Machine Learning (ICML). PMLR, 2020.
+Zifeng Wu, Chunhua Shen, and Anton van den Hengel. Bridging category-level and instance-level semantic image segmentation. arXiv preprint arXiv:1605.06885, 2016.
+Yisong Yue, Thomas Finley, Filip Radlinski, and Thorsten Joachims. A support vector method for optimizing average precision. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 271-278, 2007.
+Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2881-2890, 2017.
+Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In Proceedings of the 5th International Conference on Learning Representations (ICLR), 2017.
\ No newline at end of file
diff --git a/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/images.zip b/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9249bc0036bf7a7ce32ea5c9b889bb9f38c42393
--- /dev/null
+++ b/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14bb63f8102209ad6da0b89342f2091471e04e3cf928787963cdb5f2bd90b1e6
+size 624841
diff --git a/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/layout.json b/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..104bcac5b0ffb3c489dc2aa689420dc1cd993824
--- /dev/null
+++ b/autoseglosssearchingmetricsurrogatesforsemanticsegmentation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6637bcb8f9aa87822dc5abe925b34828651d5adc4361f70042560ceb6b96a01
+size 420522
diff --git a/auxiliarylearningbyimplicitdifferentiation/ec102424-6e52-4b06-9cc2-c97f7dd83d6e_content_list.json b/auxiliarylearningbyimplicitdifferentiation/ec102424-6e52-4b06-9cc2-c97f7dd83d6e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c5b5cc0c3092342788cbe4509d5667a5f9c06406
--- /dev/null
+++ b/auxiliarylearningbyimplicitdifferentiation/ec102424-6e52-4b06-9cc2-c97f7dd83d6e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c930221afa118f682d0a6926bb172b9f42682db9e8b9b5b83827931500f0aa97
+size 131018
diff --git a/auxiliarylearningbyimplicitdifferentiation/ec102424-6e52-4b06-9cc2-c97f7dd83d6e_model.json b/auxiliarylearningbyimplicitdifferentiation/ec102424-6e52-4b06-9cc2-c97f7dd83d6e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..96817aaed70b41e33025114f9eb892a41e137a6c
--- /dev/null
+++ b/auxiliarylearningbyimplicitdifferentiation/ec102424-6e52-4b06-9cc2-c97f7dd83d6e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ac390d16a51a820b71dfd2fe2cb2483f093b252b39ccd59293c3c17598953a7
+size 157736
diff --git a/auxiliarylearningbyimplicitdifferentiation/ec102424-6e52-4b06-9cc2-c97f7dd83d6e_origin.pdf b/auxiliarylearningbyimplicitdifferentiation/ec102424-6e52-4b06-9cc2-c97f7dd83d6e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..46de0908ffd27963815bcc3ae1787585dab7d778
--- /dev/null
+++ b/auxiliarylearningbyimplicitdifferentiation/ec102424-6e52-4b06-9cc2-c97f7dd83d6e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b73fc4ce83f0bc7e29b3da0d6ccf46cd268b169ec5660e0120b13df9aa35611
+size 4529694
diff --git a/auxiliarylearningbyimplicitdifferentiation/full.md b/auxiliarylearningbyimplicitdifferentiation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..11276b19d634b81eec721a78e1303bd959de9ce6
--- /dev/null
+++ b/auxiliarylearningbyimplicitdifferentiation/full.md
@@ -0,0 +1,497 @@
+# AUXILIARY LEARNING BY IMPLICIT DIFFERENTIATION
+
+Aviv Navon*
+
+Bar-Ilan University, Israel
+
+aviv.navon@biu.ac.il
+
+Idan Achituve*
+
+Bar-Ilan University, Israel
+
+idan.achituve@biu.ac.il
+
+Haggai Maron
+
+NVIDIA, Israel
+
+hmaron@nvidia.com
+
+Gal Chechik†
+
+Bar-Ilan University, Israel
+
+NVIDIA, Israel
+
+gal.chechik@biu.ac.il
+
+Ethan Fetaya†
+
+Bar-Ilan University, Israel
+
+ethan.fetaya@biu.ac.il
+
+# ABSTRACT
+
+Training neural networks with auxiliary tasks is a common practice for improving the performance on a main task of interest. Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss. Here, we propose a novel framework, AuxiLearn, that targets both challenges based on implicit differentiation. First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function. This network can learn nonlinear interactions between tasks. Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task. We evaluate AuxiLearn in a series of tasks and domains, including image segmentation and learning with attributes in the low data regime, and find that it consistently outperforms competing methods.
+
+# 1 INTRODUCTION
+
+The performance of deep neural networks can significantly improve by training the main task of interest with additional auxiliary tasks (Goyal et al., 2019; Jaderberg et al., 2016; Mirowski, 2019). For example, learning to segment an image into objects can be more accurate when the model is simultaneously trained to predict other properties of the image like pixel depth or 3D structure (Standley et al., 2019). In the low data regime, models trained with the main task only are prone to overfit and generalize poorly to unseen data (Vinyals et al., 2016). In this case, the benefits of learning with multiple tasks are amplified (Zhang and Yang, 2017). Training with auxiliary tasks adds an inductive bias that pushes learned models to capture meaningful representations and avoid overfitting to spurious correlations.
+
+In some domains, it may be easy to design beneficial auxiliary tasks and collect supervised data. For example, numerous tasks were proposed for self-supervised learning in image classification, including masking (Doersch et al., 2015), rotation (Gidaris et al., 2018) and patch shuffling (Doersch and Zisserman, 2017; Noroozi and Favaro, 2016). In these cases, it is not clear what would be the best way to combine all auxiliary tasks into a single loss (Doersch and Zisserman, 2017). The common practice is to compute a weighted combination of pretext losses by tuning the weights of individual losses using hyperparameter grid search. This approach, however, limits the potential of learning with auxiliary tasks because the run time of grid search grows exponentially with the number of tasks.
+
+In other domains, obtaining good auxiliaries in the first place may be challenging or may require expert knowledge. For example, for point cloud classification, few self-supervised tasks have been proposed; however, their benefits so far are limited (Achituve et al., 2020; Hassani and Haley, 2019;
+
+Sauder and Sievers, 2019; Tang et al., 2020). For these cases, it would be beneficial to automate the process of generating auxiliary tasks without domain expertise.
+
+Our work takes a step forward in automating the use and design of auxiliary learning tasks. We name our approach AuxiLearn. AuxiLearn leverages recent progress made in implicit differentiation for optimizing hyperparameters (Liao et al., 2018; Lorraine et al., 2020). We demonstrate the effectiveness of AuxiLearn in two types of problems. First, in combining auxiliaries, for cases where auxiliary tasks are predefined. We describe how to train a deep neural network (NN) on top of auxiliary losses and combine them non-linearly into a unified loss. For instance, we combine per-pixel losses in image segmentation tasks using a convolutional NN (CNN). Second, designing auxiliaries, for cases where predefined auxiliary tasks are not available. We present an approach for learning such auxiliary tasks without domain knowledge and from input data alone. This is achieved by training an auxiliary network to generate auxiliary labels while training another, primary network to learn both the original task and the auxiliary task. One important distinction from previous works, such as (Kendall et al., 2018; Liu et al., 2019a), is that we do not optimize the auxiliary parameters using the training loss but rather on a separate (small) auxiliary set, allocated from the training data. This is a key difference since the goal of auxiliary learning is to improve generalization rather than help optimization on the training data.
+
+To validate our proposed solution, we extensively evaluate AuxiLearn in several tasks in the low-data regime. In this regime, the models suffer from severe overfitting and auxiliary learning can provide the largest benefits. Our results demonstrate that using AuxiLearn leads to improved loss functions and auxiliary tasks, in terms of the performance of the resulting model on the main task. We complement our experimental section with two interesting theoretical insights regarding our model. The first shows that a relatively simple auxiliary hypothesis class may overfit. The second aims to understand which auxiliaries benefit the main task.
+
+To summarize, we propose a novel general approach for learning with auxiliaries using implicit differentiation. We make the following novel contributions: (a) We describe a unified approach for combining multiple loss terms and for learning novel auxiliary tasks from the data alone; (b) We provide a theoretical observation on the capacity of auxiliary learning; (c) We show that the key quantity for determining beneficial auxiliaries is the Newton update; (d) We provide new results on a variety of auxiliary learning tasks with a focus on the low data regime. We conclude that implicit differentiation can play a significant role in automating the design of auxiliary learning setups.
+
+# 2 RELATED WORK
+
+Learning with multiple tasks. Multitask Learning (MTL) aims at simultaneously solving multiple learning problems while sharing information across tasks. In some cases, MTL benefits the optimization process and improves task-specific generalization performance compared to single-task learning (Standley et al., 2019). In contrast to MTL, auxiliary learning aims at solving a single, main task, and the purpose of all other tasks is to facilitate the learning of the primary task. At test time, only the main task is considered. This approach has been successfully applied in multiple domains, including computer vision (Zhang et al., 2014), natural language processing (Fan et al., 2017; Trinh et al., 2018), and reinforcement learning (Jaderberg et al., 2016; Lin et al., 2019).
+
+Dynamic task weighting. When learning a set of tasks, the task-specific losses are combined into an overall loss. The way individual losses are combined is crucial because MTL-based models are sensitive to the relative weightings of the tasks (Kendall et al., 2018). A common approach for combining task losses is in a linear fashion. When the number of tasks is small, task weights are commonly tuned with a simple grid search. However, this approach does not extend to a large number of tasks, or a more complex weighting scheme. Several recent studies proposed scaling task weights using gradient magnitude (Chen et al., 2018), task uncertainty (Kendall et al., 2018), or the rate of loss change (Liu et al., 2019b). Sener and Koltun (2018) proposed casting the multitask learning problem as a multi-objective optimization. These methods assume that all tasks are equally important, and are less suited for auxiliary learning. Du et al. (2018) and Lin et al. (2019) proposed to weight auxiliary losses using gradient similarity. However, these methods do not scale well with the number of auxiliaries and do not take into account interactions between auxiliaries. In contrast, we propose to learn from data how to combine auxiliaries, possibly in a non-linear manner.
+
+
+(a) Combining losses; (b) Learning a new auxiliary task
+Figure 1: The AuxiLearn framework. (a) Learning to combine losses into a single coherent loss term. Here, the auxiliary network operates over a vector of losses. (b) Generating a novel auxiliary task. Here the auxiliary network operates over the input space. In both cases, $g(\cdot ;\phi)$ is optimized using IFT based on $\mathcal{L}_A$ .
+
+Devising auxiliaries. Designing an auxiliary task for a given main task is challenging because it may require domain expertise and additional labeling effort. For self-supervised learning (SSL), many approaches have been proposed (see Jing and Tian (2020) for a recent survey), but the joint representation learned through SSL may suffer from negative transfer and hurt the main task (Standley et al., 2019). Liu et al. (2019a) proposed learning a helpful auxiliary in a meta-learning fashion, removing the need for handcrafted auxiliaries. However, their system is optimized for the training data, which may lead to degenerate auxiliaries. To address this issue, an entropy term is introduced to force the auxiliary network to spread the probability mass across classes.
+
+Implicit differentiation based optimization. Our formulation gives rise to a bi-level optimization problem. Such problems naturally arise in the context of meta-learning (Finn et al., 2017; Rajeswaran et al., 2019) and hyperparameter optimization (Bengio, 2000; Foo et al., 2008; Larsen et al., 1996; Liao et al., 2018; Lorraine et al., 2020; Pedregosa, 2016). The Implicit Function Theorem (IFT) is often used for computing gradients of the upper-level function; this operation requires calculating a vector-inverse-Hessian product. However, for modern neural networks it is infeasible to calculate this product explicitly, and an approximation must be devised. Luketina et al. (2016) proposed approximating the Hessian with the identity matrix, whereas Foo et al. (2008); Pedregosa (2016); Rajeswaran et al. (2019) used conjugate gradient (CG) to approximate the product. Following Liao et al. (2018); Lorraine et al. (2020), we use a truncated Neumann series and efficient vector-Jacobian products, as this was empirically shown to be more stable than CG.
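Concretely, the truncated Neumann series approximates an inverse-Hessian-vector product using only Hessian-vector products. A small NumPy sketch of the idea with a dense toy Hessian (a stand-in; for a real network only the `H @ p` products would be computed, never `H` itself):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
H = A @ A.T + 6.0 * np.eye(6)        # SPD stand-in for the training-loss Hessian
v = rng.normal(size=6)               # stand-in for the gradient being preconditioned

# Truncated Neumann series: H^{-1} v ~= alpha * sum_{j=0}^{J} (I - alpha*H)^j v,
# which converges when the spectrum of alpha*H lies in (0, 2).
alpha = 1.0 / np.linalg.norm(H, 2)   # keeps alpha*H's spectrum in (0, 1]
p = v.copy()
ihvp = v.copy()
for _ in range(200):                 # J = 200 terms of the series
    p -= alpha * (H @ p)             # p <- (I - alpha*H) p, a Hessian-vector product
    ihvp += p
ihvp *= alpha

exact = np.linalg.solve(H, v)        # direct solve, feasible only for toy sizes
```

In practice the `H @ p` products are obtained by double backpropagation, so the Hessian is never formed or inverted.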
+
+# 3 OUR METHOD
+
+We now describe the general AuxiLearn framework for learning with auxiliary tasks. For that purpose, we use two networks, a primary network that is optimized on all tasks and an auxiliary network that is optimized on the main task only. First, we introduce our notations and formulate the general objective. Then, we describe two instances of this framework: combining auxiliaries and learning new auxiliaries. Finally, we present our optimization approach for both instances.
+
+# 3.1 PROBLEM DEFINITION
+
+Let $\{(\mathbf{x}_i^t,\pmb {y}_i^t)\}_{i}$ be the training set and $\{(\mathbf{x}_i^a,\pmb {y}_i^a)\}_{i}$ be a distinct independent set which we term auxiliary set. Let $f(\cdot ;W)$ denote the primary network, and let $g(\cdot ;\phi)$ denote the auxiliary network. Here, $W$ are the parameters of the model optimized on the training set, and $\phi$ are the auxiliary parameters trained on the auxiliary set. The training loss is defined as:
+
+$$
+\mathcal{L}_{T} = \mathcal{L}_{T}(W, \phi) = \sum_{i} \ell_{\text{main}}\left(\mathbf{x}_{i}^{t}, \boldsymbol{y}_{i}^{t}; W\right) + h\left(\mathbf{x}_{i}^{t}, \boldsymbol{y}_{i}^{t}, W; \phi\right), \tag{1}
+$$
+
+where $\ell_{main}$ denotes the loss of the main task and $h$ is the overall auxiliary loss, controlled by $\phi$ . In Sections 3.2 & 3.3 we will describe two instances of $h$ . We note that $h$ has access to both $W$ and $\phi$ . The loss on the auxiliary set is defined as $\mathcal{L}_A = \sum_i \ell_{main}(\mathbf{x}_i^a, \mathbf{y}_i^a; W)$ , since we are interested in the generalization performance of the main task.
+
+We wish to find auxiliary parameters $(\phi)$ such that the primary parameters $(W)$ , trained with the combined objective, generalize well. More formally, we seek
+
+$$
+\phi^{*} = \arg\min_{\phi} \mathcal{L}_{A}\left(W^{*}(\phi)\right), \quad \text{s.t.} \quad W^{*}(\phi) = \arg\min_{W} \mathcal{L}_{T}(W, \phi). \tag{2}
+$$
+
+# 3.2 LEARNING TO COMBINE AUXILIARY TASKS
+
+Suppose we are given $K$ auxiliary tasks, usually designed using expert domain knowledge. We wish to learn how to optimally leverage these auxiliaries by learning to combine their corresponding losses. Let $\ell (\mathbf{x},\mathbf{y};W) = (\ell_{main}(\mathbf{x},y^{main};W),\ell_1(\mathbf{x},y^1;W),\dots,\ell_K(\mathbf{x},y^K;W))$ denote a loss vector. We wish to learn an auxiliary network $g:\mathbb{R}^{K + 1}\to \mathbb{R}$ over the losses that will be added to $\ell_{main}$ in order to output the training loss $\mathcal{L}_T = \ell_{main} + g(\ell ;\phi)$ . Here, $h$ from Eq. (1) is given by $h(\cdot ;\phi) = g(\ell ;\phi)$ .
+
+Typically, $g(\ell; \phi)$ is chosen to be a linear combination of the losses: $g(\ell; \phi) = \sum_{j} \phi_{j} \ell_{j}$, with positive weights $\phi_{j} \geq 0$ that are tuned using a grid search. However, this method can only scale to a few auxiliaries, as the run time of grid search is exponential in the number of tasks. Our method can handle a large number of auxiliaries and easily extends to a more flexible formulation in which $g$ is parametrized by a deep NN. This general form allows us to capture complex interactions between tasks and learn non-linear combinations of losses. See Figure 1a for an illustration.
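As a sketch, a tiny two-layer network over the loss vector (a hypothetical stand-in for $g$; the sizes, initial weights, and clipping step are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

class LossCombiner:
    """Toy auxiliary network g: R^{K+1} -> R over the loss vector.
    With non-negative weights and a monotone activation, g is
    non-decreasing in every input loss, generalizing the linear
    weighting with phi_j >= 0."""
    def __init__(self, n_losses, hidden=8):
        self.W1 = rng.uniform(0.0, 0.5, size=(hidden, n_losses))
        self.W2 = rng.uniform(0.0, 0.5, size=(hidden,))

    def clip_negative(self):
        # would be called after every update of phi to keep g monotone
        self.W1 = np.maximum(self.W1, 0.0)
        self.W2 = np.maximum(self.W2, 0.0)

    def __call__(self, losses):
        h = np.maximum(self.W1 @ losses, 0.0)  # ReLU is monotone non-decreasing
        return float(self.W2 @ h)

g = LossCombiner(n_losses=4)               # ell_main plus K = 3 auxiliaries
ell = np.array([1.0, 0.5, 0.2, 0.3])       # toy loss vector
train_loss = ell[0] + g(ell)               # L_T = ell_main + g(ell; phi)
```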
+
+One way to view a non-linear combination of losses is as an adaptive linear weighting, where losses have a different set of weights for each datum. If the loss at point $\mathbf{x}$ is $\ell_{\text{main}}(\mathbf{x}, y^{\text{main}}) + g(\ell(\mathbf{x}, \mathbf{y}))$ , then the gradients are $\nabla_W \ell_{\text{main}}(\mathbf{x}, y^{\text{main}}) + \sum_j \frac{\partial g}{\partial \ell_j} \nabla_W \ell_j(\mathbf{x}, y^j)$ . This is equivalent to an adaptive loss where the loss of datum $\mathbf{x}$ is $\ell_{\text{main}} + \sum_j \alpha_{j,\mathbf{x}} \ell_j$ and, $\alpha_{j,\mathbf{x}} = \frac{\partial g}{\partial \ell_j}$ . This observation connects our approach to other studies that assign adaptive loss weights (e.g., Du et al. (2018); Liu et al. (2019b)).
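This equivalence can be checked numerically with toy scalar losses (every function below is a hypothetical stand-in, chosen only to exercise the chain rule):

```python
import numpy as np

W = np.array([0.7, -0.3])
l1 = lambda W: (W[0] - 1.0) ** 2           # plays the role of ell_main
l2 = lambda W: (W[1] + 2.0) ** 2           # plays the role of an auxiliary loss
g = lambda a, b: a * b + 0.5 * b ** 2      # toy non-linear combiner

# analytic gradients of the individual losses
grad_l1 = np.array([2.0 * (W[0] - 1.0), 0.0])
grad_l2 = np.array([0.0, 2.0 * (W[1] + 2.0)])

# adaptive weights alpha_j = dg/dl_j at the current loss values
a, b = l1(W), l2(W)
alpha1, alpha2 = b, a + b                  # partial derivatives of this g

# total gradient via the adaptive-weighting view
grad_adaptive = grad_l1 + alpha1 * grad_l1 + alpha2 * grad_l2

# central-difference gradient of the full objective, for comparison
L = lambda W: l1(W) + g(l1(W), l2(W))
eps = 1e-6
grad_num = np.array([(L(W + eps * e) - L(W - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
```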
+
+Convolutional loss network. In certain problems there exists a spatial relation among losses, e.g., semantic segmentation and depth estimation for images. A common approach is to average the losses over all locations. In contrast, AuxiLearn can leverage this spatial relation by creating a loss-image in which each task forms a channel of pixel-losses induced by that task. We then parametrize $g$ as a CNN that acts on this loss-image. This yields a spatial-aware loss function that captures interactions between task losses. See an example of a loss-image in Figure 3.
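A minimal sketch of constructing such a loss-image for two tasks (the shapes and the 1×1 weighting are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4x4 loss-image: instead of averaging each task's per-pixel losses,
# stack them as channels so a CNN-based g can mix them spatially.
h, w = 4, 4
seg_loss = rng.random((h, w))       # per-pixel main (segmentation) losses
aux_loss = rng.random((h, w))       # per-pixel auxiliary (e.g. depth) losses
loss_image = np.stack([seg_loss, aux_loss])      # shape (2, H, W)

# A 1x1 convolution with non-negative weights reduces to a spatially
# shared linear weighting; larger kernels let g mix neighbouring pixels.
kernel = np.array([1.0, 0.5])
combined = np.tensordot(kernel, loss_image, axes=1)  # shape (H, W)
train_loss = combined.mean()
```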
+
+Monotonicity. It is common to parametrize the function $g(\ell ;\phi)$ as a linear combination with non-negative weights. Under this parameterization, $g$ is a monotonic non-decreasing function of the losses. A natural question is whether we should generalize this behavior and constrain $g(\ell ;\phi)$ to be non-decreasing w.r.t. the input losses as well. Empirically, we found that training with monotonic non-decreasing networks tends to be more stable and achieves better or equivalent performance. We impose monotonicity during training by clipping negative weights. See Appendix C.2 for a detailed discussion and an empirical comparison with non-monotonic networks.
+
+# 3.3 LEARNING NEW AUXILIARY TASKS
+
+The previous subsection focused on situations where auxiliary tasks are given. In many cases, however, no useful auxiliary tasks are known in advance, and we are only presented with the main task. We now describe how to use AuxiLearn in such cases. The intuition is simple: We wish to learn an auxiliary task that pushes the representation of the primary network to generalize better on the main task, as measured using the auxiliary set. We do so in a student-teacher manner: an auxiliary "teacher" network produces labels for the primary network (the "student") which tries to predict these labels as an auxiliary task. Both networks are trained jointly.
+
+More specifically, for auxiliary classification, we learn a soft labeling function $g(\mathbf{x};\phi)$ which produces pseudo labels $y_{aux}$ for input samples $\mathbf{x}$ . These labels are then provided to the main network $f(\mathbf{x};W)$ for training (see Figure 1b). During training, the primary network $f(\mathbf{x};W)$ outputs two predictions, $\hat{y}_{main}$ for the main task and $\hat{y}_{aux}$ for the auxiliary task. We then compute the full training loss $\mathcal{L}_T = \ell_{main}(\hat{y}_{main},y_{main}) + \ell_{aux}(\hat{y}_{aux},y_{aux})$ to update $W$ . Here, the $h$ component of $\mathcal{L}_T$ in Eq. (1) is given by $h(\cdot ;\phi) = \ell_{aux}(f(\mathbf{x}_i^t;W),g(\mathbf{x}_i^t;\phi))$ . As before, we update $\phi$ using the auxiliary set with the loss $\mathcal{L}_A = \ell_{main}$ . Intuitively, the teacher auxiliary network $g$ is rewarded when it provides labels to the student that help it succeed in the main task, as measured using $\mathcal{L}_A$ .
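The training-loss computation above can be sketched as follows; the random arrays stand in for the outputs of the primary network $f(\mathbf{x};W)$ and the teacher $g(\mathbf{x};\phi)$, and the class counts are hypothetical. The key ingredient is a cross-entropy against soft (teacher-generated) labels.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_ce(logits, soft_targets):
    """Cross-entropy against soft targets (reduces to the usual CE for one-hot)."""
    logp = np.log(softmax(logits) + 1e-12)
    return -np.mean(np.sum(soft_targets * logp, axis=-1))

rng = np.random.default_rng(4)
B, C_main, C_aux = 8, 10, 5

# Two heads of the primary network f(x; W) (placeholder logits).
main_logits = rng.standard_normal((B, C_main))
aux_logits = rng.standard_normal((B, C_aux))
y_main = rng.integers(0, C_main, B)

# Teacher g(x; phi) emits soft auxiliary labels (placeholder values).
y_aux = softmax(rng.standard_normal((B, C_aux)))

onehot = np.eye(C_main)[y_main]
L_T = soft_ce(main_logits, onehot) + soft_ce(aux_logits, y_aux)  # train loss for W
```

$\phi$ itself is never updated through $\mathcal{L}_T$; it receives gradients only through $\mathcal{L}_A$ on the auxiliary set, as in Section 3.4.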
+
+# 3.4 OPTIMIZING AUXILIARY PARAMETERS
+
+We now return to the bi-level optimization problem in Eq. (2) and present our method for optimizing $\phi$ . Solving Eq. (2) for $\phi$ is challenging due to the indirect dependence of $\mathcal{L}_A$ on the auxiliary parameters. To compute the gradients of $\phi$ , we need to differentiate through the optimization process over $W$ , since $\nabla_{\phi}\mathcal{L}_A = \nabla_W\mathcal{L}_A\cdot \nabla_{\phi}W^*$ . As in Liao et al. (2018); Lorraine et al. (2020), we use the implicit function theorem (IFT) to evaluate $\nabla_{\phi}W^{*}$ :
+
+$$
+\nabla_{\phi} W^{*} = -\underbrace{\left(\nabla_{W}^{2} \mathcal{L}_{T}\right)^{-1}}_{|W| \times |W|} \cdot \underbrace{\nabla_{\phi} \nabla_{W} \mathcal{L}_{T}}_{|W| \times |\phi|}. \tag{3}
+$$
+
+We can leverage the IFT to approximate the gradients of the auxiliary parameters $\phi$ :
+
+$$
+\nabla_{\phi} \mathcal{L}_{A}\left(W^{*}(\phi)\right) = -\underbrace{\nabla_{W} \mathcal{L}_{A}}_{1 \times |W|} \cdot \underbrace{\left(\nabla_{W}^{2} \mathcal{L}_{T}\right)^{-1}}_{|W| \times |W|} \cdot \underbrace{\nabla_{\phi} \nabla_{W} \mathcal{L}_{T}}_{|W| \times |\phi|}. \tag{4}
+$$
+
+See Appendix A for a detailed derivation. To compute the inverse-Hessian-vector product, we use the algorithm proposed by Lorraine et al. (2020), which combines a Neumann-series approximation with efficient vector-Jacobian products. We note that computing $\nabla_{\phi}\mathcal{L}_A$ exactly via the IFT requires finding a point such that $\nabla_W\mathcal{L}_T = 0$ . In practice, we only approximate $W^{*}$ , and train $W$ and $\phi$ simultaneously, alternating between optimizing $W$ on $\mathcal{L}_T$ and optimizing $\phi$ on $\mathcal{L}_A$ . We summarize our method in Algorithms 1 and 2. Theoretical considerations regarding our method are given in Appendix D.
+
+Algorithm 1: AuxiLearn
+Initialize auxiliary parameters $\phi$ and weights $W$
+while not converged do
+    for $k = 1, \dots, N$ do
+        $\mathcal{L}_T = \ell_{main}(\mathbf{x}, y; W) + h(\mathbf{x}, y, W; \phi)$
+        $W \gets W - \alpha \nabla_W \mathcal{L}_T$
+    end
+    $\phi \gets \phi - \text{Hypergradient}(\mathcal{L}_A, \mathcal{L}_T, \phi, W)$
+end
+return $W$
+
+Algorithm 2: Hypergradient
+Input: training loss $\mathcal{L}_T$ , auxiliary loss $\mathcal{L}_A$ , an approximate fixed point $(\phi^{\prime}, W^{*})$ , number of iterations $J$ , learning rate $\alpha$
+$v = p = \nabla_W\mathcal{L}_A|_{(\phi^{\prime}, W^{*})}$
+for $j = 1,\dots ,J$ do
+    $v \gets v - \alpha\, v\cdot \nabla_W\nabla_W\mathcal{L}_T$
+    $p \gets p + v$
+end
+return $-p\cdot \nabla_{\phi}\nabla_{W}\mathcal{L}_{T}|_{(\phi^{\prime},W^{*})}$
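Algorithm 2 can be sketched and sanity-checked on a quadratic toy bilevel problem, where the exact hypergradient of Eq. (4) has a closed form. The problem instance below is hypothetical; note the $\alpha$ factor on the returned product, which makes the truncated Neumann series $\alpha\sum_{j=0}^{J}(I-\alpha H)^j \approx H^{-1}$ a valid inverse-Hessian approximation (Lorraine et al., 2020).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)  # SPD Hessian
C = rng.standard_normal((m, n))
t = rng.standard_normal(n)
phi = rng.standard_normal(m)

# Toy bilevel problem (hypothetical):
#   L_T(W, phi) = 0.5 W^T A W + phi^T C W,   L_A(W) = 0.5 ||W - t||^2
W_star = np.linalg.solve(A, -C.T @ phi)      # argmin_W L_T (closed form)
g_A = W_star - t                             # grad_W L_A at W*

def hypergrad_neumann(g, H, mixed, alpha, J):
    """Alg. 2: approximate -g^T H^{-1} @ mixed via a truncated Neumann series."""
    v = p = g.copy()
    for _ in range(J):
        v = v - alpha * (H @ v)   # v <- (I - alpha * H) v
        p = p + v
    return -alpha * (p @ mixed)   # alpha * p ~= H^{-1} g

mixed = C.T                                   # grad_phi grad_W L_T, shape n x m
alpha = 1.0 / np.linalg.norm(A, 2)            # ensures the series converges
approx = hypergrad_neumann(g_A, A, mixed, alpha, J=500)
exact = -g_A @ np.linalg.solve(A, C.T)        # exact IFT hypergradient, Eq. (4)
assert np.allclose(approx, exact, atol=1e-4)
```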
+
+# 4 ANALYSIS
+
+# 4.1 COMPLEXITY OF AUXILIARY HYPOTHESIS SPACE
+
+In our learning setup, an additional auxiliary set is used for tuning a large set of auxiliary parameters. A natural question arises: could the auxiliary parameters overfit this auxiliary set, and what is the complexity of the auxiliary hypothesis space $\mathcal{H}_{\phi}$ ? Analyzing the complexity of this space is difficult because it is coupled with the hypothesis space $\mathcal{H}_W$ of the main model. One can think of this hypothesis space as a subset of the original model hypothesis space $\mathcal{H}_{\phi} = \{h_W : \exists \phi \text{ s.t. } W = \arg \min_W \mathcal{L}_T(W, \phi)\} \subset \mathcal{H}_W$ . Due to the coupling with $\mathcal{H}_W$ , the behavior can be unintuitive. We show that even simple auxiliaries can yield an infinite VC dimension.
+
+Example: Consider the following 1D hypothesis space for binary classification, $\mathcal{H}_W = \{\lceil \cos (Wx)\rceil : W\in \mathbb{R}\}$ , which has an infinite VC dimension. Let the main loss be the zero-one loss and the auxiliary loss be $h(\phi ,W) = (\phi -W)^2$ , namely, an $L_{2}$ regularization with a learned center. Since the model hypothesis space $\mathcal{H}_W$ has an infinite VC dimension, there exist training and auxiliary sets of any size that are shattered by $\mathcal{H}_W$ . Therefore, for any labeling of the auxiliary and training sets, there exists a parameter $\hat{W}$ that perfectly classifies both sets; setting $\phi = \hat{W}$ makes $\hat{W}$ the optimum of training with this auxiliary loss. We thus get that $\mathcal{H}_{\phi}$ also has an infinite VC dimension.
+
+This important example shows that even seemingly simple-looking auxiliary losses can overfit due to the interaction with the model hypothesis space. Thus, it motivates our use of a separate auxiliary set.
+
+# 4.2 ANALYZING AN AUXILIARY TASK EFFECT
+
+When designing or learning auxiliary tasks, one important question is: what makes an auxiliary task useful? Consider the following loss with a single auxiliary task: $\mathcal{L}_T(W,\phi) = \sum_i\ell_{main}(\mathbf{x}_i^t,\pmb {y}_i^t,W) + \phi \cdot \ell_{aux}(\mathbf{x}_i^t,\pmb {y}_i^t,W)$ , so that $h = \phi \cdot \ell_{aux}$ . Assume $\phi = 0$ , so that $W$ is optimized on the standard main-task loss alone. We can then check whether $\frac{d\mathcal{L}_A}{d\phi} \big|_{\phi = 0} < 0$ , namely, whether it would help to add this auxiliary task.
+
+Proposition 1. Let $\mathcal{L}_T(W,\phi) = \sum_i\ell_{main}(\mathbf{x}_i^t,\pmb {y}_i^t,W) + \phi \cdot \ell_{aux}(\mathbf{x}_i^t,\pmb {y}_i^t,W)$ . Suppose that $\phi = 0$ and that the main task was trained until convergence. We have
+
+$$
+\left. \frac{d \mathcal{L}_{A}\left(W^{*}(\phi)\right)}{d \phi} \right|_{\phi = 0} = -\left\langle \nabla_{W} \mathcal{L}_{A}, \left(\nabla_{W}^{2} \mathcal{L}_{T}\right)^{-1} \nabla_{W} \mathcal{L}_{T} \right\rangle, \tag{5}
+$$
+
+i.e., the gradient with respect to the auxiliary weight is the inner product between the Newton-method update and the gradient of the loss on the auxiliary set.
+
+Proof. In the general case, the following holds: $\frac{d\mathcal{L}_A}{d\phi} = -\nabla_W\mathcal{L}_A(\nabla_W^2\mathcal{L}_T)^{-1}\nabla_\phi \nabla_W\mathcal{L}_T$ . For a linear combination, we have $\nabla_{\phi}\nabla_{W}\mathcal{L}_{T} = \sum_{i}\nabla_{W}\ell_{aux}(\mathbf{x}_{i}^{t},\pmb{y}_{i}^{t})$ . Since $W$ is optimized to convergence on the main task, $\sum_i \nabla_W \ell_{main}(\mathbf{x}_i^t, \pmb{y}_i^t) = 0$ , and hence $\nabla_{\phi}\nabla_{W}\mathcal{L}_{T} = \sum_i \nabla_W \ell_{aux} = \nabla_{W}\mathcal{L}_{T}$ , the gradient of the full training loss (with the auxiliary term included).
+
+This simple result shows that the key quantity to monitor is the Newton update, rather than the gradient that is often used (Lin et al., 2019; Du et al., 2018). Intuitively, the Newton update is the relevant quantity because if it is small, then we are already close to the new optimum; due to quadratic convergence, a single Newton step is sufficient for approximately converging to the new optimum.
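Proposition 1 can be checked numerically on a toy quadratic instance in which the inner minimizer has a closed form; the losses below are hypothetical choices, and here $\nabla_\phi\nabla_W\mathcal{L}_T = \sum_i \nabla_W \ell_{aux}$ as in the proof.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
a, b, c = rng.standard_normal(d), rng.standard_normal(d), rng.standard_normal(d)

# Hypothetical quadratic losses: l_main(W) = 0.5||W - a||^2,
# l_aux(W) = 0.5||W - b||^2, auxiliary-set loss L_A(W) = 0.5||W - c||^2,
# and L_T = l_main + phi * l_aux.
def W_star(phi):
    # Closed-form argmin of L_T.
    return (a + phi * b) / (1.0 + phi)

def L_A(phi):
    W = W_star(phi)
    return 0.5 * np.sum((W - c) ** 2)

# Left-hand side of Eq. (5) by central finite differences at phi = 0.
eps = 1e-5
lhs = (L_A(eps) - L_A(-eps)) / (2 * eps)

# Right-hand side: -<grad_W L_A, Hessian^{-1} grad_W l_aux> at W*(0) = a,
# where the Hessian of L_T at phi = 0 is the identity.
rhs = -np.dot(a - c, a - b)
assert abs(lhs - rhs) < 1e-4
```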
+
+# 5 EXPERIMENTS
+
+We evaluate the AuxiLearn framework on two types of tasks: combining given auxiliary tasks into a unified loss (Sections 5.1 - 5.3), and generating a new auxiliary task (Section 5.4). Further experiments and analyses of both modules are given in Appendix C. Throughout all experiments, we use an extra data split for the auxiliary set. Hence, we use four data splits: a training set, a validation set, a test set, and an auxiliary set. The samples for the auxiliary set are pre-allocated from the training set. For a fair comparison, these samples are used as part of the training set by all competing methods; effectively, this means we have a slightly smaller training set for optimizing the parameters $W$ of the primary network. In all experiments, we report the mean performance (e.g., accuracy) along with the standard error of the mean (SEM). Full implementation details of all experiments are given in Appendix B. Our code is available at https://github.com/AvivNavon/AuxiLearn.
+
+Model variants. For learning to combine losses, we evaluate the following variants of auxiliary networks: (1) Linear: a convex linear combination of the loss terms; (2) Deep linear: a deep fully-connected NN with linear activations; (3) Nonlinear: a standard feed-forward NN over the loss terms; and, for Section 5.3 only, (4) ConvNet: a CNN over the loss-images. The expressive power of the deep linear network is equivalent to that of a 1-layer linear network; however, from an optimization perspective, the over-parameterization introduced by the network's depth has been shown to stabilize and accelerate convergence (Arora et al., 2018; Saxe et al., 2014). All variants are constrained to represent only monotone non-decreasing functions.
+
+# 5.1 AN ILLUSTRATIVE EXAMPLE
+
+We first present an illustrative example of how AuxiLearn changes the loss landscape and helps generalization in the presence of label noise and harmful tasks. Consider a regression problem with $y_{main} = \mathbf{w}^{\star T}\mathbf{x} + \epsilon_0$ and two auxiliary tasks. The first auxiliary task is helpful, $y_{1} = \mathbf{w}^{\star T}\mathbf{x} + \epsilon_{1}$ , whereas the second is harmful, $y_{2} = \tilde{\mathbf{w}}^{T}\mathbf{x} + \epsilon_{2}$ with $\tilde{\mathbf{w}}\neq \mathbf{w}^{\star}$ . We let
+
+
+Figure 2: Loss landscape generated by the auxiliary network. Darker is higher. See text for details.
+
+
+
+
+
+$\epsilon_0 \sim \mathcal{N}(0, \sigma_{main}^2)$ and $\epsilon_1, \epsilon_2 \sim \mathcal{N}(0, \sigma_{aux}^2)$ , with $\sigma_{main}^2 > \sigma_{aux}^2$ . We optimize a linear model with weights $\mathbf{w} \in \mathbb{R}^2$ that are shared across tasks, i.e., no task-specific parameters. We set $\mathbf{w}^\star = (1, 1)^T$ and $\tilde{\mathbf{w}} = (2, -4)^T$ . We train an auxiliary network to output linear task weights and observe the changes to the loss landscape in Figure 2. The left plot shows the loss landscape for the main task,
+
+
+Figure 3: Loss-images on test examples from NYUv2: (a) original image; (b) semantic segmentation ground truth; (c) auxiliary losses; (d) segmentation (main task) loss; (e) adaptive pixel-wise weight $\sum_{j}\partial \mathcal{L}_{T} / \partial \ell_{j}$ .
+
+with a training set optimal solution $\mathbf{w}_{train}$ . Note that $\mathbf{w}_{train} \neq \mathbf{w}^*$ due to the noise in the training data. The loss landscape of the weighted train loss at the beginning ( $t = 0$ ) and the end ( $t = T$ ) of training is shown in the middle and right plots, respectively. Note how AuxiLearn learns to ignore the harmful auxiliary and use the helpful one to find a better solution by changing the loss landscape. In Appendix C.3 we show that the auxiliary task weight is inversely proportional to the label noise.
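The toy setup can be reproduced in a few lines; here fixed, hand-picked task weights stand in for the weights the auxiliary network learns, illustrating why suppressing the harmful auxiliary recovers a better solution.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 50, 2
w_true, w_bad = np.array([1.0, 1.0]), np.array([2.0, -4.0])
X = rng.standard_normal((n, d))
y_main = X @ w_true + rng.standard_normal(n) * 1.0   # noisy main labels
y_help = X @ w_true + rng.standard_normal(n) * 0.1   # helpful auxiliary
y_harm = X @ w_bad + rng.standard_normal(n) * 0.1    # harmful auxiliary

def fit(alpha_help, alpha_harm):
    """Weighted least squares under fixed linear task weights (a stand-in
    for the weighting the auxiliary network would learn)."""
    Xs = np.vstack([X, np.sqrt(alpha_help) * X, np.sqrt(alpha_harm) * X])
    ys = np.concatenate([y_main, np.sqrt(alpha_help) * y_help,
                         np.sqrt(alpha_harm) * y_harm])
    return np.linalg.lstsq(Xs, ys, rcond=None)[0]

w_equal = fit(1.0, 1.0)    # equal weighting also fits the harmful task
w_learn = fit(1.0, 0.0)    # harmful task suppressed, helpful task kept
assert np.linalg.norm(w_learn - w_true) < np.linalg.norm(w_equal - w_true)
```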
+
+# 5.2 FINE-GRAINED CLASSIFICATION WITH MANY AUXILIARY TASKS
+
+In fine-grained visual classification tasks, annotators must have domain expertise, making data labeling challenging and potentially expensive (e.g., in the medical domain). In some cases, however, non-experts can annotate visual attributes that are informative about the main task. Consider, for example, recognizing bird species: it requires an ornithologist, yet a layperson can describe the head color or bill shape of a bird. These attributes naturally form auxiliary tasks, which can be trained jointly with the main task of bird classification.
+
+We evaluate AuxiLearn in this fine-grained classification setup using the Caltech-UCSD Birds 200-2011 (CUB) dataset (Wah et al., 2011). CUB contains 11,788 images of 200 bird species, each image associated with 312 binary visual attributes, which we use as auxiliaries. Since we are interested in setups where optimizing the main task alone does not generalize well, we demonstrate our method in a semi-supervised setting: auxiliary labels are available for all images, but only $K$ labels per class are available for the main task (denoted $K$ -shot).
+
+Table 1: Test classification accuracy on CUB 200-2011 dataset, averaged over three runs (± SEM).
+
+| Method | 5-shot Top 1 | 5-shot Top 3 | 10-shot Top 1 | 10-shot Top 3 |
+| --- | --- | --- | --- | --- |
+| STL | 35.50 ± 0.7 | 54.79 ± 0.7 | 54.79 ± 0.3 | 74.00 ± 0.1 |
+| Equal | 41.47 ± 0.4 | 62.62 ± 0.4 | 55.36 ± 0.3 | 75.51 ± 0.4 |
+| Uncertainty | 35.22 ± 0.3 | 54.99 ± 0.7 | 53.75 ± 0.6 | 73.25 ± 0.3 |
+| DWA | 41.82 ± 0.1 | 62.91 ± 0.4 | 54.90 ± 0.3 | 75.74 ± 0.3 |
+| GradNorm | 41.49 ± 0.4 | 63.12 ± 0.4 | 55.23 ± 0.1 | 75.62 ± 0.3 |
+| GCS | 42.57 ± 0.7 | 62.60 ± 0.1 | 55.65 ± 0.2 | 75.71 ± 0.1 |
+| AuxiLearn (ours): Linear | 41.71 ± 0.4 | 63.73 ± 0.6 | 54.77 ± 0.2 | 75.51 ± 0.7 |
+| AuxiLearn (ours): Deep Linear | 45.84 ± 0.3 | 66.21 ± 0.5 | 57.08 ± 0.2 | 75.3 ± 0.6 |
+| AuxiLearn (ours): Nonlinear | 47.07 ± 0.1 | 68.25 ± 0.3 | 59.04 ± 0.2 | 78.08 ± 0.2 |
+
+We compare AuxiLearn with the following MTL and auxiliary learning baselines: (1) Single-task learning (STL): Training only on the main task. (2) Equal: Standard multitask learning with equal weights for all auxiliary tasks. (3) GradNorm (Chen et al., 2018): An MTL method that scales losses based on gradient magnitude. (4) Uncertainty (Kendall et al., 2018): An MTL approach that uses task uncertainty to adjust task weights. (5) Gradient Cosine Similarity (GCS) (Du et al., 2018): An auxiliary-learning approach that uses gradient similarity between the main and auxiliary tasks. (6) Dynamic weight averaging (DWA) (Liu et al., 2019b): An MTL approach that sets task weights based on the rate of loss change over time. The primary network in all experiments is ResNet-18 (He et al., 2016) pre-trained on ImageNet. We use a 5-layer fully connected NN for the auxiliary network. Sensitivity analysis of the network size and auxiliary set size is presented in Appendix C.4.
+
+Table 1 shows the test set classification accuracy. Most methods significantly improve over the STL baseline, highlighting the benefits of using additional (weak) labels. Our Nonlinear and Deep linear auxiliary network variants outperform all previous approaches by a large margin. As expected, a non-linear auxiliary network is better than its linear counterparts. This suggests that there are some non-linear interactions between the loss terms that the non-linear network is able to capture. Also, notice the effect of using deep-linear compared to a (shallow) linear model. This result indicates that at least part of the improvement achieved by our method is attributed to the over-parameterization of the auxiliary network. In the Appendix we further analyze properties of auxiliary networks. Appendix C.5 visualizes the full optimization path of a linear auxiliary network over a polynomial kernel on the losses, and Appendix C.6 shows that the last state of the auxiliary network is not informative enough.
+
+# 5.3 PIXEL-WISE LOSSES
+
+We consider the indoor-scene segmentation task from Couprie et al. (2013), which uses the NYUv2 dataset (Silberman et al., 2012). The main task is 13-class semantic segmentation, with depth and surface-normal prediction (Eigen and Fergus, 2015) as auxiliaries. We use a SegNet-based (Badrinarayanan et al., 2017) model for the primary network, and a 4-layer CNN for the auxiliary network.
+
+Since losses in this task are given per-pixel, we can apply the ConvNet variant of the auxiliary network to the loss image. Namely, each task forms a channel with the per-pixel losses as values. Table 2 reports the mean Intersection over Union (mIoU) and pixel accuracy for the main segmentation task. Here, we
+
+Table 2: Test results for semantic segmentation on NYUv2, averaged over four runs (± SEM).
+
+| Method | mIoU | Pixel acc. |
+| --- | --- | --- |
+| STL | 18.90 ± 0.21 | 54.74 ± 0.94 |
+| Equal | 19.20 ± 0.19 | 55.37 ± 1.00 |
+| Uncertainty | 19.34 ± 0.18 | 55.70 ± 0.79 |
+| DWA | 19.38 ± 0.14 | 55.37 ± 0.35 |
+| GradNorm | 19.52 ± 0.21 | 56.70 ± 0.33 |
+| MGDA | 19.53 ± 0.35 | 56.28 ± 0.46 |
+| GCS | 19.94 ± 0.13 | 56.58 ± 0.81 |
+| AuxiLearn (ours): Linear | 20.04 ± 0.38 | 56.80 ± 0.14 |
+| AuxiLearn (ours): Deep Linear | 19.94 ± 0.12 | 56.45 ± 0.79 |
+| AuxiLearn (ours): Nonlinear | 20.09 ± 0.34 | 56.80 ± 0.53 |
+| AuxiLearn (ours): ConvNet | 20.54 ± 0.30 | 56.69 ± 0.44 |
+
+also compare with MGDA (Sener and Koltun, 2018), which was not evaluated in Section 5.2 because the large number of auxiliary tasks there made its training time prohibitive. All weighting methods achieve a performance gain over the STL model. The ConvNet variant of AuxiLearn outperforms all competitors in terms of test mIoU.
+
+Figure 3 shows examples of the loss-images for the auxiliary (c) and main (d) tasks, together with the pixel-wise weights (e). First, note how the loss-images resemble the actual input images. This suggests that a spatial relationship can be leveraged using a CNN auxiliary network. Second, pixel weights are a non-trivial combination of the main and auxiliary task losses. In the top (bottom) row, the plant (couch) has a low segmentation loss and intermediate auxiliary loss. As a result, a higher weight is allocated to these pixels which increases the error signal.
+
+# 5.4 LEARNING AUXILIARY LABELS
+
+Table 3: Learning auxiliary task. Test accuracy averaged over three runs (±SEM) without pre-training.
+
+| Method | CIFAR10 (5%) | CIFAR100 (5%) | SVHN (5%) | CUB (30-shot) | Pet (30-shot) | Cars (30-shot) |
+| --- | --- | --- | --- | --- | --- | --- |
+| STL | 50.8 ± 0.8 | 19.8 ± 0.7 | 72.9 ± 0.3 | 37.2 ± 0.8 | 26.1 ± 0.5 | 59.2 ± 0.4 |
+| MAXL-F | 56.1 ± 0.1 | 20.4 ± 0.6 | 75.4 ± 0.3 | 39.6 ± 1.3 | 26.2 ± 0.3 | 59.6 ± 1.1 |
+| MAXL | 58.2 ± 0.3 | 21.0 ± 0.4 | 75.5 ± 0.4 | 40.7 ± 0.6 | 26.3 ± 0.6 | 60.4 ± 0.8 |
+| AuxiLearn | 60.7 ± 1.3 | 21.5 ± 0.3 | 76.4 ± 0.2 | 44.5 ± 0.3 | 37.0 ± 0.6 | 64.4 ± 0.3 |
+
+In many cases, designing helpful auxiliaries is challenging. We now evaluate AuxiLearn on learning multi-class auxiliary classification tasks. We use three multi-class classification datasets, CIFAR10, CIFAR100 (Krizhevsky et al., 2009), and SVHN (Netzer et al., 2011), and three fine-grained classification datasets: CUB-200-2011, Oxford-IIIT Pet (Parkhi et al., 2012), and Cars (Krause et al., 2013). Pet contains 7,349 images of 37 species of dogs and cats, and Cars contains 16,185 images of 196 classes of cars.
+
+Following Liu et al. (2019a), we learn a different auxiliary task for each class of the main task. In all experiments and all learned tasks, we set the number of classes to 5. To examine the effect of the learned auxiliary losses in the low-data regime, we evaluate the performance while training with only $5\%$ of the training set in CIFAR10, CIFAR100, and SVHN datasets, and $\sim 30$ samples per
+
+
+Figure 4: t-SNE applied to auxiliary labels learned for Frog and Deer classes, in CIFAR10. Best viewed in color.
+
+
+
+class in CUB, Oxford-IIIT Pet, and Cars. We use VGG-16 (Simonyan and Zisserman, 2014) as the backbone for both CIFAR datasets, a 4-layer ConvNet for the SVHN experiment, and ResNet-18 for the fine-grained datasets. In all experiments, the auxiliary and primary networks share the same architecture and are trained from scratch without pre-training.
+
+We compare our approach with the following baselines: (1) Single-task learning (STL): training on the main task only. (2) MAXL: Meta AuXiliary Learning, proposed by Liu et al. (2019a) for learning auxiliary tasks; MAXL optimizes the label generator in a meta-learning fashion. (3) MAXL-F: a frozen, randomly initialized MAXL label generator; this baseline decouples the effect of having a teacher network from the additional effect of training it.
+
+Table 3 shows that AuxiLearn outperforms all baselines in all setups, even though it sacrifices part of the training set for the auxiliary set. It is also worth noting that our optimization approach is significantly faster than MAXL, yielding a 3× improvement in run-time. In Appendix C.9 and C.10 we present additional experiments for this setup, including an extension of the method to point-cloud part segmentation and experiments with varying training-set sizes.
+
+Figure 4 presents a 2D t-SNE projection of the 5D vectors of auxiliary (soft) labels learned by AuxiLearn. We use samples of the main classes Frog (left) and Deer (right) from the CIFAR10 dataset; t-SNE was applied to each auxiliary task separately. When considering how images are projected into this space of auxiliary soft labels, several structures emerge. The auxiliary network learns a fine partition of the Frog class that separates real images from illustrations. More interestingly, the soft labels learned for the Deer class have a middle region that contains only deer with antlers (in various poses and against varying backgrounds). By capturing this semantic feature in the learned auxiliary labels, the auxiliary task can help the primary network discriminate between main-task classes.
+
+# 6 DISCUSSION
+
+In this paper, we presented a novel and unified approach for two tasks: combining predefined auxiliary tasks, and learning auxiliary tasks that are useful for the primary task. We theoretically showed which auxiliaries can be beneficial and why a separate auxiliary set is important. We empirically demonstrated that our method achieves significant improvements over existing methods on various datasets and tasks. This work opens interesting directions for future research. First, when training deep linear auxiliary networks, we observed learning dynamics similar to those of non-linear models, and correspondingly better performance than their shallow linear counterparts. This effect has been observed in standard training setups, but the optimization path of auxiliary networks is very different. Second, we found that reallocating labeled data from the training set to an auxiliary set is consistently helpful. A broader question remains: what is the most efficient allocation?
+
+# ACKNOWLEDGEMENTS
+
+This study was funded by a grant to GC from the Israel Science Foundation (ISF 737/2018), and by an equipment grant to GC and Bar-Ilan University from the Israel Science Foundation (ISF 2332/18). IA and AN were funded by a grant from the Israeli innovation authority, through the AVATAR consortium.
+
+# REFERENCES
+
+Achituve, I., Maron, H., and Chechik, G. (2020). Self-supervised learning for domain adaptation on point-clouds. arXiv preprint arXiv:2003.12641.
+Alemi, A. A., Fischer, I., Dillon, J. V., and Murphy, K. (2017). Deep variational information bottleneck. In International Conference on Learning Representations.
+Arora, S., Cohen, N., and Hazan, E. (2018). On the optimization of deep networks: Implicit acceleration by overparameterization. In International Conference on Machine Learning.
+Badrinarayanan, V., Kendall, A., and Cipolla, R. (2017). Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence, 39(12):2481-2495.
+Bengio, Y. (2000). Gradient-based optimization of hyperparameters. Neural computation, 12(8):1889-1900.
+Chang, A. X., Funkhouser, T., Guibas, L., Hanrahan, P., Huang, Q., Li, Z., Savarese, S., Savva, M., Song, S., Su, H., et al. (2015). Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012.
+Chen, Z., Badrinarayanan, V., Lee, C.-Y., and Rabinovich, A. (2018). Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In International Conference on Machine Learning, pages 794-803. PMLR.
+Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., and Schiele, B. (2016). The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213-3223.
+Couprie, C., Farabet, C., Najman, L., and LeCun, Y. (2013). Indoor semantic segmentation using depth information. In International Conference on Learning Representations.
+Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 248-255.
+Doersch, C., Gupta, A., and Efros, A. A. (2015). Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pages 1422-1430.
+Doersch, C. and Zisserman, A. (2017). Multi-task self-supervised visual learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2051-2060.
+Du, Y., Czarnecki, W. M., Jayakumar, S. M., Pascanu, R., and Lakshminarayanan, B. (2018). Adapting auxiliary losses using gradient similarity. arXiv preprint arXiv:1812.02224.
+Eigen, D. and Fergus, R. (2015). Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE international conference on computer vision, pages 2650-2658.
+Fan, X., Monti, E., Mathias, L., and Dreyer, M. (2017). Transfer learning for neural semantic parsing. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 48-56.
+Finn, C., Abbeel, P., and Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126-1135.
+Foo, C.-s., Do, C. B., and Ng, A. Y. (2008). Efficient multiple hyperparameter learning for log-linear models. In Advances in neural information processing systems, pages 377-384.
+Ganin, Y. and Lempitsky, V. (2015). Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning.
+Gidaris, S., Singh, P., and Komodakis, N. (2018). Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations.
+
+Goyal, P., Mahajan, D., Gupta, A., and Misra, I. (2019). Scaling and benchmarking self-supervised visual representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
+Hassani, K. and Haley, M. (2019). Unsupervised multi-task feature learning on point clouds. In Proceedings of the IEEE International Conference on Computer Vision, pages 8160-8171.
+He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778.
+Jaderberg, M., Mnih, V., Czarnecki, W. M., Schaul, T., Leibo, J. Z., Silver, D., and Kavukcuoglu, K. (2016). Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397.
+Jing, L. and Tian, Y. (2020). Self-supervised visual feature learning with deep neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
+Kendall, A., Gal, Y., and Cipolla, R. (2018). Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7482-7491.
+Kingma, D. P. and Ba, J. (2014). ADAM: A method for stochastic optimization. In International Conference on Learning Representations.
+Krause, J., Stark, M., Deng, J., and Fei-Fei, L. (2013). 3D object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition, Sydney, Australia.
+Krizhevsky, A., Hinton, G., et al. (2009). Learning multiple layers of features from tiny images. Technical report, University of Toronto.
+Larsen, J., Hansen, L. K., Svarer, C., and Ohlsson, M. (1996). Design and regularization of neural networks: the optimal use of a validation set. In Neural Networks for Signal Processing VI. Proceedings of the IEEE Signal Processing Society Workshop, pages 62-71. IEEE.
+Liao, R., Xiong, Y., Fetaya, E., Zhang, L., Yoon, K., Pitkow, X., Urtasun, R., and Zemel, R. (2018). Reviving and improving recurrent back-propagation. In International Conference on Machine Learning.
+Lin, X., Baweja, H., Kantor, G., and Held, D. (2019). Adaptive auxiliary task weighting for reinforcement learning. In Advances in Neural Information Processing Systems, pages 4773-4784.
+Liu, S., Davison, A., and Johns, E. (2019a). Self-supervised generalisation with meta auxiliary learning. In Advances in Neural Information Processing Systems, pages 1677-1687.
+Liu, S., Johns, E., and Davison, A. J. (2019b). End-to-end multi-task learning with attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1871-1880.
+Lorraine, J., Vicol, P., and Duvenaud, D. (2020). Optimizing millions of hyperparameters by implicit differentiation. In International Conference on Artificial Intelligence and Statistics, pages 1540-1552. PMLR.
+Luketina, J., Berglund, M., Greff, K., and Raiko, T. (2016). Scalable gradient-based tuning of continuous regularization hyperparameters. In International conference on machine learning, pages 2952-2960.
+Mirowski, P. (2019). Learning to navigate. In 1st International Workshop on Multimodal Understanding and Learning for Embodied Applications, pages 25-25.
+Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. (2011). Reading digits in natural images with unsupervised feature learning.
+Noroozi, M. and Favaro, P. (2016). Unsupervised learning of visual representations by solving jigsaw puzzles. In Proceedings of the European Conference on Computer Vision, pages 69-84. Springer.
+
+Parkhi, O. M., Vedaldi, A., Zisserman, A., and Jawahar, C. V. (2012). Cats and dogs. In IEEE Conference on Computer Vision and Pattern Recognition.
+Pedregosa, F. (2016). Hyperparameter optimization with approximate gradient. In International Conference on Machine Learning, pages 737-746.
+Qi, C. R., Su, H., Mo, K., and Guibas, L. J. (2017). Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652-660.
+Rajeswaran, A., Finn, C., Kakade, S. M., and Levine, S. (2019). Meta-learning with implicit gradients. In Advances in Neural Information Processing Systems, pages 113-124.
+Salimans, T. and Kingma, D. P. (2016). Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in neural information processing systems, pages 901-909.
+Sauder, J. and Sievers, B. (2019). Self-supervised deep learning on point clouds by reconstructing space. In Advances in Neural Information Processing Systems, pages 12942-12952.
+Saxe, A. M., McClelland, J. L., and Ganguli, S. (2014). Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In International Conference on Learning Representations.
+Sener, O. and Koltun, V. (2018). Multi-task learning as multi-objective optimization. In Advances in Neural Information Processing Systems, pages 527-538.
+Silberman, N., Hoiem, D., Kohli, P., and Fergus, R. (2012). Indoor segmentation and support inference from RGBD images. In Proceedings of the European conference on computer vision, pages 746-760. Springer.
+Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
+Standley, T., Zamir, A. R., Chen, D., Guibas, L., Malik, J., and Savarese, S. (2019). Which tasks should be learned together in multi-task learning? arXiv preprint arXiv:1905.07553.
+Tang, L., Chen, K., Wu, C., Hong, Y., Jia, K., and Yang, Z. (2020). Improving semantic analysis on point clouds via auxiliary supervision of local geometric priors. arXiv preprint arXiv:2001.04803.
+Trinh, T., Dai, A., Luong, T., and Le, Q. (2018). Learning longer-term dependencies in RNNs with auxiliary losses. In International Conference on Machine Learning, pages 4965-4974.
+Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al. (2016). Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630-3638.
+Wah, C., Branson, S., Welinder, P., Perona, P., and Belongie, S. (2011). The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology.
+Wang, Y., Sun, Y., Liu, Z., Sarma, S. E., Bronstein, M. M., and Solomon, J. M. (2019). Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics, 38(5):1-12.
+Yi, L., Kim, V. G., Ceylan, D., Shen, I.-C., Yan, M., Su, H., Lu, C., Huang, Q., Sheffer, A., and Guibas, L. (2016). A scalable active framework for region annotation in 3D shape collections. ACM Transactions on Graphics, 35(6):1-12.
+Zhang, Y. and Yang, Q. (2017). A survey on multi-task learning. arXiv preprint arXiv:1707.08114.
+Zhang, Z., Luo, P., Loy, C. C., and Tang, X. (2014). Facial landmark detection by deep multi-task learning. In European conference on computer vision, pages 94-108. Springer.
+
+# Appendix: Auxiliary Learning by Implicit Differentiation
+
+# A GRADIENT DERIVATION
+
+We provide here the derivation of Eq. (4) in Section 3. Consider the function $\nabla_W\mathcal{L}_T(W,\phi)$ around a local minimum $(\hat{W},\hat{\phi})$ and assume the Hessian $\nabla_W^2\mathcal{L}_T(\hat{W},\hat{\phi})$ is positive-definite. At that point, we have $\nabla_W\mathcal{L}_T(\hat{W},\hat{\phi}) = 0$ . By the IFT, there exists, locally around $(\hat{W},\hat{\phi})$ , a smooth function $W^{*}(\phi)$ such that $\nabla_W\mathcal{L}_T(W^{*}(\phi),\phi) = 0$ . Since the function $\nabla_W\mathcal{L}_T(W^* (\phi),\phi)$ is constant and equal to zero, its derivative w.r.t. $\phi$ is also zero. Taking the total derivative we obtain
+
+$$
+0 = \nabla_ {W} ^ {2} \mathcal {L} _ {T} (W, \phi) \nabla_ {\phi} W ^ {*} (\phi) + \nabla_ {\phi} \nabla_ {W} \mathcal {L} _ {T} (W, \phi). \tag {6}
+$$
+
+Multiplying by $\nabla_W^2\mathcal{L}_T(W,\phi)^{-1}$ and reordering we obtain
+
+$$
+\nabla_ {\phi} W ^ {*} (\phi) = - \nabla_ {W} ^ {2} \mathcal {L} _ {T} (W, \phi) ^ {- 1} \nabla_ {\phi} \nabla_ {W} \mathcal {L} _ {T} (W, \phi). \tag {7}
+$$
+
+We can use this result to compute the gradient of the auxiliary set loss w.r.t. $\phi$ :
+
+$$
+\nabla_ {\phi} \mathcal {L} _ {A} \left(W ^ {*} (\phi)\right) = \nabla_ {W} \mathcal {L} _ {A} \cdot \nabla_ {\phi} W ^ {*} (\phi) = - \nabla_ {W} \mathcal {L} _ {A} \cdot \left(\nabla_ {W} ^ {2} \mathcal {L} _ {T}\right) ^ {- 1} \cdot \nabla_ {\phi} \nabla_ {W} \mathcal {L} _ {T}. \tag {8}
+$$
+
+As discussed in the main text, fully optimizing $W$ to convergence is too computationally expensive. Instead, we update $\phi$ once for every several update steps for $W$ , as seen in Alg. 1. To compute the vector inverse-Hessian product, we use Alg. 2 that was proposed in (Lorraine et al., 2020).
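The inverse-Hessian–vector product in Eq. (8) can be approximated with a truncated Neumann series, $H^{-1}v \approx \alpha \sum_{j=0}^{J}(I - \alpha H)^{j} v$, which converges for a small enough step size $\alpha$ when $H$ is positive-definite. The following is a minimal NumPy sketch of this idea; the explicit toy Hessian and the value of $\alpha$ are illustrative assumptions, not values from the paper.

```python
import numpy as np

def neumann_inverse_hvp(hvp, v, alpha, n_terms):
    """Approximate H^{-1} v with the truncated Neumann series
    alpha * sum_{j=0}^{n_terms} (I - alpha*H)^j v,
    using only Hessian-vector products hvp(u) = H u."""
    acc = v.copy()   # accumulates the series; starts with the j = 0 term
    cur = v.copy()   # current term (I - alpha*H)^j v
    for _ in range(n_terms):
        cur = cur - alpha * hvp(cur)  # apply (I - alpha*H) once more
        acc = acc + cur
    return alpha * acc

# Sanity check on a small explicit positive-definite "Hessian".
H = np.array([[2.0, 0.3], [0.3, 1.0]])
v = np.array([1.0, -1.0])
approx = neumann_inverse_hvp(lambda u: H @ u, v, alpha=0.4, n_terms=200)
exact = np.linalg.solve(H, v)  # agrees with the series to high precision
```

In practice the Hessian is never formed explicitly; `hvp` would be implemented with a double backward pass through the primary network.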
+
+# B EXPERIMENTAL DETAILS
+
+# B.1 CUB 200-2011
+
+Data. To examine the effect of varying training set sizes, we use all 5994 predefined training images according to the official split, and we split the predefined test set into 2897 samples for validation and 2897 for testing. All images were resized to $256 \times 256$ and Z-score normalized. During training, images were randomly cropped to 224 and flipped horizontally. Test images were center-cropped to 224. The same processing was applied in all fine-grained experiments.
+
+Training details for baselines. We fine-tuned a ResNet-18 (He et al., 2016) pre-trained on ImageNet (Deng et al., 2009), with a classification layer on top for all tasks. Because the scale of the auxiliary losses differed from that of the main task, we multiplied each auxiliary loss, in all compared methods, by a scaling factor $\tau = 0.1$ , chosen based on a grid search over $\{0.1, 0.3, 0.6, 1.0\}$ using the Equal baseline. We applied a grid search over the learning rate in $\{1e - 3, 1e - 4, 1e - 5\}$ and the weight decay in $\{5e - 3, 5e - 4, 5e - 5\}$ . For DWA (Liu et al., 2019b), we searched over the temperature in $\{0.5, 2, 5\}$ , and for GradNorm (Chen et al., 2018), over $\alpha$ in $\{0.3, 0.8, 1.5\}$ . The computational complexity of GSC (Du et al., 2018) grows with the number of tasks; as a result, we could run this baseline only in a setup with two loss terms: the main task and the sum of all auxiliary tasks. We ran each configuration with 3 different seeds for 100 epochs with the ADAM optimizer (Kingma and Ba, 2014) and used early stopping based on the validation set.
+
+The auxiliary set and auxiliary network. In our experiments, we found that allocating as few as 20 samples from the training set to the auxiliary set, and using an NN with 5 layers of 10 units each, yielded good performance for both deep linear and non-linear models. We found that our method was not sensitive to these design choices. We use a skip connection between the main loss $\ell_{\text{main}}$ and the overall loss term, and a Softplus activation.
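The design above can be sketched as follows in NumPy: a combiner $g(\ell;\phi)$ with 5 hidden layers of 10 units, Softplus activations, and a skip connection that adds the main loss to the output. The initialization scale and the absence of bias terms are simplifying assumptions for illustration, not details from our implementation.

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)  # numerically stable log(1 + exp(x))

class AuxiliaryNet:
    """Maps the vector of loss terms to a single training loss.
    The skip connection adds the main loss (losses[0]) directly to the
    output, so the network only models the auxiliary contribution."""
    def __init__(self, n_losses, width=10, depth=5, seed=0):
        rng = np.random.default_rng(seed)
        dims = [n_losses] + [width] * depth + [1]
        self.weights = [rng.normal(0.0, 0.1, size=(i, o))
                        for i, o in zip(dims[:-1], dims[1:])]

    def __call__(self, losses):
        h = losses
        for W in self.weights[:-1]:
            h = softplus(h @ W)  # smooth activation keeps the loss differentiable
        return losses[0] + (h @ self.weights[-1]).item()  # skip connection

g = AuxiliaryNet(n_losses=4)
total = g(np.array([1.2, 0.5, 0.3, 0.9]))  # losses[0] is the main-task loss
```

The skip connection guarantees the main loss always contributes to the total with weight one, so the learned network cannot silently discard it.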
+
+Optimization of the auxiliary network. In all variants of our method, the auxiliary network was optimized using SGD with 0.9 momentum. We applied grid search over the auxiliary network learning rate in $\{1e - 2, 1e - 3\}$ and weight decay in $\{1e - 5, 5e - 5\}$ . The total training time of all methods was 3 hours on a 16GB Nvidia V100 GPU.
+
+# B.2 NYUv2
+
+The data consists of 1449 RGB-D images, split into 795 train images and 654 test images. We further split the train set to allocate 79 images ( $10\%$ of training examples) to construct a validation set. Following (Liu et al., 2019b), we resize images to $288 \times 384$ pixels for training and evaluation and use a SegNet (Badrinarayanan et al., 2017) based architecture as the backbone.
+
+Similarly to (Liu et al., 2019b), we train the model for 200 epochs using the Adam optimizer (Kingma and Ba, 2014) with learning rate $1e - 4$ , and halve the learning rate after 100 epochs. We choose the best model with early stopping on a pre-allocated validation set. For DWA (Liu et al., 2019b) we set the temperature hyperparameter to 2, as in the NYUv2 experiment in (Liu et al., 2019b). For GradNorm (Chen et al., 2018) we set $\alpha = 1.5$ , the value used in (Chen et al., 2018) for the NYUv2 experiments. In all variants of our method, the auxiliary networks are optimized using SGD with momentum 0.9. We allocate $2.5\%$ of training examples to form an auxiliary set. We use grid search to tune the learning rate $\{1e - 3, 5e - 4, 1e - 4\}$ and weight decay $\{1e - 5, 1e - 4\}$ of the auxiliary networks. Here as well, we use a skip connection between the main loss $\ell_{main}$ and the overall loss term, and a Softplus activation.
+
+# B.3 LEARNING AUXILIARIES
+
+Multi-class classification datasets. On the CIFAR datasets, we train the model for 200 epochs using SGD with momentum 0.9, weight decay $5e - 4$ , and initial learning rates $1e - 1$ and $1e - 2$ for CIFAR10 and CIFAR100, respectively. For the SVHN experiment, we train for 50 epochs using SGD with momentum 0.9, weight decay $5e - 4$ , and initial learning rate $1e - 1$ . The learning rate is adjusted with a cosine annealing scheduler. We use a VGG-16 (Simonyan and Zisserman, 2014) based architecture for the CIFAR experiments, and a 4-layer ConvNet for the SVHN experiment. For the label-generating network of MAXL (Liu et al., 2019a), we tune the following hyperparameters: learning rate $\{1e - 3,5e - 4\}$ , weight decay $\{5e - 4,1e - 4,5e - 5\}$ , and entropy term weight $\{.2,.4,.6\}$ (see (Liu et al., 2019a) for details). We explore the same learning rates and weight decays for the auxiliary network in our method, and also tune the number of optimization steps between consecutive auxiliary parameter updates $\{5,15,25\}$ , and the size of the auxiliary set $\{1.5\%,2.5\% \}$ (of training examples). We choose the best model on the validation set and allow for early stopping.
+
+Fine-grained classification datasets. In the CUB experiments we use the same data and splits as described in Sections 5.2 and B.1. Oxford-IIIT Pet contains 7349 images of 37 species of dogs and cats. We use the official train-test split and pre-allocate $30\%$ of the training set for validation. As a result, the total numbers of train/validation/test images are 2576/1104/3669, respectively. Cars (Krause et al., 2013) contains 16,185 images of 196 car classes. We use the official train-test split and pre-allocate $30\%$ of the training set for validation. As a result, the total numbers of train/validation/test images are 5700/2444/8041, respectively. In all experiments we use ResNet-18 as the backbone network for both the primary and auxiliary networks. Importantly, the networks are not pre-trained. The task-specific (classification) heads in both the primary and auxiliary networks are implemented using a 2-layer NN with sizes 512 and $C$ , where $C$ is the number of labels (e.g., 200 for CUB and 37 for Oxford-IIIT Pet). In all experiments we use the same learning rate of $1e - 4$ and weight decay of $5e - 3$ , which worked best in a grid search on the STL baseline. For MAXL and AuxiLearn we applied a grid search over the auxiliary network learning rate and weight decay, as described in the multi-class classification subsection above. We tune the number of optimization steps between consecutive auxiliary parameter updates in $\{30,60\}$ for Oxford-IIIT Pet and $\{40,80\}$ for CUB and Cars. The auxiliary set size was tuned over $\{0.084\%, 1.68\%, 3.33\%\}$ with stratified sampling. For our method, we leverage the AuxiLearn module for combining auxiliaries, using a nonlinear network with either two or three hidden layers of size 10 (selected by grid search). The batch size was set to 64 in the CUB and Cars experiments and to 16 in the Oxford-IIIT Pet experiments. We ran each configuration with 3 different seeds for 150 epochs with the ADAM optimizer and used early stopping based on the validation set.
+
+# C ADDITIONAL EXPERIMENTS
+
+# C.1 IMPORTANCE OF AUXILIARY SET
+
+In this section we illustrate the importance of the auxiliary set, complementing our theoretical observation in Section 4. We repeat the experiment of Section 5.1, but this time we optimize the auxiliary parameters $\phi$ using the training data. Figure 5 shows how the tasks' weights change during training. The optimization procedure reduces to single-task learning, which badly hurts generalization (see Figure 2). These results are consistent with (Liu et al., 2019a), which added an entropy loss term to avoid diminishing auxiliary tasks.
+
+Figure 5: Optimizing task weights on the training set reduces to single-task learning.
+
+# C.2 MONOTONICITY
+
+As discussed in the main text, it is common practice to combine auxiliary losses as a convex combination. This is equivalent to parametrizing the function $g(\ell; \phi)$ as a linear combination over losses, $g(\ell; \phi) = \sum_{j=1}^{K} \phi_j \ell_j$ , with non-negative weights $\phi_j \geq 0$ . Under this parameterization, $g$ is a monotonic non-decreasing function of the losses, since $\partial g / \partial \ell_j = \phi_j \geq 0$ . The non-decreasing property means that the overall loss grows (or is left unchanged) with any increase in the auxiliary losses. As a result, an optimization procedure that minimizes the combined loss also operates in the direction of reducing (or leaving unchanged) the individual losses.
+
+A natural question is whether the function $g$ should preserve this behavior, i.e., be constrained to be non-decreasing w.r.t. the losses as well. Non-decreasing networks can "ignore" an auxiliary task by zeroing its corresponding loss, but cannot reverse the gradient of a task by negating its weight. While monotonicity is a very natural requirement, in some cases negative task weights (i.e., non-monotonicity) seem desirable, for instance if one wishes to "delete" input information not directly related to the task at hand (Alemi et al., 2017; Ganin and Lempitsky, 2015). For example, in domain adaptation, one might want to remove information that allows a discriminator to recognize the domain of a given sample (Ganin and Lempitsky, 2015). Empirically, we found training with monotonic non-decreasing networks to be more stable, with better or equivalent performance; see Table 4 for a comparison.
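One standard way to obtain such a monotonic non-decreasing network is to force every weight to be positive, e.g. through a Softplus reparameterization, while keeping non-decreasing activations; positive weights composed with non-decreasing functions yield a non-decreasing map. The sketch below is illustrative and not necessarily the exact parameterization used in our implementation.

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)  # strictly positive, so weights stay > 0

def monotone_mlp(losses, raw_weights):
    """Monotone non-decreasing combiner: reparameterizing every weight
    matrix through softplus makes all effective weights positive, and
    together with non-decreasing activations this guarantees
    d(output)/d(loss_j) >= 0 for every loss term."""
    h = losses
    for raw_W in raw_weights[:-1]:
        h = softplus(h @ softplus(raw_W))
    return (h @ softplus(raw_weights[-1])).item()

rng = np.random.default_rng(0)
raw = [rng.normal(size=(3, 8)), rng.normal(size=(8, 1))]
base = np.array([0.5, 0.2, 0.1])
bumped = base + np.array([0.0, 0.3, 0.0])   # increase one auxiliary loss
# monotone_mlp(bumped, raw) >= monotone_mlp(base, raw) always holds
```

Under this construction, increasing any single loss can never decrease the combined loss, which is exactly the non-decreasing property discussed above.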
+
+Table 4 compares monotonic and non-monotonic auxiliary networks in both the semi-supervised and the fully-supervised setting. Monotonic networks show a small but consistent improvement over non-monotonic ones. It is also worth mentioning that the non-monotonic networks were harder to stabilize.
+
+Table 4: CUB 200-2011: Monotonic vs non-monotonic test classification accuracy (± SEM) over three runs.
+
+| | | Top 1 | Top 3 |
+| --- | --- | --- | --- |
+| 5-shot | Non-Monotonic | 46.3 ± 0.32 | 67.46 ± 0.55 |
+| | Monotonic | 47.07 ± 0.10 | 68.25 ± 0.32 |
+| 10-shot | Non-Monotonic | 58.84 ± 0.04 | 77.67 ± 0.08 |
+| | Monotonic | 59.04 ± 0.22 | 78.08 ± 0.24 |
+| Full Dataset | Non-Monotonic | 74.74 ± 0.30 | 88.3 ± 0.23 |
+| | Monotonic | 74.92 ± 0.21 | 88.55 ± 0.17 |
+
+# C.3 NOISY AUXILIARIES
+
+We demonstrate the effectiveness of AuxiLearn in identifying helpful auxiliaries and ignoring harmful ones. Consider a regression problem with main task $y = \mathbf{w}^T\mathbf{x} + \epsilon$ , where $\epsilon \sim \mathcal{N}(0,\sigma^2)$ . We learn this task jointly with $K = 100$ auxiliaries of the form $y_{j} = \mathbf{w}^{T}\mathbf{x} + |\epsilon_{j}|$ , where $\epsilon_{j} \sim \mathcal{N}(0,j\cdot \sigma_{aux}^{2})$ for $j = 1,\dots,100$ . We take the absolute value of the noise so that the noisy labels are no longer unbiased estimates, making them even less helpful as the noise grows. We use a linear auxiliary network to weigh the loss terms. Figure 6 shows the learned weight for each task. The auxiliary network captures the noise pattern and assigns weights according to the noise level.
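This synthetic setup can be generated in a few lines of NumPy. The dimensions and noise scales $\sigma$, $\sigma_{aux}$ below are illustrative choices, not the exact values used in the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 1000, 10, 100          # samples, input dim, number of auxiliaries
sigma, sigma_aux = 0.1, 0.1      # noise scales (illustrative assumptions)

X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = X @ w + rng.normal(0.0, sigma, size=n)   # main-task labels

# Auxiliary labels: y_j = w^T x + |eps_j| with eps_j ~ N(0, j * sigma_aux^2).
# The absolute value makes the noise biased, so high-j tasks are doubly harmful.
Y_aux = np.stack(
    [X @ w + np.abs(rng.normal(0.0, np.sqrt(j) * sigma_aux, size=n))
     for j in range(1, K + 1)], axis=1)
```

A linear auxiliary network trained on these losses would then be expected to assign weights that decay with the task index $j$, mirroring Figure 6.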
+
+
+Figure 6: Learning with noisy labels: task ID is proportional to the label noise.
+
+# C.4 CUB SENSITIVITY ANALYSIS
+
+In this section, we provide further analysis of the experiments conducted on the CUB 200-2011 dataset in the 5-shot setup. We examine the sensitivity of a non-linear auxiliary network to the size of the auxiliary set and to the depth of the auxiliary network. In Figure 7a we test the effect of allocating (labeled) samples from the training set to the auxiliary set. Allocating between 10 and 50 samples results in similar performance, peaking at 20. The figure also shows that removing too many samples from the training set can be damaging. Nevertheless, even when allocating 200 labeled samples (out of 1000), our nonlinear method is still better than the best competitor, GSC (Du et al., 2018), which reached an accuracy of 42.57.
+
+Figure 7b shows how accuracy changes with the number of hidden layers. As expected, there is a positive trend: as we increase the number of layers, the expressivity of the auxiliary network increases and performance improves. However, making the auxiliary network too large may cause it to overfit the auxiliary set, as shown theoretically in Section 4 and empirically in (Lorraine et al., 2020).
+
+# C.5 LINEARLY WEIGHTED NON-LINEAR TERMS
+
+To further motivate the use of non-linear interactions between tasks, we train a linear auxiliary network over a polynomial kernel on the losses of the segmentation, depth estimation, and normal prediction tasks from the NYUv2 dataset. Figure 8 shows the learned loss weights. Two of the three largest weights at the end of training belong to non-linear terms, specifically $\text{Seg}^2$ and $\text{Seg} \cdot \text{Depth}$ . We also observe a scheduling effect: at the start of training (roughly the first 50 steps) the auxiliary network focuses on the auxiliary tasks, and afterwards it draws most of the attention of the primary network towards the main task.
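The feature map behind this experiment can be sketched as follows: expand the loss vector with all degree-2 monomials and weight the expanded vector linearly, so that cross terms such as $\text{Seg} \cdot \text{Depth}$ receive their own learnable weight. The helper below is an illustrative NumPy sketch, not our actual implementation, and the loss values are placeholders.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(losses, degree=2):
    """All monomials of the loss terms up to the given degree, e.g. for
    (seg, depth, normal): seg, depth, normal, seg^2, seg*depth,
    seg*normal, depth^2, depth*normal, normal^2."""
    feats = list(losses)
    for deg in range(2, degree + 1):
        for idx in combinations_with_replacement(range(len(losses)), deg):
            feats.append(np.prod([losses[i] for i in idx]))
    return np.array(feats)

losses = np.array([0.5, 0.2, 0.1])      # seg, depth, normal (placeholder values)
phi = poly_features(losses)              # 9 features for 3 losses at degree 2
weights = np.ones_like(phi) / len(phi)   # learned linear weights (placeholder)
total_loss = weights @ phi               # each cross term has its own weight
```

Because the combination is linear in the expanded features, the learned weight on each monomial can be read off directly, which is what Figure 8 visualizes.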
+
+(a) Effect of auxiliary set size. (b) Effect of depth.
+Figure 7: Mean test accuracy ($\pm$ SEM) averaged over 3 runs as a function of the number of samples in the auxiliary set (left) and the number of hidden layers (right). Results are on the 5-shot CUB 200-2011 dataset.
+
+Figure 8: Learned linear weights for a polynomial kernel on the loss terms of the segmentation, depth estimation, and normal prediction tasks from the NYUv2 dataset.
+
+# C.6 FIXED AUXILIARY
+
+Because we alternate between optimizing the primary network parameters and the auxiliary parameters, the weighting of the loss terms is updated during training; consequently, the loss landscape itself changes over the course of training. This effect is observed in the illustrative examples described in Section 5.1 and Section C.5, where the auxiliary network focuses on different tasks during different learning stages. Since the optimization is non-convex, the end result may depend not only on the final parameters but also on the loss landscape during the entire process.
+
+We examined this effect with the following setup on the 5-shot setting on CUB 200-2011 dataset: we trained a non-linear auxiliary network and saved the best model. Then we retrain with the same configuration, only this time, the auxiliary network is initialized using the best model, and is kept fixed. We repeat this using ten different random seeds, affecting the primary network initialization and data shuffling. As a result, we observed a drop of $6.7\%$ on average in the model performance with an std of $1.2\%$ (46.7% compared to $40\%$ ).
+
+# C.7 FULL CUB DATASET
+
+In Section 5.2 we evaluated AuxiLearn and the baseline models in a semi-supervised scenario with 5 or 10 labeled samples per class. For completeness, Table 5 shows test accuracy in the standard fully-supervised scenario. In this case the STL baseline achieves the highest top-1 test accuracy, while our nonlinear method is second on top-1 and first on top-3. Most baselines suffer from severe negative transfer due to the large number of auxiliary tasks (which are not needed in this case), while our method causes minimal performance degradation.
+
+Table 5: CUB 200-2011: Fully supervised test classification accuracy (± SEM) averaged over three runs.
+
+| | Top 1 | Top 3 |
+| --- | --- | --- |
+| STL | 75.2 ± 0.52 | 88.4 ± 0.36 |
+| Equal | 70.16 ± 0.10 | 86.87 ± 0.22 |
+| Uncertainty | 74.70 ± 0.56 | 88.21 ± 0.14 |
+| DWA | 69.88 ± 0.10 | 86.62 ± 0.20 |
+| GradNorm | 70.04 ± 0.21 | 86.63 ± 0.13 |
+| GSC | 71.30 ± 0.01 | 86.91 ± 0.28 |
+| AuxiLearn (ours) | | |
+| Linear | 70.97 ± 0.31 | 86.92 ± 0.08 |
+| Deep Linear | 73.6 ± 0.72 | 88.37 ± 0.21 |
+| Nonlinear | 74.92 ± 0.21 | 88.55 ± 0.17 |
+
+# C.8 CITYSCAPES
+
+Cityscapes (Cordts et al., 2016) is a high-quality urban-scene dataset. We use the data provided in (Liu et al., 2019b), with 2975 training and 500 test images. The data comprises four learning tasks: 19-class, 7-class, and 2-class semantic segmentation, and depth estimation. We use the 19-class semantic segmentation as the main task and all other tasks as auxiliaries. We allocate $10\%$ of the training data as a validation set for hyperparameter tuning and early stopping, and further allocate $2.5\%$ of the remaining training examples to construct the auxiliary set. All images are resized to $128 \times 256$ to speed up computation.
+
+We train a SegNet (Badrinarayanan et al., 2017) based model for 150 epochs using the Adam optimizer (Kingma and Ba, 2014) with learning rate $1e - 4$ , and halve the learning rate after 100 epochs. We search over weight decay in $\{1e - 4,1e - 5\}$ . We compare AuxiLearn to the same baselines used in Section 5.2 and search over the same hyperparameters as in the NYUv2 experiment. We set the DWA temperature to 2, as in (Liu et al., 2019b), and the GradNorm hyperparameter $\alpha$ to 1.5, as used in (Chen et al., 2018) for the NYUv2 experiments. The results are presented in Table 6. The ConvNet variant of the auxiliary network achieves the best performance in terms of both mIoU and pixel accuracy.
+
+Table 6: 19-class semantic segmentation test set results on Cityscapes, averaged over three runs (± SEM).
+
+| | mIoU | Pixel acc. |
+| --- | --- | --- |
+| STL | 30.18 ± 0.04 | 87.08 ± 0.18 |
+| Equal | 30.45 ± 0.14 | 87.14 ± 0.08 |
+| Uncertainty | 30.49 ± 0.21 | 86.89 ± 0.07 |
+| DWA | 30.79 ± 0.32 | 86.97 ± 0.26 |
+| GradNorm | 30.62 ± 0.03 | 87.15 ± 0.04 |
+| GSC | 30.32 ± 0.23 | 87.02 ± 0.12 |
+| AuxiLearn (ours) | | |
+| Linear | 30.63 ± 0.19 | 86.88 ± 0.03 |
+| Nonlinear | 30.85 ± 0.19 | 87.19 ± 0.20 |
+| ConvNet | 30.99 ± 0.05 | 87.21 ± 0.11 |
+
+# C.9 LEARNING SEGMENTATION AUXILIARY FOR 3D POINT CLOUDS
+
+Recently, several methods were proposed for learning auxiliary tasks on point clouds (Achituve et al., 2020; Hassani and Haley, 2019; Sauder and Sievers, 2019); however, this domain is still largely unexplored, and it is not clear in advance which auxiliary tasks will be beneficial. It is therefore desirable to automate this process, even at the cost of some performance degradation compared to human-designed methods.
+
+We further evaluate our method on the task of generating helpful auxiliary tasks for 3D point-cloud data, extending the use of AuxiLearn to segmentation tasks. In Section 5.4 we trained an auxiliary network to output soft auxiliary labels for a classification task. Here, we use a similar approach, assigning a soft label vector to each point. We then train the primary network on the main task together with the auxiliary task of segmenting each point based on the learned labels.
+
+Table 7: Learning an auxiliary segmentation task. Test mean IoU on the ShapeNet part dataset, averaged over three runs (± SEM), 30-shot setting.
+
+| | Mean | Airplane | Bag | Cap | Car | Chair | Earphone | Guitar | Knife | Lamp | Laptop | Motorbike | Mug | Pistol | Rocket | Skateboard | Table |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Num. samples | 2874 | 341 | 14 | 11 | 158 | 704 | 14 | 159 | 80 | 286 | 83 | 51 | 38 | 44 | 12 | 31 | 848 |
+| STL | 75.6 | 68.7 | 82.9 | 85.2 | 65.6 | 82.3 | 70.2 | 86.1 | 75.1 | 68.4 | 94.3 | 55.1 | 91.0 | 72.6 | 60.2 | 72.3 | 74.2 |
+| DAE | 74.0 | 66.6 | 77.6 | 79.1 | 60.5 | 81.2 | 73.8 | 87.1 | 77.0 | 65.4 | 93.6 | 51.8 | 88.4 | 74.0 | 55.4 | 68.4 | 72.7 |
+| DefRec | 74.6 | 68.6 | 81.2 | 83.8 | 63.6 | 82.1 | 72.9 | 86.9 | 72.7 | 69.4 | 93.4 | 51.8 | 89.7 | 72.0 | 57.2 | 70.5 | 71.7 |
+| RS | 76.5 | 69.7 | 79.1 | 85.9 | 64.9 | 83.8 | 68.4 | 82.8 | 79.4 | 70.7 | 94.5 | 58.9 | 91.8 | 72.0 | 53.4 | 70.3 | 75.0 |
+| AuxiLearn | 76.2 | 68.9 | 78.3 | 83.6 | 64.9 | 83.4 | 69.7 | 87.4 | 80.7 | 68.3 | 94.6 | 53.2 | 92.1 | 73.7 | 61.6 | 72.4 | 74.6 |
+
+We evaluated the above approach on a part-segmentation task using the ShapeNet part dataset (Yi et al., 2016). This dataset contains 16,881 3D shapes from 16 object categories (including Airplane, Bag, and Lamp), annotated with a total of 50 parts (at most 6 parts per object). The main task is to predict a part label for each point. We follow the official train/val/test split of (Chang et al., 2015). We also follow the standard experimental setup in the literature, which assumes known object category labels during segmentation of a shape (see, e.g., (Qi et al., 2017; Wang et al., 2019)). During training we uniformly sample 1024 points from each shape and ignore point normals; during evaluation we use all points of a shape. For all methods (ours and the baselines) we used the DGCNN architecture (Wang et al., 2019) as the backbone feature extractor and for part segmentation. We evaluated performance using point Intersection-over-Union (IoU), following (Qi et al., 2017).
+
+We compared AuxiLearn with the following baselines: (1) Single Task Learning (STL): training with the main task only; (2) DefRec: an auxiliary task of reconstructing a shape with a deformed region (Achituve et al., 2020); (3) Reconstructing Spaces (RS): an auxiliary task of reconstructing a shape from a shuffled version of it (Sauder and Sievers, 2019); and (4) Denoising Auto-encoder (DAE): an auxiliary task of reconstructing a point cloud perturbed with i.i.d. noise from $\mathcal{N}(0,0.01)$ .
+
+We performed a hyperparameter search over the primary network learning rate in $\{1e - 3,1e - 4\}$ , weight decay in $\{5e - 5,1e - 5\}$ , and weight ratio between the main and auxiliary task in $\{1:1,1:0.5,1:0.25\}$ . We trained each method for 150 epochs using the Adam optimizer with a cosine scheduler, and applied early stopping based on the mean IoU on the validation set. We ran each configuration with 3 different seeds and report the average mean IoU along with the SEM. We used the segmentation network proposed in (Wang et al., 2019), except that the network was not supplied with the object label as input.
+
+For AuxiLearn, we used a smaller version of PointNet (Qi et al., 2017), without the input and feature transform layers, as the auxiliary network. We selected PointNet because it is lightweight and therefore a good fit in our case. We learned a different auxiliary task for each object category (with 6 classes per category), since this showed better results. We performed a hyperparameter search over the auxiliary network learning rate in $\{1e - 2,1e - 3\}$ and weight decay in $\{5e - 3,5e - 4\}$ . Two training samples from each class were allocated to the auxiliary set.
+
+Table 7 shows the mean IoU per category when training with only 30 segmented point clouds per object category (480 in total). As can be seen, AuxiLearn's performance is close to that of RS (Sauder and Sievers, 2019) and improves upon the other baselines. This shows that, in this case, our method generates useful auxiliary tasks whose gains are similar to or better than those of auxiliaries designed by humans.
+
+# C.10 LEARNING AN AUXILIARY CLASSIFIER
+
+In Section 5.4 we showed how AuxiLearn learns a novel auxiliary task to improve upon baseline methods. For the fine-grained classification experiments, we used only 30 samples per class. Here we also compare AuxiLearn with the baseline methods when there are only 15 images per class. Table 8 shows that AuxiLearn is superior to the baseline methods in this setup as well, even though it requires allocating some samples from the training data to the auxiliary set.
+
+To further examine the effect of learning a novel auxiliary task with varying training set sizes, we provide additional experiments on the CIFAR10 dataset. We evaluate the methods with $10\%$ , $15\%$ , and $100\%$ of the training examples. The results are presented in Table 9. As expected, learning with auxiliaries is mostly helpful in the low-data regime. Nonetheless, AuxiLearn improves over single-task learning and MAXL for all training set sizes.
+
+Table 8: Learning an auxiliary task. Test accuracy averaged over three runs (± SEM), 15-shot setting.
+
+| | CUB | Pet |
+| --- | --- | --- |
+| STL | 22.6 ± 0.2 | 13.6 ± 0.7 |
+| MAXL-F | 24.2 ± 0.7 | 14.1 ± 0.1 |
+| MAXL | 24.2 ± 0.8 | 14.2 ± 0.2 |
+| AuxiLearn | 26.1 ± 0.7 | 18.0 ± 0.9 |
+
+Table 9: CIFAR10 test accuracy averaged over three runs (± SEM).
+
+| | 10% | 15% | 100% |
+| --- | --- | --- | --- |
+| STL | 72.63 ± 2.14 | 80.30 ± 0.09 | 93.36 ± 0.05 |
+| MAXL | 75.85 ± 0.32 | 81.37 ± 0.26 | 93.49 ± 0.02 |
+| AuxiLearn | 76.75 ± 0.08 | 81.42 ± 0.30 | 93.54 ± 0.05 |
+
+# D THEORETICAL CONSIDERATIONS
+
+In this section, we discuss the theoretical limitations of AuxiLearn. First, we discuss the smoothness of our loss criterion while learning to combine losses using DNNs. Next, we present limitations that may arise from utilizing the IFT and their resolution. Finally, we discuss the approximations made for achieving an efficient optimization procedure.
+
+Smoothness of the loss criterion. When learning to combine losses as described in Section 3.2, one must take into account the smoothness of the learned loss criterion as a function of $W$ . This limits, at least in theory, the design choices for the auxiliary network. In our experiments we use a smooth activation function, namely Softplus, to ensure the existence of $\partial \mathcal{L}_T / \partial W$ . Nonetheless, using a non-smooth activation (e.g., ReLU) results in a piecewise-smooth loss function and hence might still work well in practice.
+
+Assumptions for IFT. One assumption for applying the IFT as described in Section 3.4 is that $\mathcal{L}_T$ is continuously differentiable w.r.t. the auxiliary and primary parameters. This assumption limits the design choices for both the auxiliary and the primary networks; for instance, one must use only smooth activation functions. However, many non-smooth components can be replaced with smooth counterparts. For example, ReLU can be replaced with Softplus, since $ReLU(x) = \lim_{\alpha \to \infty}\ln (1 + \exp (\alpha x)) / \alpha$ , and the beneficial effects of Batch-Normalization can be captured with Weight-Normalization, as argued in (Salimans and Kingma, 2016).
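The pointwise limit above is easy to verify numerically: the scaled Softplus approaches ReLU as $\alpha$ grows, with maximum gap $\ln 2 / \alpha$ attained at $x = 0$. A short sketch:

```python
import numpy as np

def scaled_softplus(x, alpha):
    # ln(1 + exp(alpha * x)) / alpha, computed stably via logaddexp
    return np.logaddexp(0.0, alpha * x) / alpha

x = np.linspace(-3.0, 3.0, 7)
relu = np.maximum(x, 0.0)
gap_1 = np.max(np.abs(scaled_softplus(x, 1.0) - relu))    # ln(2) at x = 0
gap_50 = np.max(np.abs(scaled_softplus(x, 50.0) - relu))  # ln(2)/50
# The gap shrinks like ln(2)/alpha, so the smooth surrogate converges to ReLU.
```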
+
+For the setup of learning to combine losses, we use the above substitutes, namely Softplus and Weight Normalization; however, for the setup of learning a novel auxiliary task, we share the architecture between the primary and auxiliary networks (e.g., ResNet-18). While using non-smooth components may, in theory, cause issues, we show empirically through extensive experiments that AuxiLearn performs well in practice and its optimization is stable. Furthermore, we note that while ReLUs are non-smooth, they are piecewise smooth, hence the set of non-smooth points has measure zero.
+
+Approximations. Our optimization procedure relies on several approximations to efficiently solve the complex bi-level optimization problem. The trade-off between computational efficiency and approximation accuracy can be controlled by (i) the number of terms in the Neumann series, and (ii) the number of optimization steps between auxiliary parameter updates. While we cannot guarantee that the bi-level optimization process converges, empirically we observe a stable optimization process.
+
+Our work builds on previous studies in the field of hyperparameter optimization (Lorraine et al., 2020; Pedregosa, 2016). Lorraine et al. (2020) provide an error analysis for both approximations in a setup where the exact Hessian can be evaluated in closed form. We refer the reader to Pedregosa (2016) for theoretical analysis and results regarding the second approximation (i.e., sub-optimality of the inner optimization problem in Eq. 2).
\ No newline at end of file
diff --git a/auxiliarylearningbyimplicitdifferentiation/images.zip b/auxiliarylearningbyimplicitdifferentiation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..27402e7488a852390077c9fc90837cf4bcd3e04f
--- /dev/null
+++ b/auxiliarylearningbyimplicitdifferentiation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c4788182c79fd99b7cdfe419a3ec4ccb809dcca8585bf41d15e99599f68d7e3f
+size 542032
diff --git a/auxiliarylearningbyimplicitdifferentiation/layout.json b/auxiliarylearningbyimplicitdifferentiation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..53ec9d15448a69600f462c5bc95e53cc56e4ee19
--- /dev/null
+++ b/auxiliarylearningbyimplicitdifferentiation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3d8ac08e247db4e08c9cab67edafd221aa273c3206d0244cbb362e1d9bb572e6
+size 685069
diff --git a/averagecaseaccelerationforbilineargamesandnormalmatrices/a2131a7c-1207-4a46-87c5-3883e655b7ea_content_list.json b/averagecaseaccelerationforbilineargamesandnormalmatrices/a2131a7c-1207-4a46-87c5-3883e655b7ea_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..77a2747bef47bcc70af9f6e632408a174c0728ec
--- /dev/null
+++ b/averagecaseaccelerationforbilineargamesandnormalmatrices/a2131a7c-1207-4a46-87c5-3883e655b7ea_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab5b26bc84db76dfc3f77cdca5b2478a35fbfbf9b050067c7d59ced04bccb6f3
+size 171403
diff --git a/averagecaseaccelerationforbilineargamesandnormalmatrices/a2131a7c-1207-4a46-87c5-3883e655b7ea_model.json b/averagecaseaccelerationforbilineargamesandnormalmatrices/a2131a7c-1207-4a46-87c5-3883e655b7ea_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1689484ace280bea4eb9ec7c45448e550e839107
--- /dev/null
+++ b/averagecaseaccelerationforbilineargamesandnormalmatrices/a2131a7c-1207-4a46-87c5-3883e655b7ea_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc33f12e807d4e7a948be6d22075d16afa899115406ab55495da781b4fbbd197
+size 193352
diff --git a/averagecaseaccelerationforbilineargamesandnormalmatrices/a2131a7c-1207-4a46-87c5-3883e655b7ea_origin.pdf b/averagecaseaccelerationforbilineargamesandnormalmatrices/a2131a7c-1207-4a46-87c5-3883e655b7ea_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9977d1aa278004142bb30ed2c49a04a76ec380ea
--- /dev/null
+++ b/averagecaseaccelerationforbilineargamesandnormalmatrices/a2131a7c-1207-4a46-87c5-3883e655b7ea_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:236d4d567420c1fb4f688eea4d7ab48979b03467fbe4445661dff825126d1acf
+size 533883
diff --git a/averagecaseaccelerationforbilineargamesandnormalmatrices/full.md b/averagecaseaccelerationforbilineargamesandnormalmatrices/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b2610e966b2682ba1dd7087cde0cbaf812a71692
--- /dev/null
+++ b/averagecaseaccelerationforbilineargamesandnormalmatrices/full.md
@@ -0,0 +1,999 @@
+# AVERAGE-CASE ACCELERATION FOR BILINEAR GAMES AND NORMAL MATRICES
+
+Carles Domingo-Enrich
+
+Computer Science Department
+
+Courant Institute of Mathematical Sciences
+
+New York University
+
+New York, NY 10012, USA
+
+cd2754@nyu.edu
+
+Fabian Pedregosa
+
+Google Research
+
+pedregosa@google.com
+
+Damien Scieur
+
+Samsung SAIT AI Lab & Mila
+
+Montreal, Canada
+
+damien.scieur@gmail.com
+
+# ABSTRACT
+
+Advances in generative modeling and adversarial learning have given rise to renewed interest in smooth games. However, the absence of symmetry in the matrix of second derivatives poses challenges that are not present in the classical minimization framework. While a rich theory of average-case analysis has been developed for minimization problems, little is known in the context of smooth games. In this work we take a first step towards closing this gap by developing average-case optimal first-order methods for a subset of smooth games. We make the following three main contributions. First, we show that for zero-sum bilinear games the average-case optimal method is the optimal method for the minimization of the Hamiltonian. Second, we provide an explicit expression for the optimal method corresponding to normal matrices, potentially non-symmetric. Finally, we specialize it to matrices with eigenvalues located in a disk and show a provable speed-up compared to worst-case optimal algorithms. We illustrate our findings through numerical simulations with a varying degree of mismatch with our assumptions.
+
+# 1 INTRODUCTION
+
+The traditional analysis of optimization algorithms is a worst-case analysis (Nemirovski, 1995; Nesterov, 2004). This type of analysis provides a complexity bound for any input from a function class, no matter how unlikely. However, since hard-to-solve inputs might rarely occur in practice, the worst-case complexity bounds might not be representative of the observed running time.
+
+A more representative analysis is given by the average-case complexity, averaging the algorithm's complexity over all possible inputs. This analysis is standard for analyzing, e.g., sorting (Knuth, 1997) and cryptography algorithms (Katz & Lindell, 2014). Recently, a line of work (Berthier et al., 2020; Pedregosa & Scieur, 2020; Lacotte & Pilanci, 2020; Paquette et al., 2020) focused on optimal methods for the optimization of quadratics, specified by a symmetric matrix. While worst-case analysis uses bounds on the matrix eigenvalues to yield upper and lower bounds on convergence, average-case analysis relies on the expected distribution of eigenvalues and provides algorithms with sharp optimal convergence rates. While the algorithms developed in this context have been shown to be efficient for minimization problems, these have not been extended to smooth games.
+
+A different line of work considers algorithms for smooth games but studies worst-case optimal methods (Azizian et al., 2020). In this work, we combine average-case analysis with smooth games, and develop novel average-case optimal algorithms for finding the root of a linear system determined by a (potentially non-symmetric) normal matrix. We make the following main contributions:
+
+1. Inspired by the problem of finding equilibria in smooth games, we develop average-case optimal algorithms for finding the root of a non-symmetric affine operator, both under a normality assumption (Thm. 4.1), and under the extra assumption that eigenvalues of the operator are supported in a disk (Thm. 4.2). The proposed method shows a polynomial speedup compared to the worst-case optimal method, verified by numerical simulations.
+2. We make a novel connection between average-case optimal methods for optimization and average-case optimal methods for bilinear games. In particular, we show that solving the Hamiltonian using an average-case optimal method is optimal for bilinear games (Theorem 3.1). This result complements Azizian et al. (2020), who proved that the Polyak heavy ball algorithm on the Hamiltonian is asymptotically worst-case optimal for bilinear games.
+
+# 2 AVERAGE-CASE ANALYSIS FOR NORMAL MATRICES
+
+In this paper we consider the following class of problems.
+
+Definition 1. Let $\mathbf{A} \in \mathbb{R}^{d \times d}$ be a real matrix and $\mathbf{x}^{\star} \in \mathbb{R}^{d}$ a vector. The non-symmetric (affine) operator (NSO) problem is defined as:
+
+$$
+\text{Find } \boldsymbol{x}: \quad F(\boldsymbol{x}) \stackrel{\text{def}}{=} \boldsymbol{A}(\boldsymbol{x} - \boldsymbol{x}^{\star}) = \mathbf{0}. \tag{NSO}
+$$
+
+This problem generalizes the minimization of a convex quadratic function $f$, since we can cast the latter in this framework by setting the operator $F = \nabla f$. The set of solutions is an affine subspace that we will denote $\mathcal{X}^{\star}$. We will find it convenient to consider the distance to this set, defined as
+
+$$
+\operatorname{dist}(\boldsymbol{x}, \mathcal{X}^{\star}) \stackrel{\text{def}}{=} \min_{\boldsymbol{v} \in \mathcal{X}^{\star}} \|\boldsymbol{x} - \boldsymbol{v}\|^{2}, \quad \text{with } \mathcal{X}^{\star} = \left\{\boldsymbol{x} \in \mathbb{R}^{d} \mid \boldsymbol{A}\left(\boldsymbol{x} - \boldsymbol{x}^{\star}\right) = \boldsymbol{0}\right\}. \tag{1}
+$$
+
+In this paper we will develop average-case optimal methods. For this, we consider $\mathbf{A}$ and $\boldsymbol{x}^{\star}$ to be random variables, together with a random initialization $\boldsymbol{x}_0$. This induces a probability distribution over NSO problems, and we seek methods that have optimal expected suboptimality w.r.t. this distribution. Denoting $\mathbb{E}_{(\mathbf{A},\boldsymbol{x}^{\star},\boldsymbol{x}_0)}$ the expectation over these random problems, average-case optimal methods verify the following property at each iteration $t$:
+
+$$
+\min_{\boldsymbol{x}_{t}} \mathbb{E}_{\left(\boldsymbol{A}, \boldsymbol{x}^{\star}, \boldsymbol{x}_{0}\right)} \operatorname{dist}\left(\boldsymbol{x}_{t}, \mathcal{X}^{\star}\right) \quad \text{s.t. } \boldsymbol{x}_{i} \in \boldsymbol{x}_{0} + \operatorname{span}\left(\left\{F\left(\boldsymbol{x}_{j}\right)\right\}_{j=0}^{i-1}\right), \ \forall i \in [1:t]. \tag{2}
+$$
+
+The last condition on $\boldsymbol{x}_t$ stems from restricting the class of algorithms to first-order methods. The class of first-order methods encompasses many known schemes such as gradient descent with momentum, or full-matrix AdaGrad. However, methods such as Adam (Kingma & Ba, 2015) or diagonal AdaGrad (Duchi et al., 2011) are not in this class, as the diagonal re-scaling creates iterates $\boldsymbol{x}_t$ outside the span of previous gradients. Although we will focus on the distance to the solution, the results can be extended to other convergence criteria such as $\| F(\boldsymbol{x}_t)\|^2$ .
+
+Finally, note that the expectations in this paper are on the problem instance and not on the randomness of the algorithm.
+
+# 2.1 ORTHOGONAL RESIDUAL POLYNOMIALS AND FIRST-ORDER METHODS
+
+The analysis of first-order methods simplifies through the use of polynomials. This section provides the tools required to leverage this connection.
+
+Definition 2. A residual polynomial is a polynomial $P$ that satisfies $P(0) = 1$ .
+
+Proposition 2.1. (Hestenes et al., 1952) If the sequence $(\pmb{x}_t)_{t\in \mathbb{Z}_+}$ is generated by a first-order method, then there exist residual polynomials $P_{t}$ , each one of degree at most $t$ , verifying
+
+$$
+\boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star} = P _ {t} (\boldsymbol {A}) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right). \tag {3}
+$$
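
As a concrete instance of Proposition 2.1, plain gradient descent with step size $h$ on (NSO) corresponds to the residual polynomials $P_t(\lambda) = (1 - h\lambda)^t$. A quick numerical check (illustrative NumPy sketch, not from the paper; sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, t = 5, 0.1, 7
A = rng.standard_normal((d, d))
x_star = rng.standard_normal(d)
x = x0 = rng.standard_normal(d)

# Run t steps of gradient descent on F(x) = A(x - x_star).
for _ in range(t):
    x = x - h * A @ (x - x_star)

# Residual-polynomial form: x_t - x_star = P_t(A)(x_0 - x_star)
# with P_t(lambda) = (1 - h*lambda)^t, i.e. P_t(A) = (I - hA)^t.
Pt_A = np.linalg.matrix_power(np.eye(d) - h * A, t)
```

Note that $P_t(0) = 1$, as Definition 2 requires.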
+
+As we will see, optimal average-case methods are strongly related to orthogonal polynomials. We first define the inner product between polynomials, where we use $z^*$ for the complex conjugate of $z \in \mathbb{C}$.
+
+Definition 3. For $P, Q \in \mathbb{R}[X]$ , we define the inner product $\langle \cdot, \cdot \rangle_{\mu}$ for a measure $\mu$ over $\mathbb{C}$ as
+
+$$
+\langle P, Q \rangle_{\mu} \stackrel{\text{def}}{=} \int_{\mathbb{C}} P(\lambda) Q(\lambda)^{*} \,\mathrm{d}\mu(\lambda). \tag{4}
+$$
+
+Definition 4. A sequence of polynomials $\{P_i\}$ is orthogonal (resp. orthonormal) w.r.t. $\langle \cdot, \cdot \rangle_{\mu}$ if
+
+$$
+\langle P_{i}, P_{i} \rangle_{\mu} > 0 \ (\text{resp.} = 1); \quad \langle P_{i}, P_{j} \rangle_{\mu} = 0 \ \text{if } i \neq j.
+$$
+
+# 2.2 EXPECTED SPECTRAL DISTRIBUTION
+
+Following (Pedregosa & Scieur, 2020), we make the following assumption on the problem family.
+
+Assumption 1. $\pmb{x}_0 - \pmb{x}^\star$ is independent of $\pmb{A}$ , and $\mathbb{E}_{(\pmb{x}_0, \pmb{x}^\star)}[(\pmb{x}_0 - \pmb{x}^\star)(\pmb{x}_0 - \pmb{x}^\star)^\top] = \frac{R^2}{d}\pmb{I}_d$ .
+
+We will also require the following definitions to characterize the difficulty of a problem class. Let $\{\lambda_1,\dots,\lambda_d\}$ be the eigenvalues of a matrix $\boldsymbol{A}\in \mathbb{R}^{d\times d}$. We define the empirical spectral distribution of $\boldsymbol{A}$ as the probability measure
+
+$$
+\hat{\mu}_{\boldsymbol{A}}(\lambda) \stackrel{\text{def}}{=} \frac{1}{d} \sum_{i=1}^{d} \delta_{\lambda_{i}}(\lambda), \tag{5}
+$$
+
+where $\delta_{\lambda_i}$ is the Dirac delta, a distribution equal to zero everywhere except at $\lambda_i$ and whose total integral is equal to one. Note that with this definition, $\int_{\mathcal{D}}\mathrm{d}\hat{\mu}_A(\lambda)$ corresponds to the proportion of eigenvalues in $\mathcal{D}$.
+
+When $\mathbf{A}$ is a matrix-valued random variable, $\hat{\mu}_{\mathbf{A}}$ is a measure-valued random variable. As such, we can define its expected spectral distribution
+
+$$
+\mu_{\mathbf{A}} \stackrel{\text{def}}{=} \mathbb{E}_{\mathbf{A}}[\hat{\mu}_{\mathbf{A}}], \tag{6}
+$$
+
+which by the Riesz representation theorem is the measure that verifies $\int f \,\mathrm{d}\mu_{\mathbf{A}} = \mathbb{E}_{\mathbf{A}}[\int f \,\mathrm{d}\hat{\mu}_{\mathbf{A}}]$ for all measurable $f$. Surprisingly, the expected spectral distribution is the only characteristic required to design optimal algorithms in the average case.
+
+# 2.3 EXPECTED ERROR OF FIRST-ORDER METHODS
+
+In this section we provide an expression for the expected convergence in terms of the residual polynomial and the expected spectral distribution introduced in the previous section. To go further in the analysis, we have to assume that $A$ is a normal matrix.
+
+Assumption 2. The (real) random matrix $\mathbf{A}$ is normal, that is, it verifies $\mathbf{A}\mathbf{A}^{\top} = \mathbf{A}^{\top}\mathbf{A}$ .
+
+Normality is equivalent to $\mathbf{A}$ having the spectral decomposition $\mathbf{A} = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^{*}$ , where $\mathbf{U}$ is unitary, i.e., $\mathbf{U}^{*}\mathbf{U} = \mathbf{U}\mathbf{U}^{*} = \mathbf{I}$ . We now have everything to write the expected error of a first-order algorithm applied to (NSO).
+
+Theorem 2.1. Consider the application of a first-order method associated to the sequence of polynomials $\{P_t\}$ (Proposition 2.1) on the problem (NSO). Let $\mu$ be the expected spectral distribution of $A$ . Under Assumptions 1 and 2, we have
+
+$$
+\mathbb{E}\left[\operatorname{dist}\left(\boldsymbol{x}_{t}, \mathcal{X}^{\star}\right)\right] = R^{2} \int_{\mathbb{C} \backslash \{0\}} \left|P_{t}\right|^{2} \mathrm{d}\mu. \tag{7}
+$$
+
+Before designing optimal algorithms for specific distributions, we compare our setting with the average-case acceleration framework for minimization problems of Pedregosa & Scieur (2020), who proposed optimal optimization algorithms in the average case.
+
+# 2.4 DIFFICULTIES OF FIRST-ORDER METHODS ON GAMES AND RELATED WORK
+
+This section compares our contribution with the existing framework of average-case optimal methods for quadratic minimization problems.
+
+Definition 5. Let $\pmb{H} \in \mathbb{R}^{d \times d}$ be a random symmetric positive-definite matrix and $\pmb{x}^{\star} \in \mathbb{R}^{d}$ a random vector. These elements determine the following random quadratic minimization problem
+
+$$
+\min_{\boldsymbol{x} \in \mathbb{R}^{d}} \left\{f(\boldsymbol{x}) \stackrel{\text{def}}{=} \frac{1}{2}\left(\boldsymbol{x} - \boldsymbol{x}^{\star}\right)^{\top} \boldsymbol{H}\left(\boldsymbol{x} - \boldsymbol{x}^{\star}\right)\right\}. \tag{OPT}
+$$
+
+As in our paper, Pedregosa & Scieur (2020) find deterministic first-order algorithms that are optimal in expectation w.r.t. the matrix $\pmb{H}$, the solution $\pmb{x}^{\star}$, and the initialization $\pmb{x}_0$. Since they work with problem (OPT), their problem is equivalent to (NSO) with the matrix $\boldsymbol{A} = \boldsymbol{H}$. However, they make the stronger assumption that the matrix is symmetric, which implies normality. The normality assumption is restrictive in the context of game theory, as normal matrices do not always arise naturally in such applications. However, this class is expressive enough to cover interesting cases, such as bilinear games, and our experiments show that our findings are also consistent with non-normal matrices.
+
+Using orthogonal residual polynomials and spectral distributions, they derive an explicit formula for the expected error. Their result is similar to Theorem 2.1, but the major difference is the domain of the integral: the positive real line in convex optimization, but a region of the complex plane in our case. This region plays a crucial role in the rate of convergence of first-order algorithms, as shown in the work of Azizian et al. (2020); Bollapragada et al. (2018).
+
+In the case of optimization methods, they show that average-case optimal schemes follow a simple three-term recurrence arising from the three-term recurrence for residual orthogonal polynomials w.r.t. the measure $\lambda \mu(\lambda)$. Indeed, by Theorem 2.1 the optimal method corresponds to the residual polynomial minimizing $\langle P,P\rangle_{\mu}$, and the following result holds:
+
+Theorem 2.2. (Fischer, 1996, §2.4) When $\mu$ is supported on the real line, the residual polynomial of degree $t$ minimizing $\langle P, P \rangle_{\mu}$ is given by the degree-$t$ residual orthogonal polynomial w.r.t. $\lambda \mu(\lambda)$.
+
+However, the analogous result does not hold for general measures in $\mathbb{C}$ , and hence our arguments will make use of the following Theorem 2.3 instead, which links the residual polynomial of degree at most $t$ that minimizes $\langle P, P \rangle_{\mu}$ to the sequence of orthonormal polynomials for $\mu$ .
+
+Theorem 2.3. [Theorem 1.4 of Assche (1997)] Let $\mu$ be a positive Borel measure in the complex plane. The minimum of the integral $\int_{\mathbb{C}}|P(\lambda)|^2\,\mathrm{d}\mu(\lambda)$ over residual polynomials $P$ of degree at most $t$ is uniquely attained by the polynomial
+
+$$
+P^{\star}(\lambda) = \frac{\sum_{k=0}^{t} \phi_{k}(\lambda) \phi_{k}(0)^{*}}{\sum_{k=0}^{t} |\phi_{k}(0)|^{2}}, \quad \text{with optimal value } \int_{\mathbb{C}} |P^{\star}(\lambda)|^{2} \,\mathrm{d}\mu(\lambda) = \frac{1}{\sum_{k=0}^{t} |\phi_{k}(0)|^{2}}, \tag{8}
+$$
+
+where $(\phi_k)_k$ is the orthonormal sequence of polynomials with respect to the inner product $\langle \cdot, \cdot \rangle_{\mu}$ .
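
Theorem 2.3 can be checked numerically on a discrete measure: Gram-Schmidt on the monomials yields the orthonormal sequence $(\phi_k)_k$, from which Eq. (8) gives the optimal residual polynomial. The toy measure below is our illustrative choice, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Discrete toy measure: equal mass at 8 complex points, centered away
# from the origin so that phi_k(0) != 0.
z = 1.0 + 0.5 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
w = np.full(8, 1.0 / 8)

def inner(p, q):
    """<P, Q>_mu for coefficient arrays (lowest degree first)."""
    return np.sum(w * np.polyval(p[::-1], z) * np.conj(np.polyval(q[::-1], z)))

# Gram-Schmidt on the monomials 1, lambda, lambda^2, ...
t = 3
phis = []
for k in range(t + 1):
    p = np.zeros(k + 1, dtype=complex)
    p[k] = 1.0
    for q in phis:
        p[: len(q)] -= inner(p, q) * q
    p /= np.sqrt(inner(p, p).real)
    phis.append(p)

# Optimal residual polynomial of Eq. (8).
denom = sum(abs(q[0]) ** 2 for q in phis)  # sum_k |phi_k(0)|^2
p_star = sum(np.conj(q[0]) * np.pad(q, (0, t + 1 - len(q))) for q in phis) / denom
opt_val = inner(p_star, p_star).real       # should equal 1 / denom
```

By construction $P^{\star}(0) = 1$, and any other residual polynomial attains a larger value of the integral.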
+
+In the next sections we consider cases where the optimal scheme is identifiable.
+
+# 3 AVERAGE-CASE OPTIMAL METHODS FOR BILINEAR GAMES
+
+We consider the problem of finding a Nash equilibrium of the zero-sum minimax game given by
+
+$$
+\min_{\boldsymbol{\theta}_{1}} \max_{\boldsymbol{\theta}_{2}} \ell\left(\boldsymbol{\theta}_{1}, \boldsymbol{\theta}_{2}\right) \stackrel{\text{def}}{=} \left(\boldsymbol{\theta}_{1} - \boldsymbol{\theta}_{1}^{\star}\right)^{\top} \boldsymbol{M}\left(\boldsymbol{\theta}_{2} - \boldsymbol{\theta}_{2}^{\star}\right). \tag{9}
+$$
+
+Let $\pmb{\theta}_1, \pmb{\theta}_1^* \in \mathbb{R}^{d_1}, \pmb{\theta}_2, \pmb{\theta}_2^* \in \mathbb{R}^{d_2}, \pmb{M} \in \mathbb{R}^{d_1 \times d_2}$ and $d \stackrel{\mathrm{def}}{=} d_1 + d_2$ . The vector field of the game (Balduzzi et al., 2018) is defined as $F(\pmb{x}) = \pmb{A}(\pmb{x} - \pmb{x}^*)$ , where
+
+$$
+F \left(\boldsymbol {\theta} _ {1}, \boldsymbol {\theta} _ {2}\right) = \left[ \begin{array}{c} \nabla_ {\boldsymbol {\theta} _ {1}} \ell \left(\boldsymbol {\theta} _ {1}, \boldsymbol {\theta} _ {2}\right) \\ - \nabla_ {\boldsymbol {\theta} _ {2}} \ell \left(\boldsymbol {\theta} _ {1}, \boldsymbol {\theta} _ {2}\right) \end{array} \right] = \underbrace {\left[ \begin{array}{c c} 0 & M \\ - M ^ {\top} & 0 \end{array} \right]} _ {= A} \left(\underbrace {\left[ \begin{array}{l} \boldsymbol {\theta} _ {1} \\ \boldsymbol {\theta} _ {2} \end{array} \right]} _ {= x} - \underbrace {\left[ \begin{array}{l} \boldsymbol {\theta} _ {1} ^ {\star} \\ \boldsymbol {\theta} _ {2} ^ {\star} \end{array} \right]} _ {= x ^ {*}}\right) = A \left(x - x ^ {\star}\right). \tag {10}
+$$
+
+As before, $\mathcal{X}^{\star}$ denotes the set of points $\pmb{x}$ such that $F(\pmb{x}) = 0$, which coincides with the set of Nash equilibria. If $\pmb{M}$ is sampled independently from $\pmb{x}_0, \pmb{x}^\star$ and $\pmb{x}_0 - \pmb{x}^\star$ has covariance $\frac{R^2}{d}\pmb{I}_d$, Assumption 1 is fulfilled. Since $\pmb{A}$ is skew-symmetric, it is in particular normal, so Assumption 2 is also satisfied.
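
The structure of the operator $\pmb{A}$ in (10) is easy to verify numerically; a minimal sketch (the dimensions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 3, 5
M = rng.standard_normal((d1, d2))

# Game operator from Eq. (10): A = [[0, M], [-M^T, 0]].
A = np.block([[np.zeros((d1, d1)), M],
              [-M.T, np.zeros((d2, d2))]])

is_skew = np.allclose(A.T, -A)             # skew-symmetric
is_normal = np.allclose(A @ A.T, A.T @ A)  # skew-symmetric implies normal
lam = np.linalg.eigvals(A)                 # spectrum is purely imaginary
```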
+
+We now show that the optimal average-case algorithm to solve bilinear problems is Hamiltonian gradient descent with momentum, described below in its general form. Contrary to the methods in Azizian et al. (2020), the method we propose is anytime (and not only asymptotically) average-case optimal.
+
+# Optimal average-case algorithm for bilinear games.
+
+Initialization. $\pmb{x}_{-1} = \pmb{x}_0 = (\pmb{\theta}_{1,0},\pmb{\theta}_{2,0})$ , sequence $\{h_t,m_t\}$ given by Theorem 3.1.
+
+Main loop. For $t \geq 0$ ,
+
+$$
+\boldsymbol{g}_{t} = F\left(\boldsymbol{x}_{t} - F\left(\boldsymbol{x}_{t}\right)\right) - F\left(\boldsymbol{x}_{t}\right) \quad \left(= \tfrac{1}{2} \nabla \|F\left(\boldsymbol{x}_{t}\right)\|^{2} \text{ by (12)}\right) \tag{11}
+$$
+
+$$
+\boldsymbol {x} _ {t + 1} = \boldsymbol {x} _ {t} - h _ {t + 1} \boldsymbol {g} _ {t} + m _ {t + 1} \left(\boldsymbol {x} _ {t - 1} - \boldsymbol {x} _ {t}\right)
+$$
+
+The quantity $\frac{1}{2}\| F(\pmb{x})\|^2$ is commonly known as the Hamiltonian of the game (Balduzzi et al., 2018), hence the name Hamiltonian gradient descent. Indeed, $\pmb{g}_t = \nabla \left(\frac{1}{2}\| F(\pmb{x})\|^2\right)$ when $F$ is affine:
+
+$$
+\begin{array}{l} F (\boldsymbol {x} - F (\boldsymbol {x})) - F (\boldsymbol {x}) = \boldsymbol {A} (\boldsymbol {x} - \boldsymbol {A} (\boldsymbol {x} - \boldsymbol {x} ^ {\star}) - \boldsymbol {x} ^ {\star}) - \boldsymbol {A} (\boldsymbol {x} - \boldsymbol {x} ^ {\star}) = - \boldsymbol {A} (\boldsymbol {A} (\boldsymbol {x} - \boldsymbol {x} ^ {\star})) \\ = \boldsymbol {A} ^ {\top} (\boldsymbol {A} (\boldsymbol {x} - \boldsymbol {x} ^ {\star})) = \nabla \left(\frac {1}{2} \| \boldsymbol {A} (\boldsymbol {x} - \boldsymbol {x} ^ {\star}) \| ^ {2}\right) = \nabla \left(\frac {1}{2} \| F (\boldsymbol {x}) \| ^ {2}\right). \tag {12} \\ \end{array}
+$$
+
+The following theorem shows that (11) is indeed the optimal average-case method associated to the minimization problem $\min_{\pmb{x}}\left(\frac{1}{2}\| F(\pmb{x})\|^2\right)$.
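
The identity (12) can be checked numerically: the update direction $\pmb{g}_t$ requires only two calls to the operator $F$ and no transpose, yet matches $\nabla\left(\frac{1}{2}\|F(\pmb{x})\|^2\right)$ exactly for affine $F$. An illustrative NumPy sketch (dimensions are our choice):

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 4, 6
M = rng.standard_normal((d1, d2))
A = np.block([[np.zeros((d1, d1)), M],
              [-M.T, np.zeros((d2, d2))]])
x_star = rng.standard_normal(d1 + d2)
F = lambda x: A @ (x - x_star)

x = rng.standard_normal(d1 + d2)
g = F(x - F(x)) - F(x)               # g_t of Eq. (11): two calls to F
grad_ham = A.T @ (A @ (x - x_star))  # gradient of the Hamiltonian (1/2)||F(x)||^2
```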
+
+Theorem 3.1. Suppose that Assumption 1 holds and that the expected spectral distribution of $MM^{\top}$ is absolutely continuous with respect to the Lebesgue measure. Then, the method (11) is average-case optimal for bilinear games when $h_t$, $m_t$ are chosen as the coefficients of the average-case optimal method for the minimization of $\frac{1}{2}\| F(\pmb{x})\|^2$.
+
+How to find the optimal coefficients? Since $\frac{1}{2}\| F(\pmb{x})\|^2$ is a quadratic problem, the coefficients $\{h_t, m_t\}$ can be found using the average-case framework for quadratic minimization problems of (Pedregosa & Scieur, 2020, Theorem 3.1).
+
+Proof sketch. When computing the optimal polynomial, i.e., $\pmb{x}_t - \pmb{x}^\star = P_t(\pmb{A})(\pmb{x}_0 - \pmb{x}^\star)$, we observe that the residual orthogonal polynomial $P_t$ behaves differently depending on whether $t$ is even or odd.
+
+- Case 1: $t$ is even. In this case, we observe that the polynomial $P_{t}(\mathbf{A})$ can be expressed as $Q_{t/2}(-\mathbf{A}^{2})$ , where $(Q_{t})_{t \geq 0}$ is the sequence of orthogonal polynomials w.r.t. the expected spectral density of $-\mathbf{A}^{2}$ , whose eigenvalues are real and positive. This gives the recursion in (11).
+- Case 2: $t$ is odd. There is no residual orthogonal polynomial of degree $t$ for $t$ odd. Instead, odd iterations do correspond to the intermediate computation of $g_t$ in (11), but not to an actual iterate.
+
+# 3.1 PARTICULAR CASE: M WITH I.I.D. COMPONENTS
+
+We now show the optimal method when the entries of $M$ are sampled i.i.d. For simplicity, we order the players such that $d_{1} \leq d_{2}$.
+
+Assumption 3. Assume that each component of $M$ is sampled i.i.d. from a distribution with mean 0 and variance $\sigma^2$, and that we take $d_1, d_2 \to \infty$ with $\frac{d_1}{d_2} \to r < 1$.
+
+In this case, the spectral distribution of $\frac{1}{d_2} MM^\top$ tends to the Marchenko-Pastur law, supported on $[\ell, L]$ and with density:
+
+$$
+\rho_{MP}(\lambda) \stackrel{\text{def}}{=} \frac{\sqrt{(L - \lambda)(\lambda - \ell)}}{2\pi\sigma^{2} r \lambda}, \quad \text{where } L \stackrel{\text{def}}{=} \sigma^{2}(1 + \sqrt{r})^{2}, \ \ell \stackrel{\text{def}}{=} \sigma^{2}(1 - \sqrt{r})^{2}. \tag{13}
+$$
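
These spectral edges are easy to check by simulation; a small illustrative sketch (the dimensions are our choice, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 400, 800                  # aspect ratio r = 1/2, sigma^2 = 1
M = rng.standard_normal((d1, d2))
lam = np.linalg.eigvalsh(M @ M.T / d2)

r = d1 / d2
L_edge = (1 + np.sqrt(r)) ** 2     # L from Eq. (13)
l_edge = (1 - np.sqrt(r)) ** 2     # ell from Eq. (13)
```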
+
+Proposition 3.1. When $M$ satisfies Assumption 3, the optimal parameters of scheme (11) are
+
+$$
+h_{t} = -\frac{\delta_{t}}{\sigma^{2}\sqrt{r}}, \quad m_{t} = 1 + \rho\delta_{t}, \quad \text{where } \rho = \frac{1 + r}{\sqrt{r}}, \ \delta_{t} = (-\rho - \delta_{t-1})^{-1}, \ \delta_{0} = 0. \tag{14}
+$$
+
+Proof. By Theorem 3.1, the problem reduces to finding the optimal average-case algorithm for the problem $\min_{\boldsymbol{x}} \frac{1}{2} \| F(\boldsymbol{x}) \|^2$ . Since the expected spectral distribution of $\frac{1}{d_2} MM^\top$ is the Marchenko-Pastur law, we can use the optimal algorithm from (Pedregosa & Scieur, 2020, Section 5).
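
The recursion (14) is cheap to unroll; a sketch (the helper name is ours). It also illustrates that $\delta_t$ converges to a root of $\delta^2 + \rho\delta + 1 = 0$, so the coefficients settle to constant values, consistent with the asymptotic analysis of Section 5.

```python
import numpy as np

def mp_optimal_coefficients(r, sigma2=1.0, T=60):
    """Unroll recursion (14): coefficients h_t, m_t for the
    Marchenko-Pastur spectral distribution."""
    rho = (1 + r) / np.sqrt(r)
    delta = 0.0
    hs, ms = [], []
    for _ in range(T):
        delta = 1.0 / (-rho - delta)   # delta_t = (-rho - delta_{t-1})^{-1}
        hs.append(-delta / (sigma2 * np.sqrt(r)))
        ms.append(1.0 + rho * delta)
    return np.array(hs), np.array(ms), delta

hs, ms, delta_T = mp_optimal_coefficients(r=0.5)
```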
+
+# 4 GENERAL AVERAGE-CASE OPTIMAL METHOD FOR NORMAL OPERATORS
+
+In this section we derive general average-case optimal first-order methods for normal operators. First, we need to assume the existence of a three-term recurrence for residual orthogonal polynomials (Assumption 4). As mentioned in subsection 2.4, for general measures in the complex plane, the existence of a three-term recurrence of orthogonal polynomials is not ensured. In Proposition B.3 in Appendix B we give a sufficient condition for its existence, and in the next subsection we will show specific examples where the residual orthogonal polynomials satisfy the three-term recurrence.
+
+Assumption 4 (Simplifying assumption). The sequence of residual polynomials $\{\psi_t\}_{t\geq 0}$ orthogonal w.r.t. the measure $\mu$ , defined on the complex plane, admits the three-term recurrence
+
+$$
+\psi_ {- 1} = 0, \quad \psi_ {0} = 1, \quad \psi_ {t} (\lambda) = \left(a _ {t} + b _ {t} \lambda\right) \psi_ {t - 1} (\lambda) + (1 - a _ {t}) \psi_ {t - 2} (\lambda). \tag {15}
+$$
+
+Under Assumption 4, Theorem 4.1 shows that the optimal algorithm can also be written as an average of iterates following a simple three-term recurrence.
+
+Theorem 4.1. Under Assumption 4 and the assumptions of Theorem 2.1, the following algorithm is optimal in the average case, with $\mathbf{y}_{-1} = \mathbf{y}_0 = \mathbf{x}_0$ :
+
+$$
+\boldsymbol {y} _ {t} = a _ {t} \boldsymbol {y} _ {t - 1} + (1 - a _ {t}) \boldsymbol {y} _ {t - 2} + b _ {t} F (\boldsymbol {y} _ {t - 1})
+$$
+
+$$
+\boldsymbol {x} _ {t} = \frac {B _ {t}}{B _ {t} + \beta_ {t}} \boldsymbol {x} _ {t - 1} + \frac {\beta_ {t}}{B _ {t} + \beta_ {t}} \boldsymbol {y} _ {t}, \quad \beta_ {t} = \phi_ {t} ^ {2} (0), \quad B _ {t} = B _ {t - 1} + \beta_ {t - 1}, \quad B _ {0} = 0. \tag {16}
+$$
+
+where $(\phi_k(0))_{k\geq 0}$ can be computed using the three-term recurrence (upon normalization). Moreover, $\mathbb{E}_{(\pmb {A},\pmb{x}^{\star},\pmb {x}_0)}\mathrm{dist}(\pmb {x}_t,\mathcal{X}^\star)$ converges to zero at rate $1 / B_{t}$.
+
+Remark. Notice that it is not immediate that (16) fulfills the definition of first-order algorithms stated in (2), as $\pmb{y}_t$ is clearly a first-order method but $\pmb{x}_t$ is an average of the iterates $\pmb{y}_t$ . Using that $F$ is an affine function we see that $\pmb{x}_t$ indeed fulfills (2).
+
+Remark. Assumption 4 is needed for the sequence $(\pmb{y}_t)_{t\geq 0}$ to be computable using a three-term recurrence. However, for some distributions, the associated sequence of orthogonal polynomials may admit another recurrence that does not satisfy Assumption 4.
+
+# 4.1 CIRCULAR SPECTRAL DISTRIBUTIONS
+
+In random matrix theory, the circular law states that if $\mathbf{A}$ is an $n \times n$ matrix with i.i.d. entries of mean $C$ and variance $R^2 / n$, then as $n \to \infty$ the spectral distribution of $\mathbf{A}$ tends to the uniform distribution on $D_{C,R}$. In this subsection we apply Theorem 4.1 to the class of spectral distributions specified by Assumption 5, which includes the uniform distribution on $D_{C,R}$. Even though random matrices with i.i.d. entries are not normal, we will see in Section 6 that the empirical results for such matrices are consistent with our theoretical results under the normality assumption.
+
+Assumption 5. Assume that the expected spectral distribution $\mu_A$ is supported in the complex plane on the disk $D_{C,R}$ of center $C\in \mathbb{R}, C > 0$ and radius $R < C$. Moreover, assume that the spectral density is circularly symmetric, i.e., there exists a probability measure $\mu_R$ supported on $[0,R]$ such that for all measurable $f$ and $r\in [0,R]$, $\mathrm{d}\mu_A(C + r e^{i\theta}) = \frac{1}{2\pi}\mathrm{d}\theta\, \mathrm{d}\mu_R(r)$.
+
+Proposition 4.1. If $\mu$ satisfies Assumption 5, the sequence of orthonormal polynomials $(\phi_t)_{t\geq 0}$ is
+
+$$
+\phi_{t}(\lambda) = \frac{(\lambda - C)^{t}}{K_{t,R}}, \quad \text{where } K_{t,R} = \sqrt{\int_{0}^{R} r^{2t} \,\mathrm{d}\mu_{R}(r)}. \tag{17}
+$$
+
+Example. The uniform distribution on $D_{C,R}$ corresponds to $\mathrm{d}\mu_R = \frac{2r}{R^2}\mathrm{d}r$, and $K_{t,R} = R^{t} / \sqrt{t + 1}$.
+
+From Proposition 4.1, the sequence of residual polynomials is given by $\phi_t(\lambda) / \phi_t(0) = \left(1 - \frac{\lambda}{C}\right)^t$ , which implies that Assumption 4 is fulfilled with $a_{t} = 1, b_{t} = -\frac{1}{C}$ . Thus, by Theorem 4.1 we have
+
+Theorem 4.2. Given an initialization $\pmb{x}_0$ (with $\pmb{y}_0 = \pmb{x}_0$), if Assumption 5 is fulfilled with $R < C$ and the assumptions of Theorem 2.1 hold, then the average-case optimal first-order method is
+
+$$
+\boldsymbol {y} _ {t} = \boldsymbol {y} _ {t - 1} - \frac {1}{C} F (\boldsymbol {y} _ {t - 1}), \quad \beta_ {t} = C ^ {2 t} / K _ {t, R} ^ {2}, \quad B _ {t} = B _ {t - 1} + \beta_ {t - 1},
+$$
+
+$$
+\boldsymbol {x} _ {t} = \frac {B _ {t}}{B _ {t} + \beta_ {t}} \boldsymbol {x} _ {t - 1} + \frac {\beta_ {t}}{B _ {t} + \beta_ {t}} \boldsymbol {y} _ {t}. \tag {18}
+$$
+
+Moreover, $\mathbb{E}_{(\boldsymbol{A},\pmb{x}^{\star},\pmb{x}_0)}\mathrm{dist}(\pmb{x}_t,\mathcal{X}^\star)$ converges to zero at rate $1 / B_{t}$.
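
Under the uniform-disk example above ($K_{t,R} = R^t/\sqrt{t+1}$, i.e. $\beta_t = (t+1)(C/R)^{2t}$), scheme (18) takes a particularly simple form. A sketch (the helper name and the shifted-Ginibre test matrix are our illustrative choices; note the test matrix is not normal, in line with the experiments of Section 6):

```python
import numpy as np

def disk_optimal_solve(F, x0, C, R, T):
    """Sketch of the method in Theorem 4.2 for a spectrum uniform on
    D_{C,R}, so that beta_t = (t+1) * (C/R)^(2t)."""
    y, x, B = x0.copy(), x0.copy(), 0.0
    for t in range(1, T + 1):
        y = y - F(y) / C                    # plain gradient step, size 1/C
        B += t * (C / R) ** (2 * (t - 1))   # B_t = B_{t-1} + beta_{t-1}
        beta = (t + 1) * (C / R) ** (2 * t)
        x = (B * x + beta * y) / (B + beta) # weighted averaging of Eq. (18)
    return x

# Shifted Ginibre matrix: eigenvalues approximately uniform on D_{C,R}.
rng = np.random.default_rng(0)
d, C, R = 400, 3.0, 1.0
A = C * np.eye(d) + (R / np.sqrt(d)) * rng.standard_normal((d, d))
x_star = rng.standard_normal(d)
F = lambda x: A @ (x - x_star)

x_hat = disk_optimal_solve(F, np.zeros(d), C, R, T=70)
err = np.linalg.norm(x_hat - x_star) / np.linalg.norm(x_star)
```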
+
+We now compare Theorem 4.2 with worst-case methods studied in Azizian et al. (2020). They give a worst-case convergence lower bound of $(R / C)^{2t}$ on the quantity $\mathrm{dist}(\pmb {z}_t,\mathcal{X}^\star)$ for first-order methods $(\pmb {z}_t)_{t\geq 0}$ on matrices with eigenvalues in the disk $D_{C,R}$ . By the classical analysis of first-order methods, this rate is achievable by gradient descent with stepsize $1 / C$ , i.e. the iterates $\pmb{y}_{t}$ defined in (18). However, by equation (79) in Proposition D.3 we have that under slight additional assumptions (those of Proposition 5.2), $\lim_{t\to \infty}\mathbb{E}\left[\mathrm{dist}(\pmb{x}_t,\mathcal{X}^\star)\right] / \mathbb{E}\left[\mathrm{dist}(\pmb{y}_t,\mathcal{X}^\star)\right] = 1 - \frac{R^2}{C^2}$ holds. That is, the average-case optimal algorithm outperforms gradient descent by a constant factor depending on the conditioning $R / C$ .
+
+# 5 ASYMPTOTIC BEHAVIOR
+
+The recurrence coefficients of the average-case optimal method typically converge to limiting values as $t \to \infty$, which gives an "average-case asymptotically optimal first-order method" with constant coefficients. For the case of symmetric operators with spectrum in $[\ell, L]$, Scieur & Pedregosa (2020) show that under mild conditions, the asymptotically optimal algorithm is the Polyak momentum method with coefficients depending only on $\ell$ and $L$. For bilinear games, since the average-case optimal algorithm coincides with the average-case optimal algorithm of an associated minimization problem, we can use their framework to obtain the asymptotic algorithm (see Theorem 3 of Scieur & Pedregosa (2020)).
+
+Proposition 5.1. Assume that the expected spectral density $\mu_{MM^{\top}}$ of $MM^{\top}$ is supported in $[\ell, L]$ for $0 < \ell < L$ , and strictly positive in this interval. Then, the asymptotically optimal algorithm for bilinear games is the following version of Polyak momentum:
+
+$$
+\boldsymbol {g} _ {t} = F \left(\boldsymbol {x} _ {t} - F \left(\boldsymbol {x} _ {t}\right)\right) - F \left(\boldsymbol {x} _ {t}\right)
+$$
+
+$$
+\boldsymbol {x} _ {t + 1} = \boldsymbol {x} _ {t} + \left(\frac {\sqrt {L} - \sqrt {\ell}}{\sqrt {L} + \sqrt {\ell}}\right) ^ {2} \left(\boldsymbol {x} _ {t - 1} - \boldsymbol {x} _ {t}\right) - \left(\frac {2}{\sqrt {L} + \sqrt {\ell}}\right) ^ {2} \boldsymbol {g} _ {t} \tag {19}
+$$
+
+Notice that the algorithm in (19) is the worst-case optimal algorithm from Proposition 4 of Azizian et al. (2020). For the case of circularly symmetric spectral densities with support on disks, we can also compute the asymptotically optimal algorithm.
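
To make the iteration concrete, here is a minimal NumPy sketch of the Polyak momentum method of Proposition 5.1 on a random bilinear game, written with the standard heavy-ball sign convention $\pmb{x}_{t+1} = \pmb{x}_t + m(\pmb{x}_t - \pmb{x}_{t-1}) - h\pmb{g}_t$ ; the problem sizes and seed are our own illustrative choices, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
dx, dy = 10, 30
M = rng.standard_normal((dx, dy))
# Bilinear game vector field F(z) = A z with A = [[0, M], [-M^T, 0]]; X* = Ker(A)
A = np.block([[np.zeros((dx, dx)), M], [-M.T, np.zeros((dy, dy))]])
F = lambda z: A @ z

# Edges [ell, L] of the spectrum of M M^T give the momentum and step coefficients
eigs = np.linalg.eigvalsh(M @ M.T)
ell, L = eigs.min(), eigs.max()
m = ((np.sqrt(L) - np.sqrt(ell)) / (np.sqrt(L) + np.sqrt(ell))) ** 2
h = (2.0 / (np.sqrt(L) + np.sqrt(ell))) ** 2

x_prev = x = rng.standard_normal(dx + dy)
for _ in range(500):
    g = F(x - F(x)) - F(x)  # equals -A^2 x, i.e. the gradient of (1/2)||F(x)||^2
    x, x_prev = x + m * (x - x_prev) - h * g, x
```

After a few hundred iterations the residual $\|F(\pmb{x}_t)\|$ is driven to numerical zero, reflecting the linear rate $(\sqrt{L}-\sqrt{\ell})/(\sqrt{L}+\sqrt{\ell})$ of Polyak momentum on the quadratic $\frac{1}{2}\|F(\pmb{x})\|^2$ .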
+
+Proposition 5.2. Suppose that the assumptions of Theorem 4.2 hold with $\mu_R \in \mathcal{P}([0, R])$ fulfilling $\mu_R([r, R]) = \Omega((R - r)^\kappa)$ for $r$ in $[r_0, R]$ for some $r_0 \in [0, R)$ and for some $\kappa \in \mathbb{Z}$ . Then, the average-case asymptotically optimal algorithm is, with $\pmb{y}_0 = \pmb{x}_0$ :
+
+$$
+\boldsymbol {y} _ {t} = \boldsymbol {y} _ {t - 1} - \frac {1}{C} F (\boldsymbol {y} _ {t - 1}),
+$$
+
+$$
+\boldsymbol {x} _ {t} = \left(\frac {R}{C}\right) ^ {2} \boldsymbol {x} _ {t - 1} + \left(1 - \left(\frac {R}{C}\right) ^ {2}\right) \boldsymbol {y} _ {t}. \tag {20}
+$$
+
Moreover, the convergence rate for this algorithm is asymptotically the same as for the optimal algorithm in Theorem 4.2. Namely, $\lim_{t\to \infty}\mathbb{E}\left[\mathrm{dist}(\pmb{x}_t,\mathcal{X}^\star)\right]B_t = 1$ .
+
The condition on $\mu_R$ simply rules out cases in which the spectral density has exponentially small mass around 1. It is remarkable that in algorithm (20) the averaging coefficients can be expressed so simply in terms of the quantity $R / C$ . Notice also that while the convergence rate of the algorithm is slower than the convergence rate of the optimal algorithm by definition, both rates match in the limit, meaning that the asymptotically optimal algorithm also outperforms gradient descent by a constant factor $1 - \frac{R^2}{C^2}$ in the limit $t\to \infty$ .
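
As an illustration of the simplicity of (20), the following NumPy sketch runs the averaged gradient iteration on a synthetic normal matrix whose eigenvalues are sampled inside the disk of center $C$ and radius $R$ (the matrix construction is our own choice, made only to satisfy the assumptions of Theorem 4.2):

```python
import numpy as np

rng = np.random.default_rng(1)
d, C, R = 50, 2.0, 1.0

# Normal matrix with eigenvalues in the disk of center C, radius R:
# conjugate a random diagonal of disk points by a random unitary matrix.
G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Q, _ = np.linalg.qr(G)
lam = C + R * np.sqrt(rng.uniform(0, 1, d)) * np.exp(2j * np.pi * rng.uniform(0, 1, d))
A = Q @ np.diag(lam) @ Q.conj().T
F = lambda z: A @ z  # affine operator with x* = 0

rho = (R / C) ** 2  # averaging weight of (20)
y = x = rng.standard_normal(d).astype(complex)
for _ in range(80):
    y = y - F(y) / C             # plain gradient step with stepsize 1/C
    x = rho * x + (1 - rho) * y  # averaging step of (20)
```

Both sequences converge linearly, and on top of the gradient-descent stepsize $1/C$ only the single scalar $\rho = (R/C)^2$ has to be known.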
+
+# 6 EXPERIMENTS
+
+We compare some of the proposed methods on settings with varying degrees of mismatch with our assumptions.
+
Bilinear Games. We consider min-max bilinear problems of the form (10), where the entries of $M$ are generated i.i.d. from a standard Gaussian distribution. We vary the ratio parameter $r = d / n$ with $d = 1000$ , and compare the average-case optimal method of Theorem 3.1 and Proposition 5.1, the asymptotic worst-case optimal method of Azizian et al. (2020), and extragradient (Korpelevich, 1976). In all cases, we use the convergence-rate-optimal step-size, assuming knowledge of the edges of the spectral distribution.
+
+The spectral density for these problems is displayed in the first row of Figure 1 and the benchmark results on the second row. Average-case optimal methods always outperform other methods, and the largest gain is in the ill-conditioned regime ( $r \approx 1$ ).
+
Circular Distribution. For our second experiment, we choose $A$ to be a matrix with i.i.d. Gaussian entries; by the circular law, the support of its eigenvalue distribution is a disk. Note that $A$ does not satisfy the normality assumption of Assumption 2. Figure 1 (third row) compares the average-case optimal methods from Theorem 4.2 and Proposition 5.2 on two datasets with different levels of conditioning. Note that the methods converge despite the violation of Assumption 2, suggesting a broader applicability than the one proven in this paper. We leave this investigation for future work.
+
+# 7 DISCUSSION AND FUTURE RESEARCH DIRECTIONS
+
In this paper, we presented a general framework for the design of average-case optimal algorithms for affine operators $F$ whose underlying matrix is possibly non-symmetric. However, our approach presents some limitations, the major one being the restriction to normal matrices. Fortunately, the numerical experiments above suggest that this assumption can be relaxed; developing a theory without it is left for future work. Another avenue for future work is to analyze the nonlinear case, in which the operator $F$ is no longer affine, as well as the case in which it is accessed through a stochastic estimator (as done by Loizou et al. (2020) for the worst-case analysis).
+
+# ACKNOWLEDGEMENTS
+
+C. Domingo-Enrich has been partially funded by "la Caixa" Foundation (ID 100010434), under agreement LCF/BQ/AA18/11680094, and partially funded by the NYU Computer Science Department.
+
+
Figure 1: Benchmarks and spectral density for different games. Top row: spectral density associated with bilinear games for varying values of the ratio parameter $r = n / d$ (the x-axis represents the imaginary line). Second row: benchmarks; average-case optimal methods always outperform other methods, and the largest gain is in the ill-conditioned regime ( $r \approx 1$ ). Third row: benchmarks (columns 1 and 3) and eigenvalue distribution of a design matrix generated with i.i.d. entries for two different degrees of conditioning. Despite the normality assumption not being satisfied, we still observe an improvement of average-case optimal methods over worst-case optimal ones.
+
+# REFERENCES
+
+Walter Van Assche. Orthogonal polynomials in the complex plane and on the real line. In Fields Institute Communications, volume 14, pp. 211-245, 1997.
+Waiss Azizian, Damien Scieur, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. Accelerating smooth games by manipulating spectral shapes. In Proceedings of Machine Learning Research, 2020.
+David Balduzzi, Sébastien Racanière, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. The mechanics of $n$ -player differentiable games. In Proceedings of the International Conference on Machine Learning, 2018.
+Raphael Berthier, Francis Bach, and Pierre Gaillard. Accelerated gossip in networks of given dimension using Jacobi polynomial iterations. SIAM Journal on Mathematics of Data Science, 2 (1):24-47, 2020.
+Raghu Bollapragada, Damien Scieur, and Alexandre d'Aspremont. Nonlinear acceleration of momentum and primal-dual algorithms. arXiv preprint arXiv:1810.04539, 2018.
+John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.
Bernd Fischer. Polynomial Based Iteration Methods for Symmetric Linear Systems. Vieweg+Teubner Verlag, 1996.
Magnus R. Hestenes and Eduard Stiefel. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards, 1952.
+Jonathan Katz and Yehuda Lindell. Introduction to modern cryptography. CRC press, 2014.
+Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
+Donald Knuth. The art of computer programming, volume 3. Pearson Education, 1997.
+GM Korpelevich. The extragradient method for finding saddle points and other problems. Matecon, 12, 1976.
+Jonathan Lacotte and Mert Pilanci. Optimal randomized first-order methods for least-squares problems. Proceedings of the 37th International Conference on Machine Learning, 2020.
+Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, and Ioannis Mitliagkas. Stochastic Hamiltonian gradient methods for smooth games. arXiv preprint arXiv:2007.04202, 2020.
+Arkadi Nemirovski. Information-based complexity of convex programming. Lecture Notes, 1995.
+Yurii Nesterov. Introductory Lectures on Convex Optimization. Springer, 2004.
+Courtney Paquette, Bart van Merrienboer, and Fabian Pedregosa. Halting time is predictable for large models: A universality property and average-case analysis. arXiv preprint arXiv:2006.04299, 2020.
+Fabian Pedregosa and Damien Scieur. Average-case acceleration through spectral density estimation. In Proceedings of the 37th International Conference on Machine Learning, 2020.
+Damien Scieur and Fabian Pedregosa. Universal average-case optimality of Polyak momentum. In Proceedings of the 37th International Conference on Machine Learning, 2020.
+
+# A PROOF OF THEOREM 2.1
+
+# A.1 PRELIMINARIES
+
+Before proving Theorem 2.1, we quickly analyze the distance function (1), recalled below,
+
+$$
+\operatorname {d i s t} (\boldsymbol {x}, \mathcal {X} ^ {\star}) \stackrel {{\mathrm {d e f}}} {{=}} \min _ {\boldsymbol {v} \in \mathcal {X} ^ {\star}} \| \boldsymbol {x} - \boldsymbol {v} \| ^ {2}.
+$$
+
+The definition of the distance function is not practical for the theoretical analysis. Fortunately, it is possible to find a simple expression that uses the orthogonal projection matrix $\Pi$ to the kernel $\mathrm{Ker}(A)$ . Since $\Pi$ is an orthogonal projection matrix to the kernel of a linear transformation, it satisfies
+
+$$
+\Pi = \Pi^ {T}, \quad \Pi^ {2} = \Pi , \quad \text {a n d} \quad A \Pi = 0. \tag {21}
+$$
+
+The normality assumption on $\mathbf{A}$ implies also that
+
+$$
+\Pi \boldsymbol {A} = 0. \tag {22}
+$$
+
+Indeed, the spectral decomposition of $\mathbf{A}$ is
+
+$$
+\boldsymbol {A} = [ \boldsymbol {U} _ {1} | \boldsymbol {U} _ {2} ] \left[ \begin{array}{c c} \boldsymbol {\Lambda} & 0 \\ 0 & 0 \end{array} \right] [ \boldsymbol {U} _ {1} | \boldsymbol {U} _ {2} ] ^ {*},
+$$
+
and then $\Pi = U_2U_2^*$ . The next proposition uses $\Pi$ to derive an explicit expression for the distance (1).
+
+Proposition A.1. We have that
+
+$$
+\operatorname {d i s t} (\boldsymbol {y}, \mathcal {X} ^ {\star}) = \| (\boldsymbol {I} - \Pi) (\boldsymbol {y} - \boldsymbol {x} ^ {\star}) \| ^ {2} \quad \forall \boldsymbol {x} ^ {\star} \in \mathcal {X} ^ {\star}.
+$$
+
Proof. We first parametrize the solution set $\mathcal{X}^{\star}$ . By definition we have
+
+$$
+\mathcal {X} ^ {\star} = \left\{\boldsymbol {x}: \boldsymbol {A} \left(\boldsymbol {x} - \boldsymbol {x} ^ {\star}\right) = 0 \right\}.
+$$
+
+Which can be written in terms of the kernel of $A$ as
+
+$$
+\mathcal {X} ^ {\star} = \left\{\boldsymbol {x} ^ {\star} + \Pi \boldsymbol {w}: \boldsymbol {w} \in \mathbb {R} ^ {d} \right\}.
+$$
+
+From this, we can rewrite the distance function (1) as
+
+$$
+\operatorname {d i s t} (\boldsymbol {y}, \mathcal {X} ^ {\star}) = \min _ {\boldsymbol {w} \in \mathbb {R} ^ {d}} \| \boldsymbol {y} - (\boldsymbol {x} ^ {\star} + \Pi \boldsymbol {w}) \| ^ {2}.
+$$
+
The minimum can be attained at different points, in particular at $\pmb{w} = \pmb{y} - \pmb{x}^{\star}$ , which proves the statement.
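
Proposition A.1 is easy to verify numerically; below is a small sketch (the matrix and points are arbitrary, constructed only so that $A$ is normal and singular) comparing the closed form against a direct least-squares minimization over the parametrization $\mathcal{X}^\star = \{\pmb{x}^\star + \Pi\pmb{w}\}$ :

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6
# Real normal singular matrix A = Q diag(s) Q^T with a two-dimensional kernel
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q @ np.diag([3.0, 2.0, 1.5, 1.0, 0.0, 0.0]) @ Q.T

Pi = Q[:, 4:] @ Q[:, 4:].T       # orthogonal projector onto Ker(A)
x_star = rng.standard_normal(d)  # a particular solution
y = rng.standard_normal(d)

# Closed form of Proposition A.1
closed_form = np.linalg.norm((np.eye(d) - Pi) @ (y - x_star)) ** 2

# Direct minimization of ||y - (x_star + Pi w)||^2 over w via least squares
w, *_ = np.linalg.lstsq(Pi, y - x_star, rcond=None)
direct = np.linalg.norm(y - (x_star + Pi @ w)) ** 2
```

The two quantities agree to machine precision.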
+
We now simplify further the result of the previous proposition in the case where $\boldsymbol{x}_t$ is generated by a first-order method.
+
Proposition A.2. For every iterate $\pmb{x}_t$ of a first-order method, i.e., $\pmb{x}_t$ satisfying
+
+$$
+\boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star} = P _ {t} (\boldsymbol {A}) (\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}), \quad \deg (P _ {t}) \leq t, \quad P (0) = \boldsymbol {I},
+$$
+
+we have that
+
+$$
+\operatorname {d i s t} \left(\boldsymbol {x} _ {t}, \mathcal {X} ^ {\star}\right) = \left\| \boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star} \right\| ^ {2} - \left\| \Pi \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \right\| ^ {2}.
+$$
+
+Proof. We start with the result of Proposition A.1,
+
+$$
+\operatorname {d i s t} \left(\boldsymbol {x} _ {t}, \mathcal {X} ^ {\star}\right) = \left\| (\boldsymbol {I} - \Pi) \left(\boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star}\right) \right\| ^ {2}.
+$$
+
+The norm can be split into
+
+$$
+\begin{array}{l} \| (\boldsymbol {I} - \Pi) (\boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star}) \| ^ {2} = \| \boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star} \| ^ {2} + \| \underbrace {\Pi^ {2}} _ {= \Pi \text {b y (2 1)}} (\boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star}) \| ^ {2} - 2 \| \Pi (\boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star}) \| ^ {2} \\ = \left\| \boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star} \right\| ^ {2} - \left\| \Pi \left(\boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star}\right) \right\| ^ {2}. \\ \end{array}
+$$
+
Since $\pmb{x}_{t}$ is generated by a first-order method, we have
+
+$$
+\boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star} = P _ {t} (\boldsymbol {A}) (\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}), \quad P _ {t} (0) = 1.
+$$
+
Since $P_t(0) = 1$ , the polynomial can be factorized as $P_t(\mathbf{A}) = \mathbf{I} + \mathbf{A}Q_{t - 1}(\mathbf{A})$ , with $Q_{t - 1}$ a polynomial of degree $t - 1$ . Therefore, $\| \Pi (\pmb{x}_t - \pmb{x}^\star)\|^2$ reads
+
+$$
+\begin{array}{l} \left\| \Pi (\boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star}) \right\| ^ {2} = \left\| \Pi \left(\boldsymbol {I} + \boldsymbol {A} \boldsymbol {Q} _ {t - 1} (\boldsymbol {A})\right) (\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}) \right\| ^ {2} \\ = \| \Pi (\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}) + \underbrace {\Pi A} _ {= 0 \text {b y (2 2)}} Q _ {t - 1} (\boldsymbol {A}) (\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}) \| ^ {2} \\ = \left\| \Pi \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \right\| ^ {2}, \\ \end{array}
+$$
+
which proves the statement.
+
+
+
+# A.2 PROOF OF THE THEOREM
+
+We are now ready to prove the main result.
+
+Theorem 2.1. Consider the application of a first-order method associated to the sequence of polynomials $\{P_t\}$ (Proposition 2.1) on the problem (NSO). Let $\mu$ be the expected spectral distribution of $\mathbf{A}$ . Under Assumptions 1 and 2, we have
+
+$$
\mathbb {E} \left[ \operatorname {d i s t} \left(\boldsymbol {x} _ {t}, \mathcal {X} ^ {\star}\right) \right] = R ^ {2} \int_ {\mathbb {C} \backslash \{0 \}} \left| P _ {t} \right| ^ {2} \mathrm {d} \mu . \tag {7}
+$$
+
+Proof. We start with the result of Proposition A.2,
+
+$$
+\operatorname {d i s t} \left(\boldsymbol {x} _ {t}, \mathcal {X} ^ {\star}\right) = \left\| \boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star} \right\| ^ {2} - \left\| \Pi \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \right\| ^ {2}.
+$$
+
+We now write the expectation of the distance function,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \operatorname {d i s t} \left(\boldsymbol {x} _ {t}, \mathcal {X} ^ {\star}\right) \right] = \mathbb {E} \left[ \left\| \boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star} \right\| ^ {2} - \left\| \Pi \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \right\| ^ {2} \right] \\ = \mathbb {E} \left[ \| P _ {t} (\boldsymbol {A}) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \| ^ {2} - \| \Pi \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \| ^ {2} \right] \\ = \mathbb {E} \left[ \operatorname {t r} P _ {t} (\boldsymbol {A}) P _ {t} (\boldsymbol {A}) ^ {T} \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) ^ {T} - \operatorname {t r} \Pi^ {2} (\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) ^ {T} \right] \\ = \mathbb {E} _ {A} \left[ \operatorname {t r} P _ {t} (\boldsymbol {A}) P _ {t} (\boldsymbol {A}) ^ {T} \mathbb {E} \left[ \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) ^ {T} | \boldsymbol {A} \right] - \operatorname {t r} \Pi \mathbb {E} \left[ \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) ^ {T} | \boldsymbol {A} \right] \right] \\ = R \mathbb {E} _ {A} \left[ \operatorname {t r} P _ {t} (\boldsymbol {A}) P _ {t} (\boldsymbol {A}) ^ {T} - \operatorname {t r} \Pi \right] \\ = R \mathbb {E} \left[ \sum_ {i = 1} ^ {d} | P (\lambda_ {i}) | ^ {2} - \operatorname {t r} \Pi \right] \\ = R \mathbb {E} \left[ \int_ {\mathbb {C} \backslash \{0 \}} | P (\lambda) | ^ {2} \delta_ {\lambda_ {i}} (\lambda) + | P (0) | ^ {2} \cdot [ \# \text {z e r o e i g e n v a l u e s} ] - \operatorname {t r} \Pi \right] \\ \end{array}
+$$
+
However, $|P_t(0)|^2 = 1$ and $\operatorname{tr} \Pi$ corresponds to the number of zero eigenvalues of $A$ , therefore,
+
+$$
+E \left[ \operatorname {d i s t} \left(\boldsymbol {x} _ {t}, \mathcal {X} ^ {\star}\right) \right] = R \mathbb {E} \left[ \int_ {\mathbb {C} \backslash \{0 \}} | P (\lambda) | ^ {2} \delta_ {\lambda_ {i}} (\lambda) \right] = R \int_ {\mathbb {C} \backslash \{0 \}} P (\lambda) \mu (\lambda).
+$$
+
+
+
+# B PROOFS OF THEOREM 3.1 AND PROPOSITION 3.1
+
Proposition B.1 (Block determinant formula). If $A, B, C, D$ are (not necessarily square) matrices of compatible dimensions and $D$ is invertible, then

$$
\det \left[ \begin{array}{l l} \boldsymbol {A} & \boldsymbol {B} \\ \boldsymbol {C} & \boldsymbol {D} \end{array} \right] = \det (\boldsymbol {D}) \det (\boldsymbol {A} - \boldsymbol {B} \boldsymbol {D} ^ {- 1} \boldsymbol {C}). \tag {23}
$$
+
+Definition 6 (Pushforward of a measure). Recall that the pushforward $f_*\mu$ of a measure $\mu$ by a function $f$ is defined as the measure such that for all measurable $g$ ,
+
+$$
+\int g (\lambda) \mathrm {d} (f _ {*} \mu) (\lambda) = \int g (f (\lambda)) \mathrm {d} \mu (\lambda). \tag {24}
+$$
+
+Equivalently, if $X$ is a random variable with distribution $\mu$ , then $f(X)$ has distribution $f_{*}\mu$ .
+
Proposition B.2. Assume that the dimensions of $M \in \mathbb{R}^{d_x \times d_y}$ fulfill $d_x \leq d_y$ and let $r = d_x / d_y$ . Let $\mu_{MM^\top}$ be the expected spectral distribution of the random matrix $MM^\top \in \mathbb{R}^{d_x \times d_x}$ , and assume that it is absolutely continuous with respect to the Lebesgue measure. Then the expected spectral distribution of $A$ is supported on the imaginary axis and is given, for $\lambda \in \mathbb{R}$ , by

$$
\mu_ {\mathbf {A}} (i \lambda) = \left(1 - \frac {2}{1 + \frac {1}{r}}\right) \delta_ {0} (\lambda) + \frac {2 | \lambda |}{1 + \frac {1}{r}} \mu_ {M M ^ {\top}} \left(\lambda^ {2}\right). \tag {25}
$$

If $d_{x} \geq d_{y}$ , then (25) holds with $\mu_{M^{\top}M}$ in place of $\mu_{MM^{\top}}$ and $1 / r$ in place of $r$ .
+
Proof. By the block determinant formula, writing $d_1 = d_x$ , $d_2 = d_y$ and $d = d_1 + d_2$ , we have that for $s \neq 0$
+
+$$
+\begin{array}{l} \det \left(s I _ {d _ {1} + d _ {2}} - A\right) = \left| \begin{array}{l l} s I _ {d _ {1}} & - M \\ M ^ {\top} & s I _ {d _ {2}} \end{array} \right| = \det \left(s I _ {d _ {2}}\right) \det \left(s I _ {d _ {1}} + M s ^ {- 1} I _ {d _ {2}} M ^ {\top}\right) \tag {26} \\ = s ^ {d _ {2} - d _ {1}} \det \left(s ^ {2} \boldsymbol {I} _ {d _ {1}} + \boldsymbol {M} \boldsymbol {M} ^ {\top}\right) \\ \end{array}
+$$
+
+Thus, for every eigenvalue $-\lambda \leq 0$ of $-MM^{\top}$ , both $i\sqrt{\lambda}$ and $-i\sqrt{\lambda}$ are eigenvalues of $\mathbf{A}$ . Since $\mathrm{rank}(MM^{\top}) = \mathrm{rank}(M)$ , we have $\mathrm{rank}(\mathbf{A}) = 2\mathrm{rank}(M)$ . Thus, the rest of the eigenvalues of $\mathbf{A}$ are 0 and there is a total of $d - 2d_{1} = d_{2} - d_{1}$ of them. Notice that
+
+$$
+\frac {d _ {1}}{d _ {1} + d _ {2}} = \frac {1}{\frac {d _ {1} + d _ {2}}{d _ {1}}} = \frac {1}{1 + \frac {1}{r}} \tag {27}
+$$
+
+Let $f_{+}(\lambda) = i\sqrt{\lambda}, f_{-}(\lambda) = -i\sqrt{\lambda}$ , and let $(f_{+})_{*}\mu_{MM^{\top}}$ (resp., $(f_{-})_{*}\mu_{MM^{\top}}$ ) be the pushforward measure of $\mu_{MM^{\top}}$ by the function $f_{+}$ (resp., $f_{-}$ ). Thus, by the definition of the pushforward measure (Definition 6),
+
+$$
+\mu_ {\mathbf {A}} (i \lambda) = \left(1 - \frac {2}{1 + \frac {1}{r}}\right) \delta_ {0} (\lambda) + \frac {1}{1 + \frac {1}{r}} \left(f _ {+}\right) * \mu_ {M M ^ {\top}} (\lambda) + \frac {1}{1 + \frac {1}{r}} \left(f _ {-}\right) * \mu_ {M M ^ {\top}} (\lambda) \tag {28}
+$$
+
We compute the pushforwards $(f_{+})_{*}\mu_{MM^{\top}}$ , $(f_{-})_{*}\mu_{MM^{\top}}$ by performing the change of variables $y = \pm i\sqrt{\lambda}$ , under the assumption that $\mu_{MM^{\top}}(\lambda) = \rho_{MM^{\top}}(\lambda)\,\mathrm{d}\lambda$ :
+
+$$
+\int_ {\mathbb {R} _ {\geq 0}} g (\pm i \sqrt {\lambda}) \mathrm {d} \mu_ {M M ^ {\top}} (\lambda) = \int_ {\mathbb {R} _ {\geq 0}} g (\pm i \sqrt {\lambda}) \rho_ {M M ^ {\top}} (\lambda) d \lambda = \int_ {\pm i \mathbb {R} _ {\geq 0}} g (y) \rho_ {M M ^ {\top}} (| y | ^ {2}) 2 | y | \mathrm {d} | y |, \tag {29}
+$$
+
+which means that the density of $(f_{+})_{*}\mu_{MM^{\top}}$ at $y\in i\mathbb{R}_{\geq 0}$ is $2|y|\rho_{MM^{\top}}(|y|^{2})$ and the density of $(f_{-})_{*}\mu_{MM^{\top}}$ at $y\in -i\mathbb{R}_{\geq 0}$ is also $2|y|\rho_{MM^{\top}}(|y|^{2})$ .
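
The $\pm i\sqrt{\lambda}$ pairing used in this proof is easy to check directly; a small NumPy sketch (the dimensions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
dx, dy = 5, 8  # d_x <= d_y, so A has d_y - d_x zero eigenvalues
M = rng.standard_normal((dx, dy))
A = np.block([[np.zeros((dx, dx)), M], [-M.T, np.zeros((dy, dy))]])

eigs_A = np.linalg.eigvals(A)
s = np.sqrt(np.linalg.eigvalsh(M @ M.T))  # nonzero spectrum of A is {+-i s_j}
expected = np.concatenate([s, -s, np.zeros(dy - dx)])

# The spectrum of A is purely imaginary and matches the +-i sqrt(eig(M M^T)) pairing
assert np.allclose(eigs_A.real, 0, atol=1e-10)
assert np.allclose(np.sort(eigs_A.imag), np.sort(expected), atol=1e-10)
```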
+
Proposition B.3. The condition
+
+$$
+\forall P, Q \text {p o l y n o m i a l s} \langle P (\lambda), \lambda Q (\lambda) \rangle = 0 \Longrightarrow \langle \lambda P (\lambda), Q (\lambda) \rangle = 0 \tag {30}
+$$
+
+is sufficient for any sequence $(P_k)_{k\geq 0}$ of orthogonal polynomials of increasing degrees to satisfy a three-term recurrence of the form
+
+$$
+\gamma_ {k} P _ {k} (\lambda) = (\lambda - \alpha_ {k}) P _ {k - 1} (\lambda) - \beta_ {k} P _ {k - 2} (\lambda), \tag {31}
+$$
+
+where
+
+$$
+\gamma_ {k} = \frac {\left\langle \lambda P _ {k - 1} (\lambda) , P _ {k} (\lambda) \right\rangle}{\left\langle P _ {k} (\lambda) , P _ {k} (\lambda) \right\rangle}, \quad \alpha_ {k} = \frac {\left\langle \lambda P _ {k - 1} (\lambda) , P _ {k - 1} (\lambda) \right\rangle}{\left\langle P _ {k - 1} (\lambda) , P _ {k - 1} (\lambda) \right\rangle}, \quad \beta_ {k} = \frac {\left\langle \lambda P _ {k - 1} (\lambda) , P _ {k - 2} (\lambda) \right\rangle}{\left\langle P _ {k - 2} (\lambda) , P _ {k - 2} (\lambda) \right\rangle} \tag {32}
+$$
+
+Proof. Since $\lambda P_{k-1}(\lambda)$ is a polynomial of degree $k$ , and $(P_j)_{0 \leq j \leq k}$ is a basis of the polynomials of degree up to $k$ , we can write
+
+$$
+\lambda P _ {k - 1} (\lambda) = \sum_ {j = 0} ^ {k} \frac {\left\langle \lambda P _ {k - 1} , P _ {j} \right\rangle}{\left\langle P _ {j} , P _ {j} \right\rangle} P _ {j} (\lambda) \tag {33}
+$$
+
Now, remark that for all $j < k - 2$ , $\langle P_{k-1}, \lambda P_j \rangle = 0$ , because $\lambda P_j$ is a polynomial of degree at most $k - 2$ and $P_{k-1}$ is orthogonal to all polynomials of degree less than $k - 1$ . If we make use of the condition (30), this implies that $\langle \lambda P_{k-1}, P_j \rangle = 0$ for all $j < k - 2$ . Plugging this into (33), we obtain (31).
+
+Proposition B.4. Let $\Pi_t^\mathbb{R}$ be the set of polynomials with real coefficients and degree at most $t$ . For $t \geq 0$ even, the minimum of the problem
+
+$$
+\min _ {P _ {t} \in \Pi_ {t} ^ {\mathbb {R}}, P _ {t} (0) = 1} \int_ {i \mathbb {R} \backslash \{0 \}} | P _ {t} (\lambda) | ^ {2} | \lambda | \rho_ {\boldsymbol {M M} ^ {\top}} \left(| \lambda | ^ {2}\right) d | \lambda | \tag {34}
+$$
+
+is attained by an even polynomial with real coefficients.
+
+Proof. Since $\mathrm{d}\mu(i\lambda) \stackrel{\mathrm{def}}{=} |\lambda| \rho_{MM^{\top}}(|\lambda|^2) \mathrm{d}|\lambda|$ is supported in the imaginary axis and is symmetric with respect to 0, for all polynomials $P, Q$ ,
+
+$$
+\langle \lambda P (\lambda), Q (\lambda) \rangle = \int_ {i \mathbb {R}} \lambda P (\lambda) Q (\lambda) ^ {*} d \mu (\lambda) = - \int_ {i \mathbb {R}} P (\lambda) \lambda^ {*} Q (\lambda) ^ {*} d \mu (\lambda) = - \langle P (\lambda), \lambda Q (\lambda) \rangle . \tag {35}
+$$
+
+Hence, $\langle P(\lambda),\lambda Q(\lambda)\rangle = 0$ implies $\langle \lambda P(\lambda),Q(\lambda)\rangle = 0$ . By Proposition B.3, a three-term recurrence (31) and (32) for the orthonormal sequence $(\phi_t)_{t\geq 0}$ of polynomials holds.
+
+By Proposition B.5, the orthonormal polynomials $(\phi_t)_{t\geq 0}$ of even (resp. odd) degree are even (resp. odd) and have real coefficients. Hence, for all $t\geq 0$ even
+
+$$
+\frac {\sum_ {k = 0} ^ {t} \phi_ {k} (\lambda) \phi_ {k} (0) ^ {*}}{\sum_ {k = 0} ^ {t} | \phi_ {k} (0) | ^ {2}} = \frac {\sum_ {k = 0} ^ {t / 2} \phi_ {2 k} (\lambda) \phi_ {2 k} (0) ^ {*}}{\sum_ {k = 0} ^ {t / 2} | \phi_ {2 k} (0) | ^ {2}} \tag {36}
+$$
+
+is an even polynomial with real coefficients. By Theorem 2.3, this polynomial attains the minimum of the problem
+
+$$
+\min _ {P _ {t} \in \Pi_ {t} ^ {\mathbb {C}}, P _ {t} (0) = 1} \int_ {i \mathbb {R} \backslash \{0 \}} | P _ {t} (\lambda) | ^ {2} | \lambda | \rho_ {M M ^ {\top}} \left(| \lambda | ^ {2}\right) d | \lambda | \tag {37}
+$$
+
and, a fortiori, the minimum of the problem in (34), in which the minimization is restricted to polynomials with real coefficients instead of complex coefficients.
+
Proposition B.5. The polynomials $(\phi_t)_{t\geq 0}$ of the orthonormal sequence corresponding to the measure $\mu (i\lambda) = |\lambda |\rho_{MM^{\top}}(|\lambda |^{2})\,\mathrm{d}|\lambda |$ have real coefficients and are even (resp. odd) for even (resp. odd) $t$ .
+
+Proof. The proof is by induction. The base case follows from the choice $\phi_0 = 1$ . Assuming that $\phi_{k - 1}\in \mathbb{R}[X]$ by the induction hypothesis, we show that $\alpha_{k} = 0$ (where $\alpha_{k}$ is the coefficient from (31) and (32)):
+
+$$
+\begin{array}{l} \langle \lambda \phi_ {k - 1} (\lambda), \phi_ {k - 1} (\lambda) \rangle = \int_ {i \mathbb {R}} \lambda | \phi_ {k - 1} (\lambda) | ^ {2} | \lambda | \rho_ {M M ^ {\top}} (| \lambda | ^ {2}) d | \lambda | \\ = \int_ {\mathbb {R} _ {\geq 0}} i \lambda \left(\left| \phi_ {k - 1} (i \lambda) \right| ^ {2} - \left| \phi_ {k - 1} (- i \lambda) \right| ^ {2}\right) \lambda \rho_ {M M ^ {\top}} \left(\lambda^ {2}\right) d \lambda = 0 \tag {38} \\ \end{array}
+$$
+
+The last equality follows from $|\phi_{k-1}(i\lambda)|^2 = |\phi_{k-1}(-i\lambda)|^2$ , which holds because $\phi_{k-1}(i\lambda)^* = \phi_{k-1}(-i\lambda)$ , and in turn this is true because $\phi_{k-1} \in \mathbb{R}[X]$ by the induction hypothesis.
+
+Once we have seen that $\alpha_{k} = 0$ , it is straightforward to apply the induction hypothesis once again to show that $\phi_{k}$ also satisfies the even/odd property. Namely, for $k$ even (resp. odd), $\gamma_{k}P_{k} = \lambda P_{k - 1} - \beta_{k}P_{k - 2}$ , and the two polynomials in the right-hand side have even (resp. odd) degrees.
+
+Finally, $\phi_{k}$ must have real coefficients because $\phi_{k - 1}$ and $\phi_{k - 2}$ have real coefficients by the induction hypothesis, and the recurrence coefficient $\beta_{k}$ is real, as
+
+$$
+\begin{array}{l} \langle \lambda P _ {k - 1} (\lambda), P _ {k - 2} (\lambda) \rangle = \int_ {i \mathbb {R}} \lambda \phi_ {k - 1} (\lambda) \phi_ {k - 2} (\lambda) ^ {*} | \lambda | \rho_ {M M ^ {\top}} (| \lambda | ^ {2}) d | \lambda | \\ = \int_ {\mathbb {R} _ {\geq 0}} i \lambda \left(\phi_ {k - 1} (i \lambda) \phi_ {k - 2} (i \lambda) ^ {*} - \phi_ {k - 1} (i \lambda) ^ {*} \phi_ {k - 2} (i \lambda)\right) \lambda \rho_ {M M ^ {\top}} \left(\lambda^ {2}\right) d \lambda \\ = - \int_ {\mathbb {R} _ {\geq 0}} 2 \lambda \operatorname {I m} \left(\phi_ {k - 1} (i \lambda) \phi_ {k - 2} (i \lambda) ^ {*}\right) \lambda \rho_ {M M ^ {\top}} \left(\lambda^ {2}\right) d \lambda \in \mathbb {R}. \tag {39} \\ \end{array}
+$$
+
+
+
+Proposition B.6. Let $t \geq 0$ even. Assume that on $\mathbb{R}_{>0}$ , the expected spectral density $\mu_{MM^{\top}}$ has Radon-Nikodym derivative $\rho_{MM^{\top}}$ with respect to the Lebesgue measure. If
+
+$$
+Q _ {t / 2} ^ {\star} \stackrel {\text {d e f}} {=} \underset { \begin{array}{c} P _ {t / 2} \in \Pi_ {t / 2} ^ {\mathbb {R}}, \\ P _ {t / 2} (0) = 1 \end{array} } {\arg \min } \int_ {\mathbb {R} > 0} P _ {t / 2} (\lambda) ^ {2} \mathrm {d} \mu_ {- \boldsymbol {A} ^ {2}} (\lambda), \tag {40}
+$$
+
+and
+
+$$
+P _ {t} ^ {\star} \stackrel {\text {d e f}} {=} \underset { \begin{array}{c} P _ {t} \in \Pi_ {t} ^ {\mathbb {R}}, \\ P _ {t} (0) = 1 \end{array} } {\arg \min } \int_ {i \mathbb {R} \backslash \{0 \}} | P _ {t} (\lambda) | ^ {2} | \lambda | \rho_ {M M ^ {\top}} (| \lambda | ^ {2}) d | \lambda |, \tag {41}
+$$
+
then $P_{t}^{\star}(\lambda) = Q_{t / 2}^{\star}(-\lambda^{2})$ .
+
Proof. First, remark that the definitions in (40) and (41) are well posed because the arg mins are unique by Theorem 2.3. Without loss of generality, assume that $d_x \leq d_y$ (otherwise switch the players), and let $r \stackrel{\mathrm{def}}{=} d_x / d_y \leq 1$ . Since
+
+$$
+- \boldsymbol {A} ^ {2} = \left[ \begin{array}{c c} M M ^ {\top} & 0 \\ 0 & M ^ {\top} M \end{array} \right], \tag {42}
+$$
+
each eigenvalue of $MM^{\top} \in \mathbb{R}^{d_x \times d_x}$ is an eigenvalue of $-A^2$ with doubled multiplicity, and the remaining eigenvalues are zero. Hence, we have $\mu_{-A^2} = \left(1 - 2 / (1 + \frac{1}{r})\right)\delta_0 + 2\mu_{MM^{\top}} / (1 + \frac{1}{r})$ . Thus, for all $t \geq 0$ ,
+
+$$
+Q _ {t} ^ {\star} = \arg \min _ { \begin{array}{c} P _ {t} \in \Pi_ {t} ^ {\mathbb {R}}, \\ P _ {t} (0) = 1 \end{array} } \int_ {\mathbb {R} _ {> 0}} P _ {t} (\lambda) ^ {2} \mathrm {d} \mu_ {- \boldsymbol {A} ^ {2}} (\lambda) = \underset { \begin{array}{c} P _ {t} \in \Pi_ {t} ^ {\mathbb {R}}, \\ P _ {t} (0) = 1 \end{array} } {\arg \min } \int_ {\mathbb {R} _ {> 0}} P _ {t} (\lambda) ^ {2} \rho_ {\boldsymbol {M M} ^ {\top}} (\lambda) \mathrm {d} \lambda \tag {43}
+$$
+
+By Proposition B.4, for an even $t \geq 0$ the minimum in (41) is attained by an even polynomial with real coefficients. Hence,
+
+$$
\begin{array}{l} \min_{\substack{P_{t}\in \Pi_{t}^{\mathbb{R}},\\ P_{t}(0) = 1}}\int_{i\mathbb{R}\setminus \{0\}}|P_{t}(\lambda)|^{2}|\lambda |\rho_{\boldsymbol{M}\boldsymbol{M}^{\top}}(|\lambda |^{2}) \mathrm{d}|\lambda | = \min_{\substack{P_{t / 2}\in \Pi_{t / 2}^{\mathbb{R}},\\ P_{t / 2}(0) = 1}}\int_{i\mathbb{R}\setminus \{0\}}|P_{t / 2}(\lambda^{2})|^{2}|\lambda |\rho_{\boldsymbol{M}\boldsymbol{M}^{\top}}(|\lambda |^{2}) \mathrm{d}|\lambda | \\ = 2\min_{\substack{P_{t / 2}\in \Pi^{\mathbb{R}}_{t / 2},\\ P_{t / 2}(0) = 1}}\int_{\mathbb{R}_{>0}}|P_{t / 2}((i\lambda)^{2})|^{2}\lambda \rho_{MM^{\top}}(\lambda^{2}) \mathrm{d}\lambda \\ = 2\min_{\substack{P_{t / 2}\in \Pi^{\mathbb{R}}_{t / 2},\\ P_{t / 2}(0) = 1}}\int_{\mathbb{R}_{>0}}P_{t / 2}(\lambda^{2})^{2}\lambda \rho_{MM^{\top}}(\lambda^{2}) \mathrm{d}\lambda \\ = \min _ {\substack {P _ {t / 2} \in \Pi_ {t / 2} ^ {\mathbb {R}}, \\ P _ {t / 2} (0) = 1}} \int_ {\mathbb {R} _ {> 0}} P _ {t / 2} (\lambda) ^ {2} \rho_ {\boldsymbol {M M} ^ {\top}} (\lambda) \mathrm {d} \lambda \tag{44} \\ \end{array}
+$$
+
+Moreover, for any polynomial $Q_{t/2}$ that attains the minimum on the right-most term, the polynomial $P_t(\lambda) = Q_{t/2}(-\lambda^2)$ attains the minimum on the left-most term. In particular, using (43), $P_t^\star(\lambda) \stackrel{\mathrm{def}}{=} Q_{t/2}^\star(-\lambda^2)$ attains the minimum on the left-most term.
+
Theorem 3.1. Suppose that Assumption 1 holds and that the expected spectral distribution of $MM^{\top}$ is absolutely continuous with respect to the Lebesgue measure. Then, the method (11) is average-case optimal for bilinear games when $h_t$ , $m_t$ are chosen to be the coefficients of the average-case optimal method for the minimization of $\frac{1}{2}\| F(\pmb{x})\|^2$ .
+
+Proof. Making use of Theorem 2.1 and Proposition B.2, we obtain that for any first-order method using the vector field $F$ ,
+
+$$
+\mathbb {E} \left[ \operatorname {d i s t} \left(\boldsymbol {x} _ {t}, \mathcal {X} ^ {\star}\right) \right] = R ^ {2} \int_ {\mathbb {C} \backslash \{0 \}} | P _ {t} (\lambda) | ^ {2} \mathrm {d} \mu_ {\boldsymbol {A}} (\lambda) = \frac {2 R ^ {2}}{1 + \frac {1}{r}} \int_ {i \mathbb {R} \backslash \{0 \}} | P _ {t} (\lambda) | ^ {2} | \lambda | \rho_ {M M ^ {\top}} (| \lambda | ^ {2}) \mathrm {d} | \lambda | \tag {45}
+$$
+
Let $Q_{t/2}^{\star}, P_t^{\star}$ be as defined in (40) and (41). For $t \geq 0$ even, iteration $t$ of the average-case optimal method for the bilinear game must satisfy
+
+$$
+\boldsymbol {x} _ {t} - P _ {\mathcal {X} ^ {*}} \left(\boldsymbol {x} _ {0}\right) = P _ {t} ^ {*} (\boldsymbol {A}) \left(\boldsymbol {x} _ {0} - P _ {\mathcal {X} ^ {*}} \left(\boldsymbol {x} _ {0}\right)\right) = Q _ {t / 2} ^ {*} (- \boldsymbol {A} ^ {2}) \left(\boldsymbol {x} _ {0} - P _ {\mathcal {X} ^ {*}} \left(\boldsymbol {x} _ {0}\right)\right) \tag {46}
+$$
+
On the other hand, the first-order methods for the minimization of the function $\frac{1}{2}\| F(\pmb{x})\|^2$ make use of the vector field $\nabla \left(\frac{1}{2}\| F(\pmb{x})\|^2\right) = \pmb{A}^\top (\pmb{A}\pmb{x} + \pmb{b}) = -\pmb{A}^2 (\pmb{x} - \pmb{x}^\star)$ . Let $\mu_{-\pmb{A}^2}$ be the spectral density of $-\pmb{A}^2$ . By Theorem 2.1, the average-case optimal first-order method for the minimization problem is the one for which the residual polynomial $P_t$ (Proposition 2.1) minimizes the functional $\int_{\mathbb{R}} P_t^2 \mathrm{d}\mu_{-\pmb{A}^2}$ . That is, the residual polynomial is $Q_t^\star$ . From (46), we see that the $t$ -th iterate of the average-case optimal method for $F$ is equal to the $t/2$ -th iterate of the average-case optimal method for $\nabla \left(\frac{1}{2}\| F(\pmb{x})\|^2\right)$ .
+
+# C PROOFS OF THEOREM 4.1 AND THEOREM 4.2
+
+Theorem 4.1. Under Assumption 4 and the assumptions of Theorem 2.1, the following algorithm is optimal in the average case, with $\mathbf{y}_{-1} = \mathbf{y}_0 = \mathbf{x}_0$ :
+
+$$
+\boldsymbol {y} _ {t} = a _ {t} \boldsymbol {y} _ {t - 1} + (1 - a _ {t}) \boldsymbol {y} _ {t - 2} + b _ {t} F (\boldsymbol {y} _ {t - 1})
+$$
+
+$$
+\boldsymbol {x} _ {t} = \frac {B _ {t}}{B _ {t} + \beta_ {t}} \boldsymbol {x} _ {t - 1} + \frac {\beta_ {t}}{B _ {t} + \beta_ {t}} \boldsymbol {y} _ {t}, \quad \beta_ {t} = \phi_ {t} ^ {2} (0), \quad B _ {t} = B _ {t - 1} + \beta_ {t - 1}, \quad B _ {0} = 0. \tag {16}
+$$
+
+where $(\phi_k(0))_{k\geq 0}$ can be computed using the three-term recurrence (upon normalization). Moreover, $\mathbb{E}_{(\pmb {A},\pmb{x}^{\star},\pmb{x}_0)}\mathrm{dist}(\pmb {x}_t,\mathcal{X}^\star)$ converges to zero at rate $1 / B_{t}$.
+
+Proof. We prove by induction that
+
+$$
+\boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star} = \frac {\sum_ {k = 0} ^ {t} \phi_ {k} (\boldsymbol {A}) \phi_ {k} (0) ^ {*}}{\sum_ {k = 0} ^ {t} \phi_ {k} (0) ^ {2}} \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \tag {47}
+$$
+
+The base step $t = 0$ holds trivially because $\phi_0 = 1$ . Assume that (47) holds for $t - 1$ . Subtracting $x^{\star}$ from (16), we have
+
+$$
+\boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star} = \frac {\sum_ {k = 0} ^ {t - 1} \phi_ {k} (0) ^ {2}}{\sum_ {k = 0} ^ {t} \phi_ {k} (0) ^ {2}} \left(\boldsymbol {x} _ {t - 1} - \boldsymbol {x} ^ {\star}\right) + \frac {\phi_ {t} (0) ^ {2}}{\sum_ {k = 0} ^ {t} \phi_ {k} (0) ^ {2}} \left(\boldsymbol {y} _ {t} - \boldsymbol {x} ^ {\star}\right) \tag {48}
+$$
+
+If
+
+$$
+\phi_ {t} (0) ^ {2} \left(\boldsymbol {y} _ {t} - \boldsymbol {x} ^ {\star}\right) = \phi_ {t} (0) \phi_ {t} (\boldsymbol {A}) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right), \tag {49}
+$$
+
+by the induction hypothesis for $t - 1$ and (48), we have
+
+$$
+\begin{array}{l} \boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star} = \frac {\sum_ {k = 0} ^ {t - 1} \phi_ {k} (0) \phi_ {k} (\boldsymbol {A})}{\sum_ {k = 0} ^ {t} \phi_ {k} (0) ^ {2}} \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) + \frac {\phi_ {t} (0) \phi_ {t} (\boldsymbol {A})}{\sum_ {k = 0} ^ {t} \phi_ {k} (0) ^ {2}} \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \tag {50} \\ = \frac {\sum_ {k = 0} ^ {t} \phi_ {k} (0) \phi_ {k} (\boldsymbol {A})}{\sum_ {k = 0} ^ {t} \phi_ {k} (0) ^ {2}} \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right), \\ \end{array}
+$$
+
+which concludes the proof of (47). The only thing left is to show (49), again by induction. The base case follows readily from $\pmb{y}_0 = \pmb{x}_0$ in (16). Dividing by $\phi_t(0)^2$ , we rewrite (49) as
+
+$$
+\boldsymbol {y} _ {t} - \boldsymbol {x} ^ {\star} = \frac {\phi_ {t} (\boldsymbol {A})}{\phi_ {t} (0)} \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) = \psi_ {t} (\boldsymbol {A}) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right), \tag {51}
+$$
+
+where $\psi_t$ is the $t$-th residual orthogonal polynomial of the sequence. By Assumption 4, $\psi_t$ must satisfy the recurrence in (15). Subtracting $\pmb{x}^{\star}$ from the $\pmb{y}_t$ update in (16), applying the induction hypothesis and then the recurrence in (15), we obtain
+
+$$
+\begin{array}{l} \boldsymbol {y} _ {t} - \boldsymbol {x} ^ {\star} = a _ {t} \left(\boldsymbol {y} _ {t - 1} - \boldsymbol {x} ^ {\star}\right) + \left(1 - a _ {t}\right) \left(\boldsymbol {y} _ {t - 2} - \boldsymbol {x} ^ {\star}\right) + b _ {t} F \left(\boldsymbol {y} _ {t - 1}\right) \\ = a _ {t} \left(\boldsymbol {y} _ {t - 1} - \boldsymbol {x} ^ {\star}\right) + \left(1 - a _ {t}\right) \left(\boldsymbol {y} _ {t - 2} - \boldsymbol {x} ^ {\star}\right) + b _ {t} \boldsymbol {A} \left(\boldsymbol {y} _ {t - 1} - \boldsymbol {x} ^ {\star}\right) \\ = a _ {t} \psi_ {t - 1} (\boldsymbol {A}) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) + (1 - a _ {t}) \psi_ {t - 2} (\boldsymbol {A}) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) + b _ {t} \boldsymbol {A} \psi_ {t - 1} (\boldsymbol {A}) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \tag {52} \\ = \psi_ {t} (\boldsymbol {A}) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right), \\ \end{array}
+$$
+
+thus concluding the proof of (49).
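Unrolling the recursion in (16) shows that $\boldsymbol{x}_t$ is the $\beta$-weighted average of the iterates $\boldsymbol{y}_0, \dots, \boldsymbol{y}_t$. The following sketch (our own illustration, not code accompanying the paper) checks this identity on a toy scalar sequence with arbitrary positive weights:

```python
# Hypothetical check that the recursive update of (16),
#   x_t = B_t/(B_t + beta_t) * x_{t-1} + beta_t/(B_t + beta_t) * y_t,
# equals the explicit weighted average x_t = sum_k beta_k y_k / sum_k beta_k.

def averaged_iterates(ys, betas):
    """Apply the recursive averaging of (16) to a list of y-iterates."""
    assert len(ys) == len(betas)
    x = ys[0]          # x_0 = y_0
    B = 0.0            # B_0 = 0
    xs = [x]
    for t in range(1, len(ys)):
        B += betas[t - 1]                                  # B_t = B_{t-1} + beta_{t-1}
        x = B / (B + betas[t]) * x + betas[t] / (B + betas[t]) * ys[t]
        xs.append(x)
    return xs

def explicit_average(ys, betas, t):
    """Explicit weighted average over the first t+1 iterates."""
    num = sum(b * y for b, y in zip(betas[: t + 1], ys[: t + 1]))
    return num / sum(betas[: t + 1])

# Toy scalar sequence with arbitrary positive weights.
ys = [1.0, 0.5, 0.25, 0.125, 0.0625]
betas = [1.0, 2.0, 4.0, 8.0, 16.0]
xs = averaged_iterates(ys, betas)
for t in range(len(ys)):
    assert abs(xs[t] - explicit_average(ys, betas, t)) < 1e-12
```

The same bookkeeping applies verbatim to vector iterates, since the averaging acts coordinate-wise.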
+
+
+
+Proposition C.1. Suppose that Assumption 5 holds with $C = 0$ , that is, the circular support of $\mu$ is centered at 0. Then, the basis of orthonormal polynomials for the scalar product
+
+$$
+\langle P, Q \rangle = \int_ {D _ {R, 0}} P (\lambda) Q (\lambda) ^ {*} \mathrm {d} \mu (\lambda) \quad \text {is} \quad \phi_ {k} (\lambda) = \frac {\lambda^ {k}}{K _ {k , R}}, \quad \forall k \geq 0, \tag {53}
+$$
+
+where $K_{k,R} = \sqrt{2\pi\int_0^R r^{2k}\,\mathrm{d}\mu_R(r)}$.
+
+Proof. First, we will show that if $\mu$ satisfies Assumption 5 with $C = 0$, then $\langle \lambda^j, \lambda^k \rangle = 0$ for $j, k \geq 0$ with $j \neq k$ (without loss of generality, suppose that $j > k$).
+
+$$
+\begin{array}{l} \langle \lambda^ {j}, \lambda^ {k} \rangle = \int_ {D _ {R, 0}} \lambda^ {j} \left(\lambda^ {*}\right) ^ {k} d \mu (\lambda) = \int_ {D _ {R, 0}} \lambda^ {j - k} | \lambda | ^ {2 k} d \mu (\lambda) \\ = \int_ {0} ^ {R} \frac {1}{2 \pi} \int_ {0} ^ {2 \pi} (r e ^ {i \theta}) ^ {j - k} r ^ {2 k} \mathrm {d} \theta \mathrm {d} \mu_ {R} (r) = \frac {1}{2 \pi} \int_ {0} ^ {2 \pi} e ^ {i \theta (j - k)} \mathrm {d} \theta \int_ {0} ^ {R} r ^ {j + k} \mathrm {d} \mu_ {R} (r) \tag {54} \\ = \frac {e ^ {i 2 \pi (j - k)} - 1}{2 \pi i (j - k)} \int_ {0} ^ {R} r ^ {j + k} \mathrm {d} \mu_ {R} (r) = 0 \\ \end{array}
+$$
+
+And for all $k\geq 0$
+
+$$
+\langle \lambda^ {k}, \lambda^ {k} \rangle = \int_ {D _ {R, 0}} | \lambda^ {k} | ^ {2} \mathrm {d} \mu (\lambda) = \int_ {0} ^ {R} \int_ {0} ^ {2 \pi} r ^ {2 k} \mathrm {d} \theta \mathrm {d} \mu_ {R} (r) = 2 \pi \int_ {0} ^ {R} r ^ {2 k} \mathrm {d} \mu_ {R} (r) = K _ {k , R} ^ {2}. \tag {55}
+$$
+
+
+
+Proposition 4.1. If $\mu$ satisfies Assumption 5, the sequence of orthonormal polynomials is $(\phi_t)_{t\geq 0}$
+
+$$
+\phi_ {t} (\lambda) = \frac {(\lambda - C) ^ {t}}{K _ {t , R}}, \quad \text {where} \quad K _ {t, R} = \sqrt {2 \pi \int_ {0} ^ {R} r ^ {2 t} \mathrm {d} \mu_ {R} (r)}. \tag {17}
+$$
+
+Proof. The result follows from Proposition C.1 using the change of variables $z \to z + C$. To compute the measure $\mu_R$ for the uniform measure on $D_{C,R}$, we change to polar coordinates:
+
+$$
+\begin{array}{l} \int_ {D _ {C, R}} f (\lambda) \mathrm {d} \mu (\lambda) = \frac {1}{\pi R ^ {2}} \int_ {0} ^ {R} \int_ {0} ^ {2 \pi} f (C + r e ^ {i \theta}) r \mathrm {d} \theta \mathrm {d} r = \int_ {0} ^ {R} \int_ {0} ^ {2 \pi} f (C + r e ^ {i \theta}) \mathrm {d} \theta \mathrm {d} \mu_ {R} (r). \\ \Rightarrow \mathrm {d} \mu_ {R} (r) = \frac {r}{\pi R ^ {2}} \mathrm {d} r \tag {56} \\ \end{array}
+$$
+
+And
+
+$$
+\int_ {0} ^ {R} r ^ {2 t} \mathrm {d} \mu_ {R} (r) = \frac {1}{\pi R ^ {2}} \int_ {0} ^ {R} r ^ {2 t + 1} \mathrm {d} r = \frac {1}{\pi} \frac {R ^ {2 t}}{2 t + 2} \Rightarrow K _ {t, R} = R ^ {t} / \sqrt {t + 1}. \tag {57}
+$$
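The moment computation in (57) can be verified numerically. The sketch below (our illustration) compares a midpoint quadrature of $\int_0^R r^{2t}\,\mathrm{d}\mu_R(r)$, with $\mathrm{d}\mu_R(r) = \frac{r}{\pi R^2}\mathrm{d}r$ from (56), against the closed form $\frac{1}{\pi}\frac{R^{2t}}{2t+2}$:

```python
# Check of (57) for the uniform measure on the disk of radius R:
#   \int_0^R r^{2t} d mu_R(r) = \int_0^R r^{2t+1} / (pi R^2) dr = R^{2t} / (pi (2t + 2)).

import math

def moment(t, R, n=50_000):
    """Midpoint Riemann sum of \\int_0^R r^{2t+1} / (pi R^2) dr."""
    h = R / n
    return sum(((i + 0.5) * h) ** (2 * t + 1) for i in range(n)) * h / (math.pi * R**2)

R = 2.0
for t in range(5):
    closed_form = R ** (2 * t) / (math.pi * (2 * t + 2))
    assert abs(moment(t, R) - closed_form) / closed_form < 1e-4
```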
+
+
+
+Theorem 4.2. Given an initialization $\pmb{x}_0$ (with $\pmb{y}_0 = \pmb{x}_0$), if Assumption 5 is fulfilled with $R < C$ and the assumptions of Theorem 2.1 hold, then the average-case optimal first-order method is
+
+$$
+\boldsymbol {y} _ {t} = \boldsymbol {y} _ {t - 1} - \frac {1}{C} F (\boldsymbol {y} _ {t - 1}), \quad \beta_ {t} = C ^ {2 t} / K _ {t, R} ^ {2}, \quad B _ {t} = B _ {t - 1} + \beta_ {t - 1},
+$$
+
+$$
+\boldsymbol {x} _ {t} = \frac {B _ {t}}{B _ {t} + \beta_ {t}} \boldsymbol {x} _ {t - 1} + \frac {\beta_ {t}}{B _ {t} + \beta_ {t}} \boldsymbol {y} _ {t}. \tag {18}
+$$
+
+Moreover, $\mathbb{E}_{(A,\pmb{x}^{\star},\pmb{x}_0)}\mathrm{dist}(\pmb{x}_t,\mathcal{X}^\star)$ converges to zero at rate $1 / B_{t}$ .
+
+Proof. By Proposition 4.1, the sequence of residual orthogonal polynomials is given by $\psi_t(\lambda) = \phi_t(\lambda) / \phi_t(0) = \left(1 - \frac{\lambda}{C}\right)^t$. Hence, Assumption 4 is fulfilled with $a_t = 1, b_t = -\frac{1}{C}$, as $\psi_t(\lambda) = \psi_{t-1}(\lambda) - \frac{\lambda}{C} \psi_{t-1}(\lambda)$. We apply Theorem 4.1 and make use of the fact that $\phi_k(0)^2 = \frac{C^{2k}}{K_{k,R}^2}$. See Proposition D.3 for the rate on $\mathrm{dist}(\boldsymbol{x}_t, \mathcal{X}^\star)$.
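The update (18) can also be sketched numerically. The snippet below (our illustration, not the authors' implementation) runs it on a toy diagonal complex system whose eigenvalues are sampled uniformly from the disk $D(C, R)$ with $R < C$; a diagonal matrix stands in for a general normal one, so $F(\boldsymbol{y}) = \boldsymbol{A}(\boldsymbol{y} - \boldsymbol{x}^\star)$ acts coordinate-wise, and the weights use $\beta_t = C^{2t}/K_{t,R}^2 = (C/R)^{2t}(t+1)$ from (57):

```python
# Sketch of the average-case optimal method (18) on a toy diagonal complex system.

import random

random.seed(0)
C, R, d, T = 2.0, 1.0, 50, 60

# Eigenvalues drawn uniformly from the disk centered at C with radius R.
eigs = []
while len(eigs) < d:
    u, v = random.uniform(-R, R), random.uniform(-R, R)
    if u * u + v * v <= R * R:
        eigs.append(complex(C + u, v))

x_star = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)]

def dist2(z):
    return sum(abs(zi - si) ** 2 for zi, si in zip(z, x_star))

y = [0j] * d                  # y_0 = x_0 = 0
x = list(y)
B = 0.0
errs = [dist2(x)]
for t in range(1, T + 1):
    # y_t = y_{t-1} - (1/C) F(y_{t-1}), with F(y) = A (y - x_star).
    y = [yi - (li / C) * (yi - si) for yi, li, si in zip(y, eigs, x_star)]
    beta_prev = (C / R) ** (2 * (t - 1)) * t          # beta_{t-1}
    beta = (C / R) ** (2 * t) * (t + 1)               # beta_t
    B += beta_prev                                     # B_t = B_{t-1} + beta_{t-1}
    x = [B / (B + beta) * xi + beta / (B + beta) * yi for xi, yi in zip(x, y)]
    errs.append(dist2(x))

assert errs[-1] < 1e-6 * errs[0]   # with R < C the distance collapses quickly
```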
+
+# D PROOF OF PROPOSITION 5.2
+
+Proposition D.1. Suppose that the assumptions of Theorem 4.2 hold with the probability measure $\mu_R$ fulfilling $\mu_R([r,R]) = \Omega((R - r)^\kappa)$ for $r$ in $[r_0,R]$ for some $r_0 \in [0,R)$ and for some $\kappa \in \mathbb{Z}$ . Then,
+
+$$
+\lim _ {t \rightarrow \infty} \frac {\frac {C ^ {2 t}}{K _ {t , R} ^ {2}}}{\sum_ {k = 0} ^ {t} \frac {C ^ {2 k}}{K _ {k , R} ^ {2}}} = 1 - \frac {R ^ {2}}{C ^ {2}}. \tag {58}
+$$
+
+Proof. Given $\epsilon > 0$ , let $c_{\epsilon} \in \mathbb{Z}_{\geq 0}$ be the minimum such that
+
+$$
+\frac {1}{\sum_ {i = 0} ^ {c _ {\epsilon}} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {i}} \leq (1 + \epsilon) \frac {1}{\sum_ {i = 0} ^ {\infty} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {i}} = (1 + \epsilon) \left(1 - \frac {R ^ {2}}{C ^ {2}}\right) \tag {59}
+$$
+
+Define $Q_{t,R} \stackrel{\mathrm{def}}{=} \frac{R^{2t}}{K_{t,R}^2}$ . Then,
+
+$$
+\frac {\frac {C ^ {2 t}}{K _ {t , R} ^ {2}}}{\sum_ {k = 0} ^ {t} \frac {C ^ {2 k}}{K _ {k , R} ^ {2}}} = \frac {\frac {C ^ {2 t}}{R ^ {2 t}} Q _ {t , R}}{\sum_ {k = 0} ^ {t} \frac {C ^ {2 k}}{R ^ {2 k}} Q _ {k , R}} = \frac {Q _ {t , R}}{\sum_ {k = 0} ^ {t} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {t - k} Q _ {k , R}} \tag {60}
+$$
+
+Now, on one hand, using that $Q_{t,R}$ is increasing in $t$,
+
+$$
+\frac {Q _ {t , R}}{\sum_ {k = 0} ^ {t} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {t - k} Q _ {k , R}} \geq \frac {1}{\sum_ {k = 0} ^ {t} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {t - k}} \geq \frac {1}{\sum_ {k = 0} ^ {\infty} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {k}} = 1 - \frac {R ^ {2}}{C ^ {2}} \tag {61}
+$$
+
+On the other hand, for $t \geq c_{\epsilon}$ ,
+
+$$
+\frac {Q _ {t , R}}{\sum_ {k = 0} ^ {t} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {t - k} Q _ {k , R}} \leq \frac {Q _ {t , R}}{\sum_ {k = t - c _ {\epsilon}} ^ {t} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {t - k} Q _ {k , R}} = \frac {Q _ {t , R}}{\sum_ {k = t - c _ {\epsilon}} ^ {t} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {t - k} \left(Q _ {t , R} - \int_ {k} ^ {t} \frac {d}{d s} Q _ {s , R} d s\right)} \tag {62}
+$$
+
+Thus, we want to upper-bound $\int_{k}^{t}\frac{d}{ds} Q_{s,R}\mathrm{d}s$ . First, notice that
+
+$$
+\frac {d}{d s} Q _ {s, R} = \frac {d}{d s} \left(\int_ {0} ^ {R} \left(\frac {r}{R}\right) ^ {2 s} \mathrm {d} \mu_ {R} (r)\right) ^ {- 1} = \frac {\int_ {0} ^ {R} \left(\frac {r}{R}\right) ^ {2 s} \left(- \log \left(\frac {r}{R}\right)\right) \mathrm {d} \mu_ {R} (r)}{\left(\int_ {0} ^ {R} \left(\frac {r}{R}\right) ^ {2 s} \mathrm {d} \mu_ {R} (r)\right) ^ {2}} \tag {63}
+$$
+
+By concavity of the logarithm function we obtain $\log \left(\frac{R}{r}\right) \leq \frac{R}{r_0} - 1$ for $r \in [r_0, R]$ . Choose $r_0$ close enough to $R$ so that $\frac{R}{r_0} - 1 \leq \epsilon / c_{\epsilon}$ . We obtain that
+
+$$
+\int_ {0} ^ {R} \left(\frac {r}{R}\right) ^ {2 s} \log \left(\frac {R}{r}\right) d \mu_ {R} (r) \leq \int_ {0} ^ {r _ {0}} \left(\frac {r}{R}\right) ^ {2 s} \log \left(\frac {R}{r}\right) d \mu_ {R} (r) + \int_ {r _ {0}} ^ {R} \left(\frac {r}{R}\right) ^ {2 s} \left(\frac {R}{r _ {0}} - 1\right) d \mu_ {R} (r). \tag {64}
+$$
+
+Thus,
+
+$$
+\int_ {k} ^ {t} \frac {d}{d s} Q _ {s, R} \mathrm {d} s \leq \int_ {k} ^ {t} \frac {\int_ {0} ^ {r _ {0}} \left(\frac {r}{R}\right) ^ {2 s} \log \left(\frac {R}{r}\right) \mathrm {d} \mu_ {R} (r)}{\left(\int_ {0} ^ {R} \left(\frac {r}{R}\right) ^ {2 s} \mathrm {d} \mu_ {R} (r)\right) ^ {2}} \mathrm {d} s + \int_ {k} ^ {t} \frac {\int_ {r _ {0}} ^ {R} \left(\frac {r}{R}\right) ^ {2 s} \left(\frac {R}{r _ {0}} - 1\right) \mathrm {d} \mu_ {R} (r)}{\left(\int_ {0} ^ {R} \left(\frac {r}{R}\right) ^ {2 s} \mathrm {d} \mu_ {R} (r)\right) ^ {2}} \mathrm {d} s. \tag {65}
+$$
+
+Using that $\log x\leq x$ , for $k\in [t - c_{\epsilon},t]$ we can bound the first term of (65) as
+
+$$
+\begin{array}{l} \int_ {k} ^ {t} \frac {\int_ {0} ^ {r _ {0}} \left(\frac {r}{R}\right) ^ {2 s} \log \left(\frac {R}{r}\right) \mathrm {d} \mu_ {R} (r)}{\left(\int_ {0} ^ {R} \left(\frac {r}{R}\right) ^ {2 s} \mathrm {d} \mu_ {R} (r)\right) ^ {2}} \mathrm {d} s \leq \int_ {k} ^ {t} \frac {\int_ {0} ^ {r _ {0}} \left(\frac {r}{R}\right) ^ {2 s - 1} \mathrm {d} \mu_ {R} (r)}{\left(\int_ {0} ^ {R} \left(\frac {r}{R}\right) ^ {2 s} \mathrm {d} \mu_ {R} (r)\right) ^ {2}} \mathrm {d} s \\ \leq (t - k) \frac {\left(\frac {r _ {0}}{R}\right) ^ {2 k - 1}}{\left(\int_ {0} ^ {R} \left(\frac {r}{R}\right) ^ {2 t} \mathrm {d} \mu_ {R} (r)\right) ^ {2}} \tag {66} \\ \leq c _ {\epsilon} \left(\frac {r _ {0}}{R}\right) ^ {2 (t - c _ {\epsilon}) - 1} Q _ {t, R} ^ {2} \\ \leq c _ {\epsilon} \left(\frac {r _ {0}}{R}\right) ^ {2 (t - c _ {\epsilon}) - 1} \frac {1}{(c _ {1}) ^ {2}} (2 t + 1) ^ {2 \kappa} \xrightarrow {t \to \infty} 0. \\ \end{array}
+$$
+
+In the last inequality we use that by Proposition D.2, for $t$ large enough, $Q_{t,R} = \frac{R^{2t}}{K_{t,R}^2} \leq (2t + 1)^{\kappa} / c_1$. For $k \in [t - c_\epsilon, t]$, the second term of (65) can be bounded as
+
+$$
+\begin{array}{l} \int_ {k} ^ {t} \frac {\int_ {r _ {0}} ^ {R} \left(\frac {r}{R}\right) ^ {2 s} \left(\frac {R}{r _ {0}} - 1\right) \mathrm {d} \mu_ {R} (r)}{\left(\int_ {0} ^ {R} \left(\frac {r}{R}\right) ^ {2 s} \mathrm {d} \mu_ {R} (r)\right) ^ {2}} \mathrm {d} s \leq (t - k) \left(\frac {R}{r _ {0}} - 1\right) \frac {1}{\int_ {0} ^ {R} \left(\frac {r}{R}\right) ^ {2 t} \mathrm {d} \mu_ {R} (r)} \\ \leq c _ {\epsilon} \left(\frac {R}{r _ {0}} - 1\right) \frac {1}{\int_ {0} ^ {R} \left(\frac {r}{R}\right) ^ {2 t} \mathrm {d} \mu_ {R} (r)} \tag {67} \\ \le \epsilon Q _ {t, R}. \\ \end{array}
+$$
+
+From (65), (66) and (67), we obtain that for $t$ large enough, for $k \in [t - c_{\epsilon}, t]$ ,
+
+$$
+\int_ {k} ^ {t} \frac {d}{d s} Q _ {s, R} d s \leq 2 \epsilon Q _ {t, R}. \tag {68}
+$$
+
+Hence, we can bound the right-hand side of (62):
+
+$$
+\begin{array}{l} \frac {Q _ {t , R}}{\sum_ {k = t - c _ {\epsilon}} ^ {t} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {t - k} \left(Q _ {t , R} - \int_ {k} ^ {t} \frac {d}{d s} Q _ {s , R} d s\right)} \leq \frac {Q _ {t , R}}{\sum_ {k = t - c _ {\epsilon}} ^ {t} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {t - k} \left(Q _ {t , R} - 2 \epsilon Q _ {t , R}\right)} \tag {69} \\ = \frac {1}{(1 - 2 \epsilon) \sum_ {k = t - c _ {\epsilon}} ^ {t} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {t - k}} = \frac {1}{(1 - 2 \epsilon) \sum_ {k = 0} ^ {c _ {\epsilon}} \left(\frac {R ^ {2}}{C ^ {2}}\right) ^ {k}} \leq \frac {1 + \epsilon}{1 - 2 \epsilon} \left(1 - \frac {R ^ {2}}{C ^ {2}}\right). \\ \end{array}
+$$
+
+The last inequality follows from the definition of $c_{\epsilon}$ in (59). Since $\epsilon$ is arbitrary, by the sandwich theorem applied on (60), (61) and (69),
+
+$$
+\lim _ {t \rightarrow \infty} \frac {\frac {C ^ {2 t}}{K _ {t , R} ^ {2}}}{\sum_ {k = 0} ^ {t} \frac {C ^ {2 k}}{K _ {k , R} ^ {2}}} = 1 - \frac {R ^ {2}}{C ^ {2}}. \tag {70}
+$$
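For the uniform measure on the disk, (57) gives $C^{2k}/K_{k,R}^2 = (C/R)^{2k}(k+1)$, so the limit (58) can be checked numerically. The sketch below (our illustration) does so for $C = 2$, $R = 1$, where the limit is $1 - R^2/C^2 = 0.75$:

```python
# Numerical illustration of the limit (58) for the uniform disk measure,
# where C^{2k} / K_{k,R}^2 = (C/R)^{2k} (k + 1) by (57).

def ratio(t, C, R):
    q = (C / R) ** 2
    terms = [q ** k * (k + 1) for k in range(t + 1)]
    return terms[-1] / sum(terms)

C, R = 2.0, 1.0
limit = 1 - (R / C) ** 2           # = 0.75
assert abs(ratio(200, C, R) - limit) < 1e-2
# The approximation improves monotonically with t.
assert abs(ratio(400, C, R) - limit) < abs(ratio(100, C, R) - limit)
```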
+
+
+
+Proposition D.2. Under the assumptions of Theorem 4.2, there exists $c_{1} > 0$ such that for $t$ large enough,
+
+$$
+K _ {t, R} ^ {2} \geq c _ {1} R ^ {2 t} (2 t + 1) ^ {- \kappa}. \tag {71}
+$$
+
+Proof. By the assumption on $\mu_R$ , there exist $r_0, c_1, \kappa > 0$ such that
+
+$$
+\begin{array}{l} K _ {t, R} ^ {2} \stackrel {\mathrm {d e f}} {=} 2 \pi \int_ {0} ^ {R} r ^ {2 t} \mathrm {d} \mu_ {R} (r) = 2 \pi \int_ {0} ^ {r _ {0}} r ^ {2 t} \mathrm {d} \mu_ {R} (r) + 2 \pi \int_ {r _ {0}} ^ {R} r ^ {2 t} \mathrm {d} \mu_ {R} (r) \\ \geq 2 \pi c _ {1} \int_ {r _ {0}} ^ {R} r ^ {2 t} (R - r) ^ {\kappa - 1} \mathrm {d} r = - 2 \pi c _ {1} \int_ {0} ^ {r _ {0}} r ^ {2 t} (R - r) ^ {\kappa - 1} \mathrm {d} r + 2 \pi c _ {1} \int_ {0} ^ {R} r ^ {2 t} (R - r) ^ {\kappa - 1} \mathrm {d} r \\ \geq - 2 \pi c _ {1} R r _ {0} ^ {2 t} + 2 \pi c _ {1} R ^ {2 t + \kappa} B (2 t + 1, \kappa). \tag {72} \\ \end{array}
+$$
+
+where the beta function $B(x,y)$ is defined as
+
+$$
+B (x, y) \stackrel {\text {def}} {=} \int_ {0} ^ {1} r ^ {x - 1} (1 - r) ^ {y - 1} \mathrm {d} r. \tag {73}
+$$
+
+Using the link between the beta function and the gamma function $B(x,y) = \Gamma (x)\Gamma (y) / \Gamma (x + y)$ , and Stirling's approximation, we obtain that for fixed $y$ and large $x$ ,
+
+$$
+B (x, y) \sim \Gamma (y) x ^ {- y}. \tag {74}
+$$
+
+Hence, for $t$ large enough, $B(2t + 1,\kappa)\sim \Gamma (\kappa)(2t + 1)^{-\kappa} = (\kappa -1)!(2t + 1)^{-\kappa}$. From (72), we then obtain that there exists $c_{1}^{\prime}$ depending only on $\kappa$ and $r_0$ such that for $t$ large enough,
+
+$$
+K _ {t, R} ^ {2} \geq - 2 \pi c _ {1} R r _ {0} ^ {2 t} + 2 \pi c _ {1} R ^ {2 t + \kappa} (\kappa - 1)! (2 t + 1) ^ {- \kappa} \geq c _ {1} ^ {\prime} R ^ {2 t} (2 t + 1) ^ {- \kappa}. \tag {75}
+$$
+
+
+
+Proposition 5.2. Suppose that the assumptions of Theorem 4.2 hold with $\mu_R \in \mathcal{P}([0, R])$ fulfilling $\mu_R([r, R]) = \Omega((R - r)^\kappa)$ for $r$ in $[r_0, R]$ for some $r_0 \in [0, R)$ and for some $\kappa \in \mathbb{Z}$ . Then, the average-case asymptotically optimal algorithm is, with $\pmb{y}_0 = \pmb{x}_0$ :
+
+$$
+\boldsymbol {y} _ {t} = \boldsymbol {y} _ {t - 1} - \frac {1}{C} F (\boldsymbol {y} _ {t - 1}),
+$$
+
+$$
+\boldsymbol {x} _ {t} = \left(\frac {R}{C}\right) ^ {2} \boldsymbol {x} _ {t - 1} + \left(1 - \left(\frac {R}{C}\right) ^ {2}\right) \boldsymbol {y} _ {t}. \tag {20}
+$$
+
+Moreover, the convergence rate for this algorithm is asymptotically the same as that of the optimal algorithm in Theorem 4.2. Namely, $\lim_{t\to \infty}\mathbb{E}\left[\mathrm{dist}(\pmb{x}_t,\mathcal{X}^\star)\right]B_t = 1$.
+
+Proof. The proof follows directly from Theorem 4.2 and Proposition D.1. See (77) and (79) in Proposition D.3 for the statement regarding the convergence rate.
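As a companion sketch (again our own illustration), the constant-coefficient scheme (20) can be run on the same kind of toy diagonal complex system as before; the only change relative to (18) is that the averaging weight is fixed at $(R/C)^2$:

```python
# Sketch of the asymptotically optimal method (20): the gradient step of (18)
# combined with a fixed averaging weight rho = (R/C)^2, on a toy diagonal
# complex system with eigenvalues drawn uniformly from the disk D(C, R), R < C.

import random

random.seed(1)
C, R, d, T = 2.0, 1.0, 50, 60
rho = (R / C) ** 2

eigs = []
while len(eigs) < d:
    u, v = random.uniform(-R, R), random.uniform(-R, R)
    if u * u + v * v <= R * R:
        eigs.append(complex(C + u, v))

x_star = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)]

def dist2(z):
    return sum(abs(zi - si) ** 2 for zi, si in zip(z, x_star))

y = [0j] * d                  # y_0 = x_0 = 0
x = list(y)
err0 = dist2(x)
for t in range(T):
    # y_t = y_{t-1} - (1/C) F(y_{t-1}), with F(y) = A (y - x_star).
    y = [yi - (li / C) * (yi - si) for yi, li, si in zip(y, eigs, x_star)]
    # Constant-coefficient averaging of (20): x_t = rho * x_{t-1} + (1 - rho) * y_t.
    x = [rho * xi + (1 - rho) * yi for xi, yi in zip(x, y)]

assert dist2(x) < 1e-6 * err0
```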
+
+Proposition D.3. For the average-case optimal algorithm (18),
+
+$$
+\mathbb {E} \operatorname {dist} \left(\boldsymbol {x} _ {t}, \mathcal {X} ^ {\star}\right) = \xi_ {\mathrm {opt}} (t) \stackrel {\text {def}} {=} \frac {1}{\sum_ {k = 0} ^ {t} \frac {C ^ {2 k}}{K _ {k , R} ^ {2}}} \tag {76}
+$$
+
+For the average-case asymptotically optimal algorithm (20),
+
+$$
+\mathbb {E} \operatorname {dist} \left(\boldsymbol {x} _ {t}, \mathcal {X} ^ {\star}\right) = \xi_ {\mathrm {asymp}} (t) \stackrel {\text {def}} {=} \left(1 - \left(\frac {R}{C}\right) ^ {2}\right) ^ {2} \sum_ {k = 1} ^ {t} \frac {K _ {k , R} ^ {2}}{C ^ {2 k}} \left(\frac {R}{C}\right) ^ {4 (t - k)} + \left(\frac {R}{C}\right) ^ {4 t} \tag {77}
+$$
+
+For the iterates $\mathbf{y}_t$ in (18), i.e. gradient descent with stepsize $1 / C$ , we have
+
+$$
+\mathbb {E} \operatorname {dist} \left(\boldsymbol {y} _ {t}, \mathcal {X} ^ {\star}\right) = \xi_ {\mathrm {GD}} (t) \stackrel {\text {def}} {=} \frac {K _ {t , R} ^ {2}}{C ^ {2 t}} \tag {78}
+$$
+
+Moreover, for all $t \geq 0$ , we have $\xi_{opt}(t) \leq \xi_{asymp}(t)$ , and under the assumptions of (5.1),
+
+$$
+\lim _ {t \rightarrow \infty} \frac {\xi_ {\mathrm {opt}} (t)}{\xi_ {\mathrm {asymp}} (t)} = 1, \quad \lim _ {t \rightarrow \infty} \frac {\xi_ {\mathrm {opt}} (t)}{\xi_ {\mathrm {GD}} (t)} = \lim _ {t \rightarrow \infty} \frac {\xi_ {\mathrm {asymp}} (t)}{\xi_ {\mathrm {GD}} (t)} = 1 - \left(\frac {R}{C}\right) ^ {2} \tag {79}
+$$
+
+Proof. To show (76), (77), (78), we use the expression $\pmb{x}_t - \pmb{x}^\star = P_t(\pmb{A})(\pmb{x}_0 - \pmb{x}^\star)$ (Proposition 2.1) and then evaluate $\| P_t\|_\mu^2 = \int_{\mathbb{C}\setminus \{0\}}|P_t|^2 \, \mathrm{d}\mu$ (Theorem 2.1).
+
+For (76), the value of $\| P_t\|_\mu^2$ follows directly from Theorem 2.3, which states that the value for the optimal residual polynomial $P_{t}$ is
+
+$$
+\frac {1}{\sum_ {k = 0} ^ {t} | \phi_ {k} (0) | ^ {2}} = \frac {1}{\sum_ {k = 0} ^ {t} \frac {C ^ {2 k}}{K _ {k , R} ^ {2}}}. \tag {80}
+$$
+
+A simple proof by induction shows that for the asymptotically optimal algorithm (20), the following expression holds for all $t \geq 0$ :
+
+$$
+\boldsymbol {x} _ {t} - \boldsymbol {x} ^ {\star} = \left(\left(\frac {R}{C}\right) ^ {2 t} + \left(1 - \left(\frac {R}{C}\right) ^ {2}\right) \sum_ {k = 1} ^ {t} \left(1 - \frac {\boldsymbol {A}}{C}\right) ^ {k} \left(\frac {R}{C}\right) ^ {2 (t - k)}\right) \left(\boldsymbol {x} _ {0} - \boldsymbol {x} ^ {\star}\right) \tag {81}
+$$
+
+Thus,
+
+$$
+\begin{array}{l} P _ {t} (\lambda) = \left(\frac {R}{C}\right) ^ {2 t} + \left(1 - \left(\frac {R}{C}\right) ^ {2}\right) \sum_ {k = 1} ^ {t} \left(1 - \frac {\lambda}{C}\right) ^ {k} \left(\frac {R}{C}\right) ^ {2 (t - k)} \tag {82} \\ = \left(\frac {R}{C}\right) ^ {2 t} \phi_ {0} (\lambda) + \left(1 - \left(\frac {R}{C}\right) ^ {2}\right) \sum_ {k = 1} ^ {t} \frac {K _ {k , R}}{C ^ {k}} \phi_ {k} (\lambda) \left(\frac {R}{C}\right) ^ {2 (t - k)}, \\ \end{array}
+$$
+
+which concludes the proof of (77), as
+
+$$
+\left\| P _ {t} \right\| _ {\mu} ^ {2} = \left(1 - \left(\frac {R}{C}\right) ^ {2}\right) ^ {2} \sum_ {k = 1} ^ {t} \frac {K _ {k , R} ^ {2}}{C ^ {2 k}} \left(\frac {R}{C}\right) ^ {4 (t - k)} + \left(\frac {R}{C}\right) ^ {4 t}. \tag {83}
+$$
+
+By equation (52),
+
+$$
+\boldsymbol {y} _ {t} - \boldsymbol {x} ^ {\star} = \left(1 - \frac {\boldsymbol {A}}{C}\right) ^ {t} \left(\boldsymbol {y} _ {0} - \boldsymbol {x} ^ {\star}\right) = \frac {K _ {t , R}}{C ^ {t}} \phi_ {t} (\boldsymbol {A}) \left(\boldsymbol {y} _ {0} - \boldsymbol {x} ^ {\star}\right) \tag {84}
+$$
+
+Thus, for the $\mathbf{y}_t$ iterates, $\| P_t\|_\mu^2 = \frac{K_{t,R}^2}{C^{2t}}$ , and (78) follows.
+
+Now, $\xi_{\mathrm{opt}}(t) \leq \xi_{\mathrm{asymp}}(t)$ for all $t \geq 0$ is a consequence of $\xi_{\mathrm{opt}}(t)$ being the rate of the optimal algorithm, and
+
+$$
+\lim _ {t \rightarrow \infty} \frac {\xi_ {\mathrm {o p t}} (t)}{\xi_ {\mathrm {G D}} (t)} = \lim _ {t \rightarrow \infty} \frac {\frac {C ^ {2 t}}{K _ {t , R} ^ {2}}}{\sum_ {k = 0} ^ {t} \frac {C ^ {2 k}}{K _ {k , R} ^ {2}}} = 1 - \frac {R ^ {2}}{C ^ {2}} \tag {85}
+$$
+
+follows from Proposition D.1. To show $\lim_{t\to \infty}\frac{\xi_{\mathrm{asymp}}(t)}{\xi_{\mathrm{GD}}(t)} = 1 - \frac{R^2}{C^2}$, which concludes the proof, we rewrite
+
+$$
+\xi_ {\text {a s y m p}} (t) = \left(\frac {R}{C}\right) ^ {2 t} \left(\left(1 - \left(\frac {R}{C}\right) ^ {2}\right) ^ {2} \sum_ {k = 1} ^ {t} \frac {1}{Q _ {k , R}} \left(\frac {R}{C}\right) ^ {2 (t - k)} + \left(\frac {R}{C}\right) ^ {2 t}\right), \tag {86}
+$$
+
+using that by definition, $Q_{k,R} = R^{2k} / K_{k,R}^2$ . Now, let $c_{\epsilon} \in \mathbb{Z}_{\geq 0}$ such that
+
+$$
+\sum_ {k = c _ {\epsilon}} ^ {\infty} \left(\frac {R}{C}\right) ^ {2 k} \leq \epsilon . \tag {87}
+$$
+
+Using the same argument as in Proposition D.1 (see (68)), for $t$ large enough and $k \in [t - c_{\epsilon}, t]$ ,
+
+$$
+\int_ {k} ^ {t} \frac {d}{d s} Q _ {s, R} d s \leq 2 \epsilon Q _ {t, R}. \tag {88}
+$$
+
+Hence, for $t$ large enough,
+
+$$
+\begin{array}{l} \left(1 - \left(\frac {R}{C}\right) ^ {2}\right) ^ {2} \sum_ {k = 1} ^ {t} \frac {1}{Q _ {k , R}} \left(\frac {R}{C}\right) ^ {2 (t - k)} + \left(\frac {R}{C}\right) ^ {2 t} \\ = \left(1 - \left(\frac {R}{C}\right) ^ {2}\right) ^ {2} \left(\sum_ {k = t - c _ {\epsilon}} ^ {t} \frac {1}{Q _ {t , R} - \int_ {k} ^ {t} \frac {d}{d s} Q _ {s , R}} \left(\frac {R}{C}\right) ^ {2 (t - k)} + \sum_ {k = 1} ^ {t - c _ {\epsilon}} \frac {1}{Q _ {k , R}} \left(\frac {R}{C}\right) ^ {2 (t - k)}\right) + \left(\frac {R}{C}\right) ^ {2 t} \\ \leq \left(1 - \left(\frac {R}{C}\right) ^ {2}\right) ^ {2} \left(\frac {1}{(1 - 2 \epsilon) Q _ {t , R}} \sum_ {k = t - c _ {\epsilon}} ^ {t} \left(\frac {R}{C}\right) ^ {2 (t - k)} + \sum_ {k = 1} ^ {t - c _ {\epsilon}} \left(\frac {R}{C}\right) ^ {2 (t - k)}\right) + \epsilon \\ \leq \left(1 - \left(\frac {R}{C}\right) ^ {2}\right) \left(\frac {1}{(1 - 2 \epsilon) Q _ {t , R}} + \left(1 - \left(\frac {R}{C}\right) ^ {2}\right) \epsilon\right) + \epsilon , \tag {89} \\ \end{array}
+$$
+
+which can be made arbitrarily close to $\left(1 - \left(\frac{R}{C}\right)^2\right)\frac{1}{Q_{t,R}}$ by taking $\epsilon > 0$ small enough. Plugging this into (86), we obtain that we can make $\xi_{\mathrm{asymp}}(t)$ arbitrarily close to $\left(1 - \left(\frac{R}{C}\right)^2\right)\left(\frac{R}{C}\right)^{2t}\frac{1}{Q_{t,R}} = \left(1 - \left(\frac{R}{C}\right)^2\right)\xi_{\mathrm{GD}}(t)$ by taking $t$ large enough.
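For the uniform measure on the disk, where $K_{k,R}^2/C^{2k} = (R/C)^{2k}/(k+1)$ by (57), the three rates in (76), (77) and (78) can be evaluated directly. The sketch below (our illustration) checks $\xi_{\mathrm{opt}}(t) \leq \xi_{\mathrm{asymp}}(t)$ and the limits in (79) numerically:

```python
# Numerical check of the rates in Proposition D.3 for the uniform disk measure,
# with q = (R/C)^2 and K_{k,R}^2 / C^{2k} = q^k / (k + 1) by (57).

def xi_opt(t, q):
    return 1.0 / sum((k + 1) / q ** k for k in range(t + 1))

def xi_gd(t, q):
    return q ** t / (t + 1)

def xi_asymp(t, q):
    tail = sum(q ** k / (k + 1) * q ** (2 * (t - k)) for k in range(1, t + 1))
    return (1 - q) ** 2 * tail + q ** (2 * t)

q = (1.0 / 2.0) ** 2          # R = 1, C = 2
for t in [50, 100, 200]:
    # The optimal rate never exceeds the asymptotically optimal one.
    assert xi_opt(t, q) <= xi_asymp(t, q) * (1 + 1e-9)
assert abs(xi_asymp(200, q) / xi_gd(200, q) - (1 - q)) < 1e-2
assert abs(xi_opt(200, q) / xi_asymp(200, q) - 1.0) < 1e-2
```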
\ No newline at end of file
diff --git a/averagecaseaccelerationforbilineargamesandnormalmatrices/images.zip b/averagecaseaccelerationforbilineargamesandnormalmatrices/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f5eb2c8028d91e1bbe04238cc9ba0180cea8ea4b
--- /dev/null
+++ b/averagecaseaccelerationforbilineargamesandnormalmatrices/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c62bd8349adbc9d9d35ad10c830aad5d11c26ffe8fe3c0d2edd7c6870a670adf
+size 1321387
diff --git a/averagecaseaccelerationforbilineargamesandnormalmatrices/layout.json b/averagecaseaccelerationforbilineargamesandnormalmatrices/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7b756ce1527f964063a59020bf87f93c58ebcb97
--- /dev/null
+++ b/averagecaseaccelerationforbilineargamesandnormalmatrices/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:395f072646c7eb93d055c99c57c69e24abe6207ad45f7170dc8b1a68ff7b8775
+size 1015280
diff --git a/bagoftricksforadversarialtraining/0e009d16-83be-41b1-8610-beb69bfecf8e_content_list.json b/bagoftricksforadversarialtraining/0e009d16-83be-41b1-8610-beb69bfecf8e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ae23f7ed581e1e784f36cc0e88d0865c6e3e4648
--- /dev/null
+++ b/bagoftricksforadversarialtraining/0e009d16-83be-41b1-8610-beb69bfecf8e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bba8ff54b6646d8bba05dd67b75e002898d92de3a061710022b2c7352230d899
+size 119972
diff --git a/bagoftricksforadversarialtraining/0e009d16-83be-41b1-8610-beb69bfecf8e_model.json b/bagoftricksforadversarialtraining/0e009d16-83be-41b1-8610-beb69bfecf8e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d876d0af13bac6271175adffe3770df5242dd5e1
--- /dev/null
+++ b/bagoftricksforadversarialtraining/0e009d16-83be-41b1-8610-beb69bfecf8e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:abbd115ef5e78121f9f7798d521fd487c54b84136cd3a62fc352df1a8fb6f563
+size 152380
diff --git a/bagoftricksforadversarialtraining/0e009d16-83be-41b1-8610-beb69bfecf8e_origin.pdf b/bagoftricksforadversarialtraining/0e009d16-83be-41b1-8610-beb69bfecf8e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..71f21c8f5251818a9bab2d6fb5d367c0522d8150
--- /dev/null
+++ b/bagoftricksforadversarialtraining/0e009d16-83be-41b1-8610-beb69bfecf8e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f637c42abf474215b0325aca5dd646756b5f5fba666f89a51508eefc83438bd
+size 1046442
diff --git a/bagoftricksforadversarialtraining/full.md b/bagoftricksforadversarialtraining/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f959b95cb368eab98f9010924b8c017db9662acb
--- /dev/null
+++ b/bagoftricksforadversarialtraining/full.md
@@ -0,0 +1,397 @@
+# BAG OF TRICKS FOR ADVERSARIAL TRAINING
+
+Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu*
+
+Department of Computer Science & Technology, Institute for AI, BNRist Center
+
+Tsinghua-Bosch Joint ML Center, THBI Lab, Tsinghua University, Beijing, 100084 China
+
+{pty17, yangxiaol9, dyp17}@mails.tsinghua.edu.cn, {suhangss, dcszj}@tsinghua.edu.cn
+
+# ABSTRACT
+
+Adversarial training (AT) is one of the most effective strategies for promoting model robustness. However, recent benchmarks show that most of the proposed improvements on AT are less effective than simply early stopping the training procedure. This counter-intuitive fact motivates us to investigate the implementation details of tens of AT methods. Surprisingly, we find that the basic settings (e.g., weight decay, training schedule, etc.) used in these methods are highly inconsistent. In this work, we provide comprehensive evaluations on CIFAR-10, focusing on the effects of mostly overlooked training tricks and hyperparameters for adversarially trained models. Our empirical observations suggest that adversarial robustness is much more sensitive to some basic training settings than we thought. For example, a slightly different value of weight decay can reduce the model robust accuracy by more than $7\%$, which is likely to override the potential gains induced by the proposed methods. We distill a baseline training setting and re-implement previous defenses to achieve new state-of-the-art results1. These facts also call for more attention to the overlooked confounders when benchmarking defenses.
+
+# 1 INTRODUCTION
+
+Adversarial training (AT) has been one of the most effective defense strategies against adversarial attacks (Biggio et al., 2013; Szegedy et al., 2014; Goodfellow et al., 2015). Based on the primary AT frameworks like PGD-AT (Madry et al., 2018), many improvements have been proposed from different perspectives, and demonstrate promising results (detailed in Sec. 2). However, the recent benchmarks (Croce & Hein, 2020b; Chen & Gu, 2020) find that simply early stopping the training procedure of PGD-AT (Rice et al., 2020) can attain the gains from almost all the previously proposed improvements, including the state-of-the-art TRADES (Zhang et al., 2019b).
+
+This fact is somewhat striking since TRADES also executes early stopping (one epoch after decaying the learning rate) in their code implementation. Besides, the reported robustness of PGD-AT in Rice et al. (2020) is much higher than in Madry et al. (2018), even without early stopping. This paradox motivates us to check the implementation details of these seminal works. We find that TRADES uses weight decay of $2 \times 10^{-4}$, Gaussian PGD initialization as $\delta_0 \sim \mathcal{N}(0, \alpha I)$, and eval mode of batch normalization (BN) when crafting adversarial examples, while Rice et al. (2020) use weight decay of $5 \times 10^{-4}$, uniform PGD initialization as $\delta_0 \sim \mathcal{U}(-\epsilon, \epsilon)$, and train mode of BN to generate adversarial examples. In our experiments on CIFAR-10 (e.g., Table 8), the two slightly different settings can change the robust accuracy by $\sim 5\%$, which is significant according to the reported benchmarks.
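To make the contrast concrete, here is a minimal sketch (ours, not from any of the cited codebases) of the two PGD initializations for a single perturbation coordinate, treating $\alpha$ as the noise scale of the Gaussian start purely for illustration:

```python
# Toy comparison of the two PGD initializations mentioned in the text:
# TRADES-style Gaussian noise versus uniform noise in the eps-ball, both
# followed by projection of the perturbation back into [-eps, eps].

import random

random.seed(0)
eps, alpha = 8 / 255, 2 / 255    # common CIFAR-10 values; alpha = PGD step size

def gaussian_init():
    d = random.gauss(0, alpha)          # alpha taken as std-dev (an assumption)
    return max(-eps, min(eps, d))       # project back into the eps-ball

def uniform_init():
    return random.uniform(-eps, eps)

n = 100_000
g = [gaussian_init() for _ in range(n)]
u = [uniform_init() for _ in range(n)]

# Both starts stay inside the ball, but the Gaussian start concentrates near 0
# while the uniform start spreads over the whole ball.
assert all(-eps <= d <= eps for d in g + u)
mean_abs_g = sum(map(abs, g)) / n
mean_abs_u = sum(map(abs, u)) / n
assert mean_abs_g < mean_abs_u
```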
+
+To have a comprehensive study, we further investigate the implementation details of tens of papers working on the AT methods, some of which are summarized in Table 1. We find that even using the same model architectures, the basic hyperparameter settings (e.g., weight decay, learning rate schedule, etc.) used in these papers are highly inconsistent and customized, which could affect the model performance and may override the gains from the methods themselves. Under this situation, if we directly benchmark these methods using their released code or checkpoints, some actually effective improvements would be underestimated due to the improper hyperparameter settings.
+
+Our contributions. We evaluate the effects of a wide range of basic training tricks (e.g., warmup, early stopping, weight decay, batch size, BN mode, etc.) on the adversarially trained models. Our empirical results suggest that improper training settings can largely degrade the model performance,
+
+Table 1: Hyperparameter settings and tricks used to implement different AT methods on CIFAR-10. We convert the training steps into epochs, and provide code links for reference in Table 11. Compared to the model architectures, the listed settings are easily neglected and rarely unified.
+
+| Method | l.r. | Total epochs (l.r. decay) | Batch size | Weight decay | Early stop (train / attack) | Warmup (l.r. / perturb.) |
|---|---|---|---|---|---|---|
| Madry et al. (2018) | 0.1 | 200 (100, 150) | 128 | \( 2 \times 10^{-4} \) | No / No | No / No |
| Cai et al. (2018) | 0.1 | 300 (150, 250) | 200 | \( 5 \times 10^{-4} \) | No / No | No / Yes |
| Zhang et al. (2019b) | 0.1 | 76 (75) | 128 | \( 2 \times 10^{-4} \) | Yes / No | No / No |
| Wang et al. (2019) | 0.01 | 120 (60, 100) | 128 | \( 1 \times 10^{-4} \) | No / Yes | No / No |
| Qin et al. (2019) | 0.1 | 110 (100, 105) | 256 | \( 2 \times 10^{-4} \) | No / No | No / Yes |
| Mao et al. (2019) | 0.1 | 80 (50, 60) | 50 | \( 2 \times 10^{-4} \) | No / No | No / No |
| Carmon et al. (2019) | 0.1 | 100 (cosine anneal) | 256 | \( 5 \times 10^{-4} \) | No / No | No / No |
| Alayrac et al. (2019) | 0.2 | 64 (38, 46, 51) | 128 | \( 5 \times 10^{-4} \) | No / No | No / No |
| Shafahi et al. (2019b) | 0.1 | 200 (100, 150) | 128 | \( 2 \times 10^{-4} \) | No / No | No / No |
| Zhang et al. (2019a) | 0.05 | 105 (79, 90, 100) | 256 | \( 5 \times 10^{-4} \) | No / No | No / No |
| Zhang & Wang (2019) | 0.1 | 200 (60, 90) | 60 | \( 2 \times 10^{-4} \) | No / No | No / No |
| Atzmon et al. (2019) | 0.01 | 100 (50) | 32 | \( 1 \times 10^{-4} \) | No / No | No / No |
| Wong et al. (2020) | 0~0.2 | 30 (one cycle) | 128 | \( 5 \times 10^{-4} \) | No / No | Yes / No |
| Rice et al. (2020) | 0.1 | 200 (100, 150) | 128 | \( 5 \times 10^{-4} \) | Yes / No | No / No |
| Ding et al. (2020) | 0.3 | 128 (51, 77, 102) | 128 | \( 2 \times 10^{-4} \) | No / No | No / No |
| Pang et al. (2020a) | 0.01 | 200 (100, 150) | 50 | \( 1 \times 10^{-4} \) | No / No | No / No |
| Zhang et al. (2020) | 0.1 | 120 (60, 90, 110) | 128 | \( 2 \times 10^{-4} \) | No / Yes | No / No |
| Huang et al. (2020) | 0.1 | 200 (cosine anneal) | 256 | \( 5 \times 10^{-4} \) | No / No | Yes / No |
| Cheng et al. (2020) | 0.1 | 200 (80, 140, 180) | 128 | \( 5 \times 10^{-4} \) | No / No | No / No |
| Lee et al. (2020) | 0.1 | 200 (100, 150) | 128 | \( 2 \times 10^{-4} \) | No / No | No / No |
| Xu et al. (2020) | 0.1 | 120 (60, 90) | 256 | \( 1 \times 10^{-4} \) | No / No | No / No |
+
+while this degeneration may be mistakenly ascribed to the methods themselves. We provide a baseline recipe for PGD-AT on CIFAR-10 as an example, and demonstrate the generality of the recipe by training other frameworks like TRADES. As seen in Table 16, the retrained TRADES models achieve new state-of-the-art performance on the AutoAttack benchmark (Croce & Hein, 2020b).
+
+Although our empirical conclusions may not generalize to other datasets or tasks, we reveal that adversarially trained models can be sensitive to certain training settings that are usually neglected in previous work. These results also encourage the community to re-implement previously proposed defenses with fine-tuned training settings to better explore their potential.
+
+# 2 RELATED WORK
+
+In this section, we introduce related work on adversarial defenses and recent benchmarks. We detail adversarial attacks in Appendix A.1.
+
+# 2.1 ADVERSARIAL DEFENSES
+
+To alleviate the adversarial vulnerability of deep learning models, many defense strategies have been proposed, but most of them can eventually be evaded by adaptive attacks (Carlini & Wagner, 2017b; Athalye et al., 2018). Other more theoretically guaranteed routines include training provably robust networks (Dvijotham et al., 2018a;b; Hein & Andriushchenko, 2017; Wong & Kolter, 2018) and obtaining certified models via randomized smoothing (Cohen et al., 2019). While these methods are promising, they currently do not match the state-of-the-art robustness under empirical evaluations.
+
+The idea of adversarial training (AT) stems from the seminal work of Goodfellow et al. (2015), while other AT frameworks like PGD-AT (Madry et al., 2018) and TRADES (Zhang et al., 2019b) underlie the winning solutions in the adversarial competitions (Kurakin et al., 2018; Brendel et al., 2020). Based on these primary AT frameworks, many improvements have been proposed by incorporating mechanisms inspired by other domains, including ensemble learning (Tramèr et al., 2018; Pang et al., 2019), metric learning (Mao et al., 2019; Li et al., 2019; Pang et al., 2020c), generative modeling (Jiang et al., 2018; Pang et al., 2018b; Wang & Yu, 2019; Deng et al., 2020), semi-supervised learning (Carmon et al., 2019; Alayrac et al., 2019; Zhai et al., 2019), and self-supervised
+
+Table 2: Test accuracy (\%) under different early stopping and warmup settings on CIFAR-10. The model is ResNet-18 (results on WRN-34-10 are in Table 14). For early stopping attack iter., the notation, e.g., 40 / 70 denotes the epochs at which the tolerance step increases by one (Zhang et al., 2020). For warmup, the learning rate and the maximal perturbation linearly increase from zero to their preset values in $10/15/20$ epochs.
+
+| | Base | Early stop 40 / 70 | Early stop 40 / 100 | Early stop 60 / 100 | l.r. warmup 10 | l.r. warmup 15 | l.r. warmup 20 | Perturb. warmup 10 | Perturb. warmup 15 | Perturb. warmup 20 |
|---|---|---|---|---|---|---|---|---|---|---|
| Clean | 82.52 | 86.52 | 86.56 | 85.67 | 82.45 | 82.64 | 82.31 | 82.64 | 82.75 | 82.78 |
| PGD-10 | 53.58 | 52.65 | 53.22 | 52.90 | 53.43 | 53.29 | 53.35 | 53.65 | 53.27 | 53.62 |
| AA | 48.51 | 46.6 | 46.04 | 45.96 | 48.26 | 48.12 | 48.37 | 48.44 | 48.17 | 48.48 |
+
+learning (Hendrycks et al., 2019; Chen et al., 2020a;b; Naseer et al., 2020). On the other hand, due to the high computational cost of AT, many efforts have been devoted to accelerating the training procedure by reusing computations (Shafahi et al., 2019b; Zhang et al., 2019a), adaptive adversarial steps (Wang et al., 2019; Zhang et al., 2020), or one-step training (Wong et al., 2020; Liu et al., 2020; Vivek B & Venkatesh Babu, 2020). Subsequent works address the side effects (e.g., catastrophic overfitting) caused by these fast AT methods (Andriushchenko & Flammarion, 2020; Li et al., 2020).
+
+# 2.2 ADVERSARIAL BENCHMARKS
+
+Due to the large number of proposed defenses, several benchmarks have been developed to rank the adversarial robustness of existing methods. Dong et al. (2020) perform large-scale experiments to generate robustness curves, which are used for evaluating typical defenses. Croce & Hein (2020b) propose AutoAttack, an ensemble of four selected attacks; they apply AutoAttack to dozens of previous defenses and provide a comprehensive leaderboard. Chen & Gu (2020) propose the black-box RayS attack and establish a similar leaderboard for defenses. In this paper, we mainly apply the PGD attack and AutoAttack as two common ways to evaluate the models.
+
+Beyond adversarial robustness, other efforts introduce augmented datasets for assessing robustness against general corruptions or perturbations. Mu & Gilmer (2019) introduce MNIST-C with a suite of 15 corruptions applied to the MNIST test set, while Hendrycks & Dietterich (2019) introduce ImageNet-C and ImageNet-P with common corruptions and perturbations on natural images. Evaluating robustness on these datasets can reflect the generality of the proposed defenses and avoid overfitting to certain attack patterns (Engstrom et al., 2019; Tramèr & Boneh, 2019).
+
+# 3 BAG OF TRICKS
+
+Our overarching goal is to investigate how usually overlooked implementation details affect the performance of adversarially trained models. Our experiments are conducted on CIFAR-10 (Krizhevsky & Hinton, 2009) under the $\ell_{\infty}$ threat model with maximal perturbation $\epsilon = 8 / 255$ , without access to additional data. We evaluate the models under the 10-step PGD attack (PGD-10) (Madry et al., 2018) and AutoAttack (AA) (Croce & Hein, 2020b). For the PGD attack, we apply the untargeted mode using ground-truth labels, a step size of $2 / 255$ , and 5 restarts for evaluation / no restart for training. For AutoAttack, we apply the standard version, with no restarts for AutoPGD and FAB, compared to 5 restarts in the plus version. We consider some basic training tricks and perform ablation studies on each of them, based on the default training setting described below:
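The $\ell_{\infty}$ PGD update used throughout this evaluation can be sketched in a few lines. Below is a minimal NumPy sketch (uniform initialization, signed-gradient ascent steps, projection onto the $\epsilon$-ball and the pixel range), demonstrated on a toy linear loss with an analytic input gradient; `pgd_linfty` and `grad_fn` are illustrative names, not from any released codebase.

```python
import numpy as np

def pgd_linfty(x, grad_fn, eps=8/255, step=2/255, iters=10, rng=None):
    """Untargeted l_inf PGD: uniform init, signed-gradient steps,
    projection onto the eps-ball and the valid pixel range [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.uniform(-eps, eps, size=x.shape)      # uniform init (Rice et al.)
    for _ in range(iters):
        g = grad_fn(x + delta)                        # gradient of the loss w.r.t. input
        delta = delta + step * np.sign(g)             # ascent step on the loss
        delta = np.clip(delta, -eps, eps)             # project onto the eps-ball
        delta = np.clip(x + delta, 0.0, 1.0) - x      # keep the image in [0, 1]
    return x + delta

# Toy example: maximize a linear loss w . x, whose input gradient is w,
# so 10 steps of size 2/255 saturate the perturbation at eps * sign(w).
w = np.array([1.0, -1.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
x_adv = pgd_linfty(x, grad_fn=lambda z: w)
```

In practice `grad_fn` would backpropagate a cross-entropy loss through the network; the toy linear loss only serves to make the projection logic checkable.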
+
+Default setting. Following Rice et al. (2020), in the default setting we apply the primary PGD-AT framework with the following hyperparameters: batch size 128; SGD momentum optimizer with an initial learning rate of 0.1; weight decay $5 \times 10^{-4}$ ; ReLU activation function and no label smoothing; train mode of batch normalization when crafting adversarial examples. All models are trained for 110 epochs, with the learning rate decaying by a factor of 0.1 at epochs 100 and 105, respectively. We report results on the checkpoint with the best PGD-10 accuracy.
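The 110-epoch piecewise schedule in the default setting can be written as a small helper; `piecewise_lr` is an illustrative name, and this is a sketch of the schedule described above rather than code from Rice et al. (2020).

```python
def piecewise_lr(epoch, base_lr=0.1, milestones=(100, 105), gamma=0.1):
    """Piecewise-constant schedule: the learning rate is multiplied by
    `gamma` at each milestone epoch (0.1 -> 0.01 at 100 -> 0.001 at 105)."""
    return base_lr * gamma ** sum(epoch >= m for m in milestones)
```

With the defaults, epochs 0-99 train at 0.1, epochs 100-104 at 0.01, and epochs 105-109 at 0.001.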
+
+Note that our empirical observations and conclusions may not always generalize to other datasets or AT frameworks, but we emphasize the importance of using consistent implementation details (not only the same model architectures) to enable fair comparisons among different AT methods.
+
+Table 3: Test accuracy $(\%)$ under different batch size and learning rate (l.r.) on CIFAR-10. The basic l.r. is 0.1, while the scaled l.r. is, e.g., 0.2 for batch size 256, and 0.05 for batch size 64.
+
**ResNet-18**

| Batch size | Clean (basic l.r.) | PGD-10 (basic l.r.) | Clean (scaled l.r.) | PGD-10 (scaled l.r.) |
|---|---|---|---|---|
| 64 | 80.08 | 51.31 | 82.44 | 52.48 |
| 128 | 82.52 | 53.58 | - | - |
| 256 | 83.33 | 52.20 | 82.24 | 52.52 |
| 512 | 83.40 | 50.69 | 82.16 | 53.36 |

**WRN-34-10**

| Batch size | Clean (basic l.r.) | PGD-10 (basic l.r.) | Clean (scaled l.r.) | PGD-10 (scaled l.r.) |
|---|---|---|---|---|
| 64 | 84.20 | 54.69 | 85.40 | 54.86 |
| 128 | 86.07 | 56.60 | - | - |
| 256 | 86.21 | 52.90 | 85.89 | 56.09 |
| 512 | 86.29 | 50.17 | 86.47 | 55.49 |
+
+Table 4: Test accuracy $(\%)$ under different degrees of label smoothing (LS) on CIFAR-10. More evaluation results under, e.g., PGD-1000 can be found in Table 17.
+
**ResNet-18**

| LS | Clean | PGD-10 | AA | RayS |
|---|---|---|---|---|
| 0 | 82.52 | 53.58 | 48.51 | 53.34 |
| 0.1 | 82.69 | 54.04 | 48.76 | 53.71 |
| 0.2 | 82.73 | 54.22 | 49.20 | 53.66 |
| 0.3 | 82.51 | 54.34 | 49.24 | 53.59 |
| 0.4 | 82.39 | 54.13 | 48.83 | 53.40 |

**WRN-34-10**

| LS | Clean | PGD-10 | AA | RayS |
|---|---|---|---|---|
| 0 | 86.07 | 56.60 | 52.19 | 60.07 |
| 0.1 | 85.96 | 56.88 | 52.74 | 59.99 |
| 0.2 | 86.09 | 57.31 | 53.00 | 60.28 |
| 0.3 | 85.99 | 57.55 | 52.70 | 61.00 |
| 0.4 | 86.19 | 57.63 | 52.71 | 60.64 |
+
+Table 5: Test accuracy (\%) using different optimizers on CIFAR-10. The model is ResNet-18 (results on WRN-34-10 are in Table 15). The initial learning rate for Adam and AdamW is 0.0001.
+
| | Mom | Nesterov | Adam | AdamW | SGD-GC | SGD-GCC |
|---|---|---|---|---|---|---|
| Clean | 82.52 | 82.83 | 83.20 | 81.68 | 82.77 | 82.93 |
| PGD-10 | 53.58 | 53.78 | 48.87 | 46.58 | 53.62 | 53.40 |
| AA | 48.51 | 48.22 | 44.04 | 42.39 | 48.33 | 48.51 |
+
+# 3.1 EARLY STOPPING AND WARMUP
+
+Early stopping training epoch. The trick of early stopping w.r.t. the training epoch was first applied in the implementation of TRADES (Zhang et al., 2019b), where the learning rate decays at the 75th epoch and the training is stopped at the 76th epoch. Later Rice et al. (2020) provide a comprehensive study on the overfitting phenomenon in AT, and advocate early stopping the training epoch as a general strategy for preventing adversarial overfitting, which could be triggered according to the PGD accuracy on a split validation set. Due to its effectiveness, we regard this trick as a default choice.
+
+Early stopping adversarial intensity. Another level of early stopping happens at the adversarial intensity, e.g., early stopping the PGD steps when crafting adversarial examples for training. This trick was first applied by the runner-up of the defense track in the NeurIPS 2018 adversarial vision challenge (Brendel et al., 2020). Later efforts formalize this early stopping mechanism with different trigger rules (Wang et al., 2019; Zhang et al., 2020). Balaji et al. (2019) early stop the adversarial perturbation, which has a similar effect on the adversarial intensity. In the left part of Table 2, we evaluate the method proposed by Zhang et al. (2020) due to its simplicity. As seen, this kind of early stopping improves performance on clean data while keeping comparable accuracy under PGD-10; however, the performance under the stronger AutoAttack is degraded.
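A minimal sketch of this mechanism, assuming the "40 / 70"-style notation from Table 2 (the tolerance step increases by one at those epochs) and a hypothetical per-step misclassification oracle; this illustrates the idea of early-stopped PGD rather than reproducing the exact trigger rule of Zhang et al. (2020).

```python
def tolerance_step(epoch, milestones=(40, 70)):
    """Tolerance tau for early-stopped PGD: it increases by one at each
    milestone epoch (the '40 / 70' notation in Table 2)."""
    return sum(epoch >= m for m in milestones)

def early_stopped_pgd_steps(is_misclassified, max_iters=10, tau=1):
    """Count the PGD steps actually executed: stop once the example has
    been misclassified for more than `tau` consecutive steps.
    `is_misclassified(t)` is a hypothetical per-step oracle that would,
    in practice, compare the model prediction against the true label."""
    consecutive = 0
    for t in range(max_iters):
        if is_misclassified(t):
            consecutive += 1
            if consecutive > tau:
                return t + 1
        else:
            consecutive = 0
    return max_iters
```

A larger tolerance later in training lets the attack run longer, gradually raising the adversarial intensity.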
+
+Warmup w.r.t. learning rate. Warmup w.r.t. the learning rate is a general trick for training deep learning models (Goodfellow et al., 2016). In the adversarial setting, Wong et al. (2020) show that the one-cycle learning rate schedule is one of the critical ingredients for the success of FastAT. Thus, we evaluate the effect of this trick with the piecewise learning rate schedule in the PGD-AT framework. We linearly increase the learning rate from zero to the preset value over the first $10 / 15 / 20$ epochs. As shown in the middle part of Table 2, the effect of warming up the learning rate is marginal.
+
+Warmup w.r.t. adversarial intensity. In the AT procedure, warmup can also be executed w.r.t. the adversarial intensity. Cai et al. (2018) propose the curriculum AT process to gradually increase the adversarial intensity and monitor the overfitting trend. Qin et al. (2019) increase the maximal
+
+Table 6: Test accuracy (\%) under different non-linear activation functions on CIFAR-10. The model is ResNet-18. We apply the hyperparameters recommended by Xie et al. (2020) on ImageNet for each activation function. The notation $\ddagger$ indicates using a weight decay of $5 \times 10^{-5}$ , since applying a weight decay of $5 \times 10^{-4}$ with these activations leads to much worse model performance.
+
| | ReLU | Leaky. | ELU‡ | CELU‡ | SELU‡ | GELU | Softplus | Tanh‡ |
|---|---|---|---|---|---|---|---|---|
| Clean | 82.52 | 82.11 | 82.17 | 81.37 | 78.88 | 80.42 | 82.80 | 80.13 |
| PGD-10 | 53.58 | 53.25 | 52.08 | 51.37 | 49.53 | 52.21 | 54.30 | 49.12 |
+
+
+Figure 1: (a) Test accuracy w.r.t. different values of weight decay. The reported checkpoints correspond to the best PGD-10 accuracy (Rice et al., 2020). We test on two model architectures, and highlight (with red circles) the three most commonly used weight decays in previous work; (b) Curves of test accuracy w.r.t. training epochs, where the model is WRN-34-10. We set the weight decay to $1 \times 10^{-4}$ , $2 \times 10^{-4}$ , and $5 \times 10^{-4}$ , respectively. We observe that smaller weight decay makes the model learn faster but also overfit earlier w.r.t. the robust accuracy. In Fig. 4, we decay the learning rate early, before the models overfit, but a weight decay of $5 \times 10^{-4}$ still achieves better robustness.
+
+
+
+perturbation $\epsilon$ from zero to $8 / 255$ in the first 15 epochs. In the right part of Table 2, we linearly increase the maximal perturbation in the first $10 / 15 / 20$ epochs, while the effect is still limited.
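The linear perturbation warmup can be expressed as a one-line helper; `warmup_eps` is an illustrative name for the schedule just described.

```python
def warmup_eps(epoch, warmup_epochs=15, eps_max=8/255):
    """Linearly increase the maximal perturbation from zero to `eps_max`
    over the first `warmup_epochs` epochs, then hold it constant
    (the right part of Table 2 uses 10 / 15 / 20 warmup epochs)."""
    return eps_max * min(epoch / warmup_epochs, 1.0)
```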
+
+# 3.2 TRAINING HYPERPARAMETERS
+
+Batch size. On the large-scale datasets like ImageNet (Deng et al., 2009), it has been recognized that the mini-batch size is an important factor influencing the model performance (Goyal et al., 2017), where larger batch size traverses the dataset faster but requires more memory usage. In the adversarial setting, Xie et al. (2019) use a batch size of 4096 to train a robust model on ImageNet, which achieves state-of-the-art performance under adversarial attacks. As to the defenses reported on the CIFAR-10 dataset, the mini-batch sizes are usually chosen between 128 and 256, as shown in Table 1. To evaluate the effect, we test on two model architectures and four values of batch size in Table 3. Since the number of training epochs is fixed to 110, we also consider applying the linear scaling rule introduced in Goyal et al. (2017), i.e., when the mini-batch size is multiplied by $k$ , multiply the learning rate by $k$ . We treat the batch size of 128 and the learning rate of 0.1 as a basic setting to obtain the factor $k$ . We can observe that the batch size of 128 works well on CIFAR-10, while the linear scaling rule can benefit the cases with other batch sizes.
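The linear scaling rule can be sketched as follows (illustrative helper, assuming the basic setting of batch size 128 and learning rate 0.1 used to obtain the factor $k$):

```python
def scaled_lr(batch_size, base_lr=0.1, base_batch=128):
    """Linear scaling rule (Goyal et al., 2017): when the mini-batch size
    is multiplied by k, multiply the learning rate by k as well."""
    return base_lr * batch_size / base_batch
```

For example, this yields 0.2 for batch size 256 and 0.05 for batch size 64, matching the scaled l.r. column of Table 3.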
+
+Table 7: Test accuracy (\%) under different BN modes on CIFAR-10. We evaluate across several model architectures, since the BN layers have different positions in different models.
+
| Metric | BN mode | ResNet-18 | SENet-18 | DenseNet-121 | GoogleNet | DPN26 | WRN-34-10 |
|---|---|---|---|---|---|---|---|
| Clean | train | 82.52 | 82.20 | 85.38 | 83.97 | 83.67 | 86.07 |
| Clean | eval | 83.48 | 84.11 | 86.33 | 85.26 | 84.56 | 87.38 |
| Clean | Δ (eval − train) | +0.96 | +1.91 | +0.95 | +1.29 | +0.89 | +1.31 |
| PGD-10 | train | 53.58 | 54.01 | 56.22 | 53.76 | 53.88 | 56.60 |
| PGD-10 | eval | 53.64 | 53.90 | 56.11 | 53.77 | 53.41 | 56.04 |
| PGD-10 | Δ (eval − train) | +0.06 | -0.11 | -0.11 | +0.01 | -0.47 | -0.56 |
| AA | train | 48.51 | 48.72 | 51.58 | 48.73 | 48.50 | 52.19 |
| AA | eval | 48.75 | 48.95 | 51.24 | 48.83 | 48.30 | 51.93 |
| AA | Δ (eval − train) | +0.24 | +0.23 | -0.34 | +0.10 | -0.20 | -0.26 |
+
+
+Figure 2: Clean accuracy vs. PGD-10 accuracy for different model architectures. The circle sizes are proportional to the numbers of parameters specified in Table 12.
+
+Label smoothing (LS). Shafahi et al. (2019a) propose to utilize LS to mimic adversarial training. Pang et al. (2019) also find that imposing LS on the ensemble prediction can alleviate the adversarial transferability among individual members. Unfortunately, combining LS with standard training cannot prevent the models from being evaded by adaptive attacks (Tramèr et al., 2020) or larger iteration steps (Summers & Dinneen, 2018). Beyond previous observations, we further evaluate the effect of LS on adversarial training. As shown in Table 4 and Table 17, mild LS can improve robust accuracy by $0.5 \sim 1\%$ under the strong attacks we evaluated, including AutoAttack and PGD-1000, without affecting the clean performance. This can be regarded as an effect of calibrating the confidence (Stutz et al., 2020) of adversarially trained models ( $80\% \sim 85\%$ accuracy on clean data). In contrast, excessive LS can degrade robustness (e.g., $\mathrm{LS} = 0.3$ vs. $\mathrm{LS} = 0.4$ on ResNet-18), which is consistent with the recent observations in Jiang et al. (2020) (they use $\mathrm{LS} = 0.5$ ). However, since LS is known for its potential gradient-masking effect, we advocate careful evaluations when applying this trick to proposed defenses, following the suggestions in Carlini et al. (2019).
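For concreteness, here is one common convention for label-smoothed targets, mixing the one-hot label with a uniform distribution; the papers above may use a different variant, so treat this as a hedged sketch rather than their exact formulation.

```python
import numpy as np

def smooth_labels(y, num_classes, ls=0.1):
    """Soft targets: (1 - ls) on the one-hot label plus a uniform
    ls / num_classes mass on every class; each row still sums to 1."""
    onehot = np.eye(num_classes)[y]
    return (1.0 - ls) * onehot + ls / num_classes
```

With `ls = 0.2` and 10 classes, the true class receives probability 0.82 and every other class 0.02.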
+
+Optimizer. Most AT methods apply SGD with momentum as the optimizer, with the momentum factor usually set to 0.9 with zero dampening. In other cases, Carmon et al. (2019) apply SGD with Nesterov momentum, and Rice et al. (2020) apply Adam for the cyclic learning rate schedule. We test some commonly used optimizers in Table 5, as well as the decoupled AdamW (Loshchilov & Hutter, 2019) and the recently proposed gradient centralization trick SGD-GC / SGD-GCC (Yong et al., 2020). We find that SGD-based optimizers (e.g., Mom, Nesterov, SGD-GC / SGD-GCC) have similar performance, while Adam / AdamW performs worse with the piecewise learning rate schedule.
+
+Weight decay. As observed in Table 1, three different values of weight decay are used in previous defenses: $1 \times 10^{-4}$ , $2 \times 10^{-4}$ , and $5 \times 10^{-4}$ . While $5 \times 10^{-4}$ is a widely used value for weight decay in deep learning, the prevalence of $2 \times 10^{-4}$ in the adversarial setting likely stems from Madry et al. (2018). In Fig. 1(a), we report the best test accuracy under different values of weight decay $^3$ . We can see that the gap in robust accuracy can be significant due to slightly
+
+Table 8: The default hyperparameters include batch size 128 and SGD momentum optimizer. The AT framework is PGD-AT. We highlight the setting used by the implementation in Rice et al. (2020).
+
| Architecture | Label smoothing | Weight decay | Activation | BN mode | Clean | PGD-10 | AA |
|---|---|---|---|---|---|---|---|
| WRN-34-10 | 0 | $1 \times 10^{-4}$ | ReLU | train | 85.87 | 49.45 | 46.43 |
| | 0 | $2 \times 10^{-4}$ | ReLU | train | 86.14 | 52.08 | 48.72 |
| | 0 | $5 \times 10^{-4}$ | ReLU | train | 86.07 | 56.60 | 52.19 |
| | 0 | $5 \times 10^{-4}$ | ReLU | eval | 87.38 | 56.04 | 51.93 |
| | 0 | $5 \times 10^{-4}$ | Softplus | train | 86.60 | 56.44 | 52.70 |
| | 0.1 | $5 \times 10^{-4}$ | Softplus | train | 86.42 | 57.22 | 53.01 |
| | 0.1 | $5 \times 10^{-4}$ | Softplus | eval | 86.34 | 56.38 | 52.21 |
| | 0.2 | $5 \times 10^{-4}$ | Softplus | train | 86.10 | 56.55 | 52.91 |
| | 0.2 | $5 \times 10^{-4}$ | Softplus | eval | 86.98 | 56.21 | 52.10 |
| WRN-34-20 | 0 | $1 \times 10^{-4}$ | ReLU | train | 86.21 | 49.74 | 47.58 |
| | 0 | $2 \times 10^{-4}$ | ReLU | train | 86.73 | 51.39 | 49.03 |
| | 0 | $5 \times 10^{-4}$ | ReLU | train | 86.97 | 57.57 | 53.26 |
| | 0 | $5 \times 10^{-4}$ | ReLU | eval | 87.62 | 57.04 | 53.14 |
| | 0 | $5 \times 10^{-4}$ | Softplus | train | 85.80 | 57.84 | 53.64 |
| | 0.1 | $5 \times 10^{-4}$ | Softplus | train | 85.69 | 57.86 | 53.66 |
| | 0.1 | $5 \times 10^{-4}$ | Softplus | eval | 87.86 | 57.33 | 53.23 |
| | 0.2 | $5 \times 10^{-4}$ | Softplus | train | 84.82 | 57.93 | 53.39 |
| | 0.2 | $5 \times 10^{-4}$ | Softplus | eval | 87.58 | 57.19 | 53.26 |
+
+
+Figure 3: Random normal cross-sections of the decision boundary for PGD-AT with different weight decay. The model architecture is WRN-34-10. Following the examples in Moosavi-Dezfooli et al. (2019), we craft the PGD-10 perturbation as the normal direction $v$ , with $r$ a random direction, under the $\ell_{\infty}$ constraint of $8/255$ . The x-axis and y-axis values represent the multiplied scale factors.
+
+different values of weight decay (e.g., up to $\sim 7\%$ for $1\times 10^{-4}$ vs. $5\times 10^{-4}$ ). Besides, in Fig. 1(b) we plot the curves of test accuracy w.r.t. training epochs. Note that smaller values of weight decay make the model learn faster in the initial phase, but the overfitting phenomenon also appears earlier. In Fig. 3, we visualize cross-sections of the decision boundary. We can see that proper values of weight decay (e.g., $5\times 10^{-4}$ ) enlarge the margins to the decision boundary and improve robustness. Nevertheless, as shown in the left two columns, this effect is less significant for clean accuracy. As a result, weight decay is a critical and usually neglected ingredient that largely influences the robust accuracy of adversarially trained models. In contrast, the clean accuracy is much less sensitive to weight decay, for both adversarially and standardly trained models (the latter shown in Fig. 5).
+
+Activation function. Most previous AT methods apply ReLU as the non-linear activation function in their models, while Xie et al. (2020) empirically demonstrate that smooth activation functions can better improve model robustness on ImageNet. Following their settings, we test whether a similar conclusion holds on CIFAR-10. By comparing the results on ReLU and Softplus in Table 6 (for PGD-AT) and Table 13 (for TRADES), we confirm that smooth activation indeed benefits model robustness for ResNet-18. However, as shown in Table 8 (for PGD-AT) and Table 9 (for TRADES), this benefit is less significant on larger models like WRN. Thus we deduce that smaller model capacity benefits more from the smoothness of the activation function. Besides, as shown in Table 6, models trained on CIFAR-10 seem to prefer activation functions $\sigma(x)$ with zero truncation, i.e., $\sigma(x) \geq 0$ ; those with negative return values, like ELU, LeakyReLU, and Tanh, perform worse than ReLU.
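The two properties discussed above, smoothness and zero truncation, are easy to check numerically. The sketch below uses a numerically stable softplus and compares it against ReLU; it is a small illustration, not code from any of the cited works.

```python
import numpy as np

def softplus(x):
    """Numerically stable softplus: log(1 + e^x) = max(x, 0) + log1p(e^-|x|).
    Unlike ReLU it is smooth everywhere, yet it also never goes negative,
    satisfying the zero-truncation property sigma(x) >= 0."""
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

x = np.linspace(-10.0, 10.0, 2001)
relu = np.maximum(x, 0.0)
gap = softplus(x) - relu   # always positive, vanishing for large |x|
```

The gap term shows softplus is a smooth upper envelope of ReLU that converges to it away from zero, which is why it can act as a drop-in smooth replacement.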
+
+Table 9: Test accuracy (%). The AT framework is TRADES. We highlight the setting used by the original implementation in Zhang et al. (2019b). As listed in Table 16, our retrained TRADES models can achieve state-of-the-art performance in the AutoAttack benchmark.
+
**Threat model: $\ell_{\infty}$ constraint, $\epsilon = 0.031$**

| Architecture | Weight decay | BN mode | Activation | Clean | PGD-10 | AA |
|---|---|---|---|---|---|---|
| WRN-34-10 | $2 \times 10^{-4}$ | train | ReLU | 83.86 | 54.96 | 51.52 |
| | $2 \times 10^{-4}$ | eval | ReLU | 85.17 | 55.10 | 51.85 |
| | $5 \times 10^{-4}$ | train | ReLU | 84.17 | 57.34 | 53.51 |
| | $5 \times 10^{-4}$ | eval | ReLU | 85.34 | 58.54 | 54.64 |
| | $5 \times 10^{-4}$ | eval | Softplus | 84.66 | 58.05 | 54.20 |
| WRN-34-20 | $5 \times 10^{-4}$ | eval | ReLU | 86.93 | 57.93 | 54.42 |
| | $5 \times 10^{-4}$ | eval | Softplus | 85.43 | 57.94 | 54.32 |

**Threat model: $\ell_{\infty}$ constraint, $\epsilon = 8/255$**

| Architecture | Weight decay | BN mode | Activation | Clean | PGD-10 | AA |
|---|---|---|---|---|---|---|
| WRN-34-10 | $2 \times 10^{-4}$ | train | ReLU | 84.50 | 54.60 | 50.94 |
| | $2 \times 10^{-4}$ | eval | ReLU | 85.17 | 54.58 | 51.54 |
| | $5 \times 10^{-4}$ | train | ReLU | 84.04 | 57.41 | 53.83 |
| | $5 \times 10^{-4}$ | eval | ReLU | 85.48 | 57.45 | 53.80 |
| | $5 \times 10^{-4}$ | eval | Softplus | 84.24 | 57.59 | 53.88 |
| WRN-34-20 | $2 \times 10^{-4}$ | train | ReLU | 84.50 | 53.86 | 51.18 |
| | $2 \times 10^{-4}$ | eval | ReLU | 85.48 | 53.21 | 50.59 |
| | $5 \times 10^{-4}$ | train | ReLU | 85.87 | 57.40 | 54.22 |
| | $5 \times 10^{-4}$ | eval | ReLU | 86.43 | 57.91 | 54.39 |
| | $5 \times 10^{-4}$ | eval | Softplus | 85.51 | 57.50 | 54.21 |
+
+Model architecture. Su et al. (2018) provide a comprehensive study on the robustness of standardly trained models, using different model architectures. For the adversarially trained models, it has been generally recognized that larger model capacity can usually lead to better robustness (Madry et al., 2018). Recently, Guo et al. (2020) blend in the technique of AutoML to explore robust architectures. In Fig. 2, we perform similar experiments on more hand-crafted model architectures. The selected models have comparable numbers of parameters. We can observe that DenseNet can achieve both the best clean and robust accuracy, while being memory-efficient (but may require longer inference time). This is consistent with the observation in Guo et al. (2020) that residual connections can benefit the AT procedure. Interestingly, Wu et al. (2020) demonstrate that residual connections allow easier generation of highly transferable adversarial examples, while in our case this weakness for the standardly trained models may turn out to strengthen the adversarially trained models.
+
+Batch normalization (BN) mode. When crafting adversarial examples in the training procedure, Zhang et al. (2019b) use the eval mode for BN, while Rice et al. (2020) and Madry et al. (2018) use the train mode. Since the parameters in the BN layers are not updated in this process, the difference between these two modes mainly lies in the recorded moving-average BN mean and variance used in the test phase. As pointed out in Xie & Yuille (2020), properly dealing with BN layers is critical to obtaining a well-performing adversarially trained model. Thus, in Table 7, we employ the train or eval mode of BN when crafting adversarial examples during training, and report the results on different model architectures to extract general rules. As seen, using the eval mode for BN can increase clean accuracy while keeping comparable robustness. We also advocate the eval mode because, if we apply the train mode during a multi-step PGD attack, the BN mean and variance will be recorded at every intermediate step, which could blur the adversarial example distribution seen by the BN layers during inference.
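A framework-agnostic sketch of this trick: temporarily flip BatchNorm-like submodules to eval mode while the attack runs, then restore them. The `modules()` method and `training` flag mirror the PyTorch interface, and the stub classes below are hypothetical stand-ins, not the exact code of the papers discussed.

```python
from contextlib import contextmanager

@contextmanager
def bn_eval(model):
    """Switch BatchNorm-like submodules to eval mode while crafting
    adversarial examples, so multi-step PGD inputs do not pollute the
    running mean / variance used at test time."""
    bns = [m for m in model.modules()
           if 'BatchNorm' in type(m).__name__ and m.training]
    for m in bns:
        m.training = False           # use running statistics, freeze updates
    try:
        yield model
    finally:
        for m in bns:
            m.training = True        # restore train mode for the SGD update

# Minimal stubs standing in for nn.BatchNorm2d and a small network.
class BatchNorm2d:
    def __init__(self): self.training = True

class Net:
    def __init__(self): self.training, self.bn = True, BatchNorm2d()
    def modules(self): return [self, self.bn]

net = Net()
with bn_eval(net):
    bn_mode_inside = net.bn.training   # False: the attack sees eval-mode BN
bn_mode_after = net.bn.training        # True: training resumes normally
```

Only the BN modules are toggled; the rest of the model stays in train mode, matching the observation that the two modes differ mainly in the recorded statistics.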
+
+# Takeaways:
+
+(i) Slightly different values of weight decay could largely affect the robustness of trained models;
+(ii) Moderate label smoothing and linear scaling rule on l.r. for different batch sizes are beneficial;
+(iii) Applying eval BN mode to craft training adversarial examples can avoid blurring the distribution;
+(iv) Early stopping the adversarial steps or perturbation may degrade worst-case robustness;
+(v) Smooth activation benefits more when the model capacity is not enough for adversarial training.
+
+Table 10: Test accuracy (\%). The considered AT frameworks are FastAT and FreeAT. The model architecture is WRN-34-10. Detailed settings used for these defenses are described in Sec. 3.5.
+
| Defense | Label smoothing | Weight decay | BN mode | Clean | PGD-10 | AA |
|---|---|---|---|---|---|---|
| FastAT (Wong et al., 2020) | 0 | $2 \times 10^{-4}$ | train | 82.19 | 47.47 | 42.99 |
| | 0 | $5 \times 10^{-4}$ | train | 82.93 | 48.48 | 44.06 |
| | 0 | $5 \times 10^{-4}$ | eval | 84.00 | 48.16 | 43.66 |
| | 0.1 | $5 \times 10^{-4}$ | train | 82.83 | 48.76 | 44.50 |
| FreeAT (Shafahi et al., 2019b) | 0 | $2 \times 10^{-4}$ | train | 87.42 | 47.66 | 44.24 |
| | 0 | $5 \times 10^{-4}$ | train | 88.17 | 48.90 | 45.66 |
| | 0 | $5 \times 10^{-4}$ | eval | 88.26 | 48.50 | 45.49 |
| | 0.1 | $5 \times 10^{-4}$ | train | 88.07 | 49.26 | 45.91 |
+
+# 3.3 COMBINATION OF TRICKS
+
+In the above, we separately evaluate the effect of each training trick in the AT procedure. Now we investigate combining the selected useful tricks, which involve label smoothing, weight decay, activation function, and BN mode. As demonstrated in Table 8, the improvements from combining different tricks are not ideally additive; label smoothing and a smooth activation function are helpful, but not significantly so, especially for model architectures with larger capacity.
+
+We also find that the high performance of the models trained by Rice et al. (2020) partially comes from their reasonable training settings compared to previous work. Based on these findings, we provide a trick list for training robust models on CIFAR-10 for reference.
+
+# Baseline setting (CIFAR-10):
+
+Batch size 128; SGD momentum optimizer; weight decay $5 \times 10^{-4}$ ; eval mode BN for generating adversarial examples; warmups are not necessary; moderate label smoothing $(0.1 \sim 0.2)$ and smooth activation function could be beneficial; model architecture with residual connections.
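The recipe above could be collected into a configuration sketch; all key names below are illustrative, not from any released codebase, and the optional entries are marked as such in comments.

```python
# Hypothetical config dict mirroring the baseline recipe in the text.
BASELINE_CIFAR10 = {
    "batch_size": 128,
    "optimizer": "sgd_momentum",
    "momentum": 0.9,
    "weight_decay": 5e-4,
    "bn_mode_for_attack": "eval",   # eval-mode BN when crafting examples
    "warmup": None,                 # warmups are not necessary
    "label_smoothing": 0.1,         # optional: moderate, 0.1 ~ 0.2
    "activation": "softplus",       # optional: smooth activation
    "architecture": "wrn-34-10",    # residual connections preferred
}
```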
+
+# 3.4 RE-IMPLEMENTATION OF TRADES
+
+As a sanity check, we re-implement TRADES to see if our conclusions derived from PGD-AT generalize, and provide the results in Table 9. We observe that after simply changing the weight decay from $2 \times 10^{-4}$ to $5 \times 10^{-4}$ , the clean accuracy of TRADES improves by $\sim 1\%$ and the AA accuracy improves by $\sim 4\%$ , which makes the trained models surpass the previous state-of-the-art models reported on the AutoAttack benchmark, as listed in Table 16. This fact highlights the importance of employing a standardized training setting for fair comparisons of different AT methods.
+
+# 3.5 EVALUATIONS ON OTHER AT FRAMEWORKS
+
+To examine the universality of our observations on PGD-AT and TRADES, we further evaluate other AT frameworks, including FastAT (Wong et al., 2020) and FreeAT (Shafahi et al., 2019b). We build on the FastAT code to implement these methods. Specifically, for FastAT we use a cyclic learning rate schedule with $l_{\mathrm{min}} = 0$ and $l_{\mathrm{max}} = 0.2$ , training for 15 epochs. For FreeAT, we also use a cyclic learning rate schedule, with $l_{\mathrm{min}} = 0$ and $l_{\mathrm{max}} = 0.04$ , training for 24 epochs with 4 mini-batch replays. The results are provided in Table 10. We find that our observations generalize well to other AT frameworks, which verifies that the proposed baseline setting could be a decent default choice for adversarial training on CIFAR-10.
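The cyclic schedule for FastAT could be sketched per epoch as follows; the released FastAT code actually updates the rate per optimization step, so this per-epoch version with $l_{\mathrm{min}} = 0$ and $l_{\mathrm{max}} = 0.2$ over 15 epochs is a simplification, and `cyclic_lr` is an illustrative name.

```python
def cyclic_lr(epoch, total_epochs, lr_min=0.0, lr_max=0.2):
    """One-cycle schedule: the learning rate rises linearly from lr_min
    to lr_max over the first half of training, then falls back to lr_min."""
    half = total_epochs / 2
    if epoch <= half:
        frac = epoch / half                    # ascending phase
    else:
        frac = (total_epochs - epoch) / half   # descending phase
    return lr_min + (lr_max - lr_min) * frac
```

The FreeAT run in the text uses the same shape with `lr_max=0.04` over 24 epochs.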
+
+# 4 CONCLUSION
+
+In this work, we take a step towards examining how usually neglected implementation details impact the performance of adversarially trained models. Our empirical results suggest that, compared to clean accuracy, robustness is more sensitive to some seemingly unimportant differences in training settings. Thus, when building AT methods, we should fine-tune the training settings more carefully (on validation sets), or follow certain long-tested setups in the adversarial setting.
+
+# ACKNOWLEDGEMENTS
+
+This work was supported by the National Key Research and Development Program of China (Nos. 2020AAA0104304, 2017YFA0700904), NSFC Projects (Nos. 61620106010, 62076147, U19B2034, U19A2081), Beijing Academy of Artificial Intelligence (BAAI), Tsinghua-Huawei Joint Research Program, a grant from Tsinghua Institute for Guo Qiang, Tiangong Institute for Intelligent Computing, and the NVIDIA NVAIL Program with GPU/DGX Acceleration. Tianyu Pang was supported by MSRA Fellowship and Baidu Scholarship.
+
+# REFERENCES
+
+Jean-Baptiste Alayrac, Jonathan Uesato, Po-Sen Huang, Alhussein Fawzi, Robert Stanforth, and Pushmeet Kohli. Are labels required for improving adversarial robustness? In Advances in Neural Information Processing Systems (NeurIPS), pp. 12192-12202, 2019.
+Maksym Andriushchenko and Nicolas Flammarion. Understanding and improving fast adversarial training. In Advances in neural information processing systems (NeurIPS), 2020.
+Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. Square attack: a query-efficient black-box adversarial attack via random search. In European Conference on Computer Vision (ECCV), 2020.
+Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning (ICML), 2018.
+Matan Atzmon, Niv Haim, Lior Yariv, Ofer Israelov, Haggai Maron, and Yaron Lipman. Controlling neural level sets. In Advances in Neural Information Processing Systems (NeurIPS), pp. 2034-2043, 2019.
+Yogesh Balaji, Tom Goldstein, and Judy Hoffman. Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets. arXiv preprint arXiv:1910.08051, 2019.
+Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 387-402. Springer, 2013.
+Wieland Brendel, Jonas Rauber, and Matthias Bethge. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. In International Conference on Learning Representations (ICLR), 2018.
+Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, Sharada P Mohanty, Florian Laurent, Marcel Salathé, Matthias Bethge, Yaodong Yu, et al. Adversarial vision challenge. In The NeurIPS'18 Competition, pp. 129-153. Springer, 2020.
+Qi-Zhi Cai, Chang Liu, and Dawn Song. Curriculum adversarial training. In International Joint Conference on Artificial Intelligence (IJCAI), pp. 3740-3747, 2018.
+Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy (S&P), 2017a.
+Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In ACM Workshop on Artificial Intelligence and Security (AISec), 2017b.
+Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, and Alexey Kurakin. On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705, 2019.
+Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, and John C Duchi. Unlabeled data improves adversarial robustness. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
+
+Jinghui Chen and Quanquan Gu. Rays: A ray searching method for hard-label adversarial attack. arXiv preprint arXiv:2006.12792, 2020.
+Kejiang Chen, Yuefeng Chen, Hang Zhou, Xiaofeng Mao, Yuhong Li, Yuan He, Hui Xue, Weiming Zhang, and Nenghai Yu. Self-supervised adversarial training. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2218-2222. IEEE, 2020a.
+Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In ACM Workshop on Artificial Intelligence and Security (AISec). ACM, 2017a.
+Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. EAD: Elastic-net attacks to deep neural networks via adversarial examples. In AAAI Conference on Artificial Intelligence (AAAI), 2018.
+Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, and Zhangyang Wang. Adversarial robustness: From self-supervised pre-training to fine-tuning. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 699-708, 2020b.
+Yunpeng Chen, Jianan Li, Huaxin Xiao, Xiaojie Jin, Shuicheng Yan, and Jiashi Feng. Dual path networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 4467-4475, 2017b.
+Minhao Cheng, Thong Le, Pin-Yu Chen, Jinfeng Yi, Huan Zhang, and Cho-Jui Hsieh. Query-efficient hard-label black-box attack: An optimization-based approach. In International Conference on Learning Representations (ICLR), 2019a.
+Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, and Cho-Jui Hsieh. Cat: Customized adversarial training for improved robustness. arXiv preprint arXiv:2002.06789, 2020.
+Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Improving black-box adversarial attacks with a transfer-based prior. In Advances in Neural Information Processing Systems (NeurIPS), 2019b.
+Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning (ICML), 2019.
+Francesco Croce and Matthias Hein. Minimally distorted adversarial examples with a fast adaptive boundary attack. In International Conference on Machine Learning (ICML), 2020a.
+Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International Conference on Machine Learning (ICML), 2020b.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
+Zhijie Deng, Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Adversarial distributional training for robust deep learning. arXiv preprint arXiv:2002.05999, 2020.
+Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, and Ruitong Huang. Mma training: Direct input space margin maximization through adversarial training. In International Conference on Learning Representations (ICLR), 2020.
+Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
+Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Evading defenses to transferable adversarial examples by translation-invariant attacks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
+
+Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, and Jun Zhu. Benchmarking adversarial robustness on image classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
+Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, and Pushmeet Kohli. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265, 2018a.
+Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy Mann, and Pushmeet Kohli. A dual approach to scalable verification of deep networks. In Annual Conference on Uncertainty in Artificial Intelligence (UAI), 2018b.
+Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. A rotation and a translation suffice: Fooling cnns with simple transformations. In International Conference on Machine Learning (ICML), 2019.
+Reuben Feinman, Ryan R Curtin, Saurabh Shintre, and Andrew B Gardner. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410, 2017.
+Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
+Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015.
+Sven Gowal, Chongli Qin, Jonathan Uesato, Timothy Mann, and Pushmeet Kohli. Uncovering the limits of adversarial training against norm-bounded adversarial examples. arXiv preprint arXiv:2010.03593, 2020.
+Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
+Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens Van Der Maaten. Countering adversarial images using input transformations. In International Conference on Learning Representations (ICLR), 2018.
+Minghao Guo, Yuzhe Yang, Rui Xu, Ziwei Liu, and Dahua Lin. When nas meets robustness: In search of robust architectures against adversarial attacks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 631-640, 2020.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision (ECCV), pp. 630-645. Springer, 2016.
+Matthias Hein and Maksym Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. In Advances in Neural Information Processing Systems (NeurIPS), pp. 2266-2276, 2017.
+Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In International Conference on Learning Representations (ICLR), 2019.
+Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In International Conference on Machine Learning (ICML), 2019.
+Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7132-7141, 2018.
+Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4700-4708, 2017.
+Lang Huang, Chao Zhang, and Hongyang Zhang. Self-adaptive training: beyond empirical risk minimization. arXiv preprint arXiv:2002.10319, 2020.
+
+Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In International Conference on Machine Learning (ICML), 2018.
+Haoming Jiang, Zhehui Chen, Yuyang Shi, Bo Dai, and Tuo Zhao. Learning to defense by learning to attack. arXiv preprint arXiv:1811.01213, 2018.
+Linxi Jiang, Xingjun Ma, Zejia Weng, James Bailey, and Yu-Gang Jiang. Imbalanced gradients: A new cause of overestimated adversarial robustness. arXiv preprint arXiv:2006.13726, 2020.
+Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
+Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In The International Conference on Learning Representations (ICLR) Workshops, 2017.
+Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, et al. Adversarial attacks and defences competition. arXiv preprint arXiv:1804.00097, 2018.
+Saehyung Lee, Hyungyu Lee, and Sungroh Yoon. Adversarial vertex mixup: Toward better adversarially robust generalization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 272-281, 2020.
+Bai Li, Shiqi Wang, Suman Jana, and Lawrence Carin. Towards understanding fast adversarial training. arXiv preprint arXiv:2006.03089, 2020.
+Pengcheng Li, Jinfeng Yi, Bowen Zhou, and Lijun Zhang. Improving the robustness of deep neural networks via adversarial training with triplet loss. In International Joint Conference on Artificial Intelligence (IJCAI), 2019.
+Guanxiong Liu, Issa Khalil, and Abdallah Khreishah. Using single-step adversarial training to defend iterative adversarial examples. arXiv preprint arXiv:2002.09632, 2020.
+Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations (ICLR), 2019.
+Xingjun Ma, Bo Li, Yisen Wang, Sarah M Erfani, Sudanthi Wijewickrema, Michael E Houle, Grant Schoenebeck, Dawn Song, and James Bailey. Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613, 2018.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018.
+Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, and Baishakhi Ray. Metric learning for adversarial robustness. In Advances in Neural Information Processing Systems (NeurIPS), pp. 478-489, 2019.
+Jan Hendrik Metzen, Tim Genewein, Volker Fischer, and Bastian Bischoff. On detecting adversarial perturbations. In International Conference on Learning Representations (ICLR), 2017.
+Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Jonathan Uesato, and Pascal Frossard. Robustness via curvature regularization, and vice versa. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
+Norman Mu and Justin Gilmer. Mnist-c: A robustness benchmark for computer vision. arXiv preprint arXiv:1906.02337, 2019.
+Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Fatih Porikli. A self-supervised approach for adversarial robustness. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 262-271, 2020.
+Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 427-436, 2015.
+
+Tianyu Pang, Chao Du, Yinpeng Dong, and Jun Zhu. Towards robust detection of adversarial examples. In Advances in Neural Information Processing Systems (NeurIPS), pp. 4579-4589, 2018a.
+Tianyu Pang, Chao Du, and Jun Zhu. Max-mahalanobis linear discriminant analysis networks. In International Conference on Machine Learning (ICML), 2018b.
+Tianyu Pang, Kun Xu, Chao Du, Ning Chen, and Jun Zhu. Improving adversarial robustness via promoting ensemble diversity. In International Conference on Machine Learning (ICML), 2019.
+Tianyu Pang, Kun Xu, Yinpeng Dong, Chao Du, Ning Chen, and Jun Zhu. Rethinking softmax cross-entropy loss for adversarial robustness. In International Conference on Learning Representations (ICLR), 2020a.
+Tianyu Pang, Kun Xu, and Jun Zhu. Mixup inference: Better exploiting mixup to defend adversarial attacks. In International Conference on Learning Representations (ICLR), 2020b.
+Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Hang Su, and Jun Zhu. Boosting adversarial training with hypersphere embedding. In Advances in Neural Information Processing Systems (NeurIPS), 2020c.
+Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372-387. IEEE, 2016.
+Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, and Pushmeet Kohli. Adversarial robustness through local linearization. In Advances in Neural Information Processing Systems (NeurIPS), pp. 13824-13833, 2019.
+Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollár. Designing network design spaces. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10428-10436, 2020.
+Edward Raff, Jared Sylvester, Steven Forsyth, and Mark McLean. Barrage of random transforms for adversarially robust defense. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6528-6537, 2019.
+Leslie Rice, Eric Wong, and J Zico Kolter. Overfitting in adversarially robust deep learning. In International Conference on Machine Learning (ICML), 2020.
+Ali Shafahi, Amin Ghiasi, Furong Huang, and Tom Goldstein. Label smoothing and logit squeezing: A replacement for adversarial training? arXiv preprint arXiv:1910.11585, 2019a.
+Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! In Advances in Neural Information Processing Systems (NeurIPS), 2019b.
+Dawn Song, Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, and Tadayoshi Kohno. Physical adversarial examples for object detectors. In USENIX Workshop on Offensive Technologies, 2018a.
+Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. In International Conference on Learning Representations (ICLR), 2018b.
+David Stutz, Matthias Hein, and Bernt Schiele. Confidence-calibrated adversarial training: Generalizing to unseen attacks. In International Conference on Machine Learning (ICML), 2020.
+Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao. Is robustness the cost of accuracy? - a comprehensive study on the robustness of 18 deep image classification models. In European Conference on Computer Vision (ECCV), 2018.
+
+Cecilia Summers and Michael J Dinneen. Logit regularization methods for adversarial robustness. ICLR submission, 2018. https://openreview.net/forum?id=BJlr0j0ctX.
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014.
+Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015.
+Pedro Tabacof and Eduardo Valle. Exploring the space of adversarial images. In 2016 International Joint Conference on Neural Networks (IJCNN), pp. 426-433. IEEE, 2016.
+Florian Tramèr and Dan Boneh. Adversarial training and robustness for multiple perturbations. In Advances in Neural Information Processing Systems (NeurIPS), pp. 5858-5868, 2019.
+Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations (ICLR), 2018.
+Florian Tramèr, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
+Jonathan Uesato, Brendan O'Donoghue, Aaron van den Oord, and Pushmeet Kohli. Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning (ICML), 2018.
+S Vivek B and R Venkatesh Babu. Single-step adversarial training with dropout scheduling. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
+Huaxia Wang and Chun-Nam Yu. A direct approach to robust deep learning using adversarial networks. In International Conference on Learning Representations (ICLR), 2019.
+Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, and Quanquan Gu. On the convergence and robustness of adversarial training. In International Conference on Machine Learning (ICML), pp. 6586-6595, 2019.
+Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning (ICML), pp. 5283-5292, 2018.
+Eric Wong, Leslie Rice, and J. Zico Kolter. Fast is better than free: Revisiting adversarial training. In International Conference on Learning Representations (ICLR), 2020.
+Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, and Xingjun Ma. Skip connections matter: On the transferability of adversarial examples generated with resnets. In International Conference on Learning Representations (ICLR), 2020.
+Cihang Xie and Alan Yuille. Intriguing properties of adversarial training at scale. In International Conference on Learning Representations (ICLR), 2020.
+Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. In International Conference on Learning Representations (ICLR), 2018.
+Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
+Cihang Xie, Mingxing Tan, Boqing Gong, Alan Yuille, and Quoc V Le. Smooth adversarial training. arXiv preprint arXiv:2006.14536, 2020.
+
+Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1492-1500, 2017.
+Zheng Xu, Ali Shafahi, and Tom Goldstein. Exploring model robustness with adaptive networks and improved adversarial training. arXiv preprint arXiv:2006.00387, 2020.
+Hongwei Yong, Jianqiang Huang, Xiansheng Hua, and Lei Zhang. Gradient centralization: A new optimization technique for deep neural networks. In European Conference on Computer Vision (ECCV), 2020.
+Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In The British Machine Vision Conference (BMVC), 2016.
+Runtian Zhai, Tianle Cai, Di He, Chen Dan, Kun He, John Hopcroft, and Liwei Wang. Adversarially robust generalization just requires more unlabeled data. arXiv preprint arXiv:1906.00555, 2019.
+Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, and Bin Dong. You only propagate once: Accelerating adversarial training via maximal principle. In Advances in Neural Information Processing Systems (NeurIPS), 2019a.
+Haichao Zhang and Jianyu Wang. Defense against adversarial attacks using feature scattering-based adversarial training. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1829-1839, 2019.
+Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning (ICML), 2019b.
+Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, and Mohan Kankanhalli. Attacks which do not kill training make adversarial learning stronger. In International Conference on Machine Learning (ICML), 2020.
+
+# A TECHNICAL DETAILS
+
+In this section, we introduce additional background and technical details for reference.
+
+# A.1 ADVERSARIAL ATTACKS
+
+Since the seminal L-BFGS and FGSM attacks (Szegedy et al., 2014; Goodfellow et al., 2015), a large number of attack methods for generating adversarial examples have been introduced. In the white-box setting, gradient-based methods are popular and powerful, spanning the $\ell_{\infty}$ threat model (Nguyen et al., 2015; Madry et al., 2018), the $\ell_2$ threat model (Carlini & Wagner, 2017a), the $\ell_1$ threat model (Chen et al., 2018), and the $\ell_0$ threat model (Papernot et al., 2016). In the black-box setting, the attack strategies are much more diverse. These include transfer-based attacks (Dong et al., 2018; 2019; Cheng et al., 2019b), quasi-gradient attacks (Chen et al., 2017a; Uesato et al., 2018; Ilyas et al., 2018), and decision-based attacks (Brendel et al., 2018; Cheng et al., 2019a). Adversarial attacks can also be realized in the physical world (Kurakin et al., 2017; Song et al., 2018a). Below we formulate the PGD attack and AutoAttack that we use in our evaluations.
+
+PGD attack. One of the most commonly studied adversarial attacks is the projected gradient descent (PGD) method (Madry et al., 2018). Let $x_0$ be a randomly perturbed sample in the neighborhood of the clean input $x$; PGD then iteratively crafts the adversarial example as
+
+$$
+x_{i} = \mathrm{clip}_{x,\epsilon}\left(x_{i-1} + \epsilon_{i} \cdot \mathrm{sign}\left(\nabla_{x_{i-1}} \mathcal{L}\left(x_{i-1}, y\right)\right)\right), \tag{1}
+$$
+
+where $\mathrm{clip}_{x,\epsilon}(\cdot)$ is the clipping function that projects its input back into the $\epsilon$-ball around $x$, and $\mathcal{L}$ is the adversarial objective. The accuracy under the PGD attack has become a standard metric for evaluating model robustness.
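+As a concrete illustration of Eq. (1), the sketch below implements the PGD update with NumPy; `grad_fn` stands in for backpropagation through the network, and all names are ours:

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=8/255, step_size=2/255, n_iter=10, seed=0):
    """Sketch of Eq. (1): random start inside the eps-ball, then iterated
    signed-gradient ascent steps, each followed by clipping back into the
    eps-ball around x and into the valid input range [0, 1]."""
    rng = np.random.default_rng(seed)
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random initialization
    for _ in range(n_iter):
        g = grad_fn(x_adv)                        # gradient of the adversarial loss
        x_adv = x_adv + step_size * np.sign(g)    # ascent along the gradient sign
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep pixel values valid
    return x_adv
```

+For a toy loss $\mathcal{L}(x) = w^\top x$ the gradient is the constant $w$, so ten steps of size $2/255$ saturate every coordinate at the ball's boundary $x + \epsilon \cdot \mathrm{sign}(w)$.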
+
+AutoAttack. Croce & Hein (2020b) first propose the Auto-PGD (APGD) algorithm, whose main idea is to automatically tune the adversarial step sizes according to the optimization trend. As for the adversarial objective, in addition to the traditional cross-entropy (CE) loss, they develop a new difference of logits ratio (DLR) loss:
+
+$$
+\mathrm{DLR}(x, y) = -\frac{z_{y} - \max_{i \neq y} z_{i}}{z_{\pi_{1}} - z_{\pi_{3}}}, \tag{2}
+$$
+
+where $z$ denotes the logits and $\pi$ is the permutation that sorts the components of $z$ in decreasing order. Finally, the authors combine $\mathrm{APGD}_{\mathrm{CE}}$ and $\mathrm{APGD}_{\mathrm{DLR}}$ with FAB (Croce & Hein, 2020a) and the square attack (Andriushchenko et al., 2020) to form AutoAttack (AA).
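+For concreteness, Eq. (2) can be computed for a single logit vector as below (NumPy sketch; the names are ours, and $\pi$ is 1-indexed as in the formula, so $\pi_1$ and $\pi_3$ pick the largest and third-largest logits):

```python
import numpy as np

def dlr_loss(z, y):
    """Sketch of Eq. (2) for one example: z is the logit vector, y the
    true label; the denominator uses the largest and third-largest logits."""
    z_sorted = np.sort(z)[::-1]             # z_{pi_1} >= z_{pi_2} >= ...
    top_other = np.max(np.delete(z, y))     # max_{i != y} z_i
    return -(z[y] - top_other) / (z_sorted[0] - z_sorted[2])
```

+Note the loss is negative when the model classifies $(x, y)$ correctly and positive otherwise; the attack maximizes this objective to flip the prediction.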
+
+# A.2 REFERENCE CODES
+
+In Table 11, we provide the code links for the referred defenses. The summarized training settings are either described in their papers or manually retrieved by us from their code implementations.
+
+Table 11: We summarize the code links for the referred defense methods in Table 1.
+
+| Method | Code link |
| Madry et al. (2018) | github.com/MadryLab/cifar10_challenge |
| Cai et al. (2018) | github.com/sunblaze-ucb/curriculum-adversarial-training-CAT |
| Zhang et al. (2019b) | github.com/yaodongyu/TRADES |
| Wang et al. (2019) | github.com/YisenWang/dynamic_adv_training |
| Mao et al. (2019) | github.com/columbia/Metric_Learning_Adversarial_Robustness |
| Carmon et al. (2019) | github.com/yaircarmon/semisup-adv |
| Alayrac et al. (2019) | github.com/deepmind/deepmind-research/unsupervised_adversarial_training |
| Shafahi et al. (2019b) | github.com/ashafahi/free_adv_train |
| Zhang et al. (2019a) | github.com/a1600012888/YOPO-You-Only-Propagate-Once |
| Zhang & Wang (2019) | github.com/Haichao-Zhang/FeatureScatter |
| Atzmon et al. (2019) | github.com/matanatz/ControllingNeuralLevelsets |
| Wong et al. (2020) | github.com/locuslab/fast_adversarial |
| Rice et al. (2020) | github.com/locuslab/robust_overfitting |
| Ding et al. (2020) | github.com/BorealisAI/mma_training |
| Pang et al. (2020a) | github.com/P2333/Max-Mahalanobis-Training |
| Zhang et al. (2020) | github.com/zjfheart/Friendly-Adversarial-Training |
| Huang et al. (2020) | github.com/LayneH/self-adaptive-training |
| Lee et al. (2020) | github.com/Saehyung-Lee/cifar10_challenge |
+
+# A.3 MODEL ARCHITECTURES
+
+We select several typical hand-crafted model architectures as the objects of study, including DenseNet (Huang et al., 2017), GoogleNet (Szegedy et al., 2015), (PreAct) ResNet (He et al., 2016), SENet (Hu et al., 2018), WRN (Zagoruyko & Komodakis, 2016), DPN (Chen et al., 2017b), ResNeXt (Xie et al., 2017), and RegNetX (Radosavovic et al., 2020). The models are implemented following https://github.com/kuangliu/pytorch-cifar.
+
+Table 12: Number of parameters for different model architectures.
+
+| Architecture | # of param. | Architecture | # of param. | Architecture | # of param. |
| DenseNet-121 | 28.29 M | DPN26 | 46.47 M | GoogleNet | 24.81 M |
| DenseNet-201 | 73.55 M | DPN92 | 137.50 M | ResNeXt-29 | 36.65 M |
| RegNetX (200MF) | 9.42 M | ResNet-18 | 44.70 M | SENet-18 | 45.09 M |
| RegNetX (400MF) | 19.34 M | ResNet-50 | 94.28 M | WRN-34-10 | 193.20 M |
+
+# A.4 INFERENCE-PHASE ADVERSARIAL DEFENSES
+
+Besides enhancing models in the training phase, there are other methods that aim to improve robustness in the inference phase. These attempts include performing local linear transformations such as adding Gaussian noise (Tabacof & Valle, 2016), applying image-processing operations (Guo et al., 2018; Xie et al., 2018; Raff et al., 2019), or adopting specific inference principles (Pang et al., 2020b). On the other hand, detection-based methods aim to filter out adversarial examples and resort to higher-level intervention. Although detection is a suboptimal strategy compared to classification, it can avoid over-confident wrong decisions. These efforts include training auxiliary classifiers to detect adversarial inputs (Metzen et al., 2017), designing detection statistics (Feinman et al., 2017; Ma et al., 2018; Pang et al., 2018a), or building on additional probabilistic models (Song et al., 2018b).
+
+# A.5 CONCURRENT WORK
+
+Gowal et al. (2020) also provide a comprehensive study on different training tricks of AT, and push forward the state-of-the-art performance of adversarially trained models on MNIST, CIFAR-10 and CIFAR-100. While they analyze some properties that we also analyze in this paper (such as training batch size, label smoothing, weight decay, activation functions), they also complement our analyses with experiments on, e.g., weight moving average and data quality. Both of our works reveal the importance of training details in the process of AT, and contribute to establishing more justified perspectives for evaluating AT methods.
+
+# B ADDITIONAL RESULTS
+
+In this section, we provide additional results to further support the conclusions in the main text.
+
+# B.1 EARLY LEARNING RATE DECAY
+
+As shown in Fig. 1, smaller values of weight decay make training faster but also more prone to overfitting. So in Fig. 4, we decay the learning rate early, at epochs 40 and 45 rather than 100 and 105. We can see that the models achieve the same clean accuracy, but a weight decay of $5 \times 10^{-4}$ still achieves better robustness. Besides, in Fig. 5, we use different values of weight decay for standard training, where the models also achieve similar clean accuracy. These results demonstrate that adversarial robustness is a more difficult target than clean performance, and is more sensitive to the training hyperparameters, both for standardly and adversarially trained models.
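+The decay schedule in question can be sketched as a piecewise-constant rule (a hypothetical helper; the default milestones and the factor 0.1 follow the setup described above, and early decay just moves the milestones forward):

```python
def step_lr(epoch, milestones=(100, 105), base_lr=0.1, gamma=0.1):
    """Piecewise-constant schedule: multiply the learning rate by gamma
    at each milestone epoch. Early decay uses milestones=(40, 45)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

+This mirrors `torch.optim.lr_scheduler.MultiStepLR`; the early-decay runs in Fig. 4 simply replace milestones (100, 105) with (40, 45).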
+
+
+Figure 4: Curves of test accuracy w.r.t. training epochs, where the model is WRN-34-10. Here we decay the learning rate early, at epochs 40 and 45, for the cases of weight decay $1 \times 10^{-4}$ and $2 \times 10^{-4}$, just before they overfit. We can see that the models achieve the same clean accuracy as with weight decay $5 \times 10^{-4}$, but still worse robustness.
+
+
+
+
+Figure 5: Curves of test accuracy w.r.t. training epochs. The model architecture is WRN-34-10, and is standardly trained on CIFAR-10. We can observe that the final performance of each model is comparable, which means that clean accuracy is less sensitive to different values of weight decay. This observation also holds for the adversarially trained models as shown in Fig. 1.
+
+# B.2 THE EFFECT OF SMOOTH ACTIVATION FUNCTION
+
+In Table 13 we test the effect of Softplus and BN mode on ResNet-18.
+
+Table 13: Test accuracy $(\%)$ of TRADES. We compare with the results in Table 6 to check the effect of the smooth activation function on TRADES, as well as its compatibility with the eval BN mode.
+
+| Threat model: ℓ∞ constraint, ε = 8/255 |
| Architecture | Weight decay | BN mode | Activation | Clean | PGD-10 | AA |
| ResNet-18 | 5 × 10-4 | train | ReLU | 80.23 | 53.60 | 48.96 |
| 5 × 10-4 | train | Softplus | 81.26 | 54.58 | 50.35 |
| 5 × 10-4 | eval | ReLU | 81.45 | 53.51 | 49.06 |
| 5 × 10-4 | eval | Softplus | 82.37 | 54.37 | 50.51 |
+
+# B.3 RESULTS OF EARLY STOPPING, WARMUP, AND OPTIMIZERS ON WRN-34-10
+
+In Table 14 and Table 15, we provide the results on WRN-34-10.
+
+Table 14: Test accuracy (\%) under different early stopping and warmup on CIFAR-10. The model is WRN-34-10. For early stopping attack iterations, we denote, e.g., $40 / 70$ as the epochs to increase the tolerance step by one (Zhang et al., 2020). For warmup, the learning rate (l.r.) and the maximal perturbation (perturb.) linearly increase from zero to the preset value in the first $10 / 15 / 20$ epochs.
+
+ | Base | Early stopping attack iter. | Warmup on l.r. | Warmup on perturb. |
| 40 / 70 | 40 / 100 | 60 / 100 | 10 | 15 | 20 | 10 | 15 | 20 |
| Clean | 86.07 | 88.29 | 88.25 | 88.81 | 86.35 | 86.63 | 86.41 | 86.66 | 86.43 | 86.73 |
| PGD-10 | 56.60 | 56.06 | 55.49 | 56.41 | 56.31 | 56.60 | 56.28 | 56.25 | 56.37 | 55.65 |
| AA | 52.19 | 50.19 | 49.44 | 49.81 | 51.96 | 52.13 | 51.75 | 51.88 | 52.06 | 51.70 |
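+The warmup scheme used in Table 14 can be sketched as a single linear ramp (the function name is ours); the same rule applies whether the warmed-up quantity is the learning rate or the maximal perturbation:

```python
def linear_warmup(epoch, warmup_epochs, target):
    """Linearly increase a value from zero to its preset target over the
    first `warmup_epochs` epochs, then hold it constant."""
    if epoch >= warmup_epochs:
        return target
    return target * epoch / warmup_epochs
```

+For example, `linear_warmup(epoch, 10, 0.1)` warms up the learning rate over 10 epochs, while `linear_warmup(epoch, 15, 8/255)` warms up the perturbation budget over 15 epochs.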
+
+Table 15: Test accuracy $(\%)$ using different optimizers on CIFAR-10. The model is WRN-34-10. The initial learning rate for Adam and AdamW is 0.0001, while for the other optimizers it is 0.1.
+
+| Metric | Mom | Nesterov | Adam | AdamW | SGD-GC | SGD-GCC |
+| --- | --- | --- | --- | --- | --- | --- |
+| Clean | 86.07 | 86.80 | 81.00 | 80.72 | 86.70 | 86.67 |
+| PGD-10 | 56.60 | 56.34 | 52.54 | 50.32 | 56.06 | 56.14 |
+| AA | 52.19 | 51.93 | 46.52 | 45.79 | 51.75 | 51.65 |
+
+# B.4 RANK IN THE AUTOATTACK BENCHMARK
+
+The models evaluated in this paper are all retrained based on the released codes (Zhang et al., 2019b; Rice et al., 2020). Now we compare our trained models with the AutoAttack public benchmark, where the results of previous work are based on the released pretrained models. In Table 16, we retrieve our results in Table 9 on the TRADES model where we simply change the weight decay from $2 \times 10^{-4}$ to $5 \times 10^{-4}$ . We can see that this seemingly unimportant difference sends the TRADES model back to the state-of-the-art position in the benchmark.
+
+Table 16: We retrieve the results of top-rank methods from https://github.com/fra31/auto-attack. All the methods listed below do not require additional training data on CIFAR-10. Here the model of Ours (TRADES) corresponds to lines of weight decay $5 \times 10^{-4}$ , eval BN mode and ReLU activation in Table 9, which only differs from the original TRADES in weight decay. We run our methods 5 times with different random seeds, and report the mean and standard deviation.
+
+Threat model: $\ell_\infty$ constraint, $\epsilon = 8/255$
+
+| Method | Architecture | Clean | AA |
+| --- | --- | --- | --- |
+| Ours (TRADES) | WRN-34-20 | 86.43 | 54.39 |
+| Ours (TRADES) | WRN-34-10 | 85.49 ± 0.24 | 53.94 ± 0.10 |
+| Pang et al. (2020c) | WRN-34-20 | 85.14 | 53.74 |
+| Zhang et al. (2020) | WRN-34-10 | 84.52 | 53.51 |
+| Rice et al. (2020) | WRN-34-20 | 85.34 | 53.42 |
+| Qin et al. (2019) | WRN-40-8 | 86.28 | 52.84 |
+
+Threat model: $\ell_\infty$ constraint, $\epsilon = 0.031$
+
+| Method | Architecture | Clean | AA |
+| --- | --- | --- | --- |
+| Ours (TRADES) | WRN-34-10 | 85.45 ± 0.09 | 54.28 ± 0.24 |
+| Huang et al. (2020) | WRN-34-10 | 83.48 | 53.34 |
+| Zhang et al. (2019b) | WRN-34-10 | 84.92 | 53.08 |
+
+# B.5 MORE EVALUATIONS ON LABEL SMOOTHING
+
+In Table 17 we further investigate the effect of label smoothing on adversarial training.
+
+Table 17: Test accuracy (\%) under different label smoothing on CIFAR-10. The model is ResNet-18 trained by PGD-AT. We evaluate under PGD-1000 with different numbers of restarts and step sizes, using the cross-entropy (CE) objective and the C&W objective (Carlini & Wagner, 2017a), respectively. We also evaluate under the SPSA attack (Uesato et al., 2018) for 10,000 iteration steps, with batch size 128, perturbation size 0.001, and a learning rate of $\frac{1}{255}$ .
+
+| Attack | Restart | Step size | LS = 0 | LS = 0.1 | LS = 0.2 | LS = 0.3 | LS = 0.4 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| PGD-1000 (CE objective) | 1 | 2/255 | 52.45 | 52.95 | 53.08 | 53.10 | 53.14 |
+| PGD-1000 (CE objective) | 5 | 2/255 | 52.41 | 52.89 | 53.01 | 53.04 | 53.03 |
+| PGD-1000 (CE objective) | 10 | 2/255 | 52.31 | 52.85 | 52.92 | 53.02 | 52.96 |
+| PGD-1000 (CE objective) | 10 | 0.5/255 | 52.63 | 52.94 | 53.33 | 53.30 | 53.25 |
+| PGD-1000 (C&W objective) | 1 | 2/255 | 50.64 | 50.76 | 51.07 | 50.96 | 50.54 |
+| PGD-1000 (C&W objective) | 5 | 2/255 | 50.58 | 50.66 | 50.93 | 50.86 | 50.44 |
+| PGD-1000 (C&W objective) | 10 | 2/255 | 50.55 | 50.59 | 50.90 | 50.85 | 50.44 |
+| PGD-1000 (C&W objective) | 10 | 0.5/255 | 50.63 | 50.73 | 51.03 | 51.04 | 50.52 |
+| SPSA-10000 | 1 | 1/255 | 61.69 | 61.92 | 61.93 | 61.79 | 61.53 |
\ No newline at end of file
diff --git a/bagoftricksforadversarialtraining/images.zip b/bagoftricksforadversarialtraining/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..46fc9508651398253459660441eee11271349afa
--- /dev/null
+++ b/bagoftricksforadversarialtraining/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0a0d5299ca3763ad808a7c5c95013d6151ff6cdf9a44a41bbbc71e9c35f4bc8
+size 1313040
diff --git a/bagoftricksforadversarialtraining/layout.json b/bagoftricksforadversarialtraining/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..912aff01c951429878f83a86c9c6d2a078458fcf
--- /dev/null
+++ b/bagoftricksforadversarialtraining/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b0263f6be831e301d2e0ecb8626b1cc71a89067b49d602ff20c65d604e5c90e6
+size 559715
diff --git a/balancingconstraintsandrewardswithmetagradientd4pg/9dfee37f-53ff-4353-b1ef-ea06be2a64b1_content_list.json b/balancingconstraintsandrewardswithmetagradientd4pg/9dfee37f-53ff-4353-b1ef-ea06be2a64b1_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..bc54ffd1060dfafee7ce21b131300066f6cbdce7
--- /dev/null
+++ b/balancingconstraintsandrewardswithmetagradientd4pg/9dfee37f-53ff-4353-b1ef-ea06be2a64b1_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67e62b4cf5882176318a8a175cd1671dd4b13d2b512ca4a50b127d1865011047
+size 64725
diff --git a/balancingconstraintsandrewardswithmetagradientd4pg/9dfee37f-53ff-4353-b1ef-ea06be2a64b1_model.json b/balancingconstraintsandrewardswithmetagradientd4pg/9dfee37f-53ff-4353-b1ef-ea06be2a64b1_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0ae19508610abe409488a683240b2beb46a710ec
--- /dev/null
+++ b/balancingconstraintsandrewardswithmetagradientd4pg/9dfee37f-53ff-4353-b1ef-ea06be2a64b1_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:169755e1e95ebe08d9675cced73c64617db2aa4f24080658344c9b6b3aba5416
+size 80259
diff --git a/balancingconstraintsandrewardswithmetagradientd4pg/9dfee37f-53ff-4353-b1ef-ea06be2a64b1_origin.pdf b/balancingconstraintsandrewardswithmetagradientd4pg/9dfee37f-53ff-4353-b1ef-ea06be2a64b1_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ee7438e508edbcfe6b83877653a594b9118e1216
--- /dev/null
+++ b/balancingconstraintsandrewardswithmetagradientd4pg/9dfee37f-53ff-4353-b1ef-ea06be2a64b1_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:884d79c9aede110c436663f292772b6e9f801c58d1ed08e95855f22138bf0068
+size 905110
diff --git a/balancingconstraintsandrewardswithmetagradientd4pg/full.md b/balancingconstraintsandrewardswithmetagradientd4pg/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..026e37227f21011ab348635d4ba753909458df70
--- /dev/null
+++ b/balancingconstraintsandrewardswithmetagradientd4pg/full.md
@@ -0,0 +1,196 @@
+# BALANCING CONSTRAINTS AND REWARDS WITH META-GRADIENT D4PG
+
+Dan A. Calian*, Daniel J. Mankowitz*, Tom Zahavy, Zhongwen Xu, Junhyuk Oh, Nir Levine & Timothy Mann
+
+DeepMind
+
+London, United Kingdom
+
+{dancalian, dmankowitz}@google.com
+
+# ABSTRACT
+
+Deploying Reinforcement Learning (RL) agents to solve real-world applications often requires satisfying complex system constraints. Often the constraint thresholds are incorrectly set due to the complex nature of a system or the inability to verify the thresholds offline (e.g., no simulator or reasonable offline evaluation procedure exists). This results in solutions where a task cannot be solved without violating the constraints. However, in many real-world cases, constraint violations are undesirable yet they are not catastrophic, motivating the need for soft-constrained RL approaches. We present a soft-constrained RL approach that utilizes meta-gradients to find a good trade-off between expected return and minimizing constraint violations. We demonstrate the effectiveness of this approach by showing that it consistently outperforms the baselines across four different Mujoco domains.
+
+# 1 INTRODUCTION
+
+Reinforcement Learning (RL) algorithms typically try to maximize an expected return objective (Sutton & Barto, 2018). This approach has led to numerous successes in a variety of domains which include board-games (Silver et al., 2017), computer games (Mnih et al., 2015; Tessler et al., 2017) and robotics (Abdolmaleki et al., 2018). However, formulating real-world problems with only an expected return objective is often sub-optimal when tackling many applied problems ranging from recommendation systems to physical control systems which may include robots, self-driving cars and even aerospace technologies. In many of these domains there are a variety of challenges preventing RL from being utilized as the algorithmic solution framework. Recently, Dulac-Arnold et al. (2019) presented nine challenges that need to be solved to enable RL algorithms to be utilized in real-world products and systems. One of those challenges is handling constraints. All of the above domains may include one or more constraints related to cost, wear-and-tear, or safety, to name a few.
+
+Hard and Soft Constraints: There are two types of constraints that are encountered in constrained optimization problems; namely hard-constraints and soft-constraints (Boyd & Vandenberghe, 2004). Hard constraints are pairs of pre-specified functions and thresholds that require the functions, when evaluated on the solution, to respect the thresholds. As such, these constraints may limit the feasible solution set. Soft constraints are similar to hard constraints in the sense that they are defined by pairs of pre-specified functions and thresholds; however, a soft constraint does not require the solution to satisfy the constraint; instead, it penalizes the objective function (according to a specified rule) if the solution violates the constraint (Boyd & Vandenberghe, 2004; Thomas et al., 2017).
+
+Motivating Soft-Constraints: In real-world products and systems, there are many examples of soft-constraints; that is, constraints that can be violated, where the violated behaviour is undesirable but not catastrophic (Thomas et al. 2017; Dulac-Arnold et al. 2020b). One concrete example is that of energy minimization in physical control systems. Here, the system may wish to reduce the amount of energy used by setting a soft-constraint. Violating the constraint is inefficient, but not catastrophic to the system completing the task. In fact, there may be desirable characteristics that can only be attained if there are some constraint violations (e.g., a smoother/faster control policy). Another common setting is where it is unclear how to set a threshold. In many instances, a product
+manager may desire to increase the level of performance on a particular product metric $A$ , while ensuring that another metric $B$ on the same product does not drop by 'approximately $X\%$ '. The value 'X' is often inaccurate and may not be feasible in many cases. In both of these settings, violating the threshold is undesirable, yet does not have catastrophic consequences.
+
+Lagrange Optimization: In the RL paradigm, a number of approaches have been developed to incorporate hard constraints into the overall problem formulation (Altman 1999; Tessler et al. 2018; Efroni et al. 2020; Achiam et al. 2017; Bohez et al. 2019; Chow et al. 2018; Paternain et al. 2019; Zhang et al. 2020; Efroni et al. 2020). One popular approach is to model the problem as a Constrained Markov Decision Process (CMDP) (Altman 1999). In this case, one method is to solve the following problem formulation: $\max_{\pi} J_R^{\pi}$ s.t. $J_C^\pi \leq \beta$ , where $\pi$ is a policy, $J_R^\pi$ is the expected return, $J_C^\pi$ is the expected cost and $\beta$ is a constraint violation threshold. This is often solved by performing alternating optimization on the unconstrained Lagrangian relaxation of the original problem (e.g. Tessler et al. (2018)), defined as: $\min_{\lambda \geq 0} \max_{\pi} J_R^\pi + \lambda (\beta - J_C^\pi)$ . The updates alternate between learning the policy and the Lagrange multiplier $\lambda$ .
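To make the alternating update concrete, here is a minimal hypothetical sketch of the multiplier half of the Lagrangian scheme described above (a stand-in helper, not the actual implementation of any cited algorithm): gradient descent on $\lambda(\beta - J_C^\pi)$ with a projection back onto $\lambda \geq 0$.

```python
def lagrange_step(lam, j_c, beta, lr=0.01):
    """One gradient step on min_{lam >= 0} lam * (beta - j_c).

    d/d lam [lam * (beta - j_c)] = beta - j_c, so lam grows when the
    constraint is violated (j_c > beta) and shrinks otherwise.
    """
    lam = lam - lr * (beta - j_c)
    return max(0.0, lam)  # projection onto lam >= 0
```

In the alternating scheme this step would be interleaved with policy updates that maximize the relaxed objective for the current value of $\lambda$.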
+
+In many previous constrained RL works (Achiam et al., 2017; Tessler et al., 2018; Ray et al., 2019; Satija et al., 2020), because the problem is formulated with hard constraints, there are some domains in each case where a feasible solution is not found. This could be due to approximation errors, noise, or the constraints themselves being infeasible. The real-world applications, along with empirical constrained RL research results, further motivate the need to develop a soft-constrained RL optimization approach. Ideally, in this setup, we would like an algorithm that satisfies the constraints while solving the task by maximizing the objective. If the constraints cannot be satisfied, then this algorithm should find a good trade-off (that is, minimizing constraint violations while solving the task by maximizing the objective).
+
+In this paper, we extend the constrained RL Lagrange formulation to perform soft-constrained optimization by formulating the constrained RL objective as a nested optimization problem (Sinha et al., 2017) using meta-gradients. We propose MetaL, which utilizes meta-gradients (Xu et al., 2018; Zahavy et al., 2020) to improve upon the trade-off between reducing constraint violations and improving expected return. We focus on Distributed Distributional Deterministic Policy Gradients (D4PG) (Barth-Maron et al., 2018), a state-of-the-art continuous control RL algorithm, as the underlying algorithmic framework. We show that MetaL can capture an improved trade-off between expected return and constraint violations compared to the baseline approaches. We also introduce a second approach called MeSh that utilizes meta-gradients by adding additional representation power to the reward shaping function. Our main contributions are as follows: (1) We extend D4PG to handle constraints by adapting it to Reward Constrained Policy Optimization (RCPO) (Tessler et al., 2018), yielding Reward Constrained D4PG (RC-D4PG); (2) We present a soft-constrained meta-gradient technique: Meta-Gradients for the Lagrange multiplier learning rate (MetaL); (3) We derive the meta-gradient update for MetaL (Theorem 1); (4) We perform extensive experiments and investigative studies to showcase the properties of this algorithm. MetaL outperforms the baseline algorithms across domains, safety coefficients and thresholds from the Real World RL suite (Dulac-Arnold et al., 2020b).
+
+# 2 BACKGROUND
+
+A Constrained Markov Decision Process (CMDP) is an extension to an MDP (Sutton & Barto, 2018) and consists of the tuple $\langle S, A, P, R, C, \gamma \rangle$ where $S$ is the state space; $A$ is the action space; $P: S \times A \to \Delta_S$ is a function mapping state-action pairs to a distribution over next states; $R: S \times A \to \mathbb{R}$ is a bounded reward function and $C: S \times A \to \mathbb{R}^K$ is a $K$ -dimensional function representing immediate penalties (or costs) relating to $K$ constraints. The solution to a CMDP is a policy $\pi: S \to \Delta_A$ , a mapping from states to a probability distribution over actions. This policy aims to maximize the expected return $J_R^\pi = \mathbb{E}[\sum_{t=0}^\infty \gamma^t r_t]$ and satisfy the constraints $J_{C_i}^\pi = \mathbb{E}[\sum_{t=0}^\infty \gamma^t c_{i,t}] \leq \beta_i, i = 1 \dots K$ . For the purpose of this paper, we consider a single constraint, i.e., $K = 1$ , but this can easily be extended to multiple constraints.
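In practice, $J_R^\pi$ and $J_C^\pi$ are estimated from sampled trajectories. A hypothetical Monte-Carlo estimator of a discounted sum (not the paper's code) might look like:

```python
def discounted_sum(values, gamma):
    """Monte-Carlo estimate of sum_t gamma^t * x_t from one sampled
    trajectory, where `values` holds the per-step rewards r_t (for J_R)
    or per-step costs c_t (for J_C)."""
    total = 0.0
    for t, x in enumerate(values):
        total += (gamma ** t) * x
    return total
```

A sampled constraint value `discounted_sum(costs, gamma)` would then be compared against the threshold $\beta$ to check satisfaction.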
+
+Meta-Gradients is an approach to optimizing hyperparameters such as the discount factor, learning rates, etc. by performing online cross validation while simultaneously optimizing for the overall RL optimization objective such as the expected return (Xu et al., 2018; Zahavy et al., 2020). The goal is to optimize both an inner loss and an outer loss. The update of the $\theta$ parameters on the inner
+loss is defined as $\theta' = \theta + f(\tau, \theta, \eta)$ , where $\theta \in \mathbb{R}^d$ corresponds to the parameters of the policy $\pi_\theta(a|s)$ and the value function $v_\theta(s)$ (if applicable). The function $f: \mathbb{R}^k \to \mathbb{R}^d$ is the gradient of the policy and/or value objective with respect to the parameters $\theta$ , computed over an n-step trajectory $\tau = \langle s_1, a_1, r_2, s_2 \ldots s_n \rangle$ with meta-parameters $\eta$ and weighted by a learning rate $\alpha$ : $f(\tau, \theta, \eta) = \alpha \frac{\mathrm{d}J_{obj}^{\pi_\theta}(\theta, \tau, \eta)}{\mathrm{d}\theta}$ , where $J_{obj}^{\pi_\theta}(\theta, \tau, \eta)$ is the objective being optimized with respect to $\theta$ . The idea is then to evaluate the performance of this new parameter value $\theta'$ on an outer loss - the meta-gradient objective. We define this objective as $J'(\tau', \theta', \bar{\eta})$ where $\tau'$ is a new trajectory, $\theta'$ are the updated parameters and $\bar{\eta}$ is a fixed meta-parameter (which needs to be selected/tuned in practice). We then take the gradient of the objective $J'$ with respect to the meta-parameters $\eta$ to yield the outer loss update $\eta' = \eta + \alpha_\eta \frac{\partial J'(\tau', \theta', \bar{\eta})}{\partial\eta}$ . This gradient is computed as follows: $\frac{\partial J'(\tau', \theta', \bar{\eta})}{\partial\eta} = \frac{\partial J'(\tau', \theta', \bar{\eta})}{\partial\theta'} \frac{\partial\theta'}{\partial\eta}$ . The outer loss is essentially the objective we are trying to optimize. This could be a policy gradient loss, a temporal difference loss, a combination of the two, etc. (Xu et al., 2018; Zahavy et al., 2020). Meta-gradients have been previously used to learn intrinsic rewards for policy gradient (Zheng et al., 2018) and auxiliary tasks (Veeriah et al., 2019).
+Meta-gradients have also been used to adapt optimizer parameters (Young et al., 2018; Franceschi et al., 2017). In our setup, we consider the continuous control setting, provide the first implementation of meta-gradients for an algorithm that uses an experience replay, and focus on adapting meta-parameters that encourage soft constraint satisfaction while maximizing expected return.
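The chain rule $\frac{\partial J'}{\partial \eta} = \frac{\partial J'}{\partial \theta'} \frac{\partial \theta'}{\partial \eta}$ can be illustrated with a toy scalar example (hypothetical, chosen only for illustration): inner update $\theta' = \theta + \eta g$ and outer loss $J'(\theta') = -(\theta' - \text{target})^2$.

```python
def meta_grad_eta(theta, eta, g, target):
    """Toy scalar meta-gradient.

    Inner update: theta' = theta + eta * g.
    Outer loss:   J'(theta') = -(theta' - target)^2.
    Chain rule:   dJ'/d eta = (dJ'/d theta') * (d theta'/d eta).
    """
    theta_new = theta + eta * g
    dJ_dtheta_new = -2.0 * (theta_new - target)  # outer-loss gradient at theta'
    dtheta_new_deta = g                          # sensitivity of inner update to eta
    return dJ_dtheta_new * dtheta_new_deta
```

Ascending this gradient moves $\eta$ so that the inner update lands $\theta'$ closer to the outer objective's optimum, which is the same mechanism MetaL applies to the Lagrange learning rate.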
+
+D4PG is a state-of-the-art continuous control RL algorithm with a deterministic policy (Barth-Maron et al., 2018). It is an incremental improvement to DDPG (Lillicrap et al., 2015). The overall objective of DDPG is to maximize $J(\theta_{a}, \theta_{c}) = \mathbb{E}[Q_{\theta_{c}}(s, a) | s = s_{t}, a = \pi_{\theta_{a}}(s_{t})]$ where $\pi_{\theta_{a}}(s_{t})$ is a deterministic policy with parameters $\theta_{a}$ and $Q_{\theta_{c}}(s, a)$ is an action value function with parameters $\theta_{c}$ . The actor loss is defined as: $L_{\text{actor}} = \| \mathrm{SG}(\nabla_{a} Q_{\theta_{c}}(s_{t}, a_{t})|_{a_{t} = \pi_{\theta_{a}}(s)} + a_{\theta_{a}, t}) - a_{\theta_{a}, t} \|_{2}$ where SG is a stop gradient. The corresponding gradient update is defined as $\nabla_{\theta_{a}} J(\theta_{a}) = \mathbb{E}[\nabla_{a} Q_{\theta_{c}}(s, a) \nabla_{\theta_{a}} \pi_{\theta_{a}}(s_{t})]$ . The critic is updated using the standard temporal difference error loss: $L_{\text{critic}} = (r(s, a) + \gamma Q_{T}(s', \pi_{T}(s')) - Q_{\theta_{c}}(s, a))^{2}$ where $Q_{T}, \pi_{T}$ are the target critic and actor networks respectively. In D4PG, the critic is a distributional critic based on the C51 algorithm (Bellemare et al., 2017) and the agent is run in a distributed setup with multiple actors executed in parallel, n-step returns and with prioritized experience replay. We will use the non-distributional critic update in our notation for ease of visualization and clarity for the reader.
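The (non-distributional) critic update above reduces to a squared TD error; a hypothetical scalar sketch (not the D4PG implementation) is:

```python
def critic_td_loss(r, gamma, q_target_next, q_pred):
    """Squared TD error (r + gamma * Q_T(s', pi_T(s')) - Q(s, a))^2,
    where q_target_next stands in for the target-network value
    Q_T(s', pi_T(s')) and q_pred for the online critic Q(s, a)."""
    td_target = r + gamma * q_target_next
    return (td_target - q_pred) ** 2
```

In the full algorithm the same quantity is computed over batches of n-step transitions drawn from prioritized replay.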
+
+# 3 REWARD CONSTRAINED D4PG (RC-D4PG)
+
+This section describes our modifications required to transform D4PG into Reward Constrained D4PG (RC-D4PG) such that it maximizes the expected return and satisfies constraints.
+
+The constrained optimisation objective is defined as: $\max_{\pi_{\theta}}J_{R}^{\pi_{\theta}}$ subject to $J_C^{\pi_\theta}\leq \beta$ , where $J_R^{\pi_\theta} = \mathbb{E}[Q(s,a)|s = s_t,a = \pi_\theta (s_t)]$ and $J_{C}^{\pi_{\theta}} = \mathbb{E}[C(s,a)|s = s_{t},a = \pi_{\theta}(s_{t})]$ ; the parameter $\theta = \langle \theta_{a},\theta_{c}\rangle$ from here on; $C(s,a)$ is a long-term penalty value function (e.g., the sum of discounted immediate penalties) corresponding to constraint violations. The Lagrangian relaxation objective is defined as $J_{R}^{\pi_{\theta}} + \lambda (\beta -J_{C}^{\pi_{\theta}})$ . As in RCPO, a proxy objective $J_{R}^{\pi_{\theta}} - \lambda J_{C}^{\pi_{\theta}}$ is used that converges to the same set of locally optimal solutions as the relaxed objective (Tessler et al., 2018). Note that the constant $\beta$ does not affect the policy improvement step and is only used for the Lagrange multiplier loss update. To optimize the proxy objective with D4PG, reward shaping of the form $r(s,a) - \lambda c(s,a)$ is required, yielding the reward-shaped critic loss: $L_{critic}(\theta_c,\lambda) = (r(s,a) - \lambda c(s,a) + \gamma Q_T(s',\pi_T(s')) - Q_{\theta_c}(s,a))^2$ . The actor loss is defined as before. The Lagrange loss is defined as: $L_{lagrange}(\lambda) = \lambda (\beta -J_C^{\pi_\theta})$ where $\lambda \geq 0$ . Since RC-D4PG is off-policy, it requires storing the per-time-step penalties, $c$ , inside the transitions stored in the experience replay buffer (ER). For training the Lagrange multiplier, an additional penalty buffer is used to store the per-episode penalties $J_C^{\pi_\theta}$ . The learner then reads from this penalty buffer to update the Lagrange multiplier. RC-D4PG updates the actor/critic parameters and the Lagrange multipliers using alternating optimization. The full algorithm for this setup can be found in the Appendix, Algorithm 3.
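The reward shaping can be sketched as a one-line change to the critic target (a hypothetical helper, not the RC-D4PG implementation):

```python
def shaped_td_target(r, c, lam, gamma, q_target_next):
    """Critic target with reward shaping r - lam * c for the proxy
    objective J_R - lam * J_C. `q_target_next` stands in for the
    target-network value Q_T(s', pi_T(s'))."""
    return (r - lam * c) + gamma * q_target_next
```

With $\lambda = 0$ this recovers the unconstrained D4PG target; as $\lambda$ grows, penalized transitions contribute progressively lower targets.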
+
+# 4 META-GRADIENTS FOR THE LAGRANGE LEARNING RATE (METAL)
+
+In this section, we introduce the MetaL algorithm which extends RC-D4PG to use meta-gradients for adapting the learning rate of the Lagrangian multiplier. The idea is to update the learning rate such that the outer loss (as defined in the next subsection) is minimized. Our intuition is that a learning rate gradient that takes into account the overall task objective and constraint thresholds will lead to improved overall performance.
+
+# Algorithm 1 MetaL
+
+1: Input: penalty $c(\cdot)$ , constraint $C(\cdot)$ , threshold $\beta$ , learning rates $\alpha_{1}, \alpha_{\theta_{a}}, \alpha_{\theta_{c}}, \alpha_{\eta}$ , max. number of episodes $M$
+2: Initialize actor, critic parameters $\theta_{a}$ and $\theta_{c}$ , Lagrange multiplier $\lambda = 0$ , meta-parameter $\eta = \alpha_{\lambda}$
+3: for $1 \ldots M$ do
+4: Inner loss:
+5: Sample episode penalty $J_{C}^{\pi}$ from the penalty replay buffer
+6: $\lambda^{\prime}\gets [\lambda -\alpha_{1}\exp (\alpha_{\lambda})(\beta -J_{C}^{\pi})]_{+}$ ▷ Lagrange multiplier update
+7: Sample a batch with tuples $\langle s_t, a_t, r_t, c_t \rangle_{t=1}^T$ from ER and split into training/validation sets
+8: Accumulate and apply actor and critic updates over training batch $T_{train}$ by:
+9: $\nabla \theta_{c} = 0, \nabla \theta_{a} = 0$
+10: for $t = 1\ldots T_{train}$ do
+11: $\hat{R}_t = r_t - \lambda' c_t + \gamma \hat{Q}(\lambda, s_{t+1}, a_{t+1} \sim \pi_T(s_{t+1}); \theta_c)$
+12: $\nabla \theta_{c} + = \alpha_{\theta_{c}}\partial (\hat{R}_{t} - \hat{Q} (\lambda ,s_{t},a_{t};\theta_{c}))^{2} / \partial \theta_{c}$
+13: $\nabla \theta_{a} + = \alpha_{\theta_{a}}\mathbb{E}\big[\nabla_{a}Q(s_{t},a_{t})\nabla_{\theta_{a}}\pi_{\theta_{a}}(s_{t})|_{a_{t} = \pi (s_{t})}\big]$
+14: $\theta_c^{\prime}\gets \theta_c - \frac{1}{T_{train}}\sum \nabla \theta_c$
+15: $\theta_{a}^{\prime}\gets \theta_{a} + \frac{1}{T_{train}}\sum \nabla \theta_{a}$
+16: Outer loss: Compute outer loss and meta-gradient update using validation set $T_{\text{validate}}$ :
+17: $\alpha_{\lambda}^{\prime}\gets \alpha_{\lambda} - \alpha_{\eta}\frac{\partial J^{\prime}(\theta_{c}^{\prime}(\alpha_{\lambda}),\lambda^{\prime}(\alpha_{\lambda}))}{\partial\alpha_{\lambda}}$
+18: $\lambda \leftarrow \lambda^{\prime},\theta_{a}\leftarrow \theta_{a}^{\prime},\theta_{c}\leftarrow \theta_{c}^{\prime},\alpha_{\lambda}\leftarrow \alpha_{\lambda}^{\prime}$
+19: return $\theta_{a},\theta_{c},\lambda$
+
+Meta-parameters, inner and outer losses: The meta-parameter is defined as $\eta = \alpha_{\lambda}$ . The inner loss is composed of three losses, the actor, critic and Lagrange loss respectively. The actor and critic losses are the same as in RC-D4PG. The Lagrange multiplier loss is defined as: $L_{lagrange}(\lambda) = \exp (\alpha_{\lambda})\lambda (\beta -J_C^\pi)$ where $\alpha_{\lambda}$ is the meta-parameter as defined above. The meta-parameter is wrapped inside an exponential function to magnify the effect of $\alpha_{\lambda}$ while also ensuring non-negativity of the effective learning rate. The inner loss updates are
+
+$$
+\left[ \begin{array}{c} \theta_{a}^{\prime} \\ \theta_{c}^{\prime} \\ \lambda^{\prime} \end{array} \right] = \left[ \begin{array}{c} \theta_{a} \\ \theta_{c} \\ \lambda \end{array} \right] - \left[ \begin{array}{c} f(\tau, \theta_{a}, \eta) \\ f(\tau, \theta_{c}, \eta) \\ f(\tau, \lambda, \eta) \end{array} \right] = \left[ \begin{array}{c} \theta_{a} \\ \theta_{c} \\ \lambda \end{array} \right] - \left[ \begin{array}{c} \alpha_{\theta_{a}} \frac{d L_{\text{actor}}(\theta_{a})}{d \theta_{a}} \\ \alpha_{\theta_{c}} \frac{d L_{\text{critic}}(\theta_{c}, \lambda)}{d \theta_{c}} \\ \alpha_{1} \frac{d L_{\text{lagrange}}(\lambda)}{d \lambda} \end{array} \right]
+$$
+
+where $\alpha_{\theta_{a}}$ , $\alpha_{\theta_{c}}$ and $\alpha_{1}$ are the actor, critic and Lagrange multiplier learning rates respectively. The outer loss is defined as $J^{\prime}(\theta_c^\prime (\alpha_\lambda),\lambda^\prime (\alpha_\lambda)) = L_{outer} = L_{critic}(\theta_c^\prime (\alpha_\lambda),\lambda^\prime (\alpha_\lambda))$ . We tried different variants of outer losses and found that this loss empirically yielded the best performance; we discuss this in more detail in the experiments section. This is analogous to formulating MetaL as the following nested optimization problem: $\min_{\alpha_{\lambda}}J^{\prime}(\theta (\alpha_{\lambda}),\lambda (\alpha_{\lambda}))$ , s.t. $\theta ,\lambda \in \arg \min_{\theta ,\lambda \geq 0}\{-J_R^{\pi_\theta} - \lambda (\alpha_\lambda)(\beta -J_C^{\pi_\theta})\}$ . We treat the lower-level optimization problem as the Lagrange relaxation objective (inner loss), and the upper-level optimization as the meta-gradient objective $J^{\prime}(\theta (\alpha_{\lambda}),\lambda (\alpha_{\lambda}))$ (outer loss). This transforms the problem into soft-constrained optimization, since the meta-parameter $\alpha_{\lambda}$ guides the learning of the Lagrange multiplier $\lambda$ to minimize the outer loss while attempting to find a good trade-off between minimizing constraint violations and maximizing return (inner loss).
+
+As shown in Algorithm 1, the inner loss gradients are computed for $\lambda$ (line 6), $\theta_{c}$ (line 12) and $\theta_{a}$ (line 13), corresponding to the Lagrange multiplier, critic and actor parameters respectively. The Lagrange multiplier is updated by sampling episode penalties, which are empirical estimates of $J_C^\pi$ , from a separate penalty replay buffer (line 5) to compute the gradient update. The updated multiplier
+is then utilized in the critic inner update (lines 11 and 12) to ensure that the critic parameters are a function of this new updated Lagrange multiplier. The actor and critic parameters are updated using the training batch, and these updated parameters along with a validation batch are used to compute the outer loss (line 17). The meta-parameter $\alpha_{\lambda}$ is then updated along the gradient of this outer loss with respect to $\eta = \alpha_{\lambda}$ . We next derive the meta-gradient update for $\alpha_{\lambda}$ , and present it in the following theorem (see the Appendix, Section A for the full derivation). Intuition for this meta-gradient update is provided in the experiments section.
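The inner Lagrange update on line 6 of Algorithm 1 can be sketched as follows (a hypothetical helper mirroring that line, not the actual implementation):

```python
import math

def metal_lambda_update(lam, alpha_1, alpha_lam, j_c, beta):
    """Line 6 of Algorithm 1:
    lam' = [lam - alpha_1 * exp(alpha_lam) * (beta - j_c)]_+

    The meta-parameter alpha_lam rescales the effective learning rate
    multiplicatively via exp(alpha_lam), which also keeps it positive.
    """
    return max(0.0, lam - alpha_1 * math.exp(alpha_lam) * (beta - j_c))
```

For example, with `alpha_lam = 0` the effective learning rate is just `alpha_1`; setting `alpha_lam = log(2)` doubles it.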
+
+Theorem 1. MetaL gradient update: Let $\beta \geq 0$ be a pre-defined constraint violation threshold, let the meta-parameter be $\eta = \alpha_{\lambda}$ , and let $J_C^{\pi_\theta} = \mathbb{E}\Big[C(s,a)|s = s_t,a = \pi_\theta (s_t)\Big]$ be the discounted constraint violation function. Then the meta-gradient update is:
+
+$$
+\alpha_{\lambda}^{\prime} \leftarrow \alpha_{\lambda} - \alpha_{\eta} \bigg( -2\delta \cdot c(s,a) \cdot \alpha_{1} \exp(\alpha_{\lambda}) \cdot \Big( J_{C}^{\pi_{\theta}} - \beta \Big) \Big( -2\alpha_{\theta_{c}} \big( \nabla_{\theta_{c}^{\prime}} Q_{\theta_{c}^{\prime}}(s,a) \big)^{T} \nabla_{\theta_{c}} Q_{\theta_{c}}(s,a) + 1 \Big) \bigg),
+$$
+
+where $\delta$ is the TD error; $\alpha_{\theta_c}$ is the critic learning rate and $\alpha_{\eta}$ is the meta-parameter learning rate.
+
+# 5 EXPERIMENTS
+
+The experiments were performed using domains from the Real-World Reinforcement Learning (RWRL) suite, namely cartpole:swingup, walker:walk, quadruped:walk and humanoid:walk. We will refer to these domains as cartpole, walker, quadruped and humanoid from here on.
+
+We focus on two types of tasks with constraints: (1) solvable constraint tasks - where the task is solved and the constraints can be satisfied; (2) unsolvable constraint tasks - where the task can be solved but the constraints cannot be satisfied. Unsolvable constraint tasks correspond to tasks where the constraint thresholds are incorrectly set and cannot be satisfied, situations which occur in many real-world problems as motivated in the introduction. The specific constraints we focused on for each domain can be found in the Appendix. The goal is to showcase the soft-constrained performance of MetaL, with respect to reducing constraint violations and maximizing the return, in both of these scenarios (solvable and unsolvable constraint tasks) relative to the baselines.
+
+The baseline algorithms we focused on for each experiment are D4PG without any constraints, RC-D4PG (i.e., hard constraint satisfaction) and Reward Shaping D4PG (RS-D4PG) (i.e., soft constraint satisfaction). RS-D4PG uses a fixed $\lambda$ for the duration of training. We compare these baselines to MetaL. Note that D4PG, RC-D4PG and MetaL have no prior information regarding the Lagrange multiplier. RC-D4PG and MetaL attempt to learn a suitable multiplier value from scratch, i.e. the initial Lagrange multiplier value is set to 0.0. In contrast, RS-D4PG has prior information (i.e. it uses a pre-selected fixed Lagrange multiplier).
+
+Experimental Setup: For each domain, the action and observation dimensions are shown in the Appendix, Table 4. The episode length is 1000 steps, the base reward function is computed within the dm_control suite (Tassa et al., 2018). The upper bound reward for each task is 1000. Each task was trained for 20000 episodes. Each variant of D4PG uses the same network architecture (see the Appendix, Table 5 for more details).
+
+We use different performance metrics to compare overall performance. We track the average episode return $(R)$ , but we also define the penalized return: $R_{\text{penalized}} = R - \kappa \cdot \psi_{\beta, C}$ , which captures the trade-off between achieving optimal performance and satisfying the constraints. Here, $R$ is the average return for the algorithm upon convergence (computed as an average over the previous 100 episodes); $\kappa$ is a fixed constant that determines how much to weight the constraint violation penalty. For the purposes of evaluation, we want to penalize algorithms that consistently violate the constraints and therefore set $\kappa = 1000$ . Since the upper bound of rewards for each domain is 1000, we essentially weight attaining high performance and satisfying constraints equally. Finally, $\psi_{\beta, C} = \max(0, J_C^\pi - \beta)$ is defined as the overshoot. Here $\beta$ is the constraint violation threshold and defines the allowable average constraint violations per episode; $J_C^\pi$ is the average constraint violation value per episode upon convergence for a policy $\pi$ . The overshoot, $\psi_{\beta, C}$ , tracks the average constraint violations above the allowed threshold $\beta$ .
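The penalized-return metric is simple enough to state directly in code (a hypothetical helper for illustration):

```python
def penalized_return(avg_return, j_c, beta, kappa=1000.0):
    """R_penalized = R - kappa * max(0, J_C - beta), where
    max(0, J_C - beta) is the constraint overshoot."""
    overshoot = max(0.0, j_c - beta)
    return avg_return - kappa * overshoot
```

A policy that satisfies the constraint (`j_c <= beta`) is not penalized at all, while consistent violations are weighted on the same scale as the maximum achievable return.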
+
+We investigate each algorithm's performance along a variety of dimensions, including different constraint violation thresholds (see the Appendix, Table 3 for a list of thresholds used), safety coefficients and domains. The safety coefficient is a flag in the RWRL suite (Dulac-Arnold et al., 2020a) that takes values between 0.0 and 1.0; reducing its value causes more constraint violations to occur per domain per episode. We searched over the values \{0.05, 0.1, 0.2, 0.3\}, which range from solvable constraint tasks (e.g., 0.3) to unsolvable constraint tasks (e.g., 0.05), to see how the algorithms behave in these extreme scenarios. In addition, we analysed performance across a variety of different constraint violation thresholds (see the Appendix, Table 6). All experiments are averaged across 8 seeds.
+
+# 5.1 MAIN RESULTS
+
+We begin by analyzing the performance of our best variant, MetaL, with different outer losses. Then we analyse the overall performance of all methods, followed by dissecting performance along the dimensions of safety coefficient and domain respectively. Finally, we investigate the derived gradient update for MetaL from Theorem 1 and provide intuition for the algorithm's behaviour.
+
+
+Figure 1: MetaL performance using different outer losses (left) and comparison with D4PG and RC-D4PG (right).
+
+
+
+MetaL outer loss: We wanted to determine whether different outer losses would result in improved overall performance. We used the actor loss ( $L_{\text{actor}}$ ) and the combination of the actor and critic losses ( $L_{\text{actor}} + L_{\text{critic}}$ ) as the outer loss and compared them with the original MetaL outer loss ( $L_{\text{critic}}$ ) as well as the other baselines. Figure 1 shows that using just the actor loss results in the worst performance, while any outer loss that includes the critic loss performs better; the best performance is achieved by the original critic-only MetaL outer loss.
+
+There is some intuition for choosing a critic-only outer loss. In MetaL, the critic loss is a function of $\lambda$ . As a result, the value of $\lambda$ affects the agent's ability to minimize this loss and therefore to learn an accurate value function. In D4PG, an accurate value function (i.e., the critic) is crucial for learning a good policy (i.e., the actor), because the policy relies on an accurate estimate of the value function to learn actions that maximize the return (see the D4PG actor loss). This would explain why adding the actor loss to the outer loss has little effect on the final quality of the solution, whereas removing the critic loss has a significant effect.
+
+Overall performance: We averaged the performance of MetaL across all safety coefficients, thresholds and domains and compared it with the relevant baselines. As seen in Table 1, MetaL outperforms all of the baseline approaches, achieving the best trade-off between minimizing constraint violations and maximizing return. This includes all of the soft-constrained optimization baselines (i.e., the RS-D4PG variants), D4PG, and the hard-constrained optimization algorithm RC-D4PG. It is interesting to note from the table that the best reward shaping variants are (1) RS-0.1, which achieves comparable return but higher overshoot and therefore lower penalized return; and (2) RS-1.0, which attains significantly lower return but also lower overshoot, again resulting in lower penalized return. D4PG has the highest return, but at the cost of significantly higher overshoot. While RC-D4PG attains lower overshoot, it also yields significantly lower overall return. We now investigate this performance in more detail by looking at the performance per safety coefficient and per domain.
+
+Performance as a function of safety coefficient: We analyzed the average performance per safety coefficient, while averaging across all domains and thresholds. As seen in Figure 2, MetaL achieves comparable average return to that of D4PG. In addition, it significantly outperforms both
+
+| Algorithm | $R_{\text{penalized}}$ | $R$ | $\max(0, J_C^\pi - \beta)$ |
+| --- | --- | --- | --- |
+| D4PG | 432.70 ± 11.99 | 927.66 | 0.49 |
+| MetaL | 677.93 ± 25.78 | 921.16 | 0.24 |
+| RC-D4PG | 478.60 ± 89.26 | 648.42 | 0.17 |
+| RS-0.1 | 641.41 ± 26.67 | 906.76 | 0.27 |
+| RS-1.0 | 511.70 ± 15.50 | 684.30 | 0.17 |
+| RS-10.0 | 208.57 ± 61.46 | 385.42 | 0.18 |
+| RS-100.0 | 118.50 ± 62.54 | 314.93 | 0.20 |
+
+Table 1: Overall performance across domains, safety coefficients and thresholds.
+
+D4PG and RC-D4PG in terms of penalized return. Figure 2 includes the reward shaping baselines. As the figure shows, choosing a different reward shaping value can lead to drastically different performance, which is one of the drawbacks of the RS-D4PG variants. It is possible to find comparable RS variants (e.g., RS-0.1 for the lowest safety coefficient of 0.05); however, as can be seen in Figure 3 for the highest safety coefficient and largest threshold, this RS variant fails completely at the humanoid task, further highlighting the instability of the RS approach. Figure 3, which presents the performance of MetaL and the baselines for the highest safety coefficient and largest threshold (to ensure that the constraint task is solvable), shows that MetaL has comparable performance to RC-D4PG (a hard-constrained optimization algorithm). This further highlights the power of MetaL: it matches hard-constrained optimization algorithms when the constraint task is solvable, and achieves state-of-the-art performance when it is not.
+
+Performance per domain: When analyzing the performance per domain, averaging across safety coefficients and constraint thresholds, we found that MetaL has significantly better penalized return compared to D4PG and RC-D4PG across the domains. A table of the results can be seen in the Appendix, Figure 7. Note that, as mentioned previously, the RS-D4PG variants fluctuate drastically in performance across domains.
+
+
+Figure 2: Performance as a function of safety coefficient.
+
+
+Figure 3: Performance per domain. MetaL compared to baselines in terms of average reward and penalized reward across the highest safety coefficient and largest thresholds for each domain.
+
+Algorithm behaviour analysis: Since MetaL is a soft-constrained adaptation of RC-D4PG, we next analyze MetaL's gradient update in Theorem 1 to understand why the performance of MetaL differs from that of RC-D4PG in two types of scenarios: (1) solvable and (2) unsolvable constraint tasks. For both scenarios, we investigate the performance on cartpole for a constraint threshold of 0.115.
+
+For (1), we set the safety coefficient to 0.3. The learning curve for this converged setting can be seen in Figure 4 (left). We track four different parameters here: the Lagrange multiplier $\lambda$ (red curve), the mean penalty value $J_{C}^{\pi}$ (orange curve), the meta-parameter $\alpha_{\lambda}$ (black curve) and the scaled Lagrangian learning rate $\alpha_{1} \cdot \exp(\alpha_{\lambda})$ (green curve). The threshold $\beta$ is shown as the blue dotted line. Initially there are many constraint violations, corresponding to a large difference $J_{C}^{\pi} - \beta$ (orange curve minus blue dotted line), which appears in the gradient in Theorem 1. As a result, the meta-parameter $\alpha_{\lambda}$ increases in value, as seen in the figure, and therefore increases the scaled learning rate so that the value of $\lambda$ is modified more quickly toward an improved solution. Once $J_{C}^{\pi}$ satisfies the constraint in expectation ( $J_{C}^{\pi} - \beta \approx 0$ ), the scaled learning rate drops in value because $J_{C}^{\pi} - \beta$ is small. This is an attempt by the algorithm to slow down the change in $\lambda$ since a reasonable solution has been found (see the return for MetaL (green curve) in Figure 4 (right)).
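The dynamics described above can be sketched as a toy update rule. This is our own simplified stand-in, not the paper's exact meta-gradient: the Lagrange multiplier moves with a scaled learning rate $\alpha_1 \cdot \exp(\alpha_\lambda)$ driven by the overshoot $J_C^\pi - \beta$, and the meta-parameter update here is a heuristic proxy for the gradient in Theorem 1:

```python
import math

# Toy sketch (illustrative names) of the coupled updates discussed above.
# lam is the Lagrange multiplier; alpha_meta the meta-parameter alpha_lambda;
# alpha_1 the base Lagrangian learning rate.

def lagrange_step(lam, alpha_meta, j_c, beta, alpha_1=1e-3, meta_lr=1e-2):
    scaled_lr = alpha_1 * math.exp(alpha_meta)       # green curve in Fig. 4
    lam = max(0.0, lam + scaled_lr * (j_c - beta))   # multiplier stays >= 0
    # Heuristic proxy: grow alpha_meta while the constraint is violated,
    # shrink it as J_C - beta approaches zero (qualitative behaviour only).
    alpha_meta = alpha_meta + meta_lr * (j_c - beta)
    return lam, alpha_meta
```

While violations persist, both $\lambda$ and the scaled learning rate grow; once the constraint is met in expectation, the overshoot term shrinks and the updates to $\lambda$ slow down, matching the qualitative behaviour in Figure 4.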
+
+For (2), we set the safety coefficient to 0.05, making the constraint task unsolvable in this domain. The learning curves can be seen in Figure 4 (middle). Even though the constraint task is unsolvable, MetaL still manages to yield a reasonable expected return, as seen in Figure 4 (right). This contrasts with RC-D4PG, which overfits to satisfying the constraint and, in doing so, yields poor average reward performance. This can be seen in Figure 4 (middle), where RC-D4PG has lower overshoot than MetaL for low safety coefficients, but at the expense of poor expected return and penalized return, as seen in Figure 4 (left). We now provide some intuition for MetaL's performance and relate it to the $\alpha_{\lambda}$ gradient update.
+
+In this setting, there are consistent constraint violations leading to a large value for $J_C^\pi - \beta$ . At this point an interesting effect occurs. The value of $\alpha_\lambda$ decreases, as seen in the figure, while it tries to adapt the value of $\lambda$ to satisfy the constraint. However, as seen in the gradient update, there is an exponential term $\exp(\alpha_\lambda)$ which scales the Lagrange multiplier learning rate. This quickly drives the gradient down to 0, and consequently the scaled Lagrange multiplier learning rate too, as seen in Figure 4 (middle). This causes $\lambda$ to settle on a value as seen in the figure. At this point the algorithm optimizes for a stable fixed $\lambda$ and as a result finds the best trade-off for expected return at this $\lambda$ value. In summary, MetaL will maximize the expected return for an 'almost' fixed $\lambda$ , whereas RC-D4PG will attempt to overfit to satisfying the constraint resulting in a poor overall solution.
+
+
+Figure 4: The learning progress of MetaL for solvable (left) and unsolvable (middle) constraint tasks. In both cases, MetaL attempts to maximize the return (right).
+
+# 6 DISCUSSION
+
+In this paper, we presented a soft-constrained RL technique called MetaL that combines meta-gradients and constrained RL to find a good trade-off between minimizing constraint violations and maximizing returns. This approach (1) matches the return and constraint performance of a hard-constrained optimization algorithm (RC-D4PG) on "solvable constraint tasks"; and (2) obtains an improved trade-off between maximizing return and minimizing constraint overshoot on "unsolvable constraint tasks" compared to the baselines (including a hard-constrained RL algorithm, whose return simply collapses in such cases). MetaL achieves this by adapting the learning rate for the Lagrange multiplier update, which acts as a proxy for adapting the Lagrange multiplier itself. By amplifying or dampening the gradient updates to the multiplier during training, the agent is able to influence the trade-off between maximizing return and satisfying the constraints to yield the behaviours of (1) and (2). We also implemented a meta-gradient approach called MeSh that scales and offsets the shaped rewards. This approach did not outperform MetaL but is a direction of future work. The algorithm, derived meta-gradient update and a comparison to MetaL can be found in the Appendix, Section B. We show that across safety coefficients, domains and constraint thresholds, MetaL outperforms all of the baseline algorithms. We also derive the meta-gradient updates for MetaL and perform an investigative study that provides empirical intuition for the derived gradient update, helping to explain this meta-gradient variant's performance. We believe the proposed techniques will generalize to other policy gradient algorithms but leave this for future work.
+
+# REFERENCES
+
+Abbas Abdolmaleki, Jost Tobias Springenberg, Jonas Degrave, Steven Bohez, Yuval Tassa, Dan Belov, Nicolas Heess, and Martin A. Riedmiller. Relative entropy regularized policy iteration. CoRR, abs/1812.02256, 2018.
+Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained policy optimization. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 22-31. JMLR.org, 2017.
+Eitan Altman. Constrained Markov decision processes, volume 7. CRC Press, 1999.
+Gabriel Barth-Maron, Matthew W Hoffman, David Budden, Will Dabney, Dan Horgan, Dhruva Tb, Alistair Muldal, Nicolas Heess, and Timothy Lillicrap. Distributed distributional deterministic policy gradients. arXiv preprint arXiv:1804.08617, 2018.
+Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 449-458. JMLR.org, 2017.
+Steven Bohez, Abbas Abdolmaleki, Michael Neunert, Jonas Buchli, Nicolas Heess, and Raia Hadsell. Value constrained model-free continuous control. arXiv preprint arXiv:1902.04623, 2019.
+Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
+Yinlam Chow, Ofir Nachum, Edgar Duenez-Guzman, and Mohammad Ghavamzadeh. A lyapunov-based approach to safe reinforcement learning, 2018.
+Gabriel Dulac-Arnold, Daniel J. Mankowitz, and Todd Hester. Challenges of real-world reinforcement learning. CoRR, abs/1904.12901, 2019.
+Gabriel Dulac-Arnold, Nir Levine, Daniel J Mankowitz, Jerry Li, Cosmin Paduraru, Sven Gowal, and Todd Hester. An empirical investigation of the challenges of real-world reinforcement learning. arXiv preprint arXiv:2003.11881, 2020a.
+Gabriel Dulac-Arnold, Nir Levine, Daniel J. Mankowitz, Jerry Li, Cosmin Paduraru, Sven Gowal, and Todd Hester. An empirical investigation of the challenges of real-world reinforcement learning, 2020b.
+Yonathan Efroni, Shie Mannor, and Matteo Pirotta. Exploration-exploitation in constrained mdps, 2020.
+Luca Franceschi, Michele Donini, Paolo Frasconi, and Massimiliano Pontil. Forward and reverse gradient-based hyperparameter optimization, 2017.
+Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
+Santiago Paternain, Luiz Chamon, Miguel Calvo-Fullana, and Alejandro Ribeiro. Constrained reinforcement learning has zero duality gap. In Advances in Neural Information Processing Systems, pp. 7555-7565, 2019.
+
+Alex Ray, Joshua Achiam, and Dario Amodei. Benchmarking safe exploration in deep reinforcement learning. arXiv preprint arXiv:1910.01708, 2019.
+Harsh Satija, Philip Amortila, and Joelle Pineau. Constrained markov decision processes via backward value functions. arXiv preprint arXiv:2008.11811, 2020.
+David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of Go without human knowledge. Nature, 550, 2017.
+Ankur Sinha, Pekka Malo, and Kalyanmoy Deb. A review on bilevel optimization: from classical to evolutionary approaches and applications. IEEE Transactions on Evolutionary Computation, 22 (2):276-295, 2017.
+Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
+Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy P. Lillicrap, and Martin A. Riedmiller. Deepmind control suite. CoRR, abs/1801.00690, 2018.
+Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in Minecraft. In AAAI, volume 3, pp. 6, 2017.
+Chen Tessler, Daniel J Mankowitz, and Shie Mannor. Reward constrained policy optimization. arXiv preprint arXiv:1805.11074, 2018.
+Philip S Thomas, Bruno Castro da Silva, Andrew G Barto, and Emma Brunskill. On ensuring that intelligent machines are well-behaved. arXiv preprint arXiv:1708.05448, 2017.
+Vivek Veeriah, Matteo Hessel, Zhongwen Xu, Richard Lewis, Janarthanan Rajendran, Junhyuk Oh, Hado van Hasselt, David Silver, and Satinder Singh. Discovery of useful questions as auxiliary tasks. NeurIPS, 2019.
+Zhongwen Xu, Hado van Hasselt, and David Silver. Meta-Gradient Reinforcement Learning. NeurIPS, 2018.
+Kenny Young, Baoxiang Wang, and Matthew E. Taylor. Metatrace actor-critic: Online step-size tuning by meta-gradient descent for reinforcement learning control, 2018.
+Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado van Hasselt, David Silver, and Satinder Singh. Self-Tuning Deep Reinforcement Learning. 2020.
+Ruiyi Zhang, Tong Yu, Yilin Shen, Hongxia Jin, Changyou Chen, and Lawrence Carin. Reward constrained interactive recommendation with natural language feedback, 2020.
+Zeyu Zheng, Junhyuk Oh, and Satinder Singh. On learning intrinsic rewards for policy gradient methods. NeurIPS, 2018.
\ No newline at end of file
diff --git a/balancingconstraintsandrewardswithmetagradientd4pg/images.zip b/balancingconstraintsandrewardswithmetagradientd4pg/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a1d7b52cef12e0266a2acfa6c77a374b86f71cdb
--- /dev/null
+++ b/balancingconstraintsandrewardswithmetagradientd4pg/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c31e45df6731b78f7756e6740f150af3c55a12ef47f83b54e25b9e8e1e9f2c7e
+size 196422
diff --git a/balancingconstraintsandrewardswithmetagradientd4pg/layout.json b/balancingconstraintsandrewardswithmetagradientd4pg/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..53dc8fc495071fa256a94603f6cde9af52e888d9
--- /dev/null
+++ b/balancingconstraintsandrewardswithmetagradientd4pg/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6cf027039ea58b1b20aa08507b44f183765c805205ba5200e522f81332974916
+size 374595
diff --git a/batchreinforcementlearningthroughcontinuationmethod/e8a933dc-225e-4a31-8e23-de6a3dce1bdd_content_list.json b/batchreinforcementlearningthroughcontinuationmethod/e8a933dc-225e-4a31-8e23-de6a3dce1bdd_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..830b4ff487e0aa1a03aefa1b9f62925540d82efe
--- /dev/null
+++ b/batchreinforcementlearningthroughcontinuationmethod/e8a933dc-225e-4a31-8e23-de6a3dce1bdd_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:62fb88164f63fc0ffec7de05ef385e7c83184e954bce7d4746371359dfa99f68
+size 71536
diff --git a/batchreinforcementlearningthroughcontinuationmethod/e8a933dc-225e-4a31-8e23-de6a3dce1bdd_model.json b/batchreinforcementlearningthroughcontinuationmethod/e8a933dc-225e-4a31-8e23-de6a3dce1bdd_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..eb6203e5d00fc086a1c9d4c2f279e03e5326bd8a
--- /dev/null
+++ b/batchreinforcementlearningthroughcontinuationmethod/e8a933dc-225e-4a31-8e23-de6a3dce1bdd_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30083cacade4451dd3e02fa2a92a9e5a52f31b24bc804832d76dfae7a0745ba7
+size 84657
diff --git a/batchreinforcementlearningthroughcontinuationmethod/e8a933dc-225e-4a31-8e23-de6a3dce1bdd_origin.pdf b/batchreinforcementlearningthroughcontinuationmethod/e8a933dc-225e-4a31-8e23-de6a3dce1bdd_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7210100da1fd7917727d1f4116e51bb8b4285609
--- /dev/null
+++ b/batchreinforcementlearningthroughcontinuationmethod/e8a933dc-225e-4a31-8e23-de6a3dce1bdd_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:481325d5c477f8b7b0eede3a141699b69e9216266f1c4fcb614a7cd057340bcf
+size 2147426
diff --git a/batchreinforcementlearningthroughcontinuationmethod/full.md b/batchreinforcementlearningthroughcontinuationmethod/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a16ab2e42afc9f7bd14e81f2a2e8a80f94693c22
--- /dev/null
+++ b/batchreinforcementlearningthroughcontinuationmethod/full.md
@@ -0,0 +1,239 @@
+# BATCH REINFORCEMENT LEARNING THROUGH CONTINUATION METHOD
+
+Yijie Guo1 Shengyu Feng1 Nicolas Le Roux2 Ed Chi2 Honglak Lee1,2 Minmin Chen2
+
+1University of Michigan 2Google AI
+
+{guoyijie,shengyuf}@umich.edu {nlr,edchi,honglak,minminc}@google.com
+
+# ABSTRACT
+
+Many real-world applications of reinforcement learning (RL) require the agent to learn from a fixed set of trajectories, without collecting new interactions. Policy optimization under this setting is extremely challenging as: 1) the geometry of the objective function is hard to optimize efficiently; 2) the shift of data distributions causes high noise in the value estimation. In this work, we propose a simple yet effective policy iteration approach to batch RL using global optimization techniques known as continuation. By constraining the difference between the learned policy and the behavior policy that generates the fixed trajectories, and continuously relaxing the constraint, our method 1) helps the agent escape local optima; 2) reduces the error in policy evaluation in the optimization procedure. We present results on a variety of control tasks, game environments and a recommendation task to empirically demonstrate the efficacy of our proposed method.
+
+# 1 INTRODUCTION
+
+While RL is fundamentally an online learning paradigm, many practical applications of RL algorithms, e.g., recommender systems [5, 7] or autonomous driving [36], fall under the batch RL setup. Under this setting, the agent is asked to learn its policy from a fixed set of interactions collected by a different (and possibly unknown) policy commonly referred to as the behavior policy, without the flexibility to gather new interactions. Realizing that the interactive nature of online RL has been hindering its wider adoption, researchers have strived to bring these techniques offline [24, 11, 20, 23, 31, 12, 21, 2, 32, 8]. We focus on policy optimization under the batch RL setup. As pointed out in [3, 26], even with access to the exact gradient, the loss surface of the objective function maximizing the expected return is difficult to optimize, leading to slow convergence. Chen et al. [8] show that the objective function of expected return exhibits sub-optimal plateaus and exponentially many local optima in the worst case. The batch setup makes learning even harder as it adds large variance to the gradient estimate, especially when the learned policy differs from the behavior policy used to generate the fixed trajectories. Recent works propose to constrain the size of the policy update [27, 28] or the distance between the learned policy and the behavior policy [14, 21]. The strength of that constraint is a critical hyperparameter that can be hard to tune [28], as a loose constraint does not alleviate the distribution shift while a strict one results in conservative updates.
+
+Here we propose to address the challenges using continuation methods [35, 6, 17]. Continuation methods attempt to solve the global optimization problem by progressively solving a sequence of new objectives that can be optimized more efficiently and then trace back the solutions to the original one. We change the objective function of policy optimization by including an additional term penalizing the KL divergence between the parameterized policy $\pi_{\theta}$ and the behavior policy. We then gradually decrease the weight of that penalty, eventually converging to optimizing the expected return. With this additional constraint, we benefit from more accurate policy evaluation in the early stage of training as the target policy is constrained to be close to the behavior policy. As training continues, we relax the constraint and allow for more aggressive improvement over the behavior policy as long as the policy evaluation is still stable and relatively reliable, i.e. with a small enough variance. By doing so, the proposed method exhaustively exploits the information in the collected trajectories while avoiding the overestimation of state-action pairs that lack support.
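The continuation scheme described above can be sketched in a few lines. The geometric decay rule and function names below are illustrative assumptions, not the paper's exact schedule:

```python
# Sketch of the continuation method applied here: solve a sequence of
# KL-regularized objectives while the penalty weight tau decays toward 0,
# recovering the original expected-return objective in the limit.

def tau_schedule(tau_init=1.0, decay=0.5, num_stages=5):
    """Yield the KL penalty weight for each successive optimization stage."""
    tau = tau_init
    for _ in range(num_stages):
        yield tau
        tau *= decay

def kl_regularized_objective(expected_return, kl_to_behavior, tau):
    """J_tau(pi) = E[return] - tau * KL(pi || behavior policy beta)."""
    return expected_return - tau * kl_to_behavior
```

Early stages (large tau) keep the learned policy close to the behavior policy, where value estimates are reliable; later stages (small tau) permit more aggressive improvement over it.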
+
+The contributions of this paper are as follows: (1) We propose a soft policy iteration approach to batch RL through the continuation method. (2) We theoretically verify that in the tabular setting with exact gradients, maximizing KL regularized expected return leads to faster convergence than optimizing the expected return alone. Also, our method converges to the globally optimal policy if there are sufficient data samples for accurate value estimation. (3) We demonstrate the effectiveness of our method in reducing errors in value estimation using visualization; (4) We empirically verify the advantages of our method over existing batch RL methods on various complex tasks.
+
+# 2 RELATED WORK
+
+Batch Reinforcement Learning. Off-policy reinforcement learning has been extensively studied [11, 20, 30, 23, 31], with many works [12, 21, 2] focusing on variants of Q-learning. Fujimoto et al. [12], Kumar et al. [21] investigated the extrapolation error in batch RL resulting from the mismatch of state-action visitation distribution between the fixed dataset and the current policy, and proposed to address it by constraining the action distribution of the current policy from deviating much from the training dataset distribution. Recent works [29, 33] studied policy iteration under batch RL. The Q function is estimated in the policy evaluation step without special treatment while the policy updates are regularized to remain close to the prior policy with a fixed constraint. To further reduce uncertainty in Q learning, an ensemble of Q networks [21, 29] and distributional Q-function [2, 33] are introduced for the value estimation. [34, 18] use the KL divergence between the target policy and the behavior policy as a regularization term in the policy update and/or value estimation. The constraint is controlled by a fixed weight of the KL regularization or a fixed threshold for the KL divergence. While all of these works apply a fixed constraint determined by a sensitive hyperparameter to control the distance between the behavior/prior policy and the target policy, we focus on gradually relaxed constraints.
+
+Constrained Policy Updates. Several works [27, 1, 15] studied constrained policy updates in online settings. Kakade & Langford [19] show that large policy updates can be destructive, and propose a conservative policy iteration algorithm to find an approximately optimal policy. Schulman et al. [27] constrain the KL divergence between the old policy and new policy to guarantee policy improvement in each update. Grau-Moya et al. [15] force the policy to stay close to a learned prior distribution over actions, deriving a mutual-information regularization between state and action. Cheng et al. [9] propose to regularize in the function space. Again, these methods use a fixed constraint, whereas we are interested in continually relaxing the constraint so as to eventually maximize the expected return. Also, none of these methods has been extensively tested for batch RL with fixed training data.
+
+Continuation Method. The continuation method [35] is a global optimization technique. The main idea is to transform a nonlinear and highly non-convex objective function into a series of smoother, easier-to-optimize objective functions. The optimization procedure is successively applied to the new functions, which are progressively more complex and closer to the original problem, to trace their solutions back to the original objective function. Chapelle et al. [6] use the continuation method to optimize the objective function of semi-supervised SVMs and reach lower test error compared with algorithms directly minimizing the original objective. Hale et al. [17] apply the continuation method to $\ell_1$-regularized problems and demonstrate better performance for compressed sensing problems. Inspired by prior works, we employ the continuation method to transform the objective of batch RL problems by adding regularization. We gradually decrease the regularization weight to trace the solution back to the original problem.
+
+# 3 METHOD
+
+In classical RL, an agent interacts with the environment while updating its policy. At each step $t$ , the agent observes a state $s_t \in S$ , selects an action $a_t \in \mathcal{A}$ according to its policy to receive a reward $r_t = r(s_t, a_t) : S \times \mathcal{A} \to \mathbb{R}$ and transitions to the next state $s_{t+1} \sim \mathcal{P}(\cdot | s_t, a_t)$ . The state value of a policy $\pi$ at a state $s$ is $V^{\pi}(s) = \mathbb{E}_{s_0 = s, a_t \sim \pi(\cdot | s_t), s_{t+1} \sim \mathcal{P}(\cdot | s_t, a_t)} [\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)]$ . $\gamma \in [0,1]$ is the discounting factor. At each step, the agent updates the policy $\pi$ so that the expected return $V^{\pi}(\rho) = \mathbb{E}_{s \sim \rho} [V^{\pi}(s)]$ (where $\rho$ is the initial state distribution) is maximized.
+
+In batch RL, the agent is not allowed to interact with the environment during policy learning. Instead it has access to a fixed set of trajectories sampled from the environment according to a behavior policy. A trajectory $\{(s_0, a_0, r_0), (s_1, a_1, r_1), \dots, (s_T, a_T, r_T)\}$ is generated by sampling $s_0$ from the initial state distribution $\rho$ , sampling the action $a_t \sim \beta(\cdot | s_t)$ at the state $s_t$ and moving to $s_{t+1} \sim \mathcal{P}(\cdot | s_t, a_t)$ for each step $t \in [0, 1, \dots, T]$ . The length $T$ can vary among trajectories. We then convert the generated trajectories to a dataset $\mathcal{D} = \{(s_i, a_i, r_i, s_i')\}_{i=1}^N$ , where $s_i'$ is the next state after $s_i$ in a trajectory.
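The trajectory-to-dataset conversion above is mechanical; a minimal sketch (our naming, and we drop the final step of each trajectory since it has no observed successor, one of several common conventions):

```python
# Flatten trajectories [(s_0, a_0, r_0), ..., (s_T, a_T, r_T)] into a
# transition dataset D = {(s_i, a_i, r_i, s'_i)}, pairing each step with
# the state observed at the next step.

def trajectories_to_dataset(trajectories):
    dataset = []
    for traj in trajectories:
        for (s, a, r), (s_next, _, _) in zip(traj, traj[1:]):
            dataset.append((s, a, r, s_next))
    return dataset
```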
+
+The goal of batch RL is to learn a parameterized policy $\pi_{\theta}$ with the provided dataset to maximize the expected return $V^{\pi}(\rho)$ . In Sec. 3.1, we will first introduce a new objective function $\tilde{V}^{\pi,\tau}(\rho)$ , i.e. the expected return of policy $\pi$ with a KL regularization term and regularization weight $\tau$ . With exact gradients, $\tilde{V}^{\pi,\tau}(\rho)$ can be optimized more efficiently than the original objective $V^{\pi}(\rho)$ . With the continuation method, solving a sequence of optimization problems for $\tilde{V}^{\pi,\tau}(\rho)$ with a decaying value of $\tau$ converges toward optimizing $V^{\pi}(\rho)$ and makes the optimization easier. In Sec. 3.2, we derive soft policy iteration with KL regularization to optimize $\tilde{V}^{\pi,\tau}(\rho)$ , without the assumption of exact gradients. Finally, in Sec. 3.3, we propose a practical batch RL algorithm with value estimation for the target policy based on this theory.
+
+# 3.1 OPTIMIZING EXPECTED RETURN WITH KL REGULARIZATION
+
+In batch RL, the distribution of the trajectories generated by the behavior policy can be very different from that of the learned policy. We thus restrict the learned policy to stay close to the behavior policy via the regularization of KL divergence. Define the soft state value of a policy $\pi$ at a state $s$ as
+
+$$
+\tilde {V} ^ {\pi , \tau} (s) = \mathbb {E} _ {s _ {0} = s, a _ {t} \sim \pi (\cdot | s _ {t}), s _ {t + 1} \sim \mathcal {P} (\cdot | s _ {t}, a _ {t})} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left(r \left(s _ {t}, a _ {t}\right) - \tau \log \frac {\pi \left(a _ {t} \mid s _ {t}\right)}{\beta \left(a _ {t} \mid s _ {t}\right)}\right) \right], \tag {1}
+$$
+
+where the temperature parameter $\tau$ controls the deviation from $\beta$ . The new objective function becomes $\tilde{V}^{\pi, \tau}(\rho) = \mathbb{E}_{s \sim \rho}[\tilde{V}^{\pi, \tau}(s)]$ . This KL regularized objective differs from the original objective $V^{\pi}(\rho)$ , which however can be recovered as $\tau \to 0$ .
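Eq. (1) can be estimated by Monte Carlo along rollouts of $\pi$. A minimal sketch (our naming; `steps` holds per-step tuples of reward, $\pi(a_t|s_t)$ and $\beta(a_t|s_t)$):

```python
import math

# Monte-Carlo estimate of the soft value in Eq. (1): each per-step reward
# is penalized by tau * log(pi(a|s) / beta(a|s)), then discounted.

def soft_return(steps, tau, gamma=0.99):
    total = 0.0
    for t, (r, pi_prob, beta_prob) in enumerate(steps):
        total += gamma ** t * (r - tau * math.log(pi_prob / beta_prob))
    return total
```

Setting `tau = 0` (or letting $\pi = \beta$ everywhere) recovers the ordinary discounted return, matching the claim that $V^{\pi}(\rho)$ is recovered as $\tau \to 0$.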
+
+As pointed out in [3], even with exact gradients, the objective function $V^{\pi}(\rho)$ is difficult to optimize due to its highly non-smooth landscape. Mei et al. [26] further prove that, in a tabular setting with a softmax parameterized policy and exact gradients, the vanilla policy gradient method (i.e. directly updating the parameters of policy $\pi$ to maximize $V^{\pi}(\rho)$ with gradient descent) converges to the globally optimal policy at rate $\mathcal{O}(1 / t)$ , while entropy-regularized policy gradient enjoys a significantly faster linear convergence rate $\mathcal{O}(e^{-t})$ . Motivated by this line of work, we investigate the convergence rate of optimizing $\tilde{V}^{\pi,\tau}(\rho)$ with exact gradient descent and compare it with the vanilla policy gradient method. We study the smoothness and the Łojasiewicz inequality of the function $\tilde{V}^{\pi,\tau}(\rho)$ to prove the convergence rate, similar to [26]. Detailed proofs of all following theorems are provided in the appendix.
+
+Theorem 1. In the tabular setting with a softmax parameterized policy $\pi_{\theta}$ , maximizing $\tilde{V}^{\pi,\tau}(\rho)$ using policy gradient with learning rate $\eta = \frac{(1 - \gamma)^3}{8M + \tau(4 + 8\log A)}$ , for all $t > 1$ we have
+
+$$
+\tilde {V} ^ {\pi_ {\tau} ^ {*}, \tau} (\rho) - \tilde {V} ^ {\pi_ {\theta_ {t}}, \tau} (\rho) \leq C \cdot e ^ {- C _ {\tau} (t - 1)} \cdot \frac {M + \tau \log A}{(1 - \gamma) ^ {2}}
+$$
+
+where $\pi_{\tau}^{*}$ is the optimal policy maximizing $\tilde{V}^{\pi, \tau}(\rho)$ , $M$ bounds the absolute value of $r(s, a) + \tau \log \beta(a|s)$ , $A$ is the size of the action space, $S$ is the size of the state space, $C_{\tau} \propto \frac{(1 - \gamma)^4}{(8M / \tau + 4 + 8\log A)\cdot S}$ , and $C$ is a constant independent of $t$ and $\tau$ .
+
+Theorem 1 states that the KL regularized expected return $\tilde{V}^{\pi, \tau}(\rho)$ can be optimized at a convergence rate of $\mathcal{O}(e^{-t})$ rather than the $\mathcal{O}(1/t)$ rate of vanilla policy gradient on the expected return alone. The faster convergence inspires us to optimize $\tilde{V}^{\pi, \tau}(\rho)$ to reach the policy $\pi_{\tau}^{*}$ , then use $\pi_{\tau}^{*}$ as initialization, gradually decrease the temperature $\tau$ toward 0, and eventually move from $\pi_{\tau}^{*}$ to $\pi^{*} = \arg \max_{\pi} V^{\pi}(\rho)$ . With a reasonable value of $\tau$ , we enjoy a linear convergence rate toward $\pi_{\tau}^{*}$ from a randomly initialized policy $\pi_{\theta}$ . As $\tau$ decreases, $\pi_{\tau}^{*}$ gets closer to $\pi^{*}$ . The final optimization of $V^{\pi_{\theta}}(\rho)$ from $\pi_{\tau}^{*}$ can thus be much faster than from a randomly initialized $\pi_{\theta}$ .
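In a one-state (bandit) problem this continuation path can be made explicit: the KL-regularized optimum has the closed form $\pi^{*}_{\tau}(a) \propto \beta(a)\exp(r(a)/\tau)$, which interpolates from $\beta$ (large $\tau$) to the greedy policy ($\tau \to 0$). A small illustrative sketch; the reward and behavior-policy values below are made up:

```python
import numpy as np

def pi_star(r, beta, tau):
    # Closed-form KL-regularized optimum for a bandit; subtract max(r) for stability.
    w = beta * np.exp((r - r.max()) / tau)
    return w / w.sum()

r = np.array([0.0, 0.9, 1.0])        # two positive rewards, as in the grid-world example
beta = np.array([0.6, 0.3, 0.1])     # behavior policy favoring a zero-reward action
for tau in (10.0, 1.0, 0.1, 0.01):   # decaying temperature schedule
    print(tau, np.round(pi_star(r, beta, tau), 3))
```

As $\tau$ decays, probability mass moves smoothly from $\beta$ onto the reward-maximizing action, mirroring the path from $\pi^{*}_{\tau}$ to $\pi^{*}$.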
+
+We construct a toy example to illustrate this motivation. In the grid world (Fig. 1a), the start state, annotated with 'S', is in the center, and the terminal states are marked in yellow. Only two states carry positive rewards (0.9 and 1). There are four actions {up, down, left, right}. A badly initialized policy $\pi_{\theta_0}$ is shown as arrows in Fig. 1a. The initialization yields a poor policy with a high tendency to go right toward a terminal state with zero reward. The vanilla policy gradient method (i.e. maximizing $V^{\pi}(\rho)$ with the true gradient) starting from this initial point takes more than 7000 iterations to escape a sub-optimal solution (Fig. 1b). In contrast, we escape the sub-optimal solution much faster when applying the continuation method to update the policy. In this example, we use a behavior policy $\beta (\cdot |s) = [u_{1},u_{2},u_{3},u_{4}]$ with $u_{i}$ , $i = 1,\dots,4$ , normalized for each state $s$ .
+
+Figure 1: (a) A grid world with sparse rewards. (b) Learning curve of the value of the learned policy $\pi_{\theta_i}$ . We conduct a hyper-parameter search over the learning rates {5, 1, 0.5, 0.1, 0.05, 0.01, 0.005, 0.001} and report the best performance for each method.
+
+Algorithm 1 Soft Policy Iteration through Continuation Method
+1: Initialize: actor network $\pi_{\theta}$ , ensemble critic network $\{Q_{\phi^{(1)}}, Q_{\phi^{(2)}}, \dots, Q_{\phi^{(K)}}\}$ , behavior policy network $\beta_{\psi}$ , penalty coefficient $\tau$ , decay rate $\lambda$ , number of iterations $I$ for each $\tau$
+2: Input: training dataset $D = \{(s_i, a_i, r_i, s_i')\}_{i=0}^N$
+3: for update $j = 0, 1, \dots$ do
+4: Sample a batch of data $\{(s_i, a_i, r_i, s_i')\}_{i=1}^B$ from $D$
+5: # Learn the behavior policy with behavior cloning objective
+6: Update $\psi$ to maximize $\frac{1}{B} \sum_{i=1}^B \log \beta_{\psi}(a_i | s_i)$
+7: # Train the critic network
+8: Update $\phi^{(k)}$ to minimize the temporal difference error $\frac{1}{B} \sum_{i=1}^B \left( r_i + \gamma V(s_i') - Q_{\phi^{(k)}}(s_i, a_i) \right)^2$
+9: where $V(s) = \frac{1}{K} \sum_{k=1}^K \mathbb{E}_{a \sim \pi_{\theta}(\cdot | s)} (Q_{\phi^{(k)}}(s, a)) - \tau KL(\pi_{\theta}(\cdot | s) | \beta_{\psi}(\cdot | s))$
+10: # Train the actor network
+11: Update $\theta$ to maximize $\frac{1}{B} \sum_{i=1}^B \left[ \frac{1}{K} \sum_{k=1}^K \mathbb{E}_{a \sim \pi_{\theta}(\cdot | s_i)} (Q_{\phi^{(k)}}(s_i, a)) - \tau KL(\pi_{\theta}(\cdot | s_i) | \beta_{\psi}(\cdot | s_i)) \right]$
+12: # Decay the weight of KL regularization $\tau$ for every $I$ updates
+13: if $j \mod I = 0$ then
+14: $\tau \gets \tau * \lambda$
+15: end if
+16: end for
+
+In Fig. 1b, as we decrease $\tau$ , the value of the learned policy $\pi_{\theta_i}$ at each iteration $i$ quickly converges to the optimal value. In other words, optimizing a sequence of objective functions $\tilde{V}^{\pi ,\tau}(\rho)$ reaches the optimal solution for $V^{\pi}(\rho)$ significantly faster.
+
+# 3.2 SOFT POLICY ITERATION WITH KL REGULARIZATION
+
+As explained in the previous section, we focus on the new objective function $\tilde{V}^{\pi,\tau}(\rho)$ , which can be optimized more efficiently, and use the continuation method to relax toward optimizing $V^{\pi}(\rho)$ . Batch RL adds the complexity of estimating the gradient of $\tilde{V}^{\pi,\tau}(\rho)$ with respect to $\pi$ from a fixed set of trajectories. We propose to adapt soft actor-critic [16], a general algorithm for learning optimal maximum-entropy policies, to batch RL for our use case. We replace the entropy regularization with KL regularization and derive a soft policy iteration that learns the KL regularized optimal policy. For a policy $\pi$ and temperature $\tau$ , the soft state value is defined in Eq. 1 and the soft Q function is defined as:
+
+$$
+\tilde {Q} ^ {\pi , \tau} (s, a) = r (s, a) + \gamma \mathbb {E} _ {s ^ {\prime} \sim \mathcal {P} (\cdot | s, a)} \tilde {V} ^ {\pi , \tau} \left(s ^ {\prime}\right) \tag {2}
+$$
+
+In the soft policy evaluation step, we compute the value of a policy $\pi$ under the KL regularized objective $\tilde{V}^{\pi,\tau}(\rho) = \mathbb{E}_{s\sim \rho}[\tilde{V}^{\pi,\tau}(s)]$ . According to Lemma 1 in the Appendix, the soft Q value can be computed by repeatedly applying the soft Bellman backup operator
+
+$$
+\mathcal {T} ^ {\pi , \tau} Q (s, a) = r (s, a) + \gamma \mathbb {E} _ {s ^ {\prime} \sim \mathcal {P} (\cdot | s, a)} \left[ V \left(s ^ {\prime}\right) \right], \quad \text {where} \quad V (s) = \mathbb {E} _ {a \sim \pi (\cdot | s)} \left[ Q (s, a) - \tau \log \frac {\pi (a | s)}{\beta (a | s)} \right].
+$$
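A tabular sketch of one application of $\mathcal{T}^{\pi,\tau}$ follows; the array shapes are our own assumptions. By Lemma 1, iterating this backup converges to $\tilde{Q}^{\pi,\tau}$:

```python
import numpy as np

def soft_backup(Q, P, r, pi, beta, tau, gamma):
    """One soft Bellman backup T^{pi,tau}. Q, r: (S,A); P: (S,A,S); pi, beta: (S,A)."""
    # Soft state value: expected Q minus tau-weighted log-ratio under pi.
    V = np.sum(pi * (Q - tau * np.log(pi / beta)), axis=1)
    # Bootstrapped target: reward plus discounted expected next-state soft value.
    return r + gamma * np.einsum('sat,t->sa', P, V)
```

Since the backup is a $\gamma$-contraction, repeated application from any initial Q converges to a unique fixed point.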
+
+In the step of policy improvement, we maximize the expected return based on Q-value evaluation with the KL divergence regularization. The following policy update can be guaranteed to result in an improved policy in terms of its soft value (Lemma 2 in Appendix).
+
+$$
+\pi_ {n e w} (\cdot | s) = \arg \max _ {\pi \in \Pi} \left[ \mathbb {E} _ {a \sim \pi (\cdot | s)} \left(\tilde {Q} ^ {\pi_ {o l d}, \tau} (s, a)\right) - \tau K L (\pi (\cdot | s) | \beta (\cdot | s)) \right] \tag {3}
+$$
+
+$$
+\text {where} \qquad K L (\pi (\cdot | s) | \beta (\cdot | s)) = \mathbb {E} _ {a \sim \pi (\cdot | s)} \left[ \log \frac {\pi (a | s)}{\beta (a | s)} \right]
+$$
+
+The soft policy iteration algorithm alternates between soft policy evaluation and soft policy improvement, and it provably converges to the optimal policy maximizing the objective $\tilde{V}^{\pi ,\tau}(\rho)$ .
+
+Theorem 2. Repeated application of soft policy evaluation and soft policy improvement converges to a policy $\pi_{\tau}^{*}$ such that $\tilde{Q}^{\pi_{\tau}^{*},\tau}(s,a)\geq \tilde{Q}^{\pi ,\tau}(s,a)$ for any $\pi \in \Pi$ and $(s,a)\in S\times \mathcal{A}$ .
+
+The soft policy iteration finds a policy $\pi_{\tau}^{*}$ with the optimal soft Q value for every state-action pair and hence attains the optimal value of $\tilde{V}^{\pi, \tau}(\rho)$ . We propose to use soft policy iteration to solve the objectives $\tilde{V}^{\pi, \tau}(\rho)$ with decreasing values of $\tau$ , moving back to the objective $V^{\pi}(\rho)$ as $\tau \to 0$ . The method is guaranteed to asymptotically converge to the optimal policy $\pi^{*}$ for the objective $V^{\pi}(\rho)$ .
+
+
+Figure 2: Visualization of the error in soft Q value estimation and the quality of the learned policy. In the first four columns, triangles represent the error for actions moving in different directions; darker color indicates higher error. To show the learned policy $\pi_{\theta_{1000}}$ , the length of each arrow represents the probability of taking that action in each state. We run $\pi_{\theta_{1000}}$ in the grid world and visualize the visitation counts in the last column (heatmap); darker color means more visitation.
+
+Theorem 3. Let $\pi_{\tau}^{*}(a|s)$ be the optimal policy from soft policy iteration with fixed temperature $\tau$ . We have $\pi_{\tau}^{*}(a|s) \propto \exp \left(\frac{\tilde{Q}^{\pi_{\tau}^{*},\tau}(s,a)}{\tau}\right)\beta(a|s)$ . As $\tau \to 0$ , $\pi_{\tau}^{*}(a|s)$ will take the optimal action $a^{*}$ with optimal $Q$ value for state $s$ .
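Theorem 3's closed form makes the improvement step of Eq. 3 explicit for discrete actions. A minimal sketch under our own array conventions:

```python
import numpy as np

def improved_policy(Q, beta, tau):
    """pi_new(a|s) proportional to beta(a|s) * exp(Q(s,a)/tau); Q, beta: (S,A)."""
    # Subtract the per-state max Q before exponentiating for numerical stability.
    w = beta * np.exp((Q - Q.max(axis=1, keepdims=True)) / tau)
    return w / w.sum(axis=1, keepdims=True)
```

For small $\tau$ the result concentrates on $\arg\max_a Q(s,a)$ wherever $\beta$ gives it nonzero mass; for large $\tau$ it stays near $\beta$. This is exactly the trade-off the continuation method traverses.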
+
+# 3.3 ERROR IN VALUE ESTIMATE
+
+In the previous section, we showed that soft policy iteration with the continuation method provably converges to the globally optimal policy maximizing expected return. However, in batch RL with a fixed dataset and limited samples, we cannot perform soft policy iteration with KL regularization in its exact form. Specifically, in the policy evaluation step, when the learned policy $\pi$ deviates from the behavior policy $\beta$ and chooses state-action pairs $(s,a)$ rarely visited by $\beta$ , the estimate of the target $r(s,a) + \gamma \mathbb{E}_{s' \sim \mathcal{P}(\cdot | s,a)}(V(s'))$ can be very noisy. The error in the value estimate $Q(s,a)$ is further propagated to other state-action pairs through the Bellman update. Finally, inaccurate value estimation causes errors in the policy improvement step, resulting in a worse policy. On the other hand, if we constrain the learned policy $\pi$ to be very close to the behavior policy $\beta$ , we can expect the policy evaluation to be reliable and can safely update the learned policy. The tight constraint, however, prevents $\pi$ from being much better than $\beta$ due to the conservative update.
+
+On the grid world, we study this problem of value estimation with different values of $\tau$ . Figure 2 visualizes the propagation of Q value estimation errors and the learned policies. We assume a mediocre behavior policy that tends to move left and down. For the rarely visited states in the upper right part of the grid, there are errors in the estimation of $\tilde{Q}^{\pi,\tau}(s,a)$ , i.e. $|Q(s,a) - \tilde{Q}^{\pi,\tau}(s,a)| > 0$ , where $Q(s,a)$ is the Q value we learn during training and $\tilde{Q}^{\pi,\tau}(s,a)$ is the ground-truth soft Q value. Because the bad initial policy (Fig. 1a) tends to move toward the right part, without a strong KL regularization the policy evaluation can be problematic due to the value estimation errors in the right part of the grid world. In Fig. 2, with a small KL regularization weight $\tau = 0.001$ , the first row shows that errors even propagate to the states frequently visited by the behavior policy. On the other hand, when we set a large value of $\tau = 1$ (second row), the error $|Q(s,a) - \tilde{Q}^{\pi,\tau}(s,a)|$ is smaller, yet the learned policy is not much better than the behavior policy. Our continuation method gradually moves the policy update between these two extremes. The value estimation benefits from the gradually relaxed KL regularization, and the errors remain small. The last column of Fig. 2 visualizes the learned policy under these methods. With constant $\tau = 0.001$ , the wrong value estimates in some states mislead the agent: it fails to visit any terminal state and gets stuck at the state in dark orange. With constant $\tau = 1$ , the tight KL-divergence constraint keeps the learned policy close to the behavior policy, mostly visiting the bottom left part of the environment. With the continuation method, the agent learns to always take the optimal path moving directly left and obtains the highest expected return. More details of this example are provided in the appendix.
+
+In the toy example, gradually relaxing the KL regularization toward zero alleviates the propagation of errors in the soft Q estimate and helps the agent converge to the optimal policy. In more complicated domains, we find that as $\tau$ decays close to 0, the policy evaluation is still erroneous. To mitigate this issue, we introduce an ensemble of critic networks $\{Q_{\phi^{(1)}},Q_{\phi^{(2)}},\dots ,Q_{\phi^{(K)}}\}$ to approximate the soft Q value and monitor the variance of the value estimates across critic networks to measure uncertainty. Given a batch of data samples $\{s_i\}_{i = 1}^B\subset \mathcal{D}$ , $var(Q^{\pi}) = \frac{1}{B}\sum_{i = 1}^{B}\mathbb{E}_{a\sim \pi (\cdot |s_i)}var(Q_{\phi^{(1)}}(s_i,a),Q_{\phi^{(2)}}(s_i,a),\dots ,Q_{\phi^{(K)}}(s_i,a))$ indicates whether the current policy $\pi$ tends to take actions with highly noisy value estimates.
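The uncertainty measure $var(Q^{\pi})$ can be sketched as follows. The tensor shapes are our assumptions; for discrete actions the expectation over $a \sim \pi$ is computed exactly:

```python
import numpy as np

def q_ensemble_variance(q_values, pi_probs):
    """q_values: (K, B, A) critic-ensemble estimates; pi_probs: (B, A) current policy.
    Returns the batch-averaged, policy-weighted across-critic variance."""
    per_action_var = np.var(q_values, axis=0)                 # (B, A) critic disagreement
    return float(np.mean(np.sum(pi_probs * per_action_var, axis=1)))
```

The measure is zero when all critics agree and grows when the policy places mass on actions whose value the ensemble disagrees on.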
+
+Our method is summarized in Algorithm 1. Instead of running the soft policy evaluation and policy improvement until convergence, we alternate between optimizing the critic network and the actor network with stochastic gradient descent. We set $\tau$ to a large value initially and let the KL divergence term dominate the objective, thus performing behavior cloning. We record a moving average of the Q value estimation variance $var(Q^{\pi ,\tau_0})$ over 1000 updates at the end of this phase. After that, we decay the temperature gradually with $\lambda = 0.9$ every $I$ steps. When the moving average of the Q value estimation variance $var(Q^{\pi ,\tau})$ becomes large compared with the initial value $var(Q^{\pi ,\tau_0})$ (i.e. at the end of behavior cloning), we no longer trust the value estimate under the current temperature $\tau$ and take the policy checkpointed before the temperature decayed to this $\tau$ as our solution.
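The variance-based stopping rule can be sketched as below. The `ratio` threshold is our own assumption; the text only says the variance becomes "large compared with the initial value":

```python
def select_checkpoint(var_history, checkpoints, var0, ratio=2.0):
    """Walk the tau schedule and return the last checkpoint taken before the
    moving-average Q variance exceeds ratio * var0 (the variance recorded at
    the end of behavior cloning). ratio is a hypothetical tolerance."""
    best = checkpoints[0]
    for var, ckpt in zip(var_history, checkpoints):
        if var > ratio * var0:
            break                 # stop trusting the value estimate at this tau
        best = ckpt
    return best
```

For example, with variance history `[1.0, 1.1, 1.3, 5.0]` and `var0 = 1.0`, the rule returns the third checkpoint, taken just before the variance blows up.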
+
+# 4 EXPERIMENTS
+
+# 4.1 MUJOCO
+
+We evaluate our method against several baselines on continuous control tasks. We train a Proximal Policy Optimization agent [28] with entropy regularization for 1000 million steps in the environments. We parameterize the policy using Gaussian policies where the mean is a linear function of the agent's state, $\theta^T s$ , and the variance is an identity matrix, keeping the policy simple as introduced in [3]. To generate training datasets $\mathcal{D}$ of varying quality, we construct the behavior policy by mixing, with weight $\alpha$ , the well-trained policy $\mathcal{N}(\theta_{opt}^T s, 0.5\mathbb{I})$ , i.e. the checkpoint with the highest score during training, and a poor policy $\mathcal{N}(\theta_0^T s, 0.5\mathbb{I})$ , i.e. the checkpoint at the beginning of training. The behavior policy $\beta(\cdot|s)$ is then $\mathcal{N}(((1 - \alpha)\theta_{opt} + \alpha \theta_0)^T s, 0.5\mathbb{I})$ . We generate trajectories and store a total of one million data samples from the mixed behavior policy for different values of the coefficient $\alpha$ .
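The dataset construction can be sketched as follows; the dimensions and names are our assumptions, and only the mixing rule comes from the text:

```python
import numpy as np

def behavior_action(s, theta_opt, theta_0, alpha, rng):
    """Sample an action from N(((1 - alpha) * theta_opt + alpha * theta_0)^T s, 0.5 I)."""
    theta = (1.0 - alpha) * theta_opt + alpha * theta_0   # mix good and poor policies
    mean = theta.T @ s                                     # linear-Gaussian mean
    return mean + rng.normal(scale=np.sqrt(0.5), size=mean.shape)
```

Here $\alpha = 0$ reproduces the well-trained policy's mean and $\alpha = 1$ the poor policy's, so sweeping $\alpha \in \{0.2, 0.4, 0.6, 0.8\}$ yields datasets of decreasing quality.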
+
+The architecture of the target policy is the same as that of the behavior policy. We consider six baseline approaches: BCQ [14], BEAR [21], ABM+SVG [29], CRR [33], CQL [22], and BRAC [34]. For a fair comparison, the architectures of the ensemble critic network and the policy network are the same in the baselines and our method, except for BCQ, which has no policy network. To evaluate and compare the methods, we run the learned policy in the environments for 100 episodes and report the average episode reward in Fig. 3. For the continuation method, we report the score of the policy last checkpointed with a reasonable value-estimation variance, as explained in Section 3.3. For the baselines, we report the score of the final policy when we terminate training at 1.5M updates.
+
+| Env | α | BCQ | BEAR | ABM | CRR | CQL | BRAC | Ours |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Hopper | 0.2 | 1908.0 ±327.0 | 1661.2 ±163.3 | 1814.2 ±176.9 | 1861.5 ±112.8 | 1962.6 ±194.1 | 2375.4 ±181.3 | 2085.8 ±204.8 |
+| Hopper | 0.4 | 876.0 ±462.2 | 1028.9 ±49.0 | 1113.0 ±201.3 | 1281.0 ±218.2 | 1262.0 ±116.7 | 991.0 ±39.5 | 1463.7 ±195.0 |
+| Hopper | 0.6 | 429.9 ±198.1 | 898.6 ±28.2 | 1402.6 ±350.5 | 791.1 ±87.0 | 609.1 ±35.1 | 573.7 ±41.6 | 1524.3 ±511.4 |
+| Hopper | 0.8 | 450.3 ±174.2 | 3.3 ±4.3 | 3.6 ±0.0 | 304.2 ±36.3 | 295.7 ±41.9 | 83.6 ±113.1 | 234.8 ±26.4 |
+| Half Cheetah | 0.2 | 1765.8 ±41.2 | 2143.4 ±18.3 | 2168.3 ±26.1 | 2154.1 ±6.6 | 2105.7 ±37.2 | 1649.3 ±64.3 | 2149.3 ±32.9 |
+| Half Cheetah | 0.4 | 1336.4 ±35.6 | 1809.3 ±18.1 | 1914.7 ±13.2 | 1860.1 ±38.2 | 1811.9 ±60.7 | 1744.5 ±15.1 | 1839.9 ±16.9 |
+| Half Cheetah | 0.6 | 513.6 ±35.6 | 1149.9 ±39.3 | 1457.9 ±36.4 | 1161.7 ±11.7 | 1156.6 ±43.3 | 1245.1 ±67.2 | 1524.8 ±37.6 |
+| Half Cheetah | 0.8 | 179.0 ±14.1 | 700.6 ±11.4 | 600.4 ±8.6 | 572.8 ±14.3 | 605.8 ±44.5 | 499.1 ±11.0 | 595.4 ±19.1 |
+| Walker | 0.2 | 1400.2 ±30.9 | 1414.6 ±19.4 | 1405.2 ±57.3 | 1442.9 ±19.5 | 1385.2 ±73.4 | 437.5 ±638.9 | 1467.3 ±45.2 |
+| Walker | 0.4 | 965.9 ±69.0 | 1179.5 ±21.2 | 1259.3 ±27.7 | 1233.5 ±35.8 | 979.6 ±83.5 | 1311.5 ±29.7 | 1223.5 ±15.7 |
+| Walker | 0.6 | 266.3 ±56.9 | 486.3 ±43.3 | 664.9 ±37.5 | 529.9 ±46.8 | 218.8 ±16.9 | 559.1 ±37.6 | 872.9 ±228.7 |
+| Walker | 0.8 | 3.5 ±5.0 | 10.9 ±0.9 | 5.9 ±0.6 | 10.7 ±0.3 | 3.3 ±6.5 | 5.6 ±0.9 | 7.0 ±6.3 |
+
+Table 1: Results on Mujoco. We show the average and standard deviation of the scores in 5 independent runs.
+Tab. 1 shows that our method outperforms all the baselines on 5 of the 12 settings. On datasets of relatively reasonable quality (i.e. $\alpha = 0.2, 0.4, 0.6$ ), ours performs comparably to or better than the baselines. With $\alpha = 0.2$ , i.e., a close-to-optimal behavior policy, all the methods perform similarly and
+
+
+Figure 3: Learning curves of average reward over 5 runs on Mujoco tasks with $\alpha = 0.6$ . The shaded area with light color indicates the standard deviation of the reward. The gray vertical lines on the yellow curves indicate where we take the checkpointed policy, according to the measure of Q value variance, as our final solution.
+
+
+
+
+
+| Method | Amidar | Asterix | Breakout | Enduro | MsPacman | Qbert | Seaquest | SpaceInvaders |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BCQ | 154.4 ±11.0 | 2466.7 ±273.4 | 203.8 ±19.6 | 604.7 ±39.7 | 2299.7 ±150.2 | 4088.3 ±332.8 | 4420.0 ±548.7 | 726.5 ±58.7 |
+| REM | 32.0 ±3.5 | 741.4 ±236.7 | 3.1 ±0.0 | 244.2 ±19.8 | 1997.4 ±6.4 | 2062.9 ±511.5 | 474.0 ±61.9 | 678.4 ±41.3 |
+| CQL | 145.0 ±1.6 | 2618.7 ±102.1 | 253.7 ±14.4 | 206.6 ±5.8 | 2234.6 ±203.2 | 4094.7 ±74.1 | 4652.6 ±2017.1 | 493.5 ±11.4 |
+| Ours | 174.5 ±7.1 | 3476.7 ±229.0 | 199.0 ±32.0 | 922.9 ±31.7 | 2494.0 ±301.3 | 4732.5 ±172.5 | 9935.0 ±1175.9 | 1070.3 ±137.1 |
+
+Table 2: Results on Atari, the mean and standard deviation of scores achieved in 3 independent runs.
+
+one can achieve a good return by simply cloning the behavior policy. With $\alpha = 0.8$ , i.e., a low-quality behavior policy, there are few good trajectories in the dataset for any method to learn from. The advantage of our method is most obvious when $\alpha = 0.6$ (Fig. 3), as the dataset contains trajectories with both high and low cumulative rewards. Our method can learn from the relatively large number of good trajectories while deviating from the behavior policy to avoid the bad trajectories and achieve higher rewards. In Fig. 3, 'Constant' denotes optimizing the KL regularized expected reward with a constant value of $\tau$ ; we search several values of $\tau$ and report the best result. We can see that gradually relaxing the constraint performs better than a fixed constraint. In Fig. 3 (left), as $\tau$ decays close to 0, the learned policy can degrade due to errors in the Q estimates; the stopping condition explained in Section 3.3, however, identifies a good policy before the degenerating point. More experimental details are in the Appendix.
+
+# 4.2 ATARI
+
+We further study our method on several Atari games from the Arcade Learning Environment (ALE) [4]. The rich observation space requires more complicated policies and makes policy optimization even more challenging. We focus on eight games and generate the datasets as discussed in Fujimoto et al. [13]. We use a mediocre DQN agent trained online for 10 million timesteps (40 million frames); its performance is shown as 'Online DQN' in Fig. 4. We add exploratory noise to the DQN agent (at 10 million timesteps) to gather a new set of 10 million transitions, similar to [13]. The line 'Behavior' in Fig. 4 shows the average trajectory reward in the dataset $\mathcal{D}$ , which is used to train each offline RL agent. We compare with BCQ [13], REM [2], and CQL [22], recently proposed offline RL algorithms that work well on the Atari domain. For evaluation, we run 10 episodes of the Atari games with the learned policies and record the average episode reward (Fig. 4). Tab. 2 summarizes the performance of BCQ, REM, and CQL after 6M updates. For our method, we report the score before the variance of the Q estimates becomes too high.
+
+Our approach achieves higher scores than the baselines on 7 out of 8 games and performs comparably on the remaining one. Agarwal et al. [2] report that REM performs well on a dataset consisting of the entire replay experience collected during the online training of a DQN agent for 50M timesteps (200M frames). We hypothesize that learning on the entire replay experience makes the setup easier, as the training dataset contains more exploratory and higher-quality trajectories. With a dataset of much smaller size and worse quality, REM performs poorly in this single-behavior-policy setting. We use the same critic-network architecture, an ensemble of 4 Q networks, for both our method and BCQ. As mentioned in [13], BCQ only matches the performance of the online DQN on most games. In contrast, ours outperforms the online DQN significantly on several games. As presented in [22], CQL performs better than REM on the Atari dataset, while our method outperforms CQL on 7 out of 8 datasets.
+
+
+
+
+
+
+
+
+
+
+Figure 4: Learning curves of average reward over 3 runs on Atari games.
+
+
+
+
+
+
+
+| Method | avg. reward 0.49 (30,000) | avg. reward 0.49 (60,000) | avg. reward 0.63 (30,000) | avg. reward 0.63 (60,000) | avg. reward 0.81 (30,000) | avg. reward 0.81 (60,000) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Cross-Entropy | 74.3 ±0.01 | 73.5 ±0.01 | 82.0 ±0.01 | 81.3 ±0.01 | 86.7 ±0.00 | 85.6 ±0.00 |
+| IPS | 80.5 ±0.03 | 82.1 ±0.02 | 86.4 ±0.02 | 88.0 ±0.01 | 87.2 ±0.02 | 90.0 ±0.01 |
+| Ours | 83.0 ±0.01 | 85.8 ±0.00 | 88.7 ±0.01 | 90.1 ±0.00 | 89.4 ±0.00 | 90.5 ±0.00 |
+
+Table 3: Results on the MovieLens dataset. Column headers give the dataset's average reward and sample size. The number is precision at 10, i.e., the percentage of recommended movies receiving positive feedback when we use the well-trained policy to recommend the top-10 movies to the test users. We report the mean and standard deviation of the precision over 20 independent runs.
+
+# 4.3 RECOMMENDER
+
+We also showcase our proposed method by building a softmax recommender agent. We use the publicly available MovieLens-1M dataset, a popular benchmark for recommender systems. It contains 1 million ratings of 3,900 movies (with title and genre features) from 6,040 users (with demographic features). The problem of recommending movies to each user can be cast as a contextual bandit problem, where we aim to learn a target policy $\pi_{\theta}(a|s)$ that selects the proper action (movie) $a$ for each state (user) $s$ to obtain a high reward (rating) $r$ in a single step. The 5-point ratings are converted to binary rewards using a cutoff of 4. To evaluate whether a learned target policy works well, ideally we would run it in real recommendation environments; however, such environments for online testing are rarely publicly available, so we use online simulation. We train a simulator to predict the immediate binary feedback from user and movie features; the well-trained simulator serves as a proxy for the real online environment, because it outputs the feedback for any user-movie pair. Similar to [25], we train the simulator with all records of logged feedback in the MovieLens-1M dataset. The behavior policy is trained with partial data from MovieLens-1M. We then construct bandit datasets $\mathcal{D} = \{s_i,a_i,r_i,\beta (a_i|s_i)\}_{i = 1}^N$ of different sizes and quality by using different behavior policies $\beta$ to select movies $a_i$ for users $s_i$ and obtaining the binary feedback $r_i$ from the well-trained simulator. We train offline RL agents on the generated dataset $\mathcal{D}$ and use the simulator to evaluate the learned policies on a held-out test set of users.
+
+We compare our method with two baselines commonly used in current industrial recommender systems [10, 7]. (1) Cross-Entropy: a supervised learning method for the softmax recommender whose learning objective is the cross-entropy loss $J_{CE}(\theta) = -\frac{1}{N}\sum_{i=1}^{N}r_i\log \pi_\theta(a_i|s_i)$ . (2) IPS: the off-policy policy gradient method introduced in [7] with the learning objective $J_{IPS}(\theta) = -\frac{1}{N}\sum_{i=1}^{N}\frac{sg(\pi_\theta(a_i|s_i))}{\beta(a_i|s_i)}r_i\log \pi_\theta(a_i|s_i)$ , where $sg$ denotes a stop-gradient operation. $J_{IPS}(\theta)$ produces the same gradient as the function $-\frac{1}{N}\sum_{i=1}^{N}r_i\frac{\pi_\theta(a_i|s_i)}{\beta(a_i|s_i)}$ ; thus, minimizing the loss $J_{IPS}(\theta)$ amounts to maximizing the expected return with importance sampling. (3) Ours: in the bandit setting, we simply perform IPS with gradually decaying KL regularization, since estimating the soft Q value from the Bellman update is not needed. Tab. 3 clearly demonstrates the advantage of our proposed method over the baselines. IPS can be viewed as vanilla policy gradient with importance sampling to correct the distribution shift; our method clearly outperforms it across datasets collected with different behavior policies.
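The stop-gradient equivalence can be checked analytically for a softmax policy, since $\partial \log \pi(a)/\partial \text{logits} = \text{onehot}(a) - \pi$ and both losses then yield the same gradient. A small numerical sketch; the logits, action, reward, and propensity below are made up:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

logits = np.array([0.2, -0.5, 1.0])   # hypothetical policy logits for one context
pi = softmax(logits)
a, r, beta_a = 1, 1.0, 0.4            # logged action, binary reward, propensity

grad_logp = np.eye(3)[a] - pi                  # d log pi(a) / d logits
g_ips = -(pi[a] / beta_a) * r * grad_logp      # gradient of J_IPS (pi held by stop-grad)
g_ratio = -(r / beta_a) * pi[a] * grad_logp    # gradient of -r * pi(a)/beta(a)
assert np.allclose(g_ips, g_ratio)             # identical, as claimed in the text
```

The stop-gradient form is usually preferred in practice because it is expressed as a weighted log-likelihood, which standard autodiff training loops handle directly.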
+
+# 5 CONCLUSION
+
+We propose a simple yet effective approach, a soft policy iteration algorithm through the continuation method, to alleviate two challenges in policy optimization for batch reinforcement learning: (1) a highly non-smooth objective function that is difficult to optimize, and (2) high variance in value estimates. We provide theoretical grounding and visualization tools to help understand this technique, and demonstrate its efficacy on multiple complex tasks.
+
+# REFERENCES
+
+[1] Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained policy optimization. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 22–31. JMLR.org, 2017.
+[2] Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. Striving for simplicity in off-policy deep reinforcement learning. arXiv preprint arXiv:1907.04543, 2019.
+[3] Zafarali Ahmed, Nicolas Le Roux, Mohammad Norouzi, and Dale Schuurmans. Understanding the impact of entropy on policy optimization. In International Conference on Machine Learning, pp. 151-160. PMLR, 2019.
+[4] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279, 2013.
+[5] James Bennett, Stan Lanning, et al. The Netflix Prize. Citeseer, 2007.
+[6] Olivier Chapelle, Mingmin Chi, and Alexander Zien. A continuation method for semi-supervised svms. In Proceedings of the 23rd international conference on Machine learning, pp. 185-192, 2006.
+[7] Minmin Chen, Alex Beutel, Paul Covington, Sagar Jain, Francois Belletti, and Ed H Chi. Top- $k$ off-policy correction for a reinforce recommender system. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 456-464, 2019.
+[8] Minmin Chen, Ramki Gummadi, Chris Harris, and Dale Schuurmans. Surrogate objectives for batch policy optimization in one-step decision making. In Advances in Neural Information Processing Systems, pp. 8825-8835, 2019.
+[9] Richard Cheng, Abhinav Verma, Gabor Orosz, Swarat Chaudhuri, Yisong Yue, and Joel W Burdick. Control regularization for reduced variance reinforcement learning. arXiv preprint arXiv:1905.05380, 2019.
+[10] Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM conference on recommender systems, pp. 191-198, 2016.
+[11] Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6(Apr):503-556, 2005.
+[12] Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. arXiv preprint arXiv:1812.02900, 2018.
+[13] Scott Fujimoto, Edoardo Conti, Mohammad Ghavamzadeh, and Joelle Pineau. Benchmarking batch deep reinforcement learning algorithms. arXiv preprint arXiv:1910.01708, 2019.
+[14] Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pp. 2052-2062, 2019.
+[15] Jordi Grau-Moya, Felix Leibfried, and Peter Vrancx. Soft q-learning with mutual-information regularization. 2018.
+
+[16] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
+[17] Elaine T Hale, Wotao Yin, and Yin Zhang. A fixed-point continuation method for $\ell_1$-regularized minimization with applications to compressed sensing. CAAM TR07-07, Rice University, 43:44, 2007.
+[18] Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456, 2019.
+[19] Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In ICML, volume 2, pp. 267-274, 2002.
+[20] Shivaram Kalyanakrishnan and Peter Stone. Batch reinforcement learning in a complex domain. In Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems, pp. 94. ACM, 2007.
+[21] Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy q-learning via bootstrapping error reduction. In Advances in Neural Information Processing Systems, pp. 11784-11794, 2019.
+[22] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. arXiv preprint arXiv:2006.04779, 2020.
+[23] Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. In Reinforcement learning, pp. 45-73. Springer, 2012.
+[24] Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
+[25] Jiaqi Ma, Zhe Zhao, Xinyang Yi, Ji Yang, Minmin Chen, Jiaxi Tang, Lichan Hong, and Ed H Chi. Off-policy learning in two-stage recommender systems. In Proceedings of The Web Conference 2020, pp. 463-473, 2020.
+[26] Jincheng Mei, Chenjun Xiao, Csaba Szepesvari, and Dale Schuurmans. On the global convergence rates of softmax policy gradient methods. arXiv preprint arXiv:2005.06392, 2020.
+[27] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International conference on machine learning, pp. 1889-1897, 2015.
+[28] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+[29] Noah Y Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdelmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, and Martin Riedmiller. Keep doing what worked: Behavioral modelling priors for offline reinforcement learning. arXiv preprint arXiv:2002.08396, 2020.
+[30] Alex Strehl, John Langford, Lihong Li, and Sham M Kakade. Learning from logged implicit exploration data. In Advances in Neural Information Processing Systems, pp. 2217-2225, 2010.
+[31] Adith Swaminathan and Thorsten Joachims. Batch learning from logged bandit feedback through counterfactual risk minimization. Journal of Machine Learning Research, 16(1):1731-1755, 2015.
+[32] Philip Thomas and Emma Brunskill. Data-efficient off-policy policy evaluation for reinforcement learning. In International Conference on Machine Learning, pp. 2139-2148, 2016.
+[33] Ziyu Wang, Alexander Novikov, Konrad Žolna, Jost Tobias Springenberg, Scott Reed, Bobak Shahrriari, Noah Siegel, Josh Merel, Caglar Gulcehre, Nicolas Heess, et al. Critic regularized regression. arXiv preprint arXiv:2006.15134, 2020.
+
+[34] Yifan Wu, George Tucker, and Ofir Nachum. Behavior regularized offline reinforcement learning. arXiv preprint arXiv:1911.11361, 2019.
+[35] Zhijun Wu. The effective energy transformation scheme as a special continuation approach to global optimization with application to molecular conformation. SIAM Journal on Optimization, 6(3):748-768, 1996.
+[36] Fisher Yu, Wenqi Xian, Yingying Chen, Fangchen Liu, Mike Liao, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687, 2018.
\ No newline at end of file
diff --git a/batchreinforcementlearningthroughcontinuationmethod/images.zip b/batchreinforcementlearningthroughcontinuationmethod/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f6c6ae44adebd96d3aa5a082e313fa1750f1da12
--- /dev/null
+++ b/batchreinforcementlearningthroughcontinuationmethod/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb3a11cd396aaf46448e23f13855f2678da97ce8280b97788d49e371887d6cb3
+size 576220
diff --git a/batchreinforcementlearningthroughcontinuationmethod/layout.json b/batchreinforcementlearningthroughcontinuationmethod/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c173586260517926808055662b8ab488fed7e4b4
--- /dev/null
+++ b/batchreinforcementlearningthroughcontinuationmethod/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:939a9062c7bdb09efbac7ee35cd3311c34638dc30526d96cbb08c042652b44a6
+size 438143
diff --git a/bayesiancontextaggregationforneuralprocesses/e9ed5707-401b-44ea-b04c-3e4637bb35d2_content_list.json b/bayesiancontextaggregationforneuralprocesses/e9ed5707-401b-44ea-b04c-3e4637bb35d2_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..befa413baa098bcce3f532d1d17b18b2dde4c8b6
--- /dev/null
+++ b/bayesiancontextaggregationforneuralprocesses/e9ed5707-401b-44ea-b04c-3e4637bb35d2_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f5a4bb689ddcfd8e4cf81bf7980b159bb59459f161e4b213e4746a371edc56a9
+size 168688
diff --git a/bayesiancontextaggregationforneuralprocesses/e9ed5707-401b-44ea-b04c-3e4637bb35d2_model.json b/bayesiancontextaggregationforneuralprocesses/e9ed5707-401b-44ea-b04c-3e4637bb35d2_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5ade2c12eff0bd5a6e25fbc23b4111443152ce7e
--- /dev/null
+++ b/bayesiancontextaggregationforneuralprocesses/e9ed5707-401b-44ea-b04c-3e4637bb35d2_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:80529ba56af806e91cd5bc3995768870fe01901a972d8ac96d3901d2d0b3d603
+size 206306
diff --git a/bayesiancontextaggregationforneuralprocesses/e9ed5707-401b-44ea-b04c-3e4637bb35d2_origin.pdf b/bayesiancontextaggregationforneuralprocesses/e9ed5707-401b-44ea-b04c-3e4637bb35d2_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8fb3d9bda0487228e36d4308f678cf9d5c347f57
--- /dev/null
+++ b/bayesiancontextaggregationforneuralprocesses/e9ed5707-401b-44ea-b04c-3e4637bb35d2_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2dadc16c85f86730ab685ca75391c934af64d806178943118fba7d367b8c1bdb
+size 1924448
diff --git a/bayesiancontextaggregationforneuralprocesses/full.md b/bayesiancontextaggregationforneuralprocesses/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..79a851d6bf0ff32f56bb3560e54db0e8dac62bb0
--- /dev/null
+++ b/bayesiancontextaggregationforneuralprocesses/full.md
@@ -0,0 +1,734 @@
+# BAYESIAN CONTEXT AGGREGATION FOR NEURAL PROCESSES
+
Michael Volpp$^{1,2*}$, Fabian Flürenbrock$^{1}$, Lukas Grossberger$^{1}$, Christian Daniel, Gerhard Neumann$^{2,3}$

$^{1}$Bosch Center for Artificial Intelligence, Renningen, Germany
$^{2}$Karlsruhe Institute of Technology, Karlsruhe, Germany
$^{3}$University of Tübingen, Tübingen, Germany
+
+# ABSTRACT
+
+Formulating scalable probabilistic regression models with reliable uncertainty estimates has been a long-standing challenge in machine learning research. Recently, casting probabilistic regression as a multi-task learning problem in terms of conditional latent variable (CLV) models such as the Neural Process (NP) has shown promising results. In this paper, we focus on context aggregation, a central component of such architectures, which fuses information from multiple context data points. So far, this aggregation operation has been treated separately from the inference of a latent representation of the target function in CLV models. Our key contribution is to combine these steps into one holistic mechanism by phrasing context aggregation as a Bayesian inference problem. The resulting Bayesian Aggregation (BA) mechanism enables principled handling of task ambiguity, which is key for efficiently processing context information. We demonstrate on a range of challenging experiments that BA consistently improves upon the performance of traditional mean aggregation while remaining computationally efficient and fully compatible with existing NP-based models.
+
+# 1 INTRODUCTION
+
Estimating statistical relationships between physical quantities from measured data is of central importance in all branches of science and engineering, and devising powerful regression models for this purpose forms a major field of study in statistics and machine learning. In terms of representational power, neural networks (NNs) are arguably the most prominent member of the regression toolbox. NNs cope well with large amounts of training data and are computationally efficient at test time. On the downside, standard NN variants do not provide uncertainty estimates over their predictions and tend to overfit on small datasets. Gaussian processes (GPs) may be viewed as complementary to NNs, as they provide reliable uncertainty estimates, but the cubic (quadratic) scaling of their basic formulation with the number of context data points at training (test) time hinders their application to tasks with large amounts of data or to high-dimensional problems.
+
Recently, the scientific community has shown great interest in combining aspects of NNs and GPs. A prominent approach casts probabilistic regression as a multi-task learning problem, formalized in terms of amortized inference in conditional latent variable (CLV) models, which results in NN-based architectures that learn a distribution over target functions. Notable variants are the Neural Process (NP) (Garnelo et al., 2018b) and the work of Gordon et al. (2019), which presents a unifying view on a range of related approaches in the language of CLV models.
+
+Inspired by this research, we study context aggregation, a central component of such models, and propose a new, fully Bayesian, aggregation mechanism for CLV-based probabilistic regression models.
+
+To transform the information contained in the context data into a latent representation of the target function, current approaches typically employ a mean aggregator and feed the output of this aggregator into a NN to predict a distribution over global latent parameters of the function. Hence, aggregation and latent parameter inference have so far been treated as separate parts of the learning pipeline. Moreover, when using a mean aggregator, every context sample is assumed to carry the same amount of information. Yet, in practice, different input locations have different task ambiguity and, therefore, samples should be assigned different importance in the aggregation process. In contrast, our Bayesian aggregation mechanism treats context aggregation and latent parameter inference as one holistic mechanism, i.e., the aggregation directly yields the distribution over the latent parameters of the target function. Indeed, we formulate context aggregation as Bayesian inference of latent parameters using Gaussian conditioning in the latent space. Compared to existing methods, the resulting aggregator improves the handling of task ambiguity, as it can assign different variance levels to the context samples. This mechanism improves predictive performance, while it remains conceptually simple and introduces only negligible computational overhead. Moreover, our Bayesian aggregator can also be applied to deterministic model variants like the Conditional NP (CNP) (Garnelo et al., 2018a).
+
+In summary, our contributions are (i) a novel Bayesian Aggregation (BA) mechanism for context aggregation in NP-based models for probabilistic regression, (ii) its application to existing CLV architectures as well as to deterministic variants like the CNP, and (iii) an exhaustive experimental evaluation, demonstrating BA's superiority over traditional mean aggregation.
+
+# 2 RELATED WORK
+
Prominent approaches to probabilistic regression are Bayesian linear regression and its kernelized counterpart, the Gaussian process (GP) (Rasmussen and Williams, 2005). The formal correspondence of GPs with infinite-width Bayesian NNs (BNNs) has been established in Neal (1996) and Williams (1996). A broad range of research aims to overcome the cubic scaling behaviour of GPs with the number of context points, e.g., through sparse GP approximations (Smola and Bartlett, 2001; Lawrence et al., 2002; Snelson and Ghahramani, 2005; Quinonero-Candela and Rasmussen, 2005), by deep kernel learning (Wilson et al., 2016), by approximating the posterior distribution of BNNs (MacKay, 1992; Hinton and van Camp, 1993; Gal and Ghahramani, 2016; Louizos and Welling, 2017), or by adaptive Bayesian linear regression, i.e., by performing inference over the last layer of a NN, which introduces sparsity through linear combinations of finitely many learned basis functions (Lazaro-Gredilla and Figueiras-Vidal, 2010; Hinton and Salakhutdinov, 2008; Snoek et al., 2012; Calandra et al., 2016). A complementary approach aims to increase the data-efficiency of deep architectures through a fully Bayesian treatment of hierarchical latent variable models ("DeepGPs") (Damianou and Lawrence, 2013).
+
+A parallel line of research studies probabilistic regression in the multi-task setting. Here, the goal is to formulate models which are data-efficient on an unseen target task by training them on data from a set of related source tasks. Bardenet et al. (2013); Yogatama and Mann (2014), and Golovin et al. (2017) study multi-task formulations of GP-based models. More general approaches of this kind employ the meta-learning framework (Schmidhuber, 1987; Thrun and Pratt, 1998; Vilalta and Drissi, 2005), where a model's training procedure is formulated in a way which incentivizes it to learn how to solve unseen tasks rapidly with only a few context examples ("learning to learn", "few-shot learning" (Fei-Fei et al., 2006; Lake et al., 2011)). A range of such methods trains a meta-learner to learn how to adjust the parameters of the learner's model (Bengio et al., 1991; Schmidhuber, 1992), an approach which has recently been applied to few-shot image classification (Ravi and Larochelle, 2017), or to learning data-efficient optimization algorithms (Hochreiter et al., 2001; Li and Malik, 2016; Andrychowicz et al., 2016; Chen et al., 2017; Perrone et al., 2018; Volpp et al., 2019). Other branches of meta-learning research aim to learn similarity metrics to determine the relevance of context samples for the target task (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2017), or explore the application of memory-augmented neural networks for meta-learning (Santoro et al., 2016). Finn et al. (2017) propose model-agnostic meta-learning (MAML), a general framework for fast parameter adaptation in gradient-based learning methods.
+
+A successful formulation of probabilistic regression as a few-shot learning problem in a multi-task setting is enabled by recent advances in the area of probabilistic meta-learning methods which allow a quantitative treatment of the uncertainty arising due to task ambiguity, a feature particularly
+
+relevant for few-shot learning problems. One line of work specifically studies probabilistic extensions of MAML (Grant et al., 2018; Ravi and Larochelle, 2017; Rusu et al., 2018; Finn et al., 2018; Kim et al., 2018). Further important approaches are based on amortized inference in multi-task CLV models (Heskes, 2000; Bakker and Heskes, 2003; Kingma and Welling, 2013; Rezende et al., 2014; Sohn et al., 2015), which forms the basis of the Neural Statistician proposed by Edwards and Storkey (2017) and of the NP model family (Garnelo et al., 2018b; Kim et al., 2019; Louizos et al., 2019). Gordon et al. (2019) present a unifying view on many of the aforementioned probabilistic architectures. Building on the conditional NPs (CNPs) proposed by Garnelo et al. (2018a), a range of NP-based architectures, such as Garnelo et al. (2018b) and Kim et al. (2019), consider combinations of deterministic and CLV model architectures. Recently, Gordon et al. (2020) extended CNPs to include translation equivariance in the input space, yielding state-of-the-art predictive performance.
+
+In this paper, we also employ a formulation of probabilistic regression in terms of a multi-task CLV model. However, while in previous work the context aggregation mechanism (Zaheer et al., 2017; Wagstaff et al., 2019) was merely viewed as a necessity to consume context sets of variable size, we take inspiration from Becker et al. (2019) and emphasize the fundamental connection of latent parameter inference with context aggregation and, hence, base our model on a novel Bayesian aggregation mechanism.
+
+# 3 PRELIMINARIES
+
We present the standard multi-task CLV model which forms the basis for our discussion, along with traditional mean context aggregation (MA) and the variational inference (VI) likelihood approximation as employed by the NP model family (Garnelo et al., 2018a; Kim et al., 2019), as well as an alternative Monte Carlo (MC)-based approximation.
+
+Problem Statement. We frame probabilistic regression as a multi-task learning problem. Let $\mathcal{F}$ denote a family of functions $f_{\ell}:\mathbb{R}^{d_x}\to \mathbb{R}^{d_y}$ with some form of shared statistical structure.
+
We assume access to data sets $\mathcal{D}_{\ell} \equiv \{(x_{\ell,i}, y_{\ell,i})\}_i$ of evaluations $y_{\ell,i} \equiv f_{\ell}(x_{\ell,i}) + \varepsilon$ from a subset of functions ("tasks") $\{f_{\ell}\}_{\ell=1}^{L} \subset \mathcal{F}$, corrupted by additive Gaussian noise $\varepsilon \sim \mathcal{N}\left(0, \sigma_n^2\right)$. From this data, we aim to learn the posterior predictive distribution $p(y_{\ell}|x_{\ell}, \mathcal{D}_{\ell}^{c})$ over a (set of) $y_{\ell}$, given the corresponding (set of) inputs $x_{\ell}$ as well as a context set $\mathcal{D}_{\ell}^{c} \subset \mathcal{D}_{\ell}$.
+
+The Multi-Task CLV Model. We formalize the multi-task learning problem in terms of a CLV model (Heskes, 2000; Gordon et al., 2019) as shown in Fig. 1. The model employs task-specific global latent variables $z_{\ell} \in \mathbb{R}^{d_z}$ , as well as a task-independent latent variable $\theta$ , capturing the statistical structure shared between tasks. To learn $\theta$ , we split the data into context sets $\mathcal{D}_{\ell}^{c} \equiv \{(x_{\ell,n}^{c}, y_{\ell,n}^{c})\}_{n=1}^{N_{\ell}}$ and target sets $\mathcal{D}_{\ell}^{t} \equiv \{(x_{\ell,m}^{t}, y_{\ell,m}^{t})\}_{m=1}^{M_{\ell}}$ and maximize the posterior predictive likelihood function
+
+
+Figure 1: Multi-task CLV model with task-specific global latent variables $z_{\ell}$ and a task-independent variable $\theta$ describing statistical structure shared between tasks.
+
+$$
+\prod_ {\ell = 1} ^ {L} p \left(y _ {\ell , 1: M _ {\ell}} ^ {t} \mid x _ {\ell , 1: M _ {\ell}} ^ {t}, \mathcal {D} _ {\ell} ^ {c}, \theta\right) = \prod_ {\ell = 1} ^ {L} \int p \left(z _ {\ell} \mid \mathcal {D} _ {\ell} ^ {c}, \theta\right) \prod_ {m = 1} ^ {M _ {\ell}} p \left(y _ {\ell , m} ^ {t} \mid z _ {\ell}, x _ {\ell , m} ^ {t}, \theta\right) d z _ {\ell} \tag {1}
+$$
+
+w.r.t. $\theta$ . In what follows, we omit task indices $\ell$ to avoid clutter.
+
+Likelihood Approximation. Marginalizing over the task-specific latent variables $z$ is intractable for reasonably complex models, so one has to employ some form of approximation. The NP-family of models (Garnelo et al., 2018b; Kim et al., 2019) uses an approximation of the form
+
+$$
+\log p \left(y _ {1: M} ^ {t} \mid x _ {1: M} ^ {t}, \mathcal {D} ^ {c}, \theta\right) \gtrsim \mathbb {E} _ {q _ {\phi} (z | \mathcal {D} ^ {c} \cup \mathcal {D} ^ {t})} \left[ \sum_ {m = 1} ^ {M} \log p \left(y _ {m} ^ {t} \mid z, x _ {m} ^ {t}, \theta\right) + \log \frac {q _ {\phi} (z | \mathcal {D} ^ {c})}{q _ {\phi} (z | \mathcal {D} ^ {c} \cup \mathcal {D} ^ {t})} \right]. \tag {2}
+$$
+
Being derived using a variational approach, this approximation utilizes an approximate posterior distribution $q_{\phi}(z|\mathcal{D}^c) \approx p(z|\mathcal{D}^c,\theta)$. Note, however, that it does not constitute a proper evidence lower bound for the posterior predictive likelihood, since the intractable latent posterior $p(z|\mathcal{D}^c,\theta)$ has been replaced by $q_{\phi}(z|\mathcal{D}^c)$ in the numerator of the rightmost term (Le et al., 2018). An alternative approximation, employed for instance in Gordon et al. (2019), also replaces the intractable latent posterior distribution by an approximate distribution $q_{\phi}(z|\mathcal{D}^c) \approx p(z|\mathcal{D}^c,\theta)$ and uses a Monte-Carlo (MC) approximation of the resulting integral based on $K$ latent samples, i.e.,
+
+$$
+\log p \left(y _ {1: M} ^ {t} \mid x _ {1: M} ^ {t}, \mathcal {D} ^ {c}, \theta\right) \approx - \log K + \log \sum_ {k = 1} ^ {K} \prod_ {m = 1} ^ {M} p \left(y _ {m} ^ {t} \mid z _ {k}, x _ {m} ^ {t}, \theta\right), \quad z _ {k} \sim q _ {\phi} (z | \mathcal {D} ^ {c}). \tag {3}
+$$
+
+Note that both approaches employ approximations $q_{\phi}(z|\mathcal{D}^c)$ of the latent posterior distribution $p(z|\mathcal{D}^c,\theta)$ and, as indicated by the notation, amortize inference in the sense that one single set of parameters $\phi$ is shared between all context data points. This enables efficient inference at test time, as no per-data-point optimization loops are required. As is standard in the literature (Garnelo et al., 2018b; Kim et al., 2019), we represent $q_{\phi}(z|\mathcal{D}^c)$ and $p(y_m^t |z,x_m^t,\theta)$ by NNs and refer to them as the encoder (enc, parameters $\phi$ ) and decoder (dec, parameters $\theta$ ) networks, respectively. These networks set the means and variances of factorized Gaussian distributions, i.e.,
+
+$$
q_{\phi}(z \mid \mathcal{D}^{c}) = \mathcal{N}\left(z \mid \mu_{z}, \operatorname{diag}(\sigma_{z}^{2})\right), \quad \mu_{z} = \operatorname{enc}_{\mu_{z},\phi}(\mathcal{D}^{c}), \quad \sigma_{z}^{2} = \operatorname{enc}_{\sigma_{z}^{2},\phi}(\mathcal{D}^{c}), \tag{4}
+$$
+
+$$
p\left(y_{m}^{t} \mid z, x_{m}^{t}, \theta\right) = \mathcal{N}\left(y_{m}^{t} \mid \mu_{y}, \operatorname{diag}(\sigma_{y}^{2})\right), \quad \mu_{y} = \operatorname{dec}_{\mu_{y},\theta}\left(z, x_{m}^{t}\right), \quad \sigma_{y}^{2} = \operatorname{dec}_{\sigma_{y}^{2},\theta}\left(z, x_{m}^{t}\right). \tag{5}
+$$
+
+Context Aggregation. The latent variable $z$ is global in the sense that it depends on the whole context set $\mathcal{D}^c$ . Therefore, some form of aggregation mechanism is required to enable the encoder to consume context sets $\mathcal{D}^c$ of variable size. To represent a meaningful operation on sets, such an aggregation mechanism has to be invariant to permutations of the context data points. Zaheer et al. (2017) characterize possible aggregation mechanisms w.r.t. this permutation invariance condition, resulting in the structure of traditional aggregation mechanisms depicted in Fig. 2(a). Each context data tuple $(x_n^c, y_n^c)$ is first mapped onto a latent observation $r_n = \mathrm{enc}_{r,\phi}(x_n^c, y_n^c) \in \mathbb{R}^{d_r}$ . Then, a permutation-invariant operation is applied to the set $\{r_n\}_{n=1}^N$ to obtain an aggregated latent observation $\bar{r}$ . One prominent choice, employed for instance in Garnelo et al. (2018a), Kim et al. (2019), and Gordon et al. (2019), is to take the mean, i.e.,
+
+$$
+\bar {r} = \frac {1}{N} \sum_ {n = 1} ^ {N} r _ {n}. \tag {6}
+$$
+
+Subsequently, $\bar{r}$ is mapped onto the parameters $\mu_z$ and $\sigma_z^2$ of the approximate posterior distribution $q_{\phi}(z|\mathcal{D}^c)$ using additional encoder networks, i.e., $\mu_z = \mathrm{enc}_{\mu_z,\phi}(\bar{r})$ and $\sigma_z^2 = \mathrm{enc}_{\sigma_z^2,\phi}(\bar{r})$ . Note that three encoder networks are employed here: (i) $\mathrm{enc}_{r,\phi}$ to map from the context pairs to $r_n$ , (ii) $\mathrm{enc}_{\mu_z,\phi}$ to compute $\mu_z$ from the aggregated mean $\bar{r}$ and (iii) $\mathrm{enc}_{\sigma_z^2,\phi}$ to compute the variance $\sigma_z^2$ from $\bar{r}$ . In what follows, we refer to this aggregation mechanism as mean aggregation (MA) and to the networks $\mathrm{enc}_{\mu_z,\phi}$ and $\mathrm{enc}_{\sigma_z^2,\phi}$ collectively as “ $\bar{r}$ -to- $z$ -networks”.
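For concreteness, the MA step of Eq. (6) can be sketched in a few lines of numpy (the encoder outputs $r_n$ are assumed given; the $\bar{r}$-to-$z$-networks that follow are omitted):

```python
import numpy as np

def mean_aggregate(r):
    """Eq. (6): permutation-invariant mean of N latent observations.

    r: (N, d_r) array, one row r_n = enc_r(x_n^c, y_n^c) per context tuple.
    Every r_n receives the same weight 1/N, regardless of how informative
    the underlying context tuple is.
    """
    return r.mean(axis=0)
```

Permutation invariance holds because the mean ignores the order of the rows; the networks $\mathrm{enc}_{\mu_z,\phi}$ and $\mathrm{enc}_{\sigma_z^2,\phi}$ then map $\bar{r}$ to $(\mu_z, \sigma_z^2)$.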
+
+# 4 BAYESIAN CONTEXT AGGREGATION
+
We propose Bayesian Aggregation (BA), a novel context data aggregation technique for CLV models which avoids the detour via an aggregated latent observation $\bar{r}$ and directly treats the object of interest, namely the latent variable $z$, as the aggregated quantity. This reflects a central observation for CLV models with global latent variables: context data aggregation and latent parameter inference are fundamentally the same mechanism. Our key insight is to define a probabilistic observation model $p(r|z)$ for $r$ which depends on $z$. Given a new latent observation $r_n = \mathrm{enc}_{r,\phi}(x_n^c, y_n^c)$, we can update $p(z)$ by computing the posterior $p(z|r_n) = p(r_n|z)p(z)/p(r_n)$. Hence, by formulating context data aggregation as a Bayesian inference problem, we aggregate the information contained in $\mathcal{D}^c$ directly into the statistical description of $z$ based on first principles.
+
+# 4.1 BAYESIAN CONTEXT AGGREGATION VIA GAUSSIAN CONDITIONING
+
+BA can easily be implemented using a factorized Gaussian observation model of the form
+
+$$
p\left(r_{n} \mid z\right) = \mathcal{N}\left(r_{n} \mid z, \operatorname{diag}(\sigma_{r_{n}}^{2})\right), \quad r_{n} = \operatorname{enc}_{r,\phi}\left(x_{n}^{c}, y_{n}^{c}\right), \quad \sigma_{r_{n}}^{2} = \operatorname{enc}_{\sigma_{r}^{2},\phi}\left(x_{n}^{c}, y_{n}^{c}\right). \tag{7}
+$$
+
+
Figure 2: Comparison of aggregation mechanisms in CLV models: (a) traditional mean aggregation (MA); (b) our Bayesian aggregation (BA). Dashed lines correspond to learned components of the posterior approximation $q_{\phi}(z|\mathcal{D}^{c})$. BA avoids the detour via a mean-aggregated latent observation $\bar{r}$ and aggregates $\mathcal{D}^c$ directly into the statistical description of $z$. This makes it possible to incorporate a quantification of the information content of each context tuple $(x_n^c, y_n^c)$, as well as of $z$, into the inference in a principled manner, whereas MA assigns the same weight to every context tuple.
+
Note that, in contrast to standard variational auto-encoders (VAEs) (Kingma and Welling, 2013), we do not learn the mean and variance of a Gaussian distribution, but we learn the latent observation $r_n$ (which can be considered as a sample of $p(z)$) together with the variance $\sigma_{r_n}^2$ of this observation. This architecture permits Gaussian conditioning, which is difficult for VAEs. Indeed, we impose a factorized Gaussian prior $p_0(z) \equiv \mathcal{N}(z|\mu_{z,0},\mathrm{diag}(\sigma_{z,0}^2))$ and arrive at a Gaussian aggregation model in which the parameters of the posterior distribution $q_{\phi}(z|\mathcal{D}^{c})$ can be derived in closed form (cf. App. 7.1):
+
+$$
+\sigma_ {z} ^ {2} = \left[ \left(\sigma_ {z, 0} ^ {2}\right) ^ {\ominus} + \sum_ {n = 1} ^ {N} \left(\sigma_ {r _ {n}} ^ {2}\right) ^ {\ominus} \right] ^ {\ominus}, \quad \mu_ {z} = \mu_ {z, 0} + \sigma_ {z} ^ {2} \odot \sum_ {n = 1} ^ {N} \left(r _ {n} - \mu_ {z, 0}\right) \oslash \left(\sigma_ {r _ {n}} ^ {2}\right). \tag {8}
+$$
+
+Here $\ominus$ , $\odot$ and $\oslash$ denote element-wise inversion, product, and division, respectively. These equations naturally lend themselves to efficient incremental updates as new context data $(x_{n}^{c},y_{n}^{c})$ arrives by using the current posterior parameters $\mu_{z,\mathrm{old}}$ and $\sigma_{z,\mathrm{old}}^2$ in place of the prior parameters, i.e.,
+
+$$
\sigma_{z,\text{new}}^{2} = \left[\left(\sigma_{z,\text{old}}^{2}\right)^{\ominus} + \left(\sigma_{r_{n}}^{2}\right)^{\ominus}\right]^{\ominus}, \quad \mu_{z,\text{new}} = \mu_{z,\text{old}} + \sigma_{z,\text{new}}^{2} \odot \left(r_{n} - \mu_{z,\text{old}}\right) \oslash \sigma_{r_{n}}^{2}. \tag{9}
+$$
+
BA employs two encoder networks, $\mathrm{enc}_{r,\phi}$ and $\mathrm{enc}_{\sigma_r^2,\phi}$, mapping context tuples to latent observations and their variances, respectively. In contrast to MA, it does not require $\bar{r}$-to-$z$-networks, because the set $\{r_n\}_{n=1}^N$ is aggregated directly into the statistical description of $z$ by means of Eq. (8), cf. Fig. 2(b). Note that our factorization assumptions avoid the expensive matrix inversions that typically occur in Gaussian conditioning and are difficult to backpropagate through. Using factorized distributions renders BA cheap to evaluate, with only marginal computational overhead in comparison to MA. Furthermore, we can easily backpropagate through BA to compute gradients for optimizing the parameters of the encoder and decoder networks. Because the latent space of $z$ is shaped by the encoder network, the factorization assumptions are benign: the network can learn a representation in which they hold well. Note further that BA represents a permutation-invariant operation on $\mathcal{D}^c$.
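Since all operations in Eqs. (8) and (9) are element-wise, BA itself is only a few lines of code. Below is a minimal numpy sketch of the aggregation step alone; the encoder outputs $r_n$ and $\sigma_{r_n}^2$ are assumed given, and the function and variable names are ours:

```python
import numpy as np

def bayesian_aggregate(r, var_r, mu_0, var_0):
    """Eq. (8): closed-form factorized Gaussian posterior over z.

    r, var_r:    (N, d_z) latent observations and their learned variances.
    mu_0, var_0: (d_z,)   factorized Gaussian prior over z.
    All operations are element-wise -- no matrix inversions required.
    """
    var_z = 1.0 / (1.0 / var_0 + np.sum(1.0 / var_r, axis=0))
    mu_z = mu_0 + var_z * np.sum((r - mu_0) / var_r, axis=0)
    return mu_z, var_z

def ba_incremental(mu_old, var_old, r_n, var_rn):
    """Eq. (9): fold in one new observation, using the current posterior
    in place of the prior."""
    var_new = 1.0 / (1.0 / var_old + 1.0 / var_rn)
    mu_new = mu_old + var_new * (r_n - mu_old) / var_rn
    return mu_new, var_new
```

Aggregating the whole context set at once (Eq. (8)) and folding in the observations one at a time (Eq. (9)) yield the same posterior, which also makes the operation permutation-invariant.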
+
Discussion. BA includes MA as a special case. Indeed, Eq. (8) reduces to the mean-aggregated latent observation Eq. (6) if we impose a non-informative prior and uniform observation variances $\sigma_{r_n}^2 \equiv 1$. This observation sheds light on the benefits of a Bayesian treatment of aggregation. MA assigns the same weight $1/N$ to each latent observation $r_n$, independent of the amount of information contained in the corresponding context data tuple $(x_n^c, y_n^c)$, as well as independent of the uncertainty about the current estimate of $z$. Bayesian aggregation remedies both of these limitations: the influence of $r_n$ on the parameters $\mu_{z,\mathrm{old}}$ and $\sigma_{z,\mathrm{old}}^2$ describing the current aggregated state is determined by the relative magnitude of the observation variance $\sigma_{r_n}^2$ and the latent variance $\sigma_{z,\mathrm{old}}^2$, cf. Eq. (9). This emphasizes the central role of the learned observation variances $\sigma_{r_n}^2$: they quantify the amount of information contained in each latent observation $r_n$. BA can therefore handle task ambiguity more efficiently than MA, as the architecture can learn to assign little weight (by predicting high observation variances $\sigma_{r_n}^2$) to context points $(x_n^c, y_n^c)$ located in areas with high task ambiguity, i.e., to points which could have been generated by many of the functions in $\mathcal{F}$. Conversely, in areas with little task ambiguity, i.e., if $(x_n^c, y_n^c)$ contains a lot of information about the underlying function, BA can induce a strong influence on the posterior latent distribution. In contrast, MA has to find ways to propagate such information through the aggregation mechanism by encoding it in the mean-aggregated latent observation $\bar{r}$.
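The special-case claim can be checked numerically: with $\mu_{z,0} = 0$, a very large prior variance standing in for a non-informative prior, and $\sigma_{r_n}^2 \equiv 1$, the posterior mean of Eq. (8) collapses to the plain mean of Eq. (6). A minimal sketch under these assumptions:

```python
import numpy as np

def ba_mean_flat_prior(r, prior_var=1e12):
    """Posterior mean of Eq. (8) with mu_{z,0} = 0, a near-flat prior,
    and unit observation variances: each of the N observations then
    contributes unit precision, so the result is (almost exactly) the
    mean aggregation of Eq. (6)."""
    N = r.shape[0]
    var_z = 1.0 / (1.0 / prior_var + N)  # sum of N unit precisions + prior
    return var_z * r.sum(axis=0)
```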
+
+# 4.2 LIKELIHOOD APPROXIMATION WITH BAYESIAN CONTEXT AGGREGATION
+
+We show that BA is versatile in the sense that it can replace traditional MA in various CLV-based NP architectures as proposed, e.g., in Garnelo et al. (2018b) and Gordon et al. (2019), which employ samples from the approximate latent posterior $q_{\phi}(z|\mathcal{D}^c)$ to approximate the likelihood (as discussed in Sec. 3), as well as in deterministic variants like the CNP (Garnelo et al., 2018a).
+
+Sampling-Based Likelihood Approximations. BA is naturally compatible with both the VI and MC likelihood approximations for CLV models. Indeed, BA defines a Gaussian latent distribution from which we can easily obtain samples $z$ in order to evaluate Eq. (2) or Eq. (3) using the decoder parametrization Eq. (5).
+
Bayesian Context Aggregation for Conditional Neural Processes. BA motivates a novel alternative method for approximating the posterior predictive likelihood Eq. (1), resulting in a deterministic loss function which can be efficiently optimized for $\theta$ and $\phi$ in an end-to-end fashion. To this end, we employ a Gaussian approximation of the posterior predictive likelihood of the form
+
+$$
+p \left(y _ {1: M} ^ {t} \mid x _ {1: M} ^ {t}, \mathcal {D} ^ {c}, \theta\right) \approx \mathcal {N} \left(y _ {1: M} ^ {t} \mid \mu_ {y}, \Sigma_ {y}\right). \tag {10}
+$$
+
This is inspired by GPs, which also define a Gaussian likelihood. Maximizing this expression yields the optimal solution $\mu_y = \tilde{\mu}_y$, $\Sigma_y = \tilde{\Sigma}_y$, with $\tilde{\mu}_y$ and $\tilde{\Sigma}_y$ being the first and second moments of the true posterior predictive distribution. This well-known result is called moment matching, a popular variant of deterministic approximate inference used, e.g., in Deisenroth and Rasmussen (2011) and Becker et al. (2019). $\tilde{\mu}_y$ and $\tilde{\Sigma}_y$ are functions of the moments $\mu_z$ and $\sigma_z^2$ of the latent posterior $p(z|\mathcal{D}^c,\theta)$, which motivates the following decoder parametrization:
+
+$$
\mu_{y} = \operatorname{dec}_{\mu_{y},\theta}\left(\mu_{z}, \sigma_{z}^{2}, x_{m}^{t}\right), \quad \sigma_{y}^{2} = \operatorname{dec}_{\sigma_{y}^{2},\theta}\left(\mu_{z}, \sigma_{z}^{2}, x_{m}^{t}\right), \quad \Sigma_{y} = \operatorname{diag}\left(\sigma_{y}^{2}\right). \tag{11}
+$$
+
+Here, $\mu_z$ and $\sigma_z^2$ are given by the BA Eqs. (8). Note that we define the Gaussian approximation to be factorized w.r.t. the individual $y_{m}^{t}$, an assumption which simplifies the architecture but could be dropped if a more expressive model were required. This decoder can be interpreted as a "moment matching network", computing the moments of $y$ given the moments of $z$. Indeed, in contrast to the decoder networks of CLV-based NP architectures as defined in Eq. (5), it operates on the moments $\mu_z$ and $\sigma_z^2$ of the latent distribution instead of on samples $z$, which allows this approximation to be evaluated in a deterministic manner. In this sense, the resulting model is akin to the CNP, which defines a deterministic, conditional model with a decoder operating on the mean-aggregated latent observation $\bar{r}$. However, BA-based models trained in this deterministic manner still benefit from BA's ability to accurately quantify latent parameter uncertainty, which yields significantly improved predictive likelihoods. In what follows, we refer to this approximation scheme as direct parameter-based (PB) likelihood optimization.
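To make the data flow concrete, the following NumPy sketch mimics a PB forward pass with toy stand-ins for the encoder and decoder networks. All function names, dimensionalities, and architectures here are our own illustrative assumptions, not the implementation used in the experiments.

```python
import numpy as np

d_z = 4  # latent dimensionality (chosen arbitrarily for this sketch)

def encoder(x, y):
    # Hypothetical stand-in for the encoder: maps one context tuple to a
    # latent observation r_n and its (positive) variance sigma_r_n^2.
    h = np.tanh(np.concatenate([x, y]))
    return np.resize(h, d_z), np.resize(np.exp(h), d_z)

def decoder(mu_z, sigma_z2, x_t):
    # Hypothetical "moment matching network" (Eq. (11)): it consumes the
    # *moments* of the latent posterior rather than samples z.
    h = np.tanh(np.concatenate([mu_z, sigma_z2, x_t]))
    return h.mean(), np.exp(h).mean()  # mu_y, sigma_y^2 > 0

# Bayesian aggregation over a toy context set, starting from a
# standard-normal prior and folding in tuples one by one.
mu_z, sigma_z2 = np.zeros(d_z), np.ones(d_z)
for x_n, y_n in [(np.array([1.0]), np.array([0.3])),
                 (np.array([2.0]), np.array([-0.7]))]:
    r_n, sigma_r2 = encoder(x_n, y_n)
    sigma_new = 1.0 / (1.0 / sigma_z2 + 1.0 / sigma_r2)
    mu_z = mu_z + sigma_new * (r_n - mu_z) / sigma_r2
    sigma_z2 = sigma_new

# A single deterministic forward pass yields the Gaussian prediction of
# Eq. (10): no latent sampling is required.
mu_y, sigma_y2 = decoder(mu_z, sigma_z2, np.array([0.5]))
```

Note how each context tuple strictly reduces the latent variance, so the posterior moments handed to the decoder already encode how informative the context set is.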
+
+Discussion. The concrete choice of likelihood approximation or, equivalently, model architecture depends mainly on the intended use case. Sampling-based models are generally more expressive, as they can represent complex, i.e., structured, non-Gaussian posterior predictive distributions. Moreover, they yield true function samples, while deterministic models only allow approximate function samples through auto-regressive (AR) sampling schemes. Nevertheless, deterministic models exhibit several computational advantages. They yield direct probabilistic predictions in a single forward pass, while the predictions of sampling-based methods are only defined through averages over multiple function samples and hence require multiple forward passes. Likewise, evaluating the MC-based likelihood approximation Eq. (3) during training requires drawing multiple
+
+Table 1: Posterior predictive log-likelihood on functions drawn from GP priors with RBF, weakly periodic, and Matern-5/2 kernels, averaged over context sets with $N \in \{0,1,\dots,64\}$ points (table) and in dependence of $N$ (figure). BA consistently outperforms MA, independent of the likelihood approximation, with MC being the most expressive choice. PB represents an efficient, deterministic alternative, while the VI approximation tends to perform worst, in particular for small $N$ .
+
+| | PB/det., BA | PB/det., MA (CNP) | VI, BA | VI, MA (LP-NP) | MC, BA | MC, MA | ANP (MA + Attention) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| RBF GP | 1.37 ± 0.15 | 0.94 ± 0.04 | 1.40 ± 0.04 | 0.45 ± 0.12 | 1.62 ± 0.05 | 1.07 ± 0.05 | 0.98 ± 0.02 |
+| Weakly Periodic GP | 1.13 ± 0.08 | 0.76 ± 0.02 | 0.89 ± 0.03 | 0.07 ± 0.14 | 1.30 ± 0.06 | 0.85 ± 0.04 | 1.02 ± 0.02 |
+| Matern-5/2 GP | -0.50 ± 0.07 | -0.68 ± 0.01 | -0.79 ± 0.01 | -1.09 ± 0.10 | -0.33 ± 0.01 | -0.90 ± 0.15 | 0.25 ± 0.02 |
+
+$(K)$ latent samples $z$. While the VI likelihood approximation Eq. (2) can be optimized on a single function sample per training step through stochastic gradient descent (Bishop, 2006), it has the disadvantage that it requires feeding target sets $\mathcal{D}^t$ through the encoder, which can impede training for small context sets $\mathcal{D}^c$, as discussed in detail in App. 7.2.
+
+# 5 EXPERIMENTS
+
+We present experiments comparing the performance of BA and MA in NP-based models. To provide a complete picture, we evaluate all combinations of likelihood approximations (PB/deterministic Eq. (10), VI Eq. (2), MC Eq. (3)) and aggregation methods (BA Eq. (8), MA Eq. (6)), resulting in six different model architectures, cf. Fig. 4 in App. 7.5.2. Two of these architectures correspond to existing members of the NP family: MA + deterministic is equivalent to the CNP (Garnelo et al., 2018a), and MA + VI corresponds to the Latent-Path NP (LP-NP) (Garnelo et al., 2018b), i.e., the NP without a deterministic path. We further evaluate the Attentive Neural Process (ANP) (Kim et al., 2019), which employs a hybrid approach, combining the LP-NP with a cross-attention mechanism in a parallel deterministic path$^3$, as well as an NP architecture using MA with a self-attentive (SA) encoder network. Note that BA can also be used in hybrid models like the ANP or in combination with SA, an idea we leave for future research. In App. 7.4 we discuss NP-based regression in relation to other methods for (scalable) probabilistic regression.
+
+The performance of NP-based models depends heavily on the encoder and decoder network architectures as well as on the latent space dimensionality $d_{z}$. To assess the influence of the aggregation mechanism independently of all other confounding factors, we optimize the encoder and decoder network architectures, the latent-space dimensionality $d_{z}$, and the learning rate of the Adam optimizer (Kingma and Ba, 2015) separately for each model architecture and each experiment using the Optuna (Akiba et al., 2019) framework, cf. App. 7.5.3. Unless stated otherwise, we report performance in terms of the mean posterior predictive log-likelihood over 256 test tasks with 256 data points each, conditioned on context sets containing $N \in \{0,1,\dots,N_{\mathrm{max}}\}$ data points (cf. App. 7.5.4). For sampling-based methods (VI, MC, ANP), we report the joint log-likelihood over the test sets using a Monte-Carlo approximation with 25 latent samples, cf. App. 7.5.4. We average the resulting log-likelihood values over 10 training runs with different random seeds and report $95\%$ confidence intervals. We publish source code to reproduce the experimental results online.$^4$
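The Monte-Carlo evaluation of the joint log-likelihood can be sketched as follows. `mc_log_likelihood` is a hypothetical helper of ours; in practice, the per-sample predictive moments would come from decoding $K = 25$ latent samples.

```python
import numpy as np

def mc_log_likelihood(y_t, mu_y, sigma_y2):
    """MC approximation of the joint log-likelihood log p(y_t | x_t, D^c):
    average the joint Gaussian likelihood over K latent samples, computed
    in log space for numerical stability.
    y_t: (M,) targets; mu_y, sigma_y2: (K, M) per-sample moments."""
    log_p = -0.5 * (np.log(2.0 * np.pi * sigma_y2)
                    + (y_t - mu_y) ** 2 / sigma_y2)   # (K, M) pointwise terms
    log_joint = log_p.sum(axis=1)                     # joint over M targets
    m = log_joint.max()                               # log-mean-exp trick
    return m + np.log(np.mean(np.exp(log_joint - m)))

# Sanity check: with a single exact sample (K = 1), the estimate equals
# the closed-form Gaussian log-density.
y = np.zeros(3)
ll = mc_log_likelihood(y, np.zeros((1, 3)), np.ones((1, 3)))
# equals -3/2 * log(2*pi)
```

The log-mean-exp formulation matters here: averaging the $K$ joint likelihoods directly would underflow for the target-set sizes used above.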
+
+GP Samples. We evaluate the architectures on synthetic functions drawn from GP priors with different kernels (RBF, weakly periodic, Matern-5/2), as proposed by Gordon et al. (2020), cf. App. 7.5.1. We generate a new batch of functions for each training epoch. The results (Tab. 1) show that BA consistently outperforms MA, independent of the model architecture.
+
+Table 3: Posterior predictive log-likelihood on 1D and 3D quadratic functions with limited numbers $L$ of training tasks, averaged over context sets with $N \in \{0,1,\dots,20\}$ data points. BA outperforms MA by considerable margins in this regime of little training data.
+
+| | PB/det., BA | PB/det., MA (CNP) | VI, BA | VI, MA (LP-NP) | MC, BA | MC, MA | ANP (MA + Attention) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Quadratic 1D, L = 64 | 1.42 ± 0.20 | 0.47 ± 0.25 | 1.48 ± 0.05 | -0.32 ± 0.55 | 1.71 ± 0.23 | 1.27 ± 0.06 | 0.69 ± 0.08 |
+| Quadratic 3D, L = 128 | -2.46 ± 0.12 | -2.73 ± 0.10 | -2.53 ± 0.07 | -3.45 ± 0.12 | -1.79 ± 0.07 | -2.14 ± 0.05 | -3.08 ± 0.02 |
+
+
+Figure 3: Predictions on two instances (dashed lines) of the 1D quadratic function class, given $N = 3$ context data points (circles). We show mean and standard deviation predictions (solid line, shaded area), and 10 function samples (AR samples for deterministic methods). Cf. also App. 7.6.
+
+Interestingly, despite employing a factorized Gaussian approximation, our deterministic PB approximation performs at least on par with the traditional VI approximation, which tends to perform particularly poorly for small context sets, reflecting the intricacies discussed in Sec. 4.2. As expected, the MC approximation yields the best results in terms of predictive performance, as it is more expressive than the deterministic approaches and does not share the problems of the VI approach. As shown in Tab. 2 and Tab. 9, App. 7.6, our proposed PB likelihood approximation is
+
+Table 2: Relative evaluation runtimes and #parameters of the optimized network architectures on RBF GP. Also cf. Tab. 9.
+
+| | PB/det., BA | PB/det., MA (CNP) | VI, BA | VI, MA (LP-NP) | MC, BA | MC, MA |
+| --- | --- | --- | --- | --- | --- | --- |
+| Runtime | 1 | 1.4 | 18 | 25 | 32 | 27 |
+| #Parameters | 72k | 96k | 63k | 77k | 122k | 153k |
+
+much cheaper to evaluate than both sampling-based approaches, which require multiple forward passes per prediction. We further observe that BA tends to require smaller encoder and decoder networks, as it is more efficient at propagating context information to the latent state, as discussed in Sec. 4.1. The hybrid ANP approach is competitive only on the Matern-5/2 function class. Yet we refer the reader to Tab. 10 in App. 7.6, which demonstrates that the attention mechanism greatly improves performance in terms of MSE.
+
+Quadratic Functions. We further study the performance of BA with very limited amounts of training data. To this end, we consider two quadratic function classes, each parametrized by three real parameters, from which we generate limited numbers $L$ of training tasks. The first function class is defined on a one-dimensional domain, i.e., $x \in \mathbb{R}$, and we choose $L = 64$, while the second function class, as proposed by Perrone et al. (2018), is defined on $x \in \mathbb{R}^3$ with $L = 128$, cf. App. 7.5.1. As shown in Tab. 3, BA again consistently outperforms MA, often by considerable margins, underlining the efficiency of our Bayesian approach to aggregation in the regime of little training data. On the 1D task, all likelihood approximations perform approximately on par in combination with BA, while MC outperforms the other approximations on the more complex 3D task. Fig. 3 compares prediction quality.
+
+Dynamics of a Furuta Pendulum. We study BA on a realistic dataset given by the simulated dynamics of a rotary inverted pendulum, better known as the Furuta pendulum (Furuta et al., 1992). This highly non-linear dynamical system consists of an actuated arm rotating in the horizontal
+plane with an attached pendulum rotating freely in the vertical plane, parametrized by two masses, three lengths, and two damping constants. The regression task is the one-step-ahead prediction of the four-dimensional system state with a step size of $\Delta t = 0.1\,\mathrm{s}$, as detailed in App. 7.5.1. The results (Tab. 4) show that BA improves predictive performance also on complex, non-synthetic regression tasks with higher-dimensional input and output spaces. Further, they are consistent
+with our previous findings regarding the likelihood approximations, with MC being strongest in terms of predictive likelihood, followed by our efficient deterministic alternative PB.
+
+2D Image Completion. We consider a 2D image completion experiment where the inputs $x$ are pixel locations in images showing handwritten digits, and we regress onto the corresponding pixel intensities $y$ , cf. App. 7.6. Interestingly, we found that architectures without deterministic paths were not able to solve this task reliably which is why we only report results for deterministic models.
+
+Table 4: Posterior predictive log-likelihood on the dynamics of a Furuta pendulum, averaged over context sets with $N \in \{0,1,\dots,20\}$ state transitions. BA performs favorably on this real-world task.
+
+| | PB/det., BA | PB/det., MA (CNP) | VI, BA | VI, MA (LP-NP) | MC, BA | MC, MA | ANP (MA + Attention) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Furuta Dynamics | 7.50 ± 0.27 | 7.06 ± 0.12 | 7.32 ± 0.18 | 5.57 ± 0.21 | 8.25 ± 0.33 | 7.55 ± 0.24 | 4.74 ± 0.16 |
+
+Table 6: Comparison of the posterior predictive log-likelihood of our BA with traditional MA, combined with a self-attention (SA) mechanism in the encoder (BA does not use an SA mechanism), using the PB and MC likelihood approximations. We provide results for Laplace SA (L-SA), dot-product SA (DP-SA), and multihead SA (MH-SA), and repeat the results for BA and MA without SA ("no SA"). While L-SA and DP-SA do not increase predictive performance compared to MA without SA, MH-SA results in significant improvements. Nevertheless, vanilla BA still performs better or at least on par, while being computationally more efficient.
+
+| | BA + PB (no SA) | MA + PB (no SA) | MA + PB (L-SA) | MA + PB (DP-SA) | MA + PB (MH-SA) | BA + MC (no SA) | MA + MC (no SA) | MA + MC (L-SA) | MA + MC (DP-SA) | MA + MC (MH-SA) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| RBF GP | 1.37 ± 0.15 | 0.94 ± 0.04 | 0.74 ± 0.06 | 0.89 ± 0.04 | 1.46 ± 0.14 | 1.62 ± 0.05 | 1.07 ± 0.05 | 0.93 ± 0.05 | 0.98 ± 0.03 | 1.44 ± 0.09 |
+| Weakly Periodic GP | 1.13 ± 0.08 | 0.76 ± 0.02 | 0.59 ± 0.02 | 0.71 ± 0.02 | 1.13 ± 0.15 | 1.30 ± 0.06 | 0.85 ± 0.04 | 0.77 ± 0.03 | 0.82 ± 0.03 | 1.29 ± 0.04 |
+| Matern-5/2 GP | -0.50 ± 0.07 | -0.68 ± 0.01 | -1.03 ± 0.01 | -0.76 ± 0.01 | -0.64 ± 0.01 | -0.33 ± 0.01 | -0.90 ± 0.15 | -0.80 ± 0.02 | -0.86 ± 0.01 | -0.59 ± 0.03 |
+| Quadratic 1D, L = 64 | 1.42 ± 0.20 | 0.47 ± 0.25 | 0.15 ± 0.32 | 0.47 ± 0.24 | 1.49 ± 0.11 | 1.71 ± 0.23 | 1.27 ± 0.06 | 1.19 ± 0.09 | 1.32 ± 0.14 | 1.66 ± 0.12 |
+| Quadratic 3D, L = 128 | -2.46 ± 0.12 | -2.73 ± 0.10 | -2.94 ± 0.41 | -2.95 ± 0.13 | -2.13 ± 0.25 | -1.79 ± 0.07 | -2.14 ± 0.05 | -2.19 ± 0.11 | -2.18 ± 0.07 | -1.71 ± 0.05 |
+| Furuta Dynamics | 7.50 ± 0.27 | 7.06 ± 0.12 | 7.13 ± 0.12 | 7.04 ± 0.20 | 7.40 ± 0.46 | 8.25 ± 0.33 | 7.55 ± 0.24 | 7.80 ± 0.13 | 7.67 ± 0.14 | 8.39 ± 0.20 |
+
+As shown in Tab. 5, BA improves performance in comparison to MA by a large margin. This highlights that BA's ability to quantify the information content of a context tuple is particularly beneficial on this task, as, e.g., pixels in the middle area of the images typically convey more information about the identity of the digit than pixels located near the borders.
+
+Table 5: Predictive log-likelihood on a 2D image completion task on MNIST, averaged over $N \in \{0,1,\dots,392\}$ context pixels.
+
+| | PB/det., BA | PB/det., MA (CNP) | ANP (MA + Attention) |
+| --- | --- | --- | --- |
+| 2D Image Completion | 2.75 ± 0.20 | 2.05 ± 0.36 | 1.62 ± 0.03 |
+
+Self-attentive Encoders. Another interesting baseline for BA is MA combined with a self-attention (SA) mechanism in the encoder. Indeed, similar to BA, SA yields non-uniform weights for the latent observations $r_n$, where a given weight is computed from some form of pairwise spatial relationship with all other latent observations in the context set (cf. App. 7.3 for a detailed discussion). As BA's weight for $r_n$ depends only on $(x_n, y_n)$ itself, BA is computationally more efficient: SA scales like $\mathcal{O}(N^2)$ in the number $N$ of context tuples while BA scales like $\mathcal{O}(N)$, and, furthermore, SA does not allow for efficient incremental updates, while this is possible for BA, cf. Eq. (9). Tab. 6 shows a comparison of BA with MA in combination with various SA mechanisms in the encoder. We emphasize that we compare against BA in its vanilla form, i.e., BA does not use SA in the encoder. The results show that Laplace SA and dot-product SA do not improve predictive performance compared to vanilla MA, while multihead SA yields significantly better results. Nevertheless, vanilla BA still performs better or at least on par and is computationally more efficient. While out of the scope of this work, these results suggest that a combination of BA with SA is promising if computational disadvantages can be accepted in favour of increased predictive performance, cf. App. 7.3.
+
+# 6 CONCLUSION AND OUTLOOK
+
+We proposed a novel Bayesian Aggregation (BA) method for NP-based models, combining context aggregation and hidden parameter inference in one holistic mechanism which enables efficient handling of task ambiguity. BA is conceptually simple, compatible with existing NP-based model architectures, and consistently improves performance compared to traditional mean aggregation. It introduces only marginal computational overhead, simplifies the architectures in comparison to existing CLV models (no $\bar{r}$-to-$z$ networks), and tends to require less complex encoder and decoder network architectures. Our experiments further demonstrate that the VI likelihood approximation traditionally used to train NP-based models should be abandoned in favor of an MC-based approach, and that our proposed PB likelihood approximation represents an efficient deterministic alternative with strong predictive performance. We believe that a range of existing models, e.g., the ANP or NPs with self-attentive encoders, can benefit from BA, especially when a reliable quantification of uncertainty is crucial. Also, more complex Bayesian aggregation models are conceivable, opening interesting avenues for future research.
+
+# ACKNOWLEDGMENTS
+
+We thank Philipp Becker, Stefan Falkner, and the anonymous reviewers for valuable remarks and discussions which greatly improved this paper.
+
+# REFERENCES
+
+Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A Next-generation Hyperparameter Optimization Framework. International Conference on Knowledge Discovery and Data Mining, 2019.
+Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to Learn by Gradient Descent by Gradient Descent. Advances in Neural Information Processing Systems, 2016.
+Bart Bakker and Tom Heskes. Task Clustering and Gating for Bayesian Multitask Learning. Journal of Machine Learning Research, 2003.
+Rémi Bardenet, Mátyás Brendel, Balázs Kégl, and Michèle Sebag. Collaborative Hyperparameter Tuning. International Conference on Machine Learning, 2013.
+Philipp Becker, Harit Pandya, Gregor H. W. Gebhardt, Cheng Zhao, C. James Taylor, and Gerhard Neumann. Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces. International Conference on Machine Learning, 2019.
+Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a Synaptic Learning Rule. International Joint Conference on Neural Networks, 1991.
+Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
+R. Calandra, J. Peters, C. E. Rasmussen, and M. P. Deisenroth. Manifold Gaussian Processes for Regression. International Joint Conference on Neural Networks, 2016.
+Benjamin Seth Cazzolato and Zebb Prime. On the Dynamics of the Furuta Pendulum. Journal of Control Science and Engineering, 2011.
+Yutian Chen, Matthew W. Hoffman, Sergio Gómez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, and Nando de Freitas. Learning to Learn without Gradient Descent by Gradient Descent. International Conference on Machine Learning, 2017.
+Andreas Damianou and Neil Lawrence. Deep Gaussian Processes. International Conference on Artificial Intelligence and Statistics, 2013.
+M. P. Deisenroth and C. E. Rasmussen. PILCO: A Model-Based and Data-Efficient Approach to Policy Search. International Conference on Machine Learning, 2011.
+Harrison A. Edwards and Amos J. Storkey. Towards a Neural Statistician. International Conference on Learning Representations, 2017.
+Li Fei-Fei, R. Fergus, and P. Perona. One-shot Learning of Object Categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. International Conference on Machine Learning, 2017.
+Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic Model-Agnostic Meta-Learning. Advances in Neural Information Processing Systems, 2018.
+K. Furuta, M. Yamakita, and S. Kobayashi. Swing-up Control of Inverted Pendulum Using Pseudo-State Feedback. Journal of Systems and Control Engineering, 1992.
+Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. International Conference on Machine Learning, 2016.
+
+Marta Garnelo, Dan Rosenbaum, Christopher Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo Rezende, and S. M. Ali Eslami. Conditional Neural Processes. International Conference on Machine Learning, 2018a.
+Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo Jimenez Rezende, S. M. Ali Eslami, and Yee Whye Teh. Neural Processes. arXiv abs/1807.01622, 2018b.
+Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Karro, and D. Sculley. Google Vizier: A Service for Black-Box Optimization. International Conference on Knowledge Discovery and Data Mining, 2017.
+Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, and Richard E. Turner. Meta-Learning Probabilistic Inference for Prediction. International Conference on Learning Representations, 2019.
+Jonathan Gordon, Wessel P. Bruinsma, Andrew Y. K. Foong, James Requeima, Yann Dubois, and Richard E. Turner. Convolutional Conditional Neural Processes. International Conference on Learning Representations, 2020.
+Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas L. Griffiths. Recasting Gradient-Based Meta-Learning as Hierarchical Bayes. International Conference on Learning Representations, 2018.
+Tom Heskes. Empirical Bayes for Learning to Learn. International Conference on Machine Learning, 2000.
+Geoffrey E. Hinton and Russ R. Salakhutdinov. Using Deep Belief Nets to Learn Covariance Kernels for Gaussian Processes. Advances in Neural Information Processing Systems, 2008.
+Geoffrey E. Hinton and Drew van Camp. Keeping the Neural Networks Simple by Minimizing the Description Length of the Weights. Annual Conference on Computational Learning Theory, 1993.
+Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to Learn Using Gradient Descent. International Conference on Artificial Neural Networks, 2001.
+Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, S. M. Ali Eslami, Dan Rosenbaum, Oriol Vinyals, and Yee Whye Teh. Attentive Neural Processes. International Conference on Learning Representations, 2019.
+Taesup Kim, Jaesik Yoon, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian Model-Agnostic Meta-Learning. Advances in Neural Information Processing Systems, 2018.
+Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations, 2015.
+Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. International Conference on Learning Representations, 2013.
+Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese Neural Networks for One-shot Image Recognition. International Conference on Machine Learning, 2015.
+Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One Shot Learning of Simple Visual Concepts. Cognitive Science, 2011.
+Neil Lawrence, Matthias Seeger, and Ralf Herbrich. Fast Sparse Gaussian Process Methods: The Informative Vector Machine. Advances in Neural Information Processing Systems, 2002.
+M. Lazaro-Gredilla and A. R. Figueiras-Vidal. Marginalized Neural Network Mixtures for Large-Scale Regression. IEEE Transactions on Neural Networks, 2010.
+Tuan Anh Le, Hyunjik Kim, and Marta Garnelo. Empirical Evaluation of Neural Process Objectives. Third Workshop on Bayesian Deep Learning, 2018.
+Yann LeCun and Corinna Cortes. MNIST Handwritten Digit Database. 2010.
+
+Ke Li and Jitendra Malik. Learning to Optimize. International Conference on Machine Learning, 2016.
+Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization. Journal of Machine Learning Research, 2017.
+Christos Louizos and Max Welling. Multiplicative Normalizing Flows for Variational Bayesian Neural Networks. International Conference on Machine Learning, 2017.
+Christos Louizos, Xiahan Shi, Klamer Schutte, and M. Welling. The Functional Neural Process. Advances in Neural Information Processing Systems, 2019.
+David J. C. MacKay. A Practical Bayesian Framework for Backpropagation Networks. Neural Computation, 1992.
+Radford M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag, 1996.
+Valerio Perrone, Rodolphe Jenatton, Matthias W Seeger, and Cedric Archambeau. Scalable Hyperparameter Transfer Learning. Advances in Neural Information Processing Systems, 2018.
+Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A Unifying View of Sparse Approximate Gaussian Process Regression. Journal of Machine Learning Research, 2005.
+Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2005.
+Sachin Ravi and Hugo Larochelle. Optimization as a Model for Few-Shot Learning. International Conference on Learning Representations, 2017.
+Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. International Conference on Machine Learning, 2014.
+Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-Learning with Latent Embedding Optimization. International Conference on Learning Representations, 2018.
+Adam Santoro, Sergey Bartunov, Matthew M. Botvinick, Daan Wierstra, and Timothy P. Lillicrap. Meta-Learning with Memory-Augmented Neural Networks. International Conference on Machine Learning, 2016.
+Jürgen Schmidhuber. Evolutionary Principles in Self-Referential Learning. On Learning how to Learn: The Meta-Meta-Meta...-Hook. Diploma Thesis, Technische Universität München, Germany, 1987.
+Jürgen Schmidhuber. Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks. Neural Computation, 1992.
+Alex J. Smola and Peter L. Bartlett. Sparse Greedy Gaussian Process Regression. Advances in Neural Information Processing Systems, 2001.
+Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical Networks for Few-shot Learning. Advances in Neural Information Processing Systems, 2017.
+Edward Snelson and Zoubin Ghahramani. Sparse Gaussian Processes Using Pseudo-Inputs. International Conference on Neural Information Processing Systems, 2005.
+Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian Optimization of Machine Learning Algorithms. Advances in Neural Information Processing Systems, 2012.
+Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning Structured Output Representation using Deep Conditional Generative Models. Advances in Neural Information Processing Systems, 2015.
+
+Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, and Timothy M. Hospedales. Learning to Compare: Relation Network for Few-Shot Learning. Conference on Computer Vision and Pattern Recognition, 2017.
+Sebastian Thrun and Lorien Pratt. Learning to Learn. Kluwer Academic Publishers, 1998.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is All you Need. Advances in Neural Information Processing Systems, 2017.
+Ricardo Vilalta and Youssef Drissi. A Perspective View and Survey of Meta-Learning. Artificial Intelligence Review, 2005.
+Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching Networks for One Shot Learning. Advances in Neural Information Processing Systems, 2016.
+Michael Volpp, Lukas P. Fröhlich, Kirsten Fischer, Andreas Doerr, Stefan Falkner, Frank Hutter, and Christian Daniel. Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization. International Conference on Learning Representations, 2019.
+Edward Wagstaff, Fabian B. Fuchs, Martin Engelcke, Ingmar Posner, and Michael A. Osborne. On the Limitations of Representing Functions on Sets. International Conference on Machine Learning, 2019.
+Christopher K. I. Williams. Computing with Infinite Networks. Advances in Neural Information Processing Systems, 1996.
+Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P. Xing. Deep Kernel Learning. International Conference on Artificial Intelligence and Statistics, 2016.
+Dani Yogatama and Gideon Mann. Efficient Transfer Learning Method for Automatic Hyperparameter Tuning. International Conference on Artificial Intelligence and Statistics, 2014.
+Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabás Póczos, Ruslan Salakhutdinov, and Alexander J. Smola. Deep Sets. Advances in Neural Information Processing Systems, 2017.
+
+# 7 APPENDIX
+
+We present the derivation of the Bayesian aggregation update equations (Eqs. (8), (9)) in more detail. To foster reproducibility, we describe all experimental settings as well as the hyperparameter optimization procedure used to obtain the results reported in Sec. 5, and publish the source code online. We further provide additional experimental results and visualizations of the predictions of the compared architectures.
+
+# 7.1 DERIVATION OF THE BAYESIAN AGGREGATION UPDATE EQUATIONS
+
+We derive the full Bayesian aggregation update equations without making any factorization assumptions. We start from a Gaussian observation model of the form
+
+$$
+p \left(r _ {n} \mid z\right) \equiv \mathcal {N} \left(r _ {n} \mid z, \Sigma_ {r _ {n}}\right), \quad r _ {n} = \operatorname {e n c} _ {r, \phi} \left(x _ {n} ^ {c}, y _ {n} ^ {c}\right), \quad \Sigma_ {r _ {n}} = \operatorname {e n c} _ {\Sigma_ {r}, \phi} \left(x _ {n} ^ {c}, y _ {n} ^ {c}\right), \tag {12}
+$$
+
+where $r_n$ and $\Sigma_{r_n}$ are learned by the encoder network. If we impose a Gaussian prior in the latent space, i.e.,
+
+$$
+p (z) \equiv \mathcal {N} (z | \mu_ {z, 0}, \Sigma_ {z, 0}), \tag {13}
+$$
+
+we arrive at a Gaussian aggregation model which allows to derive the parameters of the posterior distribution, i.e., of
+
+$$
+q _ {\phi} (z | \mathcal {D} ^ {c}) = \mathcal {N} (z | \mu_ {z}, \Sigma_ {z}) \tag {14}
+$$
+
+in closed form using standard Gaussian conditioning (Bishop, 2006):
+
+$$
+\Sigma_ {z} = \left[ \left(\Sigma_ {z, 0}\right) ^ {- 1} + \sum_ {n = 1} ^ {N} \left(\Sigma_ {r _ {n}}\right) ^ {- 1} \right] ^ {- 1}, \tag {15a}
+$$
+
+$$
+\mu_ {z} = \mu_ {z, 0} + \Sigma_ {z} \sum_ {n = 1} ^ {N} \left(\Sigma_ {r _ {n}}\right) ^ {- 1} \left(r _ {n} - \mu_ {z, 0}\right). \tag {15b}
+$$
+
+As the latent space of $z$ is shaped by the encoder network, the encoder will find a representation in which the following factorization assumptions work well (provided $d_{z}$ is large enough):
+
+$$
+\Sigma_ {r _ {n}} = \operatorname {d i a g} \left(\sigma_ {r _ {n}} ^ {2}\right), \quad \sigma_ {r _ {n}} ^ {2} = \operatorname {e n c} _ {\sigma_ {r} ^ {2}, \phi} \left(x _ {n} ^ {c}, y _ {n} ^ {c}\right), \quad \Sigma_ {z, 0} = \operatorname {d i a g} \left(\sigma_ {z, 0} ^ {2}\right). \tag {16}
+$$
+
+This yields a factorized posterior, i.e.,
+
+$$
+q _ {\phi} (z | \mathcal {D} ^ {c}) = \mathcal {N} (z | \mu_ {z}, \operatorname {d i a g} (\sigma_ {z} ^ {2})) , \tag {17}
+$$
+
+with
+
+$$
+\sigma_ {z} ^ {2} = \left[ \left(\sigma_ {z, 0} ^ {2}\right) ^ {\ominus} + \sum_ {n = 1} ^ {N} \left(\sigma_ {r _ {n}} ^ {2}\right) ^ {\ominus} \right] ^ {\ominus}, \tag {18a}
+$$
+
+$$
+\mu_ {z} = \mu_ {z, 0} + \sigma_ {z} ^ {2} \odot \sum_ {n = 1} ^ {N} \left(r _ {n} - \mu_ {z, 0}\right) \oslash \left(\sigma_ {r _ {n}} ^ {2}\right). \tag {18b}
+$$
+
+Here, $\ominus$, $\odot$, and $\oslash$ denote element-wise inversion, product, and division, respectively. This is the result Eq. (8) from the main part of this paper.
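Since Eqs. (18) are simply Eqs. (15) specialized to diagonal covariances, the equivalence is easy to check numerically. The following sketch uses random placeholders for the encoder outputs:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d_z = 5, 3
r = rng.normal(size=(N, d_z))                    # latent observations r_n
sigma_r2 = rng.uniform(0.5, 2.0, size=(N, d_z))  # diagonal observation variances
mu_z0, sigma_z02 = np.zeros(d_z), np.ones(d_z)   # prior moments, Eq. (13)

# Factorized posterior moments, Eqs. (18a)/(18b).
sigma_z2 = 1.0 / (1.0 / sigma_z02 + np.sum(1.0 / sigma_r2, axis=0))
mu_z = mu_z0 + sigma_z2 * np.sum((r - mu_z0) / sigma_r2, axis=0)

# Cross-check against the full-covariance result, Eqs. (15a)/(15b),
# with all covariances diagonal.
S0_inv = np.diag(1.0 / sigma_z02)
S_z = np.linalg.inv(S0_inv + sum(np.diag(1.0 / s) for s in sigma_r2))
m_z = mu_z0 + S_z @ sum(np.diag(1.0 / s) @ (rn - mu_z0)
                        for s, rn in zip(sigma_r2, r))
assert np.allclose(np.diag(S_z), sigma_z2) and np.allclose(m_z, mu_z)
```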
+
+# 7.2 DISCUSSION OF VI LIKELIHOOD APPROXIMATION
+
+To highlight the limitations of the VI approximation, we note that decoder networks of models employing the PB or the MC likelihood approximation are provided with the same context information at training and at test time: in both cases, the latent variable, passed to the decoder either in the form of latent samples $z$ (for MC) or in the form of the parameters $\mu_z$, $\sigma_z^2$ of the latent distribution (for PB), is conditioned only on the context set $\mathcal{D}^c$. In contrast, in the variational approximation Eq. (2), the expectation is w.r.t. $q_{\phi}$ conditioned on the union of the context set $\mathcal{D}^c$ and the target set $\mathcal{D}^t$. As $\mathcal{D}^t$ is not available at test time, this introduces a mismatch between how the model is trained
+
+and how it is used at test time. Indeed, the decoder is trained on samples from $q_{\phi}(z|\mathcal{D}^c \cup \mathcal{D}^t)$ but evaluated on samples from $q_{\phi}(z|\mathcal{D}^c)$. This is not a serious problem when the model is evaluated on context sets large enough to allow accurate approximations of the true latent posterior distribution. Small context sets, however, usually contain too little information to infer $z$ reliably. Consequently, the distributions $q_{\phi}(z|\mathcal{D}^c)$ and $q_{\phi}(z|\mathcal{D}^c \cup \mathcal{D}^t)$ typically differ significantly in this regime. Hence, incentivizing the decoder to yield meaningful predictions on small context sets requires intricate and potentially expensive additional sampling procedures to choose suitable target sets $\mathcal{D}^t$ during training. As a corner case, we point out that it is not possible to train the decoder on samples from the latent prior, because the right-hand side of Eq. (2) vanishes for $\mathcal{D}^c = \mathcal{D}^t = \varnothing$.
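The mismatch can be illustrated with a scalar toy model. The helper names and the standard-normal-prior, unit-observation-variance assumptions below are ours, chosen only to make the Gaussian conditioning explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

def ba_posterior_1d(r):
    # Scalar BA posterior with standard-normal prior and unit latent
    # observation variances: precision 1 + N, mean = var * sum(r_n).
    var = 1.0 / (1.0 + len(r))
    return var * np.sum(r), var

def kl_gauss(m1, v1, m2, v2):
    # KL( N(m1, v1) || N(m2, v2) ) for scalar Gaussians.
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

M = 64                      # hypothetical target-set size
obs = rng.normal(size=256)  # placeholder latent observations r_n
mismatch = {}
for N in (1, 4, 16, 64):
    m_c, v_c = ba_posterior_1d(obs[:N])        # q(z | D^c)
    m_t, v_t = ba_posterior_1d(obs[:N + M])    # q(z | D^c u D^t)
    mismatch[N] = kl_gauss(m_t, v_t, m_c, v_c)
# mismatch[N] quantifies the train/test gap; it is typically large for
# small N and shrinks as the context set grows.
```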
+
+# 7.3 SELF-ATTENTIVE ENCODER ARCHITECTURES
+
+Kim et al. (2019) propose to use attention mechanisms to improve the quality of NP-based regression. In general, given a set of key-value pairs $\{(x_n,y_n)\}_{n = 1}^N$, $x_{n}\in \mathbb{R}^{d_{x}}$, $y_{n}\in \mathbb{R}^{d_{y}}$, and a query $x^{*}\in \mathbb{R}^{d_{x}}$, an attention mechanism $\mathcal{A}$ produces a weighted sum of the values, with the weights being computed from the keys and the query:
+
+$$
+\mathcal{A}\left(\left\{(x_{n}, y_{n})\right\}_{n=1}^{N}, x^{*}\right) = \sum_{n=1}^{N} w\left(x_{n}, x^{*}\right) y_{n}. \tag{19}
+$$
+
+There are several types of attention mechanisms proposed in the literature (Vaswani et al., 2017), each defining a specific form of the weights. Laplace attention adjusts the weights according to the spatial distance of keys and query:
+
+$$
+w_{\mathrm{L}}\left(x_{n}, x^{*}\right) \propto \exp\left(-\|x_{n} - x^{*}\|_{1}\right). \tag{20}
+$$
+
+Similarly, dot-product attention computes
+
+$$
+w_{\mathrm{DP}}\left(x_{n}, x^{*}\right) \propto \exp\left(x_{n}^{T} x^{*} / \sqrt{d_{x}}\right). \tag{21}
+$$
+
+A more complex mechanism is multihead attention, which employs a set of $3H$ learned linear mappings $\{\mathcal{L}_h^K\}_{h=1}^H$ , $\{\mathcal{L}_h^V\}_{h=1}^H$ , $\{\mathcal{L}_h^Q\}_{h=1}^H$ , where $H$ is a hyperparameter. For each $h$ , these mappings are applied to keys, values, and queries, respectively. Subsequently, dot-product attention is applied to the set of transformed key-value pairs and the transformed query. The resulting $H$ values are then again combined by a further learned linear mapping $\mathcal{L}^O$ to obtain the final result.
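To make the weighting schemes concrete, the weighted sums of Eqs. (19)-(21) can be sketched in a few lines of NumPy. This is a toy sketch, not the attention implementation used in the experiments; the function names are our own, and we normalize the weights to sum to one, as suggested by the proportionality signs in Eqs. (20) and (21).

```python
import numpy as np

def laplace_attention(keys, values, query):
    # Eq. (19) with Laplace weights, Eq. (20): w ∝ exp(-||x_n - x*||_1)
    logits = -np.sum(np.abs(keys - query), axis=-1)   # (N,)
    w = np.exp(logits - logits.max())                 # numerically stabilized
    w = w / w.sum()                                   # normalize weights to sum to 1
    return w @ values                                 # weighted sum of the values

def dot_product_attention(keys, values, query):
    # Eq. (19) with scaled dot-product weights, Eq. (21)
    logits = keys @ query / np.sqrt(keys.shape[-1])   # (N,)
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return w @ values

rng = np.random.default_rng(0)
keys = rng.standard_normal((5, 3))     # N = 5 keys x_n in R^3
values = rng.standard_normal((5, 2))   # N = 5 values y_n in R^2
query = rng.standard_normal(3)
out = laplace_attention(keys, values, query)   # a point in R^2
```

Multihead attention then applies `dot_product_attention` to $H$ linearly transformed copies of keys, values, and query, and linearly combines the $H$ results.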
+
+Self-attention (SA) is defined by setting the set of queries equal to the set of keys. Therefore, SA again produces a set of $N$ weighted values. Combining SA with an NP-encoder, i.e., applying SA to the set $\{(f_x(x_n), r_n)\}_{n=1}^N$ of inputs $x_n$ and corresponding latent observations $r_n$ (where we also consider a possible nonlinear transformation $f_x$ of the inputs) and subsequently applying MA, yields an interesting baseline for our proposed BA. Indeed, similar to BA, SA computes a weighted sum of the latent observations $r_n$. Note, however, that SA weighs each latent observation according to some form of spatial relationship of the corresponding input with all other latent observations in the context set. In contrast, BA's weight for a given latent observation is based only on features computed from the context tuple corresponding to this very latent observation, which makes it possible to incorporate an estimate of the amount of information contained in the context tuple into the aggregation (cf. Sec. 4.1). This leads to several computational advantages of BA over SA: (i) SA scales quadratically in the number $N$ of context tuples, as it has to be evaluated on all $N^2$ pairs of context tuples. In contrast, BA scales linearly with $N$. (ii) BA allows for efficient incremental updates when context data arrives sequentially (cf. Eq. (9)), while SA does not: it requires storing and encoding the whole context set $\mathcal{D}^c$ at once and subsequently aggregating the whole set of resulting (SA-weighted) latent observations.
+
+The results in Tab. 6, Sec. 5 show that multihead SA leads to significant improvements in predictive performance compared to vanilla MA. Therefore, a combination of BA with self-attentive encoders seems promising in situations where computational disadvantages can be accepted in favour of increased predictive performance. Note that BA relies on a second encoder output $\sigma_{r_n}^2$ (in addition to the latent observation $r_n$) which assesses the information content in each context tuple $(x_n, y_n)$. As each SA-weighted $r_n$ is informed by the other latent observations in the context set, one would also have to process the set of $\sigma_{r_n}^2$ in a manner consistent with the SA weighting. We leave such a combination of SA and BA for future research.
+
+Table 7: Comparison of the predictive log-likelihood of NP-based architectures with two simple GP-based baselines: (i) Vanilla GP, which optimizes the hyperparameters individually on each target task and ignores the source data, and (ii) Multi-task GP, which optimizes one set of hyperparameters on all source tasks and uses them without further adaptation on the target tasks. Both GP implementations use RBF kernels. As in the main text, we average performance over context sets with sizes $N \in \{0, \dots, 64\}$ for RBF GP and $N \in \{0, \dots, 20\}$ for the other experiments. Multi-task GP constitutes the optimal model (assuming it fits the hyperparameters perfectly) for the RBF GP experiment, which explains its superior performance. On the Quadratic 1D experiment, Multi-task GP still performs better than the other methods, as this function class exhibits a relatively low degree of variability. In contrast, on more complex experiments like Quadratic 3D and the Furuta dynamics, none of the GP variants is able to produce meaningful results given the small budget of at most 20 context points, while NP-based methods produce predictions of high quality, as they incorporate the source data more efficiently.
+
+| | NPs with MC-loss (BA) | NPs with MC-loss (MA) | GP (Vanilla) | GP (Multi-task) |
+| --- | --- | --- | --- | --- |
+| RBF GP | 1.62 ± 0.05 | 1.07 ± 0.05 | 1.96 | 2.99 |
+| Quadratic 1D, L = 64 | 1.71 ± 0.23 | 1.27 ± 0.06 | -1.56 | 2.11 |
+| Quadratic 3D, L = 128 | -1.79 ± 0.07 | -2.14 ± 0.05 | -472.76 | -173.78 |
+| Furuta Dynamics | 8.25 ± 0.33 | 7.55 ± 0.24 | -6.16 | -2.47 |
+
+# 7.4 NEURAL PROCESS-BASED MODELS IN THE CONTEXT OF SCALABLE PROBABILISTIC REGRESSION
+
+We discuss in more detail how NP-based models relate to other existing methods for (scalable) probabilistic regression, such as (multi-task) GPs (Rasmussen and Williams, 2005; Bardenet et al., 2013; Yogatama and Mann, 2014; Golovin et al., 2017), Bayesian neural networks (BNNs) (MacKay, 1992; Gal and Ghahramani, 2016), and DeepGPs (Damianou and Lawrence, 2013).
+
+NPs are motivated in Garnelo et al. (2018a;b), Kim et al. (2019), as well as in our Sec. 1, as models which combine the computational efficiency of neural networks with well-calibrated uncertainty estimates (like those of GPs). Indeed, NPs scale linearly in the number $N$ of context and $M$ of target data points, i.e., like $\mathcal{O}(N + M)$ , while GPs scale like $\mathcal{O}(N^3 + M^2)$ . Furthermore, NPs are shown to exhibit well-calibrated uncertainty estimates. In this sense, NPs can be counted as members of the family of scalable probabilistic regression methods.
+
+A central aspect of NP training which distinguishes NPs from a range of standard methods is that they are trained in a multi-task fashion (cf. Sec. 3). This means that NPs rely on data from a set of related source tasks from which they automatically learn powerful priors and the ability to adapt quickly to unseen target tasks. This multi-task training procedure of NPs scales linearly in the number $L$ of source tasks, which makes it possible to train these architectures on large amounts of source data. Applying GPs in such a multi-task setting can be challenging, especially for large numbers of source tasks. Similarly, BNNs as well as DeepGPs are in their vanilla forms specifically designed for the single-task setting. Therefore, GPs, BNNs, and DeepGPs are not directly applicable in the NP multi-task setting, which is why they are typically not considered as baselines for NP-based models, as discussed in (Kim et al., 2019).
+
+The experiments presented in Garnelo et al. (2018a;b) and Kim et al. (2019) focus mainly on evaluating NPs in the context of few-shot probabilistic regression, i.e., on demonstrating the data-efficiency of NPs on the target task after training on data from a range of source tasks. In contrast, the application of NPs in situations with large (> 1000) numbers of context/target points per task has to the best of our knowledge not yet been investigated in detail in the literature. Furthermore, it has not been studied how to apply NPs in situations where only a single or very few source tasks are available. The focus of our paper is a clear-cut comparison of the performance of our BA with traditional MA in the context of NP-based models. Therefore, we also consider experiments similar to those presented in (Garnelo et al., 2018a;b; Kim et al., 2019) and leave further comparisons with existing methods for (multi-task) probabilistic regression for future work.
+
+Nevertheless, to illustrate this discussion, we provide two simple GP-based baseline methods: (i) a vanilla GP, which optimizes the hyperparameters on each target task individually and does not use
+
+the source data, and (ii) a naive but easily interpretable example of a multi-task GP, which optimizes one set of hyperparameters on all source tasks and uses it for predictions on the target tasks without further adaptation. The results in Tab. 7 show that those GP-based models can only compete with NPs on function classes where either the inductive bias as given by the kernel functions fits the data well (RBF GP), or on function classes which exhibit a relatively low degree of variability (Quadratic 1D). On more complex function classes, NPs produce predictions of much better quality, as they incorporate the source data more efficiently.
+
+# 7.5 EXPERIMENTAL DETAILS
+
+We provide details about the data sets and the experimental setup used in our experiments in Sec. 5.
+
+# 7.5.1 DATA GENERATION
+
+In our experiments, we use several classes of functions to evaluate the architectures under consideration. To generate training data from these function classes, we sample $L$ random tasks (as described in Sec. 5), and $N_{\mathrm{tot}}$ random input locations $x$ for each task. For each minibatch of training tasks, we uniformly sample a context set size $N \in \{n_{\mathrm{min}}, \dots, n_{\mathrm{max}}\}$ and use a random subset of $N$ data points from each task as context sets $\mathcal{D}^c$ . The remaining $M = N_{\mathrm{tot}} - N$ data points are used as the target sets $\mathcal{D}^t$ (cf. App. 7.5.3 for the special case of the VI likelihood approximation). Tab. 8 provides details about the data generation process.
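The context/target split described above can be sketched as follows for a single task; the function name and the quadratic placeholder targets are our own, and the sketch covers one task rather than a full minibatch.

```python
import numpy as np

def split_context_target(x, y, n_min, n_max, rng):
    # Sample a context set size N ~ U{n_min, ..., n_max} and use a random
    # subset of N points as the context set; the remaining
    # M = N_tot - N points form the target set.
    n_tot = x.shape[0]
    n_ctx = int(rng.integers(n_min, n_max + 1))
    perm = rng.permutation(n_tot)
    ctx, tgt = perm[:n_ctx], perm[n_ctx:]
    return (x[ctx], y[ctx]), (x[tgt], y[tgt])

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(128, 1))   # N_tot = 128 input locations
y = x**2                                    # placeholder targets for one task
(x_c, y_c), (x_t, y_t) = split_context_target(x, y, 0, 20, rng)
```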
+
+GP Samples. We sample one-dimensional functions $f: \mathbb{R} \to \mathbb{R}$ from GP priors with three different stationary kernel functions as proposed by Gordon et al. (2020).
+
+A radial basis function (RBF) kernel with lengthscale $l = 1.0$:
+
+$$
+k_{\mathrm{RBF}}(r) \equiv \exp(-0.5 r^{2}). \tag{22}
+$$
+
+A weakly periodic kernel:
+
+$$
+k_{\mathrm{WP}}(r) \equiv \exp\left(-2\sin(0.5 r)^{2} - 0.125 r^{2}\right). \tag{23}
+$$
+
+A Matern-5/2 kernel with lengthscale $l = 0.25$:
+
+$$
+k_{\mathrm{M}5/2}(r) \equiv \left(1 + \frac{\sqrt{5} r}{0.25} + \frac{5 r^{2}}{3 \cdot 0.25^{2}}\right) \exp\left(-\frac{\sqrt{5} r}{0.25}\right). \tag{24}
+$$
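As a minimal NumPy sketch, the three kernels of Eqs. (22)-(24) and the sampling of one function from the corresponding GP prior can be written as below. The function names, the jitter term `1e-6`, and the use of $r = x - x'$ (with $|r|$ inside the Matern-5/2 kernel, where it enters linearly) are our own choices for illustration.

```python
import numpy as np

def k_rbf(r):
    # Eq. (22): RBF kernel, lengthscale l = 1.0
    return np.exp(-0.5 * r**2)

def k_wp(r):
    # Eq. (23): weakly periodic kernel
    return np.exp(-2.0 * np.sin(0.5 * r)**2 - 0.125 * r**2)

def k_m52(r, l=0.25):
    # Eq. (24): Matern-5/2 kernel, lengthscale l = 0.25
    a = np.sqrt(5.0) * np.abs(r) / l
    return (1.0 + a + 5.0 * r**2 / (3.0 * l**2)) * np.exp(-a)

# draw one function sample from the GP prior at inputs x ~ U(-2, 2)
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-2.0, 2.0, size=64))
K = k_rbf(x[:, None] - x[None, :])          # Gram matrix from r = x - x'
f = rng.multivariate_normal(np.zeros(64), K + 1e-6 * np.eye(64))
```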
+
+Quadratic Functions. We consider two classes of quadratic functions. The first class $f^{Q,1D} : \mathbb{R} \to \mathbb{R}$ is defined on a one-dimensional domain and parametrized by three parameters $a, b, c \in \mathbb{R}$ :
+
+$$
+f^{Q,\mathrm{1D}}(x) \equiv a^{2}(x + b)^{2} + c. \tag{25}
+$$
+
+The second class $f^{Q,3\mathrm{D}}: \mathbb{R}^3 \to \mathbb{R}$ is defined on a three-dimensional domain and also parametrized by three parameters $a, b, c \in \mathbb{R}$ :
+
+$$
+f^{Q,\mathrm{3D}}\left(x_{1}, x_{2}, x_{3}\right) \equiv 0.5 a\left(x_{1}^{2} + x_{2}^{2} + x_{3}^{2}\right) + b\left(x_{1} + x_{2} + x_{3}\right) + 3c. \tag{26}
+$$
+
+This function class was proposed in Perrone et al. (2018).
+
+For both function classes we add Gaussian noise with standard deviation $\sigma_{n}$ to the evaluations, cf. Tab. 8.
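For concreteness, the two quadratic function classes of Eqs. (25) and (26), with the 1D parameter ranges of Tab. 8 and the noise level $\sigma_n = 0.01$, can be sketched as follows (the function names are our own):

```python
import numpy as np

def f_q1d(x, a, b, c):
    # Eq. (25): 1D quadratic, parametrized by a, b, c
    return a**2 * (x + b)**2 + c

def f_q3d(x, a, b, c):
    # Eq. (26): 3D quadratic; x has shape (..., 3)
    return 0.5 * a * np.sum(x**2, axis=-1) + b * np.sum(x, axis=-1) + 3.0 * c

# one task of the 1D class with the parameter ranges of Tab. 8
rng = np.random.default_rng(0)
a, b, c = rng.uniform(-0.5, 1.5), rng.uniform(-0.9, 0.9), rng.uniform(-1.0, 1.0)
x = rng.uniform(-1.0, 1.0, size=128)
y = f_q1d(x, a, b, c) + 0.01 * rng.standard_normal(128)   # sigma_n = 0.01
```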
+
+Furuta Pendulum Dynamics. We consider a function class obtained by integrating the non-linear equations of motion governing the dynamics of a Furuta pendulum (Furuta et al., 1992; Cazzolato and Prime, 2011) for a time span of $\Delta t = 0.1$ s. More concretely, we consider the mapping
+
+$$
+\Theta(t) \rightarrow \Theta(t + \Delta t) - \Theta(t), \tag{27}
+$$
+
+Table 8: Input spaces and parameters used to generate data for training and testing the architectures discussed in the main part of this paper. $\mathrm{U}(a,b)$ denotes the uniform distribution on the interval $[a,b]$, and, likewise, $\mathrm{U}\{a,\dots,a+n\}$ denotes the uniform distribution on the set $\{a, a+1, \ldots, a+n\}$.
+
+| Symbol | Description | Value/Sampling distribution |
+| --- | --- | --- |
+| **GP Samples** | | |
+| $x$ | Input | $\mathrm{U}(-2.0, +2.0)$ |
+| $N_{\mathrm{tot}}$ | Number of data points per task | 128 |
+| $\{n_{\mathrm{min}}, \dots, n_{\mathrm{max}}\}$ | Context set sizes during training | $\{3, \dots, 50\}$ |
+| **1D Quadratic Functions** | | |
+| $x$ | Input | $\mathrm{U}(-1.0, +1.0)$ |
+| $a$ | Parameter | $\mathrm{U}(-0.5, +1.5)$ |
+| $b$ | Parameter | $\mathrm{U}(-0.9, +0.9)$ |
+| $c$ | Parameter | $\mathrm{U}(-1.0, +1.0)$ |
+| $\sigma_n$ | Noise standard deviation | 0.01 |
+| $N_{\mathrm{tot}}$ | Number of data points per task | 128 |
+| $\{n_{\mathrm{min}}, \dots, n_{\mathrm{max}}\}$ | Context set sizes during training | $\mathrm{U}\{0, \dots, 20\}$ |
+| **3D Quadratic Functions** | | |
+| $x_1, x_2, x_3$ | Inputs | $\mathrm{U}(-1.0, +1.0)$ |
+| $a, b, c$ | Parameters | $\mathrm{U}(+0.1, +10.0)$ |
+| $\sigma_n$ | Noise standard deviation | 0.01 |
+| $N_{\mathrm{tot}}$ | Number of data points per task | 128 |
+| $\{n_{\mathrm{min}}, \dots, n_{\mathrm{max}}\}$ | Context set sizes during training | $\mathrm{U}\{0, \dots, 20\}$ |
+| **Furuta Dynamics** | | |
+| $\theta_{\mathrm{arm}}, \theta_{\mathrm{pend}}$ | Input angles | $\mathrm{U}(0.0, 2\pi\,\mathrm{rad})$ |
+| $\dot{\theta}_{\mathrm{arm}}, \dot{\theta}_{\mathrm{pend}}$ | Input angular velocities | $\mathrm{U}(-2\pi\,\mathrm{rad}/0.5\,\mathrm{s}, +2\pi\,\mathrm{rad}/0.5\,\mathrm{s})$ |
+| $m_{\mathrm{arm}}$ | Mass arm | $\mathrm{U}(6.0 \cdot 10^{-2}\,\mathrm{kg}, 6.0 \cdot 10^{-1}\,\mathrm{kg})$ |
+| $m_{\mathrm{pend}}$ | Mass pendulum | $\mathrm{U}(1.5 \cdot 10^{-2}\,\mathrm{kg}, 1.5 \cdot 10^{-1}\,\mathrm{kg})$ |
+| $l_{\mathrm{arm}}$ | Length arm | $\mathrm{U}(5.6 \cdot 10^{-2}\,\mathrm{m}, 5.6 \cdot 10^{-1}\,\mathrm{m})$ |
+| $L_{\mathrm{arm}}$ | Distance joint arm to mass arm | $\mathrm{U}(1.0 \cdot 10^{-1}\,\mathrm{m}, 3.0 \cdot 10^{-1}\,\mathrm{m})$ |
+| $L_{\mathrm{pend}}$ | Distance joint pendulum to mass pendulum | $\mathrm{U}(1.0 \cdot 10^{-1}\,\mathrm{m}, 3.0 \cdot 10^{-1}\,\mathrm{m})$ |
+| $b_{\mathrm{arm}}$ | Damping constant arm | $\mathrm{U}(2.0 \cdot 10^{-5}\,\mathrm{Nms}, 2.0 \cdot 10^{-3}\,\mathrm{Nms})$ |
+| $b_{\mathrm{pend}}$ | Damping constant pendulum | $\mathrm{U}(5.6 \cdot 10^{-5}\,\mathrm{Nms}, 5.6 \cdot 10^{-3}\,\mathrm{Nms})$ |
+| $\sigma_{\tau,\mathrm{arm}}$ | Action noise standard dev. arm | 0.5 Nm |
+| $\sigma_{\tau,\mathrm{pend}}$ | Action noise standard dev. pendulum | 0.5 Nm |
+| $N_{\mathrm{tot}}$ | Number of data points per task | 256 |
+| $\{n_{\mathrm{min}}, \dots, n_{\mathrm{max}}\}$ | Context set sizes during training | $\mathrm{U}\{0, \dots, 20\}$ |
+| **2D Image Completion MNIST** | | |
+| $x_1, x_2$ | Input pixel locations | $\mathrm{U}\{0, \dots, 27\}$ (scaled to $[0, 1]$) |
+| $N_{\mathrm{tot}}$ | Number of data points per task | $28 \cdot 28$ |
+| $\{n_{\mathrm{min}}, \dots, n_{\mathrm{max}}\}$ | Context set sizes during training | $\mathrm{U}\{0, \dots, 28 \cdot 28 / 2\}$ |
+
+where $\Theta = \left[\theta_{\mathrm{arm}}(t),\theta_{\mathrm{pend}}(t),\dot{\theta}_{\mathrm{arm}}(t),\dot{\theta}_{\mathrm{pend}}(t)\right]^T$ denotes the four-dimensional vector describing the dynamical state of the Furuta pendulum. The Furuta pendulum is parametrized by seven parameters (two masses, three lengths, two damping constants) as detailed in Tab. 8. During training, we provide $L = 64$ tasks, corresponding to 64 different parameter configurations. We consider the free system and generate noise by applying random torques to the joints of the arm and the pendulum at each integration time step ($\Delta t_{\mathrm{Euler}} = 0.001\,\mathrm{s}$), drawn from Gaussian distributions with standard deviations $\sigma_{\tau,\mathrm{arm}}$ and $\sigma_{\tau,\mathrm{pend}}$, respectively.
+
+
+(a) $\mathrm{BA} + \mathrm{PB}$
+
+
+(b) $\mathrm{MA} + \mathrm{det}$ . (CNP).
+
+
+(c) $\mathrm{BA} + \mathrm{VI}$ , $\mathrm{BA} + \mathrm{MC}$ .
+
+
+(d) MA + VI (LP-NP), MA + MC.
+Figure 4: Model architectures used for our experiments in Sec. 5. For the ANP architecture we refer the reader to Kim et al. (2019). Orange rectangles denote MLPs. Blue rectangles denote aggregation operations. Variables in green rectangles are sampled from normal distributions with parameters given by the incoming nodes. To arrive at a fair comparison, we optimize all MLP architectures, the latent space dimensionality $d_{z}$ , as well as the Adam learning rate, individually for all model architectures and all experiments, cf. App. 7.5.3.
+
+2D Image Completion. For this task, we use the MNIST database of $28 \times 28$ images of handwritten digits (LeCun and Cortes, 2010), and define 2D functions mapping pixel locations $x_{1}, x_{2} \in \{0, \dots, 27\}$ (scaled to the unit square) to the corresponding pixel intensities $y \in \{0, \dots, 255\}$ (scaled to the unit interval), cf. Tab. 8. One training task corresponds to one image drawn randomly from the training set (consisting of 60000 images) and for evaluation we use a subset of the test set (consisting of 10000 images).
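The construction of one such image-completion task can be sketched as follows; the helper name is our own, and a random array stands in for an actual MNIST digit to keep the sketch self-contained:

```python
import numpy as np

def image_to_task(img):
    # Map each pixel location (x1, x2), scaled to the unit square, to its
    # intensity y, scaled to the unit interval; one image = one task.
    h, w = img.shape
    x1, x2 = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x = np.stack([x1.ravel(), x2.ravel()], axis=-1) / (h - 1)
    y = img.ravel().astype(np.float64) / 255.0
    return x, y

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(28, 28))   # stand-in for an MNIST digit
x, y = image_to_task(img)                   # 28 * 28 = 784 data points
```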
+
+# 7.5.2 MODEL ARCHITECTURES
+
+We provide the detailed architectures used for the experiments in Sec. 5 in Fig. 4. For ANP we use multihead cross attention and refer the reader to Kim et al. (2019) for details about the architecture.
+
+# 7.5.3 HYPERPARAMETERS AND HYPERPARAMETER OPTIMIZATION
+
+To arrive at a fair comparison of our BA with MA, it is imperative to use optimal model architectures for each aggregation method and likelihood approximation under consideration. Therefore, we optimize the number of hidden layers and the number of hidden units per layer of each encoder and decoder MLP (as shown in Fig. 4), individually for each model architecture and each experiment. For the ANP, we also optimize the multihead attention MLPs. We further optimize the latent space dimensionality $d_{z}$ and the learning rate of the Adam optimizer. For this hyperparameter optimization, we use the Optuna framework (Akiba et al., 2019) with TPE Sampler and Hyperband pruner (Li et al., 2017). We consistently use a minibatch size of 16. Further, we use $S = 10$ latent samples to evaluate
+
+the MC likelihood approximation during training. To evaluate the VI likelihood approximation, we sample target set sizes between $N_{\mathrm{tot}}$ and $N$ in each training epoch, cf. Tab. 8.
+
+# 7.5.4 EVALUATION PROCEDURE
+
+To evaluate the performance of the various model architectures, we generate $L = 256$ unseen test tasks with target sets $\mathcal{D}_{\ell}^{t}$ consisting of $M = 256$ data points each and compute the average posterior predictive log-likelihood $\frac{1}{L}\frac{1}{M}\sum_{\ell = 1}^{L}\log p\left(y_{\ell,1:M}^{t} \mid x_{\ell,1:M}^{t}, \mathcal{D}_{\ell}^{c}, \theta\right)$, given context sets $\mathcal{D}_{\ell}^{c}$ of size $N$.
+
+Depending on the architecture, we approximate the posterior predictive log-likelihood according to:
+
+- For BA + PB likelihood approximation:
+
+$$
+\frac{1}{L}\frac{1}{M}\sum_{\ell=1}^{L}\sum_{m=1}^{M}\log p\left(y_{\ell,m}^{t} \mid x_{\ell,m}^{t}, \mu_{z,\ell}, \sigma_{z,\ell}^{2}, \theta\right). \tag{28}
+$$
+
+- For MA + deterministic loss (= CNP):
+
+$$
+\frac{1}{L}\frac{1}{M}\sum_{\ell=1}^{L}\sum_{m=1}^{M}\log p\left(y_{\ell,m}^{t} \mid x_{\ell,m}^{t}, \bar{r}_{\ell}, \theta\right). \tag{29}
+$$
+
+- For architectures employing sampling-based likelihood approximations (VI, MC-LL) we report the joint log-likelihood over all data points in a test set, i.e.
+
+$$
+\begin{aligned} &\frac{1}{L}\frac{1}{M}\sum_{\ell=1}^{L}\log\int q_{\phi}(z_{\ell} \mid \mathcal{D}_{\ell}^{c})\prod_{m=1}^{M} p\left(y_{\ell,m}^{t} \mid x_{\ell,m}^{t}, z_{\ell}, \theta\right) dz_{\ell} \quad (30)\\ &\approx \frac{1}{L}\frac{1}{M}\sum_{\ell=1}^{L}\log\frac{1}{S}\sum_{s=1}^{S}\prod_{m=1}^{M} p\left(y_{\ell,m}^{t} \mid x_{\ell,m}^{t}, z_{\ell,s}, \theta\right) \quad (31)\\ &= -\frac{1}{M}\log S + \frac{1}{L}\frac{1}{M}\sum_{\ell=1}^{L}\operatorname{logsumexp}_{s=1}^{S}\left(\sum_{m=1}^{M}\log p\left(y_{\ell,m}^{t} \mid x_{\ell,m}^{t}, z_{\ell,s}, \theta\right)\right), \quad (32) \end{aligned}
+$$
+
+where $z_{\ell,s} \sim q_{\phi}(z \mid \mathcal{D}_{\ell}^{c})$. We employ $S = 25$ latent samples.
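The logsumexp estimator above can be sketched in NumPy as follows. The function name and the synthetic log-likelihood array are our own, and the naive evaluation of the sample average of per-task likelihoods is included only as a numerical cross-check of the stabilized form:

```python
import numpy as np

def mc_joint_log_likelihood(log_p):
    # log_p has shape (L, S, M) and holds log p(y_{l,m} | x_{l,m}, z_{l,s}, theta)
    # for S latent samples per task; returns the average of Eq. (32).
    L, S, M = log_p.shape
    per_sample = log_p.sum(axis=-1)              # sum over m: shape (L, S)
    mx = per_sample.max(axis=-1, keepdims=True)  # stabilized logsumexp over s
    lse = mx[:, 0] + np.log(np.exp(per_sample - mx).sum(axis=-1))
    return (lse - np.log(S)).mean() / M          # = -log(S)/M + (1/LM) sum_l lse_l

# cross-check against the naive (unstabilized) sample average on synthetic values
rng = np.random.default_rng(0)
log_p = rng.standard_normal((4, 25, 8))          # L = 4, S = 25, M = 8
naive = np.mean(np.log(np.mean(np.prod(np.exp(log_p), axis=-1), axis=-1))) / 8
```

The stabilized form matters in practice: the per-sample joint log-likelihoods summed over $M$ points easily under- or overflow when exponentiated directly.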
+
+To compute the log-likelihood values given in tables, we additionally average over various context set sizes $N$ as detailed in the main part of this paper.
+
+We report the mean posterior predictive log-likelihood computed in this way over 10 training runs with different random seeds, together with $95\%$ confidence intervals.
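As an illustration, a mean with a $95\%$ confidence interval over 10 runs can be computed as follows. The normal-approximation interval ($1.96$ times the standard error) is our assumption, as the interval construction is not specified here, and the run values are made up:

```python
import numpy as np

def mean_ci95(values):
    # Mean and half-width of a normal-approximation 95% confidence
    # interval over independent training runs.
    values = np.asarray(values, dtype=float)
    m = values.mean()
    sem = values.std(ddof=1) / np.sqrt(len(values))  # standard error of the mean
    return m, 1.96 * sem

runs = [1.62, 1.55, 1.70, 1.58, 1.66, 1.61, 1.59, 1.68, 1.63, 1.60]
m, hw = mean_ci95(runs)   # report as m ± hw
```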
+
+Table 9: Relative evaluation runtimes and numbers of parameters of the optimized network architectures on the GP tasks. The deterministic methods (PB, det.) are much more efficient regarding evaluation runtime, as they require only one forward pass per prediction, while the sampling-based approaches (VI, MC) require multiple forward passes (each corresponding to one latent sample) to compute their predictions. We use $S = 25$ latent samples, as described in App. 7.5.4. Furthermore, BA tends to require less complex encoder and decoder network architectures compared to MA, because it represents a more efficient mechanism to propagate information from the context set to the latent state.
+
+| | | PB/det. BA | PB/det. MA (CNP) | VI BA | VI MA (LP-NP) | MC BA | MC MA |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| RBF GP | Runtime | 1 | 1.4 | 18 | 25 | 32 | 27 |
+| | #Parameters | 72k | 96k | 63k | 77k | 122k | 153k |
+| Weakly Periodic GP | Runtime | 1 | 1.4 | 11 | 10 | 22 | 15 |
+| | #Parameters | 51k | 87k | 48k | 72k | 87k | 89k |
+| Matern-5/2 GP | Runtime | 1 | 1.1 | 6.5 | 11 | 15 | 19 |
+| | #Parameters | 53k | 100k | 32k | 35k | 108k | 104k |
+
+Table 10: Posterior predictive mean squared error (MSE) on all experiments presented in this paper. We average over the same context set sizes as used to compute the posterior predictive log-likelihood, cf. Sec. 5, and again use $S = 25$ latent samples to compute the mean prediction of sampling-based methods. Our BA consistently improves predictive performance compared to MA not only in terms of likelihood (as shown in Sec. 5), but also in terms of MSE. Furthermore, while ANP tends to perform poorly in terms of likelihood (cf. Sec. 5), its MSE is greatly improved by the attention mechanism.
+
+| | PB/det. BA | PB/det. MA (CNP) | VI BA | VI MA (LP-NP) | MC BA | MC MA | ANP (MA + Attention) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| RBF GP | 0.0623 ± 0.0009 | 0.0687 ± 0.0010 | 0.0736 ± 0.0005 | 0.0938 ± 0.0036 | 0.0637 ± 0.0007 | 0.0741 ± 0.0012 | 0.0550 ± 0.0009 |
+| Weakly Periodic GP | 0.0679 ± 0.0007 | 0.0761 ± 0.0014 | 0.0879 ± 0.0017 | 0.1326 ± 0.0518 | 0.0677 ± 0.0008 | 0.0832 ± 0.0009 | 0.0592 ± 0.0009 |
+| Matern-5/2 GP | 0.2452 ± 0.0088 | 0.3021 ± 0.0035 | 0.3702 ± 0.0100 | 0.6292 ± 0.1077 | 0.2321 ± 0.0019 | 0.5166 ± 0.1438 | 0.1890 ± 0.0012 |
+| Quadratics 1D, L = 64 | 0.1447 ± 0.0095 | 0.1513 ± 0.0091 | 0.1757 ± 0.0128 | 0.1833 ± 0.0154 | 0.1473 ± 0.0107 | 0.1636 ± 0.0082 | 0.1330 ± 0.0037 |
+| Quadratics 3D, L = 128 | 190.5 ± 1.4 | 195.4 ± 1.5 | 253.1 ± 18.0 | 278.1 ± 40.5 | 196.8 ± 2.6 | 206.7 ± 5.3 | 192.5 ± 2.7 |
+| Furuta Dynamics | 0.1742 ± 0.0092 | 0.1989 ± 0.0095 | 0.2269 ± 0.0088 | 0.2606 ± 0.0165 | 0.1758 ± 0.0124 | 0.1977 ± 0.0154 | 0.1516 ± 0.0073 |
+| 2D Image Completion | 0.0348 ± 0.0010 | 0.0417 ± 0.0026 | - | - | - | - | 0.0215 ± 0.0003 |
+
+# 7.6 ADDITIONAL EXPERIMENTAL RESULTS
+
+We provide additional experimental results accompanying the experiments presented in Sec. 5:
+
+- Results for relative evaluation runtimes and numbers of parameters of the optimized network architectures on the full GP suite of experiments, cf. Tab. 9.
+- The posterior predictive mean squared error on all experiments, cf. Tab. 10.
+- The context-size dependent results for the predictive posterior log-likelihood for the 1D and 3D Quadratic experiments, the Furuta dynamics experiment, as well as the 2D image completion experiment, cf. Fig. 5.
+- More detailed plots of the predictions on one-dimensional experiments (1D Quadratics (Figs. 6, 7), RBF-GP, (Figs. 8, 9), Weakly Periodic GP (Figs. 10, 11), and Matern-5/2 GP (Figs. 12, 13)).
+
+
+Figure 5: Posterior predictive log-likelihood as a function of the context set size $N$ for the 1D and 3D Quadratic experiments, the Furuta dynamics experiment, as well as the 2D image completion experiment.
+
+
+
+
+
+
+
+
+(a) $\mathrm{BA} + \mathrm{PB}$
+
+
+(b) MA + det. (CNP)
+
+
+(c) $\mathrm{BA} + \mathrm{VI}$
+
+
+(d) MA + VI (LP-NP)
+
+
+(e) ANP
+
+
+(f) $\mathrm{BA} + \mathrm{MC}$ -LL
+(g) MA + MC-LL
+
+Figure 6: Predictions on two instances (dashed lines) of the 1D quadratic function class, given $N = 3$ context data points (circles). We plot mean and standard deviation (solid line, shaded area) predictions together with 10 function samples (for deterministic methods we employ AR sampling).
+
+
+(a) $\mathrm{BA} + \mathrm{PB}$
+
+
+(b) MA + det. (CNP)
+
+
+(c) $\mathrm{BA} + \mathrm{VI}$
+
+
+(d) MA + VI (LP-NP)
+
+
+(e) ANP
+
+
+(f) $\mathrm{BA} + \mathrm{MC}$ -LL
+(g) MA + MC-LL
+
+Figure 7: Predictions on two instances (dashed lines) of the 1D quadratic function class, given $N = 19$ context data points (circles). We plot mean and standard deviation (solid line, shaded area) predictions together with 10 function samples (for deterministic methods we employ AR sampling).
+
+
+(a) $\mathrm{BA} + \mathrm{PB}$
+
+
+(b) MA + det. (CNP)
+
+
+(c) $\mathrm{BA} + \mathrm{VI}$
+
+
+(d) MA + VI (LP-NP)
+
+
+(e) ANP
+
+
+(f) $\mathrm{BA} + \mathrm{MC}$ -LL
+
+
+(g) MA + MC-LL
+Figure 8: Predictions on two instances (dashed lines) of the RBF GP function class, given $N = 20$ context data points (circles). We plot mean and standard deviation (solid line, shaded area) predictions together with 10 function samples (for deterministic methods we employ AR sampling).
+
+
+(a) $\mathrm{BA} + \mathrm{PB}$
+
+
+(b) MA + det. (CNP)
+
+
+(c) $\mathrm{BA} + \mathrm{VI}$
+
+
+(d) MA + VI (LP-NP)
+
+
+(e) ANP
+
+
+(f) $\mathrm{BA} + \mathrm{MC}$ -LL
+
+
+(g) MA + MC-LL
+Figure 9: Predictions on two instances (dashed lines) of the RBF GP function class, given $N = 60$ context data points (circles). We plot mean and standard deviation (solid line, shaded area) predictions together with 10 function samples (for deterministic methods we employ AR sampling).
+
+
+(a) $\mathrm{BA} + \mathrm{PB}$
+
+
+(b) MA + det. (CNP)
+
+
+(c) $\mathrm{BA} + \mathrm{VI}$
+
+
+(d) MA + VI (LP-NP)
+
+
+(e) ANP
+
+
+(f) $\mathrm{BA} + \mathrm{MC}$ -LL
+
+
+(g) MA + MC-LL
+Figure 10: Predictions on two instances (dashed lines) of the Weakly Periodic GP function class, given $N = 20$ context data points (circles). We plot mean and standard deviation (solid line, shaded area) predictions together with 10 function samples (for deterministic methods we employ AR sampling).
+
+
+(a) $\mathrm{BA} + \mathrm{PB}$
+
+
+(b) MA + det. (CNP)
+
+
+(c) $\mathrm{BA} + \mathrm{VI}$
+
+
+(d) MA + VI (LP-NP)
+
+
+(e) ANP
+
+
+(f) $\mathrm{BA} + \mathrm{MC}$ -LL
+
+
+(g) MA + MC-LL
+Figure 11: Predictions on two instances (dashed lines) of the Weakly Periodic GP function class, given $N = 60$ context data points (circles). We plot mean and standard deviation (solid line, shaded area) predictions together with 10 function samples (for deterministic methods we employ AR sampling).
+
+
+(a) $\mathrm{BA} + \mathrm{PB}$
+
+
+(b) MA + det. (CNP)
+
+
+(c) $\mathrm{BA} + \mathrm{VI}$
+
+
+(d) MA + VI (LP-NP)
+
+
+(e) ANP
+
+
+(f) $\mathrm{BA} + \mathrm{MC}$ -LL
+
+
+(g) MA + MC-LL
+Figure 12: Predictions on two instances (dashed lines) of the Matern-5/2 GP function class, given $N = 20$ context data points (circles). We plot mean and standard deviation (solid line, shaded area) predictions together with 10 function samples (for deterministic methods we employ AR sampling).
+
+
+(a) $\mathrm{BA} + \mathrm{PB}$
+
+
+(b) MA + det. (CNP)
+
+
+(c) $\mathrm{BA} + \mathrm{VI}$
+
+
+(d) MA + VI (LP-NP)
+
+
+(e) ANP
+
+
+(f) $\mathrm{BA} + \mathrm{MC}$ -LL
+
+
+(g) MA + MC-LL
+Figure 13: Predictions on two instances (dashed lines) of the Matern-5/2 GP function class, given $N = 60$ context data points (circles). We plot mean and standard deviation (solid line, shaded area) predictions together with 10 function samples (for deterministic methods we employ AR sampling).
\ No newline at end of file
diff --git a/bayesiancontextaggregationforneuralprocesses/images.zip b/bayesiancontextaggregationforneuralprocesses/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d60c0821b15f93c6c91e99dbff24d35602996357
--- /dev/null
+++ b/bayesiancontextaggregationforneuralprocesses/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33259975dba522ff6c0fcc71c1571122f1060e2ff729902ca643a5afeb8c7492
+size 1344378
diff --git a/bayesiancontextaggregationforneuralprocesses/layout.json b/bayesiancontextaggregationforneuralprocesses/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..44590c22efb86f178f172b3c88c27442257d5792
--- /dev/null
+++ b/bayesiancontextaggregationforneuralprocesses/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d583e49897c211d46a6edef6da6c9ca8d3d18cf4fffa1d8738f87f2e337060a
+size 1017862
diff --git a/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/d67a40fe-301f-48a0-9127-71cd1e06517e_content_list.json b/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/d67a40fe-301f-48a0-9127-71cd1e06517e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8cf99f852d673ea58e76fde25108331f4d17f7c9
--- /dev/null
+++ b/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/d67a40fe-301f-48a0-9127-71cd1e06517e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97cfd77c1e243498c275ace8508f888dc9e176c4e0d3a95c2bf4a67766f54fce
+size 168149
diff --git a/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/d67a40fe-301f-48a0-9127-71cd1e06517e_model.json b/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/d67a40fe-301f-48a0-9127-71cd1e06517e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..cf36d38060f3df7ad0a7d625eaa09b6c2f059d0f
--- /dev/null
+++ b/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/d67a40fe-301f-48a0-9127-71cd1e06517e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b5f7cda3d2ed8b1241c45f7fd044938e6b3e9f0e7d3566aa4100a9b3dbc09c52
+size 201514
diff --git a/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/d67a40fe-301f-48a0-9127-71cd1e06517e_origin.pdf b/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/d67a40fe-301f-48a0-9127-71cd1e06517e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5c6d7b3c7db4de8fd67281a48d72bfff12666d34
--- /dev/null
+++ b/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/d67a40fe-301f-48a0-9127-71cd1e06517e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e681774d87418930858408576eeaae875d3bb210f2bef9bb06f75d45ecc7152
+size 6668584
diff --git a/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/full.md b/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c778c369265fe12612a70f15b5ef9982b4b5f1f5
--- /dev/null
+++ b/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/full.md
@@ -0,0 +1,783 @@
+# BAYESIAN FEW-SHOT CLASSIFICATION WITH ONE-VS-EACH POLYA-GAMMA AUGMENTED GAUSSIAN PROCESSES
+
+Jake C. Snell
+
+University of Toronto
+
+Vector Institute
+
+jsnell@cs.toronto.edu
+
+Richard Zemel
+
+University of Toronto
+
+Vector Institute
+
+Canadian Institute for Advanced Research
+
+zemel@cs.toronto.edu
+
+# ABSTRACT
+
+Few-shot classification (FSC), the task of adapting a classifier to unseen classes given a small labeled dataset, is an important step on the path toward human-like machine learning. Bayesian methods are well-suited to tackling the fundamental issue of overfitting in the few-shot scenario because they allow practitioners to specify prior beliefs and update those beliefs in light of observed data. Contemporary approaches to Bayesian few-shot classification maintain a posterior distribution over model parameters, which is slow and requires storage that scales with model size. Instead, we propose a Gaussian process classifier based on a novel combination of Pólya-Gamma augmentation and the one-vs-each softmax approximation (Titsias, 2016) that allows us to efficiently marginalize over functions rather than model parameters. We demonstrate improved accuracy and uncertainty quantification on both standard few-shot classification benchmarks and few-shot domain transfer tasks.
+
+# 1 INTRODUCTION
+
+Few-shot classification (FSC) is a rapidly growing area of machine learning that seeks to build classifiers able to adapt to novel classes given only a few labeled examples. It is an important step towards machine learning systems that can successfully handle challenging situations such as personalization, rare classes, and time-varying distribution shift. The shortage of labeled data in FSC leads to uncertainty over the parameters of the model, known as model uncertainty or epistemic uncertainty. If model uncertainty is not handled properly in the few-shot setting, there is a significant risk of overfitting. In addition, FSC is increasingly being used for risk-averse applications such as medical diagnosis (Prabhu, 2019) and human-computer interfaces (Wang et al., 2019) where it is important for a few-shot classifier to know when it is uncertain.
+
+Bayesian methods maintain a distribution over model parameters and thus provide a natural framework for capturing this inherent model uncertainty. In a Bayesian approach, a prior distribution is first placed over the parameters of a model. After data is observed, the posterior distribution over parameters is computed using Bayesian inference. This elegant treatment of model uncertainty has led to a surge of interest in Bayesian approaches to FSC that infer a posterior distribution over the weights of a neural network (Finn et al., 2018; Yoon et al., 2018; Ravi & Beatson, 2019).
+
+Although conceptually appealing, there are several practical obstacles to applying Bayesian inference directly to the weights of a neural network. Bayesian neural networks (BNNs) are expensive from both a computational and memory perspective. Moreover, specifying meaningful priors in parameter space is known to be difficult due to the complex relationship between weights and network outputs (Sun et al., 2019).
+
+Gaussian processes (GPs) instead maintain a distribution over functions rather than model parameters. The prior is directly specified by a mean and covariance function, which may be parameterized by deep neural networks. When used with Gaussian likelihoods, GPs admit closed form expressions for the posterior and predictive distributions. They exchange the computational drawbacks of BNNs
+
+for cubic scaling with the number of examples. In FSC, where the number of examples is small, this is often an acceptable trade-off.
+
+When applying GPs to classification with a softmax likelihood, the non-conjugacy of the GP prior renders posterior inference intractable. Many approximate inference methods have been proposed to circumvent this, including variational inference and expectation propagation. In this paper we investigate a particularly promising class of approaches that augment the GP model with a set of auxiliary random variables, such that when they are marginalized out the original model is recovered (Albert & Chib, 1993; Girolami & Rogers, 2006; Linderman et al., 2015). Such augmentation-based approaches typically admit efficient Gibbs sampling procedures for generating posterior samples which when combined with Fisher's identity (Douc et al., 2014) can be used to optimize the parameters of the mean and covariance functions.
+
+In particular, augmentation with Pólya-Gamma random variables (Polson et al., 2013) makes inference tractable in logistic models. On its own, this handles only binary classification, but in this paper we show how to extend Pólya-Gamma augmentation to multiple classes by using the one-vs-each softmax approximation (Titsias, 2016), which can be expressed as a product of logistic sigmoid factors. We further show that the one-vs-each approximation can be interpreted as a composite likelihood (Lindsay, 1988; Varin et al., 2011), a connection which to our knowledge has not been made in the literature.
+
+In this work, we make several contributions:
+
+- We show how the one-vs-each softmax approximation (Titsias, 2016) can be interpreted as a composite likelihood consisting of pairwise conditional terms.
+- We propose a novel GP classification method that combines the one-vs-each softmax approximation with Pólya-Gamma augmentation for tractable inference.
+- We demonstrate competitive classification accuracy of our method on standard FSC benchmarks and challenging domain transfer settings.
+- We propose several new benchmarks for uncertainty quantification in FSC, including calibration, robustness to input noise, and out-of-episode detection.
+- We demonstrate improved uncertainty quantification of our method on the proposed benchmarks relative to standard few-shot baselines.
+
+# 2 RELATED WORK
+
+Our work is related to both GP methods for handling non-conjugate classification likelihoods and Bayesian approaches to few-shot classification. We summarize relevant work here.
+
+# 2.1 GP CLASSIFICATION
+
+Non-augmentation approaches. There are several classes of approaches for applying Gaussian processes to classification. The most straightforward method, known as least squares classification (Rifkin & Klautau, 2004), treats class labels as real-valued observations and performs inference with a Gaussian likelihood. The Laplace approximation (Williams & Barber, 1998) constructs a Gaussian approximate posterior centered at the posterior mode. Variational approaches (Titsias, 2009; Matthews et al., 2016) maximize a lower bound on the log marginal likelihood. In expectation propagation (Minka, 2001; Kim & Ghahramani, 2006; Hernandez-Lobato & Hernandez-Lobato, 2016), local Gaussian approximations to the likelihood are fitted iteratively to minimize KL divergence from the true posterior.
+
+Augmentation approaches. Augmentation-based approaches introduce auxiliary random variables such that the original model is recovered when they are marginalized out. Girolami & Rogers (2006) propose a Gaussian augmentation for multinomial probit regression. Linderman et al. (2015) utilize Pólya-Gamma augmentation (Polson et al., 2013) and a stick-breaking construction to decompose a multinomial distribution into a product of binomials. Galy-Fajou et al. (2020) propose a logistic-softmax likelihood for classification and use Gamma and Poisson augmentation in addition to Pólya-Gamma augmentation in order to perform inference.
+
+# 2.2 FEW-SHOT CLASSIFICATION
+
+Meta-learning. A common approach to FSC is meta-learning, which seeks to learn a strategy to update neural network parameters when faced with a novel learning task. The Meta-learner LSTM (Ravi & Larochelle, 2017) learns a meta-level LSTM to recurrently output a new set of parameters for a base learner. MAML (Finn et al., 2017) learns initializations of deep neural networks that perform well on task-specific losses after one or a few steps of gradient descent by backpropagating through the gradient descent procedure itself. LEO (Rusu et al., 2019) performs meta-learning in a learned low-dimensional latent space from which the parameters of a classifier are generated.
+
+Metric learning. Metric learning approaches learn distances such that input examples can be meaningfully compared. Siamese Networks (Koch, 2015) learn a shared embedding network along with a distance layer for computing the probability that two examples belong to the same class. Matching Networks (Vinyals et al., 2016) use nonparametric classification in the form of attention over nearby examples, which can be interpreted as a form of soft $k$ -nearest neighbors in the embedding space. Prototypical Networks (Snell et al., 2017) make predictions based on distances to nearest class centroids. Relation Networks (Sung et al., 2018) instead learn a more complex neural network distance function on top of the embedding layer.
+
+Bayesian Few-shot Classification. More recently, Bayesian FSC approaches that attempt to infer a posterior over task-specific parameters have appeared. Grant et al. (2018) reinterpret MAML as an approximate empirical Bayes algorithm and propose LLAMA, which optimizes the Laplace approximation to the marginal likelihood. Bayesian MAML (Yoon et al., 2018) instead uses Stein Variational Gradient Descent (SVGD) (Liu & Wang, 2016) to approximate the posterior distribution over model parameters. VERSA (Gordon et al., 2019) uses amortized inference networks to obtain an approximate posterior distribution over task-specific parameters. ABML (Ravi & Beatson, 2019) uses a few steps of Bayes by Backprop (Blundell et al., 2015) on the support set to produce an approximate posterior over network parameters. CNAPs (Requeima et al., 2019) modulate task-specific Feature-wise Linear Modulation (FiLM) (Perez et al., 2018) layer parameters as the output of an adaptation network that takes the support set as input.
+
+GPs for Few-shot Learning. There have been relatively few works applying GPs to few-shot learning. Tossou et al. (2020) consider Gaussian processes in the context of few-shot regression with Gaussian likelihoods. Deep Kernel Transfer (DKT) (Patacchiola et al., 2020) uses Gaussian processes with least squares classification to perform few-shot classification and learns covariance functions parameterized by deep neural networks. More recently, Titsias et al. (2020) apply GPs to meta-learning by maximizing the mutual information between the query set and a latent representation of the support set.
+
+# 3 BACKGROUND
+
+In this section we first review Pólya-Gamma augmentation for binary classification and the one-vs-each approximation before introducing our method in Section 4.
+
+# 3.1 PÓLYA-GAMMA AUGMENTATION
+
+The Pólya-Gamma augmentation scheme was originally introduced to address Bayesian inference in logistic models (Polson et al., 2013). Suppose we have a vector of logits $\psi \in \mathbb{R}^N$ with corresponding binary labels $\mathbf{y} \in \{0,1\}^N$ . The logistic likelihood is
+
+$$
+p (\mathbf {y} | \psi) = \prod_ {i = 1} ^ {N} \sigma \left(\psi_ {i}\right) ^ {y _ {i}} \left(1 - \sigma \left(\psi_ {i}\right)\right) ^ {1 - y _ {i}} = \prod_ {i = 1} ^ {N} \frac {\left(e ^ {\psi_ {i}}\right) ^ {y _ {i}}}{1 + e ^ {\psi_ {i}}}, \tag {1}
+$$
+
+where $\sigma (\cdot)$ is the logistic sigmoid function. Let the prior over $\psi$ be Gaussian: $p(\psi) = \mathcal{N}(\psi |\boldsymbol {\mu},\boldsymbol {\Sigma})$ . In Bayesian inference, we are interested in the posterior $p(\psi |\mathbf{y})\propto p(\mathbf{y}|\boldsymbol {\psi})p(\boldsymbol {\psi})$ , but the form of (1) does not admit analytic computation of the posterior due to non-conjugacy. The main idea of Pólya-Gamma augmentation is to introduce auxiliary random variables $\omega$ to the likelihood such that the original model is recovered when $\omega$ is marginalized out: $p(\mathbf{y}|\boldsymbol {\psi}) = \int p(\boldsymbol {\omega})p(\mathbf{y}|\boldsymbol {\psi},\boldsymbol {\omega})d\boldsymbol{\omega}$ .
+
+Conditioned on $\omega \sim \mathrm{PG}(1,0)$ , the batch likelihood is proportional to a diagonal Gaussian (see Section A for a full derivation):
+
+$$
+p (\mathbf {y} | \psi , \boldsymbol {\omega}) \propto \prod_ {i = 1} ^ {N} e ^ {- \omega_ {i} \psi_ {i} ^ {2} / 2} e ^ {\kappa_ {i} \psi_ {i}} \propto \mathcal {N} \left(\boldsymbol {\Omega} ^ {- 1} \boldsymbol {\kappa} | \psi , \boldsymbol {\Omega} ^ {- 1}\right), \tag {2}
+$$
+
+where $\kappa_{i} = y_{i} - 1 / 2$ and $\Omega = \mathrm{diag}(\omega)$ . The conditional distribution over $\psi$ given $\mathbf{y}$ and $\omega$ is now tractable:
+
+$$
+p (\boldsymbol {\psi} | \mathbf {y}, \boldsymbol {\omega}) \propto p (\mathbf {y} | \boldsymbol {\psi}, \boldsymbol {\omega}) p (\boldsymbol {\psi}) \propto \mathcal {N} (\boldsymbol {\psi} | \tilde {\boldsymbol {\Sigma}} (\boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\mu} + \boldsymbol {\kappa}), \tilde {\boldsymbol {\Sigma}}), \tag {3}
+$$
+
+where $\tilde{\Sigma} = (\Sigma^{-1} + \Omega)^{-1}$ . The conditional distribution of $\omega$ given $\psi$ and $\mathbf{y}$ can also be easily computed:
+
+$$
+p \left(\omega_ {i} \mid y _ {i}, \psi_ {i}\right) \propto \operatorname {P G} \left(\omega_ {i} \mid 1, 0\right) e ^ {- \omega_ {i} \psi_ {i} ^ {2} / 2} \propto \operatorname {P G} \left(\omega_ {i} \mid 1, \psi_ {i}\right), \tag {4}
+$$
+
+where the last expression follows from the exponential tilting property of Pólya-Gamma random variables. This suggests a Gibbs sampling procedure in which iterates $\boldsymbol{\omega}^{(t)}\sim p(\boldsymbol {\omega}|\mathbf{y},\boldsymbol{\psi}^{(t - 1)})$ and $\boldsymbol{\psi}^{(t)}\sim p(\boldsymbol{\psi} |\mathbf{y},\boldsymbol{\omega}^{(t)})$ are drawn sequentially until the Markov chain reaches its stationary distribution, which is the joint posterior $p(\psi ,\omega |\mathbf{y})$ . Fortunately, efficient samplers for the Pólya-Gamma distribution have been developed (Windle et al., 2014) to facilitate this.
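
The two conditional updates above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the PG(1, c) draws use a truncated version of the infinite Gamma-sum representation from Polson et al. (2013), and the toy prior, truncation level, and helper names are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pg(c, trunc=200):
    """Approximate PG(1, c) draws via the truncated Gamma-sum representation."""
    c = np.atleast_1d(np.abs(c))
    k = np.arange(1, trunc + 1)
    g = rng.gamma(1.0, 1.0, size=(len(c), trunc))            # g_k ~ Gamma(1, 1)
    denom = (k - 0.5) ** 2 + (c[:, None] / (2.0 * np.pi)) ** 2
    return (g / denom).sum(axis=1) / (2.0 * np.pi ** 2)

def gibbs_step(psi, y, mu, Sigma):
    """One Gibbs sweep: omega | psi as in Eq. (4), then psi | y, omega as in Eq. (3)."""
    omega = sample_pg(psi)                                   # PG(1, psi_i) for each i
    kappa = y - 0.5
    Sigma_inv = np.linalg.inv(Sigma)
    Sigma_hat = np.linalg.inv(Sigma_inv + np.diag(omega))    # (Sigma^-1 + Omega)^-1
    mean = Sigma_hat @ (Sigma_inv @ mu + kappa)
    return rng.multivariate_normal(mean, Sigma_hat), omega

# Toy binary problem: N = 4 logits under a correlated Gaussian prior.
y = np.array([1.0, 1.0, 0.0, 0.0])
mu = np.zeros(4)
Sigma = 0.5 * np.eye(4) + 0.5                                # constant 0.5 off-diagonal
psi = np.zeros(4)
for _ in range(100):
    psi, omega = gibbs_step(psi, y, mu, Sigma)
```

A quick check on the truncated sampler: PG(1, 0) has mean 1/4, which the empirical average of many draws should approximate.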
+
+# 3.2 ONE-VS-EACH APPROXIMATION TO SOFTMAX
+
+The one-vs-each (OVE) approximation (Titsias, 2016) was formulated as a lower bound to the softmax likelihood in order to handle classification over a large number of output classes, where computation of the normalizing constant is prohibitive. We employ the OVE approximation not to deal with extreme classification, but rather due to its compatibility with Pólya-Gamma augmentation, as we shall soon see. The one-vs-each approximation can be derived by first rewriting the softmax likelihood as follows:
+
+$$
+p (y = i \mid \mathbf {f}) \triangleq \frac {e ^ {f _ {i}}}{\sum_ {j} e ^ {f _ {j}}} = \frac {1}{1 + \sum_ {j \neq i} e ^ {- \left(f _ {i} - f _ {j}\right)}}, \tag {5}
+$$
+
+where $\mathbf{f} \triangleq (f_1, \ldots, f_C)^\top$ are the logits. Since in general $\prod_k (1 + \alpha_k) \geq (1 + \sum_k \alpha_k)$ for $\alpha_k \geq 0$ , the softmax likelihood (5) can be bounded as follows:
+
+$$
+p (y = i \mid \mathbf {f}) \geq \prod_ {j \neq i} \frac {1}{1 + e ^ {- \left(f _ {i} - f _ {j}\right)}} = \prod_ {j \neq i} \sigma \left(f _ {i} - f _ {j}\right), \tag {6}
+$$
+
+which is the OVE lower bound. This expression avoids the normalizing constant and factorizes into a product of pairwise sigmoids, which is amenable to Pólya-Gamma augmentation for tractable inference.
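
As a throwaway numeric sanity check (not from the paper), the bound (6) can be compared against the exact softmax (5) on random logits:

```python
import numpy as np

def softmax_prob(f, i):
    """Exact softmax likelihood p(y = i | f) from Eq. (5), stabilized."""
    e = np.exp(f - f.max())
    return e[i] / e.sum()

def ove_bound(f, i):
    """One-vs-each bound from Eq. (6): prod over j != i of sigma(f_i - f_j)."""
    return np.prod([1.0 / (1.0 + np.exp(-(f[i] - f[j])))
                    for j in range(len(f)) if j != i])

rng = np.random.default_rng(0)
for _ in range(1000):
    f = rng.normal(size=5)
    i = int(rng.integers(5))
    assert ove_bound(f, i) <= softmax_prob(f, i) + 1e-12
```

For $C = 2$ the bound is tight: $\sigma(f_1 - f_2)$ is exactly the softmax, which gives some intuition for why the two objectives can share an optimum.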
+
+# 4 ONE-VS-EACH PÓLYA-GAMMA GPS
+
+In this section, we first show how the one-vs-each (OVE) approximation can be interpreted as a pairwise composite likelihood. We then introduce our method for GP-based Bayesian few-shot classification, which brings together OVE and Pólya-Gamma augmentation in a novel combination.
+
+# 4.1 OVE AS A COMPOSITE LIKELIHOOD
+
+Titsias (2016) showed that the OVE approximation shares the same global optimum as the softmax maximum likelihood, suggesting a close relationship between the two. We show here that in fact OVE can be interpreted as a pairwise composite likelihood version of the softmax. Composite likelihoods (Lindsay, 1988; Varin et al., 2011) are a type of approximate likelihood often employed when the exact likelihood is intractable or otherwise difficult to compute. Given a collection of marginal or conditional events $\{E_1,\dots ,E_K\}$ and parameters $\mathbf{f}$ , a composite likelihood is defined as:
+
+$$
+\mathcal {L} _ {\mathrm {C L}} (\mathbf {f} \mid y) \triangleq \prod_ {k = 1} ^ {K} \mathcal {L} _ {k} (\mathbf {f} \mid y) ^ {w _ {k}}, \tag {7}
+$$
+
+where $\mathcal{L}_k(\mathbf{f} \mid y) \propto p(y \in E_k \mid \mathbf{f})$ and $w_k \geq 0$ are arbitrary weights.
+
+In order to make the connection to OVE, it will be useful to let the one-hot encoding of the label $y$ be denoted as $\mathbf{y} \in \{0,1\}^C$ . Define a set of $C(C - 1)/2$ pairwise conditional events $E_{ij}$ , one each for all pairs of classes $i \neq j$ , indicating the event that the model's output matches the target label for classes $i$ and $j$ conditioned on all the other classes:
+
+$$
+p (\mathbf {y} \in E _ {i j} \mid \mathbf {f}) \triangleq p \left(y _ {i}, y _ {j} \mid \mathbf {y} _ {\neg i j}, \mathbf {f}\right), \tag {8}
+$$
+
+where $\neg ij$ denotes the set of classes not equal to either $i$ or $j$ . This expression resembles the pseudolikelihood (Besag, 1975), but instead of a single conditional event per output site, the expression in (8) considers all pairs of sites. Stoehr & Friel (2015) explored similar composite likelihood generalizations of the pseudolikelihood in the context of random fields.
+
+Now suppose that $y_{c} = 1$ for some class $c \notin \{i,j\}$ . Then $p(y_{i},y_{j}|\mathbf{y}_{\neg ij},\mathbf{f}) = 1$ due to the one-hot constraint. Otherwise either $y_{i} = 1$ or $y_{j} = 1$ . In this case, assume without loss of generality that $y_{i} = 1$ and $y_{j} = 0$ and thus
+
+$$
+p \left(y _ {i}, y _ {j} \mid \mathbf {y} _ {\neg i j}, \mathbf {f}\right) = \frac {e ^ {f _ {i}}}{e ^ {f _ {i}} + e ^ {f _ {j}}} = \sigma \left(f _ {i} - f _ {j}\right). \tag {9}
+$$
+
+The composite likelihood defined in this way with unit component weights is therefore
+
+$$
+\mathcal {L} _ {\mathrm {O V E}} (\mathbf {f} \mid \mathbf {y}) = \prod_ {i < j} p \left(y _ {i}, y _ {j} \mid \mathbf {y} _ {\neg i j}, \mathbf {f}\right) = \prod_ {i} \prod_ {j \neq i} \sigma \left(f _ {i} - f _ {j}\right) ^ {y _ {i}}. \tag {10}
+$$
+
+Alternatively, we may simply write $\mathcal{L}_{\mathrm{OVE}}(\mathbf{f}|y = i) = \prod_{j\neq i}\sigma (f_i - f_j)$ , which is identical to the OVE bound (6).
+
+# 4.2 GP CLASSIFICATION WITH THE OVE LIKELIHOOD
+
+We now turn our attention to GP classification. Suppose we have access to examples $\mathbf{X} \in \mathbb{R}^{N \times D}$ with corresponding one-hot labels $\mathbf{Y} \in \{0,1\}^{N \times C}$ , where $C$ is the number of classes. We consider the logits jointly as a single vector
+
+$$
+\mathbf {f} \triangleq \left(f _ {1} ^ {1}, \dots , f _ {N} ^ {1}, f _ {1} ^ {2}, \dots , f _ {N} ^ {2}, \dots , f _ {1} ^ {C}, \dots , f _ {N} ^ {C}\right) ^ {\top} \tag {11}
+$$
+
+and place an independent GP prior on the logits for each class: $f^c (\mathbf{x})\sim \mathcal{GP}(m(\mathbf{x}),k(\mathbf{x},\mathbf{x}'))$. Therefore we have $p(\mathbf{f}|\mathbf{X}) = \mathcal{N}(\mathbf{f}|\boldsymbol {\mu},\mathbf{K})$, where $\mu_i^c = m(\mathbf{x}_i)$ and $\mathbf{K}$ is block diagonal with $K_{ij}^{c} = k(\mathbf{x}_{i},\mathbf{x}_{j})$ for each block $\mathbf{K}^c$.
+
+The Pólya-Gamma integral identity used to derive (2) does not have a multi-class analogue and thus a direct application of the augmentation scheme to the softmax likelihood is nontrivial. Instead, we propose to directly replace the softmax with the OVE-based composite likelihood function from (10) with unit weights. The posterior over $\mathbf{f}$ when using OVE as the likelihood function can be expressed as:
+
+$$
+p (\mathbf {f} | \mathbf {X}, \mathbf {y}) \propto p (\mathbf {f} | \mathbf {X}) \prod_ {i = 1} ^ {N} \prod_ {c ^ {\prime} \neq y _ {i}} \sigma \left(f _ {i} ^ {y _ {i}} - f _ {i} ^ {c ^ {\prime}}\right), \tag {12}
+$$
+
+to which Pólya-Gamma augmentation can be applied as we show in the next section. Our motivation for using a composite likelihood therefore differs from the traditional motivation, which is to avoid the use of a likelihood function which is intractable to evaluate. Instead, we employ a composite likelihood because it makes posterior inference tractable when coupled with Pólya-Gamma augmentation.
+
+Prior work on Bayesian inference with composite likelihoods has shown that the composite posterior is consistent under fairly general conditions for correctly specified models (Miller, 2019) but can produce overly concentrated posteriors (Pauli et al., 2011; Ribatet et al., 2012) since each component likelihood event is treated as independent when in reality there may be significant dependencies. Nevertheless, we show in Section 5 that in practice our method exhibits competitive accuracy and strong calibration relative to baseline few-shot learning algorithms. We leave further theoretical analysis of the OVE composite posterior and its properties for future work.
+
+Compared to choices of likelihoods used by previous approaches, there are several reasons to prefer OVE. Relative to the Gaussian augmentation approach of Girolami & Rogers (2006), Pólya-Gamma augmentation has the benefit of fast mixing and the ability of a single value of $\omega$ to capture much of the marginal distribution over function values. The stick-breaking construction of Linderman et al. (2015) induces a dependence on the ordering of classes, which leads to undesirable asymmetry. Finally, the logistic-softmax likelihood of Galy-Fajou et al. (2020) requires three augmentations and careful learning of the mean function to avoid a priori underconfidence (see Section F.1 for more details).
+
+# 4.3 POSTERIOR INFERENCE VIA GIBBS SAMPLING
+
+We now describe how we perform tractable posterior inference in our model with Gibbs sampling. Define the matrix $\mathbf{A} \triangleq \mathrm{OVE-Matrix}(\mathbf{Y})$ to be a $CN \times CN$ sparse block matrix with $C$ row partitions and $C$ column partitions. Each block $\mathbf{A}_{cc'}$ is a diagonal $N \times N$ matrix defined as follows:
+
+$$
+\mathbf {A} _ {c c ^ {\prime}} \triangleq \operatorname {d i a g} \left(\mathbf {Y} _ {\cdot c ^ {\prime}}\right) - \mathbb {1} [ c = c ^ {\prime} ] \mathbf {I} _ {N}, \tag {13}
+$$
+
+where $\mathbf{Y}_{\cdot c'}$ denotes the $c'$th column of $\mathbf{Y}$ . Now the binary logit vector $\psi \triangleq \mathbf{A}\mathbf{f} \in \mathbb{R}^{CN}$ will have entries equal to $f_{i}^{y_{i}} - f_{i}^{c}$ for each unique combination of $c$ and $i$ , of which there are $CN$ in total. The OVE composite likelihood can now be written as $\mathcal{L}(\psi|\mathbf{Y}) = 2^{N}\prod_{j=1}^{NC}\sigma(\psi_{j})$ , where the $2^{N}$ term arises from the $N$ cases in which $\psi_{j} = 0$ due to comparing the ground truth logit with itself.
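
The construction of $\mathbf{A}$ and the resulting $\psi$ can be verified directly. The sketch below (the `ove_matrix` helper and the toy values are ours, not from the paper) builds $\mathbf{A}$ per Eq. (13) and checks that $\psi = \mathbf{A}\mathbf{f}$ has entries $f_i^{y_i} - f_i^c$, with exactly $N$ zero entries:

```python
import numpy as np

def ove_matrix(Y):
    """Build the CN x CN block matrix A of Eq. (13) from one-hot labels Y (N x C)."""
    N, C = Y.shape
    A = np.zeros((C * N, C * N))
    for c in range(C):
        for cp in range(C):
            # Each block is diagonal: diag(Y[:, c']) minus the identity when c == c'.
            A[c * N:(c + 1) * N, cp * N:(cp + 1) * N] = (
                np.diag(Y[:, cp]) - (c == cp) * np.eye(N)
            )
    return A

# Toy episode: N = 3 examples, C = 2 classes.
N, C = 3, 2
labels = np.array([0, 1, 0])
Y = np.eye(C)[labels]
F = np.array([[1.0, -1.0],     # F[i, c] = f_i^c
              [0.5,  2.0],
              [-0.3, 0.7]])
f = F.T.reshape(-1)            # stacked class-by-class, as in Eq. (11)
psi = ove_matrix(Y) @ f

# Entry for (class c, example i) should equal f_i^{y_i} - f_i^c.
expected = np.array([F[i, labels[i]] - F[i, c] for c in range(C) for i in range(N)])
assert np.allclose(psi, expected)
```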
+
+Analogous to (2), the likelihood of $\psi$ conditioned on $\omega$ and $\mathbf{Y}$ is proportional to a diagonal Gaussian:
+
+$$
+\mathcal {L} (\psi | \mathbf {Y}, \omega) \propto \prod_ {j = 1} ^ {N C} e ^ {- \omega_ {j} \psi_ {j} ^ {2} / 2} e ^ {\kappa_ {j} \psi_ {j}} \propto \mathcal {N} \left(\Omega^ {- 1} \kappa | \psi , \Omega^ {- 1}\right), \tag {14}
+$$
+
+where $\kappa_{j} = 1 / 2$ and $\Omega = \mathrm{diag}(\omega)$ . By exploiting the fact that $\psi = \mathbf{A}\mathbf{f}$ , we can express the likelihood in terms of $\mathbf{f}$ and write down the conditional composite posterior as follows:
+
+$$
+p (\mathbf {f} | \mathbf {X}, \mathbf {Y}, \boldsymbol {\omega}) \propto \mathcal {N} \left(\Omega^ {- 1} \boldsymbol {\kappa} \mid \mathbf {A f}, \Omega^ {- 1}\right) \mathcal {N} (\mathbf {f} \mid \boldsymbol {\mu}, \mathbf {K}) \propto \mathcal {N} (\mathbf {f} | \tilde {\boldsymbol {\Sigma}} \left(\mathbf {K} ^ {- 1} \boldsymbol {\mu} + \mathbf {A} ^ {\top} \boldsymbol {\kappa}\right), \tilde {\boldsymbol {\Sigma}}), \tag {15}
+$$
+
+where $\tilde{\Sigma} = (\mathbf{K}^{-1} + \mathbf{A}^{\top}\Omega \mathbf{A})^{-1}$, which is an expression remarkably similar to (3). Analogous to (4), the conditional distribution over $\omega$ given $\mathbf{f}$ and the data becomes $p(\boldsymbol{\omega} |\mathbf{Y},\mathbf{f}) = \mathrm{PG}(\boldsymbol{\omega} |\mathbf{1},\mathbf{A}\mathbf{f})$ .
+
+The primary computational bottleneck of posterior inference lies in sampling $\mathbf{f}$ from (15). Since $\tilde{\Sigma}$ is a $CN\times CN$ matrix, a naive implementation has complexity $\mathcal{O}(C^3 N^3)$ . By utilizing the matrix inversion lemma and Gaussian sampling techniques summarized in (Doucet, 2010), this can be brought down to $\mathcal{O}(CN^3)$ . Details may be found in Section B.
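
For reference, the naive $\mathcal{O}(C^3 N^3)$ version of this sampling step is direct to write down from Eq. (15). In the sketch below only the Gaussian algebra is being illustrated: $\mathbf{A}$ is stood in by a random matrix rather than the actual OVE-Matrix, and $\omega$ by arbitrary positive values, both hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_f_naive(K, mu, A, omega, kappa):
    """Naive draw from Eq. (15): f | X, Y, omega, cubic in the CN dimension."""
    K_inv = np.linalg.inv(K)
    Sigma_tilde = np.linalg.inv(K_inv + A.T @ np.diag(omega) @ A)
    mean = Sigma_tilde @ (K_inv @ mu + A.T @ kappa)
    return rng.multivariate_normal(mean, Sigma_tilde), mean, Sigma_tilde

# Toy dimensions: C*N = 6, block-diagonal prior K, and kappa_j = 1/2 as in Eq. (14).
CN = 6
K = np.kron(np.eye(2), 0.5 * np.eye(3) + 0.5)   # two identical 3x3 prior blocks
mu = np.zeros(CN)
A = rng.normal(size=(CN, CN))                   # stand-in for OVE-Matrix(Y)
omega = rng.gamma(1.0, 1.0, size=CN)            # stand-in for PG draws (positive)
kappa = np.full(CN, 0.5)

f, mean, Sigma_tilde = sample_f_naive(K, mu, A, omega, kappa)
```

The matrix-inversion-lemma speedup in Section B replaces the explicit $CN \times CN$ inverse here with $C$ block solves of size $N$.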
+
+# 4.4 LEARNING COVARIANCE HYPERPARAMETERS FOR FEW-SHOT CLASSIFICATION
+
+We now describe how we apply OVE Pólya-Gamma augmented GPs to few-shot classification. We assume the standard episodic few-shot setup in which one observes a labeled support set $S = (\mathbf{X}, \mathbf{Y})$ . Predictions must then be made for a query example $(\mathbf{x}_*, \mathbf{y}_*)$ . We consider a zero-mean GP prior over the class logits $\mathbf{f}^c(\mathbf{x}) \sim \mathcal{GP}(\mathbf{0}, k_\theta(\mathbf{x}, \mathbf{x}'))$ , where $\theta$ are learnable parameters of our covariance function. These could include traditional hyperparameters such as lengthscales or the weights of a deep neural network as in deep kernel learning (Wilson et al., 2016).
+
+We consider two objectives for learning hyperparameters of the covariance function: the marginal likelihood (ML) and the predictive likelihood (PL). Marginal likelihood measures the likelihood of the hyperparameters given the observed data and is intuitively appealing from a Bayesian perspective. On the other hand, many standard FSC methods optimize for predictive likelihood on the query set (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017). Both objectives marginalize over latent functions, thereby making full use of our Bayesian formulation.
+
+The details of these objectives and how we compute gradients can be found in Section C. Our learning algorithm for both marginal and predictive likelihood may be found in Section D. Details of computing the posterior predictive distribution $p(\mathbf{y}_*|\mathbf{x}_*,\mathbf{X},\mathbf{Y},\boldsymbol{\omega})$ may be found in Section E. Finally, details of our chosen "cosine" kernel may be found in Section H.
+
+# 5 EXPERIMENTS
+
+In this section, we present our results on few-shot classification both in terms of accuracy and uncertainty quantification. Additional results comparing the one-vs-each composite likelihood to the softmax, logistic softmax, and Gaussian likelihoods may be found in Section F.
+
+One of our aims is to compare methods based on uncertainty quantification. We therefore developed new benchmark evaluations and tasks: few-shot calibration, robustness, and out-of-episode detection. In order to empirically compare methods, we could not simply borrow the accuracy results from other papers, but instead needed to train each of these baselines ourselves. For all baselines except Bayesian MAML, ABML, and Logistic Softmax GP, we ran the code from Patacchiola et al. (2020) and verified that the accuracies matched their reported results closely. We have made PyTorch code for our experiments publicly available.
+
+# 5.1 FEW-SHOT CLASSIFICATION
+
+For our few-shot classification experiments, we follow the training and evaluation protocol of Patacchiola et al. (2020). We train both 1-shot and 5-shot versions of our model in four different settings: Caltech-UCSD Birds (CUB) (Wah et al., 2011), mini-ImageNet with the split proposed by Ravi & Larochelle (2017), as well as two cross-domain transfer tasks. The first transfer task entails training on mini-ImageNet and testing on CUB, and the second measures transfer from Omniglot (Lake et al., 2011) to EMNIST (Cohen et al., 2017). Experimental details and an overview of the baselines we used can be found in Section G. Classification results are shown in Tables 1 and 2. We find that our proposed Pólya-Gamma OVE GPs yield strong classification results, outperforming the baselines in five of the eight scenarios.
+
+Table 1: Average accuracy and standard deviation (percentage) on 5-way FSC. Baseline results (through DKT) are from Patacchiola et al. (2020). Evaluation is performed on 3,000 randomly generated test episodes. Standard deviation for the remaining methods are computed by averaging over 5 batches of 600 episodes with different random seeds. The best results are highlighted in bold.
+
+| Method | CUB 1-shot | CUB 5-shot | mini-ImageNet 1-shot | mini-ImageNet 5-shot |
+| --- | --- | --- | --- | --- |
+| Feature Transfer | 46.19 ± 0.64 | 68.40 ± 0.79 | 39.51 ± 0.23 | 60.51 ± 0.55 |
+| Baseline++ | 61.75 ± 0.95 | 78.51 ± 0.59 | 47.15 ± 0.49 | 66.18 ± 0.18 |
+| MatchingNet | 60.19 ± 1.02 | 75.11 ± 0.35 | 48.25 ± 0.65 | 62.71 ± 0.44 |
+| ProtoNet | 52.52 ± 1.90 | 75.93 ± 0.46 | 44.19 ± 1.30 | 64.07 ± 0.65 |
+| RelationNet | 62.52 ± 0.34 | 78.22 ± 0.07 | 48.76 ± 0.17 | 64.20 ± 0.28 |
+| MAML | 56.11 ± 0.69 | 74.84 ± 0.62 | 45.39 ± 0.49 | 61.58 ± 0.53 |
+| DKT + Cosine | 63.37 ± 0.19 | 77.73 ± 0.26 | 48.64 ± 0.45 | 62.85 ± 0.37 |
+| Bayesian MAML | 55.93 ± 0.71 | 72.87 ± 0.26 | 44.46 ± 0.30 | 62.60 ± 0.25 |
+| Bayesian MAML (Chaser) | 53.93 ± 0.72 | 71.16 ± 0.32 | 43.74 ± 0.46 | 59.23 ± 0.34 |
+| ABML | 48.80 ± 0.40 | 70.91 ± 0.32 | 40.88 ± 0.25 | 58.19 ± 0.17 |
+| Logistic Softmax GP + Cosine (ML) | 60.23 ± 0.54 | 74.58 ± 0.25 | 46.75 ± 0.20 | 59.93 ± 0.31 |
+| Logistic Softmax GP + Cosine (PL) | 60.07 ± 0.29 | 78.14 ± 0.07 | 47.05 ± 0.20 | 66.01 ± 0.25 |
+| OVE PG GP + Cosine (ML) [ours] | **63.98 ± 0.43** | 77.44 ± 0.18 | **50.02 ± 0.35** | 64.58 ± 0.31 |
+| OVE PG GP + Cosine (PL) [ours] | 60.11 ± 0.26 | **79.07 ± 0.05** | 48.00 ± 0.24 | **67.14 ± 0.23** |
+
+# 5.2 UNCERTAINTY QUANTIFICATION THROUGH CALIBRATION
+
+We next turn to uncertainty quantification, an important concern for few-shot classifiers. When used in safety-critical applications such as medical diagnosis, it is important for a machine learning system to defer when there is not enough evidence to make a decision. Even in non-critical applications, precise uncertainty quantification helps practitioners in the few-shot setting determine when a class has an adequate amount of labeled data or when more labels are required, and can facilitate active learning.
+
+Table 2: Average accuracy and standard deviation (percentage) on 5-way cross-domain FSC, with the same experimental setup as in Table 1. Baseline results (through DKT) are from (Patacchiola et al., 2020).
+
+| Method | Omniglot→EMNIST 1-shot | Omniglot→EMNIST 5-shot | mini-ImageNet→CUB 1-shot | mini-ImageNet→CUB 5-shot |
+| --- | --- | --- | --- | --- |
+| Feature Transfer | 64.22 ± 1.24 | 86.10 ± 0.84 | 32.77 ± 0.35 | 50.34 ± 0.27 |
+| Baseline++ | 56.84 ± 0.91 | 80.01 ± 0.92 | 39.19 ± 0.12 | **57.31 ± 0.11** |
+| MatchingNet | 75.01 ± 2.09 | 87.41 ± 1.79 | 36.98 ± 0.06 | 50.72 ± 0.36 |
+| ProtoNet | 72.04 ± 0.82 | 87.22 ± 1.01 | 33.27 ± 1.09 | 52.16 ± 0.17 |
+| RelationNet | 75.62 ± 1.00 | 87.84 ± 0.27 | 37.13 ± 0.20 | 51.76 ± 1.48 |
+| MAML | 72.68 ± 1.85 | 83.54 ± 1.79 | 34.01 ± 1.25 | 48.83 ± 0.62 |
+| DKT + Cosine | 73.06 ± 2.36 | **88.10 ± 0.78** | **40.22 ± 0.54** | 55.65 ± 0.05 |
+| Bayesian MAML | 63.94 ± 0.47 | 65.26 ± 0.30 | 33.52 ± 0.36 | 51.35 ± 0.16 |
+| Bayesian MAML (Chaser) | 55.04 ± 0.34 | 54.19 ± 0.32 | 36.22 ± 0.50 | 51.53 ± 0.43 |
+| ABML | 73.89 ± 0.24 | 87.28 ± 0.40 | 31.51 ± 0.32 | 47.80 ± 0.51 |
+| Logistic Softmax GP + Cosine (ML) | 62.91 ± 0.49 | 83.80 ± 0.13 | 36.41 ± 0.18 | 50.33 ± 0.13 |
+| Logistic Softmax GP + Cosine (PL) | 70.70 ± 0.36 | 86.59 ± 0.15 | 36.73 ± 0.26 | 56.70 ± 0.31 |
+| OVE PG GP + Cosine (ML) [ours] | 68.43 ± 0.67 | 86.22 ± 0.20 | 39.66 ± 0.18 | 55.71 ± 0.31 |
+| OVE PG GP + Cosine (PL) [ours] | **77.00 ± 0.50** | 87.52 ± 0.19 | 37.49 ± 0.11 | 57.23 ± 0.31 |
+
+We chose several commonly used metrics for calibration. Expected calibration error (ECE) (Guo et al., 2017) measures the expected binned difference between confidence and accuracy. Maximum calibration error (MCE) is similar to ECE but measures maximum difference instead of expected difference. Brier score (BRI) (Brier, 1950) is a proper scoring rule computed as the squared error between the output probabilities and the one-hot label. For a recent perspective on metrics for uncertainty evaluation, please refer to Ovadia et al. (2019). The results for representative approaches on 5-shot, 5-way CUB can be found in Figure 1. Our OVE PG GPs are the best calibrated overall across the metrics.
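
The three metrics are short to compute from predicted probabilities. A minimal sketch, using the standard equal-width binning of Guo et al. (2017); the bin count and the helper name are our own choices:

```python
import numpy as np

def calibration_metrics(probs, labels, n_bins=10):
    """ECE / MCE over equal-width confidence bins, plus multi-class Brier score.

    probs:  (M, C) predicted class probabilities; labels: (M,) integer targets.
    """
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap          # bin weight times |accuracy - confidence|
            mce = max(mce, gap)
    onehot = np.eye(probs.shape[1])[labels]
    brier = ((probs - onehot) ** 2).sum(axis=1).mean()
    return ece, mce, brier

# A perfectly confident, perfectly correct classifier scores zero on all three.
probs = np.eye(3)[[0, 1, 2, 1]]
ece, mce, brier = calibration_metrics(probs, np.array([0, 1, 2, 1]))
```

As a second sanity check, a classifier that always reports 0.8 confidence but is right half the time has ECE 0.3, the gap between stated and realized accuracy.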
+
+Figure 1: Reliability diagrams, expected calibration error (ECE), maximum calibration error (MCE), and Brier Score (BRI) for 5-shot 5-way tasks on CUB (additional calibration results can be found in Appendix I). Metrics are computed on 3,000 random tasks from the test set. The last two plots are our proposed method.
+
+# 5.3 ROBUSTNESS TO INPUT NOISE
+
+Input examples for novel classes in FSC may have been collected under conditions that do not match those observed at training time. For example, labeled support images in a medical diagnosis application may come from a different hospital than the training set. To mimic a simplified version of this scenario, we investigate robustness to input noise. We used the imagecorruptions package (Michaelis et al., 2019) to apply Gaussian noise, impulse noise, and defocus blur to both the support and query sets of episodes at test time and evaluated both accuracy and calibration. We used a corruption severity of 5 (severe) and evaluated across 1,000 randomly generated tasks on the three datasets involving natural images. The robustness results for Gaussian noise are shown in Figure 2. Full quantitative results tables for each noise type may be found in Appendix J. We find that Bayesian approaches in general tend to be robust due to their ability to marginalize over hypotheses consistent with the support labels. Our approach is among the top-performing methods across all settings.
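+As a rough sketch of the corruption step (the experiments use the imagecorruptions package; the severity-to-noise-scale schedule below is purely illustrative and not the package's actual constants):
+
+```python
+import numpy as np
+
+def gaussian_noise(image, severity=5, rng=None):
+    """Additive Gaussian noise on an image with values in [0, 1].
+
+    The severity-indexed noise scales are illustrative stand-ins, not the
+    constants used by the imagecorruptions package.
+    """
+    scales = [0.04, 0.08, 0.12, 0.18, 0.26]
+    rng = np.random.default_rng(0) if rng is None else rng
+    noisy = image + rng.normal(0.0, scales[severity - 1], size=image.shape)
+    return np.clip(noisy, 0.0, 1.0)
+```
+
+At test time the same corruption would be applied to every support and query image in an episode before encoding.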
+
+Figure 2: Accuracy $(\uparrow)$ and Brier Score $(\downarrow)$ when corrupting both support and query with Gaussian noise on 5-way 5-shot tasks. Quantitative results may be found in Appendix J.
+
+# 5.4 OUT-OF-EPISODE DETECTION
+
+Finally, we measure performance on out-of-episode detection, another application in which uncertainty quantification is important. In this experiment, we used 5-way, 5-shot support sets at test time but incorporated out-of-episode examples into the query set. Each episode had 150 query examples: 15 from each of 5 randomly chosen in-episode classes and 15 from each of 5 randomly chosen out-of-episode classes. We then computed the AUROC of binary outlier detection using the negative of the maximum logit as the score. Intuitively, if none of the support classes assign a high logit to the example, it can be classified as an outlier. The results are shown in Figure 3. Our approach generally performs the best across the datasets.
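+The max-logit outlier score and its AUROC can be sketched directly (a Mann-Whitney-style computation; the function name is ours):
+
+```python
+import numpy as np
+
+def outlier_auroc(logits, is_outlier):
+    """AUROC for outlier detection with score = negative max class logit.
+
+    logits: (N, C) per-query class logits; is_outlier: (N,) boolean flags.
+    AUROC is computed as P(outlier score > inlier score), with ties counted
+    as 1/2 (the Mann-Whitney U formulation).
+    """
+    scores = -np.max(np.asarray(logits, dtype=float), axis=1)
+    is_outlier = np.asarray(is_outlier, dtype=bool)
+    pos, neg = scores[is_outlier], scores[~is_outlier]
+    greater = (pos[:, None] > neg[None, :]).mean()
+    ties = (pos[:, None] == neg[None, :]).mean()
+    return greater + 0.5 * ties
+```
+
+If every in-episode query receives a higher maximum logit than every outlier, the score ranking is perfect and the AUROC is 1.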
+
+
+Figure 3: Average AUROC $(\uparrow)$ for out-of-episode detection. The AUC is computed separately for each episode and averaged across 1,000 episodes. Bars indicate a $95\%$ bootstrapped confidence interval.
+
+# 6 CONCLUSION
+
+In this work, we have proposed a Bayesian few-shot classification approach based on Gaussian processes. Our method replaces the ordinary softmax likelihood with a one-vs-each pairwise composite likelihood and applies Pólya-Gamma augmentation to perform inference. This allows us to model class logits directly as function values and efficiently marginalize over uncertainty in each few-shot episode. Modeling functions directly enables our approach to avoid the dependence on model size that posterior inference in weight-space-based models inherently has. Our approach compares favorably to baseline FSC methods under a variety of dataset and shot configurations, including dataset transfer. We also demonstrate strong uncertainty quantification, robustness to input noise, and out-of-episode detection. We believe that Bayesian modeling is a powerful tool for handling uncertainty and hope that our work will lead to broader adoption of efficient Bayesian inference in the few-shot scenario.
+
+# ACKNOWLEDGMENTS
+
+We would like to thank Ryan Adams, Ethan Fetaya, Mike Mozer, Eleni Triantafillou, Kuan-Chieh Wang, and Max Welling for helpful discussions. JS also thanks SK T-Brain for supporting him on an internship that led to precursors of some ideas in this paper. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute (https://www.vectorinstitute.ai/partners). This project is supported by NSERC and the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
+
+# REFERENCES
+
+James H. Albert and Siddhartha Chib. Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88(422):669-679, June 1993.
+Julian Besag. Statistical analysis of non-lattice data. Journal of the Royal Statistical Society: Series D (The Statistician), 24(3):179-195, 1975.
+Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In International Conference on Machine Learning, 2015.
+Glenn W. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1-3, 1950.
+Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. In International Conference on Learning Representations, 2019.
+Gregory Cohen, Saeed Afshar, Jonathan Tapson, and André van Schaik. EMNIST: Extending MNIST to handwritten letters. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2921-2926, 2017.
+Randal Douc, Eric Moulines, and David Stoffer. Nonlinear Time Series: Theory, Methods and Applications with R Examples. CRC Press, 2014.
+Arnaud Doucet. A Note on Efficient Conditional Simulation of Gaussian Distributions. 2010. URL https://www.cs.ubc.ca/~arnaud/doucet_simulationconditionalgaussian.pdf.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1126–1135, International Convention Centre, Sydney, Australia, August 2017. PMLR.
+Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31, pp. 9516–9527. Curran Associates, Inc., 2018.
+R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7(2): 179-188, 1936.
+Théo Galy-Fajou, Florian Wenzel, Christian Donner, and Manfred Opper. Multi-class Gaussian process classification made conjugate: Efficient inference via data augmentation. In Ryan P. Adams and Vibhav Gogate (eds.), Proceedings of the 35th Uncertainty in Artificial Intelligence Conference, volume 115 of Proceedings of Machine Learning Research, pp. 755-765, Tel Aviv, Israel, July 2020. PMLR.
+
+Mark Girolami and Simon Rogers. Variational Bayesian multinomial probit regression with Gaussian process priors. Neural Computation, 18(8):1790-1817, August 2006.
+Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, and Richard Turner. Meta-learning probabilistic inference for prediction. In International Conference on Learning Representations, 2019.
+Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting gradient-based meta-learning as hierarchical Bayes. In International Conference on Learning Representations, 2018.
+Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1321-1330, International Convention Centre, Sydney, Australia, August 2017. PMLR.
+Daniel Hernandez-Lobato and Jose Miguel Hernandez-Lobato. Scalable Gaussian process classification via expectation propagation. In Arthur Gretton and Christian C. Robert (eds.), Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pp. 168-176, Cadiz, Spain, May 2016. PMLR.
+Nathan Hilliard, Lawrence Phillips, Scott Howland, Artem Yankov, Courtney D. Corley, and Nathan O. Hodas. Few-Shot Learning with Metric-Agnostic Conditional Embeddings. arXiv:1802.04376 [cs, stat], February 2018.
+Yehuda Hoffman and Erez Ribak. Constrained realizations of Gaussian fields-A simple algorithm. The Astrophysical Journal, 380:L5-L8, 1991.
+Hyun-Chul Kim and Zoubin Ghahramani. Bayesian Gaussian process classification with the EM-EP algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):1948-1959, 2006.
+Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations, 2015.
+Diederik P. Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 28, pp. 2575-2583. Curran Associates, Inc., 2015.
+Gregory Koch. Siamese Neural Networks for One-Shot Image Recognition. Master's Thesis, University of Toronto, 2015.
+Brenden Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 33, 2011.
+Scott Linderman, Matthew J Johnson, and Ryan P Adams. Dependent multinomial models made easy: Stick-breaking with the Polya-Gamma augmentation. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 28, pp. 3456-3464. Curran Associates, Inc., 2015.
+Bruce G. Lindsay. Composite Likelihood Methods. Contemporary Mathematics, 80:221-239, 1988.
+Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose bayesian inference algorithm. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 29, pp. 2378-2386. Curran Associates, Inc., 2016.
+Alexander G. de G. Matthews, James Hensman, Richard Turner, and Zoubin Ghahramani. On sparse variational methods and the Kullback-Leibler divergence between stochastic processes. In Arthur Gretton and Christian C. Robert (eds.), Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pp. 231-239, Cadiz, Spain, May 2016. PMLR.
+
+Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge, and Wieland Brendel. Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming. In NeurIPS 2019 Machine Learning for Autonomous Driving Workshop, 2019.
+Jeffrey W. Miller. Asymptotic normality, concentration, and coverage of generalized posteriors. arXiv:1907.09611 [math, stat], July 2019.
+Thomas Peter Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, Massachusetts Institute of Technology, 2001.
+Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 13991-14002. Curran Associates, Inc., 2019.
+Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O'Boyle, and Amos Storkey. Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels. In Advances in Neural Information Processing Systems, 2020.
+Francesco Pauli, Walter Racugno, and Laura Ventura. Bayesian composite marginal likelihoods. Statistica Sinica, 2011.
+Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron Courville. FiLM: Visual reasoning with a general conditioning layer. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), April 2018.
+Nicholas G. Polson, James G. Scott, and Jesse Windle. Bayesian inference for logistic models using Pólya-Gamma latent variables. Journal of the American Statistical Association, 108(504): 1339-1349, December 2013.
+Viraj Uday Prabhu. Few-Shot Learning For Dermatological Disease Diagnosis. Master's Thesis, Georgia Institute of Technology, 2019.
+Sachin Ravi and Alex Beatson. Amortized Bayesian meta-learning. In International Conference on Learning Representations, 2019.
+Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations, 2017.
+James Requeima, Jonathan Gordon, John Bronskill, Sebastian Nowozin, and Richard E Turner. Fast and flexible multi-task classification using conditional neural adaptive processes. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 7959-7970. Curran Associates, Inc., 2019.
+Mathieu Ribatet, Daniel Cooley, and Anthony C. Davison. Bayesian inference from composite likelihoods, with an application to spatial extremes. Statistica Sinica, 2012.
+Ryan Rifkin and Aldebaro Klautau. In defense of one-vs-all classification. Journal of Machine Learning Research, 5(Jan):101-141, 2004.
+Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In International Conference on Learning Representations, 2019.
+Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30, pp. 4077-4087. Curran Associates, Inc., 2017.
+Julien Stoehr and Nial Friel. Calibration of conditional composite likelihood for Bayesian inference on Gibbs random fields. In International Conference on Artificial Intelligence and Statistics, 2015.
+
+Shengyang Sun, Guodong Zhang, Jiaxin Shi, and Roger Grosse. Functional Variational Bayesian Neural Networks. In International Conference on Learning Representations, 2019.
+Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H.S. Torr, and Timothy M. Hospedales. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
+Michalis Titsias. Variational learning of inducing variables in sparse Gaussian processes. In David van Dyk and Max Welling (eds.), Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, volume 5 of Proceedings of Machine Learning Research, pp. 567-574, Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA, April 2009. PMLR.
+Michalis K. Titsias. One-vs-each approximation to softmax for scalable estimation of probabilities. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 29, pp. 4161-4169. Curran Associates, Inc., 2016.
+Michalis K. Titsias, Sotirios Nikoloutsopoulos, and Alexandre Galashov. Information Theoretic Meta Learning with Gaussian Processes. arXiv:2009.03228 [cs, stat], October 2020.
+Prudencio Tossou, Basile Dura, Francois Laviolette, Mario Marchand, and Alexandre Lacoste. Adaptive Deep Kernel Learning. arXiv:1905.12131 [cs, stat], December 2020.
+Cristiano Varin, Nancy Reid, and David Firth. An overview of composite likelihood methods. Statistica Sinica, 2011.
+Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 29, pp. 3630-3638. Curran Associates, Inc., 2016.
+Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
+Kuan-Chieh Wang, Jixuan Wang, and Khai Truong. Customizable Facial Gesture Recognition For Improved Assistive Technology. In ICLR AI for Social Good Workshop, 2019.
+Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, and Roger Grosse. Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches. In International Conference on Learning Representations, 2018.
+Christopher K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342-1351, 1998.
+Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P. Xing. Deep kernel learning. In Arthur Gretton and Christian C. Robert (eds.), Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pp. 370-378, Cadiz, Spain, May 2016. PMLR.
+Jesse Windle, Nicholas G. Polson, and James G. Scott. Sampling Polya-Gamma random variates: Alternate and approximate techniques. arXiv:1405.0506 [stat], May 2014.
+Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian model-agnostic meta-learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31, pp. 7332-7342. Curran Associates, Inc., 2018.
+
+# A DERIVATION OF THE PÓLYA-GAMMA AUGMENTED LOGISTIC LIKELIHOOD
+
+In this section, we show the derivation for the augmented logistic likelihood presented in Section 3.1. First, recall the logistic likelihood:
+
+$$
+p (\mathbf {y} \mid \psi) = \prod_ {i = 1} ^ {N} \sigma \left(\psi_ {i}\right) ^ {y _ {i}} \left(1 - \sigma \left(\psi_ {i}\right)\right) ^ {1 - y _ {i}} = \prod_ {i = 1} ^ {N} \frac {\left(e ^ {\psi_ {i}}\right) ^ {y _ {i}}}{1 + e ^ {\psi_ {i}}}, \tag {16}
+$$
+
+where $\sigma(\cdot)$ is the logistic sigmoid function. We have a Gaussian prior $p(\psi) = \mathcal{N}(\psi|\boldsymbol{\mu},\boldsymbol{\Sigma})$ and introduce Pólya-Gamma auxiliary random variables $\omega$ to the likelihood such that the original model is recovered when $\omega$ is marginalized out: $p(\mathbf{y}|\psi) = \int p(\boldsymbol{\omega})p(\mathbf{y}|\boldsymbol{\psi},\boldsymbol{\omega})d\boldsymbol{\omega}$ .
+
+The Pólya-Gamma distribution $\omega \sim \mathrm{PG}(b,c)$ can be written as an infinite convolution of Gamma distributions:
+
+$$
+\omega \stackrel {D} {=} \frac {1}{2 \pi^ {2}} \sum_ {k = 1} ^ {\infty} \frac {g_ {k}}{(k - 1 / 2) ^ {2} + c ^ {2} / (4 \pi^ {2})}, \qquad g_ {k} \stackrel {\text {iid}} {\sim} \operatorname {Ga} (b, 1). \tag {17}
+$$
+
+The following integral identity holds for $b > 0$ :
+
+$$
+\frac {\left(e ^ {\psi}\right) ^ {a}}{\left(1 + e ^ {\psi}\right) ^ {b}} = 2 ^ {- b} e ^ {\kappa \psi} \int_ {0} ^ {\infty} e ^ {- \omega \psi^ {2} / 2} p (\omega) d \omega , \tag {18}
+$$
+
+where $\kappa = a - b / 2$ and $\omega \sim \mathrm{PG}(b,0)$ . Specifically, when $a = y$ and $b = 1$ , we recover an individual term of the logistic likelihood (16):
+
+$$
+p (y | \psi) = \frac {(e ^ {\psi}) ^ {y}}{1 + e ^ {\psi}} = \frac {1}{2} e ^ {\kappa \psi} \int_ {0} ^ {\infty} e ^ {- \omega \psi^ {2} / 2} p (\omega) d \omega , \tag {19}
+$$
+
+where $\kappa = y - 1/2$ and $\omega \sim \mathrm{PG}(1,0)$. Conditioned on $\boldsymbol{\omega}$, the batch likelihood is proportional to a diagonal Gaussian:
+
+$$
+p (\mathbf {y} | \psi , \boldsymbol {\omega}) \propto \prod_ {i = 1} ^ {N} e ^ {- \omega_ {i} \psi_ {i} ^ {2} / 2} e ^ {\kappa_ {i} \psi_ {i}} \propto \mathcal {N} \left(\boldsymbol {\Omega} ^ {- 1} \boldsymbol {\kappa} | \psi , \boldsymbol {\Omega} ^ {- 1}\right), \tag {20}
+$$
+
+where $\kappa_{i} = y_{i} - 1 / 2$ and $\Omega = \mathrm{diag}(\omega)$ . The conditional distribution over $\psi$ given $\mathbf{y}$ and $\omega$ is now tractable:
+
+$$
+p (\boldsymbol {\psi} \mid \mathbf {y}, \boldsymbol {\omega}) \propto p (\mathbf {y} \mid \boldsymbol {\psi}, \boldsymbol {\omega}) \, p (\boldsymbol {\psi}) \propto \mathcal {N} \big(\boldsymbol {\psi} \mid \tilde {\boldsymbol {\Sigma}} (\boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\mu} + \boldsymbol {\kappa}), \tilde {\boldsymbol {\Sigma}}\big), \tag {21}
+$$
+
+where $\tilde{\boldsymbol{\Sigma}} = (\boldsymbol{\Sigma}^{-1} + \boldsymbol{\Omega})^{-1}$.
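+As a quick numerical sanity check of (21) in one dimension, the unnormalized augmented density $e^{\kappa\psi - \omega\psi^2/2}\,\mathcal{N}(\psi\mid\mu,\sigma^2)$, renormalized on a grid, should coincide with the closed-form Gaussian posterior (the parameter values below are arbitrary):
+
+```python
+import numpy as np
+
+# 1D check of (21): exp(kappa*psi - omega*psi^2/2) * N(psi | mu, sigma2),
+# renormalized on a grid, should match the closed-form Gaussian posterior
+# with variance (1/sigma2 + omega)^(-1) and mean var * (mu/sigma2 + kappa).
+mu, sigma2, omega, y = 0.3, 2.0, 0.7, 1.0
+kappa = y - 0.5
+psi = np.linspace(-10.0, 10.0, 20001)
+
+prior = np.exp(-0.5 * (psi - mu) ** 2 / sigma2)
+grid = np.exp(kappa * psi - 0.5 * omega * psi ** 2) * prior
+grid /= grid.sum()                       # normalize on the grid
+
+var_post = 1.0 / (1.0 / sigma2 + omega)  # \tilde{Sigma} in 1D
+mean_post = var_post * (mu / sigma2 + kappa)
+closed = np.exp(-0.5 * (psi - mean_post) ** 2 / var_post)
+closed /= closed.sum()
+
+max_err = np.max(np.abs(grid - closed))
+```
+
+The two grids agree up to floating-point error, since the augmented density is exactly proportional to the completed-square Gaussian.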
+
+# B EFFICIENT GIBBS SAMPLING
+
+The Gibbs conditional distribution over $\mathbf{f}$ is given by:
+
+$$
+p (\mathbf {f} | \mathbf {X}, \mathbf {y}, \boldsymbol {\omega}) = \mathcal {N} (\mathbf {f} | \tilde {\boldsymbol {\Sigma}} \left(\mathbf {K} ^ {- 1} \boldsymbol {\mu} + \mathbf {A} ^ {\top} \boldsymbol {\kappa}\right), \tilde {\boldsymbol {\Sigma}}), \tag {22}
+$$
+
+where $\tilde{\Sigma} = (\mathbf{K}^{-1} + \mathbf{A}^{\top}\boldsymbol {\Omega}\mathbf{A})^{-1}$ . Naively sampling from this distribution requires $\mathcal{O}(C^3 N^3)$ computation since $\tilde{\Sigma}$ is a $CN\times CN$ matrix. Here we describe a method for sampling from this distribution that requires $\mathcal{O}(CN^3)$ computation instead.
+
+First, we note that (22) can be interpreted as the conditional distribution $p(\mathbf{f}|\mathbf{z} = \Omega^{-1}\kappa)$ resulting from the following marginal distribution $p(\mathbf{f})$ and conditional $p(\mathbf{z}|\mathbf{f})$ :
+
+$$
+p (\mathbf {f}) = \mathcal {N} (\mathbf {f} | \boldsymbol {\mu}, \mathbf {K}) \tag {23}
+$$
+
+$$
+p (\mathbf {z} | \mathbf {f}) = \mathcal {N} (\mathbf {z} | \mathbf {A} \mathbf {f}, \boldsymbol {\Omega} ^ {- 1}), \tag {24}
+$$
+
+where we have made implicit the dependence on $\mathbf{X}$ , $\mathbf{Y}$ , and $\boldsymbol{\omega}$ for brevity of notation. Equivalently, the distribution over $\mathbf{f}$ and $\mathbf{z}$ can be represented by the partitioned Gaussian
+
+$$
+\left[ \begin{array}{l} \mathbf {f} \\ \mathbf {z} \end{array} \right] \sim \mathcal {N} \left(\left[ \begin{array}{c} \boldsymbol {\mu} \\ \mathbf {A} \boldsymbol {\mu} \end{array} \right], \left[ \begin{array}{c c} \mathbf {K} & \mathbf {K A} ^ {\top} \\ \mathbf {A K} & \mathbf {A K A} ^ {\top} + \boldsymbol {\Omega} ^ {- 1} \end{array} \right]\right). \tag {25}
+$$
+
+The conditional distribution $p(\mathbf{f}|\mathbf{z})$ is given as:
+
+$$
+p (\mathbf {f} | \mathbf {z}) = \mathcal {N} (\mathbf {f} | \tilde {\boldsymbol {\Sigma}} \left(\mathbf {K} ^ {- 1} \boldsymbol {\mu} + \mathbf {A} ^ {\top} \boldsymbol {\Omega} \mathbf {z}\right), \tilde {\boldsymbol {\Sigma}}), \tag {26}
+$$
+
+where $\tilde{\Sigma} = (\mathbf{K}^{-1} + \mathbf{A}^{\top}\boldsymbol {\Omega}\mathbf{A})^{-1}$ . Note that $p(\mathbf{f}|\mathbf{z} = \boldsymbol{\Omega}^{-1}\boldsymbol {\kappa})$ recovers our desired Gibbs conditional distribution from (22).
+
+An efficient approach to conditional Gaussian sampling is due to Hoffman & Ribak (1991) and described in greater clarity by Doucet (2010). The procedure is as follows:
+
+1. Sample $\mathbf{f}_0\sim p(\mathbf{f})$ and $\mathbf{z}_0\sim p(\mathbf{z}|\mathbf{f}_0)$.
+2. Return $\bar{\mathbf{f}} = \mathbf{f}_0 + \mathbf{K}\mathbf{A}^\top (\mathbf{A}\mathbf{K}\mathbf{A}^\top +\boldsymbol{\Omega}^{-1})^{-1}(\boldsymbol{\Omega}^{-1}\boldsymbol {\kappa} - \mathbf{z}_0)$ as the sample from $p(\mathbf{f}|\mathbf{z} = \boldsymbol{\Omega}^{-1}\boldsymbol{\kappa})$.
+
+$\mathbf{K}$ is block diagonal and thus sampling from $p(\mathbf{f})$ requires $\mathcal{O}(CN^3)$ time. $\mathbf{A}\mathbf{f}$ can be computed in $\mathcal{O}(CN)$ time, since each entry is the difference between $f_{i}^{y_{i}}$ and $f_{i}^{c}$ for some $i$ and $c$ . Overall, step 1 requires $\mathcal{O}(CN^3)$ time.
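+Because the two-step procedure is affine in $(\mathbf{f}_0, \mathbf{z}_0)$, its exact mean and covariance can be computed in closed form and compared against the Gibbs conditional (22). A small numerical check, using arbitrary dimensions and a random SPD matrix in place of the block-diagonal kernel matrix:
+
+```python
+import numpy as np
+
+# The two-step sampler returns f_bar = f0 + G(Om^{-1} kappa - z0) with
+# G = K A^T (A K A^T + Om^{-1})^{-1} and z0 = A f0 + eps, eps ~ N(0, Om^{-1}).
+# Since this is affine in (f0, eps), its exact mean and covariance follow
+# directly and must equal those of the Gibbs conditional (22).
+rng = np.random.default_rng(0)
+n, m = 5, 7                              # arbitrary small dimensions
+L = rng.normal(size=(n, n))
+K = L @ L.T + n * np.eye(n)              # random SPD "prior" covariance
+A = rng.normal(size=(m, n))
+Om = np.diag(rng.uniform(0.5, 2.0, size=m))
+mu = rng.normal(size=n)
+kappa = rng.normal(size=m)
+
+Om_inv = np.linalg.inv(Om)
+G = K @ A.T @ np.linalg.inv(A @ K @ A.T + Om_inv)
+I = np.eye(n)
+mean_sampler = (I - G @ A) @ mu + G @ (Om_inv @ kappa)
+cov_sampler = (I - G @ A) @ K @ (I - G @ A).T + G @ Om_inv @ G.T
+
+Sigma = np.linalg.inv(np.linalg.inv(K) + A.T @ Om @ A)   # \tilde{Sigma}
+mean_target = Sigma @ (np.linalg.inv(K) @ mu + A.T @ kappa)
+```
+
+The sampler's mean and covariance match the information-form mean $\tilde{\boldsymbol{\Sigma}}(\mathbf{K}^{-1}\boldsymbol{\mu} + \mathbf{A}^\top\boldsymbol{\kappa})$ and covariance $\tilde{\boldsymbol{\Sigma}}$ of (22).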
+
+We now show how to compute $\overline{\mathbf{f}}$ from step 2 in $\mathcal{O}(CN^3)$ time. We first expand $(\mathbf{A}\mathbf{K}\mathbf{A}^\top +\Omega^{-1})^{-1}$ :
+
+$$
+\left(\mathbf {A} \mathbf {K} \mathbf {A} ^ {\top} + \boldsymbol {\Omega} ^ {- 1}\right) ^ {- 1} = \boldsymbol {\Omega} - \boldsymbol {\Omega} \mathbf {A} \left(\mathbf {K} ^ {- 1} + \mathbf {A} ^ {\top} \boldsymbol {\Omega} \mathbf {A}\right) ^ {- 1} \mathbf {A} ^ {\top} \boldsymbol {\Omega} \tag {27}
+$$
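+Equation (27) is an instance of the Woodbury matrix identity, which can be checked numerically on small random matrices:
+
+```python
+import numpy as np
+
+# Check (27): (A K A^T + Om^{-1})^{-1} = Om - Om A (K^{-1} + A^T Om A)^{-1} A^T Om.
+rng = np.random.default_rng(1)
+n, m = 4, 6
+L = rng.normal(size=(n, n))
+K = L @ L.T + n * np.eye(n)              # random SPD matrix standing in for K
+A = rng.normal(size=(m, n))
+Om = np.diag(rng.uniform(0.5, 2.0, size=m))
+
+lhs = np.linalg.inv(A @ K @ A.T + np.linalg.inv(Om))
+rhs = Om - Om @ A @ np.linalg.inv(np.linalg.inv(K) + A.T @ Om @ A) @ A.T @ Om
+```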
+
+We substitute into the expression for $\bar{\mathbf{f}}$ :
+
+$$
+\begin{aligned}
+\bar {\mathbf {f}} &= \mathbf {f} _ {0} + \mathbf {K} \mathbf {A} ^ {\top} \left(\boldsymbol {\Omega} - \boldsymbol {\Omega} \mathbf {A} \left(\mathbf {K} ^ {- 1} + \mathbf {A} ^ {\top} \boldsymbol {\Omega} \mathbf {A}\right) ^ {- 1} \mathbf {A} ^ {\top} \boldsymbol {\Omega}\right) \left(\boldsymbol {\Omega} ^ {- 1} \boldsymbol {\kappa} - \mathbf {z} _ {0}\right) \tag {28} \\
+&= \mathbf {f} _ {0} + \mathbf {K} \mathbf {A} ^ {\top} \boldsymbol {\Omega} \left(\boldsymbol {\Omega} ^ {- 1} \boldsymbol {\kappa} - \mathbf {z} _ {0}\right) - \mathbf {K} \mathbf {A} ^ {\top} \boldsymbol {\Omega} \mathbf {A} \left(\mathbf {K} ^ {- 1} + \mathbf {A} ^ {\top} \boldsymbol {\Omega} \mathbf {A}\right) ^ {- 1} \mathbf {A} ^ {\top} \boldsymbol {\Omega} \left(\boldsymbol {\Omega} ^ {- 1} \boldsymbol {\kappa} - \mathbf {z} _ {0}\right) \tag {29} \\
+&= \mathbf {f} _ {0} + \mathbf {K} \mathbf {v} - \mathbf {K} \mathbf {A} ^ {\top} \boldsymbol {\Omega} \mathbf {A} \left(\mathbf {K} ^ {- 1} + \mathbf {A} ^ {\top} \boldsymbol {\Omega} \mathbf {A}\right) ^ {- 1} \mathbf {v}, \tag {30}
+\end{aligned}
+$$
+
+where we have defined $\mathbf{v} \triangleq \mathbf{A}^{\top} \boldsymbol{\Omega} (\boldsymbol{\Omega}^{-1} \boldsymbol{\kappa} - \mathbf{z}_0)$ .
+
+Now let $\mathbf{d} \triangleq (d_1^1, \ldots, d_N^1, d_1^2, \ldots, d_N^2, \ldots, d_1^C, \ldots, d_N^C)^\top$ , where $d_i^c = Y_{ic} \sum_{c'} \omega_i^{c'}$ . Define $\mathbf{Y}^\dagger$ to be the $CN \times N$ matrix produced by vertically stacking $\mathrm{diag}(Y_{c})$ , and let $\mathbf{W}^\dagger$ be the $CN \times N$ matrix produced by vertically stacking $\mathrm{diag}((\omega_1^c, \ldots, \omega_N^c)^\top)$ . $\mathbf{A}^\top \Omega \mathbf{A}$ may then be written as follows:
+
+$$
+\mathbf {A} ^ {\top} \boldsymbol {\Omega} \mathbf {A} = \mathbf {D} - \mathbf {S} \mathbf {P} \mathbf {S} ^ {\top}, \quad \text {where} \tag {31}
+$$
+
+$$
+\mathbf {D} = \boldsymbol {\Omega} + \operatorname {diag} (\mathbf {d}), \tag {32}
+$$
+
+$$
+\mathbf {S} = \left[ \begin{array}{l l} \mathbf {Y} ^ {\dagger} & \mathbf {W} ^ {\dagger} \end{array} \right], \tag {33}
+$$
+
+$$
+\mathbf {P} = \left[ \begin{array}{l l} \mathbf {0} _ {N} & \mathbf {I} _ {N} \\ \mathbf {I} _ {N} & \mathbf {0} _ {N} \end{array} \right]. \tag {34}
+$$
+
+Substituting (31) into (30):
+
+$$
+\bar {\mathbf {f}} = \mathbf {f} _ {0} + \mathbf {K v} - \mathbf {K A} ^ {\top} \boldsymbol {\Omega} \mathbf {A} \left(\mathbf {K} ^ {- 1} + \mathbf {D} - \mathbf {S P S} ^ {\top}\right) ^ {- 1} \mathbf {v}. \tag {35}
+$$
+
+Now we expand $(\mathbf{K}^{-1} + \mathbf{D} - \mathbf{S}\mathbf{P}\mathbf{S}^{\top})^{-1}$:
+
+$$
+\left(\mathbf {K} ^ {- 1} + \mathbf {D} - \mathbf {S P S} ^ {\top}\right) ^ {- 1} = \mathbf {E} - \mathbf {E S} \left(\mathbf {S} ^ {\top} \mathbf {E S} - \mathbf {P} ^ {- 1}\right) ^ {- 1} \mathbf {S} ^ {\top} \mathbf {E}, \tag {36}
+$$
+
+where $\mathbf{E} = (\mathbf{K}^{-1} + \mathbf{D})^{-1} = \mathbf{K}(\mathbf{K} + \mathbf{D}^{-1})^{-1}\mathbf{D}^{-1}$ is a block-diagonal matrix that can be computed in $\mathcal{O}(CN^3)$ time, since $\mathbf{D}$ is diagonal and $\mathbf{K}$ is block diagonal. Now, substituting (36) back into (35),
+
+$$
+\bar {\mathbf {f}} = \mathbf {f} _ {0} + \mathbf {K v} - \mathbf {K A} ^ {\top} \boldsymbol {\Omega} \mathbf {A E v} + \mathbf {K A} ^ {\top} \boldsymbol {\Omega} \mathbf {A E S} (\mathbf {S} ^ {\top} \mathbf {E S} - \mathbf {P} ^ {- 1}) ^ {- 1} \mathbf {S} ^ {\top} \mathbf {E v}. \tag {37}
+$$
+
+Note that $(\mathbf{S}^{\top}\mathbf{E}\mathbf{S} - \mathbf{P}^{-1})^{-1}$ is a $2N\times 2N$ matrix and thus can be inverted in $\mathcal{O}(N^3)$ time. The overall complexity is therefore $\mathcal{O}(CN^3)$ .
+
+# C MARGINAL LIKELIHOOD AND PREDICTIVE LIKELIHOOD OBJECTIVES
+
+Marginal Likelihood (ML). The log marginal likelihood can be written as follows:
+
+$$
+\begin{aligned}
+L _ {\mathrm {ML}} (\boldsymbol {\theta}; \mathbf {X}, \mathbf {Y}) \triangleq \log p _ {\boldsymbol {\theta}} (\mathbf {Y} \mid \mathbf {X}) &= \log \int p (\boldsymbol {\omega}) \, p _ {\boldsymbol {\theta}} (\mathbf {Y} \mid \boldsymbol {\omega}, \mathbf {X}) \, d \boldsymbol {\omega} \\
+&= \log \int p (\boldsymbol {\omega}) \int \mathcal {L} (\mathbf {f} \mid \mathbf {Y}, \boldsymbol {\omega}) \, p _ {\boldsymbol {\theta}} (\mathbf {f} \mid \mathbf {X}) \, d \mathbf {f} \, d \boldsymbol {\omega} \tag {38}
+\end{aligned}
+$$
+
+The gradient of the log marginal likelihood can be estimated by posterior samples $\omega \sim p_{\theta}(\omega | \mathbf{X}, \mathbf{Y})$ . In practice, we use a stochastic training objective based on samples of $\omega$ from Gibbs chains. We use Fisher's identity (Douc et al., 2014) to derive the following gradient estimator:
+
+$$
+\nabla_ {\boldsymbol {\theta}} L _ {\mathrm {M L}} = \int p _ {\boldsymbol {\theta}} (\boldsymbol {\omega} | \mathbf {X}, \mathbf {Y}) \nabla_ {\boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (\mathbf {Y} | \boldsymbol {\omega}, \mathbf {X}) d \boldsymbol {\omega} \approx \frac {1}{M} \sum_ {m = 1} ^ {M} \nabla_ {\boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (\mathbf {Y} | \mathbf {X}, \boldsymbol {\omega} ^ {(m)}), \tag {39}
+$$
+
+where $\omega^{(1)},\ldots ,\omega^{(M)}$ are samples from the posterior Gibbs chain. As suggested by Patacchiola et al. (2020), who applied GPs to FSC via least-squares classification, we merge the support and query sets during learning to take full advantage of the available data within each episode.
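+Fisher's identity can be illustrated on a toy discrete-latent model, comparing the posterior-weighted conditional gradient against a finite-difference gradient of the marginal log-likelihood (the Bernoulli model below is illustrative, not the model used in the paper):
+
+```python
+import numpy as np
+
+def sigmoid(x):
+    return 1.0 / (1.0 + np.exp(-x))
+
+# Toy model: p(omega) over two latent states, p(y=1 | omega) = sigmoid(theta[omega]).
+# Fisher's identity: grad_theta log p(y) = E_{p(omega|y)}[grad_theta log p(y|omega)].
+p_om = np.array([0.3, 0.7])
+theta = np.array([0.2, -0.5])
+y = 1.0
+
+def log_marginal(th):
+    lik = sigmoid(th) ** y * (1.0 - sigmoid(th)) ** (1.0 - y)   # p(y | omega)
+    return np.log(np.sum(p_om * lik))
+
+# Reference gradient via central finite differences.
+eps = 1e-6
+grad_fd = np.array([
+    (log_marginal(theta + eps * np.eye(2)[k]) -
+     log_marginal(theta - eps * np.eye(2)[k])) / (2.0 * eps)
+    for k in range(2)])
+
+# Fisher's identity: posterior-weighted conditional gradient. Since theta[k]
+# only enters p(y | omega = k), the k-th component is post[k] * (y - sigmoid(theta[k])).
+lik = sigmoid(theta) ** y * (1.0 - sigmoid(theta)) ** (1.0 - y)
+post = p_om * lik / np.sum(p_om * lik)   # p(omega | y)
+grad_fisher = post * (y - sigmoid(theta))
+```
+
+In the paper's setting the exact posterior expectation over $\boldsymbol{\omega}$ is intractable, so the expectation is replaced by samples from the Gibbs chain as in (39).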
+
+Predictive Likelihood (PL). The log predictive likelihood for a query example $\mathbf{x}_{*}$ is:
+
+$$
+L _ {\mathrm {P L}} \left(\boldsymbol {\theta}; \mathbf {X}, \mathbf {Y}, \mathbf {x} _ {*}, \mathbf {y} _ {*}\right) \triangleq \log p _ {\boldsymbol {\theta}} \left(\mathbf {y} _ {*} \mid \mathbf {x} _ {*}, \mathbf {X}, \mathbf {Y}\right) = \log \int p (\boldsymbol {\omega}) p _ {\boldsymbol {\theta}} \left(\mathbf {y} _ {*} \mid \mathbf {x} _ {*}, \mathbf {X}, \mathbf {Y}, \boldsymbol {\omega}\right) d \boldsymbol {\omega}. \tag {40}
+$$
+
+We use an approximate gradient estimator again based on posterior samples of $\omega$ :
+
+$$
+\nabla_ {\boldsymbol {\theta}} L _ {\mathrm {PL}} \approx \int p _ {\boldsymbol {\theta}} (\boldsymbol {\omega} | \mathbf {X}, \mathbf {Y}) \nabla_ {\boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} \left(\mathbf {y} _ {*} \mid \mathbf {x} _ {*}, \mathbf {X}, \mathbf {Y}, \boldsymbol {\omega}\right) d \boldsymbol {\omega} \approx \frac {1}{M} \sum_ {m = 1} ^ {M} \nabla_ {\boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} \left(\mathbf {y} _ {*} \mid \mathbf {x} _ {*}, \mathbf {X}, \mathbf {Y}, \boldsymbol {\omega} ^ {(m)}\right). \tag {41}
+$$
+
+We note that this is not an unbiased estimator of the gradient, but we find that it works well in practice.
+
+# D LEARNING ALGORITHM
+
+Our learning algorithm for both marginal and predictive likelihood is summarized in Algorithm 1.
+
+Algorithm 1 One-vs-Each Pólya-Gamma GP Learning
+Input: objective $L\in \{L_{\mathrm{ML}},L_{\mathrm{PL}}\}$, task distribution $\mathcal{T}$, number of parallel Gibbs chains $M$, number of Gibbs steps $T$, learning rate $\eta$
+Initialize hyperparameters $\boldsymbol{\theta}$ randomly.
+repeat
+  Sample $\mathcal{S} = (\mathbf{X},\mathbf{Y}),\ \mathcal{Q} = (\mathbf{X}_{*},\mathbf{Y}_{*})\sim \mathcal{T}$
+  if $L = L_{\mathrm{ML}}$ then $\mathbf{X}\gets \mathbf{X}\cup \mathbf{X}_{*}$, $\mathbf{Y}\gets \mathbf{Y}\cup \mathbf{Y}_{*}$ end if
+  $\mathbf{A}\gets \text{OVE-MATRIX}(\mathbf{Y})$
+  for $m = 1$ to $M$ do
+    $\boldsymbol{\omega}_{0}^{(m)}\sim \mathrm{PG}(1,0)$, $\mathbf{f}_{0}^{(m)}\sim p_{\boldsymbol{\theta}}(\mathbf{f}\mid \mathbf{X})$
+    for $t = 1$ to $T$ do
+      $\boldsymbol{\psi}_{t}^{(m)}\gets \mathbf{A}\mathbf{f}_{t-1}^{(m)}$
+      $\boldsymbol{\omega}_{t}^{(m)}\sim \mathrm{PG}(1,\boldsymbol{\psi}_{t}^{(m)})$
+      $\mathbf{f}_{t}^{(m)}\sim p_{\boldsymbol{\theta}}(\mathbf{f}\mid \mathbf{X},\mathbf{Y},\boldsymbol{\omega}_{t}^{(m)})$
+    end for
+  end for
+  if $L = L_{\mathrm{ML}}$ then $\boldsymbol{\theta} \gets \boldsymbol{\theta} + \frac{\eta}{M}\sum_{m = 1}^{M}\nabla_{\boldsymbol{\theta}}\log p_{\boldsymbol{\theta}}(\mathbf{Y}\mid \mathbf{X},\boldsymbol{\omega}_{T}^{(m)})$
+  else $\boldsymbol{\theta} \gets \boldsymbol{\theta} + \frac{\eta}{M}\sum_{m = 1}^{M}\sum_{j}\nabla_{\boldsymbol{\theta}}\log p_{\boldsymbol{\theta}}(\mathbf{y}_{*j}\mid \mathbf{x}_{*j},\mathcal{S},\boldsymbol{\omega}_{T}^{(m)})$ end if
+until convergence
+
+# E POSTERIOR PREDICTIVE DISTRIBUTION
+
+The posterior predictive distribution for a query example $\mathbf{x}_{*}$ conditioned on $\omega$ is:
+
+$$
+p \left(\mathbf {y} _ {*} \mid \mathbf {x} _ {*}, \mathbf {X}, \mathbf {Y}, \boldsymbol {\omega}\right) = \int p \left(\mathbf {y} _ {*} \mid \mathbf {f} _ {*}\right) p \left(\mathbf {f} _ {*} \mid \mathbf {x} _ {*}, \mathbf {X}, \mathbf {Y}, \boldsymbol {\omega}\right) d \mathbf {f} _ {*}, \tag {42}
+$$
+
+where $\mathbf{f}_{*}$ are the query example's logits. The predictive distribution over $\mathbf{f}_{*}$ can be obtained by noting that $\psi$ and the query logits are jointly Gaussian:
+
+$$
+\left[ \begin{array}{c} \boldsymbol {\psi} \\ \mathbf {f} _ {*} \end{array} \right] \sim \mathcal {N} \left(0, \left[ \begin{array}{c c} \mathbf {A K A} ^ {\top} + \boldsymbol {\Omega} ^ {- 1} & \mathbf {A K} _ {*} \\ (\mathbf {A K} _ {*}) ^ {\top} & \mathbf {K} _ {* *} \end{array} \right]\right), \tag {43}
+$$
+
+where $\mathbf{K}_{*}$ is the $NC\times C$ block diagonal matrix with blocks $K_{\theta}(\mathbf{X},\mathbf{x}_{*})$ and $\mathbf{K}_{**}$ is the $C\times C$ diagonal matrix with diagonal entries $k_{\theta}(\mathbf{x}_{*},\mathbf{x}_{*})$ . The predictive distribution becomes:
+
+$$
+p \left(\mathbf {f} _ {*} \mid \mathbf {x} _ {*}, \mathbf {X}, \mathbf {Y}, \boldsymbol {\omega}\right) = \mathcal {N} \left(\mathbf {f} _ {*} \mid \boldsymbol {\mu} _ {*}, \boldsymbol {\Sigma} _ {*}\right), \text { where}
+$$
+
+$$
+\boldsymbol {\mu} _ {*} = \left(\mathbf {A} \mathbf {K} _ {*}\right) ^ {\top} \left(\mathbf {A} \mathbf {K} \mathbf {A} ^ {\top} + \boldsymbol {\Omega} ^ {- 1}\right) ^ {- 1} \boldsymbol {\Omega} ^ {- 1} \kappa \quad \text {and} \tag {44}
+$$
+
+$$
+\boldsymbol {\Sigma} _ {*} = \mathbf {K} _ {* *} - (\mathbf {A} \mathbf {K} _ {*}) ^ {\top} (\mathbf {A} \mathbf {K} \mathbf {A} ^ {\top} + \boldsymbol {\Omega} ^ {- 1}) ^ {- 1} \mathbf {A} \mathbf {K} _ {*}.
+$$
+
+With $p(\mathbf{f}_*|\mathbf{x}_*,\mathbf{X},\mathbf{Y},\boldsymbol {\omega})$ in hand, the integral in (42) can easily be computed numerically for each class $c$ by forming the corresponding OVE linear transformation matrix $\mathbf{A}^c$ and then performing 1D Gauss-Hermite quadrature on each dimension of $\mathcal{N}(\psi_{*}^{c}|\mathbf{A}^{c}\boldsymbol{\mu}_{*},\mathbf{A}^{c}\boldsymbol{\Sigma}_{*}\mathbf{A}^{c\top})$.
+
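The moment computation in Eq. (44) and the 1D quadrature step can be sketched as below. This is illustrative only: `predictive_moments` and `gauss_hermite_expect_sigmoid` are hypothetical names, and the sigmoid expectation shown is the 1D building block one would apply per dimension of the transformed predictive distribution, not the full class-probability computation.

```python
import numpy as np

def predictive_moments(A, K, K_star, K_ss, Omega, kappa):
    """Predictive mean and covariance of the query logits f_* (Eq. 44),
    obtained by conditioning the joint Gaussian of (psi, f_*) in Eq. (43)."""
    Om_inv = np.linalg.inv(Omega)
    B = A @ K @ A.T + Om_inv            # Cov[psi | omega]
    AKs = A @ K_star                    # cross-covariance between psi and f_*
    mu = AKs.T @ np.linalg.solve(B, Om_inv @ kappa)
    Sigma = K_ss - AKs.T @ np.linalg.solve(B, AKs)
    return mu, Sigma

def gauss_hermite_expect_sigmoid(m, v, n=20):
    """E[sigmoid(z)] for z ~ N(m, v) via n-point Gauss-Hermite quadrature:
    the 1D operation applied to each dimension of N(psi_*^c | ...)."""
    x, w = np.polynomial.hermite.hermgauss(n)
    z = m + np.sqrt(2.0 * v) * x
    return float((w * (1.0 / (1.0 + np.exp(-z)))).sum() / np.sqrt(np.pi))
```

Using `np.linalg.solve` rather than explicitly inverting $\mathbf{A}\mathbf{K}\mathbf{A}^{\top}+\boldsymbol{\Omega}^{-1}$ keeps the computation numerically stable.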
+# F DETAILED COMPARISON OF LIKELIHOODS
+
+In this section we seek to better understand the behaviors of the softmax, OVE, logistic softmax, and Gaussian likelihoods for classification. For convenience, we summarize the forms of these likelihoods in Table 3.
+
+Table 3: Likelihoods used in Section F.
+
+| Likelihood | $\mathcal{L}(\mathbf{f} \mid y = c)$ |
+| --- | --- |
+| Softmax | $\exp(f_c)/\sum_{c'} \exp(f_{c'})$ |
+| Gaussian | $\prod_{c'} \mathcal{N}(2\cdot\mathbb{1}[c' = c] - 1 \mid \mu = f_{c'}, \sigma^2 = 1)$ |
+| Logistic Softmax (LSM) | $\sigma(f_c)/\sum_{c'} \sigma(f_{c'})$ |
+| One-vs-Each (OVE) | $\prod_{c' \neq c} \sigma(f_c - f_{c'})$ |
+
+# F.1 HISTOGRAM OF CONFIDENCES
+
+We sampled logits from $f_{c} \sim \mathcal{N}(0,1)$ and plotted a histogram and kernel density estimate of the maximum output probability $\max_c p(y = c|\mathbf{f})$ for each of the likelihoods shown in Table 3, where $C = 5$ . The results are shown in Figure 4. Logistic softmax is a priori underconfident: it puts little probability mass on confidence above 0.4. This may be due to the use of the sigmoid function which squashes large values of $f$ . Gaussian likelihood and OVE are a priori overconfident in that they put a large amount of probability mass on confident outputs. Note that this is not a complete explanation, because GP hyperparameters such as the prior mean or Gaussian likelihood variance may be able to compensate for these imperfections to some degree. Indeed, we found it helpful to learn a constant mean for the logistic softmax likelihood, as mentioned in Section G.2.
+
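The likelihoods of Table 3 and the simulation behind Figure 4 can be reproduced with a few lines of NumPy. A sketch follows, with the caveat that `class_probs` and `mean_max_confidence` are our own illustrative names; for the Gaussian and OVE likelihoods the output is normalized explicitly, since only softmax and LSM are normalized by construction.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def class_probs(f, likelihood):
    """Normalized class probabilities p(y=c | f) for the Table 3 likelihoods."""
    C = len(f)
    if likelihood == "softmax":
        L = np.exp(f - f.max())
    elif likelihood == "gaussian":
        targets = 2.0 * np.eye(C) - 1.0      # row c: +1 at class c, -1 elsewhere
        L = np.exp(-0.5 * ((targets - f) ** 2).sum(axis=1))
    elif likelihood == "lsm":
        L = sigmoid(f)
    elif likelihood == "ove":
        s = sigmoid(f[:, None] - f[None, :])  # s[c, c'] = sigma(f_c - f_c')
        np.fill_diagonal(s, 1.0)
        L = s.prod(axis=1)                    # product over c' != c
    else:
        raise ValueError(likelihood)
    return L / L.sum()

def mean_max_confidence(likelihood, C=5, n_sims=2000, rng=None):
    """Average of max_c p(y=c | f) over logits f_c ~ N(0, 1), as in Figure 4."""
    rng = np.random.default_rng() if rng is None else rng
    return float(np.mean([class_probs(rng.standard_normal(C), likelihood).max()
                          for _ in range(n_sims)]))
```

A sanity check on this construction: for $C=2$ the normalized OVE probabilities coincide exactly with the softmax, since $\sigma(f_1-f_2)+\sigma(f_2-f_1)=1$.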
+# F.2 LIKELIHOOD VISUALIZATION
+
+In order to visualize the various likelihoods under consideration, we consider a trivial classification task with a single observed example. We assume that there are three classes $(C = 3)$ and the single example belongs to the first class $(y = 1)$ . We place the following prior on $\mathbf{f} = (f_1, f_2, f_3)^\top$ :
+
+$$
+p (\mathbf {f}) = \mathcal {N} \left(\mathbf {f} \mid \boldsymbol {\mu} = \left[ \begin{array}{l} 0 \\ 0 \\ 0 \end{array} \right], \boldsymbol {\Sigma} = \left[ \begin{array}{l l l} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right]\right). \tag {45}
+$$
+
+
+Figure 4: Histogram and kernel density estimate of confidence for randomly generated function samples $f_{c} \sim \mathcal{N}(0,1)$ . Normalized output probabilities were computed for $C = 5$ and a histogram of $\max_c p(y = c|\mathbf{f})$ was computed for 50,000 randomly generated simulations.
+
+In other words, the prior for $f_{1}$ and $f_{2}$ is a standard normal and $f_{3}$ is clamped at zero (for ease of visualization). The likelihoods are plotted in Figure 5 and the corresponding posteriors are plotted in Figure 6.
+
+
+(a) Softmax
+
+
+(b) Gaussian
+Figure 5: Plot of $\mathcal{L}(\mathbf{f} \mid y = 1)$ , where $f_{3}$ is clamped to 0. The Gaussian likelihood penalizes configurations far away from $(f_{1}, f_{2}) = (1, -1)$ . Logistic softmax is much flatter compared to softmax and has visibly different contours. One-vs-Each is visually similar to the softmax but penalizes $(f_{1}, f_{2})$ near the origin slightly more.
+
+
+(c) Logistic Softmax
+
+
+(d) One-vs-Each
+
+
+(a) Softmax
+Figure 6: Plot of posterior $p(\mathbf{f} \mid y = 1)$ , where $f_{3}$ is clamped to 0. The mode of each posterior distribution is similar, but each differs slightly in shape. Gaussian is more peaked about its mode, while logistic softmax is more spread out. One-vs-Each is similar to softmax, but is slightly more elliptical.
+
+
+(b) Gaussian
+
+
+(c) Logistic Softmax
+
+
+(d) One-vs-Each
+
+# F.3 2D IRIS EXPERIMENTS
+
+We also conducted experiments on a 2D version of the Iris dataset (Fisher, 1936), which contains 150 examples across 3 classes. The first two features of the dataset were retained (sepal length and width). We used a zero-mean GP prior and an RBF kernel $k(\mathbf{x},\mathbf{x}^{\prime}) = \exp \left(-\frac{1}{2} d(\mathbf{x},\mathbf{x}^{\prime})^{2}\right)$ , where $d(\cdot ,\cdot)$ is Euclidean distance. We considered training set sizes with 1, 2, 3, 4, 5, 10, 15, 20, 25, and 30 examples per class. For each training set size, we performed GP inference on 200 randomly generated train/test splits and compared the predictions across Gaussian, logistic softmax, and one-vs-each likelihoods.
+
+Predictions at a test point $\mathbf{x}_{*}$ were made by applying the (normalized) likelihood to the posterior predictive mean $\bar{\mathbf{f}}_{*}$ . The predictive probabilities for each likelihood are shown in Figure 7 for a randomly generated train/test split with 30 examples per class. Test predictive accuracy, Brier score, expected calibration error, and evidence lower bound (ELBO) results across various training set sizes are shown in Figure 8.
+
+The ELBO is computed by treating each likelihood's posterior $q(\mathbf{f}|\mathbf{X},\mathbf{Y})$ as an approximation to the softmax posterior $p(\mathbf{f}|\mathbf{X},\mathbf{Y})$ :
+
+$$
+\begin{array}{l} \operatorname {E L B O} (q) = \mathbb {E} _ {q} [ \log p (\mathbf {f} \mid \mathbf {X}) ] + \mathbb {E} _ {q} [ \log p (\mathbf {Y} \mid \mathbf {f}) ] - \mathbb {E} _ {q} [ \log q (\mathbf {f} \mid \mathbf {X}, \mathbf {Y}) ] \\ = \log p (\mathbf {Y} \mid \mathbf {X}) - \operatorname {K L} (q (\mathbf {f} \mid \mathbf {X}, \mathbf {Y}) \, \| \, p (\mathbf {f} \mid \mathbf {X}, \mathbf {Y})). \\ \end{array}
+$$
+
+Even though direct computation of the softmax posterior $p(\mathbf{f}|\mathbf{X},\mathbf{Y})$ is intractable, computing the ELBO is tractable. A larger ELBO indicates a smaller KL divergence to the softmax posterior.
+
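One simple way to estimate this ELBO is by Monte Carlo, sampling $\mathbf{f}\sim q$ and averaging the three log-density terms. The sketch below assumes a Gaussian $q$, a zero-mean prior $p(\mathbf{f}\mid\mathbf{X})=\mathcal{N}(0,\mathbf{K})$, and a caller-supplied `loglik` standing in for $\log p(\mathbf{Y}\mid\mathbf{f})$ (e.g., a sum of log-softmax terms); the function names are illustrative and the paper's exact evaluation may differ.

```python
import numpy as np

def mvn_logpdf(x, mu, Sigma):
    """Log density of N(mu, Sigma) via a Cholesky factorization."""
    d = len(x)
    L = np.linalg.cholesky(Sigma)
    z = np.linalg.solve(L, x - mu)
    return -0.5 * (z @ z) - np.log(np.diag(L)).sum() - 0.5 * d * np.log(2 * np.pi)

def mc_elbo(mu_q, Sigma_q, K, loglik, n_samples=100, rng=None):
    """Monte Carlo ELBO(q) = E_q[log p(f|X)] + E_q[log p(Y|f)] - E_q[log q(f)],
    with prior p(f|X) = N(0, K) and loglik(f) = log p(Y|f)."""
    rng = np.random.default_rng() if rng is None else rng
    Lq = np.linalg.cholesky(Sigma_q)
    vals = []
    for _ in range(n_samples):
        f = mu_q + Lq @ rng.standard_normal(len(mu_q))
        vals.append(mvn_logpdf(f, np.zeros_like(mu_q), K) + loglik(f)
                    - mvn_logpdf(f, mu_q, Sigma_q))
    return float(np.mean(vals))
```

A useful sanity check: when $q$ equals the prior and the log-likelihood term is zero, every sample contributes exactly zero, so the estimator returns 0 regardless of the number of samples.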
+One-vs-Each performs well for accuracy, Brier score, and ELBO across the training set sizes. Gaussian performs best on expected calibration error through 15 examples per class, beyond which one-vs-each is better.
+
+
+(a) Gaussian
+
+
+(b) Logistic Softmax
+
+
+(c) One-vs-Each
+
+
+Figure 7: Training points (colored points) and maximum predictive probability for various likelihoods on the Iris dataset. The Gaussian likelihood produces more warped decision boundaries than the others. Logistic softmax tends to produce lower confidence predictions, while one-vs-each produces larger regions of greater confidence than the others.
+(a) Accuracy
+Figure 8: Comparison across likelihoods in terms of test predictive accuracy, Brier score, expected calibration error (computed with 10 bins), and ELBO. Results are averaged over 200 randomly generated splits for each training set size (1, 2, 3, 4, 5, 10, 15, 20, 25, and 30 examples per class). Error bars indicate $95\%$ confidence intervals.
+
+
+(b) Brier
+
+
+(c) ECE
+
+
+(d) ELBO
+
+# G FEW-SHOT EXPERIMENTAL DETAILS
+
+Here we provide more details about the experimental setup for our few-shot classification experiments, which are based on the protocol of Patacchiola et al. (2020).
+
+# G.1 DATASETS
+
+We used the four dataset scenarios described below. The first three are the same as those used by Chen et al. (2019), and the fourth was proposed by Patacchiola et al. (2020).
+
+- CUB. Caltech-UCSD Birds (CUB) (Wah et al., 2011) consists of 200 classes and 11,788 images. A split of 100 training, 50 validation, and 50 test classes was used (Hilliard et al., 2018; Chen et al., 2019).
+- mini-Imagenet. The mini-Imagenet dataset (Vinyals et al., 2016) consists of 100 classes with 600 images per class. We used the split proposed by Ravi & Larochelle (2017), which has 64 classes for training, 16 for validation, and 20 for test.
+- mini-Imagenet $\rightarrow$ CUB. This cross-domain transfer scenario takes the training split of mini-Imagenet and the validation & test splits of CUB.
+- Omniglot $\rightarrow$ EMNIST. We use the same setup as proposed by Patacchiola et al. (2020). Omniglot (Lake et al., 2011) consists of 1,623 classes, each with 20 examples, and is augmented by rotations in multiples of 90 degrees to create 6,492 classes, of which 4,114 are used for training. The EMNIST dataset (Cohen et al., 2017), consisting of 62 classes, is split into 31 training and 31 test classes.
+
+# G.2 FEW-SHOT CLASSIFICATION BASELINES
+
+Here we explain the few-shot baselines in greater detail.
+
+- Feature Transfer (Chen et al., 2019) involves first training an offline classifier on the training classes and then training a new classification layer on the episode.
+- Baseline++ (Chen et al., 2019) is similar to Feature Transfer except it uses a cosine distance module prior to the softmax during fine-tuning.
+- Matching Networks (Vinyals et al., 2016) can be viewed as a soft form of $k$ -nearest neighbors that computes attention and sums over the support examples to form a predictive distribution over classes.
+- Prototypical Networks (Snell et al., 2017) computes class means (prototypes) and forms a predictive distribution based on Euclidean distance to the prototypes. It can be viewed as a Gaussian classifier operating in an embedding space.
+- MAML (Finn et al., 2017) performs one or a few steps of gradient descent on the support set and then makes predictions on the query set, backpropagating through the gradient descent procedure. For this baseline, we simply quote the classification accuracy reported by Patacchiola et al. (2020).
+- RelationNet (Sung et al., 2018), rather than using a predefined distance metric as in Matching Networks or Prototypical Networks, learns a deep distance metric as the output of a neural network that accepts the latent representations of both examples as input. It is trained to minimize the squared error of its output predictions.
+- Deep Kernel Transfer (DKT) (Patacchiola et al., 2020) relies on least squares classification (Rifkin & Klautau, 2004) to maintain tractability of Gaussian process posterior inference. In DKT, a separate binary classification task is formed for each class in one-vs-rest fashion by treating labels in $\{-1, +1\}$ as continuous targets. We include the results of DKT with the cosine kernel as implemented by Patacchiola et al. (2020), which is parameterized slightly differently from the version we used in (47):
+
+$$
+k _ {\mathrm {d k t}} ^ {\cos} \left(\mathbf {x}, \mathbf {x} ^ {\prime}; \boldsymbol {\theta}, \alpha , \nu\right) = \operatorname {s o f t p l u s} (\alpha) \cdot \operatorname {s o f t p l u s} (\nu) \cdot \frac {g _ {\boldsymbol {\theta}} \left(\mathbf {x}\right) ^ {\top} g _ {\boldsymbol {\theta}} \left(\mathbf {x} ^ {\prime}\right)}{\| g _ {\boldsymbol {\theta}} (\mathbf {x}) \| \| g _ {\boldsymbol {\theta}} \left(\mathbf {x} ^ {\prime}\right) \|}. \tag {46}
+$$
+
+- Bayesian MAML (Yoon et al., 2018) relies on Stein Variational Gradient Descent (SVGD) (Liu & Wang, 2016) to get an approximate posterior distribution in weight-space. We compare to both the non-chaser version, which optimizes cross-entropy of query predictions, and the chaser version, which optimizes mean squared error between the approximate posterior on the support set and the approximate posterior on the merged support & query set. The non-chaser version is therefore related to predictive likelihood methods and the chaser version is more analogous to the marginal likelihood methods. For the non-chaser version, we used 20 particles and 1 step of adaptation at both train and test time. For the chaser version, we also used 20 particles. At train time, the chaser took 1 step and the leader 1 additional step. At test time, we used 5 steps of adaptation. Due to the slow performance of this method, we followed the advice of Yoon et al. (2018) and only performed adaptation on the final layer of weights, which may help explain the drop in performance relative to MAML. The authors released Tensorflow code for regression only, so we reimplemented this baseline for classification in PyTorch.
+
+- Amortized Bayesian Meta-Learning (ABML) (Ravi & Beatson, 2019) performs a few steps of Bayes-by-backprop (Blundell et al., 2015) in order to infer a fully-factorized approximate posterior over the weights. The authors did not release code and so we implemented our own version of ABML in PyTorch. We found the weighting on the inner and outer KL divergences to be important for achieving good performance. We took the negative log likelihood to be mean cross entropy and used an inner KL weight of 0.01 and an outer KL weight of 0.001. These values were arrived upon by doing a small amount of hyperparameter tuning on the Omniglot $\rightarrow$ EMNIST dataset. We used $\alpha = 1.0$ and $\beta = 0.01$ for the Gamma prior over the weights. We only applied ABML to the weights of the network; the biases were learned as point estimates. We used 4 steps of adaptation and took 5 samples when computing expectations (using any more than this did not fit into GPU memory). We used the local reparameterization trick (Kingma et al., 2015) and flipout (Wen et al., 2018) when computing expectations in order to reduce variance. In order to match the architecture used by Ravi & Beatson (2019), we trained this baseline with 32 filters throughout the classification network. We trained each 1-shot ABML model for 800 epochs and each 5-shot ABML model for 600 epochs as the learning had not converged within the epoch limits specified in Section G.3.
+
+- Logistic Softmax GP (Galy-Fajou et al., 2020) is the multi-class Gaussian process classification method that relies on the logistic softmax likelihood. Galy-Fajou et al. (2020) did not consider few-shot, but we use the same objectives described in Section 4.4 to adapt this method to FSC. In addition, we used the cosine kernel (see Section H for a description) that we found to work best with our OVE PG GPs. For this method, we found it important to learn a constant mean function (rather than a zero mean) in order to improve calibration.
+
+# G.3 TRAINING DETAILS
+
+All methods employed the commonly-used Conv4 architecture (Vinyals et al., 2016) (see Table 4 for a detailed specification), except ABML, which used 32 filters throughout. All of our experiments used the Adam (Kingma & Ba, 2015) optimizer with learning rate $10^{-3}$ . During training, all models used epochs consisting of 100 randomly sampled episodes. A single gradient descent step on the encoder network and relevant hyperparameters was taken per episode. All 1-shot models were trained for 600 epochs and 5-shot models for 400 epochs, except for ABML, which was trained for an extra 200 epochs. Each episode contained 5 classes (5-way) and 16 query examples. At test time, 15 query examples were used for each episode. Early stopping was performed by monitoring accuracy on the validation set. The validation set was not used for retraining.
+
+We train both marginal likelihood and predictive likelihood versions of our models. For Pólya-Gamma sampling we use the PyPólyaGamma package$^3$. During training, we use a single Gibbs step ( $T = 1$ ). For evaluation, we run the chains for $T = 50$ steps. In both training and evaluation, we use $M = 20$ parallel Gibbs chains to reduce variance.
+
+Table 4: Specification of Conv4 architecture. Conv2d layers are $3 \times 3$ with stride 1 and same padding. MaxPool2d layers are $2 \times 2$ with stride 2 and valid padding.
+
+| Output Size | Layers |
+| --- | --- |
+| 1 × 28 × 28 | Input image |
+| 64 × 14 × 14 | Conv2d, BatchNorm2d, ReLU, MaxPool2d |
+| 64 × 7 × 7 | Conv2d, BatchNorm2d, ReLU, MaxPool2d |
+| 64 × 3 × 3 | Conv2d, BatchNorm2d, ReLU, MaxPool2d |
+| 64 × 1 × 1 | Conv2d, BatchNorm2d, ReLU, MaxPool2d |
+| 64 | Flatten |
+
+(a) Omniglot $\rightarrow$ EMNIST dataset.
+
+| Output Size | Layers |
+| --- | --- |
+| 3 × 84 × 84 | Input image |
+| 64 × 42 × 42 | Conv2d, BatchNorm2d, ReLU, MaxPool2d |
+| 64 × 21 × 21 | Conv2d, BatchNorm2d, ReLU, MaxPool2d |
+| 64 × 10 × 10 | Conv2d, BatchNorm2d, ReLU, MaxPool2d |
+| 64 × 5 × 5 | Conv2d, BatchNorm2d, ReLU, MaxPool2d |
+| 1600 | Flatten |
+
+(b) All other datasets.
+
+# H EFFECT OF KERNEL CHOICE ON CLASSIFICATION ACCURACY
+
+In this section, we examine the effect of kernel choice on classification accuracy for our proposed One-vs-Each (OVE) Pólya-Gamma GPs.
+
+Cosine Kernel. In the main paper, we showed results for the following kernel, which we refer to as the "cosine" kernel due to its resemblance to cosine similarity:
+
+$$
+k ^ {\cos} (\mathbf {x}, \mathbf {x} ^ {\prime}; \boldsymbol {\theta}, \alpha) = \exp (\alpha) \frac {g _ {\boldsymbol {\theta}} (\mathbf {x}) ^ {\top} g _ {\boldsymbol {\theta}} \left(\mathbf {x} ^ {\prime}\right)}{\| g _ {\boldsymbol {\theta}} (\mathbf {x}) \| \| g _ {\boldsymbol {\theta}} \left(\mathbf {x} ^ {\prime}\right) \|}, \tag {47}
+$$
+
+where $g_{\theta}(\cdot)$ is a deep neural network that outputs a fixed-dimensional encoded representation of the input and $\alpha$ is the scalar log output scale. Both $\theta$ and $\alpha$ are considered hyperparameters and are learned simultaneously as shown in Algorithm 1. We found that this kernel works well for a range of datasets and shot settings. We note that the use of cosine similarity is reminiscent of the approach taken by the Baseline++ method of Chen et al. (2019), which computes the softmax over cosine similarities to class weights.
+
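The cosine kernel of Eq. (47) operates on the encoder outputs, so given precomputed encodings it reduces to a few lines of NumPy. This sketch assumes the encodings are already available (the encoder itself is omitted), and `cosine_kernel` is an illustrative name.

```python
import numpy as np

def cosine_kernel(Gx, Gz, alpha):
    """Eq. (47): exp(alpha) times the cosine similarity of encoded inputs.
    Gx: (N, D) encodings g_theta(X); Gz: (M, D) encodings g_theta(Z);
    alpha: scalar log output scale."""
    Xn = Gx / np.linalg.norm(Gx, axis=1, keepdims=True)
    Zn = Gz / np.linalg.norm(Gz, axis=1, keepdims=True)
    return np.exp(alpha) * Xn @ Zn.T
```

Because the encodings are normalized, every diagonal entry of $k^{\cos}(\mathbf{x},\mathbf{x})$ equals $\exp(\alpha)$, so $\alpha$ directly controls the prior variance of the logits.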
+Here we consider three additional kernels: linear, RBF, and normalized RBF.
+
+Linear Kernel. The linear kernel is defined as follows:
+
+$$
+k ^ {\mathrm {l i n}} \left(\mathbf {x}, \mathbf {x} ^ {\prime}; \boldsymbol {\theta}, \alpha\right) = \frac {1}{D} \exp (\alpha) g _ {\boldsymbol {\theta}} \left(\mathbf {x}\right) ^ {\top} g _ {\boldsymbol {\theta}} \left(\mathbf {x} ^ {\prime}\right), \tag {48}
+$$
+
+where $D$ is the output dimensionality of $g_{\theta}(\mathbf{x})$ . We apply this dimensionality scaling because the dot product between $g_{\theta}(\mathbf{x})$ and $g_{\theta}(\mathbf{x}^{\prime})$ may be large depending on $D$ .
+
+RBF Kernel. The RBF (also known as squared exponential) kernel can be defined as follows:
+
+$$
+k ^ {\mathrm {r b f}} \left(\mathbf {x}, \mathbf {x} ^ {\prime}; \boldsymbol {\theta}, \alpha , \ell\right) = \exp (\alpha) \exp \left(- \frac {1}{2 D \exp (\ell) ^ {2}} \| g _ {\boldsymbol {\theta}} (\mathbf {x}) - g _ {\boldsymbol {\theta}} \left(\mathbf {x} ^ {\prime}\right) \| ^ {2}\right), \tag {49}
+$$
+
+where $\ell$ is the log lengthscale parameter (as with $\alpha$ , we learn $\ell$ alongside $\theta$ ).
+
+Normalized RBF Kernel. Finally, we consider a normalized RBF kernel similar in spirit to the cosine kernel:
+
+$$
+k ^ {\mathrm {r b f - n o r m}} (\mathbf {x}, \mathbf {x} ^ {\prime}; \boldsymbol {\theta}, \alpha , \ell) = \exp (\alpha) \exp \left(- \frac {1}{2 \exp (\ell) ^ {2}} \left\| \frac {g _ {\boldsymbol {\theta}} (\mathbf {x})}{\| g _ {\boldsymbol {\theta}} (\mathbf {x}) \|} - \frac {g _ {\boldsymbol {\theta}} \left(\mathbf {x} ^ {\prime}\right)}{\| g _ {\boldsymbol {\theta}} \left(\mathbf {x} ^ {\prime}\right) \|} \right\| ^ {2}\right). \tag {50}
+$$
+
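The three additional kernels, Eqs. (48)-(50), can be sketched over precomputed encodings in the same way as the cosine kernel; the function names are illustrative.

```python
import numpy as np

def linear_kernel(Gx, Gz, alpha):
    """Eq. (48): scaled dot product, divided by encoding dimensionality D."""
    D = Gx.shape[1]
    return np.exp(alpha) / D * Gx @ Gz.T

def rbf_kernel(Gx, Gz, alpha, ell):
    """Eq. (49): squared exponential on encodings; ell is the log lengthscale,
    with the squared distance scaled by D."""
    D = Gx.shape[1]
    sq = ((Gx[:, None, :] - Gz[None, :, :]) ** 2).sum(-1)
    return np.exp(alpha) * np.exp(-sq / (2.0 * D * np.exp(ell) ** 2))

def rbf_norm_kernel(Gx, Gz, alpha, ell):
    """Eq. (50): RBF on L2-normalized encodings (no D scaling)."""
    Xn = Gx / np.linalg.norm(Gx, axis=1, keepdims=True)
    Zn = Gz / np.linalg.norm(Gz, axis=1, keepdims=True)
    sq = ((Xn[:, None, :] - Zn[None, :, :]) ** 2).sum(-1)
    return np.exp(alpha) * np.exp(-sq / (2.0 * np.exp(ell) ** 2))
```

Note that since $\|\hat{\mathbf{g}}-\hat{\mathbf{g}}'\|^2 = 2 - 2\cos(\mathbf{g},\mathbf{g}')$ for unit vectors, the normalized RBF kernel is a monotone transformation of the cosine similarity, which is why we describe it as similar in spirit to the cosine kernel.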
+The results of our Pólya-Gamma OVE GPs with different kernels can be found in Tables 5 and 6. In general, we find that the cosine kernel works best overall, with the exception of Omniglot $\rightarrow$ EMNIST, where RBF does best.
+
+Table 5: Classification accuracy for Pólya-Gamma OVE GPs (our method) using different kernels. Cosine is overall the best, followed closely by linear. RBF-based kernels perform worse, except for the Omniglot $\rightarrow$ EMNIST dataset. Evaluation is performed on 5 randomly generated sets of 600 test episodes. Standard deviation of the mean accuracy is also shown. ML = Marginal Likelihood, PL = Predictive Likelihood.
+
+| Kernel | Objective | CUB 1-shot | CUB 5-shot | mini-ImageNet 1-shot | mini-ImageNet 5-shot |
+| --- | --- | --- | --- | --- | --- |
+| Cosine | ML | 63.98 ± 0.43 | 77.44 ± 0.18 | 50.02 ± 0.35 | 64.58 ± 0.31 |
+| Linear | ML | 62.48 ± 0.26 | 77.94 ± 0.21 | 50.81 ± 0.30 | 66.66 ± 0.45 |
+| RBF | ML | 58.49 ± 0.40 | 75.50 ± 0.18 | 50.33 ± 0.26 | 64.62 ± 0.37 |
+| RBF (normalized) | ML | 62.75 ± 0.32 | 78.71 ± 0.08 | 50.26 ± 0.31 | 64.84 ± 0.39 |
+| Cosine | PL | 60.11 ± 0.26 | 79.07 ± 0.05 | 48.00 ± 0.24 | 67.14 ± 0.23 |
+| Linear | PL | 60.44 ± 0.39 | 78.54 ± 0.19 | 47.29 ± 0.31 | 66.66 ± 0.36 |
+| RBF | PL | 56.18 ± 0.69 | 77.96 ± 0.19 | 48.06 ± 0.28 | 66.66 ± 0.39 |
+| RBF (normalized) | PL | 59.78 ± 0.34 | 78.42 ± 0.13 | 47.51 ± 0.20 | 66.42 ± 0.36 |
+
+Table 6: Cross-domain classification accuracy for Pólya-Gamma OVE GPs (our method) using different kernels. The experimental setup is the same as Table 5.
+
+| Kernel | Objective | Omniglot→EMNIST 1-shot | Omniglot→EMNIST 5-shot | mini-ImageNet→CUB 1-shot | mini-ImageNet→CUB 5-shot |
+| --- | --- | --- | --- | --- | --- |
+| Cosine | ML | 68.43 ± 0.67 | 86.22 ± 0.20 | 39.66 ± 0.18 | 55.71 ± 0.31 |
+| Linear | ML | 72.42 ± 0.49 | 88.27 ± 0.20 | 39.61 ± 0.19 | 55.07 ± 0.29 |
+| RBF | ML | 78.05 ± 0.38 | 88.98 ± 0.16 | 36.99 ± 0.07 | 51.75 ± 0.27 |
+| RBF (normalized) | ML | 75.51 ± 0.47 | 88.86 ± 0.16 | 38.42 ± 0.16 | 54.20 ± 0.13 |
+| Cosine | PL | 77.00 ± 0.50 | 87.52 ± 0.19 | 37.49 ± 0.11 | 57.23 ± 0.31 |
+| Linear | PL | 75.87 ± 0.43 | 88.77 ± 0.10 | 36.83 ± 0.27 | 56.46 ± 0.22 |
+| RBF | PL | 74.62 ± 0.35 | 89.87 ± 0.13 | 35.06 ± 0.25 | 55.12 ± 0.21 |
+| RBF (normalized) | PL | 76.01 ± 0.31 | 89.42 ± 0.16 | 37.50 ± 0.28 | 56.80 ± 0.39 |
+
+# I ADDITIONAL CALIBRATION RESULTS
+
+In Figure 9, we include calibration results for mini-Imagenet and Omniglot $\rightarrow$ EMNIST. They follow similar trends to the results presented in Section 5.2.
+
+
+
+
+
+
+Figure 9: Reliability diagrams, expected calibration error, maximum calibration error, and Brier scores for 5-shot 5-way tasks on mini-Imagenet, Omniglot $\rightarrow$ EMNIST, and mini-Imagenet $\rightarrow$ CUB. Metrics are computed on 3,000 random tasks from the test set.
+
+# J QUANTITATIVE ROBUSTNESS TO INPUT NOISE RESULTS
+
+In this section we include quantitative results for the robustness to input noise results presented in Figure 2. Results for Gaussian noise are shown in Table 7, impulse noise in Table 8, and defocus blur in Table 9.
+
+Table 7: Accuracy (%) and Brier Score when applying Gaussian noise corruption of severity 5 to both the support and query set of test-time episodes. Results were evaluated across 1,000 randomly generated 5-shot 5-way tasks.
+
+| Method | CUB Acc. (↑) | CUB Brier (↓) | mini-ImageNet Acc. (↑) | mini-ImageNet Brier (↓) | mini-ImageNet→CUB Acc. (↑) | mini-ImageNet→CUB Brier (↓) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Feature Transfer | 30.45 | 0.775 | 22.58 | 0.799 | 22.75 | 0.799 |
+| Baseline++ | 22.60 | 0.798 | 23.82 | 0.797 | 24.13 | 0.797 |
+| MatchingNet | 26.72 | 0.803 | 24.80 | 0.797 | 23.59 | 0.804 |
+| ProtoNet | 32.28 | 0.778 | 29.97 | 0.781 | 32.30 | 0.779 |
+| RelationNet | 25.23 | 0.799 | 23.69 | 0.800 | 20.00 | 0.800 |
+| DKT + Cosine | 29.54 | 0.779 | 27.78 | 0.792 | 31.94 | 0.782 |
+| Bayesian MAML | 22.79 | 0.905 | 20.52 | 0.963 | 20.46 | 0.949 |
+| Bayesian MAML (Chaser) | 20.20 | 1.133 | 20.41 | 1.118 | 21.39 | 1.039 |
+| LSM GP + Cosine (ML) | 27.92 | 0.787 | 22.43 | 0.798 | 22.36 | 0.799 |
+| LSM GP + Cosine (PL) | 31.21 | 0.772 | 31.77 | 0.768 | 34.74 | 0.754 |
+| OVE PG GP + Cosine (ML) [ours] | 32.27 | 0.774 | 29.99 | 0.776 | 29.97 | 0.784 |
+| OVE PG GP + Cosine (PL) [ours] | 33.01 | 0.771 | 33.29 | 0.760 | 31.41 | 0.764 |
+
+Table 8: Accuracy (%) and Brier Score when applying impulse noise corruption of severity 5 to both the support and query set of test-time episodes. Results were evaluated across 1,000 randomly generated 5-shot 5-way tasks.
+
+| Method | CUB Acc. (↑) | CUB Brier (↓) | mini-ImageNet Acc. (↑) | mini-ImageNet Brier (↓) | mini-ImageNet→CUB Acc. (↑) | mini-ImageNet→CUB Brier (↓) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Feature Transfer | 30.20 | 0.776 | 23.54 | 0.798 | 22.87 | 0.799 |
+| Baseline++ | 28.05 | 0.790 | 23.72 | 0.798 | 25.58 | 0.795 |
+| MatchingNet | 28.25 | 0.790 | 23.80 | 0.803 | 23.21 | 0.811 |
+| ProtoNet | 32.12 | 0.774 | 28.81 | 0.783 | 32.70 | 0.775 |
+| RelationNet | 25.23 | 0.799 | 23.13 | 0.800 | 20.00 | 0.800 |
+| DKT + Cosine | 29.74 | 0.778 | 29.11 | 0.789 | 32.26 | 0.781 |
+| Bayesian MAML | 22.76 | 0.903 | 20.50 | 0.970 | 20.56 | 0.950 |
+| Bayesian MAML (Chaser) | 20.25 | 1.172 | 20.51 | 1.116 | 21.45 | 1.022 |
+| LSM GP + Cosine (ML) | 28.18 | 0.787 | 21.82 | 0.799 | 23.64 | 0.797 |
+| LSM GP + Cosine (PL) | 32.10 | 0.769 | 30.22 | 0.776 | 35.09 | 0.751 |
+| OVE PG GP + Cosine (ML) [ours] | 31.41 | 0.778 | 29.66 | 0.778 | 30.28 | 0.783 |
+| OVE PG GP + Cosine (PL) [ours] | 33.36 | 0.772 | 33.23 | 0.761 | 32.06 | 0.762 |
+
+Table 9: Accuracy (%) and Brier Score when applying defocus blur corruption of severity 5 to both the support and query set of test-time episodes. Results were evaluated across 1,000 randomly generated 5-shot 5-way tasks.
+
+| Method | CUB Acc. (↑) | CUB Brier (↓) | mini-ImageNet Acc. (↑) | mini-ImageNet Brier (↓) | mini-ImageNet→CUB Acc. (↑) | mini-ImageNet→CUB Brier (↓) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Feature Transfer | 38.03 | 0.734 | 33.06 | 0.791 | 33.47 | 0.792 |
+| Baseline++ | 42.55 | 0.710 | 35.89 | 0.761 | 39.88 | 0.740 |
+| MatchingNet | 44.43 | 0.682 | 34.43 | 0.754 | 35.95 | 0.741 |
+| ProtoNet | 46.78 | 0.676 | 36.92 | 0.737 | 41.45 | 0.714 |
+| RelationNet | 40.81 | 0.759 | 30.11 | 0.790 | 25.69 | 0.794 |
+| DKT + Cosine | 45.34 | 0.695 | 38.29 | 0.737 | 45.17 | 0.703 |
+| Bayesian MAML | 42.65 | 0.697 | 30.63 | 0.808 | 37.32 | 0.736 |
+| Bayesian MAML (Chaser) | 40.66 | 0.881 | 29.93 | 1.121 | 31.33 | 1.125 |
+| LSM GP + Cosine (ML) | 45.37 | 0.706 | 34.10 | 0.769 | 39.66 | 0.753 |
+| LSM GP + Cosine (PL) | 48.55 | 0.690 | 39.46 | 0.737 | 43.15 | 0.714 |
+| OVE PG GP + Cosine (ML) [ours] | 46.46 | 0.701 | 37.65 | 0.775 | 43.48 | 0.723 |
+| OVE PG GP + Cosine (PL) [ours] | 49.44 | 0.695 | 38.95 | 0.780 | 43.66 | 0.720 |
\ No newline at end of file
diff --git a/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/images.zip b/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..5b3358d6ace720765a5a77459573b971d0d689b2
--- /dev/null
+++ b/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3bad3681a8c3e19ed71556c5318c31fbb5cc72db1394e5f2c7272cd0ee7a4fd1
+size 1786485
diff --git a/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/layout.json b/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..808fe6a897f699ed7c2ae6461243e7e4accee4ed
--- /dev/null
+++ b/bayesianfewshotclassificationwithonevseachplyagammaaugmentedgaussianprocesses/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ffb9ef3238a240ab9a8f075709e42d59900ea0816ec91647f06b52c0465b54e
+size 900857
diff --git a/benchmarksfordeepoffpolicyevaluation/d9f491a2-5714-40c0-be0a-de7214aaadf3_content_list.json b/benchmarksfordeepoffpolicyevaluation/d9f491a2-5714-40c0-be0a-de7214aaadf3_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..be7a2383ffce94500812c5a339097fe4bda687e7
--- /dev/null
+++ b/benchmarksfordeepoffpolicyevaluation/d9f491a2-5714-40c0-be0a-de7214aaadf3_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8395c2336b55854060199d1bd61c1282bd3beb4b483c2b1c19af698f52684b80
+size 125190
diff --git a/benchmarksfordeepoffpolicyevaluation/d9f491a2-5714-40c0-be0a-de7214aaadf3_model.json b/benchmarksfordeepoffpolicyevaluation/d9f491a2-5714-40c0-be0a-de7214aaadf3_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c19234e1fb6a0eda0dab5ac76bc2d85eae0cf171
--- /dev/null
+++ b/benchmarksfordeepoffpolicyevaluation/d9f491a2-5714-40c0-be0a-de7214aaadf3_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7216c36637deb8cc0f98e804b1e3296a6c60d543047de158641e5ae461344e5
+size 146677
diff --git a/benchmarksfordeepoffpolicyevaluation/d9f491a2-5714-40c0-be0a-de7214aaadf3_origin.pdf b/benchmarksfordeepoffpolicyevaluation/d9f491a2-5714-40c0-be0a-de7214aaadf3_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..93aff0a00a02a234db0b1c7011b4544a58cf1c9b
--- /dev/null
+++ b/benchmarksfordeepoffpolicyevaluation/d9f491a2-5714-40c0-be0a-de7214aaadf3_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0512b054358af8ad87f64d67d0c453af3a053052e9625bc0626afd1f323d7982
+size 4798780
diff --git a/benchmarksfordeepoffpolicyevaluation/full.md b/benchmarksfordeepoffpolicyevaluation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9ab4535aaeedf52772ea934916bd8905efb128df
--- /dev/null
+++ b/benchmarksfordeepoffpolicyevaluation/full.md
@@ -0,0 +1,381 @@
+# BENCHMARKS FOR DEEP OFF-POLICY EVALUATION
+
+Justin Fu $^{1}$ Mohammad Norouzi $^{2}$ Ofir Nachum $^{2}$ George Tucker $^{2}$
+Ziyu Wang $^{2}$ Alexander Novikov $^{3}$ Mengjiao Yang $^{2}$ Michael R. Zhang $^{2}$
+Yutian Chen $^{3}$ Aviral Kumar $^{1}$ Cosmin Paduraru $^{3}$ Sergey Levine $^{1}$ Tom Le Paine $^{3}$
+
+$^{1}$ UC Berkeley ${}^{2}$ Google Brain ${}^{3}$ DeepMind justinfu@berkeley.edu, {mnorouzi, ofirnachum, gjt, tpaine}@google.com
+
+# ABSTRACT
+
+Off-policy evaluation (OPE) holds the promise of being able to leverage large, offline datasets for both evaluating and selecting complex policies for decision making. The ability to learn offline is particularly important in many real-world domains, such as in healthcare, recommender systems, or robotics, where online data collection is an expensive and potentially dangerous process. Being able to accurately evaluate and select high-performing policies without requiring online interaction could yield significant benefits in safety, time, and cost for these applications. While many OPE methods have been proposed in recent years, comparing results between papers is difficult because currently there is a lack of a comprehensive and unified benchmark, and measuring algorithmic progress has been challenging due to the lack of difficult evaluation tasks. In order to address this gap, we present a collection of policies that in conjunction with existing offline datasets can be used for benchmarking off-policy evaluation. Our tasks include a range of challenging high-dimensional continuous control problems, with wide selections of datasets and policies for performing policy selection. The goal of our benchmark is to provide a standardized measure of progress that is motivated from a set of principles designed to challenge and test the limits of existing OPE methods. We perform an evaluation of state-of-the-art algorithms and provide open-source access to our data and code to foster future research in this area $\dagger$ .
+
+# 1 INTRODUCTION
+
+Reinforcement learning algorithms can acquire effective policies for a wide range of problems through active online interaction, such as in robotics (Kober et al., 2013), board games and video games (Tesauro, 1995; Mnih et al., 2013; Vinyals et al., 2019), and recommender systems (Aggarwal et al., 2016). However, this sort of active online interaction is often impractical for real-world problems, where active data collection can be costly (Li et al., 2010), dangerous (Hauskrecht & Fraser, 2000; Kendall et al., 2019), or time consuming (Gu et al., 2017). Batch (or offline) reinforcement learning has been studied extensively in domains such as healthcare (Thapa et al., 2005; Raghu et al., 2018), recommender systems (Dudik et al., 2014; Theocharous et al., 2015; Swaminathan et al., 2017), education (Mandel et al., 2014), and robotics (Kalashnikov et al., 2018). A major challenge with such methods is the off-policy evaluation (OPE) problem, where one must evaluate the expected performance of policies solely from offline data. This is critical for several reasons, including providing high-confidence guarantees prior to deployment (Thomas et al., 2015), and performing policy improvement and model selection (Bottou et al., 2013; Doroudi et al., 2017).
+
+The goal of this paper is to provide a standardized benchmark for evaluating OPE methods. Although considerable theoretical (Thomas & Brunskill, 2016; Swaminathan & Joachims, 2015; Jiang & Li, 2015; Wang et al., 2017; Yang et al., 2020) and practical progress (Gilotte et al., 2018; Nie et al., 2019; Kalashnikov et al., 2018) on OPE algorithms has been made in a range of different domains, there are few broadly accepted evaluation tasks that combine complex, high-dimensional problems
+
+commonly explored by modern deep reinforcement learning algorithms (Bellemare et al., 2013; Brockman et al., 2016) with standardized evaluation protocols and metrics. Our goal is to provide a set of tasks spanning a range of difficulties, exercising a variety of design properties, and including policies with different behavioral patterns, in order to establish a standardized framework for comparing OPE algorithms. We put particular emphasis on large datasets, long-horizon tasks, and task complexity to facilitate the development of scalable algorithms that can solve high-dimensional problems.
+
+Our primary contribution is the Deep Off-Policy Evaluation (DOPE) benchmark. DOPE is designed to measure the performance of OPE methods by 1) evaluating on challenging control tasks with properties known to be difficult for OPE methods, but which occur in real-world scenarios, 2) evaluating across a range of policies with different values, to directly measure performance on policy evaluation, ranking and selection, and 3) evaluating in ideal and adversarial settings in terms of dataset coverage and support. These factors are independent of task difficulty, but are known to have a large impact on OPE performance. To achieve 1, we selected tasks based on a set of design principles outlined in Section 3.1. To achieve 2, for each task we include 10 to 96 policies for evaluation and devise an evaluation protocol that measures policy evaluation, ranking, and selection as outlined in Section 3.2. To achieve 3, we provide two domains with differing dataset coverage and support properties described in Section 4. Finally, to enable an easy-to-use research platform, we provide the datasets, target policies, evaluation API, as well as the recorded results of state-of-the-art algorithms (presented in Section 5) as open-source.
+
+# 2 BACKGROUND
+
+We briefly review the off-policy evaluation (OPE) problem setting. We consider Markov decision processes (MDPs), defined by a tuple $(\mathcal{S},\mathcal{A},\mathcal{T},R,\rho_0,\gamma)$ , with state space $\mathcal{S}$ , action space $\mathcal{A}$ , transition distribution $\mathcal{T}(s'|s,a)$ , initial state distribution $\rho_0(s)$ , reward function $R(s,a)$ and discount factor $\gamma \in (0,1]$ . In reinforcement learning, we are typically concerned with optimizing or estimating the performance of a policy $\pi(a|s)$ .
+
+The performance of a policy is commonly measured by the policy value $V^{\pi}$ , defined as the expected sum of discounted rewards:
+
+$$
+V^{\pi} := \mathbb{E}_{s_0 \sim \rho_0,\, s_{1:\infty}, a_{0:\infty} \sim \pi} \left[ \sum_{t=0}^{\infty} \gamma^{t} R(s_t, a_t) \right]. \tag{1}
+$$
+
+If we have access to state and action samples collected from a policy $\pi$ , then we can use the sample mean of observed returns to estimate the value function above. However, in off-policy evaluation we are typically interested in estimating the value of a policy when the data is collected from a separate behavior policy $\pi_B(a|s)$ . This setting can arise, for example, when data is being generated online from another process, or in the purely offline case when we have a historical dataset.
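To make the distinction concrete, the on-policy sample-mean estimator of Eq. 1 can be sketched as follows (a minimal illustration in our own notation; the function names and episode format are not part of the benchmark API):

```python
def discounted_return(rewards, gamma):
    """Discounted return of one episode: sum over t of gamma^t * r_t (cf. Eq. 1)."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def mc_value_estimate(reward_sequences, gamma):
    """Sample-mean estimate of V^pi from episodes generated by pi itself.

    This on-policy estimator is exactly what OPE cannot use: offline data
    comes from a behavior policy pi_B that may differ from pi.
    """
    returns = [discounted_return(rs, gamma) for rs in reward_sequences]
    return sum(returns) / len(returns)
```

The OPE problem is to approximate this quantity when the episodes were generated by $\pi_B$ rather than $\pi$.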
+
+
+Figure 1: In Off-Policy Evaluation (top) the goal is to estimate the value of a single policy given only data. Offline Policy Selection (bottom) is a closely related problem: given a set of N policies, attempt to pick the best given only data.
+
+In this work we consider the latter, purely offline setting. The typical setup for this problem formulation is that we are provided with a discount $\gamma$ , a dataset of trajectories collected from a behavior policy $\mathcal{D} = \{(s_0, a_0, r_0, s_1, \ldots)\}$ , and optionally the action probabilities for the behavior policy $\pi_B(a_t | s_t)$ . In many practical applications, logging action propensities is not possible, for example, when the behavior policy is a mix of ML and hard-coded business logic. For this reason, we focus on the setting without propensities to encourage future work on behavior-agnostic OPE methods. For the methods that require propensities, we estimate the propensities with behavior cloning.
+
+The objective can take multiple flavors, as shown in Fig. 1. A common task in OPE is to estimate the performance, or value, of a policy $\pi$ (which may not be the same as $\pi_B$ ) so that the estimated
+
+value is as close as possible to $V^{\pi}$ under a metric such as MSE or absolute error. A second task is to perform policy selection, where the goal is to select the best policy or set of policies out of a group of candidates. This setup corresponds to how OPE is commonly used in practice, which is to find the best performing strategy out of a pool when online evaluation is too expensive to be feasible.
+
+# 3 DOPE: DEEP OFF-POLICY EVALUATION
+
+The goal of the Deep Off-Policy Evaluation (DOPE) benchmark is to provide tasks that are challenging and effective measures of progress for OPE methods, yet easy to use in order to better facilitate research. Therefore, we design our benchmark around a set of properties which are known to be difficult for existing OPE methods in order to gauge their shortcomings, and keep all tasks amenable to simulation in order for the benchmark to be accessible and easy to evaluate.
+
+# 3.1 TASK PROPERTIES
+
+We describe our motivating properties for selecting tasks for the benchmark as follows:
+
+High Dimensional Spaces (H) High-dimensionality is a key feature in many real-world domains where it is difficult to perform feature engineering, such as in robotics, autonomous driving, and more. In these problems, it becomes challenging to accurately estimate quantities such as the value function without the use of high-capacity models such as neural networks and large datasets with wide state coverage. Our benchmark contains complex continuous-space tasks which exercise these challenges.
+
+Long Time-Horizon (L) Long time horizon tasks are known to present difficult challenges for OPE algorithms. Some algorithms have difficulty doing credit assignment for these tasks. This can be made worse as the state dimension or action dimension increases.
+
+Sparse Rewards (R) Sparse reward tasks increase the difficulty of credit assignment and add exploration challenges, which may interact with data coverage in the offline setting. We include a range of robotics and navigation tasks which are difficult to solve due to reward sparsity.
+
+Temporally extended control (T) The ability to make decisions hierarchically is a major challenge in many reinforcement learning applications. We include two navigation tasks which require high-level planning in addition to low-level control in order to simulate the difficulty in such problems.
+
+# 3.2 EVALUATION PROTOCOL
+
+The goal of DOPE is to provide metrics for policy ranking, evaluation and selection. Many existing OPE methods have only been evaluated on point estimates of value such as MSE, but policy selection is an important, practical use-case of OPE. In order to explicitly measure the quality of using OPE for policy selection, we provide a set of policies with varying values, and devise two metrics that measure how well OPE methods can rank policies.
+
+For each task we include a dataset of logged experiences $\mathcal{D}$ and a set of policies $\{\pi_1,\pi_2,\dots,\pi_N\}$ with varying values. For each policy, OPE algorithms must use $\mathcal{D}$ to produce an estimate of the policy's value. For evaluation of these estimates, we provide "ground truth values" $\{V^{\pi_1}, V^{\pi_2}, \dots, V^{\pi_N}\}$ that are computed by running each policy for $M \geq 1000$ episodes, where $M$ is chosen to be large enough to make the error bars on the ground truth values negligible. The estimated values are then compared to these ground truth values using three different metrics encompassing both policy evaluation and selection (illustrated in Figure 2; see Appendix A.1 for mathematical definitions).
+
+
+Figure 2: Error is a natural measure for off-policy evaluation. However for policy selection, it is sufficient to (i) rank the policies as measured by rank correlation, or (ii) select a policy with the lowest regret.
+
+Absolute Error This metric measures the accuracy of a value estimate rather than its usefulness for ranking. Error is the most commonly used metric to assess the performance of OPE algorithms. We opted to use absolute error instead of MSE to be robust to outliers.
+
+Regret@k This metric measures how much worse the best policies identified by the estimates are than the best policy in the entire set. It is computed by identifying the top-k policies according to the estimated returns. Regret@k is the difference between the actual expected return of the best policy in the entire set, and the actual value of the best policy in the top-k set.
+
+Rank correlation This metric directly measures how well estimated values rank policies, by computing the correlation between ordinal rankings according to the OPE estimates and ordinal rankings according to the ground truth values.
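Assuming no ties among policy values, the three metrics can be sketched in a few lines of Python (illustrative implementations of our own; Appendix A.1 gives the formal definitions used in the benchmark):

```python
def absolute_error(v_true, v_est):
    """Mean absolute error between estimated and ground-truth policy values."""
    return sum(abs(t - e) for t, e in zip(v_true, v_est)) / len(v_true)

def regret_at_k(v_true, v_est, k):
    """True value of the best policy overall, minus the best true value
    among the top-k policies ranked by the estimates."""
    top_k = sorted(range(len(v_est)), key=lambda i: v_est[i], reverse=True)[:k]
    return max(v_true) - max(v_true[i] for i in top_k)

def rank_correlation(v_true, v_est):
    """Spearman correlation between the two ordinal rankings (no ties assumed)."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    n = len(v_true)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(v_true), ranks(v_est)))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

A perfect estimator attains zero absolute error, zero regret, and rank correlation 1; an estimator that reverses the true ranking attains rank correlation -1 even if its point estimates are close in magnitude.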
+
+# 4 DOMAINS
+
+DOPE contains two domains designed to provide a more comprehensive picture of how well OPE methods perform in different settings. These two domains are constructed using two benchmarks previously proposed for offline reinforcement learning: RL Unplugged (Gulcehre et al., 2020) and D4RL (Fu et al., 2020), and reflect the challenges found within them.
+
+The DOPE RL Unplugged domain is constrained in two important ways: 1) the data is always generated using online RL training, ensuring there is adequate coverage of the state-action space, and 2) the policies are generated by applying offline RL algorithms to the same dataset we use for evaluation, ensuring that the behavior policy and evaluation policies induce similar state-action distributions. Using this domain, we hope to understand how OPE methods perform as task complexity increases from simple Cartpole tasks to controlling a Humanoid body, while the data conditions are held close to ideal.
+
+On the other hand, the DOPE D4RL domain has: 1) data from various sources (including random exploration, human teleoperation, and RL-trained policies with limited exploration), which results in varying levels of coverage of the state-action space, and 2) policies that are generated using online RL algorithms, making it less likely that the behavior and evaluation policies share similar induced state-action distributions. Both of these factors result in distribution shift, which is known to be challenging for OPE methods even in simple tasks. Using this domain, we hope to measure how well OPE methods work in more practical data settings.
+
+# 4.1 DOPE RL UNPLUGGED
+
+DeepMind Control Suite (Tassa et al., 2018) is a set of control tasks implemented in MuJoCo (Todorov et al., 2012). We consider the subset included in RL Unplugged. This subset covers a range of difficulties, from Cartpole swingup, a simple task with a single degree of freedom, to Humanoid run, which involves control of a complex body with 21 degrees of freedom. All tasks use the default feature representation of the system state, including proprioceptive information such as joint positions and velocities, and additional sensor information and target positions where appropriate. The observation dimension ranges from 5 to 67.
+
+Datasets and policies We train four offline RL algorithms (D4PG (Barth-Maron et al., 2018), ABM (Siegel et al., 2020), CRR (Wang et al., 2020) and behavior cloning), varying their hyperparameters. For each algorithm-task-hyperparameter combination, we train an agent with 3 random seeds on the DM Control Suite dataset from RL Unplugged and record policy snapshots at exponentially increasing intervals (after 25K learner steps, 50K, 100K, 200K, etc.). Following Gulcehre et al. (2020), we consider a deterministic policy for D4PG and stochastic policies for BC, ABM and CRR. The datasets are taken from the RL Unplugged benchmark, where they were created by training multiple (online) RL agents and collecting both successful and unsuccessful episodes throughout training. All offline RL algorithms are implemented using the Acme framework (Hoffman et al., 2020).
+
+# 4.2 DOPE D4RL
+
+Gym-MuJoCo tasks. Gym-MuJoCo consists of several continuous control tasks implemented within the MuJoCo simulator (Todorov et al., 2012) and provided in the OpenAI Gym (Brockman et al., 2016) benchmark for online RL. We include the HalfCheetah, Hopper, Walker2D, and Ant tasks. We include this domain primarily for comparison with past works, as a vast array of popular RL
+
+| Statistics | cartpole swingup | cheetah run | finger turn hard | fish swim | humanoid run | walker stand | walker walk | manipulator insert ball | manipulator insert peg |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Dataset size | 40K | 300K | 500K | 200K | 3M | 200K | 200K | 1.5M | 1.5M |
+| State dim. | 5 | 17 | 12 | 24 | 67 | 24 | 24 | 44 | 44 |
+| Action dim. | 1 | 6 | 2 | 5 | 21 | 6 | 6 | 5 | 5 |
+| Properties | - | H, L | H, L | H, L | H, L | H, L | H, L | H, L, T | H, L, T |
+
+| Statistics | maze2d | antmaze | halfcheetah | hopper | walker | ant | hammer | door | relocate | pen |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Dataset size | 1/2/4M | 1M | 1M | 1M | 1M | 1M | 11K/1M | 7K/1M | 10K/1M | 5K/500K |
+| # datasets | 1 | 1 | 5 | 5 | 5 | 5 | 3 | 3 | 3 | 3 |
+| State dim. | 4 | 29 | 17 | 11 | 17 | 111 | 46 | 39 | 39 | 45 |
+| Action dim. | 2 | 8 | 6 | 3 | 6 | 8 | 26 | 28 | 30 | 24 |
+| Properties | T | T, R | H | H | H | H | H, R | H, R | H, R | H, R |
+
+Table 1: Task statistics for RL Unplugged tasks (top) and D4RL tasks (bottom). Dataset size is the number of $(s,a,r,s^{\prime})$ tuples. For each dataset, we note the properties it possesses: high dimensional spaces $(\mathbf{H})$ , long time-horizon $(\mathbf{L})$ , sparse rewards $(\mathbf{R})$ , temporally extended control $(\mathbf{T})$ .
+
+
+Figure 3: Online evaluation of policy checkpoints for 4 Offline RL algorithms with 3 random seeds. We observe a large degree of variability between the behavior of algorithms on different tasks. Without online evaluation, tuning the hyperparameters (e.g., choice of Offline RL algorithm and policy checkpoint) is challenging. This highlights the practical importance of Offline policy selection when online evaluation is not feasible. See Figure A.7 for additional tasks.
+
+
+
+
+
+
+
+methods have been evaluated and developed on these tasks (Schulman et al., 2015; Lillicrap et al., 2015; Schulman et al., 2017; Fujimoto et al., 2018; Haarnoja et al., 2018).
+
+Gym-MuJoCo datasets and policies. For each task, in order to explore the effect of varying distributions, we include 5 datasets originally proposed by Fu et al. (2020). Three correspond to different performance levels of the agent: "random", "medium", and "expert". We additionally include a mixture of medium and expert data, labeled "medium-expert", and data collected from the replay buffer of an agent trained up to the medium level of performance, labeled "medium-replay". For policies, we take evenly-spaced snapshots from the training run of a Soft Actor-Critic agent, which cover a range of performance between random and expert.
+
+
+
+Maze2D and AntMaze tasks. Maze2D and AntMaze are two maze navigation tasks originally proposed in D4RL (Fu et al., 2020). The domain consists of 3 mazes ranging from easy to hard ("umaze", "medium", "large"), and two morphologies: a 2D ball in Maze2D and the "Ant" robot of the Gym benchmark in AntMaze. For Maze2D, we provide a less challenging reward computed based on the distance to a fixed goal. For the AntMaze environment, reward is given only upon reaching the fixed goal.
+
+Maze2D and AntMaze datasets and policies. Datasets for both morphologies consist of undirected data navigating randomly to different goal locations. The datasets for Maze2D are collected by using a high-level planner to command waypoints to a low-level PID controller in order to reach randomly selected goals. The dataset in AntMaze is generated using the same high-level planner, but the low-level planner is replaced with a goal-conditioned policy trained to reach arbitrary waypoints. Both of these datasets are generated from non-Markovian policies, as the high-level controller maintains a history of waypoints reached in order to construct a plan to the goal. We provide policies for all environments except "antmaze-large" by taking training snapshots obtained while running the DAPG algorithm (Rajeswaran et al., 2017). Because obtaining high-performing policies for "antmaze-large" was challenging, we instead used imitation learning on a large amount of expert data to generate evaluation policies. This expert data is obtained by collecting additional trajectories that reach the goal using a high-level waypoint planner in conjunction with a low-level goal-conditioned policy (the same method used to generate the dataset; see Sec. 5 of Fu et al. (2020)).
+
+Adroit tasks. The Adroit domain is a realistic simulation based on the Shadow Hand robot, first proposed by Rajeswaran et al. (2017). There are 4 tasks in this domain: opening a door ("door"), pen twirling ("pen"), moving a ball to a target location ("relocate"), and hitting a nail with a hammer ("hammer"). These tasks all contain sparse rewards and are difficult to learn without demonstrations.
+
+Adroit datasets and policies. We include 3 datasets for each task. The "human" dataset consists of a small amount of human demonstrations performing the task. The "expert" dataset consists of data collected from an expert trained via DAPG (Rajeswaran et al., 2017). Finally, the "cloned" dataset contains a mixture of human demonstrations and data collected from an imitation learning algorithm trained on the demonstrations. For policies, we include 11 policies collected from snapshots while running the DAPG algorithm, which range from random performance to expert performance.
+
+# 5 BASELINES AND RESULTS
+
+The goal of our evaluation is two-fold. First, we wish to measure the performance of a variety of existing algorithms to provide baselines and reference numbers for future research. Second, we wish to identify shortcomings in these approaches to reveal promising directions for future research.
+
+# 5.1 BASELINES
+
+We selected six methods to evaluate, which cover a variety of approaches that have been explored for the OPE problem.
+
+Fitted Q-Evaluation (FQE) As in Le et al. (2019), we train a neural network to estimate the value of the evaluation policy $\pi$ by bootstrapping from $Q(s', \pi(s'))$ . We tried two different implementations, one from Kostrikov & Nachum (2020) and another from Paine et al. (2020), labeled FQE-L2 and FQE-D respectively to reflect different choices in loss function and parameterization.
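As a rough illustration of the bootstrapping involved, a tabular analogue of FQE can be written as below. This is our own sketch, not the paper's implementation: the actual baselines regress neural networks, and a proper tabular version would average over duplicate $(s, a)$ pairs rather than overwrite them.

```python
def fitted_q_evaluation(dataset, policy, gamma, n_iters=50):
    """Tabular FQE sketch: iterate Q(s, a) <- r + gamma * Q(s', pi(s')).

    dataset: iterable of (s, a, r, s_next, done) tuples collected by pi_B.
    policy:  the evaluation policy pi, mapping a state to an action.
    """
    q = {}
    for _ in range(n_iters):
        new_q = {}
        for s, a, r, s_next, done in dataset:
            # Bootstrap from the current Q estimate at the action pi would take.
            bootstrap = 0.0 if done else q.get((s_next, policy(s_next)), 0.0)
            new_q[(s, a)] = r + gamma * bootstrap
        q = new_q
    return q
```

The policy value estimate is then the average of $Q(s_0, \pi(s_0))$ over initial states.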
+
+Model-Based (MB) Similar to Paduraru (2007), we train dynamics and reward models on transitions from the offline dataset $\mathcal{D}$ . Our models are deep neural networks trained to maximize the log likelihood of the next state and reward given the current state and action, similar to models from successful model-based RL algorithms (Chua et al., 2018; Janner et al., 2019). We follow the setup detailed in Zhang et al. (2021). We include both the feed-forward and auto-regressive models labeled MB-FF and MB-AR respectively. To evaluate a policy, we compute the return using simulated trajectories generated by the policy under the learned dynamics model.
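Given a learned model, the evaluation step amounts to rolling the policy out inside the model and averaging the simulated returns. A sketch, where `policy`, `model`, and `reward_fn` are hypothetical interfaces standing in for the learned networks:

```python
def mb_value_estimate(policy, model, reward_fn, start_states, gamma, horizon):
    """Estimate V^pi by simulating pi inside a learned dynamics model.

    model(s, a) predicts the next state; reward_fn(s, a) predicts the reward.
    Both are placeholders for the learned neural networks.
    """
    total = 0.0
    for s in start_states:
        ret, discount = 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)
            ret += discount * reward_fn(s, a)
            s = model(s, a)
            discount *= gamma
        total += ret
    return total / len(start_states)
```

Errors in the learned model compound over the rollout horizon, which is one reason long-horizon tasks (property L) are challenging for this family of methods.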
+
+Importance Sampling (IS) We perform importance sampling with a learned behavior policy. We use the implementation from Kostrikov & Nachum (2020), which uses self-normalized (also known as weighted) step-wise importance sampling (Precup, 2000). Since the behavior policy is not known explicitly, we learn an estimate of it via a max-likelihood objective over the dataset $\mathcal{D}$ , as advocated by Xie et al. (2018); Hanna et al. (2019). In order to be able to compute log-probabilities when the target policy is deterministic, we add artificial Gaussian noise with standard deviation 0.01 for all deterministic target policies.
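The self-normalized step-wise estimator can be sketched as follows (our own simplified rendering under the assumption of finite trajectories; `log_pi` and `log_pi_b` are illustrative callables giving target and estimated-behavior log-probabilities):

```python
import math

def snis_stepwise(trajectories, log_pi, log_pi_b, gamma):
    """Self-normalized (weighted) step-wise importance sampling.

    trajectories: list of trajectories, each a list of (s, a, r) tuples.
    At each time step, rewards are weighted by the cumulative product of
    policy ratios up to that step, normalized by the sum of those weights.
    """
    horizon = max(len(traj) for traj in trajectories)
    value = 0.0
    for t in range(horizon):
        weights, rewards = [], []
        for traj in trajectories:
            if t >= len(traj):
                continue
            # Cumulative log-ratio of target to behavior policy up to time t.
            log_w = sum(log_pi(s, a) - log_pi_b(s, a) for s, a, _ in traj[: t + 1])
            weights.append(math.exp(log_w))
            rewards.append(traj[t][2])
        z = sum(weights)
        if z > 0:
            value += (gamma ** t) * sum(w * r for w, r in zip(weights, rewards)) / z
    return value
```

When the target and behavior policies coincide, all weights equal one and the estimator reduces to the sample mean of discounted returns.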
+
+
+Figure 4: DOPE RL Unplugged: mean overall performance of baselines.
+
+
+
+
+
+
+Figure 5: DOPE D4RL: mean overall performance of baselines.
+
+
+
+
+
+Doubly-Robust (DR) We perform weighted doubly-robust policy evaluation (Thomas & Brunskill, 2016) using the implementation of Kostrikov & Nachum (2020). Specifically, this method combines the IS technique above with a value estimator for variance reduction. The value estimator is learned using deep FQE with an L2 loss function. More advanced approaches that trade variance for bias exist (e.g., MAGIC (Thomas & Brunskill, 2016)), but we leave implementing them to future work.
+
+DICE This method uses a saddle-point objective to estimate marginalized importance weights $d^{\pi}(s,a) / d^{\pi_B}(s,a)$ ; these weights are then used to compute a weighted average of reward over the offline dataset, and this serves as an estimate of the policy's value in the MDP. We use the implementation from Yang et al. (2020) corresponding to the algorithm BestDICE.
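Once the marginalized weights have been estimated, the value estimate itself is simple; the hard part is the saddle-point optimization that produces the weights. A sketch of the final step, under the discounted-occupancy convention (our own simplification, using a self-normalized weighted average):

```python
def dice_value_estimate(rewards, weights, gamma):
    """Value estimate from marginalized importance weights w ~ d^pi / d^pi_B.

    Self-normalized weighted average of dataset rewards, scaled by the
    effective horizon 1 / (1 - gamma) of the discounted occupancy measure.
    """
    weighted = sum(w * r for w, r in zip(weights, rewards))
    return weighted / (sum(weights) * (1.0 - gamma))
```

Because the weights are per-(s, a) rather than per-trajectory, this estimator avoids the exponential-in-horizon variance of step-wise importance sampling.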
+
+Variational Power Method (VPM) This method runs a variational power iteration algorithm to estimate the importance weights $d^{\pi}(s,a) / d^{\pi_B}(s,a)$ without knowledge of the behavior policy. It then estimates the target policy value using a weighted average of rewards, similar to the DICE method. Our implementation is based on the same network and hyperparameters for the OPE setting as in Wen et al. (2020). We further tune the hyperparameters, including the regularization parameter $\lambda$ , learning rates $\alpha_{\theta}$ and $\alpha_{v}$ , and number of iterations, on the Cartpole swingup task using the ground-truth policy value, and then fix them for all other tasks.
+
+# 5.2 RESULTS
+
+To facilitate aggregate metrics and comparisons between tasks and between DOPE RL Unplugged and DOPE D4RL, we normalize the returns and estimated returns to range between 0 and 1. For each set of policies we compute the worst value $V_{worst} = \min \{V^{\pi_1}, V^{\pi_2}, \dots, V^{\pi_N}\}$ and best value $V_{best} = \max \{V^{\pi_1}, V^{\pi_2}, \dots, V^{\pi_N}\}$ and normalize the returns and estimated returns according to $x' = (x - V_{worst}) / (V_{best} - V_{worst})$ .
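In code, this normalization is a standard min-max rescaling applied per policy set:

```python
def normalize_returns(values):
    """Min-max normalize policy values to [0, 1]:
    x' = (x - V_worst) / (V_best - V_worst)."""
    v_worst, v_best = min(values), max(values)
    return [(v - v_worst) / (v_best - v_worst) for v in values]
```

The same transformation (with the ground-truth V_worst and V_best) is applied to the estimated returns so that errors are comparable across tasks.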
+
+We present results averaged across DOPE RL Unplugged in Fig. 4, and results for DOPE D4RL in Fig. 5. Overall, no evaluated algorithm attains near-oracle performance under any metric (absolute error, regret, or rank correlation). Because the dataset is finite, we do not expect that achieving oracle performance is possible. Nevertheless, based on recent progress on this benchmark (e.g., Zhang et al. (2021)), we hypothesize that the benchmark has room for improvement, making it suitable for driving further improvements on OPE methods and facilitating the development of OPE algorithms that can provide reliable estimates on the types of high-dimensional problems that we consider.
+
+While all algorithms achieve sub-optimal performance, some perform better than others. We find that on the DOPE RL Unplugged tasks, model-based methods (MB-AR, MB-FF) and direct value-based methods (FQE-D, FQE-L2) significantly outperform importance sampling methods (VPM, DICE, IS) across all metrics. This is somewhat surprising, as DICE and VPM have shown promising results in other settings. We hypothesize that this is due to the relationship between the behavior data and evaluation policies, which is different from standard OPE settings. Recall that in DOPE RL Unplugged the behavior data is collected from an online RL algorithm and the evaluation policies are learned via offline RL from the behavior data. In our experience, all methods work better when the behavior policy is a noisy/perturbed version of the evaluation policy. Moreover, MB and FQE-based methods may
+
+
+Figure 6: Rank correlation for each baseline algorithm for each RL Unplugged task considered.
+
+
+Figure 7: Scatter plots of estimate vs ground truth return for MB-AR and FQE-D on selected tasks.
+
+implicitly benefit from the architectural and optimization advancements made in policy optimization settings, which focus on similar environments and where these methods are more popular than importance sampling approaches. Note that within the MB and FQE methods, design details can create a significant difference in performance. For example model architecture (MB-AR vs MB-FF) and implementation differences (FQE-D vs FQE-L2) show differing performance on certain tasks.
+
+On DOPE D4RL, direct value-based methods still do well, with FQE-L2 performing best on the Absolute Error and Regret@1 metrics. However, there are cases where other methods outperform FQE. Notably, IS and DR outperform FQE-L2 under the rank correlation metric. As expected, there is a clear performance gap between DOPE RL Unplugged and DOPE D4RL. While both domains have challenging tasks, algorithms perform better under the more ideal conditions of DOPE RL Unplugged than under the challenging conditions of DOPE D4RL (0.69 vs 0.25 rank correlation, respectively).
+
+In Fig. 6 we show the rank correlation for each task in DOPE RL Unplugged. Most tasks follow the overall trends, but we highlight a few exceptions. 1) Importance sampling is among the best methods for the humanoid run task, significantly outperforming direct value-based methods. 2) While MB-AR and FQE-D are similar overall, there are a few tasks where the difference is large; for example, FQE-D outperforms MB-AR on finger turn hard and manipulator insert ball, whereas MB-AR outperforms FQE-D on cartpole swingup, fish swim, humanoid run, and manipulator insert peg. We show the scatter plots for MB-AR and FQE-D on these tasks in Fig. 7, which highlights different failure modes: when MB-AR performs worse, it assigns similar values to all policies; when FQE-D performs worse, it severely over-estimates the values of poor policies.
+
+We present more detailed results, separated by task, in Appendix A.2. Note in particular how in Table A.2.2, which shows the regret@1 metric for different D4RL tasks, the particular choice of dataset for the Gym-MuJoCo, Adroit, and AntMaze domains causes a significant difference in the performance of OPE methods. This indicates the importance of evaluating multiple distinct datasets, with different data distribution properties (e.g., more narrow datasets, such as expert data, vs. broader datasets, such as random data), as no tested method is reliably robust to the effects of dataset variation.
+
+High-dimensional tasks requiring temporally extended control were also challenging, as highlighted by the performance on the AntMaze domain. No algorithm was able to achieve a good absolute error value on such tasks, and importance sampling was the only method able to achieve a correlation consistently above zero, suggesting that these more complex tasks are a particularly important area for future methods to focus on.
+
+# 6 RELATED WORK
+
+Off-policy evaluation (OPE) has been studied extensively across a range of different domains, from healthcare (Thapa et al., 2005; Raghu et al., 2018; Nie et al., 2019), to recommender systems (Li et al., 2010; Dudík et al., 2014; Theocharous et al., 2015), and robotics (Kalashnikov et al., 2018). While a full survey of OPE methods is outside the scope of this article, broadly speaking we can categorize OPE methods into groups based on the use of importance sampling (Precup, 2000), value functions (Sutton et al., 2009; Migliavacca et al., 2010; Sutton et al., 2016; Yang et al., 2020), and learned transition models (Paduraru, 2007), though a number of methods combine two or more of these components (Jiang & Li, 2015; Thomas & Brunskill, 2016; Munos et al., 2016). A significant body of work in OPE is also concerned with providing statistical guarantees (Thomas et al., 2015). Our focus instead is on empirical evaluation – while theoretical analysis is likely to be a critical part of future OPE research, combining such analysis with empirical demonstration on broadly accepted and standardized benchmarks is likely to facilitate progress toward practically useful algorithms.
+
+Current evaluation of OPE methods is based around several metrics, including error in predicting the true return of the evaluated policy (Voloshin et al., 2019), correlation between the evaluation output and actual returns (Irpan et al., 2019), and ranking and model selection metrics (Doroudi et al., 2017). As there is no single accepted metric used by the entire community, we provide a set of candidate metrics along with our benchmark, with a detailed justification in Section 5. Our work is closely related to Paine et al. (2020), which studies OPE in a similar setting; however, in our work we present a benchmark for the community and compare a range of OPE methods. Outside of OPE, standardized benchmark suites have led to considerable standardization and progress in RL (Stone & Sutton, 2001; Dutech et al., 2005; Riedmiller et al., 2007). The Arcade Learning Environment (ALE) (Bellemare et al., 2013) and OpenAI Gym (Brockman et al., 2016) have been widely used to compare online RL algorithms to good effect. More recently, Gulcehre et al. (2020); Fu et al. (2020) proposed benchmark tasks for offline RL. Our benchmark is based on the tasks and environments described in these two benchmarks, which we augment with a set of standardized policies for evaluation, results for a number of existing OPE methods, and standardized evaluation metrics and protocols. Voloshin et al. (2019) have recently proposed benchmarking for OPE methods on a variety of tasks ranging from tabular problems to image-based tasks in Atari. Our work differs in several key aspects. Voloshin et al. (2019) is composed entirely of discrete action tasks, whereas our benchmark focuses on continuous action tasks. Voloshin et al. (2019) assumes full support for the evaluation policy under the behavior policy data, whereas we designed our datasets and policies to ensure that different cases of dataset and policy distributions could be studied. Finally, all evaluations in Voloshin et al. (2019) are performed using the MSE metric, and they do not provide standardized datasets. In contrast, we provide a variety of policies for each problem, which enables one to evaluate metrics such as ranking for policy selection, and a wide range of standardized datasets for reproducibility.
+
+# 7 CONCLUSION
+
+We have presented the Deep Off-Policy Evaluation (DOPE) benchmark, which aims to provide a platform for studying policy evaluation and selection across a wide range of challenging tasks and datasets. In contrast to prior benchmarks, DOPE provides multiple datasets and policies, allowing researchers to study how data distributions affect performance and to evaluate a wide variety of metrics, including those that are relevant for offline policy selection. In comparing existing OPE methods, we find that no existing algorithms consistently perform well across all of the tasks, which further reinforces the importance of standardized and challenging OPE benchmarks. Moreover, algorithms that perform poorly under one metric, such as absolute error, may perform better on other metrics, such as correlation, which provides insight into what algorithms to use depending on the use case (e.g., policy evaluation vs. policy selection).
+
+We believe that OPE is an exciting area for future research, as it allows RL agents to learn from large and abundant datasets in domains where online RL methods are otherwise infeasible. We hope that our benchmark will enable further progress in this field, though important evaluation challenges remain. As the key benefit of OPE is the ability to utilize real-world datasets, a promising direction for future evaluation efforts is to devise effective ways to use such data, where a key challenge is to develop evaluation protocols that are both reproducible and accessible. This could help pave the way towards developing intelligent decision making agents that can leverage vast banks of logged information to solve important real-world problems.
+
+# REFERENCES
+
+Charu C Aggarwal et al. Recommender Systems, volume 1. Springer, 2016.
+Gabriel Barth-Maron, Matthew W. Hoffman, David Budden, Will Dabney, Dan Horgan, Dhruva TB, Alistair Muldal, Nicolas Heess, and Timothy Lillicrap. Distributional policy gradients. In International Conference on Learning Representations, 2018.
+Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47: 253-279, 2013.
+Léon Bottou, Jonas Peters, Joaquin Quinonero-Candela, Denis X Charles, D Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. Counterfactual reasoning and learning systems: The example of computational advertising. The Journal of Machine Learning Research, 14(1):3207-3260, 2013.
+Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
+Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pp. 4754-4765, 2018.
+Shayan Doroudi, Philip S Thomas, and Emma Brunskill. Importance sampling for fair policy selection. *Grantee Submission*, 2017.
+Miroslav Dudík, Dumitru Erhan, John Langford, Lihong Li, et al. Doubly robust policy evaluation and optimization. Statistical Science, 29(4):485-511, 2014.
+Alain Dutech, Timothy Edmunds, Jelle Kok, Michail Lagoudakis, Michael Littman, Martin Riedmiller, Bryan Russell, Bruno Scherrer, Richard Sutton, Stephan Timmer, et al. Reinforcement learning benchmarks and bake-offs II. Advances in Neural Information Processing Systems (NIPS), 17:6, 2005.
+Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.
+Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, pp. 1587-1596, 2018.
+Alexandre Gilotte, Clément Calauzènes, Thomas Nedelec, Alexandre Abraham, and Simon Dollé. Offline a/b testing for recommender systems. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 198-206, 2018.
+Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In 2017 IEEE international conference on robotics and automation (ICRA), pp. 3389-3396. IEEE, 2017.
+Caglar Gulcehre, Ziyu Wang, Alexander Novikov, Tom Le Paine, Sergio Gomez Colmenarejo, Konrad Zolna, Rishabh Agarwal, Josh Merel, Daniel Mankowitz, Cosmin Paduraru, et al. RL unplugged: Benchmarks for offline reinforcement learning. arXiv preprint arXiv:2006.13888, 2020.
+Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
+Josiah Hanna, Scott Niekum, and Peter Stone. Importance sampling policy evaluation with an estimated behavior policy. In International Conference on Machine Learning, pp. 2605-2613. PMLR, 2019.
+Milos Hauskrecht and Hamish Fraser. Planning treatment of ischemic heart disease with partially observable Markov decision processes. Artificial Intelligence in Medicine, 18(3):221-244, 2000.
+
+Matt Hoffman, Bobak Shahriari, John Aslanides, Gabriel Barth-Maron, Feryal Behbahani, Tamara Norman, Abbas Abdolmaleki, Albin Cassirer, Fan Yang, Kate Baumli, et al. Acme: A research framework for distributed reinforcement learning. arXiv preprint arXiv:2006.00979, 2020.
+Alexander Irpan, Kanishka Rao, Konstantinos Bousmalis, Chris Harris, Julian Ibarz, and Sergey Levine. Off-policy evaluation via off-policy classification. In Advances in Neural Information Processing Systems, pp. 5437-5448, 2019.
+Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, pp. 12519-12530, 2019.
+Nan Jiang and Lihong Li. Doubly robust off-policy value evaluation for reinforcement learning. arXiv preprint arXiv:1511.03722, 2015.
+Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, et al. Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. arXiv preprint arXiv:1806.10293, 2018.
+Alex Kendall, Jeffrey Hawke, David Janz, Przemyslaw Mazur, Daniele Reda, John-Mark Allen, Vinh-Dieu Lam, Alex Bewley, and Amar Shah. Learning to drive in a day. In 2019 International Conference on Robotics and Automation (ICRA), pp. 8248-8254. IEEE, 2019.
+Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238-1274, 2013.
+Ilya Kostrikov and Ofir Nachum. Statistical bootstrapping for uncertainty estimation in off-policy evaluation, 2020.
+Hoang M Le, Cameron Voloshin, and Yisong Yue. Batch policy learning under constraints. arXiv preprint arXiv:1903.08738, 2019.
+Lihong Li, Wei Chu, John Langford, and Robert E Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web, pp. 661-670, 2010.
+Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
+Travis Mandel, Yun-En Liu, Sergey Levine, Emma Brunskill, and Zoran Popovic. Offline policy evaluation across representations with applications to educational games. In AAMAS, pp. 1077-1084, 2014.
+Martino Migliavacca, Alessio Pecorino, Matteo Pirotta, Marcello Restelli, and Andrea Bonarini. Fitted policy search: Direct policy search using a batch reinforcement learning approach. In 3rd International Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems (ERLARS 2010), pp. 35. CiteSeer, 2010.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. In NIPS Deep Learning Workshop. 2013.
+Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc G. Bellemare. Safe and efficient off-policy reinforcement learning. arXiv preprint arXiv:1606.02647, 2016.
+Xinkun Nie, Emma Brunskill, and Stefan Wager. Learning when-to-treat policies. arXiv preprint arXiv:1905.09751, 2019.
+Cosmin Paduraru. Planning with approximate and learned models of Markov decision processes. 2007.
+
+Tom Le Paine, Cosmin Paduraru, Andrea Michi, Caglar Gulcehre, Konrad Zolna, Alexander Novikov, Ziyu Wang, and Nando de Freitas. Hyperparameter selection for offline reinforcement learning. arXiv preprint arXiv:2007.09055, 2020.
+Doina Precup. Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, pp. 80, 2000.
+Aniruddh Raghu, Omer Gottesman, Yao Liu, Matthieu Komorowski, Aldo Faisal, Finale Doshi-Velez, and Emma Brunskill. Behaviour policy estimation in off-policy policy evaluation: Calibration matters. arXiv preprint arXiv:1807.01066, 2018.
+Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, and Sergey Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. arXiv preprint arXiv:1709.10087, 2017.
+Martin Riedmiller, Jan Peters, and Stefan Schaal. Evaluation of policy gradient methods and variants on the cart-pole benchmark. In 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, pp. 254-261. IEEE, 2007.
+John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International conference on machine learning, pp. 1889-1897, 2015.
+John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+Noah Y Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdelmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, and Martin Riedmiller. Keep doing what worked: Behavioral modelling priors for offline reinforcement learning. In International Conference on Learning Representations, 2020.
+Peter Stone and Richard S Sutton. Scaling reinforcement learning toward RoboCup soccer. In ICML, volume 1, pp. 537-544. CiteSeer, 2001.
+Richard S Sutton, Hamid Reza Maei, Doina Precup, Shalabh Bhatnagar, David Silver, Csaba Szepesvári, and Eric Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 993-1000, 2009.
+Richard S Sutton, A Rupam Mahmood, and Martha White. An emphatic approach to the problem of off-policy temporal-difference learning. The Journal of Machine Learning Research, 17(1): 2603-2631, 2016.
+Adith Swaminathan and Thorsten Joachims. Counterfactual risk minimization: Learning from logged bandit feedback. In International Conference on Machine Learning, pp. 814-823, 2015.
+Adith Swaminathan, Akshay Krishnamurthy, Alekh Agarwal, Miro Dudik, John Langford, Damien Jose, and Imed Zitouni. Off-policy evaluation for slate recommendation. In Advances in Neural Information Processing Systems, pp. 3632-3642, 2017.
+Gerald Tesauro. Temporal difference learning and td-gammon. Communications of the ACM, 38(3): 58-68, 1995.
+Devinder Thapa, In-Sung Jung, and Gi-Nam Wang. Agent based decision support system using reinforcement learning under emergency circumstances. In International Conference on Natural Computation, pp. 888-892. Springer, 2005.
+Georgios Theocharous, Philip S Thomas, and Mohammad Ghavamzadeh. Personalized ad recommendation systems for life-time value optimization with guarantees. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
+Philip Thomas and Emma Brunskill. Data-efficient off-policy policy evaluation for reinforcement learning. In International Conference on Machine Learning, pp. 2139-2148, 2016.
+
+Philip S Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. High-confidence off-policy evaluation. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
+Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033. IEEE, 2012.
+Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michael Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature, 575(7782):350-354, 2019.
+Cameron Voloshin, Hoang M Le, Nan Jiang, and Yisong Yue. Empirical study of off-policy policy evaluation for reinforcement learning. arXiv preprint arXiv:1911.06854, 2019.
+Yu-Xiang Wang, Alekh Agarwal, and Miroslav Dudik. Optimal and adaptive off-policy evaluation in contextual bandits. In International Conference on Machine Learning, pp. 3589-3597. PMLR, 2017.
+Ziyu Wang, Alexander Novikov, Konrad Zolna, Jost Tobias Springenberg, Scott Reed, Bobak Shahriari, Noah Siegel, Josh Merel, Caglar Gulcehre, Nicolas Heess, and Nando de Freitas. Critic regularized regression. arXiv preprint arXiv:2006.15134, 2020.
+Junfeng Wen, Bo Dai, Lihong Li, and Dale Schuurmans. Batch stationary distribution estimation. arXiv preprint arXiv:2003.00722, 2020.
+Yuan Xie, Boyi Liu, Qiang Liu, Zhaoran Wang, Yuan Zhou, and Jian Peng. Off-policy evaluation and learning from logged bandit feedback: Error reduction via surrogate policy. arXiv preprint arXiv:1808.00232, 2018.
+Mengjiao Yang, Ofir Nachum, Bo Dai, Lihong Li, and Dale Schuurmans. Off-policy evaluation via the regularized lagrangian. arXiv preprint arXiv:2007.03438, 2020.
+Michael R Zhang, Thomas Paine, Ofir Nachum, Cosmin Paduraru, George Tucker, ziyu wang, and Mohammad Norouzi. Autoregressive dynamics models for offline policy evaluation and optimization. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=kmqjgSNXby.
+
+# A APPENDIX
+
+# A.1 METRICS
+
+The metrics we use in our paper are defined as follows:
+
+Absolute Error We evaluate policies using absolute error in order to be robust to outliers. The absolute error is defined as the absolute difference between the true value and the estimated value of a policy:
+
+$$
+\operatorname{AbsErr} = \left| V^{\pi} - \hat{V}^{\pi} \right| \tag{2}
+$$
+
+where $V^{\pi}$ is the true value of the policy, and $\hat{V}^{\pi}$ is the estimated value of the policy.
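
As a concrete sketch (with hypothetical values, not drawn from the benchmark), the per-policy absolute error and the median used to summarize each method in the tables of Appendix A.2 can be computed as:

```python
from statistics import median

def abs_errors(true_values, est_values):
    """Per-policy absolute error (Eq. 2): |V^pi - V-hat^pi|.
    Hypothetical helper for illustration, not from the DOPE codebase."""
    return [abs(v, ) if False else abs(v - v_hat)
            for v, v_hat in zip(true_values, est_values)]

# Hypothetical true and estimated returns for three policies.
errs = abs_errors([10.0, 20.0, 30.0], [12.0, 18.0, 33.0])  # [2.0, 2.0, 3.0]
summary = median(errs)  # 2.0, the per-method median reported in the tables
```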
+
+Regret@k Regret@k is the difference between the value of the best policy in the entire set and the value of the best policy in the top-k set (where the top-k set is chosen by estimated values). It can be defined as:
+
+$$
+\operatorname{Regret@}k = \max_{i \in 1:N} V_{i}^{\pi} - \max_{j \in \operatorname{topk}(1:N)} V_{j}^{\pi} \tag{3}
+$$
+
+where $\operatorname{topk}(1:N)$ denotes the indices of the top-k policies as measured by the estimated values $\hat{V}^{\pi}$.
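
The subtlety in Eq. 3 is that the top-k set is selected by the estimated values while the regret is measured in true values. A minimal sketch (hypothetical helper, not from the benchmark code):

```python
def regret_at_k(true_values, est_values, k):
    """Regret@k (Eq. 3): candidates are ranked by *estimated* value,
    but the resulting gap is measured against the best *true* value.
    Hypothetical illustration, not from the DOPE codebase."""
    # Indices of the k policies with the highest estimated values.
    topk = sorted(range(len(est_values)),
                  key=lambda i: est_values[i], reverse=True)[:k]
    return max(true_values) - max(true_values[i] for i in topk)
```

With a perfect estimator, Regret@1 is zero; Regret@N is always zero, since the top-N set necessarily contains the best policy.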
+
+Rank correlation Rank correlation (also known as Spearman's $\rho$) measures the correlation between the ordinal rankings of the value estimates and the true values. It can be written as:
+
+$$
+\operatorname{RankCorr} = \frac{\operatorname{Cov}\left(V_{1:N}^{\pi}, \hat{V}_{1:N}^{\pi}\right)}{\sigma\left(V_{1:N}^{\pi}\right)\,\sigma\left(\hat{V}_{1:N}^{\pi}\right)} \tag{4}
+$$
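
In practice, Spearman's $\rho$ is the Pearson correlation between the rank vectors of the true and estimated values. A stdlib-only sketch assuming no tied values (hypothetical helper, not from the benchmark code):

```python
from statistics import mean

def spearman_rho(true_values, est_values):
    """Spearman rank correlation (Eq. 4), assuming no ties:
    Pearson correlation applied to ranks rather than raw values."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    r_true, r_est = ranks(true_values), ranks(est_values)
    mt, me = mean(r_true), mean(r_est)
    cov = sum((a - mt) * (b - me) for a, b in zip(r_true, r_est))
    var_t = sum((a - mt) ** 2 for a in r_true)
    var_e = sum((b - me) ** 2 for b in r_est)
    return cov / (var_t * var_e) ** 0.5
```

A value of +1 means the OPE method orders policies exactly as the true returns do, which is what matters for policy selection even when absolute errors are large.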
+
+# A.2 DETAILED RESULTS
+
+Detailed results figures and tables are presented here. We show results by task in both tabular and chart form, as well as scatter plots which compare the estimated returns against the ground truth returns for every policy.
+
+# A.2.1 CHART RESULTS
+
+First we show the normalized results for each algorithm and task.
+
+
+Figure A.1: Absolute error for each baseline algorithm for each RL Unplugged task considered.
+
+
+Figure A.2: Rank correlation for each baseline algorithm for each RL Unplugged task considered.
+
+
+Figure A.3: Regret@1 for each baseline algorithm for each RL Unplugged task considered.
+
+
+Figure A.4: Absolute error for each baseline algorithm for each D4RL task domain considered.
+
+
+Figure A.5: Rank correlation for each baseline algorithm for each D4RL task domain considered.
+
+
+Figure A.6: Regret@1 for each baseline algorithm for each D4RL task domain considered.
+
+
+Figure A.7: Online evaluation of policy checkpoints for 4 Offline RL algorithms with 3 random seeds. We observe a large degree of variability between the behavior of algorithms on different tasks.
+
+
+# A.2.2 TABULAR RESULTS
+
+Next, we present the results for each task and algorithm in tabular form, with means and standard deviations reported across 3 seeds.
+
+ | | Cartpole swingup | Cheetah run | Finger turn hard | Fish swim | Humanoid run |
| Absolute Error btw. OPE and ground truth | Variational power method | 37.53 ±3.50 | 61.89 ±4.25 | 46.22 ±3.93 | 31.27 ±0.99 | 35.29 ±3.03 |
| Importance Sampling | 68.75 ±2.39 | 44.29 ±1.91 | 90.10 ±4.68 | 34.82 ±1.93 | 27.89 ±1.98 |
| Best DICE | 22.73 ±1.65 | 23.35 ±1.32 | 33.52 ±3.48 | 59.48 ±2.47 | 31.42 ±2.04 |
| Model based - FF | 6.80 ±0.85 | 13.64 ±0.59 | 35.99 ±3.00 | 4.75 ±0.23 | 30.12 ±2.40 |
| FQE (L2) | 19.02 ±1.34 | 48.26 ±1.78 | 27.91 ±1.18 | 19.82 ±1.57 | 56.28 ±3.52 |
| Doubly Robust (IS, FQE) | 24.38 ±2.51 | 40.27 ±2.05 | 25.26 ±2.48 | 20.28 ±1.90 | 53.64 ±3.68 |
| FQE (distributional) | 12.63 ±1.21 | 36.50 ±1.62 | 10.23 ±0.93 | 7.76 ±0.95 | 32.36 ±2.27 |
| Model based - AR | 5.32 ±0.54 | 4.64 ±0.46 | 22.93 ±1.72 | 4.31 ±0.22 | 20.95 ±1.61 |
| | Walker stand | Walker walk | Manipulator insert ball | Manipulator insert peg | Median ↓ |
| Absolute Error btw. OPE and ground truth | Variational power method | 96.76 ±3.59 | 87.24 ±4.25 | 79.25 ±6.19 | 21.95 ±1.17 | 46.22 |
| Importance Sampling | 66.50 ±1.90 | 67.24 ±2.70 | 29.93 ±1.10 | 12.78 ±0.66 | 44.29 |
| Best DICE | 27.58 ±3.01 | 47.28 ±3.13 | 103.45 ±5.21 | 22.75 ±3.00 | 31.42 |
| Model based - FF | 23.34 ±2.41 | 52.23 ±2.34 | 34.30 ±2.55 | 121.12 ±1.58 | 30.12 |
| FQE (L2) | 6.51 ±0.71 | 18.34 ±0.95 | 36.32 ±1.07 | 31.12 ±2.37 | 27.91 |
| Doubly Robust (IS, FQE) | 26.82 ±2.66 | 24.63 ±1.69 | 13.33 ±1.16 | 22.28 ±2.34 | 24.63 |
| FQE (distributional) | 21.49 ±1.41 | 27.57 ±1.54 | 9.75 ±1.10 | 12.66 ±1.39 | 12.66 |
| Model based - AR | 19.12 ±1.23 | 5.14 ±0.49 | 17.13 ±1.34 | 9.71 ±0.70 | 9.71 |
+
+Table A.1: Average absolute error between OPE metrics and ground truth values at a discount factor of 0.995. In each column, absolute error values that are not significantly different from the best $(p > 0.05)$ are bold faced. Methods are ordered by median.
+
+ | | Cartpole swingup | Cheetah run | Finger turn hard | Fish swim | Humanoid run |
| Rank Correlation btw. OPE and ground truth | Importance Sampling | -0.23 ±0.11 | -0.01 ±0.12 | -0.45 ±0.08 | -0.17 ±0.11 | 0.91 ±0.02 |
| Best DICE | -0.16 ±0.11 | 0.07 ±0.11 | -0.22 ±0.11 | 0.44 ±0.09 | -0.10 ±0.10 |
| Variational power method | 0.01 ±0.11 | 0.01 ±0.12 | -0.25 ±0.11 | 0.56 ±0.08 | 0.36 ±0.09 |
| Doubly Robust (IS, FQE) | 0.55 ±0.09 | 0.56 ±0.08 | 0.67 ±0.05 | 0.11 ±0.12 | -0.03 ±0.12 |
| Model based - FF | 0.83 ±0.05 | 0.64 ±0.08 | 0.08 ±0.11 | 0.95 ±0.02 | 0.35 ±0.10 |
| FQE (distributional) | 0.69 ±0.07 | 0.67 ±0.06 | 0.94 ±0.01 | 0.59 ±0.10 | 0.74 ±0.06 |
| FQE (L2) | 0.70 ±0.07 | 0.56 ±0.08 | 0.83 ±0.04 | 0.10 ±0.12 | -0.02 ±0.12 |
| Model based - AR | 0.91 ±0.02 | 0.74 ±0.07 | 0.57 ±0.09 | 0.96 ±0.01 | 0.90 ±0.02 |
| | Walker stand | Walker walk | Manipulator insert ball | Manipulator insert peg | Median ↑ |
| Rank Correlation btw. OPE and ground truth | Importance Sampling | 0.59 ±0.08 | 0.38 ±0.10 | -0.72 ±0.05 | -0.25 ±0.08 | -0.17 |
| Best DICE | -0.11 ±0.12 | -0.58 ±0.08 | 0.19 ±0.11 | -0.35 ±0.10 | -0.11 |
| Variational power method | -0.35 ±0.10 | -0.10 ±0.11 | 0.61 ±0.08 | 0.41 ±0.09 | 0.01 |
| Doubly Robust (IS, FQE) | 0.88 ±0.03 | 0.85 ±0.04 | 0.42 ±0.10 | -0.47 ±0.09 | 0.55 |
| Model based - FF | 0.82 ±0.04 | 0.80 ±0.05 | 0.06 ±0.10 | -0.56 ±0.08 | 0.64 |
| FQE (distributional) | 0.87 ±0.02 | 0.89 ±0.03 | 0.63 ±0.08 | -0.23 ±0.10 | 0.69 |
| FQE (L2) | 0.96 ±0.01 | 0.94 ±0.02 | 0.70 ±0.07 | -0.48 ±0.08 | 0.70 |
 | Model based - AR | 0.96 ±0.01 | 0.98 ±0.00 | -0.33 ±0.09 | 0.47 ±0.09 | 0.90 |
+
+Table A.2: Spearman's rank correlation $(\rho)$ coefficient (bootstrap mean $\pm$ standard deviation) between different OPE metrics and ground truth values at a discount factor of 0.995. In each column, rank correlation coefficients that are not significantly different from the best $(p > 0.05)$ are bold faced. Methods are ordered by median. Also see Table A.3 and Table A.1 for Normalized Regret@5 and Average Absolute Error results.
+
+ | Cartpole swingup | Cheetah run | Finger turn hard | Fish swim | Humanoid run |
| Regret@5 for OPE vs. ground truth | Importance Sampling | 0.73 ±0.16 | 0.40 ±0.21 | 0.64 ±0.05 | 0.12 ±0.05 | 0.31 ±0.09 |
| Best DICE | 0.68 ±0.41 | 0.27 ±0.05 | 0.44 ±0.04 | 0.35 ±0.24 | 0.84 ±0.22 |
| Variational power method | 0.50 ±0.13 | 0.37 ±0.04 | 0.45 ±0.13 | 0.02 ±0.02 | 0.56 ±0.08 |
| Doubly Robust (IS, FQE) | 0.28 ±0.05 | 0.09 ±0.05 | 0.56 ±0.12 | 0.61 ±0.12 | 0.99 ±0.00 |
| FQE (L2) | 0.06 ±0.04 | 0.17 ±0.05 | 0.30 ±0.11 | 0.50 ±0.03 | 0.99 ±0.00 |
| Model based - FF | 0.02 ±0.02 | 0.24 ±0.12 | 0.43 ±0.04 | 0.00 ±0.00 | 0.44 ±0.02 |
| FQE (distributional) | 0.03 ±0.09 | 0.11 ±0.09 | 0.10 ±0.12 | 0.49 ±0.06 | 0.24 ±0.15 |
| Model based - AR | 0.00 ±0.02 | 0.01 ±0.02 | 0.63 ±0.11 | 0.03 ±0.02 | 0.32 ±0.06 |
| Walker stand | Walker walk | Manipulator insert ball | Manipulator insert peg | Median ↓ |
| Regret@5 for OPE vs. ground truth | Importance Sampling | 0.54 ±0.11 | 0.54 ±0.23 | 0.83 ±0.05 | 0.22 ±0.03 | 0.54 |
| Best DICE | 0.24 ±0.07 | 0.55 ±0.06 | 0.44 ±0.07 | 0.75 ±0.04 | 0.44 |
| Variational power method | 0.41 ±0.02 | 0.39 ±0.02 | 0.52 ±0.20 | 0.32 ±0.02 | 0.41 |
| Doubly Robust (IS, FQE) | 0.02 ±0.01 | 0.05 ±0.07 | 0.30 ±0.10 | 0.73 ±0.01 | 0.30 |
| FQE (L2) | 0.04 ±0.02 | 0.00 ±0.02 | 0.37 ±0.07 | 0.74 ±0.01 | 0.30 |
| Model based - FF | 0.18 ±0.10 | 0.03 ±0.05 | 0.83 ±0.06 | 0.74 ±0.01 | 0.24 |
| FQE (distributional) | 0.03 ±0.03 | 0.01 ±0.02 | 0.50 ±0.30 | 0.73 ±0.01 | 0.11 |
| Model based - AR | 0.04 ±0.02 | 0.04 ±0.02 | 0.85 ±0.02 | 0.30 ±0.04 | 0.04 |
+
+Table A.3: Normalized Regret@5 (bootstrap mean ± standard deviation) for OPE methods vs. ground truth values at a discount factor of 0.995. In each column, normalized regret values that are not significantly different from the best $(p > 0.05)$ are bold faced. Methods are ordered by median.
+
+ | | Halfcheetah expert | Halfcheetah medium | Halfcheetah medium-expert | Halfcheetah medium-replay | Halfcheetah random |
| Abs. Error | IS | 1404±152 | 1217±123 | 1400±146 | 1409±154 | 1405±155 |
| VPM | 945±164 | 1374±153 | 1427±111 | 1384±148 | 1411±154 |
| Best DICE | 944±161 | 1382±130 | 1078±132 | 1440±158 | 1446±156 |
| Doubly Robust | 1025±95 | 1222±134 | 1015±103 | 1001±129 | 949±126 |
| FQE (L2) | 1031±95 | 1211±130 | 1014±101 | 1003±132 | 938±125 |
| | Antmaze large-diverse | Antmaze large-play | Antmaze medium-diverse | Antmaze medium-play | Antmaze umaze |
| Abs. Error | IS | 0.62±0.01 | 0.85±0.00 | 0.55±0.01 | 0.81±0.00 | 0.62±0.04 |
| VPM | 0.02±0.02 | 0.26±0.24 | 0.07±0.05 | 0.11±0.06 | 0.12±0.03 |
| Best DICE | 5.55±0.36 | 19.62±1.28 | 2.42±1.56 | 19.47±2.15 | 14.97±1.93 |
| Doubly Robust | 0.99±0.01 | 1.59±0.01 | 0.61±0.03 | 1.47±0.01 | 0.87±0.04 |
| FQE (L2) | 0.53±0.01 | 0.78±0.00 | 0.29±0.01 | 0.71±0.01 | 0.39±0.03 |
| | Antmaze umaze-diverse | Door cloned | Door expert | Door human | Hammer cloned |
| Abs. Error | IS | 0.14±0.02 | 891±188 | 648±122 | 870±173 | 7403±1126 |
| VPM | 0.12±0.03 | 1040±188 | 879±182 | 862±163 | 7459±1114 |
| Best DICE | 0.17±0.04 | 697±79 | 856±134 | 1108±199 | 4169±839 |
| Doubly Robust | 0.11±0.02 | 424±73 | 1353±218 | 379±65 | 6101±679 |
| FQE (L2) | 0.11±0.03 | 438±81 | 1343±84 | 389±60 | 5415±558 |
| | Hammer expert | Hammer human | Maze2d large | Maze2d medium | Maze2d umaze |
| Abs. Error | IS | 3052±608 | 7352±1118 | 45.61±10.43 | 61.29±7.78 | 50.20±9.16 |
| VPM | 7312±1117 | 7105±1107 | 44.10±10.69 | 60.30±8.37 | 62.81±8.40 |
| Best DICE | 3963±758 | 5677±936 | 42.46±9.66 | 58.97±9.57 | 21.95±4.69 |
| Doubly Robust | 3485±590 | 5768±751 | 22.94±6.82 | 23.64±4.96 | 76.93±4.42 |
| FQE (L2) | 2950±728 | 6000±612 | 24.31±6.56 | 35.11±6.33 | 79.67±4.93 |
| | Pen cloned | Pen expert | Pen human | Relocate cloned | Relocate expert |
| Abs. Error | IS | 1707±128 | 4547±222 | 3926±128 | 632±215 | 2731±147 |
| VPM | 2324±129 | 2325±136 | 1569±215 | 586±135 | 620±214 |
| Best DICE | 1454±219 | 2963±279 | 4193±244 | 1347±485 | 1095±221 |
| Doubly Robust | 1323±98 | 2013±564 | 2846±200 | 412±124 | 1193±350 |
| FQE (L2) | 1232±105 | 1057±281 | 2872±170 | 439±125 | 1351±393 |
| | Relocate human | Ant expert | Ant medium | Ant medium-expert | Ant medium-replay |
| Abs. Error | IS | 638±217 | 605±104 | 594±104 | 604±102 | 603±101 |
| VPM | 806±166 | 607±108 | 570±109 | 604±106 | 612±105 |
| Best DICE | 4526±474 | 558±108 | 495±90 | 471±100 | 583±110 |
| Doubly Robust | 606±116 | 584±114 | 345±66 | 326±66 | 421±72 |
| FQE (L2) | 593±113 | 583±122 | 345±64 | 319±67 | 410±79 |
| | Ant random | Hopper expert | Hopper medium | Hopper random | Walker2d expert |
| Abs. Error | IS | 606±103 | 106±29 | 405±48 | 412±45 | 405±62 |
| VPM | 570±99 | 442±43 | 433±44 | 438±44 | 367±68 |
| Best DICE | 530±92 | 259±54 | 215±41 | 122±16 | 437±60 |
| Doubly Robust | 404±106 | 426±99 | 307±85 | 289±50 | 519±179 |
| FQE (L2) | 398±111 | 282±76 | 283±73 | 261±42 | 453±142 |
| | Walker2d medium | Walker2d medium-expert | Walker2d medium-replay | Walker2d random | Median |
| Abs. Error | IS | 428±60 | 436±62 | 427±60 | 430±61 | 603.82 |
| VPM | 426±60 | 425±61 | 424±64 | 440±58 | 585.53 |
| Best DICE | 273±31 | 322±60 | 374±51 | 419±57 | 530.43 |
| Doubly Robust | 368±74 | 217±46 | 296±54 | 347±74 | 411.99 |
| FQE (L2) | 350±79 | 233±42 | 313±73 | 354±73 | 398.37 |
| Halfcheetah expert | Halfcheetah medium-expert | Halfcheetah medium-replay | Halfcheetah random | Door cloned |
| Rank Corr. | Best DICE | -0.44 ±0.30 | -0.08 ±0.35 | -0.15 ±0.41 | -0.70 ±0.22 | 0.18 ±0.31 |
| VPM | 0.18 ±0.35 | -0.47 ±0.29 | -0.07 ±0.36 | 0.27 ±0.36 | -0.29 ±0.36 |
| FQE (L2) | 0.78 ±0.15 | 0.62 ±0.27 | 0.26 ±0.37 | -0.11 ±0.41 | 0.55 ±0.27 |
| IS | 0.01 ±0.35 | -0.06 ±0.37 | 0.59 ±0.26 | -0.24 ±0.36 | 0.66 ±0.22 |
| Doubly Robust | 0.77 ±0.17 | 0.62 ±0.27 | 0.32 ±0.37 | -0.02 ±0.38 | 0.60 ±0.28 |
| Door expert | Hammer cloned | Hammer expert | Maze2d large | Maze2d medium |
| Rank Corr. | Best DICE | -0.06 ±0.32 | 0.35 ±0.38 | -0.42 ±0.31 | 0.56 ±0.21 | -0.64 ±0.23 |
| VPM | 0.65 ±0.23 | -0.77 ±0.22 | 0.39 ±0.31 | -0.26 ±0.33 | -0.05 ±0.39 |
| FQE (L2) | 0.89 ±0.09 | -0.15 ±0.33 | 0.29 ±0.34 | 0.30 ±0.36 | 0.16 ±0.38 |
| IS | 0.76 ±0.17 | 0.58 ±0.27 | 0.64 ±0.24 | 0.63 ±0.19 | 0.44 ±0.25 |
| Doubly Robust | 0.76 ±0.13 | -0.70 ±0.20 | 0.49 ±0.31 | 0.31 ±0.36 | 0.41 ±0.35 |
| Pen expert | Relocate expert | Ant expert | Ant medium | Ant medium-expert |
| Rank Corr. | Best DICE | -0.53 ±0.30 | -0.27 ±0.34 | -0.13 ±0.37 | -0.36 ±0.28 | -0.33 ±0.40 |
| VPM | 0.08 ±0.33 | 0.39 ±0.31 | -0.42 ±0.38 | -0.20 ±0.31 | -0.28 ±0.28 |
| FQE (L2) | -0.01 ±0.33 | -0.57 ±0.28 | -0.13 ±0.32 | 0.65 ±0.25 | 0.37 ±0.35 |
| IS | -0.45 ±0.31 | 0.52 ±0.23 | 0.14 ±0.41 | -0.17 ±0.32 | -0.21 ±0.35 |
| Doubly Robust | 0.52 ±0.28 | -0.40 ±0.24 | -0.28 ±0.32 | 0.66 ±0.26 | 0.35 ±0.35 |
| Ant medium-replay | Ant random | Hopper expert | Hopper medium | Hopper random |
| Rank Corr. | Best DICE | -0.24 ±0.39 | -0.21 ±0.35 | -0.08 ±0.32 | 0.19 ±0.33 | -0.13 ±0.39 |
| VPM | -0.26 ±0.29 | 0.24 ±0.31 | 0.21 ±0.32 | 0.13 ±0.37 | -0.46 ±0.20 |
| FQE (L2) | 0.57 ±0.28 | 0.04 ±0.33 | -0.33 ±0.30 | -0.29 ±0.33 | -0.11 ±0.36 |
| IS | 0.07 ±0.39 | 0.26 ±0.34 | 0.37 ±0.27 | -0.55 ±0.26 | 0.23 ±0.34 |
| Doubly Robust | 0.45 ±0.32 | 0.01 ±0.33 | -0.41 ±0.27 | -0.31 ±0.34 | -0.19 ±0.36 |
| Walker2d expert | Walker2d medium | Walker2d medium-expert | Walker2d medium-replay | Walker2d random |
| Rank Corr. | Best DICE | -0.37 ±0.27 | 0.12 ±0.38 | -0.34 ±0.34 | 0.55 ±0.23 | -0.19 ±0.36 |
| VPM | 0.17 ±0.32 | 0.44 ±0.21 | 0.49 ±0.37 | -0.52 ±0.25 | -0.42 ±0.34 |
| FQE (L2) | 0.35 ±0.33 | -0.09 ±0.36 | 0.25 ±0.32 | -0.19 ±0.36 | 0.21 ±0.31 |
| IS | 0.22 ±0.37 | -0.25 ±0.35 | 0.24 ±0.33 | 0.65 ±0.24 | -0.05 ±0.38 |
| Doubly Robust | 0.26 ±0.34 | 0.02 ±0.37 | 0.19 ±0.33 | -0.37 ±0.39 | 0.16 ±0.29 |
| Median | | | | |
| Rank Corr. | Best DICE | -0.19 | | | | |
| VPM | -0.05 | | | | |
| FQE (L2) | 0.21 | | | | |
| IS | 0.23 | | | | |
| Doubly Robust | 0.26 | | | | |
| Halfcheetah expert | Halfcheetah medium | Halfcheetah medium-expert | Halfcheetah medium-replay | Halfcheetah random |
| Regret@1 | Best DICE | 0.32±0.40 | 0.82±0.29 | 0.38±0.37 | 0.30±0.07 | 0.81±0.30 |
| VPM | 0.14±0.09 | 0.33±0.19 | 0.80±0.34 | 0.25±0.09 | 0.12±0.07 |
| Doubly Robust | 0.11±0.08 | 0.37±0.15 | 0.14±0.07 | 0.33±0.18 | 0.31±0.10 |
| FQE (L2) | 0.12±0.07 | 0.38±0.13 | 0.14±0.07 | 0.36±0.16 | 0.37±0.08 |
| IS | 0.15±0.08 | 0.05±0.05 | 0.73±0.42 | 0.13±0.10 | 0.31±0.11 |
| Antmaze large-diverse | Antmaze large-play | Antmaze medium-diverse | Antmaze medium-play | Antmaze umaze |
| Regret@1 | Best DICE | 0.54±0.34 | 0.96±0.13 | 0.04±0.11 | 0.09±0.10 | 0.69±0.39 |
| VPM | 0.88±0.27 | 0.45±0.30 | 0.14±0.10 | 0.03±0.08 | 0.62±0.32 |
| Doubly Robust | 0.83±0.30 | 0.93±0.21 | 0.05±0.07 | 0.17±0.31 | 0.42±0.36 |
| FQE (L2) | 0.93±0.25 | 1.00±0.03 | 0.16±0.10 | 0.05±0.19 | 0.41±0.35 |
| IS | 0.39±0.26 | 0.71±0.20 | 0.14±0.09 | 0.18±0.06 | 0.86±0.06 |
| Regret@1 | Antmaze umaze-diverse | Door cloned | Door expert | Door human | Hammer cloned |
| --- | --- | --- | --- | --- | --- |
| Best DICE | 0.42±0.28 | 0.65±0.45 | 0.37±0.27 | 0.10±0.27 | 0.67±0.48 |
| VPM | 0.63±0.32 | 0.81±0.33 | 0.03±0.03 | 0.69±0.24 | 0.72±0.39 |
| Doubly Robust | 0.79±0.14 | 0.11±0.08 | 0.05±0.07 | 0.05±0.09 | 0.78±0.38 |
| FQE (L2) | 0.64±0.37 | 0.11±0.06 | 0.03±0.03 | 0.05±0.08 | 0.36±0.39 |
| IS | 0.22±0.36 | 0.02±0.07 | 0.01±0.04 | 0.45±0.40 | 0.03±0.15 |

| Regret@1 | Hammer expert | Hammer human | Maze2d large | Maze2d medium | Maze2d umaze |
| --- | --- | --- | --- | --- | --- |
| Best DICE | 0.24±0.34 | 0.04±0.08 | 0.15±0.08 | 0.44±0.05 | 0.03±0.07 |
| VPM | 0.04±0.07 | 0.18±0.29 | 0.66±0.10 | 0.24±0.24 | 0.06±0.12 |
| Doubly Robust | 0.09±0.09 | 0.46±0.23 | 0.21±0.16 | 0.27±0.14 | 0.03±0.07 |
| FQE (L2) | 0.05±0.04 | 0.46±0.23 | 0.20±0.14 | 0.31±0.14 | 0.03±0.07 |
| IS | 0.01±0.04 | 0.19±0.30 | 0.16±0.23 | 0.15±0.15 | 0.02±0.12 |

| Regret@1 | Pen cloned | Pen expert | Pen human | Relocate cloned | Relocate expert |
| --- | --- | --- | --- | --- | --- |
| Best DICE | 0.12±0.08 | 0.33±0.20 | 0.04±0.09 | 0.96±0.18 | 0.97±0.07 |
| VPM | 0.36±0.18 | 0.25±0.13 | 0.28±0.12 | 0.11±0.29 | 0.76±0.23 |
| Doubly Robust | 0.13±0.06 | 0.05±0.07 | 0.09±0.08 | 0.18±0.27 | 0.98±0.08 |
| FQE (L2) | 0.12±0.07 | 0.11±0.14 | 0.07±0.05 | 0.29±0.42 | 1.00±0.06 |
| IS | 0.14±0.09 | 0.31±0.10 | 0.17±0.15 | 0.63±0.41 | 0.18±0.14 |

| Regret@1 | Relocate human | Ant expert | Ant medium | Ant medium-expert | Ant medium-replay |
| --- | --- | --- | --- | --- | --- |
| Best DICE | 0.97±0.11 | 0.62±0.15 | 0.43±0.10 | 0.60±0.16 | 0.64±0.13 |
| VPM | 0.77±0.18 | 0.88±0.22 | 0.40±0.21 | 0.32±0.24 | 0.72±0.43 |
| Doubly Robust | 0.17±0.15 | 0.43±0.22 | 0.12±0.18 | 0.37±0.13 | 0.05±0.09 |
| FQE (L2) | 0.17±0.14 | 0.43±0.22 | 0.12±0.18 | 0.36±0.14 | 0.05±0.09 |
| IS | 0.63±0.41 | 0.47±0.32 | 0.61±0.18 | 0.46±0.18 | 0.16±0.23 |

| Regret@1 | Ant random | Hopper expert | Hopper medium | Hopper random | Walker2d expert |
| --- | --- | --- | --- | --- | --- |
| Best DICE | 0.50±0.29 | 0.20±0.08 | 0.18±0.19 | 0.30±0.15 | 0.35±0.36 |
| VPM | 0.15±0.24 | 0.13±0.10 | 0.10±0.14 | 0.26±0.10 | 0.09±0.19 |
| Doubly Robust | 0.28±0.15 | 0.34±0.35 | 0.32±0.32 | 0.41±0.17 | 0.06±0.07 |
| FQE (L2) | 0.28±0.15 | 0.41±0.20 | 0.32±0.32 | 0.36±0.22 | 0.06±0.07 |
| IS | 0.56±0.22 | 0.06±0.03 | 0.38±0.28 | 0.05±0.05 | 0.43±0.26 |

| Regret@1 | Walker2d medium | Walker2d medium-expert | Walker2d medium-replay | Walker2d random | Median |
| --- | --- | --- | --- | --- | --- |
| Best DICE | 0.27±0.43 | 0.78±0.27 | 0.18±0.12 | 0.39±0.33 | 0.38 |
| VPM | 0.08±0.06 | 0.24±0.42 | 0.46±0.31 | 0.88±0.20 | 0.28 |
| Doubly Robust | 0.25±0.09 | 0.30±0.12 | 0.68±0.23 | 0.15±0.20 | 0.25 |
| FQE (L2) | 0.31±0.10 | 0.22±0.14 | 0.24±0.20 | 0.15±0.21 | 0.24 |
| IS | 0.70±0.39 | 0.13±0.07 | 0.02±0.05 | 0.74±0.33 | 0.18 |
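The Regret@1 values above score each OPE method by how much true return is lost when the policy it ranks first is deployed. As a rough illustration (our sketch, not the benchmark's exact implementation; the normalization convention here is an assumption), the metric can be computed as the gap between the best policy's true return and the true return of the estimator's top pick, normalized by the spread of true returns over the candidate set:

```python
def regret_at_1(estimated, true):
    """Sketch of Regret@1: true return lost by deploying the policy
    that the OPE estimator ranks first, normalized by the spread of
    true returns across the candidate policies."""
    picked = max(range(len(estimated)), key=lambda i: estimated[i])
    spread = max(true) - min(true)
    return (max(true) - true[picked]) / spread if spread else 0.0
```

Under this convention, a perfect ranking scores 0 and picking the worst policy scores 1.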
+
+# A.2.3 SCATTER PLOTS
+
+Finally, we present scatter plots of each policy's true return against its estimated return. Each point on a plot represents one evaluated policy.
+
+
+Figure A.8: Scatter plots of estimate vs ground truth return for each baseline on each task in DOPE RL Unplugged.
+
+
+Figure A.9: Scatter plots of estimate vs ground truth return for each baseline on each task in DOPE D4RL (part 1).
+
+
+Figure A.10: Scatter plots of estimate vs ground truth return for each baseline on each task in DOPE D4RL (part 2).
+
+
+Figure A.11: Scatter plots of estimate vs ground truth return for each baseline on each task in DOPE D4RL (part 3).
+
+
+Figure A.12: Scatter plots of estimate vs ground truth return for each baseline on each task in DOPE D4RL (part 4).
+
+
+Figure A.13: Scatter plots of estimate vs ground truth return for each baseline on each task in DOPE D4RL (part 5).
+
+
+Figure A.14: Scatter plots of estimate vs ground truth return for each baseline on each task in DOPE D4RL (part 6).
\ No newline at end of file
diff --git a/benchmarksfordeepoffpolicyevaluation/images.zip b/benchmarksfordeepoffpolicyevaluation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d47f009628c7e5195463e4f31f5c96c469964a0e
--- /dev/null
+++ b/benchmarksfordeepoffpolicyevaluation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:394539ffc64528c652c1a847304256d40088a78b2ca1ebc2dd0ddd49003daa5d
+size 2565211
diff --git a/benchmarksfordeepoffpolicyevaluation/layout.json b/benchmarksfordeepoffpolicyevaluation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..cf632d0ac22ef3364aaf2995df79583bfc90b7ae
--- /dev/null
+++ b/benchmarksfordeepoffpolicyevaluation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a97ef6d00c082b10ccaf97697cdef576a2f0e546ce53193dc0fe41fc130accac
+size 522922
diff --git a/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/6445afe9-2ae7-4a66-a021-d1a93c727204_content_list.json b/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/6445afe9-2ae7-4a66-a021-d1a93c727204_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..45ca3ced317f17fd1c1462c7500b7f21f985d5f6
--- /dev/null
+++ b/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/6445afe9-2ae7-4a66-a021-d1a93c727204_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e0b05f3eb86e7b1880e7f2ffaade6b15a4f3d330572b6305839812fc9963ac3d
+size 129653
diff --git a/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/6445afe9-2ae7-4a66-a021-d1a93c727204_model.json b/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/6445afe9-2ae7-4a66-a021-d1a93c727204_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..dd3e0ec5545e53d8030f95c5d5fd33860cbab5c1
--- /dev/null
+++ b/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/6445afe9-2ae7-4a66-a021-d1a93c727204_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f61361ce87e865cbaf23266b8f01f33fe713e40d0b59169dca79f9d0dd501f06
+size 166917
diff --git a/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/6445afe9-2ae7-4a66-a021-d1a93c727204_origin.pdf b/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/6445afe9-2ae7-4a66-a021-d1a93c727204_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..aa17b0070a78e04b6883acb65bb16303bf821fa2
--- /dev/null
+++ b/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/6445afe9-2ae7-4a66-a021-d1a93c727204_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03cbe8cd5b04fac15eb15df98444c3740bec3e0c075784593dad9417b7d7808f
+size 2722947
diff --git a/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/full.md b/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e984d924f4e9517ae6b2ea5c89aaa5e6bf639d88
--- /dev/null
+++ b/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/full.md
@@ -0,0 +1,577 @@
+# BERTOLOGY MEETS BIOLOGY: INTERPRETING ATTENTION IN PROTEIN LANGUAGE MODELS
+
+Jesse Vig $^{1}$ Ali Madani $^{1}$ Lav R. Varshney $^{1,2}$ Caiming Xiong $^{1}$
+
+Richard Socher$^{1}$ Nazneen Fatema Rajani$^{1}$
+
+$^{1}$ Salesforce Research, $^{2}$ University of Illinois at Urbana-Champaign
+
+{jvig,amadani,cxiong,rsocher,nazneen.rajani}@salesforce.com varshney@illinois.edu
+
+# ABSTRACT
+
+Transformer architectures have proven to learn useful representations for protein classification and generation tasks. However, these representations present challenges in interpretability. In this work, we demonstrate a set of methods for analyzing protein Transformer models through the lens of attention. We show that attention: (1) captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence, but spatially close in the three-dimensional structure, (2) targets binding sites, a key functional component of proteins, and (3) focuses on progressively more complex biophysical properties with increasing layer depth. We find this behavior to be consistent across three Transformer architectures (BERT, ALBERT, XLNet) and two distinct protein datasets. We also present a three-dimensional visualization of the interaction between attention and protein structure. Code for visualization and analysis is available at https://github.com/salesforce/provis.
+
+# 1 INTRODUCTION
+
+The study of proteins, the fundamental macromolecules governing biology and life itself, has led to remarkable advances in understanding human health and the development of disease therapies. The decreasing cost of sequencing technology has enabled vast databases of naturally occurring proteins (El-Gebali et al., 2019a), which are rich in information for developing powerful machine learning models of protein sequences. For example, sequence models leveraging principles of co-evolution, whether modeling pairwise or higher-order interactions, have enabled prediction of structure or function (Rollins et al., 2019).
+
+Proteins, as a sequence of amino acids, can be viewed precisely as a language and therefore modeled using neural architectures developed for natural language. In particular, the Transformer (Vaswani et al., 2017), which has revolutionized unsupervised learning for text, shows promise for similar impact on protein sequence modeling. However, the strong performance of the Transformer comes at the cost of interpretability, and this lack of transparency can hide underlying problems such as model bias and spurious correlations (Niven & Kao, 2019; Tan & Celis, 2019; Kurita et al., 2019). In response, much NLP research now focuses on interpreting the Transformer, e.g., the subspecialty of "BERTology" (Rogers et al., 2020), which specifically studies the BERT model (Devlin et al., 2019).
+
+In this work, we adapt and extend this line of interpretability research to protein sequences. We analyze Transformer protein models through the lens of attention, and present a set of interpretability methods that capture the unique functional and structural characteristics of proteins. We also compare the knowledge encoded in attention weights to that captured by hidden-state representations. Finally, we present a visualization of attention contextualized within three-dimensional protein structure.
+
+Our analysis reveals that attention captures high-level structural properties of proteins, connecting amino acids that are spatially close in three-dimensional structure, but apart in the underlying sequence (Figure 1a). We also find that attention targets binding sites, a key functional component of proteins (Figure 1b). Further, we show how attention is consistent with a classic measure of similarity between amino acids—the substitution matrix. Finally, we demonstrate that attention captures progressively higher-level representations of structure and function with increasing layer depth.
+
+
+(a) Attention in head 12-4, which targets amino acid pairs that are close in physical space (see inset subsequence 117D-157I) but lie apart in the sequence. Example is a de novo designed TIMbarrel (5BVL) with characteristic symmetry.
+
+
+(b) Attention in head 7-1, which targets binding sites, a key functional component of proteins. Example is HIV-1 protease (7HVP). The primary location receiving attention is 27G, a binding site for protease inhibitor small-molecule drugs.
+Figure 1: Examples of how specialized attention heads in a Transformer recover protein structure and function, based solely on language model pre-training. Orange lines depict attention between amino acids (line width proportional to attention weight; values below 0.1 hidden). Heads were selected based on correlation with ground-truth annotations of contact maps and binding sites. Visualizations based on the NGL Viewer (Rose et al., 2018; Rose & Hildebrand, 2015; Nguyen et al., 2017).
+
+In contrast to NLP, which aims to automate a capability that humans already have—understanding natural language—protein modeling also seeks to shed light on biological processes that are not fully understood. Thus we also discuss how interpretability can aid scientific discovery.
+
+# 2 BACKGROUND: PROTEINS
+
+In this section we provide background on the biological concepts discussed in later sections.
+
+Amino acids. Just as language is composed of words from a shared lexicon, every protein sequence is formed from a vocabulary of amino acids, of which 20 are commonly observed. Amino acids may be denoted by their full name (e.g., Proline), a 3-letter abbreviation (Pro), or a single-letter code $(P)$ .
+
+Substitution matrix. While word synonyms are encoded in a thesaurus, proteins that are similar in structure or function are captured in a substitution matrix, which scores pairs of amino acids on how readily they may be substituted for one another while maintaining protein viability. One common substitution matrix is BLOSUM (Henikoff & Henikoff, 1992), which is derived from co-occurrence statistics of amino acids in aligned protein sequences.
+
+Protein structure. Though a protein may be abstracted as a sequence of amino acids, it represents a physical entity with a well-defined three-dimensional structure (Figure 1). Secondary structure describes the local segments of proteins; two commonly observed types are the alpha helix and beta sheet. Tertiary structure encompasses the large-scale formations that determine the overall shape and function of the protein. One way to characterize tertiary structure is by a contact map, which describes the pairs of amino acids that are in contact (within 8 angstroms of one another) in the folded protein structure but lie apart (by at least 6 positions) in the underlying sequence (Rao et al., 2019).
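The contact-map definition above translates directly into a few lines of code. The sketch below is our illustration (not the paper's code) and assumes one representative 3-D coordinate per residue:

```python
import numpy as np

def contact_map(coords, dist_thresh=8.0, min_seq_sep=6):
    """Binary contact map from per-residue coordinates of shape (n, 3).

    A pair (i, j) is a contact if the residues lie within `dist_thresh`
    angstroms of each other but at least `min_seq_sep` positions apart
    in the sequence (the definition given in Sec. 2)."""
    coords = np.asarray(coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]      # (n, n, 3)
    dist = np.linalg.norm(diff, axis=-1)                # pairwise distances
    idx = np.arange(len(coords))
    seq_sep = np.abs(np.subtract.outer(idx, idx))       # |i - j|
    return (dist < dist_thresh) & (seq_sep >= min_seq_sep)
```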
+
+Binding sites. Proteins may also be characterized by their functional properties. Binding sites are protein regions that bind with other molecules (proteins, natural ligands, and small-molecule drugs) to carry out a specific function. For example, the HIV-1 protease is an enzyme responsible for a critical process in replication of HIV (Brik & Wong, 2003). It has a binding site, shown in Figure 1b, that is a target for drug development to ensure inhibition.
+
+Post-translational modifications. After a protein is translated from RNA, it may undergo additional modifications, e.g. phosphorylation, which play a key role in protein structure and function.
+
+# 3 METHODOLOGY
+
+Model. We demonstrate our interpretability methods on five Transformer models that were pretrained through language modeling of amino acid sequences. We primarily focus on the BERT-Base model from TAPE (Rao et al., 2019), which was pretrained on Pfam, a dataset of 31M protein sequences (El-Gebali et al., 2019b). We refer to this model as TapeBert. We also analyze four pretrained Transformer models from ProtTrans (Elnaggar et al., 2020): ProtBert and ProtBert-BFD, which are 30-layer, 16-head BERT models; ProtAlbert, a 12-layer, 64-head ALBERT (Lan et al., 2020) model; and ProtXLNet, a 30-layer, 16-head XLNet (Yang et al., 2019) model. ProtBert-BFD was pretrained on BFD (Steinegger & Söding, 2018), a dataset of 2.1B protein sequences, while the other ProtTrans models were pretrained on UniRef100 (Suzek et al., 2014), which includes 216M protein sequences. A summary of these five models is presented in Appendix A.1.
+
+Here we present an overview of BERT, with additional details on all models in Appendix A.2. BERT inputs a sequence of amino acids $\boldsymbol{x} = (x_{1},\ldots ,x_{n})$ and applies a series of encoders. Each encoder layer $\ell$ outputs a sequence of continuous embeddings $(\mathbf{h}_1^{(\ell)},\dots,\mathbf{h}_n^{(\ell)})$ using a multi-headed attention mechanism. Each attention head in a layer produces a set of attention weights $\alpha$ for an input, where $\alpha_{i,j} > 0$ is the attention from token $i$ to token $j$ , such that $\sum_{j}\alpha_{i,j} = 1$ . Intuitively, attention weights define the influence of every token on the next layer's representation for the current token. We denote a particular head by $\langle \text{layer} \rangle$ - $\langle \text{head\_index} \rangle$ , e.g. head 3-7 for the 3rd layer's 7th head.
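As a minimal sketch of the mechanism just described (ours, not the paper's code; `H`, `Wq`, and `Wk` are hypothetical embedding and projection matrices), the attention weights of a single head can be computed from a layer's embeddings, with each row of $\alpha$ summing to 1:

```python
import numpy as np

def attention_weights(H, Wq, Wk):
    """Attention weights for one head: alpha[i, j] is the attention
    from token i to token j; each row sums to 1 (Sec. 3)."""
    Q, K = H @ Wq, H @ Wk                            # query/key projections
    scores = Q @ K.T / np.sqrt(Wq.shape[1])          # scaled dot products
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    alpha = np.exp(scores)
    return alpha / alpha.sum(axis=-1, keepdims=True)
```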
+
+Attention analysis. We analyze how attention aligns with various protein properties. For properties of token pairs, e.g. contact maps, we define an indicator function $f(i,j)$ that returns 1 if the property is present in token pair $(i,j)$ (e.g., if amino acids $i$ and $j$ are in contact), and 0 otherwise. We then compute the proportion of high-attention token pairs $(\alpha_{i,j} > \theta)$ where the property is present, aggregated over a dataset $X$ :
+
+$$
+p_{\alpha}(f) = \frac{\sum_{\mathbf{x} \in \mathbf{X}} \sum_{i=1}^{|\mathbf{x}|} \sum_{j=1}^{|\mathbf{x}|} f(i,j) \cdot \mathbb{1}_{\alpha_{i,j} > \theta}}{\sum_{\mathbf{x} \in \mathbf{X}} \sum_{i=1}^{|\mathbf{x}|} \sum_{j=1}^{|\mathbf{x}|} \mathbb{1}_{\alpha_{i,j} > \theta}} \tag{1}
+$$
+
+where $\theta$ is a threshold to select for high-confidence attention weights. We also present an alternative, continuous version of this metric in Appendix B.1.
+
+For properties of individual tokens, e.g. binding sites, we define $f(i,j)$ to return 1 if the property is present in token $j$ (e.g. if $j$ is a binding site). In this case, $p_{\alpha}(f)$ equals the proportion of attention that is directed to the property (e.g. the proportion of attention focused on binding sites).
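Equation 1 amounts to a simple pass over the dataset. The sketch below is an illustrative implementation (ours), assuming each sequence's attention map and property indicator are given as matrices of the same shape:

```python
import numpy as np

def attention_agreement(attn_maps, prop_maps, theta=0.3):
    """Eq. 1: proportion of high-confidence attention weights
    (alpha_ij > theta) that fall on token pairs where the property
    indicator f(i, j) is 1, aggregated over a dataset."""
    hits = total = 0
    for alpha, f in zip(attn_maps, prop_maps):
        mask = alpha > theta          # high-confidence attention arcs
        hits += int(f[mask].sum())    # arcs aligned with the property
        total += int(mask.sum())
    return hits / total if total else 0.0
```

The single-token variant (e.g. binding sites) is recovered by broadcasting a per-token indicator into the columns of `f`.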
+
+When applying these metrics, we include two types of checks to ensure that the results are not due to chance. First, we test that the proportion of attention that aligns with particular properties is significantly higher than the background frequency of these properties, taking into account the Bonferroni correction for multiple hypotheses corresponding to multiple attention heads. Second, we compare the results to a null model, which is an instance of the model with randomly shuffled attention weights. We describe these methods in detail in Appendix B.2.
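A minimal version of the shuffled-attention null model might look like the following (an illustration only; the paper's exact procedure is described in Appendix B.2). Shuffling each row preserves the attention mass while destroying positional structure, so the resulting score approximates the agreement expected by chance:

```python
import numpy as np

def agreement(alpha, f, theta=0.3):
    """Eq. 1 score for a single attention map and property indicator."""
    mask = alpha > theta
    return float(f[mask].mean()) if mask.any() else 0.0

def shuffled_null(alpha, f, theta=0.3, n_shuffles=100, seed=0):
    """Mean agreement score over randomly shuffled attention maps."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_shuffles):
        shuf = alpha.copy()
        for row in shuf:              # permute each row in place
            rng.shuffle(row)
        scores.append(agreement(shuf, f, theta))
    return float(np.mean(scores))
```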
+
+Probing tasks. We also perform probing tasks on the model, which test the knowledge contained in model representations by using them as inputs to a classifier that predicts a property of interest (Veldhoen et al., 2016; Conneau et al., 2018; Adi et al., 2016). The performance of the probing classifier serves as a measure of the knowledge of the property that is encoded in the representation. We run both embedding probes, which assess the knowledge encoded in the output embeddings of each layer, and attention probes (Reif et al., 2019; Clark et al., 2019), which measure the knowledge contained in the attention weights for pairwise features. Details are provided in Appendix B.3.
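An attention probe of the kind described above can be approximated by a linear classifier over per-pair attention features (one feature per head). The following is a self-contained sketch (ours, using plain gradient descent rather than any particular library; the synthetic feature layout is an assumption):

```python
import numpy as np

def train_probe(X, y, lr=0.5, steps=2000):
    """Minimal logistic-regression probe. X holds one feature vector
    per token pair (e.g. the attention weight from every head) and y
    the binary property of interest (e.g. contact)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        z = np.clip(X @ w + b, -30, 30)      # clip logits for stability
        p = 1.0 / (1.0 + np.exp(-z))         # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)     # gradient of logistic loss
        b -= lr * float((p - y).mean())
    return w, b

def probe_accuracy(X, y, w, b):
    return float((((X @ w + b) > 0).astype(int) == y).mean())
```

High probe accuracy indicates that the property is linearly decodable from the features, which is how probing results are typically read.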
+
+Datasets. For our analyses of amino acids and contact maps, we use a curated dataset from TAPE based on ProteinNet (AlQuraishi, 2019; Fox et al., 2013; Berman et al., 2000; Moult et al., 2018), which contains amino acid sequences annotated with spatial coordinates (used for the contact map analysis). For the analysis of secondary structure and binding sites we use the Secondary Structure dataset (Rao et al., 2019; Berman et al., 2000; Moult et al., 2018; Klausen et al., 2019) from TAPE. We employed a taxonomy of secondary structure with three categories: Helix, Strand, and Turn/Bend, with the last two belonging to the higher-level beta sheet category (Sec. 2). We used this taxonomy to study how the model understood structurally distinct regions of beta sheets. We obtained token-level binding site and protein modification labels from the Protein Data Bank (Berman et al., 2000). For analyzing attention, we used a random subset of 5000 sequences from the training split of the
+
+
+(a) TapeBert
+
+
+(b) ProtAlbert
+
+
+(c) ProtBert
+
+
+(d) ProtBert-BFD
+Figure 2: Agreement between attention and contact maps across five pretrained Transformer models from TAPE (a) and ProtTrans (b-e). The heatmaps show the proportion of high-confidence attention weights $(\alpha_{i,j} > \theta)$ from each head that connects pairs of amino acids that are in contact with one another. In TapeBert (a), for example, we can see that $45\%$ of attention in head 12-4 (the 12th layer's 4th head) maps to contacts. The bar plots show the maximum value from each layer. Note that the vertical striping in ProtAlbert (b) is likely due to cross-layer parameter sharing (see Appendix A.3).
+
+
+(e) ProtXLNet
+
+respective datasets (note that none of the aforementioned annotations were used in model training). For the diagnostic classifier, we used the respective training splits for training and the validation splits for evaluation. See Appendix B.4 for additional details.
+
+Experimental details. We exclude attention to the [SEP] delimiter token, as it has been shown to be a "no-op" attention token (Clark et al., 2019), as well as attention to the [CLS] token, which is not explicitly used in language modeling. We only include results for attention heads where at least 100 high-confidence attention arcs are available for analysis. We set the attention threshold $\theta$ to 0.3 to select for high-confidence attention while retaining sufficient data for analysis. We truncate all protein sequences to a length of 512 to reduce memory requirements.$^{1}$
+
+We note that all of the above analyses are purely associative and do not attempt to establish a causal link between attention and model behavior (Vig et al., 2020; Grimsley et al., 2020), nor to explain model predictions (Jain & Wallace, 2019; Wiegreffe & Pinter, 2019).
+
+# 4 WHAT DOES ATTENTION UNDERSTAND ABOUT PROTEINS?
+
+# 4.1 PROTEIN STRUCTURE
+
+Here we explore the relationship between attention and tertiary structure, as characterized by contact maps (see Section 2). Secondary structure results are included in Appendix C.1.
+
+Attention aligns strongly with contact maps in the deepest layers. Figure 2 shows how attention aligns with contact maps across the heads of the five models evaluated², based on the metric defined in Equation 1. The most aligned heads are found in the deepest layers and focus up to $44.7\%$ (TapeBert), $55.7\%$ (ProtAlbert), $58.5\%$ (ProtBert), $63.2\%$ (ProtBert-BFD), and $44.5\%$ (ProtXLNet) of attention on contacts, whereas the background frequency of contacts among all amino acid pairs in the dataset is $1.3\%$. Figure 1a shows an example of the induced attention from the top head in TapeBert. We note that the model with the single most aligned head—ProtBert-BFD—is the largest model (same size as ProtBert) at 420M parameters (Appendix A.1) and it was also the only model pre-trained on the
+
+
+(a) TapeBert
+
+(b) ProtAlbert
+
+
+(c) ProtBert
+
+
+(d) ProtBert-BFD
+
+
+(e) ProtXLNet
+Figure 3: Proportion of attention focused on binding sites across five pretrained models. The heatmaps show the proportion of high-confidence attention $(\alpha_{i,j} > \theta)$ from each head that is directed to binding sites. In TapeBert (a), for example, we can see that $49\%$ of attention in head 11-6 (the 11th layer's 6th head) is directed to binding sites. The bar plots show the maximum value from each layer.
+
+largest dataset, BFD. It's possible that both factors helped the model learn more structurally-aligned attention patterns. Statistical significance tests and null models are reported in Appendix C.2.
+
+Considering the models were trained on language modeling tasks without any spatial information, the presence of these structurally-aware attention heads is intriguing. One possible reason for this emergent behavior is that contacts are more likely to biochemically interact with one another, creating statistical dependencies between the amino acids in contact. By focusing attention on the contacts of a masked position, the language models may acquire valuable context for token prediction.
+
+While there seems to be a strong correlation between the attention head output and classically-defined contacts, there are also differences. The models may have learned differing contextualized or nuanced formulations that describe amino acid interactions. These learned interactions could then be used for further discovery and investigation or repurposed for prediction tasks similar to how principles of coevolution enabled a powerful representation for structure prediction.
+
+# 4.2 BINDING SITES AND POST-TRANSLATIONAL MODIFICATIONS
+
+We also analyze how attention interacts with binding sites and post-translational modifications (PTMs), which both play a key role in protein function.
+
+Attention targets binding sites throughout most layers of the models. Figure 3 shows the proportion of attention focused on binding sites (Eq. 1) across the heads of the five models studied. Attention to binding sites is most pronounced in the ProtAlbert model (Figure 3b), which has 22 heads that focus over $50\%$ of attention on binding sites, whereas the background frequency of binding sites in the dataset is $4.8\%$. The three BERT models (Figures 3a, 3c, and 3d) also attend strongly to binding sites, with attention heads focusing up to $48.2\%$, $50.7\%$, and $45.6\%$ of attention on binding sites, respectively. Figure 1b visualizes the attention in one strongly-aligned head from the TapeBert model. Statistical significance tests and a comparison to a null model are provided in Appendix C.3.
+
+ProtXLNet (Figure 3e) also targets binding sites, but not as strongly as the other models: the most aligned head focuses $15.1\%$ of attention on binding sites, and the average head directs just $6.2\%$ of attention to binding sites, compared to $13.2\%$ , $19.8\%$ , $16.0\%$ , and $15.1\%$ for the first four models in Figure 3. It's unclear whether this disparity is due to differences in architectures or pre-training objectives; for example, ProtXLNet uses a bidirectional auto-regressive pretraining method (see Appendix A.2), whereas the other 4 models all use masked language modeling objectives.
+
+
+Figure 4: Each plot shows the percentage of attention focused on the given property, averaged over all heads within each layer. The plots, sorted by center of gravity (red dashed line), show that heads in deeper layers focus relatively more attention on binding sites and contacts, whereas attention toward specific secondary structures is more even across layers.
+
+
+Figure 5: Performance of probing classifiers by layer, sorted by task order in Figure 4. The embedding probes (orange) quantify the knowledge of the given property that is encoded in each layer's output embeddings. The attention probe (blue) shows the amount of information encoded in attention weights for the (pairwise) contact feature. Additional details are provided in Appendix B.3.
+
+Why does attention target binding sites? In contrast to contact maps, which reveal relationships within proteins, binding sites describe how a protein interacts with other molecules. These external interactions ultimately define the high-level function of the protein, and thus binding sites remain conserved even when the sequence as a whole evolves (Kinjo & Nakamura, 2009). Further, structural motifs in binding sites are mainly restricted to specific families or superfamilies of proteins (Kinjo & Nakamura, 2009), and binding sites can reveal evolutionary relationships among proteins (Lee et al., 2017). Thus binding sites may provide the model with a high-level characterization of the protein that is robust to individual sequence variation. By attending to these regions, the model can leverage this higher-level context when predicting masked tokens throughout the sequence.
+
+Attention targets PTMs in a small number of heads. A small number of heads in each model concentrate their attention very strongly on amino acids associated with post-translational modifications (PTMs). For example, head 11-6 in TapeBert focuses $64\%$ of attention on PTM positions, though these occur at only $0.8\%$ of sequence positions in the dataset.$^{3}$ Similar to our discussion on binding sites, PTMs are critical to protein function (Rubin & Rosen, 1975) and are thereby likely to exhibit behavior that is conserved across the sequence space. See Appendix C.4 for full results.
+
+# 4.3 CROSS-LAYER ANALYSIS
+
+We analyze how attention captures properties of varying complexity across different layers of TapeBert, and compare this to a probing analysis of embeddings and attention weights (see Section 3).
+
+Attention targets higher-level properties in deeper layers. As shown in Figure 4, deeper layers focus relatively more attention on binding sites and contacts (high-level concept), whereas secondary structure (low- to mid-level concept) is targeted more evenly across layers. The probing analysis of attention (Figure 5, blue) similarly shows that knowledge of contact maps (a pairwise feature)
+
+
+Figure 6: Percentage of each head's attention focused on amino acids Pro (left) and Phe (right).
+
+
+
+
+Figure 7: Pairwise attention similarity (left) vs. substitution matrix (right) (codes in App. C.5)
+
+
+
+is encoded in attention weights primarily in the last 1-2 layers. These results are consistent with prior work in NLP that suggests deeper layers in text-based Transformers attend to more complex properties (Vig & Belinkov, 2019) and encode higher-level representations (Raganato & Tiedemann, 2018; Peters et al., 2018; Tenney et al., 2019; Jawahar et al., 2019).
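The "center of gravity" used to order the plots in Figure 4 can be sketched as the mean layer index weighted by each layer's share of attention to the property (our formulation of the caption's description):

```python
import numpy as np

def center_of_gravity(layer_props):
    """Mean layer index weighted by each layer's attention proportion
    toward a property (the red dashed line in Figure 4)."""
    p = np.asarray(layer_props, dtype=float)
    layers = np.arange(1, len(p) + 1)       # 1-indexed layers
    return float((layers * p).sum() / p.sum())
```

Properties targeted mostly in deep layers, such as contacts, thus receive a higher center of gravity than those targeted evenly across layers.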
+
+The embedding probes (Figure 5, orange) also show that the model first builds representations of local secondary structure in lower layers before fully encoding binding sites and contact maps in deeper layers. However, this analysis also reveals stark differences in how knowledge of contact maps is accrued in embeddings, which accumulate this knowledge gradually over many layers, compared to attention weights, which acquire this knowledge only in the final layers in this case. This example points out limitations of common layerwise probing approaches that only consider embeddings, which, intuitively, represent what the model knows but not necessarily how it operationalizes that knowledge.
+
+# 4.4 AMINO ACIDS AND THE SUBSTITUTION MATRIX
+
+In addition to high-level structural and functional properties, we also performed a fine-grained analysis of the interaction between attention and particular amino acids.
+
+Attention heads specialize in particular amino acids. We computed the proportion of TapeBert's attention to each of the 20 standard amino acids, as shown in Figure 6 for two example amino acids. For 16 of the amino acids, there exists an attention head that focuses over $25\%$ of attention on that amino acid, significantly greater than the background frequencies of the corresponding amino acids, which range from $1.3\%$ to $9.4\%$ . Similar behavior was observed for ProtBert, ProtBert-BFD, ProtAlbert, and ProtXLNet models, with 17, 15, 16, and 18 amino acids, respectively, receiving greater than $25\%$ of the attention from at least one attention head. Detailed results for TapeBert including statistical significance tests and comparison to a null model are presented in Appendix C.5.
+
+Attention is consistent with substitution relationships. A natural follow-up question from the above analysis is whether each head has "memorized" specific amino acids to target, or whether it has actually learned meaningful properties that correlate with particular amino acids. To test the latter hypothesis, we analyze whether amino acids with similar structural and functional properties are attended to similarly across heads. Specifically, we compute the Pearson correlation between the distribution of attention across heads between all pairs of distinct amino acids, as shown in Figure 7 (left) for TapeBert. For example, the entry for Pro (P) and Phe (F) is the correlation between the two heatmaps in Figure 6. We compare these scores to the BLOSUM62 substitution scores (Sec. 2) in Figure 7 (right), and find a Pearson correlation of 0.73, suggesting that attention is moderately
+
+consistent with substitution relationships. Similar correlations are observed for the ProtTrans models: 0.68 (ProtBert), 0.75 (ProtBert-BFD), 0.60 (ProtAlbert), and 0.71 (ProtXLNet). As a baseline, the randomized versions of these models (Appendix B.2) yielded correlations of -0.02 (TapeBert), 0.02 (ProtBert), -0.03 (ProtBert-BFD), -0.05 (ProtAlbert), and 0.21 (ProtXLNet).
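The attention-similarity computation described above can be sketched as follows (our illustration), assuming each amino acid's attention profile is flattened into a vector of per-head proportions, as in the heatmaps of Figure 6:

```python
import numpy as np

def attention_similarity(profiles):
    """Pearson correlation between the attention profiles of every
    pair of distinct amino acids, as in Figure 7 (left). `profiles`
    maps an amino acid code to a flattened (layers * heads,) vector
    of attention proportions."""
    aas = sorted(profiles)
    sims = {}
    for i, a in enumerate(aas):
        for b in aas[i + 1:]:
            sims[(a, b)] = float(np.corrcoef(profiles[a], profiles[b])[0, 1])
    return sims
```

The resulting pairwise scores can then themselves be correlated with BLOSUM62 substitution scores, as done in the analysis above.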
+
+# 5 RELATED WORK
+
+# 5.1 PROTEIN LANGUAGE MODELS
+
+Deep neural networks for protein language modeling have received broad interest. Early work applied the Skip-gram model (Mikolov et al., 2013) to construct continuous embeddings from protein sequences (Asgari & Mofrad, 2015). Sequence-only language models have since been trained through autoregressive or autoencoding self-supervision objectives for discriminative and generative tasks, for example, using LSTMs or Transformer-based architectures (Alley et al., 2019; Bepler & Berger, 2019; Rao et al., 2019; Rives et al., 2019). TAPE created a benchmark of five tasks to assess protein sequence models, and ProtTrans also released several large-scale pretrained protein Transformer models (Elnaggar et al., 2020). Riesselman et al. (2019); Madani et al. (2020) trained autoregressive generative models to predict the functional effect of mutations and generate natural-like proteins.
+
+From an interpretability perspective, Rives et al. (2019) showed that the output embeddings from a pretrained Transformer can recapitulate structural and functional properties of proteins through learned linear transformations. Various works have analyzed output embeddings of protein models through dimensionality reduction techniques such as PCA or t-SNE (Elnaggar et al., 2020; Biswas et al., 2020). In our work, we take an interpretability-first perspective to focus on the internal model representations, specifically attention and intermediate hidden states, across multiple protein language models. We also explore novel biological properties including binding sites and post-translational modifications.
+
+# 5.2 INTERPRETING MODELS IN NLP
+
+The rise of deep neural networks in ML has also led to much work on interpreting these so-called black-box models. This section reviews the NLP interpretability literature on the Transformer model, which is directly comparable to our work on interpreting Transformer models of protein sequences.
+
+Interpreting Transformers. The Transformer is a neural architecture that uses attention to accelerate learning (Vaswani et al., 2017). In NLP, Transformers are the backbone of state-of-the-art pre-trained language models such as BERT (Devlin et al., 2019). BERTology focuses on interpreting what the BERT model learns about language using a suite of probes and interventions (Rogers et al., 2020). So-called diagnostic classifiers are used to interpret the outputs from BERT's layers (Veldhoen et al., 2016). At a high level, mechanisms for interpreting BERT can be placed into three main categories: interpreting the learned embeddings (Ethayarajh, 2019; Wiedemann et al., 2019; Mickus et al., 2020; Adi et al., 2016; Conneau et al., 2018), BERT's learned knowledge of syntax (Lin et al., 2019; Liu et al., 2019; Tenney et al., 2019; Htut et al., 2019; Hewitt & Manning, 2019; Goldberg, 2019), and BERT's learned knowledge of semantics (Tenney et al., 2019; Ettinger, 2020).
+
+Interpreting attention specifically. Interpreting attention on textual sequences is a well-established area of research (Wiegreffe & Pinter, 2019; Zhong et al., 2019; Brunner et al., 2020; Hewitt & Manning, 2019). Past work has shown that attention correlates with syntactic and semantic relationships in natural language in some cases (Clark et al., 2019; Vig & Belinkov, 2019; Htut et al., 2019). Depending on the task and model architecture, attention may have more or less explanatory power for model predictions (Jain & Wallace, 2019; Serrano & Smith, 2019; Pruthi et al., 2020; Moradi et al., 2019; Vashishth et al., 2019). Visualization techniques have been used to convey the structure and properties of attention in Transformers (Vaswani et al., 2017; Kovaleva et al., 2019; Hoover et al., 2020; Vig, 2019). Recent work has begun to analyze attention in Transformer models outside of the domain of natural language (Schwaller et al., 2020; Payne et al., 2020).
+
+Our work extends these methods to protein sequence models by considering particular biophysical properties and relationships. We also present a joint cross-layer probing analysis of attention weights and layer embeddings. While past work in NLP has analyzed attention and embeddings across layers, we believe we are the first to do so in any domain using a single, unified metric, which enables us to directly compare the relative information content of the two representations. Finally, we present a novel tool for visualizing attention embedded in three-dimensional structure.
+
+# 6 CONCLUSIONS AND FUTURE WORK
+
+This paper builds on the synergy between NLP and computational biology by adapting and extending NLP interpretability methods to protein sequence modeling. We show how a Transformer language model recovers structural and functional properties of proteins and integrates this knowledge directly into its attention mechanism. While this paper focuses on reconciling attention with known properties of proteins, one might also leverage attention to uncover novel relationships or more nuanced forms of existing measures such as contact maps, as discussed in Section 4.1. In this way, language models have the potential to serve as tools for scientific discovery. But in order for learned representations to be accessible to domain experts, they must be presented in an appropriate context to facilitate discovery. Visualizing attention in the context of protein structure (Figure 1) is one attempt to do so. We believe there is the potential to develop such contextual visualizations of learned representations in a range of scientific domains.
+
+# ACKNOWLEDGMENTS
+
+We would like to thank Xi Victoria Lin, Stephan Zheng, Melvin Gruesbeck, and the anonymous reviewers for their valuable feedback.
+
+# REFERENCES
+
+Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. arXiv:1608.04207 [cs.CL], 2016.
+Ethan C Alley, Grigory Khimulya, Surojit Biswas, Mohammed AlQuraishi, and George M Church. Unified rational protein engineering with sequence-based deep representation learning. Nature Methods, 16(12):1315-1322, 2019.
+Mohammed AlQuraishi. ProteinNet: a standardized data set for machine learning of protein structure. BMC Bioinformatics, 20, 2019.
+Ehsaneddin Asgari and Mohammad RK Mofrad. Continuous distributed representation of biological sequences for deep proteomics and genomics. PLOS One, 10(11), 2015.
+Tristan Bepler and Bonnie Berger. Learning protein sequence embeddings using information from structure. In International Conference on Learning Representations, 2019.
+Helen M Berman, John Westbrook, Zukang Feng, Gary Gilliland, Talapady N Bhat, Helge Weissig, Ilya N Shindyalov, and Philip E Bourne. The protein data bank. *Nucleic Acids Research*, 28(1): 235-242, 2000.
+Surojit Biswas, Grigory Khimulya, Ethan C. Alley, Kevin M. Esvelt, and George M. Church. Low-n protein engineering with data-efficient deep learning. bioRxiv, 2020. doi: 10.1101/2020.01.23.917682. URL https://www.biorxiv.org/content/early/2020/08/31/2020.01.23.917682.
+Ashraf Brik and Chi-Huey Wong. HIV-1 protease: Mechanism and drug discovery. Organic & Biomolecular Chemistry, 1(1):5-14, 2003.
+Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. On identifiability in Transformers. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=BJg1f6EFDB.
+Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. What does BERT look at? An analysis of BERT's attention. In BlackBoxNLP@ACL, 2019.
+
+Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. What you can cram into a single \$&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pp. 2126-2136, 2018.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, 2019. Association for Computational Linguistics.
+Sara El-Gebali, Jaina Mistry, Alex Bateman, Sean R. Eddy, Aurélien Luciani, Simon C. Potter, Matloob Qureshi, Lorna J. Richardson, Gustavo A. Salazar, Alfredo Smart, Erik L. L. Sonnhammer, Layla Hirsh, Lisanna Paladin, Damiano Piovesan, Silvio C. E. Tosatto, and Robert D. Finn. The Pfam protein families database in 2019. *Nucleic Acids Research*, 47(D1):D427–D432, 2019. doi: 10.1093/nar/gky995.
+Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rihawi, Yu Wang, Llion Jones, Tom Gibbs, Tamas Feher, Christoph Angerer, Martin Steinegger, Debsindhu Bhowmik, and Burkhard Rost. ProtTrans: Towards cracking the language of life's code through self-supervised deep learning and high performance computing. arXiv preprint arXiv:2007.06225, 2020.
+Kawin Ethayarajh. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 55–65, Hong Kong, China, 2019. Association for Computational Linguistics.
+Allyson Ettinger. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34-48, 2020.
+Naomi K Fox, Steven E Brenner, and John-Marc Chandonia. SCOPe: Structural classification of proteins—extended, integrating SCOP and ASTRAL data and classification of new structures. *Nucleic Acids Research*, 42(D1):D304–D309, 2013.
+Yoav Goldberg. Assessing BERT's syntactic abilities. arXiv preprint arXiv:1901.05287, 2019.
+Christopher Grimsley, Elijah Mayfield, and Julia R.S. Bursten. Why attention is not explanation: Surgical intervention and causal reasoning about neural models. In Proceedings of The 12th Language Resources and Evaluation Conference, pp. 1780-1790, Marseille, France, May 2020. European Language Resources Association. ISBN 979-10-95546-34-4. URL https://www.aclweb.org/anthology/2020.lrec-1.220.
+S Henikoff and J G Henikoff. Amino acid substitution matrices from protein blocks. Proceedings of the National Academy of Sciences, 89(22):10915-10919, 1992. ISSN 0027-8424. doi: 10.1073/pnas.89.22.10915. URL https://www.pnas.org/content/89/22/10915.
+John Hewitt and Christopher D Manning. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4129-4138, 2019.
+Benjamin Hoover, Hendrik Strobelt, and Sebastian Gehrmann. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 187-196. Association for Computational Linguistics, 2020.
+
+Phu Mon Htut, Jason Phang, Shikha Bordia, and Samuel R Bowman. Do attention heads in BERT track syntactic dependencies? arXiv preprint arXiv:1911.12246, 2019.
+Sarthak Jain and Byron C. Wallace. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 3543-3556, June 2019.
+Ganesh Jawahar, Benoit Sagot, and Djamé Seddah. What does BERT learn about the structure of language? In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 2019. URL https://hal.inria.fr/hal-02131630.
+Akira Kinjo and Haruki Nakamura. Comprehensive structural classification of ligand-binding motifs in proteins. Structure, 17(2), 2009.
+Michael Schantz Klausen, Martin Closter Jespersen, Henrik Nielsen, Kamilla Kjaergaard Jensen, Vanessa Isabell Jurtz, Casper Kaae Soenderby, Morten Otto Alexander Sommer, Ole Winther, Morten Nielsen, Bent Petersen, et al. NetSurfP-2.0: Improved prediction of protein structural features by integrated deep learning. Proteins: Structure, Function, and Bioinformatics, 2019.
+Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4365-4374, Hong Kong, China, 2019. Association for Computational Linguistics.
+Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. Measuring bias in contextualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pp. 166-172, Florence, Italy, 2019. Association for Computational Linguistics.
+Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. ALBERT: A lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations, 2020.
+Juyong Lee, Janez Konc, Dusanka Janezic, and Bernard Brooks. Global organization of a binding site network gives insight into evolution and structure-function relationships of proteins. Sci Rep, 7 (11652), 2017.
+Yongjie Lin, Yi Chern Tan, and Robert Frank. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 241-253, Florence, Italy, 2019. Association for Computational Linguistics.
+Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 1073-1094. Association for Computational Linguistics, 2019.
+Ali Madani, Bryan McCann, Nikhil Naik, Nitish Shirish Keskar, Namrata Anand, Raphael R. Eguchi, Po-Ssu Huang, and Richard Socher. Progen: Language modeling for protein generation. arXiv preprint arXiv:2004.03497, 2020.
+Timothee Mickus, Mathieu Constant, Denis Paperno, and Kees Van Deemter. What do you mean, BERT? Assessing BERT as a Distributional Semantics Model. Proceedings of the Society for Computation in Linguistics, 3, 2020.
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 26, pp. 3111-3119. Curran Associates, Inc., 2013.
+
+Pooya Moradi, Nishant Kambhatla, and Anoop Sarkar. Interrogating the explanatory power of attention in neural machine translation. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pp. 221-230, Hong Kong, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-5624. URL https://www.aclweb.org/anthology/D19-5624.
+John Moult, Krzysztof Fidelis, Andriy Kryshtafovych, Torsten Schwede, and Anna Tramontano. Critical assessment of methods of protein structure prediction (CASP)-Round XII. Proteins: Structure, Function, and Bioinformatics, 86:7-15, 2018. ISSN 08873585. doi: 10.1002/prot.25415. URL http://doi.wiley.com/10.1002/prot.25415.
+Hai Nguyen, David A Case, and Alexander S Rose. NGLview–interactive molecular graphics for Jupyter notebooks. Bioinformatics, 34(7):1241–1242, 12 2017. ISSN 1367-4803. doi: 10.1093/bioinformatics/btx789. URL https://doi.org/10.1093/bioinformatics/btx789.
+Timothy Niven and Hung-Yu Kao. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4658-4664, Florence, Italy, 2019. Association for Computational Linguistics.
+Josh Payne, Mario Srouji, Dian Ang Yap, and Vineet Kosaraju. BERT learns (and teaches) chemistry. arXiv preprint arXiv:2007.16012, 2020.
+Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1499-1509, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1179. URL https://www.aclweb.org/anthology/D18-1179.
+Danish Pruthi, Mansi Gupta, Bhuwan Dhingra, Graham Neubig, and Zachary C. Lipton. Learning to deceive with attention-based explanations. In Annual Conference of the Association for Computational Linguistics (ACL), July 2020. URL https://arxiv.org/abs/1909.07913.
+Alessandro Raganato and Jörg Tiedemann. An analysis of encoder representations in Transformer-based machine translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 287-297, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-5431. URL https://www.aclweb.org/anthology/W18-5431.
+Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Xi Chen, John Canny, Pieter Abbeel, and Yun S Song. Evaluating protein transfer learning with TAPE. In Advances in Neural Information Processing Systems, 2019.
+Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B Viegas, Andy Coenen, Adam Pearce, and Been Kim. Visualizing and measuring the geometry of BERT. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 8594-8603. Curran Associates, Inc., 2019.
+Adam J Riesselman, Jung-Eun Shin, Aaron W Kollasch, Conor McMahon, Elana Simon, Chris Sander, Aashish Manglik, Andrew C Kruse, and Debora S Marks. Accelerating protein design using autoregressive generative models. bioRxiv, pp. 757252, 2019.
+Alexander Rives, Siddharth Goyal, Joshua Meier, Demi Guo, Myle Ott, C Lawrence Zitnick, Jerry Ma, and Rob Fergus. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. bioRxiv, pp. 622803, 2019.
+Anna Rogers, Olga Kovaleva, and Anna Rumshisky. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics, 8:842-866, 2020.
+Nathan J Rollins, Kelly P Brock, Frank J Poelwijk, Michael A Stiffler, Nicholas P Gauthier, Chris Sander, and Debora S Marks. Inferring protein 3D structure from deep mutation scans. Nature Genetics, 51(7):1170, 2019.
+
+Alexander S. Rose and Peter W. Hildebrand. NGL Viewer: a web application for molecular visualization. *Nucleic Acids Research*, 43(W1):W576-W579, 04 2015. ISSN 0305-1048. doi: 10.1093/nar/gkv402. URL https://doi.org/10.1093/nar/gkv402.
+Alexander S Rose, Anthony R Bradley, Yana Valasatava, Jose M Duarte, Andreas Prlic, and Peter W Rose. NGL viewer: web-based molecular graphics for large complexes. Bioinformatics, 34(21): 3755-3758, 05 2018. ISSN 1367-4803. doi: 10.1093/bioinformatics/bty419. URL https://doi.org/10.1093/bioinformatics/bty419.
+Charles Rubin and Ora Rosen. Protein phosphorylation. Annual Review of Biochemistry, 44:831-887, 1975. URL https://doi.org/10.1146/annurev.bi.44.070175.004151.
+Philippe Schwaller, Benjamin Hoover, Jean-Louis Reymond, Hendrik Strobelt, and Teodoro Laino. Unsupervised attention-guided atom-mapping. ChemRxiv, 5 2020. doi: 10.26434/chemrxiv.12298559.v1. URL https://chemrxiv.org/articles/Unsupervised_Attention-Guided_Atom-Mapping/12298559.
+Sofia Serrano and Noah A. Smith. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2931-2951, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1282. URL https://www.aclweb.org/anthology/P19-1282.
+Martin Steinegger and Johannes Söding. Clustering huge protein sequence sets in linear time. Nature Communications, 9(2542), 2018. doi: 10.1038/s41467-018-04964-5.
+Baris E. Suzek, Yuqi Wang, Hongzhan Huang, Peter B. McGarvey, Cathy H. Wu, and the UniProt Consortium. UniRef clusters: a comprehensive and scalable alternative for improving sequence similarity searches. Bioinformatics, 31(6):926-932, 11 2014. ISSN 1367-4803. doi: 10.1093/bioinformatics/btu739. URL https://doi.org/10.1093/bioinformatics/btu739.
+Yi Chern Tan and L. Elisa Celis. Assessing social and intersectional biases in contextualized word representations. In Advances in Neural Information Processing Systems 32, pp. 13230-13241. Curran Associates, Inc., 2019.
+Ian Tenney, Dipanjan Das, and Ellie Pavlick. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4593-4601, Florence, Italy, 2019. Association for Computational Linguistics.
+Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. Attention interpretability across NLP tasks. arXiv preprint arXiv:1909.11218, 2019.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
+Sara Veldhoen, Dieuwke Hupkes, and Willem H. Zuidema. Diagnostic classifiers revealing how neural networks process hierarchical structure. In CoCo@NIPS, 2016.
+Jesse Vig. A multiscale visualization of attention in the Transformer model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 37-42, Florence, Italy, 2019. Association for Computational Linguistics.
+Jesse Vig and Yonatan Belinkov. Analyzing the structure of attention in a Transformer language model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 63-76, Florence, Italy, 2019. Association for Computational Linguistics.
+Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. Investigating gender bias in language models using causal mediation analysis. In Advances in Neural Information Processing Systems, volume 33, pp. 12388-12401, 2020.
+Gregor Wiedemann, Steffen Remus, Avi Chawla, and Chris Biemann. Does BERT make any sense? Interpretable word sense disambiguation with contextualized embeddings. arXiv preprint arXiv:1909.10430, 2019.
+
+Sarah Wiegreffe and Yuval Pinter. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 11-20, November 2019.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. XLNet: Generalized autoregressive pretraining for language understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 5753-5763. Curran Associates, Inc., 2019.
+Ruiqi Zhong, Steven Shao, and Kathleen McKeown. Fine-grained sentiment analysis with faithful attention. arXiv preprint arXiv:1908.06870, 2019.
+
+# A MODEL OVERVIEW
+
+# A.1 PRE-TRAINED MODELS
+
+Table 1 provides an overview of the five pre-trained Transformer models studied in this work. The models originate from the TAPE and ProtTrans repositories, spanning three model architectures: BERT, ALBERT, and XLNet.
+
+Table 1: Summary of pre-trained models analyzed, including the source of the model, the type of Transformer used, the number of layers and heads, the total number of model parameters, the source of the pre-training dataset, and the number of protein sequences in the pre-training dataset.
+
+| Source | Name | Type | Layers | Heads | Params | Train Dataset | # Seq |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| TAPE | TapeBert | BERT | 12 | 12 | 94M | Pfam | 31M |
+| ProtTrans | ProtBert | BERT | 30 | 16 | 420M | Uniref100 | 216M |
+| ProtTrans | ProtBert-BFD | BERT | 30 | 16 | 420M | BFD | 2.1B |
+| ProtTrans | ProtAlbert | ALBERT | 12 | 64 | 224M | Uniref100 | 216M |
+| ProtTrans | ProtXLNet | XLNet | 30 | 16 | 409M | Uniref100 | 216M |
+
+# A.2 BERT TRANSFORMER ARCHITECTURE
+
+Stacked Encoder: BERT uses a stacked-encoder architecture, which takes as input a sequence of tokens $\mathbf{x} = (x_{1},\dots,x_{n})$ and applies position and token embeddings followed by a series of encoder layers. Each layer applies multi-head self-attention (see below) in combination with a feedforward network, layer normalization, and residual connections. The output of each layer $\ell$ is a sequence of contextualized embeddings $(\mathbf{h}_1^{(\ell)},\ldots ,\mathbf{h}_n^{(\ell)})$.
+
+Self-Attention: Given an input $\boldsymbol{x} = (x_{1},\ldots ,x_{n})$ , the self-attention mechanism assigns to each token pair $i,j$ an attention weight $\alpha_{i,j} > 0$ where $\sum_{j}\alpha_{i,j} = 1$ . Attention in BERT is bidirectional. In the multi-layer, multi-head setting, $\alpha$ is specific to a layer and head. The BERT-Base model has 12 layers and 12 heads. Each attention head learns a distinct set of weights, resulting in $12\times 12 = 144$ distinct attention mechanisms in this case.
+
+The attention weights $\alpha_{i,j}$ are computed from the scaled dot-product of the query vector of $i$ and the key vector of $j$ , followed by a softmax operation. The attention weights are then used to produce a weighted sum of value vectors:
+
+$$
+\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{2}
+$$
+
+using query matrix $Q$ , key matrix $K$ , and value matrix $V$ , where $d_k$ is the dimension of $K$ . In a multi-head setting, the queries, keys, and values are linearly projected $h$ times, and the attention operation is performed in parallel for each representation, with the results concatenated.
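A minimal NumPy sketch of Eq. 2 for a single head (dimensions are illustrative, not those of any model in Table 1):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Eq. 2: softmax(Q K^T / sqrt(d_k)) V, with a row-wise softmax."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n, n) pairwise scores
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=-1, keepdims=True)      # each row sums to 1
    return alpha @ V, alpha

# Toy example: n = 5 tokens, d_k = 8 dimensions.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 5, 8))
output, alpha = scaled_dot_product_attention(Q, K, V)
```

Row $i$ of `alpha` holds the weights $\alpha_{i,j}$, which are positive and sum to 1, matching the definition above.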
+
+# A.3 OTHER TRANSFORMER VARIANTS
+
+ALBERT: The architecture of ALBERT differs from BERT in two ways: (1) it shares parameters across layers, unlike BERT, which learns distinct parameters for every layer; and (2) it uses factorized embeddings, which allow the input token embeddings to be of a different (smaller) size than the hidden states. The original version of ALBERT designed for text also employed a sentence-order prediction pretraining task, but this was not used for the models studied in this paper.
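The saving from factorized embeddings can be illustrated with hypothetical NLP-scale sizes (protein vocabularies are far smaller, so the saving there is more modest):

```python
# Parameter count of the input embedding table, untied vs. ALBERT-factorized.
# V, H, E are hypothetical vocabulary, hidden, and embedding sizes.
V, H, E = 30_000, 4_096, 128
untied = V * H               # BERT-style: one V x H embedding table
factorized = V * E + E * H   # ALBERT-style: V x E table projected up to H
print(untied, factorized)    # the factorized table is far smaller
```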
+
+XLNet: Instead of the masked-language-modeling pretraining objective used for BERT, XLNet uses a bidirectional auto-regressive pretraining method that considers all possible permutations of the factorization order. The architecture also adds a segment-recurrence mechanism for processing long sequences, as well as a relative rather than absolute positional encoding scheme.
+
+# B ADDITIONAL EXPERIMENTAL DETAILS
+
+# B.1 ALTERNATIVE ATTENTION AGREEMENT METRIC
+
+Here we present an alternative formulation to Eq. 1 based on an attention-weighted average. We define an indicator function $f(i,j)$ for property $f$ that returns 1 if the property is present for token pair $(i,j)$ (e.g., if amino acids $i$ and $j$ are in contact), and zero otherwise. We then compute the proportion of attention that matches $f$ over a dataset $X$ as follows:
+
+$$
+p_{\alpha}(f) = \frac{\sum_{x \in X} \sum_{i=1}^{|x|} \sum_{j=1}^{|x|} f(i,j)\, \alpha_{i,j}(x)}{\sum_{x \in X} \sum_{i=1}^{|x|} \sum_{j=1}^{|x|} \alpha_{i,j}(x)} \tag{3}
+$$
+
+where $\alpha_{i,j}(x)$ denotes the attention from $i$ to $j$ for input sequence $x$ .
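Eq. 3 can be sketched directly, assuming one attention matrix and one 0/1 property matrix per sequence (names are illustrative):

```python
import numpy as np

def attention_agreement(alphas, props):
    """Eq. 3: fraction of total attention mass on pairs where f(i, j) = 1.

    alphas: list of (|x|, |x|) attention-weight matrices, one per sequence x
    props:  list of (|x|, |x|) 0/1 matrices encoding the property f(i, j)
    """
    num = sum(float((a * f).sum()) for a, f in zip(alphas, props))
    den = sum(float(a.sum()) for a in alphas)
    return num / den

# Toy check: if all attention falls on pairs with f = 1, agreement is 1.0.
alpha = np.array([[0.0, 1.0], [1.0, 0.0]])
f = np.array([[0, 1], [1, 0]])
agreement = attention_agreement([alpha], [f])  # -> 1.0
```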
+
+# B.2 STATISTICAL SIGNIFICANCE TESTING AND NULL MODELS
+
+We perform statistical significance tests to determine whether any results based on the metric defined in Equation 1 are due to chance. Given a property $f$ , as defined in Section 3, we perform a two-proportion z-test comparing (1) the proportion of high-confidence attention arcs $(\alpha_{i,j} > \theta)$ for which $f(i,j) = 1$ , and (2) the proportion of all possible pairs $i,j$ for which $f(i,j) = 1$ . Note that the first proportion is exactly the metric $p_{\alpha}(f)$ defined in Equation 1 (e.g. the proportion of attention aligned with contact maps). The second proportion is simply the background frequency of the property (e.g. the background frequency of contacts). Since we extract the maximum scores over all of the heads in the model, we treat this as a case of multiple hypothesis testing and apply the Bonferroni correction, with the number of hypotheses $m$ equal to the number of attention heads.
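A standard-library sketch of the pooled two-proportion z-test with a Bonferroni-corrected threshold; the counts below are hypothetical, chosen only to mirror the comparison described above:

```python
from math import erf, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                              # pooled proportion
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical counts: high-confidence attention arcs landing on contacts,
# vs. the background frequency of contacts over all candidate pairs.
z, p = two_proportion_z_test(x1=4_000, n1=10_000, x2=130_000, n2=1_000_000)
m = 144                          # one hypothesis per attention head (12 x 12)
significant = (p * m) < 1e-5     # Bonferroni-corrected significance check
```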
+
+As an additional check that the results did not occur by chance, we also report results on baseline (null) models. We initially considered using two forms of null models: (1) a model with randomly initialized weights, and (2) a model trained on randomly shuffled sequences. However, in both cases, none of the sequences in the dataset yielded attention weights greater than the attention threshold $\theta$ . This suggests that the mere existence of the high-confidence attention weights used in the analysis could not have occurred by chance, but it does not shed light on the particular analyses performed. Therefore, we implemented an alternative randomization scheme in which we randomly shuffle attention weights from the original models as a post-processing step. Specifically, we permute the sequence of attention weights from each token for every attention head. To illustrate, let's say that the original model produced attention weights of (0.3, 0.2, 0.1, 0.4, 0.0) from position $i$ in protein sequence $x$ from head $h$ , where $|x| = 5$ . In the null model, the attention weights from position $i$ in sequence $x$ in head $h$ would be a random permutation of those weights, e.g., (0.2, 0.0, 0.4, 0.3, 0.1). Note that these are still valid attention weights as they would sum to 1 (since the original weights would sum to 1 by definition). We report results using this form of baseline model.
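The shuffling scheme can be sketched as follows; the example row mirrors the weights in the illustration above:

```python
import numpy as np

def shuffle_attention(alpha, rng):
    """Null model: independently permute each row of outgoing attention weights."""
    shuffled = alpha.copy()
    for i in range(shuffled.shape[0]):
        rng.shuffle(shuffled[i])   # permute the weights from position i in place
    return shuffled

rng = np.random.default_rng(0)
alpha = np.array([[0.3, 0.2, 0.1, 0.4, 0.0]])   # weights from one position
null = shuffle_attention(alpha, rng)
# Each row is still a valid attention distribution: same values, summing to 1.
```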
+
+# B.3 PROBING METHODOLOGY
+
+**Embedding probe.** We probe the embedding vectors output from each layer using a linear probing classifier. For token-level probing tasks (binding sites, secondary structure) we feed each token's output vector directly to the classifier. For token-pair probing tasks (contact map) we construct a pairwise feature vector by concatenating the elementwise differences and products of the two tokens' output vectors, following the TAPE implementation.
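The token-pair feature construction can be sketched as (vector names are illustrative):

```python
import numpy as np

def pair_features(h_i, h_j):
    """Pairwise probe input: elementwise difference and product, concatenated."""
    return np.concatenate([h_i - h_j, h_i * h_j])

h_i = np.array([1.0, 2.0, 3.0])
h_j = np.array([0.5, 1.0, 1.5])
feat = pair_features(h_i, h_j)   # length 2 * 3 = 6
```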
+
+We use task-specific evaluation metrics for the probing classifier: for secondary structure prediction, we measure F1 score; for contact prediction, we measure precision@$L/5$, where $L$ is the length of the protein sequence, following standard practice (Moult et al., 2018); for binding site prediction, we measure precision@$L/20$, since approximately one in twenty amino acids in each sequence is a binding site (4.8% in the dataset).
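A sketch of the precision@$L/k$ metric (function name and toy data are illustrative):

```python
import numpy as np

def precision_at_fraction(scores, labels, k):
    """Precision among the top ceil(L/k) highest-scoring positions, L = len(scores)."""
    L = len(scores)
    top = int(np.ceil(L / k))
    top_idx = np.argsort(scores)[::-1][:top]   # indices of the highest scores
    return float(labels[top_idx].mean())

# Toy sequence of L = 10 positions; labels mark true binding sites.
scores = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5, 0.05])
labels = np.array([1, 0, 1, 0, 0, 0, 1, 0, 0, 0])
p_at_L5 = precision_at_fraction(scores, labels, k=5)   # top 2 of 10 positions
```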
+
+Attention probe. Just as the attention weight $\alpha_{i,j}$ is defined for a pair of amino acids $(i,j)$, so is the contact property $f(i,j)$, which returns true if amino acids $i$ and $j$ are in contact. Treating the attention weight as a feature of a token pair $(i,j)$, we can train a probing classifier that predicts the contact property based on this feature, thereby quantifying the attention mechanism's knowledge of that property. In our multi-head setting, we treat the attention weights across all heads in a given layer as a feature vector, and use a probing classifier to assess the knowledge of a given property in the attention weights across the entire layer. As with the embedding probe, we measure performance of the probing classifier using precision@$L/5$, where $L$ is the length of the protein sequence, following standard practice for contact prediction.
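A minimal sketch of the layer-level attention probe: one feature vector of per-head weights for every token pair, fed to a logistic-regression classifier (the hand-rolled gradient-descent trainer below stands in for any standard logistic-regression implementation, and the synthetic labels are constructed to be predictable from head 0):

```python
import numpy as np

def attention_pair_features(layer_attn):
    """One feature vector per token pair: the attention weight from every head.

    layer_attn: (heads, n, n) attention weights for a single layer.
    Returns an (n * n, heads) feature matrix.
    """
    heads, n, _ = layer_attn.shape
    return layer_attn.reshape(heads, n * n).T

def train_logistic_probe(X, y, lr=0.5, steps=2000):
    """Logistic regression fit by batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

# Synthetic layer with 4 heads over a 10-token sequence.
rng = np.random.default_rng(0)
layer_attn = rng.random((4, 10, 10))
X = attention_pair_features(layer_attn)
y = (X[:, 0] > 0.5).astype(float)     # label depends on head 0 by construction
w, b = train_logistic_probe(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == y.astype(bool)).mean()
```

Since the label is recoverable from a single head's weight, the probe fits it easily; on real data the analogous accuracy (or precision@$L/5$) measures how much contact information the layer's attention actually carries.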
+
+# B.4 DATASETS
+
+We used two protein sequence datasets from the TAPE repository for the analysis: the ProteinNet dataset (AlQuraishi, 2019; Fox et al., 2013; Berman et al., 2000; Moult et al., 2018) and the Secondary Structure dataset (Rao et al., 2019; Berman et al., 2000; Moult et al., 2018; Klausen et al., 2019). The former was used for analysis of amino acids and contact maps, and the latter was used for analysis of secondary structure. We additionally created a third dataset for binding site and post-translational modification (PTM) analysis from the Secondary Structure dataset, which was augmented with binding site and PTM annotations obtained from the Protein Data Bank's Web API. We excluded any sequences for which annotations were not available. The resulting dataset sizes are shown in Table 2. For the analysis of attention, a random subset of 5000 sequences from the training split of each dataset was used, as the analysis was purely evaluative. For training and evaluating the diagnostic classifier, the full training and validation splits were used.
+
+Table 2: Datasets used in analysis
+
+| Dataset | Train size | Validation size |
+| --- | --- | --- |
+| ProteinNet | 25299 | 224 |
+| Secondary Structure | 8678 | 2170 |
+| Binding Sites / PTM | 5734 | 1418 |
+
+# C ADDITIONAL RESULTS OF ATTENTION ANALYSIS
+
+# C.1 SECONDARY STRUCTURE
+
+
+Figure 8: Percentage of each head's attention that is focused on Helix secondary structure. Panels (a–e): TapeBert, ProtAlbert, ProtBert, ProtBert-BFD, ProtXLNet.
+
+
+Figure 9: Percentage of each head's attention that is focused on Strand secondary structure. Panels (a–e): TapeBert, ProtAlbert, ProtBert, ProtBert-BFD, ProtXLNet.
+
+
+Figure 10: Percentage of each head's attention that is focused on Turn/Bend secondary structure. Panels (a–e): TapeBert, ProtAlbert, ProtBert, ProtBert-BFD, ProtXLNet.
+
+# C.2 CONTACT MAPS: STATISTICAL SIGNIFICANCE TESTS AND NULL MODELS
+
+
+Figure 11: Top 10 heads (denoted by $\langle\text{layer}\rangle$-$\langle\text{head}\rangle$) for each model based on the proportion of attention aligned with contact maps [95% conf. intervals]. The differences between the attention proportions and the background frequency of contacts (orange dashed line) are statistically significant $(p < 0.00001)$ . Bonferroni correction applied for both confidence intervals and tests (see App. B.2). Panels (a–e): TapeBert, ProtAlbert, ProtBert, ProtBert-BFD, ProtXLNet.
+
+
+Figure 12: Top-10 contact-aligned heads for null models. See Appendix B.2 for details. Panels (a–e): TapeBert-Random, ProtAlbert-Random, ProtBert-Random, ProtBert-BFD-Random, ProtXLNet-Random.
+
+# C.3 BINDING SITES: STATISTICAL SIGNIFICANCE TESTS AND NULL MODEL
+
+
+Figure 13: Top 10 heads (denoted by $\langle\text{layer}\rangle$-$\langle\text{head}\rangle$) for each model based on the proportion of attention focused on binding sites [95% conf. intervals]. Differences between attention proportions and the background frequency of binding sites (orange dashed line) are all statistically significant $(p < 0.00001)$ . Bonferroni correction applied for both confidence intervals and tests (see App. B.2). Panels (a–e): TapeBert, ProtAlbert, ProtBert, ProtBert-BFD, ProtXLNet.
+Figure 14: Top-10 heads most focused on binding sites for null models. See Appendix B.2 for details. Panels (a–e): TapeBert-Random, ProtAlbert-Random, ProtBert-Random, ProtBert-BFD-Random, ProtXLNet-Random.
+
+# C.4 POST-TRANSLATIONAL MODIFICATIONS (PTMS)
+
+
+Figure 15: Percentage of each head's attention that is focused on post-translational modifications. Panels (a–e): TapeBert, ProtAlbert, ProtBert, ProtBert-BFD, ProtXLNet.
+
+
+Figure 16: Top 10 heads (denoted by $\langle\text{layer}\rangle$-$\langle\text{head}\rangle$) for each model based on the proportion of attention focused on PTM positions [95% conf. intervals]. The differences between the attention proportions and the background frequency of PTMs (orange dashed line) are statistically significant $(p < 0.00001)$ . Bonferroni correction applied for both confidence intervals and tests (see App. B.2). Panels (a–e): TapeBert, ProtAlbert, ProtBert, ProtBert-BFD, ProtXLNet.
+
+
+Figure 17: Top-10 heads most focused on PTMs for null models. See Appendix B.2 for details. Panels (a–e): TapeBert-Random, ProtAlbert-Random, ProtBert-Random, ProtBert-BFD-Random, ProtXLNet-Random.
+
+# C.5 AMINO ACIDS
+
+
+Figure 18: Percentage of each head's attention that is focused on the given amino acid, averaged over a dataset (TapeBert). Panels (a–o): ALA, ARG, ASN, ASP, CYS, GLN, GLU, GLY, HIS, ILE, LEU, LYS, MET, PHE, PRO.
+
+
+Figure 19: Percentage of each head's attention that is focused on the given amino acid, averaged over a dataset (TapeBert, cont.). Panels (a–e): SER, THR, TRP, TYR, VAL.
+
+Table 3: Amino acids and the corresponding maximally attentive heads in the standard and randomized versions of TapeBert. The differences between the attention percentages for TapeBert and the background frequencies of each amino acid are all statistically significant $(p < 0.00001)$ taking into account the Bonferroni correction. See Appendix B.2 for details. The bolded numbers represent the higher of the two values between the standard and random models. In all cases except for Glutamine, which was the amino acid with the lowest top attention proportion in the standard model (7.1), the standard TapeBert model has higher values than the randomized version.
+
+| Abbrev | Code | Name | Background % | TapeBert Top Head | TapeBert Attn % | TapeBert-Random Top Head | TapeBert-Random Attn % |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Ala | A | Alanine | 7.9 | 12-11 | 25.5 | 11-12 | 12.1 |
+| Arg | R | Arginine | 5.2 | 12-8 | 63.2 | 12-7 | 8.4 |
+| Asn | N | Asparagine | 4.3 | 8-2 | 44.8 | 8-2 | 6.7 |
+| Asp | D | Aspartic acid | 5.8 | 12-6 | 79.9 | 5-4 | 10.7 |
+| Cys | C | Cysteine | 1.3 | 11-6 | 83.2 | 11-6 | 9.3 |
+| Gln | Q | Glutamine | 3.8 | 11-7 | 7.1 | 12-1 | 9.2 |
+| Glu | E | Glutamic acid | 6.9 | 11-7 | 16.2 | 11-4 | 11.8 |
+| Gly | G | Glycine | 7.1 | 2-11 | 98.1 | 11-8 | 14.6 |
+| His | H | Histidine | 2.7 | 9-10 | 56.7 | 11-6 | 5.4 |
+| Ile | I | Isoleucine | 5.6 | 11-10 | 27.0 | 9-5 | 10.6 |
+| Leu | L | Leucine | 9.4 | 2-12 | 44.1 | 12-11 | 13.9 |
+| Lys | K | Lysine | 6.0 | 12-8 | 29.4 | 6-11 | 12.9 |
+| Met | M | Methionine | 2.3 | 3-10 | 73.5 | 9-3 | 6.2 |
+| Phe | F | Phenylalanine | 3.9 | 12-3 | 22.7 | 12-1 | 6.7 |
+| Pro | P | Proline | 4.6 | 1-11 | 98.3 | 10-6 | 7.6 |
+| Ser | S | Serine | 6.4 | 12-7 | 36.1 | 11-12 | 11.0 |
+| Thr | T | Threonine | 5.4 | 12-7 | 19.0 | 10-4 | 9.0 |
+| Trp | W | Tryptophan | 1.3 | 11-4 | 68.1 | 9-2 | 3.0 |
+| Tyr | Y | Tyrosine | 3.4 | 12-3 | 51.6 | 12-11 | 6.6 |
+| Val | V | Valine | 6.8 | 12-11 | 34.0 | 8-2 | 15.0 |
\ No newline at end of file
diff --git a/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/images.zip b/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4695851ed72040b05b972622f45c1dea7a342331
--- /dev/null
+++ b/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91ce58fb983b272af0f72968597efb665148bb8230ad92c48cd2b3aa72f2d0e5
+size 1420471
diff --git a/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/layout.json b/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..1174ee45579c2d0c6996a2ebc6cae504b31859b3
--- /dev/null
+++ b/bertologymeetsbiologyinterpretingattentioninproteinlanguagemodels/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13037fe6ae15f0074c515caff07f8f92efe4803ceedde1c68936f85b0c2acddf
+size 770367
diff --git a/betterfinetuningbyreducingrepresentationalcollapse/a5ab975b-3420-4821-a0d2-5bffad9cfb75_content_list.json b/betterfinetuningbyreducingrepresentationalcollapse/a5ab975b-3420-4821-a0d2-5bffad9cfb75_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..26f66ed225b69e9ccc5f28507d41d21a4e90c102
--- /dev/null
+++ b/betterfinetuningbyreducingrepresentationalcollapse/a5ab975b-3420-4821-a0d2-5bffad9cfb75_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1cc716ec40645afdd2608d427c87eae8750334262980d4c0f4ca0825bbd5f8ba
+size 77602
diff --git a/betterfinetuningbyreducingrepresentationalcollapse/a5ab975b-3420-4821-a0d2-5bffad9cfb75_model.json b/betterfinetuningbyreducingrepresentationalcollapse/a5ab975b-3420-4821-a0d2-5bffad9cfb75_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..11fe3896d4943061b98f09af6741df38e450c777
--- /dev/null
+++ b/betterfinetuningbyreducingrepresentationalcollapse/a5ab975b-3420-4821-a0d2-5bffad9cfb75_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7f5130349ad5797e5db81ca1ade032249635b1d2d246a519d9f89ce3f6249d8
+size 90914
diff --git a/betterfinetuningbyreducingrepresentationalcollapse/a5ab975b-3420-4821-a0d2-5bffad9cfb75_origin.pdf b/betterfinetuningbyreducingrepresentationalcollapse/a5ab975b-3420-4821-a0d2-5bffad9cfb75_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e754c29742c00a3fae36ad88bedfebe97e6e2387
--- /dev/null
+++ b/betterfinetuningbyreducingrepresentationalcollapse/a5ab975b-3420-4821-a0d2-5bffad9cfb75_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2cfe916b4a0384286b0ad18fd766df869e49546d450fa054e2e70247b7512a90
+size 458254
diff --git a/betterfinetuningbyreducingrepresentationalcollapse/full.md b/betterfinetuningbyreducingrepresentationalcollapse/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fadcee07981d1c372c48789e15b4f2895d400448
--- /dev/null
+++ b/betterfinetuningbyreducingrepresentationalcollapse/full.md
@@ -0,0 +1,335 @@
+# BETTER FINE-TUNING BY REDUCING REPRESENTATIONAL COLLAPSE
+
+Armen Aghajanyan, Akshit Shrivastava, Anchit Gupta & Naman Goyal
+
+Facebook
+
+{armenag, akshats, anchit, naman}@fb.com
+
+Luke Zettlemoyer & Sonal Gupta
+
+Facebook
+
+{lsz, sonalgupta}@fb.com
+
+# ABSTRACT
+
+Although widely adopted, existing approaches for fine-tuning pre-trained language models have been shown to be unstable across hyper-parameter settings, motivating recent work on trust region methods. This paper presents a simplified and efficient method rooted in trust region theory that replaces previously used adversarial objectives with parametric noise (sampling from either a normal or uniform distribution), thereby discouraging representation change during fine-tuning when possible without hurting performance. We also introduce a new analysis to motivate the use of trust region methods more generally, by studying representational collapse: the degradation of generalizable representations from pre-trained models as they are fine-tuned for a specific end task. Extensive experiments show that our fine-tuning method matches or exceeds the performance of previous trust region methods on a range of understanding and generation tasks (including DailyMail/CNN, Gigaword, Reddit TIFU, and the GLUE benchmark), while also being much faster. We also show that it is less prone to representational collapse: the pre-trained models maintain more generalizable representations every time they are fine-tuned.
+
+# 1 INTRODUCTION
+
+Pre-trained language models (Radford et al., 2019; Devlin et al., 2018; Liu et al., 2019; Lewis et al., 2019; 2020) have been shown to capture a wide array of semantic, syntactic, and world knowledge (Clark et al., 2019), and provide the de facto initialization for modeling most existing NLP tasks. However, fine-tuning them for each task is a highly unstable process, with many hyperparameter settings producing failed fine-tuning runs, unstable results (considerable variation between random seeds), over-fitting, and other unwanted consequences (Zhang et al., 2020; Dodge et al., 2020).
+
+Recently, trust region or adversarial based approaches, including SMART (Jiang et al., 2019) and FreeLB (Zhu et al., 2019), have been shown to increase the stability and accuracy of fine-tuning by adding additional constraints limiting how much the fine-tuning changes the initial parameters. However, these methods are significantly more computationally and memory intensive than the more commonly adopted simple-gradient-based approaches.
+
+This paper presents a lightweight fine-tuning strategy that matches or improves performance relative to SMART and FreeLB while needing just a fraction of the computational and memory overhead and no additional backward passes. Our approach is motivated by trust region theory while also reducing to simply regularizing the model relative to parametric noise applied to the original pre-trained representations. We show uniformly better performance, setting a new state of the art for RoBERTa fine-tuning on GLUE and reaching state of the art on XNLI using no novel pre-training approaches (Liu et al., 2019; Wang et al., 2018; Conneau et al., 2018). Furthermore, the low overhead of our family of fine-tuning methods allows our method to be applied to generation tasks where we consistently outperform standard fine-tuning, setting state of the art on summarization tasks.
+
+We also introduce a new analysis to motivate the use of trust-region-style methods more generally, by defining a new notion of representational collapse and introducing a new methodology for measuring it during fine-tuning. Representational collapse is the degradation of generalizable representations of pre-trained models during the fine-tuning stage. We empirically show that standard fine-tuning degrades generalizable representations through a series of probing experiments on GLUE tasks. Furthermore, we attribute this phenomenon to using standard gradient descent algorithms for the fine-tuning stage. We also find that (1) recently proposed fine-tuning methods rooted in trust region, i.e., SMART, can alleviate representation collapse, and (2) our methods alleviate representational collapse to an even greater degree, manifesting in better performance across almost all datasets and models.
+
+Our contributions in this paper are the following.
+
+- We propose a novel approach to fine-tuning rooted in trust-region theory, which we show directly alleviates representational collapse at a fraction of the cost of other recently proposed fine-tuning methods.
+- Through extensive experimentation, we show that our method outperforms standard fine-tuning methodology following recently proposed best practices from Zhang et al. (2020). We improve various SOTA models from sentence prediction to summarization, from monolingual to cross-lingual.
+- We further define and explore the phenomena of representational collapse in fine-tuning and directly correlate it with generalization in tasks of interest.
+
+# 2 LEARNING ROBUST REPRESENTATIONS THROUGH REGULARIZED FINE-TUNING
+
+We are interested in deriving methods for fine-tuning representations that provide guarantees on the movement of representations, in the sense that they do not forget the original pre-trained representations when they are fine-tuned for new tasks (see Section 4 for more details). We introduce a new fine-tuning method rooted in an approximation to trust region, which provides guarantees for stochastic gradient descent algorithms by bounding a divergence between the model at update $t$ and at update $t + 1$ (Pascanu & Bengio, 2013; Schulman et al., 2015b; Jiang et al., 2019).
+
+Let $f: \mathbb{R}^{m \times n} \to \mathbb{R}^p$ be a function which returns some pre-trained representation parameterized by $\theta_f$ from $m$ tokens embedded into a fixed vector of size $n$ . Let the learned classification head $g: \mathbb{R}^p \to \mathbb{R}^q$ be a function which takes an input from $f$ and outputs a valid probability distribution parameterized by $\theta_g$ in $q$ dimensions and let $X$ be our dataset. In the case of generation, we can assume the classification head is simply an identity function or softmax depending on the loss function. Let $\mathcal{L}(\theta)$ denote a loss function given by $\theta = [\theta_f, \theta_g]$ .
+
+We are interested in minimizing $\mathcal{L}$ with respect to $\theta$ such that each update step is constrained by movement in the representational density space $p(f)$ . More formally, given an arbitrary $\epsilon$ :
+
+$$
+\underset{\Delta\theta}{\arg\min}\ \mathcal{L}(\theta + \Delta\theta) \tag{1}
+$$
+
+$$
+\text{s.t.}\quad KL\left(p(f(\cdot;\theta_f)) \,\|\, p(f(\cdot;\theta_f + \Delta\theta_f))\right) = \epsilon
+$$
+
+This constrained optimization problem is equivalent to doing natural gradient descent directly over the representations (Pascanu & Bengio, 2013). Unfortunately, we do not have direct access to the density of representations; therefore, it is not trivial to directly bound this quantity. Instead, we propose to do natural gradient descent over $g \cdot f$ with an additional constraint that $g$ is at most 1-Lipschitz (which naturally constrains change of representations, see Section A.1 in the Appendix). Traditional computation of the natural gradient is computationally prohibitive due to the need for inverting the Hessian. An alternative formulation of the natural gradient can be stated through mirror descent, using Bregman divergences (Raskutti & Mukherjee, 2015; Jiang et al., 2019).
+
+This method primarily serves as a robust regularizer by preventing large updates in the model's probability space. This family of methods is classically known as trust-region methods (Pascanu & Bengio, 2013; Schulman et al., 2015a).
+
+$$
+\mathcal{L}_{SMART}(\theta, f, g) = \mathcal{L}(\theta) + \lambda\, \mathbb{E}_{x \sim X}\left[\sup_{\tilde{x}:\, \|\tilde{x} - x\| \leq \epsilon} KL_S\left(g \cdot f(x) \,\|\, g \cdot f(\tilde{x})\right)\right] \tag{2}
+$$
+
+However, the supremum is computationally intractable. It can be approximated with gradient ascent steps, similar to finding adversarial examples. This was first proposed by SMART with a symmetric $KL_{S}(X,Y) = KL(X||Y) + KL(Y||X)$ term (Jiang et al., 2019).
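
The symmetric KL term itself is straightforward to compute; a minimal sketch for discrete distributions (function names are ours):

```python
import math

def kl(p, q):
    """KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def kl_s(p, q):
    """Symmetric KL divergence: KL_S(p, q) = KL(p||q) + KL(q||p)."""
    return kl(p, q) + kl(q, p)

p = [0.7, 0.2, 0.1]
q = [0.6, 0.3, 0.1]
print(kl_s(p, q))                 # small positive value; 0 only when p == q
print(kl_s(p, q) == kl_s(q, p))   # → True: symmetric by construction
```

Unlike plain KL, this quantity is symmetric in its arguments, which is why it can serve as a distance-like smoothness penalty between the two model predictions.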
+
+We propose an even simpler approximation, which does not require extra backward computations and empirically works as well as or better than SMART. We altogether remove the adversarial nature of SMART and instead optimize for a smoothness parameterized by $KL_{S}$ . Furthermore, we optionally add a constraint on the smoothness of $g$ by making it at most 1-Lipschitz, the intuition being that if we can bound the volume of change in $g$ , we can more effectively bound $f$ .
+
+$$
+\mathcal{L}_{R3F}(f, g, \theta) = \mathcal{L}(\theta) + \lambda\, \mathbb{E}_{x \sim X}\left[KL_S\left(g \cdot f(x) \,\|\, g \cdot f(x + z)\right)\right] \quad \text{(R3F method)} \tag{3}
+$$
+
+$$
+\text{s.t.}\quad z \sim \mathcal{N}(0, \sigma^2 I) \;\text{ or }\; z \sim \mathcal{U}(-\sigma, \sigma) \tag{4}
+$$
+
+$$
+\text{s.t.}\quad Lip\{g\} \leq 1 \quad \text{(optional; R4F method)} \tag{5}
+$$
+
+where $KL_{S}$ is the symmetric KL divergence and $z$ is a sample from a parametric distribution. In our work we test two distributions: normal and uniform, each centered at 0. We denote this as the Robust Representations through Regularized Finetuning (R3F) method.
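
As a sketch of Equations 3–4, the R3F term for a toy linear "model" (our stand-in for $g \cdot f$ ; in real usage the noise is applied to the token embeddings of a pre-trained network) can be written as:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def r3f_loss(W, x, y, lam=1.0, sigma=0.1, rng=None):
    """Task loss plus the symmetric-KL smoothness term of Eq. 3,
    with z ~ N(0, sigma^2 I) applied to the input (Eq. 4)."""
    rng = rng or np.random.default_rng(0)
    z = rng.normal(0.0, sigma, size=x.shape)   # parametric noise
    p = softmax(W @ x)                         # prediction on clean input
    q = softmax(W @ (x + z))                   # prediction on noisy input
    task = -np.log(p[y])                       # cross-entropy on clean input
    return task + lam * (kl(p, q) + kl(q, p))  # add lambda * KL_S

rng = np.random.default_rng(1)
W, x, y = rng.normal(size=(3, 8)), rng.normal(size=8), 0
base = r3f_loss(W, x, y, lam=0.0)   # plain task loss
reg = r3f_loss(W, x, y, lam=1.0)    # adds the non-negative KL_S penalty
print(reg >= base)                  # → True
```

Note that, unlike the adversarial objectives, this requires only one extra forward pass (on the noisy input) and no extra backward passes.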
+
+Additionally, we propose an extension to R3F (R4F; Robust Representations through Regularized and Reparameterized Finetuning), which reparameterizes $g$ to be at most 1-Lipschitz via Spectral Normalization (Miyato et al., 2018). By constraining $g$ to be at most 1-Lipschitz, we can more directly bound the change in representation (Appendix Section A.1). Specifically, we scale all the weight matrices of $g$ by the inverse of their largest singular values, $W_{SN} \coloneqq W / \sigma(W)$ . Given that the spectral radius $\sigma(W_{SN}) = 1$ , we can bound $Lip\{g\} \leq 1$ . In the case of generation, $g$ does not have any weights, so we can only apply the R3F method.
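
The spectral-normalization step ( $W_{SN} = W/\sigma(W)$ ) can be sketched as follows; in practice Miyato et al. (2018) estimate $\sigma(W)$ cheaply with power iteration rather than the full SVD used here:

```python
import numpy as np

def spectral_normalize(W):
    """Divide W by its largest singular value so that the linear map W
    has spectral norm exactly 1, i.e. is at most 1-Lipschitz."""
    return W / np.linalg.norm(W, 2)  # ord=2 on a matrix = largest singular value

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
W_sn = spectral_normalize(W)
print(round(float(np.linalg.norm(W_sn, 2)), 6))  # → 1.0
```

Composing layers whose weight matrices all have spectral norm at most 1 (with 1-Lipschitz activations) keeps the whole head $g$ at most 1-Lipschitz, which is the bound R4F relies on.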
+
+# 2.1 RELATIONSHIP TO SMART AND FREELB
+
+Our method is most closely related to the SMART algorithm, which utilizes an auxiliary smoothness inducing regularization term, which directly optimizes the Bregman divergence mentioned above in Equation 2 (Jiang et al., 2019).
+
+SMART solves the supremum by using an adversarial methodology, ascending to the largest KL divergence within an $\epsilon$ -ball. We instead propose to remove the ascent step completely, optionally fixing the smoothness of the classification head $g$ . This completely removes SMART's adversarial nature and is more akin to optimizing the smoothness of $g \cdot f$ directly. Another recently proposed adversarial method for fine-tuning, FreeLB, optimizes a direct adversarial loss $\mathcal{L}_{FreeLB}(\theta) = \sup_{\Delta \theta: \|\Delta \theta\| \leq \epsilon} \mathcal{L}(\theta + \Delta \theta)$ through iterative gradient ascent steps. This is similar to SMART in the sense that both are adversarial and require gradient ascent steps. Unfortunately, the need for extra forward-backward passes can be prohibitively expensive when fine-tuning large pre-trained models (Zhu et al., 2019).
+
+| Method | FP | BP | xFP |
+| --- | --- | --- | --- |
+| FreeLB | 1 + S | 1 + S | 3 + 3S |
+| SMART | 1 + S | 1 + S | 3 + 3S |
+| R3F/R4F | 2 | 1 | 4 |
+| Standard | 1 | 1 | 3 |
+
+Table 1: Computational cost of recently proposed fine-tuning algorithms. We show forward passes (FP) and backward passes (BP), as well as total computational cost as a factor of forward passes (xFP). $S$ is the number of gradient ascent steps, with a minimum of $S \geq 1$.
+
+Our method is significantly more computationally efficient than adversarial-based fine-tuning methods, as seen in Table 1. We show that this efficiency does not hurt performance; we can match or exceed FreeLB and SMART on a large number of tasks. In addition, the relatively low costs of our methods allow us to improve over fine-tuning on an array of generation tasks.
+
+# 3 EXPERIMENTS
+
+We will first measure performance by fine-tuning on a range of tasks and languages. The subsequent sections examine why methods rooted in trust region, including ours, outperform standard fine-tuning. We aimed for fair comparisons throughout all of our experiments by using fixed-budget hyperparameter searches across all methods. Furthermore, for computationally tractable tasks, we report median/max numbers as well as distributions across a large number of runs.
+
+# 3.1 SENTENCE PREDICTION
+
+# GLUE
+
+We will first test R3F and R4F on sentence classification tasks from the GLUE benchmark (Wang et al., 2018). We select the same subset of GLUE tasks that have been reported by prior work in this space (Jiang et al., 2019): MNLI (Williams et al., 2018), QQP (Iyer et al., 2017), RTE (Bentivogli et al., 2009), QNLI (Rajpurkar et al., 2016), MRPC (Dolan & Brockett, 2005), CoLA (Warstadt et al., 2018), SST-2 (Socher et al., 2013).1
+
+Consistent with prior work (Jiang et al., 2019; Zhu et al., 2019), we focus on improving the performance of RoBERTa-Large based models in the single-task setting (Liu et al., 2019). We report the performance of all models on the GLUE development set.
+
+
+Figure 1: As empirical evidence of the computational benefits of our method, we present a training wall-time analysis on the SST-2 dataset. Each method includes a violin plot for 10 random runs. We define wall-time as the training time in seconds to the best checkpoint.
+
+We fine-tune each of the GLUE tasks with four methods: Standard (STD), the traditional fine-tuning scheme as done by RoBERTa (Liu et al., 2019); Standard++ (STD++), a variant of standard fine-tuning that incorporates recently proposed best practices for fine-tuning, specifically longer fine-tuning and using bias correction in Adam (Zhang et al., 2020); and our proposed methods R3F and R4F. We compare against the numbers reported by SMART, FreeLB, and RoBERTa on the validation set. For each method, we applied a hyper-parameter search with equivalent fixed budgets per method. Fine-tuning each task has task-specific hyperparameters described in the Appendix (Section A.2). After finding the best hyperparameters, we replicated experiments with optimal parameters across ten different random seeds. Our numbers reported are the maximum of 10 seeds to be comparable with other benchmarks in Table 2.
+
+In addition to showing the best performance, we also show the distribution of various methods across ten seeds to demonstrate the stability properties of individual methods in Figure 2.
+
+R3F and R4F unanimously improve over Standard and Standard++ fine-tuning. Furthermore, our methods match or exceed adversarial methods such as SMART/FreeLB at a fraction of the computational cost when comparing median runs. We show computational cost in Figure 1 for a single task, but the relative behavior of wall times is consistent across all other GLUE tasks. We note that we could not find a discernible difference in the experimental setting that would make the choice between R3F and R4F trivial.
+
+
+Figure 2: We show the results of our method against Standard++ fine-tuning and SMART across 3 tasks. Across 10 random seeds both max and median of our runs were higher using our method than both SMART and Standard++.
+
+| Method | MNLI Acc-m/mm | QQP Acc/F1 | RTE Acc | QNLI Acc | MRPC Acc | CoLA Mcc | SST-2 Acc |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| STD | 90.2/- | 92.2/- | 86.6 | 94.7 | 89.1 | 68.0 | 96.4 |
+| STD++ | 91.0/- | 92.2/- | 87.4 | 94.8 | 91.1 | 69.4 | 96.9 |
+| FreeLB | 90.6/- | 92.6/- | 88.1 | 95.0 | - | 71.1 | 96.7 |
+| SMART | 91.1/91.3 | 92.4/89.8 | 92.0 | 95.6 | 89.2 | 70.6 | 96.9 |
+| R3F | 91.1/91.3 | 92.4/89.9 | 88.5 | 95.3 | 91.6 | 71.2 | 97.0 |
+| R4F | 90.1/90.8 | 92.5/89.9 | 88.8 | 95.1 | 90.9 | 70.6 | 97.1 |
+
+| Method | MNLI Acc-m/mm | QQP Acc/F1 | RTE Acc | QNLI Acc | MRPC Acc | CoLA Mcc | SST-2 Acc |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| STD | 90.2/- | 91.9/- | 86.6 | 92.1 | 84.4 | 66.2 | 96.4 |
+| STD++ | 90.8/- | 92.1/- | 87.4 | 92.5 | 89.1 | 68.4 | 96.9 |
+| FreeLB | -/- | -/- | - | - | - | - | - |
+| SMART | 90.85/91.10 | 91.7/88.2 | 89.5 | 94.8 | 83.9 | 69.4 | 96.6 |
+| R3F | 91.10/91.10 | 92.1/88.4 | 88.4 | 95.1 | 91.2 | 70.6 | 96.2 |
+| R4F | 90.0/90.6 | 91.8/88.2 | 88.3 | 94.8 | 90.1 | 70.1 | 96.8 |
+
+# XNLI
+
+We hypothesize that staying close to the original representations is especially crucial for cross-lingual tasks, particularly in the zero-shot setting, where drifting away from pre-trained representations for a single language might manifest in a loss of cross-lingual capabilities. In particular, we look at the popular XNLI benchmark, containing 15 languages (Conneau et al., 2018). We compare our method against the standard trained XLM-R model in the zero-shot setting (Conneau et al., 2019).
+
+Table 2: We present results on the GLUE development set for various fine-tuning methods applied to the RoBERTa Large model. The first table reports our best numbers alongside numbers published in other papers. The second table reports median numbers from 10 runs for the mentioned methods.
+
+| Model | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur | Avg |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| XLM-R Base | 85.8 | 79.7 | 80.7 | 78.7 | 77.5 | 79.6 | 78.1 | 74.2 | 73.8 | 76.5 | 74.6 | 76.7 | 72.4 | 66.5 | 68.3 | 76.2 |
+| XLM-R Large | 89.1 | 84.1 | 85.1 | 83.9 | 82.9 | 84.0 | 81.2 | 79.6 | 79.8 | 80.8 | 78.1 | 80.2 | 76.9 | 73.9 | 73.8 | 80.9 |
+| +R3F | 89.4 | 84.2 | 85.1 | 83.7 | 83.6 | 84.6 | 82.3 | 80.7 | 80.6 | 81.1 | 79.4 | 80.1 | 77.3 | 72.6 | 74.2 | 81.2 |
+| +R4F | 89.6 | 84.7 | 85.2 | 84.2 | 83.6 | 84.6 | 82.5 | 80.3 | 80.5 | 80.9 | 79.2 | 80.6 | 78.2 | 72.7 | 73.9 | 81.4 |
+| InfoXLM | 89.7 | 84.5 | 85.5 | 84.1 | 83.4 | 84.2 | 81.3 | 80.9 | 80.4 | 80.8 | 78.9 | 80.9 | 77.9 | 74.8 | 73.7 | 81.4 |
+
+Table 3: To remain consistent with prior experiments, we report an average of 5 runs of zero-shot results on the XNLI test set for our method applied to XLM-R Large. Various versions of our method win on the majority of languages. The bottom row shows the current SOTA on XNLI, which requires the pre-training of a novel model.
+
+We present our results in Table 3. R3F and R4F dominate standard pre-training on 14 out of the 15 languages in the XNLI task. R4F improves over the best known XLM-R XNLI results, reaching SOTA with an average language score of 81.4 across five runs. The current state of the art, InfoXLM, required a novel pre-training method to reach the same numbers (Chi et al., 2020).
+
+| Model | CNN/DailyMail | Gigaword | Reddit TIFU (Long) |
+| --- | --- | --- | --- |
+| Random Transformer | 38.27/15.03/35.48 | 35.70/16.75/32.83 | 15.89/1.94/12.22 |
+| BART | 44.16/21.28/40.90 | 39.29/20.09/35.65 | 24.19/8.12/21.31 |
+| PEGASUS | 44.17/21.47/41.11 | 39.12/19.86/36.24 | 26.63/9.01/21.60 |
+| ERNIE-GEN | 44.02/21.17/41.26 | 39.25/20.25/36.53 | - |
+| ProphetNet (Old SOTA) | 44.20/21.17/41.30 | 39.51/20.42/36.69 | - |
+| BART+R3F (New SOTA) | 44.38/21.53/41.17 | 40.45/20.69/36.56 | 30.31/10.98/24.74 |
+
+Table 4: Our results on various summarization data-sets. We report Rouge-1, Rouge-2 and Rouge-L per element in table. Following PEGASUS, we bold the best number and numbers within 0.15 of the best.
+
+# 3.2 SUMMARIZATION
+
+While prior work on non-standard fine-tuning methods tends to focus on sentence prediction and GLUE tasks (Jiang et al., 2019; Zhu et al., 2019; Zhang et al., 2020), we look to improve abstractive summarization due to its additional complexity and computational cost. Specifically, we look at three datasets: CNN/DailyMail (Hermann et al., 2015), Gigaword (Napoles et al., 2012) and Reddit TIFU (Kim et al., 2018).
+
+Like most other NLP tasks, summarization has recently been dominated by the fine-tuning of large pre-trained models. For example, PEGASUS explicitly defines a pre-training objective to facilitate the learning of representations tailored to summarization tasks, manifesting in state-of-the-art performance on various summarization benchmarks (Zhang et al., 2019). ProphetNet (Yan et al., 2020) improved over these numbers by introducing its own novel self-supervised task, as did ERNIE-GEN (Xiao et al., 2020).
+
+Independent of the pre-training task, standard fine-tuning on downstream tasks follows a simple formula of using a label smoothing loss while directly fine-tuning the whole model without adding any new parameters. We propose adding the R3F term directly to the label smoothing loss. We note that R4F cannot be applied directly to generation tasks, since the head $g$ has no weights to reparameterize in this setting.
+
+We present our results in Table 4. Our method (R3F) outperforms standard fine-tuning across the board for three tasks across all of the ROUGE metric variants. Notably, we improve Gigaword and Reddit TIFU ROUGE-1 scores by a point and four points, respectively.
+
+# 4 REPRESENTATIONAL COLLAPSE
+
+Catastrophic forgetting, proposed initially as catastrophic interference, is a phenomenon that occurs during sequential training where new updates interfere catastrophically with previous updates, manifesting in the forgetting of particular examples for a fixed task (McCloskey & Cohen, 1989). Catastrophic forgetting has historically been associated with continuous learning, and recent work (Mosbach et al., 2020) showed that catastrophic forgetting with respect to the original MLM objective is not detrimental for end task training; instead, the issue lies in optimization. Inspired by this work, we explore the related problem of representational collapse, the degradation of generalizable representations of pre-trained models during the fine-tuning stage. This definition is independent of a specific fine-tuning task and instead concerns the generalizability of the internal representations over a large union of tasks. Another view of this phenomenon is that fine-tuning collapses the wide range of information available in the representations into a smaller set needed only for the immediate task and particular training set.
+
+Measuring such degradations is non-trivial. Simple metrics such as the distance between pre-trained representations and fine-tuned representations are not sufficient (e.g., adding a constant to the pre-trained representations will not change representation power, but will change distances). One approach would be to estimate mutual information of representations across tasks before and after fine-tuning, but the estimation of mutual information is notoriously hard, especially in high dimensions (Tschannen et al., 2019). We instead propose a series of probing experiments meant to provide us
+
+
+Figure 3: Results from our probing experiments comparing our proposed algorithms R3F, R4F to standard fine-tuning. Variants of our method consistently outperform past work.
+
+with empirical evidence of the existence of representation collapse on the GLUE benchmark (Wang et al., 2018).
+
+# 4.1 PROBING EXPERIMENTS
+
+# PROBING GENERALIZATION OF FINE-TUNED REPRESENTATIONS
+
+To measure the generalization properties of various fine-tuning methodologies, we follow standard probing methodology: we freeze the representations of a model trained on one task and fine-tune a linear layer on top of them for another task. This form of probing directly measures the quality of the representations learned by each fine-tuning method and how much they collapse when the model is fine-tuned on a sequence of tasks.
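A minimal sketch of this probing setup, with random toy features standing in for the frozen encoder outputs (the actual experiments freeze a fine-tuned RoBERTa encoder and train only the linear layer):

```python
import numpy as np

def train_linear_probe(feats, labels, n_classes, lr=0.1, steps=500):
    """Fit a linear softmax classifier on frozen features by gradient
    descent on the cross-entropy loss. The encoder never updates; only
    this layer is trained, as in the probing experiments."""
    n, d = feats.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = feats @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                  # d(cross-entropy)/d(logits)
        W -= lr * feats.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

# Toy frozen "encoder outputs" for a 2-class probe task.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (50, 16)), rng.normal(1, 1, (50, 16))])
y = np.array([0] * 50 + [1] * 50)
W, b = train_linear_probe(X, y, n_classes=2)
acc = ((X @ W + b).argmax(axis=1) == y).mean()
```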
+
+In particular, we fine-tune a RoBERTa model on SST-2 and then train a linear probe for each of six other GLUE tasks. Our results are shown in Figure 3; Appendix A.2 presents the hyperparameters. Across all tasks, one of the two variants of our method performs best among the fine-tuning methods compared.
+
+Conversely, standard fine-tuning produced representations that were worse than those of the other fine-tuning methods across the board, hinting at the sub-optimality of standard fine-tuning. Furthermore, R3F and R4F consistently outperform the adversarial fine-tuning method SMART.
+
+# PROBING REPRESENTATION DEGRADATION
+
+To show the effect of representational collapse, we propose an experiment that measures how the fine-tuning process degrades representations through sequential training on a series of GLUE tasks. We arbitrarily select three GLUE tasks (QNLI, QQP, and RTE) and a source task (SST-2). We begin by training a model on the source task and then train on QNLI, QQP, and RTE sequentially, each time using the best checkpoint from
+
+
+Figure 4: We show the results of the chained probing experiments. We do not show the distributional properties of the runs because there was minimal variance in the results.
+
+the prior iteration. At each point in the chain, we probe the source task and measure performance. We compare standard SGD fine-tuning with the best trust-region approach (R4F). Our results are depicted in Figure 4.
+
+As we can see, with the standard fine-tuning process the model diverges from the source task, resulting in lower probe performance; with our method, the representations change much less over the sequence, resulting in better probing and end-task performance.
+
+# PROBING REPRESENTATION RETENTION
+
+To further understand the impact of representational collapse, we extend our probing experiments to training over a cyclic chain of tasks. Our prior experiments showed that traditional fine-tuning degrades representations during the fine-tuning process, i.e., standard fine-tuning learns poorer representations than alternative fine-tuning methods. The dual of looking at degradation is looking at the retention of learned representations, which we examine via cyclic sequential probing. Sequential probing involves training a model on task A and probing task B, then training the model fine-tuned on B and probing task C, and so forth. We then create a cyclic chain $\underbrace{A\rightarrow B\rightarrow C}_{\text{Cycle 1}}\rightarrow \underbrace{A\rightarrow B\rightarrow C}_{\text{Cycle 2}}\rightarrow\cdots$,
+
+where we compare tasks via their probe performance at each cycle.
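The chained procedure can be sketched as follows; `fine_tune` and `probe` are hypothetical stand-ins for the real training and probing routines (here they only record the order of operations), and the train-then-probe-next ordering is one reading of the protocol described above:

```python
# Skeleton of cyclic sequential probing over a repeated task chain.
def fine_tune(model, task):
    return model + [task]              # "model" = list of tasks seen so far

def probe(model, task):
    return (task, tuple(model))        # which task was probed, on which model

def cyclic_sequential_probing(tasks, n_cycles):
    order = tasks * n_cycles           # e.g. A, B, C, A, B, C for 2 cycles
    model, results = [], []
    for i, task in enumerate(order):
        model = fine_tune(model, task)         # continue from the last checkpoint
        nxt = order[(i + 1) % len(order)]      # probe the next task in the chain
        results.append(probe(model, nxt))
    return results

history = cyclic_sequential_probing(["A", "B", "C"], n_cycles=2)
```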
+
+We expect probing performance to increase at every cycle, since at each cycle the task being probed undergoes another round of full fine-tuning. What we are interested in is the level of retention in the representations after fine-tuning. Specifically, we hypothesize that our method, in particular R4F, will retain representations significantly better than the Standard++ fine-tuning method.
+
+In our experiments we consider the following sequence of GLUE tasks: SST-2 $\rightarrow$ QNLI $\rightarrow$ QQP $\rightarrow$ RTE. We defer hyperparameter values to Appendix A.2.
+
+
+Figure 5: We present the results of cyclical sequential probing for 3 cycles.
+
+Looking at Figure 5, we see that R4F retains the quality of representations significantly better than standard fine-tuning methods.
+
+# 5 CONCLUSION
+
+We propose a family of new fine-tuning approaches for pre-trained representations based on trust-region theory: R3F and R4F. Our methods are more computationally efficient than, and outperform, prior work on fine-tuning via adversarial learning (Jiang et al., 2019; Zhu et al., 2019). We show that this is due to a newly identified phenomenon: representational collapse, in which the representations of pre-trained models degrade during fine-tuning, leading to worse generalization. Our analysis shows that standard fine-tuning is sub-optimal when it comes to learning generalizable representations; our methods instead retain the generalizability of representations and improve end-task performance.
+
+With our method, we improve upon monolingual and multilingual sentence prediction tasks as well as generation tasks compared to standard and adversarial fine-tuning methods. Notably, we set state of the art on CNN/DailyMail, Gigaword, and Reddit TIFU, improve the best-known results on fine-tuning RoBERTa on GLUE, and reach state of the art on zero-shot XNLI, all without any new pre-training method.
+
+We note that there are many flavors of RXF that could arise from different noise distributions or perturbation strategies. We believe a larger, more general framework connecting trust-region methods and fine-tuning exists; we leave this exploration to future work.
+
+# REFERENCES
+
+Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. The fifth pascal recognizing textual entailment challenge. In TAC, 2009.
+Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055, 2017.
+Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training, 2020.
+Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. What does bert look at? an analysis of bert's attention. arXiv preprint arXiv:1906.04341, 2019.
+Alexis Conneau, Guillaume Lample, Rudy Rinott, Adina Williams, Samuel R Bowman, Holger Schwenk, and Veselin Stoyanov. Xnli: Evaluating cross-lingual sentence representations. arXiv preprint arXiv:1809.05053, 2018.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116, 2019.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
+Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv preprint arXiv:2002.06305, 2020.
+William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005.
+Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in neural information processing systems, pp. 1693-1701, 2015.
+Shankar Iyer, Nikhil Dandekar, and Kornel Csernai. First quora dataset release: Question pairs, 2017. URL https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs.
+Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. arXiv preprint arXiv:1911.03437, 2019.
+Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim. Abstractive summarization of reddit posts with multi-level memory networks. arXiv preprint arXiv:1811.00783, 2018.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
+Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, and Luke Zettlemoyer. Pre-training via paraphrasing, 2020.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
+
+Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pp. 109-165. Elsevier, 1989.
+Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.
+Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. arXiv preprint arXiv:2006.04884, 2020.
+Courtney Napoles, Matthew R Gormley, and Benjamin Van Durme. Annotated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX), pp. 95-100, 2012.
+Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584, 2013.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
+Garvesh Raskutti and Sayan Mukherjee. The information geometry of mirror descent. IEEE Transactions on Information Theory, 61(3):1451-1457, 2015.
+John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International conference on machine learning, pp. 1889-1897, 2015a.
+John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International conference on machine learning, pp. 1889-1897, 2015b.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pp. 1631-1642, 2013.
+Michael Tschannen, Josip Djolonga, Paul K Rubenstein, Sylvain Gelly, and Mario Lucic. On mutual information maximization for representation learning. arXiv preprint arXiv:1907.13625, 2019.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353-355, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-5446. URL https://www.aclweb.org/anthology/W18-5446.
+Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471, 2018.
+Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112–1122. Association for Computational Linguistics, 2018. URL http://aclweb.org/anthology/N18-1101.
+Dongling Xiao, Han Zhang, Yukun Li, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. Ernie-gen: An enhanced multi-flow pre-training and fine-tuning framework for natural language generation. arXiv preprint arXiv:2001.11314, 2020.
+Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training. arXiv preprint arXiv:2001.04063, 2020.
+
+Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J Liu. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. arXiv preprint arXiv:1912.08777, 2019.
+
+Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. Revisiting few-sample bert fine-tuning. arXiv preprint arXiv:2006.05987, 2020.
+
+Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. Freelb: Enhanced adversarial training for natural language understanding. In International Conference on Learning Representations, 2019.
+
+# A APPENDIX
+
+# A.1 CONTROLLING CHANGE OF REPRESENTATION VIA CHANGE OF VARIABLE
+
+Suppose we have random variables forming a Markov chain $x \rightarrow y \rightarrow z$, with $y = f(x; \theta_f)$ and $z = g(y; \theta_g)$.
+
+The change of variable formulation for probability densities is
+
+$$
+p\left(f(x; \theta_{f})\right) = p\left(g\left(f(x; \theta_{f})\right)\right) \left| \det \frac{d\, g\left(f(x; \theta_{f})\right)}{d\, f(x; \theta_{f})} \right| \tag{6}
+$$
+
+Direct application of change of variable gives us
+
+$$
+KL\left(p\left(f(x; \theta_{f})\right) \,\middle\|\, p\left(f(x; \theta_{f} + \Delta\theta_{f})\right)\right) = \sum p\left(f(x; \theta_{f})\right) \log \frac{p\left(f(x; \theta_{f})\right)}{p\left(f(x; \theta_{f} + \Delta\theta_{f})\right)} \tag{7}
+$$
+
+$$
+= \sum p\left(g\left(f(x; \theta_{f})\right)\right) \left| \det \frac{d\, g\left(f(x; \theta_{f})\right)}{d\, f(x; \theta_{f})} \right| \left[ \log p\left(g\left(f(x; \theta_{f})\right)\right) + \log \left| \det \frac{d\, g\left(f(x; \theta_{f})\right)}{d\, f(x; \theta_{f})} \right| - \log p\left(g\left(f(x; \theta_{f} + \Delta\theta_{f})\right)\right) - \log \left| \det \frac{d\, g\left(f(x; \theta_{f} + \Delta\theta_{f})\right)}{d\, f(x; \theta_{f} + \Delta\theta_{f})} \right| \right] \tag{8}
+$$
+
+Let us make one more assumption: let $g(y) = Wy$, where the spectral norm $\rho(W) = 1$. Since $|\det W|$ is the product of the singular values of $W$, each at most $\rho(W)$, we can trivially bound $|\det W| \leq 1$. Moreover, the Jacobian of $g$ is $W$ regardless of its input, so the two log-determinant terms cancel. Then we have
+
+$$
+\begin{array}{ll}
+= \sum p\left(g\left(f(x; \theta_{f})\right)\right) \left| \det \frac{d\, g\left(f(x; \theta_{f})\right)}{d\, f(x; \theta_{f})} \right| \left[ \log p\left(g\left(f(x; \theta_{f})\right)\right) - \log p\left(g\left(f(x; \theta_{f} + \Delta\theta_{f})\right)\right) \right] & (13) \\
+= \sum p\left(g\left(f(x; \theta_{f})\right)\right) \left| \det \frac{d\, g\left(f(x; \theta_{f})\right)}{d\, f(x; \theta_{f})} \right| \log \frac{p\left(g\left(f(x; \theta_{f})\right)\right)}{p\left(g\left(f(x; \theta_{f} + \Delta\theta_{f})\right)\right)} & (14) \\
+\leq \sum p\left(g\left(f(x; \theta_{f})\right)\right) \log \frac{p\left(g\left(f(x; \theta_{f})\right)\right)}{p\left(g\left(f(x; \theta_{f} + \Delta\theta_{f})\right)\right)} & (15) \\
+= KL\left(p\left(g\left(f(x; \theta_{f})\right)\right) \,\middle\|\, p\left(g\left(f(x; \theta_{f} + \Delta\theta_{f})\right)\right)\right) & (16)
+\end{array}
+$$
+
+We also see that the tightness of the bound is controlled by $|\det W|$, which is itself bounded by the largest singular value of $W$, giving intuition for the importance of using spectral normalization.
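A quick numerical sanity check of this bound (an illustration, not part of the paper): after dividing a random $W$ by its largest singular value (spectral normalization), $|\det W|$ is a product of singular values that are each at most 1, so it cannot exceed 1:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 6))

# Spectral normalization: divide by the largest singular value so rho(W) = 1.
sigma = np.linalg.svd(W, compute_uv=False)   # singular values, descending
W_sn = W / sigma[0]

# |det W_sn| equals the product of the normalized singular values,
# each of which is <= 1, so the determinant factor is at most 1.
det = abs(np.linalg.det(W_sn))
assert np.isclose(det, np.prod(sigma / sigma[0]))
assert det <= 1.0
```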
+
+# A.2 EXPERIMENT HYPER-PARAMETERS
+
+For our GLUE-related experiments, both full fine-tuning and probing, the following parameters are used. For the probing experiments, the only differences are that the RoBERTa encoder is frozen and the encoder dropout is removed.
+
+| Hyper Parameter | MNLI | QNLI | QQP | SST-2 | RTE | MRPC | CoLA |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Learning Rate | 5e-6 | 5e-6 | 5e-6 | 5e-6 | 1e-5 | 1e-5 | 1e-5 |
+| Max Updates | 123873 | 33112 | 113272 | 20935 | 3120 | 2296 | 5336 |
+| Max Sentences | 8 | 8 | 32 | 32 | 8 | 16 | 16 |
+
+Table 5: Task specific hyper parameters for GLUE experiments
+
+| Hyper parameter | Value |
+| --- | --- |
+| Optimizer | Adam |
+| Adam-betas | (0.9, 0.98) |
+| Adam-eps | 1e-6 |
+| LR Scheduler | polynomial decay |
+| Dropout | 0.1 |
+| Weight Decay | 0.01 |
+| Warmup Updates | 0.06 * max updates |
+
+| Hyper parameter | Value |
+| --- | --- |
+| λ | [0.1, 0.5, 1.0, 5.0] |
+| Noise Types | [U, N] |
+| σ | 1e-5 |
+
+Table 6: Hyper parameters for R3F and R4F experiments on GLUE
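To illustrate how the hyperparameters in Table 6 (λ, noise type U or N, and scale σ) enter a parametric-noise consistency term, here is a sketch assuming a symmetric-KL formulation over a toy classifier; the exact R3F/R4F objective is defined in the main text, and this is only an illustrative approximation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sym_kl(p, q, eps=1e-12):
    # Symmetric KL divergence between two categorical distributions.
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def r3f_style_penalty(embed, model, lam, sigma, noise_type, rng):
    """Sketch of a parametric-noise consistency term: perturb the input
    embeddings with U(-sigma, sigma) or N(0, sigma^2) noise and penalize
    the symmetric KL between clean and noisy output distributions."""
    if noise_type == "U":
        z = rng.uniform(-sigma, sigma, size=embed.shape)
    else:  # "N"
        z = rng.normal(0.0, sigma, size=embed.shape)
    return lam * sym_kl(model(embed), model(embed + z))

# Toy "model": a fixed linear layer + softmax over 3 classes.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 3))
model = lambda e: softmax(e @ W)

embed = rng.normal(size=(1, 16))
penalty = r3f_style_penalty(embed, model, lam=1.0, sigma=1e-5,
                            noise_type="N", rng=rng)
```

With σ = 1e-5, as in Table 6, the perturbation barely moves the output distribution, so the penalty is non-negative but tiny, which matches the intuition of a small trust region around the current representations.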
+
+| Hyper Parameter | CNN/DailyMail | Gigaword | Reddit TIFU |
+| --- | --- | --- | --- |
+| Max Tokens | 1024 | 2048 | 2048 |
+| Total updates | 80000 | 200000 | 200000 |
+| Warmup Updates | 1000 | 5000 | 5000 |
+
+Table 7: Task specific hyper parameters for Summarization experiments.
+
+| Hyper parameter | Value |
+| --- | --- |
+| Optimizer | Adam |
+| Adam-betas | (0.9, 0.98) |
+| Adam-eps | 1e-8 |
+| LR Scheduler | polynomial decay |
+| Learning Rate | 3e-05 |
+
+| Hyper parameter | Value |
+| --- | --- |
+| λ | [0.001, 0.01, 0.1] |
+| Noise Types | [U, N] |
+| σ | 1e-5 |
+| Dropout | 0.1 |
+| Weight Decay | 0.01 |
+| Clip Norm | 0.1 |
+
+Table 8: Hyper parameters for R3F and R4F experiments on Summarization experiments.
+
+| Hyper parameter | Value |
+| --- | --- |
+| Optimizer | Adam |
+| Adam-betas | (0.9, 0.98) |
+| Adam-eps | 1e-8 |
+| LR Scheduler | polynomial decay |
+| Learning Rate | 3e-05 |
+| Dropout | 0.1 |
+| Weight Decay | 0.01 |
+
+| Hyper parameter | Value |
+| --- | --- |
+| λ | [0.5, 1, 3, 5] |
+| Noise Types | [U, N] |
+| σ | 1e-5 |
+| Total Updates | 450000 |
+| Max Positions | 512 |
+| Max Tokens | 4400 |
+| Max Sentences | 8 |
+
+Table 9: Hyper parameters for R3F and R4F experiments on XNLI.
\ No newline at end of file
diff --git a/betterfinetuningbyreducingrepresentationalcollapse/images.zip b/betterfinetuningbyreducingrepresentationalcollapse/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..da163f39ca03b9a8a075f6eecd8b0d195ed8870e
--- /dev/null
+++ b/betterfinetuningbyreducingrepresentationalcollapse/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd8f54e0fafe476bd633960aff6243705071ad90041cfb9a036f5c6947bc815d
+size 601168
diff --git a/betterfinetuningbyreducingrepresentationalcollapse/layout.json b/betterfinetuningbyreducingrepresentationalcollapse/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d4434fd839a96540846405610e8faed7aa61152c
--- /dev/null
+++ b/betterfinetuningbyreducingrepresentationalcollapse/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5265314e7b80f71332eeb2b3fe6dd2efe033a31ea1dec1f621dbcf2dbcca7a64
+size 344268
diff --git a/beyondcategoricallabelrepresentationsforimageclassification/f629610f-d96b-46ed-832a-f1a1bba5cad1_content_list.json b/beyondcategoricallabelrepresentationsforimageclassification/f629610f-d96b-46ed-832a-f1a1bba5cad1_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..56b9588cb02f6a7161dc11618f749d0d651661da
--- /dev/null
+++ b/beyondcategoricallabelrepresentationsforimageclassification/f629610f-d96b-46ed-832a-f1a1bba5cad1_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee6a134f84062ea697781b9f9421fd4e098f1a4fd01bf0bacda7ee45ed3b813d
+size 86798
diff --git a/beyondcategoricallabelrepresentationsforimageclassification/f629610f-d96b-46ed-832a-f1a1bba5cad1_model.json b/beyondcategoricallabelrepresentationsforimageclassification/f629610f-d96b-46ed-832a-f1a1bba5cad1_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b2876b0f804211df4ed88c80894d0e5704e7fb06
--- /dev/null
+++ b/beyondcategoricallabelrepresentationsforimageclassification/f629610f-d96b-46ed-832a-f1a1bba5cad1_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a62e087e994d2e2d95a4ba3ae96e424adab58003860129139e164c8304ccf3b2
+size 106485
diff --git a/beyondcategoricallabelrepresentationsforimageclassification/f629610f-d96b-46ed-832a-f1a1bba5cad1_origin.pdf b/beyondcategoricallabelrepresentationsforimageclassification/f629610f-d96b-46ed-832a-f1a1bba5cad1_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..000aa3561f753d9a1bdd99b5511970ad055fb382
--- /dev/null
+++ b/beyondcategoricallabelrepresentationsforimageclassification/f629610f-d96b-46ed-832a-f1a1bba5cad1_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14d1bac75d2b5fe6613b28f469e527c192599e5bb3c5a85b9f80f1c7b01b5138
+size 3042698
diff --git a/beyondcategoricallabelrepresentationsforimageclassification/full.md b/beyondcategoricallabelrepresentationsforimageclassification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..395d4f12a943c78f06fee4687a74f0988d453bd1
--- /dev/null
+++ b/beyondcategoricallabelrepresentationsforimageclassification/full.md
@@ -0,0 +1,343 @@
+# BEYOND CATEGORICAL LABEL REPRESENTATIONS FOR IMAGE CLASSIFICATION
+
+Boyuan Chen, Yu Li, Sunand Raghupathi, Hod Lipson
+
+Columbia University
+
+https://www.creativemachineslab.com/about-representation.html
+
+# ABSTRACT
+
+We find that the way we choose to represent data labels can have a profound effect on the quality of trained models. For example, training an image classifier to regress audio labels rather than traditional categorical probabilities produces a more reliable classification. This result is surprising, considering that audio labels are more complex than simpler numerical probabilities or text. We hypothesize that high dimensional, high entropy label representations are generally more useful because they provide a stronger error signal. We support this hypothesis with evidence from various label representations including constant matrices, spectrograms, shuffled spectrograms, Gaussian mixtures, and uniform random matrices of various dimensionalities. Our experiments reveal that high dimensional, high entropy labels achieve comparable accuracy to text (categorical) labels on the standard image classification task, but features learned through our label representations exhibit more robustness under various adversarial attacks and better effectiveness with a limited amount of training data. These results suggest that label representation may play a more important role than previously thought.
+
+# 1 INTRODUCTION
+
+Image classification is a well-established task in machine learning. The standard approach takes an input image and predicts a categorical distribution over the given classes. The most popular method to train these neural networks is through a cross-entropy loss with backpropagation. Deep convolutional neural networks (Lecun et al., 1998; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015; Huang et al., 2016) have achieved extraordinary performance on this task, with some even surpassing human-level performance. However, is this a solved problem? The state-of-the-art performance commonly relies on large amounts of training data (Krizhevsky, 2009; Russakovsky et al., 2015; Kuznetsova et al., 2018), and there exist many examples of networks with good performance that fail on images with imperceptible adversarial perturbations (Biggio et al., 2013; Szegedy et al., 2013; Nguyen et al., 2014).
+
+Much progress has been made in domains such as few-shot learning and meta-learning to improve the data efficiency of neural networks. There is also a large body of research addressing the challenge of adversarial defense. Most efforts have focused on improving optimization methods, weight initialization, architecture design, and data preprocessing. In this work, we find that simply replacing standard categorical labels with high dimensional, high entropy variants (e.g. an audio spectrogram pronouncing the name of the class) can lead to interesting properties such as improved robustness and efficiency, without a loss of accuracy.
+
+Our research is inspired by key observations from human learning. Humans appear to learn to recognize new objects from few examples, and are not easily fooled by the types of adversarial perturbations applied to current neural networks. There could be many reasons for the discrepancy between how humans and machines learn. One significant aspect is that humans do not output categorical probabilities on all known categories. A child shown a picture of a dog and asked "what is this a picture of?" will directly speak out the answer — "dog." Similarly, a child being trained by a parent is shown a picture and then provided the associated label in the form of speech. These observations raise the question: Are we supervising neural networks on the best modality?
+
+
+Figure 1: Label Representations beyond Categorical Probabilities: We study the role of label representation in training neural networks for image classification. We find that high-dimensional labels with high entropy lead to more robust and data-efficient feature learning.
+
+In this paper, we take one step closer to understanding the role of label representations inside the training pipeline of deep neural networks. However, while useful properties emerge by utilizing various label representations, we do not attempt to achieve state-of-the-art performance over these metrics. Rather, we hope to provide a novel research perspective on the standard setup. Therefore, our study is not mutually exclusive with previous research on improving adversarial robustness and data efficiency.
+
+An overview of our approach is shown in Figure 1. We first follow the above natural observation and modify existing image classifiers to "speak out" their predictions instead of outputting a categorical distribution. Our initial experiments show surprising results: neural networks trained with speech labels learn features that are more robust against adversarial attacks, and are more data-efficient when less than $20\%$ of the training data is available.
+
+Furthermore, we hypothesize that the improvements from the speech label representation come from its properties as a specific type of high-dimensional object. To test our hypothesis, we performed a large-scale systematic study with various other high-dimensional label representations, including constant matrices, speech spectrograms, shuffled speech spectrograms, compositions of Gaussians, and high- and low-dimensional uniform random vectors. Our experimental results show that high-dimensional label representations with high entropy generally lead to robust and data-efficient network training. We believe our findings point to a significant role for label representations that has been largely unexplored in the training of deep neural networks.
+
+Our contributions are threefold. First, we introduce a new paradigm for the image classification task by using speech as the supervisory signal. We demonstrate that speech models can achieve comparable performance to traditional models that rely on categorical outputs. Second, we quantitatively show that high-dimensional label representations with high entropy (e.g. audio spectrograms and compositions of Gaussians) produce more robust and data-efficient neural networks, while high-dimensional labels with low entropy (e.g. constant matrices) and low-dimensional labels with high entropy do not have these benefits and may even lead to worse performance. Finally, we present a set of quantitative and qualitative analyses to systematically study and understand the learned feature representations of our networks. Our visualizations suggest that speech labels encourage learning more discriminative features.
+
+# 2 RELATED WORKS
+
+Data Efficiency and Robustness Data efficiency has been a widely studied problem within the context of few-shot learning and meta-learning (Thrun & Pratt, 2012; Vilalta & Drissi, 2002; Vanschoren, 2018; Wang et al., 2019). Researchers have made exciting progress on improving methods of optimization (Ravi & Larochelle, 2016; Li et al., 2017), weight initialization (Finn et al., 2017; Ravi & Larochelle, 2017), and architecture design (Santoro et al., 2016a;b).
+
+There is also a large body of research addressing the challenge of adversarial defense. Adversarial training is perhaps the most common measure against adversarial attacks (Goodfellow et al., 2014; Kurakin et al., 2016; Szegedy et al., 2013; Shaham et al., 2018; Madry et al., 2017). Recent works try to tackle the problem by leveraging GANs (Samangouei et al., 2018), detecting adversarial examples (Meng & Chen, 2017; Lu et al., 2017; Metzen et al., 2017), and denoising or reconstruction (Song et al., 2017; Liao et al., 2017).
+
+Most of these techniques for improving data efficiency and adversarial robustness study the problem from the perspective of the model, the optimizer, or the data. Relatively little research has been conducted on the labels themselves or, more specifically, their representation. Hosseini et al. (2017) augmented categorical labels with a new NULL class to allow the model to classify and reject perturbed examples. Papernot et al. (2015) utilize model distillation (Hinton et al., 2015) for adversarial defense. Papernot & McDaniel (2017) further augment the labels used to train the distilled model with the predictive uncertainty of the original model. Nevertheless, the method of Hosseini et al. (2017) requires adversarial examples to train on, while defensive distillation has been shown to be vulnerable to substitute-model black-box attacks (Papernot et al., 2017).
+
+Label Smoothing The closest approach to our work is Label Smoothing (LS) (Szegedy et al., 2016). Here we highlight key differences between our approach and LS. First, LS applies to discriminative outputs, where both correct and incorrect class information is presented during training, while our output is generative and only correct-class information is presented. That is, our outputs are not a distribution over classes. Although LS has been shown to improve adversarial robustness (Goibert & Dohmatob, 2020), it has not been shown to be effective for low-data learning. As we will show in our experiments, LS does not help when the amount of training data is limited, while our label representations lead to significant improvements. Therefore, our high-dimensional, high-entropy labels provide benefits beyond those provided by label smoothing.
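For contrast with the generative targets studied in this paper, a standard label-smoothing target can be constructed as follows (one common formulation of LS, not the method proposed here):

```python
import numpy as np

def smooth_labels(label, n_classes, alpha=0.1):
    """Standard label smoothing (Szegedy et al., 2016), shown for contrast:
    mass alpha is spread uniformly over all classes, so incorrect classes
    still receive supervision -- unlike the generative targets studied here,
    which encode only the correct class."""
    target = np.full(n_classes, alpha / n_classes)
    target[label] += 1.0 - alpha
    return target

t = smooth_labels(2, n_classes=4, alpha=0.1)
# t is [0.025, 0.025, 0.925, 0.025] and still sums to 1.
```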
+
+# 3 BEYOND ACCURACY: EMERGENCE OF ROBUSTNESS AND EFFICIENCY
+
+It is well-known that deep neural networks with similar accuracy on the same task may perform very differently under different evaluation scenarios. Additionally, real-world applications rely on more considerations than just accuracy. Robustness and data efficiency are two practical challenges for deep neural networks. We test the emergence of those properties under various label representations.
+
+# 3.1 ROBUSTNESS UNDER ADVERSARIAL ATTACKS
+
+We evaluate the robustness of all the trained models using the fast gradient sign method (FGSM) (Goodfellow et al., 2014) and the iterative method (Kurakin et al., 2016) across multiple widely used convolutional networks. We choose these attacks because their adversarial images $I_{\mathrm{adv}}$ are usually indistinguishable from the original images $I$ or do not significantly affect human evaluation, but they can be very challenging for neural networks to correctly classify. When a loss function $J$ is involved in generating the adversarial images, we use the cross-entropy loss for the text model and the smooth L1 loss for the speech model.
+
+FGSM is a fast one-step attack that generates an adversarial image by adding a small adversarial perturbation to the original image. The perturbation is based on the gradient of the loss with respect to the original image, and its maximum magnitude is bounded by $\epsilon$:
+
$$
\left\| I - I_{\mathrm{adv}} \right\|_{\infty} \leq \epsilon. \tag{1}
$$
+
+We test both untargeted and targeted versions of FGSM. The untargeted attacks increase the loss between the predicted class and the true class $Y_{\mathrm{true}}$ :
+
$$
I_{\mathrm{adv}} = I + \epsilon \cdot \operatorname{Sign}\left(\nabla_I J(I, Y_{\mathrm{true}})\right), \tag{2}
$$
+
+whereas the targeted attacks decrease the loss between the predicted class and a target class $Y_{\text{target}}$ :
+
$$
I_{\mathrm{adv}} = I - \epsilon \cdot \operatorname{Sign}\left(\nabla_I J(I, Y_{\mathrm{target}})\right). \tag{3}
$$
+
+We choose a random incorrect class as the target class for each input image, and the same target classes are used to test different models. All $I_{\mathrm{adv}}$ are normalized after the perturbation.
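A minimal sketch of the two FGSM variants above; for illustration we use a linear softmax classifier, whose input gradient of the cross-entropy loss has the closed form $W^{\top}(p - y)$ (the actual attacks backpropagate through the full network, and `W`, `x` here are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, W, cls, eps, targeted=False):
    """One-step FGSM on a linear softmax model. For the untargeted attack
    (Eq. 2) `cls` is the true class and we ascend the loss; for the
    targeted attack (Eq. 3) `cls` is the target class and we descend."""
    p = softmax(W @ x)
    onehot = np.zeros_like(p)
    onehot[cls] = 1.0
    grad = W.T @ (p - onehot)          # input gradient of cross-entropy
    sign = -1.0 if targeted else 1.0
    return x + sign * eps * np.sign(grad)
```

By construction the perturbation satisfies Eq. (1): $\|x - x_{\mathrm{adv}}\|_\infty \leq \epsilon$.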
+
Iterative Method As an extension to FGSM, the iterative method applies multiple steps of gradient-based updates. In our experiments, we initialize the adversarial image $I_{\mathrm{adv}}$ to the original image $I$, so that $I_{\mathrm{adv}}^0 = I$. We then apply FGSM 5 times with a small step size $\alpha = \epsilon / 5$. The untargeted update for each iteration becomes
+
$$
I_{\mathrm{adv}}^{N+1} = \operatorname{Clip}_{I,\epsilon}\left\{ I_{\mathrm{adv}}^{N} + \alpha \cdot \operatorname{Sign}\left(\nabla_I J\left(I_{\mathrm{adv}}^{N}, Y_{\mathrm{true}}\right)\right) \right\}, \tag{4}
$$
+
+and the targeted update becomes
+
$$
I_{\mathrm{adv}}^{N+1} = \operatorname{Clip}_{I,\epsilon}\left\{ I_{\mathrm{adv}}^{N} - \alpha \cdot \operatorname{Sign}\left(\nabla_I J\left(I_{\mathrm{adv}}^{N}, Y_{\mathrm{target}}\right)\right) \right\}, \tag{5}
$$
+
where $\mathrm{Clip}_{I,\epsilon}$ denotes clipping the total perturbation $I_{\mathrm{adv}}^N - I$ to the range $[-\epsilon, \epsilon]$. We use the same target classes from FGSM for the evaluation of the iterative method.
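The five-step schedule above can be sketched generically; `grad_fn` stands in for $\nabla_I J$ and is an assumption of this sketch:

```python
import numpy as np

def iterative_fgsm(x, grad_fn, eps, steps=5, targeted=False):
    """Iterative FGSM (Eqs. 4-5): start from I_adv^0 = I, take `steps`
    signed-gradient steps of size alpha = eps/steps, and clip the total
    perturbation I_adv - I to [-eps, eps] after every step."""
    alpha = eps / steps
    sign = -1.0 if targeted else 1.0
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + sign * alpha * np.sign(grad_fn(x_adv))
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # Clip_{I, eps}
    return x_adv
```

With a constant-sign gradient, five steps of size $\epsilon/5$ accumulate to exactly $\pm\epsilon$, matching the FGSM budget.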
+
+# 3.2 LEARNING EFFICIENCY WITH LIMITED DATA
+
We take the most straightforward approach to evaluating data efficiency. We start with only $1\%$ of the original training data, always evaluating on the full test set. We then gradually increase the amount of training data to $2\%$, $4\%$, $8\%$, $10\%$, and $20\%$ of the original to perform an extensive multi-scale evaluation.
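The subsets above could be drawn as follows; whether the paper balances classes when subsampling is an assumption of this sketch (the fixed seed keeps the subsets identical across label representations):

```python
import numpy as np

def subsample_indices(labels, fraction, seed=0):
    """Draw a class-balanced `fraction` of training indices with a fixed
    seed, so every label representation trains on the same subset."""
    rng = np.random.default_rng(seed)
    keep = []
    for c in np.unique(labels):
        cls_idx = np.flatnonzero(labels == c)
        rng.shuffle(cls_idx)
        keep.extend(cls_idx[: int(round(len(cls_idx) * fraction))])
    return np.sort(np.asarray(keep))
```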
+
+# 4 EXPERIMENTAL SETUP
+
Dataset We evaluate our models on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009). We use the same training, validation, and testing data split $(45,000 / 5,000 / 10,000)$ for all of our experiments. We also keep the same random seed for data preprocessing and augmentation. Therefore, we present apples-to-apples comparisons for all label representations.
+
Speech Label Generation We generate the speech labels shown in Figure 1 following standard practice from recent works (Naranjo-Alcazar et al., 2019; Zhang et al., 2019):
+
+- We first generate the English speech audio automatically with a text-to-speech (TTS) system from the text labels in the corresponding dataset. Therefore, all the speech labels are pronounced consistently by the same API with the same parameters for controlled experiments. We leave the exploration of different languages and intonations as future work.
+- We save each audio file in the WAVE format with the 16-bit pulse-code modulation encoding, and trim the silent edges from all audio files.
- Since speech signals vary in length, we preprocess each speech label into a log mel spectrogram so that all labels share the same dimensions. We use a sampling rate of $22,050\mathrm{Hz}$, 64 mel frequency bands, and a hop length of 256. Another advantage of this preprocessing into spectrograms is that we can then utilize convolutional neural networks as the speech decoder to reconstruct our speech labels. We also convert the amplitudes to the decibel scale.
- Finally, the spectrograms are shaped into an $N \times N$ matrix with values ranging from $-80$ to 0, where $N$ is double the dimension of the image input. Our resulting speech spectrogram can be viewed as a 2D image.
+
Given the first step in this procedure, note that for a given class (e.g. "bird") there is only one corresponding spectrogram. Therefore, the improved robustness we observe is not a result of any label augmentation.
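A sketch of the final two preprocessing steps (decibel conversion and shaping); the TTS and mel-filtering steps are omitted, and in practice a library such as librosa would produce `power_spec`:

```python
import numpy as np

def to_db_label(power_spec, n=64, top_db=80.0):
    """Turn a mel power spectrogram into an n-by-n label matrix with
    values in [-top_db, 0], referenced to the loudest bin (so 0 dB is the
    peak), padding or cropping the time axis to width n."""
    db = 10.0 * np.log10(np.maximum(power_spec, 1e-10))
    db = np.clip(db - db.max(), -top_db, 0.0)
    t = db.shape[1]
    if t < n:                                   # pad short clips with silence
        db = np.pad(db, ((0, 0), (0, n - t)), constant_values=-top_db)
    else:                                       # crop long clips
        db = db[:, :n]
    return db
```

The default `n=64` matches the paper's setting for $32 \times 32$ CIFAR inputs, where $N$ is double the image dimension.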
+
+Other Labels For a deeper understanding of which properties of speech labels introduce different feature learning characteristics, we replicate all the experiments using the following high-dimensional label variants. We also show visualizations for all the label variants in Figure 1.
+
- shuffled-speech We cut the speech spectrogram image into 64 slices along the time dimension, shuffle their order, and recombine them into an image. Although the new image cannot be converted back into meaningful audio, it preserves the frequency information of the original audio. More importantly, this variant does not change the entropy or dimensionality of the original speech label.
- constant-matrix In contrast, the constant matrix represents the other extreme of high-dimensional label representation: all elements share the same value, giving zero entropy. The constant-matrix labels have the same dimensions as the speech labels, and the per-class constant values are evenly spaced over a range of 80 (which is also the value range of the speech labels).
+- Gaussian-composition We obtain this representation by plotting a composition of 2D Gaussians directly as images. Each composition is obtained by adding 10 Gaussians with uniformly sampled positions and orientations.
- random/uniform-matrix We also adopt a matrix with the same dimensions as the above high-dimensional labels by randomly sampling from a uniform distribution. We additionally construct a uniform random vector with low dimensionality to inspect whether dimensionality matters. Throughout the paper, we use random matrix and uniform matrix interchangeably to refer to the same representation.
+
- BERT embedding We obtain the BERT embedding from the last hidden state of a pretrained BERT model (Devlin et al., 2018; Wolf et al., 2020). We remove outliers larger than twice the standard deviation and normalize the matrix to the same range as our other high-dimensional labels. The BERT embedding results in a $64 \times 64$ matrix.
- GloVe embedding Similarly, we directly use the pretrained GloVe (Pennington et al., 2014) word vectors. This results in a 50-dimensional label vector.
+
Models We take three widely used convolutional networks as our base models: VGG19 (Simonyan & Zisserman, 2014), ResNet-32, and ResNet-110 (He et al., 2015). We define a categorical classification model as having two parts: an image encoder $I_{e}$ and a category (text) decoder $I_{td}$. Traditionally, $I_{td}$ consists of fully connected layers following some convolutional backbone $I_{e}$. Finally, a softmax function is applied to the output of the last layer of $I_{td}$ to convert the predictions into a categorical probability distribution, and the prediction is the class with the highest probability.
+
Similarly, we define the models for our high-dimensional labels to consist of two parts: an image encoder and a label decoder $I_{ld}$. Throughout the paper, we use the same image encoder but replace the category decoder $I_{td}$ with one dense layer followed by several transpose convolutional layers as $I_{ld}$. All layers inside $I_{ld}$ are equipped with batch normalization (Ioffe & Szegedy, 2015) and leaky ReLU with a 0.01 negative slope.
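A sketch of such a label decoder under assumed channel counts and encoder feature size (the paper's exact architecture parameters are in the Appendix):

```python
import torch
import torch.nn as nn

class LabelDecoder(nn.Module):
    """One dense layer followed by transpose convolutions, each with batch
    normalization and leaky ReLU (0.01 negative slope), upsampling to a
    64x64 label matrix. The channel counts and the 64-d encoder feature
    are illustrative assumptions, not the paper's exact values."""
    def __init__(self, feat_dim=64, out_size=64):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 128 * 4 * 4)
        layers, ch, size = [], 128, 4
        while size < out_size:                 # 4 -> 8 -> 16 -> 32 -> 64
            layers += [nn.ConvTranspose2d(ch, ch // 2, 4, stride=2, padding=1),
                       nn.BatchNorm2d(ch // 2),
                       nn.LeakyReLU(0.01)]
            ch, size = ch // 2, size * 2
        layers.append(nn.Conv2d(ch, 1, kernel_size=3, padding=1))
        self.up = nn.Sequential(*layers)

    def forward(self, z):
        h = self.fc(z).view(z.size(0), 128, 4, 4)
        return self.up(h).squeeze(1)           # (batch, 64, 64)
```

Because each transpose convolution doubles the spatial size, the decoder adds relatively few parameters compared with the shared image encoder.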
+
Overall, the majority of the network's parameters come from the image encoder, which is shared across both categorical labels and other label representations. The decoder for high-dimensional labels increases the number of parameters by a very limited amount (see Appendix for the exact numbers). Thus, our experiments are well controlled with respect to the number of parameters.
+
Learning We train the categorical model to minimize the traditional cross-entropy objective. For the other models with high-dimensional labels, we minimize Equation 6 using the smooth L1 (Huber) loss $\mathcal{L}_s$ shown in Equation 7. Here, $y_{i}$ is the predicted label matrix and $s_i$ is the ground-truth matrix.
+
$$
\min_{\theta_s} \sum_i \mathcal{L}_s\left(y_i, s_i\right) \tag{6}
$$
+
$$
\mathcal{L}_s\left(y_i, s_i\right) = \left\{ \begin{array}{ll} \frac{1}{2}\left(y_i - s_i\right)^2, & \text{if } \left| y_i - s_i \right| \leq 1 \\ \left| y_i - s_i \right| - \frac{1}{2}, & \text{otherwise} \end{array} \right. \tag{7}
$$
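A minimal numpy sketch of this objective; it uses the standard smooth L1 (Huber) loss with $\delta = 1$, whose linear branch subtracts $\tfrac{1}{2}$ so that the two pieces meet continuously at $|y_i - s_i| = 1$:

```python
import numpy as np

def smooth_l1(y, s):
    """Standard smooth L1 (Huber) loss with delta = 1, averaged over all
    elements of the label matrix; the -0.5 in the linear branch makes the
    two pieces meet continuously at |y - s| = 1."""
    d = np.abs(np.asarray(y) - np.asarray(s))
    per_elem = np.where(d <= 1.0, 0.5 * d ** 2, d - 0.5)
    return float(per_elem.mean())
```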
+
+We optimize all the networks using stochastic gradient descent (SGD) with back-propagation, and we select the best model based on the accuracy on the validation set.
+
Evaluation For categorical models, a prediction is considered correct if the class with the highest probability matches the target class. For high-dimensional labels, we provide two types of measurement. Given the model output, we select the ground-truth label that minimizes the distance to the output; we refer to this as the "nearest neighbor" (NN) criterion. The other criterion is to check whether the smooth L1 loss is below a certain threshold. We use Amazon Mechanical Turk (Sorokin & Forsyth, 2008) to validate that generated speech below our threshold is correctly identifiable by humans, and in our experiments we find 3.5 to be a reasonable threshold. The human evaluations on the speech labels demonstrate that our metric captures both the numerical performance and the level of interpretability of the generated speech output. Note that we mainly rely on the NN method for evaluation and only refer to the threshold method to demonstrate the qualitative results from the speech labels.
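The two criteria can be sketched as follows; the threshold of 3.5 is taken from the text, but whether the loss is summed or averaged per element is an assumption of this sketch:

```python
import numpy as np

def evaluate(output, class_labels, threshold=3.5):
    """Return (nearest-neighbor class, passes-threshold flag) for one
    model output, where `class_labels` is the list of ground-truth label
    matrices, one per class."""
    def smooth_l1(a, b):
        d = np.abs(a - b)
        return np.where(d <= 1.0, 0.5 * d ** 2, d - 0.5).mean()
    losses = np.array([smooth_l1(output, c) for c in class_labels])
    nn_class = int(np.argmin(losses))
    return nn_class, bool(losses[nn_class] < threshold)
```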
+
+# 5 RESULTS AND DISCUSSION
+
+# 5.1 DO ALL THE MODELS LEARN TO CLASSIFY IMAGES?
+
We report the classification accuracy for all the labels in Table 1. Speech labels, shuffled-speech labels, and Gaussian-composition labels achieve accuracy comparable to the traditional categorical labels, while the constant-matrix labels perform slightly worse, suggesting that they are harder to train with. We verify this observation by visualizing the training curves for all label representations on CIFAR-100 with the ResNet-110 image encoder in Figure 2. The training curves show that the constant-matrix model takes longer to converge than the others, and converges to a higher loss.
+
| Labels | CIFAR-10 VGG19 | CIFAR-10 ResNet-32 | CIFAR-10 ResNet-110 | CIFAR-100 VGG19 | CIFAR-100 ResNet-32 | CIFAR-100 ResNet-110 |
| --- | --- | --- | --- | --- | --- | --- |
| Category | 92.82 ± 0.08 | 92.34 ± 0.25 | 93.23 ± 0.29 | 70.98 ± 0.10 | 68.05 ± 0.72 | 70.03 ± 0.49 |
| Speech Threshold | 91.97 ± 0.17 | 91.90 ± 0.04 | 92.44 ± 0.11 | 69.13 ± 0.75 | 61.08 ± 0.27 | 67.88 ± 0.16 |
| Speech NN | 92.12 ± 0.18 | 92.34 ± 0.01 | 92.73 ± 0.08 | 70.27 ± 0.61 | 64.74 ± 0.36 | 69.51 ± 0.25 |
| Shuffle Threshold | 92.27 ± 0.16 | 91.49 ± 0.22 | 92.44 ± 0.05 | 67.04 ± 0.41 | 51.95 ± 0.85 | 63.00 ± 1.45 |
| Shuffle NN | 92.64 ± 0.19 | 92.72 ± 0.20 | 92.92 ± 0.24 | 70.88 ± 0.31 | 64.23 ± 0.80 | 69.41 ± 0.66 |
| Composition Threshold | 91.53 ± 0.24 | 91.49 ± 0.22 | 92.36 ± 0.02 | 68.06 ± 0.28 | 60.54 ± 0.54 | 67.17 ± 0.40 |
| Composition NN | 91.94 ± 0.22 | 92.39 ± 0.17 | 93.07 ± 0.03 | 70.20 ± 0.23 | 66.72 ± 0.44 | 70.62 ± 0.20 |
| Constant Threshold | 88.27 ± 0.63 | 88.50 ± 0.25 | 89.27 ± 0.15 | 62.99 ± 0.11 | 55.70 ± 0.36 | 58.78 ± 0.18 |
| Constant NN | 88.33 ± 0.65 | 88.61 ± 0.27 | 89.37 ± 0.13 | 34.29 ± 2.40 | 19.00 ± 1.19 | 24.46 ± 3.70 |

All values are accuracy (%).
+
+Table 1: Classification accuracy on CIFAR-10 and CIFAR-100 for all label representations. Speech labels, shuffled speech labels, and composition of Gaussian labels all achieve comparable accuracies with categorical labels. Constant matrix labels perform slightly worse than the others.
+
+
+
+
Figure 2: Training and validation losses on the CIFAR-10 dataset with the ResNet-110 image encoder for models with the speech / shuffled speech / composition of Gaussians / constant matrix labels (left) and categorical labels (right). All of the models are trained to convergence. The model trained with constant matrix labels converges more slowly than models trained with the other high-dimensional labels.
+
+# 5.2 FEATURE ROBUSTNESS
+
In order to evaluate the robustness of the models, we take all the trained models (Table 1) as the starting points for adversarial attacks with FGSM and the iterative method. We apply an $\epsilon$ from 0 to 0.3 in increments of 0.05 to the normalized images, following the original FGSM setup (Goodfellow et al., 2014), and test the model accuracy at each $\epsilon$ value. We only run attacks on the images that are originally classified correctly, and count the originally misclassified images as incorrect when we compute the accuracy for all $\epsilon$ values. We provide the accuracy computed on the subset of test images that are initially correctly classified in the Appendix; the ranking among the different models remains the same.
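The evaluation protocol above can be sketched as follows; `predict` and `attack` are placeholders for the trained models and the attacks described in Section 3.1:

```python
import numpy as np

def robustness_curve(predict, attack, images, labels, eps_values):
    """Accuracy vs. epsilon: only initially correct images are attacked,
    and initially misclassified images count as errors at every epsilon."""
    correct0 = predict(images) == labels
    curve = []
    for eps in eps_values:
        adv = attack(images[correct0], labels[correct0], eps)
        still_correct = predict(adv) == labels[correct0]
        curve.append(still_correct.sum() / len(labels))
    return curve
```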
+
Figure 3 shows the test accuracy under various attacks. Although the accuracy of all models decreases as the attack becomes stronger (larger $\epsilon$), the models with speech, shuffled-speech, and composition-of-Gaussians labels consistently outperform the models with traditional categorical labels across all three image encoders, all types of attacks, and both CIFAR datasets. Uniform random matrix labels perform similarly well in this setting (see the Appendix for details). Interestingly, models with constant matrix labels perform worse than all other models with high-dimensional labels, suggesting that inherent properties beyond high dimensionality alone enhance model robustness.
+
+
+Figure 3: Test accuracy under adversarial attacks on CIFAR-10 (left four columns) and CIFAR-100 (right four columns). The accuracy evaluated by the threshold and the nearest neighbor is plotted in solid and dotted lines respectively. We show the results of targeted and untargeted FGSM and iterative method on three image encoders with three random seeds. The horizontal axis indicates the strength of different attacks.
+
+
+
+# 5.3 FEATURE EFFECTIVENESS
+
+With the CIFAR-10 dataset, we train models with various label representations using $1\%$ , $2\%$ , $4\%$ , $8\%$ , $10\%$ , and $20\%$ of the training data. For each amount of data, we train with the VGG19, ResNet-32, and ResNet-110 image encoders with five different random seeds. To conduct controlled experiments, we use the exact same training procedures and hyperparameters as the full-data experiments, so that the only difference is the amount of training data. All models are evaluated on the same validation set and test set. Figure 4 reports the test accuracy. Similar to the results from the robustness evaluation, speech labels, shuffled speech labels, composition of Gaussian labels, and uniform random labels achieve higher accuracies than the models with categorical labels for both VGG19 and ResNet-110, and comparable results for ResNet-32. The results demonstrate that the speech models are able to learn more generalizable and effective features with less data. This property is extremely valuable when the amount of training data is limited.
+
Additionally, our results suggest that label smoothing does not provide further benefits when the amount of training data is limited, as discussed above. Lastly, the performance of the models trained on constant matrix labels is consistent with the robustness experiment: they perform worse than all other high-dimensional labels. We provide further analysis in the next section.
+
+
Figure 4: Test accuracy when limited training data is available. Accuracy is computed using the nearest-neighbor method.
+
+# 5.4 WHAT IS SPECIAL ABOUT AUDIO LABELS?
+
Our experiments on robustness and data efficiency suggest that high-dimensional labels have some interesting inherent property, beyond high dimensionality alone, that encourages the learning of more robust and effective features. We hypothesize that high-dimensional label representations with high entropy provide stronger learning signals, which give rise to better feature representations.
+
+To verify our hypothesis, we measure several standard statistics over various label representations, shown in Table 2. Specifically, we measure the normalized L1 and L2 distance between pairs of labels for each representation. We further measure the entropy for each individual label.
+
| Label Types | CIFAR-10 Entropy | CIFAR-10 L1 Distance | CIFAR-10 L2 Distance | CIFAR-100 Entropy | CIFAR-100 L1 Distance | CIFAR-100 L2 Distance |
| --- | --- | --- | --- | --- | --- | --- |
| Category | 0.47 ± 0.00 | 2.00 ± 0.00 | 1.41 ± 0.00 | 0.08 ± 0.00 | 2.00 ± 0.00 | 1.41 ± 0.00 |
| Constant | 0.00 ± 0.00 | 26.07 ± 15.72 | 26.07 ± 15.72 | 0.00 ± 0.00 | 21.76 ± 15.16 | 21.76 ± 15.16 |
| Speech | 11.35 ± 0.35 | 23.80 ± 5.16 | 12.95 ± 2.70 | 11.37 ± 0.32 | 21.45 ± 5.53 | 13.15 ± 2.19 |
| Shuffle | 11.35 ± 0.35 | 35.29 ± 2.18 | 18.87 ± 1.53 | 11.37 ± 0.32 | 34.40 ± 2.59 | 17.81 ± 1.24 |
| Composite | 12.00 ± 0.00 | 24.13 ± 3.36 | 19.41 ± 1.47 | 12.00 ± 0.00 | 25.75 ± 4.72 | 20.60 ± 2.96 |
| BERT | 11.17 ± 0.00 | 5.71 ± 0.94 | 2.06 ± 0.24 | 11.17 ± 0.00 | 7.89 ± 2.84 | 2.63 ± 0.70 |
| GloVe | 5.64 ± 0.00 | 7.35 ± 1.69 | 1.30 ± 0.31 | 5.64 ± 0.00 | 5.62 ± 0.90 | 0.99 ± 0.16 |
+
+Table 2: Different basic statistics of all types of label representations. Labels that encourage more robust and effective feature learning also have higher entropy than other label forms.
+
Interestingly, although the Manhattan (L1) and Euclidean (L2) distances between pairs of labels do not show any particularly useful patterns, the average entropy of the speech labels, the shuffled speech labels, and the composition-of-Gaussians labels is higher than that of the constant matrix and the original categorical labels. The entropy ranking between these two groups exactly matches the performance ranking in our robustness and data efficiency experiments shown in Figure 3 and Figure 4. This correlation suggests that high-dimensional labels with high entropy may have a positive impact on robustness and data-efficient training.
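One plausible way to compute the per-label entropies of Table 2 (the exact binning used by the paper is an assumption of this sketch):

```python
import numpy as np

def label_entropy(label, bins=256):
    """Shannon entropy (bits) of a label matrix, estimated from a
    histogram of its element values; a constant matrix gives 0 bits."""
    hist, _ = np.histogram(np.asarray(label), bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```

Under this estimator, a constant matrix has zero entropy while a uniform-random matrix approaches the maximum set by the number of bins and elements, consistent with the ordering in Table 2.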
+
We further validate that the benefits come from both high dimensionality and high entropy by training a model with low-dimensional, high-entropy labels, generated by sampling from a uniform distribution following the same procedure as the uniform-random matrix labels described previously. While models trained with these labels match the high-dimensional, high-entropy models in adversarial robustness (see Appendix), the high-dimensional, high-entropy models outperform them in data efficiency, as shown by the "Low-dim" curve in Figure 4. We find a similar result for categorical models trained with label smoothing, which has previously been shown to improve adversarial robustness (Goibert & Dohmatob, 2020). In fact, high dimensionality is a prerequisite for high entropy, because the maximum entropy is limited by the dimensionality of the label.
+
Note that the model trained with label smoothing uses the standard cross-entropy loss, whereas the low-dimensional, high-entropy model is trained with the Huber loss. We therefore argue that the loss function is not responsible for the improved performance of models trained with high-dimensional, high-entropy labels.
+
+# 5.5 VISUALIZATIONS
+
Throughout the training process, we visualize the learned features immediately after the image encoder of ResNet-110 with t-SNE (van der Maaten & Hinton, 2008), for both speech and categorical models on the CIFAR-10 test set. The results are shown in Figure 5. We observe that the embedding of the learned features evolves as training progresses. Compared with the feature embedding of the categorical model, the embedding of the speech model forms clusters at earlier stages of training and yields better-separated clusters toward convergence. We provide further visualizations and Grad-CAM interpretations in the Appendix.
+
+
+Figure 5: T-SNE progression for speech (top row) and categorical (bottom row) models with ResNet-110 image encoder. From left to right, the plot shows $10\%$ , $30\%$ , $50\%$ , $70\%$ , and $100\%$ progress in training. The speech model develops distinctive clusters at an earlier stage and has better separated clusters overall.
+
+# 6 CONCLUSION
+
We introduce a novel paradigm for the traditional image classification task by replacing categorical labels with high-dimensional, high-entropy matrices such as speech spectrograms. Models trained with our speech labels achieve comparable accuracy on the original task; moreover, they achieve superior performance under various adversarial attacks and learn in a more data-efficient manner from only a small percentage of the training data. We further study the inherent properties of high-dimensional label representations that potentially introduce these advantages. Through a large-scale systematic study of various label representations, we find that high-entropy, high-dimensional labels generally lead to more robust and data-efficient training. Our work provides novel insights into the role of label representation in training deep neural networks.
+
+# ACKNOWLEDGMENTS
+
+This research is supported by NSF NRI 1925157 and DARPA MTO grant L2M Program HR0011-18-2-0020.
+
+# REFERENCES
+
+Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. Lecture Notes in Computer Science, pp. 387-402, 2013. ISSN 1611-3349. doi: 10.1007/978-3-642-40994-3_25. URL http://dx.doi.org/10.1007/978-3-642-40994-3_25.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks, 2017.
+Morgane Goibert and Elvis Dohmatob. Adversarial robustness via label-smoothing. 2020.
+Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and Harnessing Adversarial Examples. arXiv e-prints, art. arXiv:1412.6572, Dec 2014.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network, 2015.
+Hossein Hosseini, Yize Chen, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. Blocking transferability of adversarial examples in black-box learning systems, 2017.
+Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks, 2016.
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
+
+Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.
+Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world, 2016.
+Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig, and Vittorio Ferrari. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. arXiv:1811.00982, 2018.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pp. 2278-2324, 1998.
+Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. Meta-sgd: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835, 2017.
+Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. Defense against adversarial attacks using high-level representation guided denoiser, 2017.
+Jiajun Lu, Theerasit Issaranon, and David Forsyth. Safetynet: Detecting and rejecting adversarial examples robustly, 2017.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks, 2017.
+Dongyu Meng and Hao Chen. Magnet: a two-pronged defense against adversarial examples, 2017.
+Jan Hendrik Metzen, Tim Genewein, Volker Fischer, and Bastian Bischoff. On detecting adversarial perturbations, 2017.
+Javier Naranjo-Alcazar, Sergi Perez-Castanos, Pedro Zuccarello, and Maximo Cobos. Dcase 2019: Cnn depth analysis with different channel inputs for acoustic scene classification, 2019.
+Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. arXiv e-prints, art. arXiv:1412.1897, Dec 2014.
+Nicolas Papernot and Patrick McDaniel. Extending defensive distillation, 2017.
+Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks, 2015.
+Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, Apr 2017. doi: 10.1145/3052973.3053009. URL http://dx.doi.org/10.1145/3052973.3053009.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024-8035, 2019.
+Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543, 2014. URL http://www.aclweb.org/anthology/D14-1162.
+
+Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-gan: Protecting classifiers against adversarial attacks using generative models, 2018.
+Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pp. 1842-1850. JMLR.org, 2016a.
+Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks, 2016b.
+Uri Shaham, Yutaro Yamada, and Sahand Negahban. Understanding adversarial training: Increasing local stability of supervised models through robust optimization. Neurocomputing, 307:195 - 204, 2018. ISSN 0925-2312. doi: https://doi.org/10.1016/j.neucom.2018.04.027. URL http://www.sciencedirect.com/science/article/pii/S0925231218304557.
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition, 2014.
+Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples, 2017.
+Alexander Sorokin and David Forsyth. Utility data annotation with amazon mechanical turk. In 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 1-8. IEEE, 2008.
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv e-prints, art. arXiv:1312.6199, Dec 2013.
+Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818-2826, 2016.
+Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
+Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008. URL http://www.jmlr.org/papers/v9/vandermaaten08a.html.
+Joaquin Vanschoren. Meta-learning: A survey. arXiv preprint arXiv:1810.03548, 2018.
+Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial intelligence review, 18(2):77-95, 2002.
+Yaqing Wang, Quanming Yao, J Kwok, and LM Ni. Few-shot learning: A survey. arXiv preprint arXiv:1904.05046, 2019.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38-45, Online, October 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6.
+
+Jing-Xuan Zhang, Zhen-Hua Ling, Li-Juan Liu, Yuan Jiang, and Li-Rong Dai. Sequence-to-sequence acoustic modeling for voice conversion. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27(3):631-644, Mar 2019. ISSN 2329-9304. doi: 10.1109/taslp.2019.2892235. URL http://dx.doi.org/10.1109/TASLP.2019.2892235.
+
+# A APPENDIX
+
+# A.1 THRESHOLD VALIDATION
+
We deployed a large-scale study on Amazon Mechanical Turk to validate our choice of 3.5 as the classification threshold for the speech model.
+
In particular, we asked workers to listen to the outputs of the speech model and to choose, from the set of classes (with a "None" class for unintelligible output), the one which best fits the output. We assigned 3 workers to evaluate each output of the VGG19 speech model on the CIFAR-10 test set. We chose strict selection criteria to ensure the maximum quality of responses: only workers with a $\geq 99\%$ approval rating and at least 10,000 approved tasks were selected.
+
+To measure agreement between humans and our speech model, for each sample in the test set we determine the decision made by the model using our pre-selected threshold (loss $< 3.5$ is correct, while loss $\geq 3.5$ is incorrect). Then we compare these decisions to those of the human workers. When we count each of the three workers independently, we find that humans agree with the model $99.4\%$ of the time. When we take a majority vote (2/3 humans agreeing) we find that humans agree with the model $99.8\%$ of the time. We conclude that 3.5 is a reasonable threshold for evaluating the model.
+
+# A.2 SUBSET ROBUSTNESS
+
+Additional robustness evaluation is performed on the subset of the test images that are initially correctly classified by the models without any adversarial attacks (Figure 6). All test accuracies start at $100\%$ and decrease as the attacks grow stronger; attack strength is given by the value of epsilon.
+
+
+Figure 6: Test accuracy under adversarial attacks on CIFAR-10 (left four columns) and CIFAR-100 (right four columns) for the initially correct subset of the test images. The accuracy evaluated by the threshold and the nearest neighbor is plotted in solid and dotted lines respectively. The general trend from the subset is similar to that from the full test set.
+
+
+
+# A.3 ADDITIONAL RESULTS ON ROBUSTNESS EVALUATION
+
+Here we include the full results of the robustness evaluation on the CIFAR-10 dataset in Figure 7.
+
+
+Figure 7: Full results of the robustness evaluation on CIFAR-10
+
+# A.4 HYPERPARAMETERS
+
+# A.4.1 IMPLEMENTATION DETAILS
+
+We train the categorical models for 200 epochs with a starting learning rate of 0.1, decaying it by a factor of 0.1 at epochs 100 and 150. The high-dimensional models are trained for 600 epochs with the same initial learning rate, and we drop the learning rate by a factor of 0.1 at epochs 300 and 450. All models are trained with a batch size of 128 using the SGD optimizer with momentum 0.9 and weight decay 0.0001. One exception: when training categorical models with the VGG19 image encoder, we use a larger weight decay of 0.0005. We implement our models in PyTorch (Paszke et al., 2019). All experiments are performed on a single GeForce RTX 2080 Ti GPU. The limited-data experiments use the same settings as the full-data experiments.
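The step-decay schedule above can be sketched as a small function (not the authors' code); in PyTorch it corresponds to `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)` for the categorical models.

```python
# Minimal sketch of the step-decay learning-rate schedule described
# above: start at 0.1 and multiply by 0.1 at each milestone epoch.
def learning_rate(epoch, base_lr=0.1, milestones=(100, 150), gamma=0.1):
    """Return the learning rate in effect at a given (0-indexed) epoch."""
    passed = sum(epoch >= m for m in milestones)
    return base_lr * gamma ** passed
```

For the high-dimensional models the same function applies with `milestones=(300, 450)`.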
+
+# A.4.2 ARCHITECTURE PARAMETERS
+
+We provide the parameter counts for all models in Table 3 and Table 4. The majority of the parameters come from the image encoders. High-dimensional models have slightly more parameters than categorical models due to the high-dimensional label decoder (Table 5).
+
+| Model | VGG19 | ResNet32 | ResNet110 |
+| --- | --- | --- | --- |
+| Category | $2 \times 10^7$ | $4.67 \times 10^5$ | $1.73 \times 10^6$ |
+| High-dimensional | $2.01 \times 10^7$ | $5.80 \times 10^5$ | $1.84 \times 10^6$ |
+
+Table 3: Total number of parameters of the category and high-dimensional models for CIFAR-10 dataset
+
+# A.5 DATASET
+
+To demonstrate the effectiveness of our proposed method, we evaluate our models on the CIFAR-10 and CIFAR-100 datasets Krizhevsky (2009). For each dataset, we train different models on the same training set and evaluate the models on the same validation set using the same random seeds for fair comparisons. To preprocess the training images, we randomly crop them with a padding size 4 and perform random horizontal flips. All CIFAR images are normalized with mean (0.4914, 0.4822, 0.4465) and standard deviation (0.2023, 0.1994, 0.2010) of the training set.
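The preprocessing steps above can be sketched as follows. This is an illustrative NumPy version, not the authors' pipeline (which would typically use torchvision transforms); only the normalization statistics are taken from the text.

```python
import numpy as np

# Per-channel normalization with the CIFAR training-set statistics
# quoted in the text, plus a padded random crop; the horizontal flip
# is stated in the paper but not reproduced here.
MEAN = np.array([0.4914, 0.4822, 0.4465]).reshape(3, 1, 1)
STD = np.array([0.2023, 0.1994, 0.2010]).reshape(3, 1, 1)

def normalize(img):
    """img: float array of shape (3, 32, 32) with values in [0, 1]."""
    return (img - MEAN) / STD

def random_crop(img, padding=4, rng=None):
    """Zero-pad each spatial side by `padding`, then crop back to 32x32."""
    rng = rng or np.random.default_rng()
    padded = np.pad(img, ((0, 0), (padding, padding), (padding, padding)))
    top, left = rng.integers(0, 2 * padding + 1, size=2)
    return padded[:, top:top + 32, left:left + 32]
```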
+
+| Model | VGG19 | ResNet32 | ResNet110 |
+| --- | --- | --- | --- |
+| Category | $2.01 \times 10^7$ | $4.73 \times 10^5$ | $1.74 \times 10^6$ |
+| High-dimensional | $2.02 \times 10^7$ | $5.80 \times 10^5$ | $1.84 \times 10^6$ |
+
+Table 4: Total number of parameters of the category and high-dimensional models for CIFAR-100 dataset
+
+| Layer | Input | Output | Kernel | Stride | Padding |
+| --- | --- | --- | --- | --- | --- |
+| Dense | $I_e$ out | 64 | - | - | - |
+| ConvTranspose 2D | 64 × 1 × 1 | 64 × 4 × 4 | 4 × 4 | 1 × 1 | 0 |
+| ConvTranspose 2D | 64 × 4 × 4 | 32 × 8 × 8 | 4 × 4 | 2 × 2 | 1 × 1 |
+| ConvTranspose 2D | 32 × 8 × 8 | 16 × 16 × 16 | 4 × 4 | 2 × 2 | 1 × 1 |
+| ConvTranspose 2D | 16 × 16 × 16 | 8 × 32 × 32 | 4 × 4 | 2 × 2 | 1 × 1 |
+| ConvTranspose 2D | 8 × 32 × 32 | 1 × 64 × 64 | 4 × 4 | 2 × 2 | 1 × 1 |
+
+Table 5: The architecture of the high-dimensional label decoder $I_{ld}$ . The input dimension of the first dense layer is the dimension of the output of the image encoder $I_e$ . The output of the last ConvTranspose2d layer is the target label.
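The layer shapes in Table 5 can be checked with the standard ConvTranspose2d output-size formula, out = (in − 1) · stride − 2 · padding + kernel. The following pure-Python check (not the model code) walks the five transposed-convolution layers:

```python
# Verify that the spatial sizes in Table 5 are mutually consistent
# under the standard ConvTranspose2d output-size formula.
def conv_transpose_out(size, kernel, stride, padding):
    return (size - 1) * stride - 2 * padding + kernel

# (kernel, stride, padding) for the five ConvTranspose 2D layers of Table 5.
layers = [(4, 1, 0), (4, 2, 1), (4, 2, 1), (4, 2, 1), (4, 2, 1)]
size = 1  # spatial size after the dense output is reshaped to 64 x 1 x 1
sizes = []
for k, s, p in layers:
    size = conv_transpose_out(size, k, s, p)
    sizes.append(size)
# sizes matches the output column of Table 5: 4, 8, 16, 32, 64
```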
+
+CIFAR-10 consists of 60,000 images of size $32 \times 32$ uniformly distributed across 10 classes. The dataset comes with 50,000 training images and 10,000 test images. We use a 45,000/5,000 training/validation split.
+
+CIFAR-100 also comprises 60,000 images of size $32 \times 32$ , but it has 100 classes each containing 600 images. The dataset is split into 50,000 training images and 10,000 test images. We randomly select 5,000 images from the training images to form the validation set.
+
+# A.6 VISUALIZATIONS
+
+In addition to the progressive t-SNE plots presented in the main text, we plot the embeddings of all three types of image encoders for models trained on speech labels, categorical labels, and constant matrix labels in Figure 8. We only show the results from the models with the highest accuracy. As before, we observe that speech models yield better-separated clusters. The feature embedding of the constant model is worse than that of the speech model, further confirming that the speech representation contributes beyond its dimensionality alone.
+
+Grad-CAM We visualize activations from beginning, intermediate, and final layers in the image encoders for both speech and categorical models in Figure 9. We see that the input activations for the speech model conform more tightly than those of the categorical model to the central object of each image at all three stages. This may suggest that the speech model learns more discriminative features than the categorical model. These features are also visually more easily understandable to humans.
+
+
+Figure 8: T-SNE of the best uniform (left), speech (middle), and categorical (right) models trained with the same random seed. The speech model shows the best-separated clusters of the three.
+
+
+(a) VGG19
+
+
+Figure 9: Grad-CAM visualization of learned features. Each triplet contains (from left to right) an unaltered image, a categorical model visualization, and a speech model visualization. From top to bottom, the activations are taken from a beginning, an intermediate, and a final layer of the respective image encoders. Speech models learn more discriminative features than categorical models.
+
+
+
+
+(b) ResNet32
+
+
+
+
+
+
+(c) ResNet110
+
+
+
+
\ No newline at end of file
diff --git a/beyondcategoricallabelrepresentationsforimageclassification/images.zip b/beyondcategoricallabelrepresentationsforimageclassification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..3f282ae4a205576b306cef5ab589c494dbf8243e
--- /dev/null
+++ b/beyondcategoricallabelrepresentationsforimageclassification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab248dbddb4a66e70a41648a99c2f1fa7b4ecb025159977518994d9cc4ec011c
+size 744696
diff --git a/beyondcategoricallabelrepresentationsforimageclassification/layout.json b/beyondcategoricallabelrepresentationsforimageclassification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ee4cd12ca86a6c42703e57e5fc6a16cad1a453ef
--- /dev/null
+++ b/beyondcategoricallabelrepresentationsforimageclassification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:adc08456bae0b75142bc3ac3b6e2e04d9c66605fc8bda7e6b55f4df07ef60540
+size 414331
diff --git a/bidirectionalvariationalinferencefornonautoregressivetexttospeech/bc4486a0-6014-4a3b-b8f2-b91ad6bc4ef4_content_list.json b/bidirectionalvariationalinferencefornonautoregressivetexttospeech/bc4486a0-6014-4a3b-b8f2-b91ad6bc4ef4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..45b9f5f537aa1420ece85dcf97787000eed075af
--- /dev/null
+++ b/bidirectionalvariationalinferencefornonautoregressivetexttospeech/bc4486a0-6014-4a3b-b8f2-b91ad6bc4ef4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64e1d11802797becec995376c0c48a4140bc3882b91a363f5cb43156bbacd45b
+size 105617
diff --git a/bidirectionalvariationalinferencefornonautoregressivetexttospeech/bc4486a0-6014-4a3b-b8f2-b91ad6bc4ef4_model.json b/bidirectionalvariationalinferencefornonautoregressivetexttospeech/bc4486a0-6014-4a3b-b8f2-b91ad6bc4ef4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0f95801415262c0805b16f7d92ad89a9f19c4c6c
--- /dev/null
+++ b/bidirectionalvariationalinferencefornonautoregressivetexttospeech/bc4486a0-6014-4a3b-b8f2-b91ad6bc4ef4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c7763640f4b9709e02c7bf6219fb905c8e35ae8d47e225128ea83212f91896e
+size 122719
diff --git a/bidirectionalvariationalinferencefornonautoregressivetexttospeech/bc4486a0-6014-4a3b-b8f2-b91ad6bc4ef4_origin.pdf b/bidirectionalvariationalinferencefornonautoregressivetexttospeech/bc4486a0-6014-4a3b-b8f2-b91ad6bc4ef4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..60525155a8692034bc31f819eba3fb6a8c914adb
--- /dev/null
+++ b/bidirectionalvariationalinferencefornonautoregressivetexttospeech/bc4486a0-6014-4a3b-b8f2-b91ad6bc4ef4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d12d814f2f33add970eb26a3b3b98a2764581eac74e4be2d1ec422c33b2a001
+size 2312542
diff --git a/bidirectionalvariationalinferencefornonautoregressivetexttospeech/full.md b/bidirectionalvariationalinferencefornonautoregressivetexttospeech/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..962380400e6704abc76edab9a26e73ff19015762
--- /dev/null
+++ b/bidirectionalvariationalinferencefornonautoregressivetexttospeech/full.md
@@ -0,0 +1,494 @@
+# BIDIRECTIONAL VARIATIONAL INFERENCE FOR NON-AUTOREGRESSIVE TEXT-TO-SPEECH
+
+Yoonhyung Lee, Joongbo Shin, Kyomin Jung
+
+Department of Electrical and Computer Engineering
+
+Seoul National University
+
+Seoul, South Korea
+
+{cpi1234,jbshin,kjung}@snu.ac.kr
+
+# ABSTRACT
+
+Although early text-to-speech (TTS) models such as Tacotron 2 have succeeded in generating human-like speech, their autoregressive architectures have several limitations: (1) They require a lot of time to generate a mel-spectrogram consisting of hundreds of steps. (2) The autoregressive speech generation lacks robustness due to its error propagation property. In this paper, we propose a novel non-autoregressive TTS model called BVAE-TTS, which eliminates the architectural limitations and generates a mel-spectrogram in parallel. BVAE-TTS adopts a bidirectional-inference variational autoencoder (BVAE) that learns hierarchical latent representations using both bottom-up and top-down paths to increase its expressiveness. To apply BVAE to TTS, we design our model to utilize text information via an attention mechanism. By using attention maps that BVAE-TTS generates, we train a duration predictor so that the model uses the predicted duration of each phoneme at inference. In experiments conducted on LJSpeech dataset, we show that our model generates a mel-spectrogram 27 times faster than Tacotron 2 with similar speech quality. Furthermore, our BVAE-TTS outperforms Glow-TTS, which is one of the state-of-the-art non-autoregressive TTS models, in terms of both speech quality and inference speed while having $58\%$ fewer parameters.
+
+# 1 INTRODUCTION
+
+End-to-end text-to-speech (TTS) systems have recently attracted much attention, as neural TTS models began to generate high-quality speech that is very similar to the human voice (Sotelo et al., 2017; Wang et al., 2017; Shen et al., 2018; Ping et al., 2018; Li et al., 2019). Typically, those TTS systems first generate a mel-spectrogram from a text using a sequence-to-sequence (seq2seq) model (Sutskever et al., 2014) and then synthesize speech from the mel-spectrogram using a neural vocoder like WaveGlow (Prenger et al., 2019).
+
+Early neural TTS systems have used an autoregressive (AR) architecture to generate a mel-spectrogram mainly because of its two benefits. First, the AR generation eases the difficulty of modeling mel-spectrogram distribution by factorizing the distribution into the product of homogeneous conditional factors in sequential order. Second, the seq2seq based AR architecture helps the model predict the length of the target mel-spectrogram from an input text, which is a non-trivial task because there are no pre-defined rules between the lengths of text and mel-spectrogram.
+
+Although they facilitate high-quality speech synthesis, AR TTS models have several shortcomings. First, they cannot generate a mel-spectrogram in parallel, so the inference time increases linearly with the number of mel-spectrogram time steps. Second, AR generation suffers from accumulated prediction error, making the models vulnerable to out-of-domain data, e.g., very long input texts or text patterns that do not appear in the training dataset.
+
+In this work, we present a novel non-AR TTS model called BVAE-TTS that achieves fast and robust high-quality speech synthesis. BVAE-TTS generates a mel-spectrogram in parallel by adopting a bidirectional-inference variational autoencoder (BVAE) (Sønderby et al., 2016; Kingma et al., 2016; Maaløe et al., 2019; Vahdat & Kautz, 2020) consisting of 1-D convolutional networks. For
+
+the high-quality speech synthesis, BVAE-TTS learns mel-spectrogram distribution jointly with hierarchical latent variables in a bidirectional manner, where BVAE uses both bottom-up and top-down paths. Furthermore, to match the length of the target mel-spectrogram at inference, BVAE-TTS has an additional module called duration predictor, which predicts how many steps of a mel-spectrogram will be generated from each phoneme. To train the duration predictor, we employ an attention mechanism in BVAE-TTS to make BVAE-TTS utilize the text while learning attention maps between the text and the mel-spectrogram, where the mapping information is used for duration labels.
+
+Our BVAE-TTS has advantages over the previous non-AR TTS models as follows:
+
+- It has a simpler training process compared to the previous non-AR TTS models such as ParaNet (Peng et al., 2020) and FastSpeech (Ren et al., 2019). In those models, well-trained AR teacher models are needed for duration labels or knowledge distillation. Although FastSpeech 2 (Ren et al., 2020) removes the dependency on the teacher model, it still requires additional duration labels and acoustic features prepared in advance using other speech analysis methods. In contrast, BVAE-TTS requires only the text-speech paired dataset, without any help from a teacher model.
+- It is more flexible in designing its architecture compared to the previous flow-based non-AR TTS models such as Flow-TTS (Miao et al., 2020) and Glow-TTS (Kim et al., 2020). The flow-based models have architectural constraints caused by their bijective transformation property, which leads to deeper models with a lot of parameters. On the contrary, the VAE-based model is free from the architectural constraints.
+
+In experiments, we compare our BVAE-TTS with Tacotron 2 and Glow-TTS in terms of speech quality, inference speed, and model size. The results show that our model achieves a 27-fold speedup over Tacotron 2 in generating a mel-spectrogram with similar speech quality. Furthermore, BVAE-TTS outperforms the state-of-the-art non-AR TTS model Glow-TTS in both speech quality and inference time, while having $58\%$ fewer model parameters. Additionally, we analyze how the latent representations are learned by BVAE-TTS. In this analysis, we confirm that the bottom part of BVAE-TTS captures the variation of mel-spectrograms that can occur from a text.
+
+Related work: Several TTS systems have utilized a VAE to relax the one-to-many mapping nature of TTS and thus improve the naturalness and controllability of the systems. For example, (Hsu et al., 2018) and (Zhang et al., 2019) incorporate a VAE into Tacotron 2 to learn the style or prosody of the input speech. However, previous uses of VAE have been limited to auxiliary networks attached to a main AR TTS model. To the best of our knowledge, our BVAE-TTS is the first parallel TTS model that directly applies the VAE architecture to the task of TTS.
+
+More discussions about other related works on the previous non-AR TTS models are in Section 5.
+
+# 2 BACKGROUND
+
+# 2.1 BIDIRECTIONAL-INFERENCE VARIATIONAL AUTOENCODER
+
+Variational autoencoder (VAE) is a neural network generative model $p_{\theta}(\mathbf{x}, \mathbf{z})$ parameterized by $\theta$, where $\mathbf{x}$ is an observed data point and $\mathbf{z}$ is a latent vector. In practice, since we only have a dataset $X = \{\mathbf{x}_1, \dots, \mathbf{x}_N\}$ without knowledge about $\mathbf{z}$, $\theta$ is typically optimized by maximizing the likelihood:
+
+$$
+\max_{\theta} \log p_{\theta}(X) = \max_{\theta} \sum_{i=1}^{N} \log \int_{\mathbf{z}} p_{\theta}(\mathbf{x}_i, \mathbf{z}) \, d\mathbf{z}. \tag{1}
+$$
+
+However, the integral over $\mathbf{z}$ is intractable to compute. Therefore, the VAE introduces an approximate posterior $q_{\phi}(\mathbf{z}|\mathbf{x})$ and does variational inference while maximizing the evidence lower bound (ELBO):
+
+$$
+\log p_{\theta}(\mathbf{x}) \geq \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right] - D_{KL}\left[q_{\phi}(\mathbf{z}|\mathbf{x}) \,\|\, p(\mathbf{z})\right]. \tag{2}
+$$
+
+
+Figure 1: A Schematic diagram of the bidirectional-inference variational autoencoder. Samplings of latent variables occur in the layers expressed as circles.
+
+In practice, for easy sampling and easy computation of the KL-divergence, each of the prior $p(\mathbf{z})$ and the approximate posterior $q_{\phi}(\mathbf{z}|\mathbf{x})$ is usually modeled as a multivariate normal distribution with a diagonal covariance matrix.
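With both distributions diagonal-covariance normals, the KL term of equation (2) has a closed form. The following is a generic sketch of that computation (scalar lists for clarity, not code from the paper):

```python
import math

# Closed-form KL divergence between two diagonal-covariance normal
# distributions, the quantity appearing in the ELBO of equation (2).
def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL[N(mu_q, diag(var_q)) || N(mu_p, diag(var_p))].
    Arguments are per-dimension lists of floats."""
    kl = 0.0
    for mq, vq, mp, vp in zip(mu_q, var_q, mu_p, var_p):
        kl += 0.5 * (math.log(vp / vq) + (vq + (mq - mp) ** 2) / vp - 1.0)
    return kl
```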
+
+For a more expressive model, the latent vector $\mathbf{z}$ can be factorized into $\{\mathbf{z}_1,\dots,\mathbf{z}_K\}$ with hierarchical dependency, where $K$ is the number of hierarchies. Then, each of the prior and the approximate posterior is represented as $p_{\theta}(\mathbf{z}) = \Pi_k p_{\theta}(\mathbf{z}_k|\mathbf{z}_{< k})$ and $q_{\phi}(\mathbf{z}|\mathbf{x}) = \Pi_k q_{\phi}(\mathbf{z}_k|\mathbf{z}_{< k},\mathbf{x})$ , respectively. In (Sønderby et al., 2016; Kingma et al., 2016; Vahdat & Kautz, 2020), the variational inference is designed in a bidirectional way based on bottom-up path and top-down path, while letting the inference network (left) and generative network (right) share their parameters as shown in Figure 1. First, along the bottom-up path, BVAE extracts hierarchical features from $\mathbf{x}$ and stores them inside of it. Then, along the top-down path, BVAE does the variational inference and reconstructs the input data considering the stored hierarchical features together. This architecture helps the model effectively learn the hierarchies between the latent variables, and equation (2) is changed as follows:
+
+$$
+\log p_{\theta}(\mathbf{x}) \geq \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] - \sum_{k=1}^{K} \mathbb{E}_{q_{\phi}(\mathbf{z}_{<k}|\mathbf{x})}\left[D_{KL}\left[q_{\phi}(\mathbf{z}_k|\mathbf{x},\mathbf{z}_{<k}) \,\|\, p_{\theta}(\mathbf{z}_k|\mathbf{z}_{<k})\right]\right]. \tag{3}
+$$
+
+# 2.2 DURATION PREDICTOR IN NON-AUTOREGRESSIVE TEXT-TO-SPEECH
+
+To achieve a non-autoregressive (non-AR) text-to-speech (TTS) model, the model needs to predict the length of the target mel-spectrogram from a text, because there is no way to access the length of the target mel-spectrogram at inference. However, this is a challenging task considering that there are no pre-defined rules between the lengths of text and mel-spectrogram. Recently, several non-AR TTS models (Ren et al., 2019; Zeng et al., 2020; Kim et al., 2020) resolved the issue by introducing a module called the duration predictor, which predicts how many mel-spectrogram steps will be generated from each phoneme.
+
+First, using the duration predictor, the non-AR TTS models compute durations $\hat{D} = \{\hat{d}_1,\dots,\hat{d}_S\}$ corresponding to each phoneme based on phoneme representations $H = \{\mathbf{h}_1,\dots,\mathbf{h}_S\}$ , where each $\hat{d}_i$ is a positive integer that is rounded off from a positive real number, and $S$ is the number of phonemes. Then, $H$ is expanded to the length of the target mel-spectrogram $T$ , by repeating each $\mathbf{h}_i$ as many steps as $\hat{d}_i$ . Finally, the non-AR TTS models generate a mel-spectrogram in parallel by decoding the expanded phoneme representations.
+
+In practice, since there are no ground-truth duration labels for training the duration predictor, the non-AR models obtain the duration labels using various methods, and we adopt the method used in FastSpeech (Ren et al., 2019). From well-aligned attention maps, the duration labels are obtained according to $d_{i} = \sum_{t=1}^{T} [\arg\max_{s} a_{s,t} = i]$, where $a_{s,t}$ represents the attention weight given from the $t$-th mel-spectrogram step to the $s$-th phoneme.
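The duration-label extraction above, together with the length-regulation step from the previous paragraph, can be sketched as follows. This is an illustrative NumPy version with made-up attention values, not the model code:

```python
import numpy as np

# FastSpeech-style duration extraction plus length regulation.
def duration_labels(attn, num_phonemes):
    """attn: (S, T) attention map. d_i counts the mel steps whose
    argmax over phonemes equals phoneme i."""
    winners = attn.argmax(axis=0)  # (T,) best phoneme per mel step
    return np.array([(winners == i).sum() for i in range(num_phonemes)])

def expand(H, durations):
    """Repeat each phoneme representation h_i for d_i mel steps."""
    return np.repeat(H, durations, axis=0)

# Toy example: 2 phonemes, 4 mel steps; phoneme 0 wins steps 0-1.
attn = np.array([[0.9, 0.8, 0.2, 0.1],
                 [0.1, 0.2, 0.8, 0.9]])
d = duration_labels(attn, 2)
H = np.array([[1.0], [2.0]])
H_exp = expand(H, d)
```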
+
+# 3 METHODOLOGY
+
+In this section, we explain a novel non-autoregressive (non-AR) TTS model, BVAE-TTS, which is based on the bidirectional-inference variational autoencoder (BVAE). As shown in Figure 2-(a), during training, BVAE-TTS is given a mel-spectrogram with a phoneme sequence, and it is trained to reconstruct the mel-spectrogram while maximizing the ELBO. Here, the duration predictor is jointly trained using the attention maps that BVAE-TTS generates during training. As shown in Figure 2-(c), at inference BVAE-TTS generates a mel-spectrogram from a phoneme sequence using the duration predictor as described in Section 2.2, while using its top-down path for decoding the expanded phoneme representations. In Appendix A.1, pseudo-codes for the training and inference of BVAE-TTS are contained for detailed descriptions. The other aspects of BVAE-TTS are described in the following sub-sections in more detail.
+
+# 3.1 USING BVAE FOR TEXT-TO-SPEECH
+
+Unlike the previous BVAE models (Sønderby et al., 2016; Kingma et al., 2016; Vahdat & Kautz, 2020), which are trained to generate natural images, our model must learn to generate a mel-spectrogram
+
+
+Figure 2: (a) The training procedure of BVAE-TTS. The scissors represent to detach the signal from the computational graph to block the gradient signal in backpropagation. The Downsample and Upsample layers are included in even-numbered BVAE blocks. $\mathbf{V}_{\mathrm{exp}}$ represents the expanded V to fit the length of the top-down path input. The $f(\cdot)$ represents the Straight-Through argmax with jitter, and $\nabla \mathcal{L}$ represents the gradient signal. (b) BVAE Layer. The dotted arrow represents sampling. The red lines indicate the parameters of prior and approximate posterior normal distributions. (c) The inference procedure of BVAE-TTS that uses the top-down path only.
+
+
+
+
+
+that is not only natural but also faithful to the input text. To this end, we add a dot-product attention network (Bahdanau et al., 2015) on top of the BVAE, which serves as a channel for BVAE-TTS to learn how to utilize the text properly. First, using a text encoder, the key $(\mathbf{K})$ and value $(\mathbf{V})$ are obtained from a phoneme sequence, and the query $(\mathbf{Q})$ is obtained from the bottom-up path. Here, obtaining $\mathbf{Q}$ differs from the bottom-up paths of the previous BVAE studies in the image domain, where only the parameters for posterior approximation are obtained. Second, based on the dot-product attention with $\mathbf{Q}$, $\mathbf{K}$, and $\mathbf{V}$, the $\mathbf{V}$ is expanded to $\mathbf{V}_{\mathrm{exp}}$ to fit the length of the top-down path, and $\mathbf{V}_{\mathrm{exp}}$ is fed into the top-down path of BVAE-TTS. Lastly, BVAE-TTS performs both the variational inference and the mel-spectrogram reconstruction along the top-down path using the expanded text representations with the following objectives:
+
+$$
+\mathcal{L}_{\text{recon}} = -\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z},\mathbf{y})\right], \tag{4}
+$$
+
+$$
+\mathcal{L}_{KL} = \sum_{k=1}^{K} \mathbb{E}_{q_{\phi}(\mathbf{z}_{<k}|\mathbf{x},\mathbf{y})}\left[D_{KL}\left[q_{\phi}(\mathbf{z}_k|\mathbf{x},\mathbf{z}_{<k},\mathbf{y}) \,\|\, p(\mathbf{z}_k|\mathbf{z}_{<k},\mathbf{y})\right]\right], \tag{5}
+$$
+
+where $\mathbf{x}$ represents mel-spectrogram, $\mathbf{y}$ represents text, $\mathbf{z}$ represents latent representation, and mean absolute error (MAE) loss is used for the $\mathcal{L}_{\text{recon}}$ .
+
+In addition to that, a duration predictor is jointly trained to predict durations corresponding to each phoneme in the logarithmic domain using mean square error (MSE) loss, $\mathcal{L}_{dur} = \mathbb{E}[(\log d_i - \log \hat{d}_i)^2]$ , where $d_{i}$ and $\hat{d}_i$ are obtained as described in Section 2.2. The duration predictor takes as input the $\mathbf{V}$ obtained from the text encoder, and here the $\mathbf{V}$ is detached from the computational graph to prevent it from affecting the BVAE training.
+
+# 3.2 ARCHITECTURE OF BVAE-TTS
+
+In this section, we describe the architecture of BVAE-TTS that hierarchically learns the latent representations based on BVAE blocks consisting of BVAE layers.
+
+BVAE block: As shown in Figure 2-(a), the main part of BVAE-TTS is the stacked BVAE blocks, each consisting of BVAE layers. To guide the multi-scale fine-to-coarse latent features to be contained in the latent hierarchies, the time dimension of input mel-spectrogram is downsampled using bilinear downsampling operations (Zhang, 2019) in even-numbered BVAE blocks along the bottom-up path. Here, the numbering of BVAE blocks starts with one and increases from bottom to top. On the contrary, its time dimension is upsampled again along the top-down path, by repeating the signals in the BVAE blocks where the downsamplings have occurred. In odd-numbered BVAE-blocks, the channel dimension is decreased along the bottom-up path and is increased along the top-down path. It is done at the pre- or post-convolutional network of the first BVAE layer in the BVAE blocks shown in Figure 2-(b).
+
+BVAE layer: The main element of the BVAE block is the BVAE layer. As shown in Figure 2-(b), along the bottom-up and top-down paths, the parameters of the prior and the approximate posterior distributions $\{\pmb{\mu}_k,\pmb{\Sigma}_k\}$ , $\{\Delta \pmb{\mu}_{k_1},\Delta \pmb{\Sigma}_{k_1}\}$ , $\{\Delta \pmb{\mu}_{k_2},\Delta \pmb{\Sigma}_{k_2}\}$ are obtained from 1-D convolutional networks. Then, the prior distribution $p_{\theta}(\mathbf{z}_k|\mathbf{z}_{< k},\mathbf{y})$ and the approximate posterior distribution $q_{\phi}(\mathbf{z}_k|\mathbf{z}_{< k},\mathbf{x},\mathbf{y})$ are defined as follows:
+
+$$
+p_{\theta}\left(\mathbf{z}_k \mid \mathbf{z}_{<k}, \mathbf{y}\right) := \mathcal{N}\left(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\right), \tag{6}
+$$
+
+$$
+q_{\phi}\left(\mathbf{z}_k \mid \mathbf{z}_{<k}, \mathbf{x}, \mathbf{y}\right) := \mathcal{N}\left(\boldsymbol{\mu}_k + \Delta\boldsymbol{\mu}_{k_1} + \Delta\boldsymbol{\mu}_{k_2},\; \boldsymbol{\Sigma}_k \cdot \Delta\boldsymbol{\Sigma}_{k_1} \cdot \Delta\boldsymbol{\Sigma}_{k_2}\right), \tag{7}
+$$
+
+where the diagonal covariance matrices $\pmb{\Sigma}$ are obtained after applying a softplus function to guarantee that they are positive. This parameterization follows (Vahdat & Kautz, 2020), where the approximate posterior $q_{\phi}(\mathbf{z}_k|\mathbf{z}_{< k},\mathbf{x},\mathbf{y})$ is expressed relative to the prior $p_{\theta}(\mathbf{z}_k|\mathbf{z}_{< k},\mathbf{y})$. With this parameterization, when the prior moves, the approximate posterior moves accordingly, which makes BVAE training easier and more stable. During training, the latent representation $\mathbf{z}_k$ is sampled from $q_{\phi}(\mathbf{z}_k|\mathbf{z}_{< k},\mathbf{x},\mathbf{y})$; at inference, it is sampled from $p_{\theta}(\mathbf{z}_k|\mathbf{z}_{< k},\mathbf{y})$. Other details on the BVAE-TTS architecture, such as the text encoder and the duration predictor, are in Appendix A.2.
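The residual parameterization of equations (6)-(7) can be sketched as follows. This is a scalar toy version for clarity (in the model these are per-dimension outputs of 1-D convolutions), and the function name is an assumption:

```python
import math

# Sketch of the residual parameterization in equations (6)-(7): the
# posterior parameters are expressed relative to the prior's, and a
# softplus keeps every variance-scale term positive.
def softplus(x):
    return math.log1p(math.exp(x))

def posterior_params(mu, raw_sigma, d_mu1, raw_d_sigma1, d_mu2, raw_d_sigma2):
    """Prior: N(mu, softplus(raw_sigma)).
    Posterior: N(mu + d_mu1 + d_mu2,
                 softplus(raw_sigma) * softplus(raw_d_sigma1)
                                     * softplus(raw_d_sigma2))."""
    sigma = softplus(raw_sigma)
    mu_q = mu + d_mu1 + d_mu2
    sigma_q = sigma * softplus(raw_d_sigma1) * softplus(raw_d_sigma2)
    return mu_q, sigma_q
```

When the prior mean `mu` shifts, the posterior mean shifts with it by construction, which is the stability property described above.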
+
+# 3.3 BRIDGE THE GAP BETWEEN TRAINING AND INFERENCE
+
+When BVAE-TTS reconstructs a mel-spectrogram during training, text representations are expanded via the attention network. In contrast, text representations are expanded via the duration predictor at inference. Therefore, to bridge the gap between the attention-based mel-spectrogram generation and the duration-based mel-spectrogram generation, we use the following techniques in this work.
+
+Straight-Through argmax: In the duration-based generation, the predicted duration of each phoneme is used after being rounded to the nearest integer, which means that every time step of a mel-spectrogram corresponds to exactly one phoneme. Therefore, during training, we use a trick called Straight-Through (ST) argmax: for each query time step, the phoneme representation given the largest attention weight (computed with the arg max operation) is passed to the top-down path instead of the weighted sum of the attention mechanism. During backpropagation, however, the parameters are updated as if the signal had been the weighted sum.
+
+Jitter: To make the model more robust to errors of the duration predictor, we apply jitter to the text representations: during training, each text representation obtained from the ST-argmax is replaced, with probability $25\%$ each, by the text representation attended by one of its neighboring queries. We also observe experimentally that applying jitter stabilizes the learning of the attention maps, so they do not become diffuse during training and stay diagonal.
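The forward-pass behavior of the two tricks above can be sketched as follows. This is an illustrative NumPy version (the real model applies ST-argmax inside autograd, where the backward pass flows through the soft attention weights instead); names and shapes are assumptions:

```python
import numpy as np

# Forward pass of ST-argmax: pick the hardest phoneme's value vector
# for each mel step instead of the attention-weighted sum.
def st_argmax_forward(attn, V):
    """attn: (T, S) attention weights per mel step; V: (S, d) values."""
    return V[attn.argmax(axis=1)]

# Jitter: replace each step's representation with a neighbor's, each
# direction with probability p (so it is kept with probability 1 - 2p).
def jitter(reps, p=0.25, rng=None):
    rng = rng or np.random.default_rng()
    T = len(reps)
    out = reps.copy()
    for t in range(T):
        u = rng.random()
        if u < p and t > 0:
            out[t] = reps[t - 1]
        elif u < 2 * p and t + 1 < T:
            out[t] = reps[t + 1]
    return out

V = np.array([[1.0], [2.0]])
attn = np.array([[0.9, 0.1], [0.2, 0.8]])
sel = st_argmax_forward(attn, V)
```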
+
+Positional encoding biasing & Guided attention: In order to reduce the gap between the attention-based generation and the duration-based generation, it is important for the learned attention maps to have diagonal shapes. Therefore, we use two additional techniques to directly help BVAE-TTS learn the diagonal attention maps. First, we add positional encoding vectors with different angular speeds to query and key as an inductive bias following (Ping et al., 2018). Second, we use an additional guided attention loss $\mathcal{L}_{\text {guide }}$ that gives penalties for attention weights deviating from the diagonal following (Tachibana et al., 2018). For more details on the techniques in this section, see Appendix A.3.
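The guided attention loss penalizes attention mass far from the diagonal via a soft mask. The following is a hedged sketch of that mask in the style of (Tachibana et al., 2018); the width $g = 0.2$ follows that paper and is an assumption here, not a value stated above:

```python
import math

# Guided-attention mask: near-zero close to the diagonal s/S == t/T,
# approaching one far from it. The loss would multiply this mask
# elementwise with the attention map and sum.
def guided_attention_mask(S, T, g=0.2):
    return [[1.0 - math.exp(-((s / S - t / T) ** 2) / (2 * g * g))
             for t in range(T)] for s in range(S)]
```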
+
+With the above techniques, BVAE-TTS is trained with the following objective:
+
+$$
+\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{recon}} + \alpha \cdot \mathcal{L}_{KL} + \mathcal{L}_{\text{dur}} + \mathcal{L}_{\text{guide}}, \tag{8}
+$$
+
+where $\alpha$ is a warm-up constant that linearly increases from 0 to 1 over the first $20\%$ of training. This technique is proposed in (Sønderby et al., 2016) to weaken the variational regularization in the early stages of training.
+
+# 4 EXPERIMENTS
+
+In this section, we describe the experimental setup and the results of the quantitative and qualitative experiments conducted to evaluate our BVAE-TTS $^1$. For comparison, we use two state-of-the-art TTS models: Tacotron 2 $^2$ as an AR TTS model and Glow-TTS $^3$ as a non-AR TTS model. Here, we use the pre-trained weights of the models that are publicly available.
+
+# 4.1 EXPERIMENTAL SETUP
+
+In the experiments, we mainly use the LJSpeech dataset (Ito & Johnson, 2017), split into 12,500/100/500 samples for the training/validation/test sets, respectively. For speech data, we convert raw waveforms into log-mel-spectrograms with a window length of 1024 and a hop length of 256 and use them as the target sequences of our BVAE-TTS model. For text data, we convert raw texts into phoneme sequences using a grapheme-to-phoneme library (Park, 2019) and use them as the input sequences of BVAE-TTS.
+
+We train BVAE-TTS, consisting of 4 BVAE blocks, for 300K iterations with a batch size of 128. As the optimizer, we use Adamax (Kingma & Ba, 2015) with $\beta_{1} = 0.9$, $\beta_{2} = 0.999$ and the learning rate scheduling of (Vaswani et al., 2017), with an initial learning rate of 1e-3 and 4000 warm-up steps. Training BVAE-TTS takes about 48 hours on an Intel(R) Xeon(R) Gold 5120 CPU (2.2GHz) and an NVIDIA V100 GPU, using PyTorch 1.6.0 and Python 3.6.10 on Ubuntu 16.04 LTS. For more details on the hyperparameters, see Appendix A.4.
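+
+The cited scheduling (Vaswani et al., 2017) is a linear warm-up followed by inverse-square-root decay; a sketch is below. Rescaling so the peak matches the stated 1e-3 rate is our own assumption for illustration, not a detail given in the paper:
+
```python
def noam_lr(step: int, warmup_steps: int = 4000, peak_lr: float = 1e-3) -> float:
    """Transformer-style schedule: linear warm-up over `warmup_steps`,
    then inverse-square-root decay, rescaled so the peak equals peak_lr."""
    step = max(step, 1)  # avoid division by zero at step 0
    scale = warmup_steps ** 0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
    return peak_lr * scale
```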
+
+# 4.2 EXPERIMENTAL RESULTS
+
+In this section, we compare BVAE-TTS with Tacotron 2 and Glow-TTS in terms of speech quality, inference time, and model size. For the quality evaluation, we use a pre-trained WaveGlow as the vocoder that converts mel-spectrograms to waveforms. When sampling latent representations in Glow-TTS and BVAE-TTS, we use a temperature of 0.333 for better speech quality (Kingma & Dhariwal, 2018).
+
+Table 1: Experimental results. The MOS-ID and MOS-OOD are written with $95\%$ confidence intervals. The number in parentheses represents the number of parameters of BVAE-TTS that are used at inference.
+
+| Method | MOS-ID | MOS-OOD | Inference Time (ms) | # of Parameters |
+| --- | --- | --- | --- | --- |
+| GT Audio | 4.68 ± 0.06 | - | - | - |
+| GT Mel-spectrogram | 4.41 ± 0.07 | - | - | - |
+| Tacotron 2 | 4.35 ± 0.07 | 4.16 ± 0.07 | 658.5 | 28.2M |
+| Glow-TTS | 3.96 ± 0.08 | 3.89 ± 0.10 | 43.07 | 28.6M |
+| BVAE-TTS | 4.14 ± 0.07 | 4.21 ± 0.07 | 24.20 | 16.0M (12.0M) |
+
+Speech quality: In this experiment, we measure the Mean Opinion Score (MOS) for audio samples generated by each TTS model using fifty sentences randomly sampled from the in-domain LJSpeech test set (MOS-ID). In addition, we measure another MOS on fifty sentences randomly sampled from the test-clean set of LibriTTS (Zen et al., 2019) to assess generalization to out-of-domain text data (MOS-OOD). Via Amazon Mechanical Turk (AMT), we assign five testers living in the United States to each audio sample and ask them to listen to it and rate its naturalness on a 9-point scale between 1 and 5.
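+
+The MOS values in Table 1 are reported as mean ± 95% confidence interval; a minimal sketch of such an aggregation (the normal-approximation interval is our own assumption, not a description of the authors' AMT pipeline):
+
```python
import math
import statistics

def mos_with_ci(scores, z: float = 1.96):
    """Mean opinion score with a normal-approximation 95% confidence
    interval: mean +/- z * s / sqrt(n)."""
    mean = statistics.fmean(scores)
    half_width = z * statistics.stdev(scores) / math.sqrt(len(scores))
    return mean, half_width
```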
+
+The MOS results in Table 1 demonstrate the superiority of our BVAE-TTS: it outperforms the state-of-the-art non-AR TTS model, Glow-TTS, on both MOS-ID and MOS-OOD. Although BVAE-TTS does not surpass Tacotron 2 in MOS-ID, our model achieves better results on MOS-OOD, which shows its robustness to out-of-domain text compared with the autoregressive TTS model, which suffers from accumulated prediction errors. For a better sense of the generated speech quality, we strongly encourage readers to listen to the audio samples in the supplementary material or on the demo page.
+
+Inference time: We measure the time taken to generate a mel-spectrogram from a text on the 500 sentences of the LJSpeech test set in a GPU environment. The average inference time of each TTS model is shown in Table 1. As can be seen in the table, our BVAE-TTS is 27.2 times faster on average than Tacotron 2, and 1.78 times faster than Glow-TTS. Moreover, due to the sequential generation of the AR TTS model, the gap between the inference speeds of BVAE-TTS and Tacotron 2 widens as the input text gets longer. See Appendix B for more details.
+
+Model size: As shown in the last column of Table 1, BVAE-TTS has the smallest number of parameters, 16.0M, while maintaining high-quality speech synthesis. Furthermore, BVAE-TTS shrinks to 12.0M parameters at inference because the layers belonging to the bottom-up path are not used to generate a mel-spectrogram; this is $58\%$ fewer parameters than Glow-TTS. It shows that the training principle of BVAE-TTS, hierarchically learning latent features while adjusting the hidden dimensions, keeps the model small. This is in contrast to flow-based TTS models such as Flow-TTS (Miao et al., 2020) and Glow-TTS (Kim et al., 2020), where many parameters are required due to their architectural constraints.
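+
+The reported speed-ups and parameter savings follow directly from the numbers in Table 1:
+
```python
# Figures taken from Table 1 (inference time in ms, parameter counts).
tacotron2_ms, glow_ms, bvae_ms = 658.5, 43.07, 24.20
glow_params, bvae_infer_params = 28.6e6, 12.0e6

speedup_vs_tacotron2 = tacotron2_ms / bvae_ms        # ~27.2x
speedup_vs_glow = glow_ms / bvae_ms                  # ~1.78x
param_saving = 1 - bvae_infer_params / glow_params   # ~58% fewer parameters
```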
+
+# 4.3 MODEL ANALYSIS
+
+As our BVAE-TTS is the first VAE-based parallel TTS model, we conduct several analyses on it. First, we analyze BVAE-TTS to see how the hierarchies are contained in the latent representations and how the variance in mel-spectrograms is learned. Then, we verify the effectiveness of the techniques used in BVAE-TTS such as Straight-Through (ST) argmax and jitter through ablation studies.
+
+# 4.3.1 ANALYSIS ON HIERARCHY
+
+In this experiment, we conduct an analysis of the hierarchical latent representation learning of BVAE-TTS. To see how the latent features of the mel-spectrograms are learned across the hierarchy, we observe the variation of mel-spectrograms sampled from the same text while using different temperatures for different BVAE blocks. Specifically, we select a target block among the four BVAE blocks and increase its variance by using a temperature of 1.0, 2.0, or 5.0 for the sampling in the BVAE layers belonging to that block. Conversely, we lower the variance of the non-target BVAE blocks by using a temperature of 0.333. We then sample 100 different mel-spectrograms from the same text, while varying the target BVAE block and its temperature.
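+
+Temperature control here amounts to scaling the standard deviation in the reparameterized sampling; a numpy sketch (the function name is ours):
+
```python
import numpy as np

def sample_latent(mu, sigma, temperature, rng):
    """Reparameterized sampling z = mu + T * sigma * eps: lowering the
    temperature T shrinks the variance of the latent sample."""
    eps = rng.standard_normal(mu.shape)
    return mu + temperature * sigma * eps
```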
+
+
+Figure 3: Averages of pixel-by-pixel standard deviations measured on randomly sampled 100 mel-spectrograms.
+
+Figure 3 shows the averages of pixel-by-pixel standard deviations measured on 100 mel-spectrograms randomly sampled from the same text. The block numbers in the figure run from one to four, starting from the bottom BVAE block. In this experiment, we observe that the variance of the speech is mostly contained in the latent representations of BVAE blocks 1 and 2, which are close to the mel-spectrogram. In contrast, there is little variance in the generated mel-spectrograms when we increase the temperature of BVAE blocks 3 and 4, which are close to the text representations. We can therefore conclude that the global content is mostly contained in the expanded text representations obtained with the text encoder and the duration predictor, and that BVAE blocks 3 and 4 focus on building the content rather than its style. Note that while Figure 3 shows standard deviations measured using one exemplar sentence, "One, Two, Three," the tendency is consistent regardless of the input sentence. Mel-spectrogram samples obtained in this experiment are in Appendix C.
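+
+The statistic plotted in Figure 3 can be computed as below (a sketch; `mean_pixel_std` is a hypothetical name):
+
```python
import numpy as np

def mean_pixel_std(mels: np.ndarray) -> float:
    """Average of pixel-by-pixel standard deviations over N mel-spectrograms
    of identical shape (N, n_mels, frames), as plotted in Figure 3."""
    return float(np.std(mels, axis=0).mean())
```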
+
+# 4.3.2 ABLATION STUDY
+
+Figure 4: Examples of ablations: (a) BVAE-TTS, (b) no jitter, (c) no ST-argmax. Each row shows (1) the learned attention map, (2) the reconstructed mel-spectrogram, (3) the diagonally stacked predicted durations, and (4) the mel-spectrogram generated from a text. (1) and (2) are obtained during training; (3) and (4) at inference. The red lines mark the sections of the durations corresponding to a 'comma' and a 'whitespace' in the text.
+
+We conduct ablation studies to see the effects of applying jitter and Straight-Through (ST) argmax to the soft attention mechanism in BVAE-TTS; the results are shown in Figure 4. Since jitter is applied on top of ST-argmax (to the output of the argmax), the ablation without ST-argmax corresponds to training BVAE-TTS with a normal soft attention mechanism.
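+
+For intuition, the forward passes of the two ablated components might look like the numpy sketch below. This is our own simplified illustration: in particular, the neighbour-replacement form of jitter is an assumption, and the straight-through backward pass (gradients flowing through the soft map) is omitted:
+
```python
import numpy as np

def st_argmax(attn: np.ndarray) -> np.ndarray:
    """Forward pass of Straight-Through argmax: each mel frame attends to
    exactly one phoneme (one-hot rows)."""
    hard = np.zeros_like(attn)
    hard[np.arange(attn.shape[0]), attn.argmax(axis=1)] = 1.0
    return hard

def jitter(values: np.ndarray, prob: float, rng) -> np.ndarray:
    """Randomly replace each selected value with one of its temporal
    neighbours, so a single frame cannot carry too much information."""
    offsets = rng.choice([-1, 0, 1], size=len(values),
                         p=[prob / 2, 1 - prob, prob / 2])
    idx = np.clip(np.arange(len(values)) + offsets, 0, len(values) - 1)
    return values[idx]
```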
+
+The most noticeable differences appear in the learned attention maps. As shown in the first row of Figure 4-(a) and (b), applying jitter helps BVAE-TTS learn well-aligned attention maps. This yields more accurate duration labels for training the duration predictor, which leads to more natural speech. We observe that BVAE-TTS without jitter still generates clear, if slightly unnatural, speech, obtaining a 3.68 MOS on the LJSpeech dataset. As shown in the bottom mel-spectrogram of Figure 4-(c), BVAE-TTS without the ST-argmax technique only generates a stuttering sound.
+
+As shown in Figure 4-(a), although BVAE-TTS does not learn a perfect attention map, it still successfully generates a mel-spectrogram at inference. Since the text is forced to be used monotonically in the duration-based generation, the model is more robust to attention errors and makes fewer pronunciation mistakes. In addition, when using the duration predictor, it is possible to locally control the speed of speech by adjusting the predicted durations. The experiment on speed control is included in Appendix D.
+
+# 5 DISCUSSION
+
+To overcome the limitations of autoregressive (AR) TTS models, various non-AR architectures have recently been proposed. On one hand, there are feed-forward neural networks such as ParaNet (Peng et al., 2020) and FastSpeech 1, 2 (Ren et al., 2019; 2020), which use knowledge distillation or additional duration labels and acoustic features. Although they succeed in predicting the lengths of the target mel-spectrograms, feed-forward architectures do not fit the one-to-many mapping problem in TTS. Therefore, FastSpeech (Ren et al., 2019) uses as targets the mel-spectrograms generated by an AR teacher model, in which much of the diversity of the original mel-spectrograms has been eliminated. FastSpeech 2 (Ren et al., 2020) goes further and directly uses additional pre-obtained acoustic features such as pitch and energy to relax the one-to-many mapping nature of TTS. In contrast to these models, BVAE-TTS solves a one-to-one mapping problem during training because there is only one possible target for the reconstruction task. As a result, BVAE-TTS can generate natural and diverse samples while learning the latent features of mel-spectrograms.
+
+On the other hand, there are generative flow-based non-AR TTS models such as Flow-TTS (Miao et al., 2020) and Glow-TTS (Kim et al., 2020). While their speech quality is comparable to that of AR TTS models, flow-based generative models usually require a large number of parameters: the dimensions of the hidden representations must stay the same throughout the flow network, and their bipartite flows require many layers and a larger hidden size because of their limited expressiveness (Ping et al., 2019; Lee et al., 2020). Contrary to flow-based TTS models, our BVAE-TTS is free from this issue. In this work, by designing BVAE-TTS as a hierarchical architecture with varying hidden dimensions, we outperform the flow-based TTS model Glow-TTS in both speech quality and speed, with a much smaller model size.
+
+# 6 CONCLUSION
+
+In this work, we propose BVAE-TTS, the first VAE-based non-AR TTS model that generates a mel-spectrogram from a text in parallel. To use the BVAE architecture in text-to-speech, we combine BVAE with an attention mechanism to utilize the text and to extract duration labels for training the duration predictor. In our experiments, BVAE-TTS generates speech $27\times$ faster than Tacotron 2 with similar speech quality, and also outperforms Glow-TTS in terms of both speech quality and inference time with $58\%$ fewer parameters. Since our VAE-based TTS model shows competitive performance and has many advantages over previous non-AR TTS models, we hope it becomes a good starting point for future VAE-based TTS research.
+
+# ACKNOWLEDGMENTS
+
+K. Jung is with ASRI, Seoul National University, Korea. This work was supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No. 10073144). This research was the result of a study on the "HPC Support" project, supported by the Ministry of Science and ICT and NIPA.
+
+# REFERENCES
+
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1409.0473.
+Wei-Ning Hsu, Yu Zhang, Ron J Weiss, Heiga Zen, Yonghui Wu, Yuxuan Wang, Yuan Cao, Ye Jia, Zhifeng Chen, Jonathan Shen, et al. Hierarchical generative modeling for controllable speech synthesis. In International Conference on Learning Representations, 2018.
+
+Keith Ito and Linda Johnson. The lj speech dataset. https://keithito.com/LJ-Speech-Dataset/, 2017.
+
+Jaehyeon Kim, Sungwon Kim, Jungil Kong, and Sungroh Yoon. Glow-tts: A generative flow for text-to-speech via monotonic alignment search. arXiv preprint arXiv:2005.11129, 2020.
+Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR (Poster), 2015. URL http://arxiv.org/abs/1412.6980.
+Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in neural information processing systems, pp. 10215-10224, 2018.
+Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In Advances in neural information processing systems, pp. 4743-4751, 2016.
+Sang-gil Lee, Sungwon Kim, and Sungroh Yoon. Nanoflow: Scalable normalizing flows with sublinear parameter complexity. arXiv preprint arXiv:2006.06280, 2020.
+Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. Neural speech synthesis with transformer network. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 6706-6713, 2019.
+Lars Maaloe, Marco Fraccaro, Valentin Lievin, and Ole Winther. Biva: A very deep hierarchy of latent variables for generative modeling. In Advances in neural information processing systems, pp. 6551-6562, 2019.
+Chenfeng Miao, Shuang Liang, Minchuan Chen, Jun Ma, Shaojun Wang, and Jing Xiao. Flow-tts: A non-autoregressive network for text to speech based on flow. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7209-7213. IEEE, 2020.
+Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1QRgziT-.
+Kyubyong Park and Jongseok Kim. g2pE. https://github.com/Kyubyong/g2p, 2019.
+Kainan Peng, Wei Ping, Zhao Song, and Kexin Zhao. Non-autoregressive neural text-to-speech. In Proceedings of the 37th International Conference on Machine Learning, pp. 10192-10204. PMLR, 2020.
+Wei Ping, Kainan Peng, Andrew Gibiansky, Sercan O Arik, Ajay Kannan, Sharan Narang, Jonathan Raiman, and John Miller. Deep voice 3: 2000-speaker neural text-to-speech. Proc. ICLR, pp. 214-217, 2018.
+Wei Ping, Kainan Peng, Kexin Zhao, and Zhao Song. Waveflow: A compact flow-based model for raw audio. arXiv preprint arXiv:1912.01219, 2019.
+Ryan Prenger, Rafael Valle, and Bryan Catanzaro. Waveglow: A flow-based generative network for speech synthesis. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3617-3621. IEEE, 2019.
+Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. Fastspeech: Fast, robust and controllable text to speech. In Advances in Neural Information Processing Systems, pp. 3171-3180, 2019.
+Yi Ren, Chenxu Hu, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. Fastspeech 2: Fast and high-quality end-to-end text-to-speech. arXiv preprint arXiv:2006.04558, 2020.
+Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, Rj Skerrv-Ryan, et al. Natural tts synthesis by conditioning wavenet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4779-4783. IEEE, 2018.
+Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In Advances in neural information processing systems, pp. 3738-3746, 2016.
+
+Jose Sotelo, Soroush Mehri, Kundan Kumar, João Felipe Santos, Kyle Kastner, Aaron C. Courville, and Yoshua Bengio. Char2wav: End-to-end speech synthesis. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=B1VWyySKx.
+Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104-3112, 2014.
+Hideyuki Tachibana, Katsuya Uenoyama, and Shunsuke Aihara. Efficiently trainable text-to-speech system based on deep convolutional networks with guided attention. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4784-4788. IEEE, 2018.
+Arash Vahdat and Jan Kautz. Nvae: A deep hierarchical variational autoencoder. arXiv preprint arXiv:2007.03898, 2020.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
+Yuxuan Wang, R.J. Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc Le, Yannis Agiomyrgiannakis, Rob Clark, and Rif A. Saurous. Tacotron: Towards end-to-end speech synthesis. In Proc. Interspeech 2017, pp. 4006-4010, 2017. doi: 10.21437/Interspeech.2017-1452. URL http://dx.doi.org/10.21437/Interspeech.2017-1452.
+Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J. Weiss, Ye Jia, Zhifeng Chen, and Yonghui Wu. LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech. In Proc. Interspeech 2019, pp. 1526-1530, 2019. doi: 10.21437/Interspeech.2019-2441. URL http://dx.doi.org/10.21437/Interspeech.2019-2441.
+Zhen Zeng, Jianzong Wang, Ning Cheng, Tian Xia, and Jing Xiao. Aligntts: Efficient feed-forward text-to-speech system without explicit alignment. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6714-6718. IEEE, 2020.
+Richard Zhang. Making convolutional networks shift-invariant again. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 7324-7334. PMLR, 2019. URL http://proceedings.mlr.press/v97/zhang19a.html.
+Ya-Jie Zhang, Shifeng Pan, Lei He, and Zhen-Hua Ling. Learning latent representations for style control and transfer in end-to-end speech synthesis. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6945-6949. IEEE, 2019.
+
+# A DETAILS ON BVAE-TTS
+
+# A.1 ALGORITHMS
+
+Algorithm 1: Pseudo-code of BVAE-TTS training
+Data:
+$\mathbf{x}$, $\mathbf{y}$: a mel-spectrogram, a phoneme sequence
+$\mathbf{Q}$, $\mathbf{K}$, $\mathbf{V}$, $\mathbf{V}_{\mathrm{exp}}$: query, key, value, and expanded value matrices
+$\mathbf{h}$, $\mathbf{h}_{\mathrm{exp}}$: hidden representations, expanded hidden representations
+BVAE-TTS[$b$,$l$]: the $l$-th BVAE layer in the $b$-th BVAE block
+$\mathrm{PE}_{\mathrm{query}}$, $\mathrm{PE}_{\mathrm{key}}$: positional encoding vectors for query and key
+$\mathbf{A}$: a soft attention map obtained before applying the ST-argmax technique
+$\mathbf{D}$: phoneme durations extracted from the attention map
+$\hat{\mathbf{D}}$: phoneme durations predicted by the duration predictor
+$\alpha$: a warm-up constant
+Result:
+$\hat{\mathbf{x}}$: a reconstructed mel-spectrogram
+$\mathcal{L}_{\text{total}}$, $\mathcal{L}_{\text{recon}}$, $\mathcal{L}_{KL}$, $\mathcal{L}_{\text{dur}}$, $\mathcal{L}_{\text{guide}}$: the total loss and the losses that make it up
+$\mathbf{K}$, $\mathbf{V} \gets$ TextEncoder($\mathbf{y}$);
+$\mathbf{h} \gets$ PreNet($\mathbf{x}$);
+for $b \gets 0$ to $B-1$ do
+  if $b \% 2 == 1$ then $\mathbf{h} \gets$ Downsample($\mathbf{h}$); end
+  for $l \gets 0$ to $L-1$ do
+    $\mathbf{h}$, $\Delta\mu_{k_1}$, $\Delta\Sigma_{k_1} \gets$ BVAE-TTS[$b$,$l$].BottomUp($\mathbf{h}$);
+    BVAE-TTS[$b$,$l$].$\Delta\mu_{k_1} \gets \Delta\mu_{k_1}$;
+    BVAE-TTS[$b$,$l$].$\Delta\Sigma_{k_1} \gets$ Softplus($\Delta\Sigma_{k_1}$);
+  end
+end
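+
+The extraction of the phoneme durations $\mathbf{D}$ from the attention map $\mathbf{A}$ can be sketched as follows (an illustration, not the authors' code): each mel frame is assigned to its argmax phoneme, and a phoneme's duration is the number of frames assigned to it.
+
```python
import numpy as np

def durations_from_attention(attn: np.ndarray, n_phonemes: int) -> np.ndarray:
    """Extract integer phoneme durations D from a (frames x phonemes)
    attention map via per-frame argmax counts."""
    assigned = attn.argmax(axis=1)
    return np.bincount(assigned, minlength=n_phonemes)
```
+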
+
+| Hyperparameter | BVAE-TTS |
+| --- | --- |
+| Phoneme Embedding Dimension | 256 |
+| Text Encoder Layers | 7 |
+| Text Encoder Hidden Dimension | 256 |
+| Text Encoder Conv1D Kernel Width | 5 |
+| Text Encoder Conv1D Filter Size | 256 |
+| Text Encoder Dropout | 0.1 |
+| Pre-net Layers | 2 |
+| Pre-net Dropout | 0.5 |
+| Pre-net Hidden Dimension | 256 |
+| Downsampling Conv1D Kernel | [0.25, 0.5, 0.25] |
+| Projection Layers | 3 |
+| Projection Dropout | 0.5 |
+| Projection Conv1D Kernel Width | 5 |
+| Projection Conv1D Filter Size | 256 |
+| Duration Predictor Conv1D Kernel Width | 3 |
+| Duration Predictor Conv1D Filter Size | 256 |
+| Duration Predictor Dropout | 0.1 |
+| BVAE Blocks | 4 |
+| BVAE Layers per Block | 3 |
+| BVAE Conv1D Kernel Width | 5 |
+| Hidden Dimensions of BVAE Blocks | 128, 128, 64, 64 |
+| Total Number of Parameters | 16.0M (12.0M) |
+
+# B PARALLEL SYNTHESIS
+
+
+Figure 6: Inference time measured on 500 sentences of LJSpeech test set.
+
+To see the benefit of parallel mel-spectrogram synthesis, we measure the inference time of Tacotron 2, Glow-TTS, and BVAE-TTS on the 500 sentences of the LJSpeech test set in a GPU environment. Figure 6 shows that the inference times of the non-AR TTS models remain almost constant as the input text gets longer, whereas the inference time of Tacotron 2 increases linearly.
+
+# C ANALYSIS ON HIERARCHY
+
+While changing the target BVAE block and its temperature as described in Section 4.3.1, we observe the generated mel-spectrogram samples. As shown in Figures 7-9, the variance of the mel-spectrograms is clear when we increase the temperature of the two bottom BVAE blocks. In contrast, the mel-spectrograms are almost identical when we increase the temperature of the top two BVAE blocks. In particular, when we set the temperature of BVAE block 2 to 5.0, the mel-spectrograms are the most diverse while retaining good speech quality.
+
+
+Figure 7: Mel-spectrograms generated from the same text, "One, Two, Three", at temperatures 1.0, 2.0, and 5.0.
+
+
+Figure 8: Mel-spectrograms generated from the same text, "Hello, my friends", for BVAE blocks 1-4 at temperatures 1.0, 2.0, and 5.0.
+
+
+
+
+Figure 9: Mel-spectrograms generated from the same text, "Trick or treat!", for BVAE blocks 1-4 at temperatures 1.0, 2.0, and 5.0.
+
+# D SPEED CONTROL
+
+Figure 10: Speed-controlled mel-spectrograms generated from the same text, "One, Two, Three."
+
+While AR TTS models generally lack controllability, BVAE-TTS can control the fine-grained speed of speech by multiplying the durations predicted by the duration predictor by a positive constant. Figure 10 shows three mel-spectrograms produced by BVAE-TTS from the same sentence, "One, Two, Three". While changing the target word from "One" to "Three", we multiply the durations of phonemes belonging to the target word by 2.0 and the durations of phonemes belonging to the non-target words by 0.7. In this experiment, we observe that BVAE-TTS successfully generates speech while varying the pronunciation speed of each word within a single sentence. Interestingly, our model also intensifies the pronunciation of the target word, showing its capability to appropriately adjust the prosody according to the speed.
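+
+The per-word duration scaling described above can be sketched as follows (the function name, word-span bookkeeping, and integer rounding are our own assumptions):
+
```python
def control_speed(durations, word_spans, target_word, slow=2.0, fast=0.7):
    """Scale predicted phoneme durations per word: phonemes of the target
    word are slowed down (x2.0 by default), all other phonemes sped up
    (x0.7), then rounded back to integer frame counts."""
    scaled = list(durations)
    for word, (start, end) in word_spans.items():
        factor = slow if word == target_word else fast
        for i in range(start, end):
            scaled[i] = max(1, round(scaled[i] * factor))
    return scaled
```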
\ No newline at end of file
diff --git a/bidirectionalvariationalinferencefornonautoregressivetexttospeech/images.zip b/bidirectionalvariationalinferencefornonautoregressivetexttospeech/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7f1d78dda5a854db854f9a47d17d91cb1efa2d00
--- /dev/null
+++ b/bidirectionalvariationalinferencefornonautoregressivetexttospeech/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:abdb53c56905be74b22b3524ffb3c6744c7e46bd4eb423a88a6f29621e110819
+size 904097
diff --git a/bidirectionalvariationalinferencefornonautoregressivetexttospeech/layout.json b/bidirectionalvariationalinferencefornonautoregressivetexttospeech/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ee1046afb003b5949ac7360f8e51e28ddee6e5a4
--- /dev/null
+++ b/bidirectionalvariationalinferencefornonautoregressivetexttospeech/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7cc60e38dcf4da0ae5528fabb93e1eb78318763fd9f150861649353379419cc0
+size 597728
diff --git a/bipointnetbinaryneuralnetworkforpointclouds/74ab6ea6-13bf-470c-a686-06ba34da1ca4_content_list.json b/bipointnetbinaryneuralnetworkforpointclouds/74ab6ea6-13bf-470c-a686-06ba34da1ca4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8bf107bb1c64863fe90568a5acb68938ae8a7279
--- /dev/null
+++ b/bipointnetbinaryneuralnetworkforpointclouds/74ab6ea6-13bf-470c-a686-06ba34da1ca4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7b8bc318964262b0a8de0b6f583630ae4fbbe6433331b98fbf421bb75dd2629
+size 144029
diff --git a/bipointnetbinaryneuralnetworkforpointclouds/74ab6ea6-13bf-470c-a686-06ba34da1ca4_model.json b/bipointnetbinaryneuralnetworkforpointclouds/74ab6ea6-13bf-470c-a686-06ba34da1ca4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..caf51f22d4cb8311c9cf5607169ab34fc20b818f
--- /dev/null
+++ b/bipointnetbinaryneuralnetworkforpointclouds/74ab6ea6-13bf-470c-a686-06ba34da1ca4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:849e4cd8e01cbcc4f9cb148dce500c5c2d8c0f4452ac353d02033dcfb8be7a76
+size 170570
diff --git a/bipointnetbinaryneuralnetworkforpointclouds/74ab6ea6-13bf-470c-a686-06ba34da1ca4_origin.pdf b/bipointnetbinaryneuralnetworkforpointclouds/74ab6ea6-13bf-470c-a686-06ba34da1ca4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2ddf8522740108fabca94eb231822a75f542d5e9
--- /dev/null
+++ b/bipointnetbinaryneuralnetworkforpointclouds/74ab6ea6-13bf-470c-a686-06ba34da1ca4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e6f4be064e628d2fb15c55144965159c82aad5ecb1f67f8e5d43b365882506d
+size 2982551
diff --git a/bipointnetbinaryneuralnetworkforpointclouds/full.md b/bipointnetbinaryneuralnetworkforpointclouds/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..17cf202533a8452fd9ca9eb6129c675c904c2f60
--- /dev/null
+++ b/bipointnetbinaryneuralnetworkforpointclouds/full.md
@@ -0,0 +1,646 @@
+# BIPOINTNET: BINARY NEURAL NETWORK FOR POINT CLOUDS
+
+Haotong Qin\*1,2, Zhongang Cai\*3, Mingyuan Zhang\*3, Yifu Ding1, Haiyu Zhao3, Shuai Yi3, Xianglong Liu†1, Hao Su4
+
+1State Key Lab of Software Development Environment, Beihang University
+$^{2}$ Shen Yuan Honors College, Beihang University $^{3}$ SenseTime Research
+4University of California, San Diego
+
+{qinhaotong,xlliu}@nlsde.buaa.edu.cn,zjdyf@buaa.edu.cn
+
+{caizhongang, zhangmingyuan, zhaohaiyu, yishuai}@sensetime.com
+
+haosu@eng.ucsd.edu
+
+# ABSTRACT
+
+To alleviate the resource constraints of real-time point cloud applications that run on edge devices, in this paper we present BiPointNet, the first model binarization approach for efficient deep learning on point clouds. We discover that the immense performance drop of binarized models for point clouds mainly stems from two challenges: aggregation-induced feature homogenization that leads to a degradation of information entropy, and scale distortion that hinders optimization and invalidates scale-sensitive structures. With theoretical justifications and in-depth analysis, our BiPointNet introduces Entropy-Maximizing Aggregation (EMA) to modulate the distribution before aggregation for maximum information entropy, and Layer-wise Scale Recovery (LSR) to efficiently restore feature representation capacity. Extensive experiments show that BiPointNet outperforms existing binarization methods by convincing margins, at a level even comparable with the full-precision counterpart. We highlight that our techniques are generic, guaranteeing significant improvements on various fundamental tasks and mainstream backbones. Moreover, BiPointNet gives an impressive $14.7 \times$ speedup and $18.9 \times$ storage saving on real-world resource-constrained devices.
+
+# 1 INTRODUCTION
+
+With the advent of deep neural networks that directly process raw point clouds (PointNet (Qi et al., 2017a) as the pioneering work), great success has been achieved in learning on point clouds (Qi et al., 2017b; Li et al., 2018; Wang et al., 2019a; Wu et al., 2019; Thomas et al., 2019; Liu et al., 2019b; Zhang et al., 2019b). Point cloud applications, such as autonomous driving and augmented reality, often require real-time interaction and fast response. However, computation for such applications is usually deployed on resource-constrained edge devices. To address the challenge, novel algorithms, such as Grid-GCN (Xu et al., 2020b), RandLA-Net (Hu et al., 2020), and PointVoxel (Liu et al., 2019d), have been proposed to accelerate those point cloud processing networks. While significant speedup and memory footprint reduction have been achieved, these works still rely on expensive floating-point operations, leaving room for further optimization of the performance from the model quantization perspective. Model binarization (Rastegari et al., 2016; Bulat & Tzimiropoulos, 2019; Hubara et al., 2016; Wang et al., 2020; Zhu et al., 2019; Xu et al., 2019) emerged as one of the most promising approaches to optimize neural networks for better computational and memory usage efficiency. Binary Neural Networks (BNNs) leverage 1) compact binarized parameters that take small memory space, and 2) highly efficient bitwise operations which are far less costly compared to the floating-point counterparts.
+
+Despite that in 2D vision tasks (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; Girshick et al., 2014; Girshick, 2015; Russakovsky et al., 2015; Wang et al., 2019b;
+
+
+Figure 1: Overview of our BiPointNet on the PointNet base model, applying Entropy-Maximizing Aggregation (EMA) and Layer-wise Scale Recovery (LSR). EMA consists of a transformation unit and an aggregation unit that maximize the information entropy of features after binarization. LSR, with a learnable layer-wise scaling factor $\alpha$, is applied to address the scale distortion of bi-linear layers (which form the BiMLPs), flexibly restoring the distorted output to reasonable values.
+
+Zhang et al., 2021) model binarization has been studied extensively, the methods developed are not readily transferable to 3D point cloud networks due to the fundamental differences between 2D images and 3D point clouds. First, to gain efficiency in processing unordered 3D points, many point cloud learning methods rely heavily on pooling layers with large receptive fields to aggregate point-wise features. As shown in PointNet (Qi et al., 2017b), global pooling provides a strong recognition capability. However, this practice poses challenges for binarization. Our analyses show that the degradation of feature diversity, a persistent problem with binarization (Liu et al., 2019a; Qin et al., 2020b; Xie et al., 2017), is significantly amplified by the global aggregation function (Figure 2), leading to homogenization of global features with limited discriminability. Second, binarization causes immense scale distortion at the point-wise feature extraction stage, which is detrimental to model performance in two ways: the saturation of forward-propagated features and backward-propagated gradients hinders optimization, and the disruption of scale-sensitive structures (Figure 3) invalidates their designated functionality.
+
+In this paper, we provide theoretical formulations of the above-mentioned phenomena and obtain insights through in-depth analysis. Such understanding allows us to propose a method that turns full-precision point cloud networks into extremely efficient yet strong binarized models (see the overview in Figure 1). To tackle the homogenization of binarized features after passing through the aggregation function, we study the correlation between the information entropy of binarized features and the performance of point cloud aggregation functions. We thus propose Entropy-Maximizing Aggregation (EMA), which shifts the feature distribution towards the statistical optimum, effectively improving the expressive capability of the global features. Moreover, given maximized information entropy, we further develop Layer-wise Scale Recovery (LSR) to efficiently restore the output scale, which enhances optimization and allows scale-sensitive structures to function properly. LSR uses only one learnable parameter per layer, leading to negligible storage increment and computation overhead.
+
+Our BiPointNet is the first binarization approach to deep learning on point clouds, and it outperforms existing binarization algorithms for 2D vision by convincing margins. It is even almost on par (within $\sim 1 - 2\%$) with its full-precision counterpart. Although we conduct most analysis on the PointNet baseline, we show that our methods are generic and readily extendable to other popular backbones, such as PointNet++ (Qi et al., 2017b), PointCNN (Li et al., 2018), DGCNN (Wang et al., 2019a), and PointConv (Wu et al., 2019), which are representatives of the mainstream categories of point cloud feature extractors. Moreover, extensive experiments on multiple fundamental point cloud tasks, such as classification, part segmentation, and semantic segmentation, highlight that our BiPointNet is task-agnostic. Besides, we highlight that our EMA and LSR are efficient and easy to implement in practice: in actual tests on popular edge devices, BiPointNet achieves $14.7\times$ speedup and $18.9\times$ storage savings compared to the full-precision PointNet. Our code is released at https://github.com/htqin/BiPointNet.
+
+# 2 RELATED WORK
+
+Network Binarization. Recently, various quantization methods for neural networks have emerged, such as uniform quantization (Gong et al., 2019; Zhu et al., 2020), mixed-precision quantization (Wu
+
+et al., 2018; Yu et al., 2020), and binarization. Among these methods, binarization enjoys compact binarized parameters and highly efficient bitwise operations for extreme compression and acceleration (Rastegari et al., 2016; Qin et al., 2020a). In general, the forward and backward propagation of binarized models in the training process can be formulated as:
+
+$$
+\text{Forward}: b = \operatorname{sign}(x) = \left\{ \begin{array}{ll} +1, & \text{if } x \geq 0 \\ -1, & \text{otherwise} \end{array} \right. \quad \text{Backward}: g_{x} = \left\{ \begin{array}{ll} g_{b}, & \text{if } x \in (-1, 1) \\ 0, & \text{otherwise} \end{array} \right. \tag{1}
+$$
+
+where $x$ denotes an element of the floating-point weights and activations, and $b$ denotes an element of the binarized weights $\mathbf{B}_{\mathbf{w}}$ and activations $\mathbf{B}_{\mathbf{a}}$. $g_{x}$ and $g_{b}$ denote the gradients $\frac{\partial C}{\partial x}$ and $\frac{\partial C}{\partial b}$, respectively, where $C$ is the cost function for the minibatch. In forward propagation, the sign function is applied directly to obtain the binary parameters. In backward propagation, the Straight-Through Estimator (STE) (Bengio et al., 2013) is used to approximate the derivative of the sign function, avoiding all-zero gradients. Existing binarization methods are designed to obtain accurate binarized networks by minimizing the quantization error (Rastegari et al., 2016; Zhou et al., 2016; Lin et al., 2017), improving the loss function (Ding et al., 2019; Hou et al., 2017), reducing the gradient error (Liu et al., 2018; 2020), and designing novel structures and pipelines (Martinez et al., 2020). Unfortunately, we show in Sec 3 that these methods, designed for 2D vision tasks, are not readily transferable to 3D point clouds.
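
As a concrete illustration of Eq. (1), the NumPy sketch below (our own illustration, not code from any released binarization library) shows the sign forward pass and the STE backward pass that zeroes gradients outside $(-1, 1)$:

```python
import numpy as np

def sign_forward(x):
    # Forward pass of Eq. (1): deterministic sign, mapping x >= 0 to +1
    return np.where(x >= 0, 1.0, -1.0)

def ste_backward(x, grad_b):
    # Straight-Through Estimator: pass grad_b through where x lies in (-1, 1),
    # zero elsewhere, avoiding the all-zero gradient of the true sign function
    return np.where(np.abs(x) < 1, grad_b, 0.0)
```

For example, `sign_forward(np.array([-0.3, 0.0, 2.0]))` yields `[-1., 1., 1.]`, while the STE blocks the gradient at the saturated entry `2.0`.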
+
+Deep Learning on Point Clouds. PointNet (Qi et al., 2017a) is the first deep learning model that processes raw point clouds directly. The basic building blocks proposed by PointNet, such as MLPs for point-wise feature extraction and max pooling for global aggregation (Guo et al., 2020), have become popular design choices for various categories of newer backbones: 1) the pointwise MLP-based, such as PointNet++ (Qi et al., 2017b); 2) the graph-based, such as DGCNN (Wang et al., 2019a); 3) the convolution-based, such as PointCNN (Li et al., 2018), PointConv (Wu et al., 2019), RS-CNN (Liu et al., 2019c), and KPConv (Thomas et al., 2019). Recently, methods have been proposed for efficient deep learning on point clouds through novel data structuring (Xu et al., 2020b), faster sampling (Hu et al., 2020), adaptive filters (Xu et al., 2020a), efficient representation (Liu et al., 2019d), or convolution operations (Zhang et al., 2019b). However, they still use expensive floating-point parameters and operations, which can be improved by binarization.
+
+# 3 METHODS
+
+Binarized models operate on efficient binary parameters, but often suffer a large performance drop. Moreover, the unique characteristics of point clouds pose even more challenges. We observe two main problems: first, aggregation of a large number of points leads to a severe loss of feature diversity; second, binarization induces an immense scale distortion that undermines the functionality of scale-sensitive structures. In this section, we discuss our observations and propose our BiPointNet with theoretical justifications.
+
+# 3.1 BINARIZATION FRAMEWORK
+
+We first give a brief introduction to our framework that binarizes a floating-point network. For example, deep learning models on point clouds typically contain multi-layer perceptrons (MLPs) for feature extraction. In contrast, the binarized models contain binary MLPs (BiMLPs), which are composed of binarized linear (bi-linear) layers. Bi-linear layers perform extremely efficient bitwise operations (XNOR and Bitcount) on lightweight binary weights and activations. Specifically, the activation of the bi-linear layer is binarized to $\mathbf{B}_{\mathbf{a}}$ and is computed with the binarized weight $\mathbf{B}_{\mathbf{w}}$ to obtain the output $\mathbf{Z}$:
+
+$$
+\mathbf {Z} = \mathbf {B} _ {\mathbf {a}} \odot \mathbf {B} _ {\mathbf {w}}, \tag {2}
+$$
+
+where $\odot$ denotes the inner product for vectors with bitwise operations XNOR and Bitcount. When $B_{w}$ and $B_{a}$ denote the random variables in $\mathbf{B}_{\mathbf{w}}$ and $\mathbf{B}_{\mathbf{a}}$ , we represent their probability mass function as $p_{B_w}(b_w)$ , and $p_{B_a}(b_a)$ .
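
To make the bitwise inner product concrete, the sketch below (an illustration of the idea, not an optimized deployment kernel) packs $\{-1,+1\}$ vectors into integer bit masks and evaluates $\odot$ with XNOR and Bitcount: the dot product equals twice the number of agreeing bit positions minus the length $m$:

```python
def pack(v):
    # Pack a {-1, +1} vector into an integer bit mask (bit i set iff v[i] == +1)
    bits = 0
    for i, e in enumerate(v):
        if e == 1:
            bits |= 1 << i
    return bits

def binary_dot(a, w):
    # XNOR marks agreeing positions; Bitcount tallies them.
    # Each agreement contributes +1 and each disagreement -1 to the dot product.
    m = len(a)
    xnor = ~(pack(a) ^ pack(w)) & ((1 << m) - 1)
    return 2 * bin(xnor).count("1") - m
```

`binary_dot([1, -1, 1, 1], [1, 1, -1, 1])` agrees with the naive sum of element-wise products; hardware bit-parallelism is what makes this far cheaper than floating-point multiply-accumulate.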
+
+Moreover, we divide the BiPointNet into units for detailed discussion. In BiPointNet, the original data or feature $\mathbf{X} \in \mathbb{R}^{n \times c}$ first enters the symmetric function $\Omega$, which represents a composite function built by stacking several permutation-equivariant and permutation-invariant layers (e.g., nonlinear layer, bi-linear layer, max pooling). Then, the output $\mathbf{Y} \in \mathbb{R}^{n \times k}$ is binarized to obtain
+
+Figure 2: Aggregation-induced feature homogenization. (a) Point-wise features to be aggregated: the activation of each test sample in a batch of ModelNet40. (b) Full-precision features aggregated with max pooling. (c) Binarized features aggregated with max pooling. (d) Binarized features aggregated with EMA. In (b)-(d), the single feature vectors pooled from all points are mapped to colors; the diversity of colors represents the diversity of pooled features. The original aggregation design is incompatible with binarization, leading to the homogenization of output features in (c), whereas our proposed EMA retains high information entropy, shown in (d)
+
+the binary feature $\mathbf{B} \in \{-1, 1\}^{i \times k}$ , where $i$ takes $n$ when the feature is modeled independently and takes $1$ when the feature is aggregated globally. The single unit is thus represented as
+
+$$
+\mathbf{B} = \operatorname{sign}(\mathbf{Y}) = \operatorname{sign}(\Omega(\mathbf{X})). \tag{3}
+$$
+
+Similarly, when $B$ , $Y$ and $X$ denote the random variables sampled from $\mathbf{B}$ , $\mathbf{Y}$ and $\mathbf{X}$ , we represent their probability mass function as $p_B(b)$ , $p_Y(y)$ and $p_X(x)$ .
+
+# 3.2 ENTROPY-MAXIMIZING AGGREGATION
+
+Unlike image pixels, which are arranged in regular lattices, point clouds are sets of points without any specific order. Hence, features are usually processed in a point-wise manner and aggregated explicitly through pooling layers. Our study shows that the aggregation function is a performance bottleneck of the binarized model, due to severe homogenization as shown in Figure 2.
+
+We apply information theory (Section 3.2.1) to quantify the effect of the loss of feature diversity, and find that global feature aggregation leads to a catastrophic loss of information entropy. In Section 3.2.2, we propose the concept of Entropy-Maximizing Aggregation (EMA) that gives the statistically maximum information entropy to effectively tackle the feature homogenization problem.
+
+# 3.2.1 AGGREGATION-INDUCED FEATURE HOMOGENIZATION
+
+Ideally, the binarized tensor $\mathbf{B}$ should reflect the information in the original tensor $\mathbf{Y}$ as much as possible. From the perspective of information, maximizing mutual information can maximize the information flow from the full-precision to the binarized parameters. Hence, our goal is equivalent to maximizing the mutual information $\mathcal{I}(Y; B)$ of the random variables $Y$ and $B$ :
+
+$$
+\underset {Y, B} {\arg \max } \mathcal {I} (Y; B) = \mathcal {H} (B) - \mathcal {H} (B \mid Y) \tag {4}
+$$
+
+where $\mathcal{H}(B)$ is the information entropy, and $\mathcal{H}(B\mid Y)$ is the conditional entropy of $B$ given $Y$ . $\mathcal{H}(B\mid Y) = 0$ as we use the deterministic sign function as the quantizer in binarization (see Section A.1 for details). Hence, the original objective function Eq. (4) is equivalent to:
+
+$$
+\underset {B} {\arg \max } \mathcal {H} _ {B} (B) = - \sum_ {b \in \mathcal {B}} p _ {B} (b) \log p _ {B} (b), \tag {5}
+$$
+
+where $\mathcal{B}$ is the set of possible values of $B$ . We then study the information properties of max pooling, which is a common aggregation function used in popular point cloud learning models such as PointNet. Let the max pooling be the last layer $\phi$ of the multi-layer stacked $\Omega$ , and the input of $\phi$ is defined as $\mathbf{X}_{\phi}$ . The data flow of Eq. (3) can be further expressed as $\mathbf{B} = \mathrm{sign}(\phi (\mathbf{X}_{\phi}))$ , and the information entropy $\mathcal{H}_B$ of binarized feature $B$ can be expressed as
+
+$$
+\mathcal {H} _ {B} \left(X _ {\phi}\right) = - \left(\sum_ {x _ {\phi} \geq 0} p _ {X _ {\phi}} \left(x _ {\phi}\right)\right) ^ {n} \log \left(\sum_ {x _ {\phi} \geq 0} p _ {X _ {\phi}} \left(x _ {\phi}\right)\right) ^ {n} - \left(1 - \left(\sum_ {x _ {\phi} \geq 0} p _ {X _ {\phi}} \left(x _ {\phi}\right)\right) ^ {n}\right) \log \left(1 - \left(\sum_ {x _ {\phi} \geq 0} p _ {X _ {\phi}} \left(x _ {\phi}\right)\right) ^ {n}\right) \tag {6}
+$$
+
+where $n$ is the number of elements aggregated by the max pooling, and $X_{\phi}$ is the random variable sampled from $\mathbf{X}_{\phi}$. A brief derivation of Eq. (6) is given in Appendix A.2. Theorem 1 characterizes the information properties of max pooling in the binarized network architecture.
+
+Theorem 1 For input $X_{\phi}$ of max pooling $\phi$ with an arbitrary distribution, the information entropy of the binarized output goes to zero as $n$ goes to infinity, i.e., $\lim_{n\to +\infty}\mathcal{H}_B = 0$. Moreover, there exists a constant $c$ such that for any $n_1$ and $n_2$, if $n_1 > n_2 > c$, then $\mathcal{H}_{B,n_1} < \mathcal{H}_{B,n_2}$, where $n$ is the number of elements to be aggregated.
+
+The proof of Theorem 1 is included in Appendix A.2, which explains the severe feature homogenization after global feature pooling layers. As the number of points is typically large (e.g. 1024 points by convention in ModelNet40 classification task), it significantly reduces the information entropy $\mathcal{H}_B$ of binarized feature $\mathbf{B}$ , i.e., the information of $\mathbf{Y}$ is hardly retained in $\mathbf{B}$ , leading to highly similar output features regardless of the input features to pooling layer as shown in Figure 2.
+
+Furthermore, Theorem 1 provides a theoretical justification for the poor performance of existing binarization methods, transferred from 2D vision tasks to point cloud applications. In 2D vision, the aggregation functions are often used to gather local features with a small kernel size $n$ (e.g. $n = 4$ in ResNet (He et al., 2016; Liu et al., 2018) and VGG-Net (Simonyan & Zisserman, 2014) which use $2 \times 2$ pooling kernels). Hence, the feature homogenization problem on images is not as significant as that on point clouds.
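
The entropy collapse described above is easy to reproduce numerically. Following the form of Eq. (6), the sketch below (our illustration) computes the binary entropy of a pooled feature when one binary outcome occurs with probability $q^n$, where $q$ is the per-point probability mass on the non-negative side:

```python
import math

def pooled_entropy(n, q=0.5):
    # Binary entropy (in bits) of the binarized pooled feature, in the form of
    # Eq. (6): one of the two binary outcomes occurs with probability q ** n
    p = q ** n
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))
```

With q = 0.5, the entropy drops from 1 bit at n = 1 toward zero as n grows, matching Theorem 1: a 2×2 image pooling kernel (n = 4) loses far less entropy than pooling over 1024 points.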
+
+# 3.2.2 EMA FOR MAXIMUM INFORMATION ENTROPY
+
+Therefore, we need a class of aggregation functions that maximize the information entropy of $\mathbf{B}$ to avoid the aggregation-induced feature homogenization.
+
+We study the correlation between the information entropy $\mathcal{H}_B$ of binary random variable $B$ and the distribution of the full-precision random variable $Y$ . We notice that the sign function used in binarization has a fixed threshold and decision levels, so we get Proposition 1 about information entropy of $B$ and the distribution of $Y$ .
+
+Proposition 1 When the distribution of the random variable $Y$ satisfies $\sum_{y < 0} p_Y(y) = \sum_{y \geq 0} p_Y(y) = 0.5$ , the information entropy $\mathcal{H}_B$ is maximized.
+
+The proof of Proposition 1 is shown in Appendix A.3. Therefore, theoretically, there is a distribution of $Y$ that can maximize the mutual information of $\mathbf{Y}$ and $\mathbf{B}$ by maximizing the information entropy of the binary tensor $\mathbf{B}$ , so as to maximally retain the information of $\mathbf{Y}$ in $\mathbf{B}$ .
+
+To maximize the information entropy $\mathcal{H}_B$, we propose EMA for feature aggregation in BiPointNet. EMA is not one layer, but a class of binarization-friendly aggregation layers. Modifying the aggregation function in the full-precision neural network into an EMA keeps the entropy maximized through input transformation. The definition of EMA is
+
+$$
+\mathbf{Y} = \operatorname{EMA}\left(\mathbf{X}_{\phi}\right) = \varphi(\tau\left(\mathbf{X}_{\phi}\right)), \tag{7}
+$$
+
+where $\varphi$ denotes the aggregation function (e.g. max pooling and average pooling) and $\tau$ denotes the transformation unit. Note that a standard normal distribution $\mathcal{N}(0,1)$ is assumed for $X_{\phi}$ because batch normalization layers are placed prior to the pooling layers by convention. $\tau$ can take many forms; we discover that a simple constant offset is already effective. The offset shifts the input so that the output distribution satisfies $\sum_{y < 0} p_Y(y) = 0.5$ , to maximize the information entropy of binary feature $B$ . The transformation unit $\tau$ in our BiPointNet can be defined as $\tau(\mathbf{X}_{\phi}) = \mathbf{X}_{\phi} - \delta^{*}$ .
+
+When max pooling is applied as $\varphi$ , we obtain the distribution offset $\delta^{*}$ for the input $X_{\phi}$ that maximizes the information entropy $\mathcal{H}_B$ by solving the objective function
+
+$$
+\begin{aligned} \arg \max_{\delta} \mathcal{H}_{B}(\delta) = & -\left(\sum_{x_{\phi} \geq 0} \frac{1}{\sqrt{2\pi}} e^{-\frac{\left(x_{\phi} - \delta\right)^{2}}{2}}\right)^{n} \log \left(\sum_{x_{\phi} \geq 0} \frac{1}{\sqrt{2\pi}} e^{-\frac{\left(x_{\phi} - \delta\right)^{2}}{2}}\right)^{n} \\ & -\left(1 - \left(\sum_{x_{\phi} \geq 0} \frac{1}{\sqrt{2\pi}} e^{-\frac{\left(x_{\phi} - \delta\right)^{2}}{2}}\right)^{n}\right) \log \left(1 - \left(\sum_{x_{\phi} \geq 0} \frac{1}{\sqrt{2\pi}} e^{-\frac{\left(x_{\phi} - \delta\right)^{2}}{2}}\right)^{n}\right), \end{aligned} \tag{8}
+$$
+
+where $n$ denotes the number of elements in each batch. For each $n$, we can obtain an optimized $\delta_{\mathrm{max}}^*$ for Eq. (8); the pseudo code is included in Appendix A.5.
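
Under the standard-normal assumption, Eq. (8) also admits a closed-form solution: by Proposition 1 the binary entropy is maximal when the probability mass $q(\delta) = \sum_{x_\phi \geq 0} \frac{1}{\sqrt{2\pi}} e^{-(x_\phi - \delta)^2/2} = \Phi(\delta)$ satisfies $q(\delta)^n = 0.5$, giving $\delta^*_{\mathrm{max}} = \Phi^{-1}(2^{-1/n})$. The sketch below is our own derivation sketch, not the pseudo code of Appendix A.5:

```python
from statistics import NormalDist

def ema_max_offset(n: int) -> float:
    # Solve q(delta) ** n == 0.5 with q(delta) = Phi(delta), the probability
    # mass on the non-negative side after shifting; the binarized pooled
    # feature then splits 50/50, which maximizes its information entropy
    return NormalDist().inv_cdf(0.5 ** (1.0 / n))
```

`ema_max_offset(1)` is 0, and the offset grows with n (roughly 3.2 for n = 1024), reflecting that more aggressive shifting is needed when more points are pooled.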
+
+Moreover, we derive in the Appendix A.6 that when average pooling is used as $\varphi$ , the solution to its objective function is expressed as $\delta = 0$ . We thus obtain $\delta_{\mathrm{avg}}^* = 0$ . This means the solution
+
+Figure 4: (a) Information entropy of the aggregated features: with EMA, our BiPointNet achieves higher information entropy. (b) Regularizer loss comparison: our BiPointNet has a low loss, indicating that the scale distortion is reduced and T-Net is not disrupted. (c) Ratio of zero-gradient activations in back-propagation: LSR alleviates the scale distortion, enhancing the optimization process
+
+is not related to $n$ . Hence, average pooling can be regarded as a flexible alternative because its performance is independent of the input number $n$ .
+
+In a nutshell, we provide two possible variants of EMA: first, we show that a simple shift is sufficient to turn a max pooling layer into an EMA (EMA-max); second, average pooling can be used directly (EMA-avg) without modification, as a large number of points does not undermine its information entropy, making it adaptive to a dynamically changing number of input points. Note that modifying existing aggregation functions is only one way to achieve EMA; the theory also guides the development of new binarization-friendly aggregation functions in the future.
+
+# 3.3 ONE-SCALE-FITS-ALL: LAYER-WISE SCALE RECOVERY
+
+In this section, we show that binarization leads to feature scale distortion and study its cause. We conclude that the distortion is directly related to the number of feature channels. More importantly, we discuss the detriments of scale distortion from two perspectives: the functionality of scale-sensitive structures and optimization.
+
+To address the severe scale distortion in features due to binarization, we propose Layer-wise Scale Recovery (LSR). In LSR, only one learnable scaling factor is added to each bi-linear layer to recover the original scales of all binarized parameters, with negligible additional computational overhead and memory usage.
+
+# 3.3.1 SCALE DISTORTION
+
+The scale of parameters is defined as the standard deviation $\sigma$ of their distribution. As we mentioned in Section 3.2, balanced binarized weights are used in the bi-linear layer aiming to maximize the entropy of the output after binarization, i.e., $p_{B_w}(1) = 0.5$ and $p_{B_a}(1) = 0.5$ .
+
+Theorem 2 When we let $p_{B_w}(1) = 0.5$ and $p_{B_a}(1) = 0.5$ in the bi-linear layer to maximize the mutual information, for the binarized weight $\mathbf{B}_{\mathbf{w}} \in \{-1, +1\}^{m \times k}$ and activation $\mathbf{B}_{\mathbf{a}} \in \{-1, +1\}^{n \times m}$, the probability mass function for the distribution of the output $\mathbf{Z}$ can be represented as $p_Z(2i - m) = 0.5^m C_m^i$, $i \in \{0,1,2,\dots,m\}$. The output approximately follows a normal distribution $\mathcal{N}(0,m)$.
+
+The proof of Theorem 2 is found in Appendix A.4. Theorem 2 shows that, given maximized information entropy, the scale of the output features is directly related to the number of feature channels. Hence, scale distortion is pervasive, as a large number of channels is the design norm of deep neural networks for effective feature extraction.
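
The variance claim in Theorem 2 can be verified directly from the stated mass function; the self-check below (our own illustration) sums $p_Z(2i - m)\,(2i - m)^2$ and recovers $\mathrm{Var}(Z) = m$:

```python
from math import comb, sqrt

def bilinear_output_std(m: int) -> float:
    # Std of Z under p_Z(2i - m) = 0.5**m * C(m, i): Z is a sum of m
    # independent +/-1 products, so its variance is exactly m
    var = sum(0.5 ** m * comb(m, i) * (2 * i - m) ** 2 for i in range(m + 1))
    return sqrt(var)
```

For m = 1024 channels the standard deviation is 32, consistent with the $\mathcal{N}(0, m)$ approximation and far from the unit scale the downstream layers expect.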
+
+
+Figure 3: Scale Distortion. Figures (b)-(d) show the transformed input. Compared with the input (a), the scales of (b) in full-precision PointNet and (c) in our BiPointNet are normal, while the scale of (d) in BNN is significantly distorted
+
+We discuss two major impacts of the scale distortion on the performance of binarized point cloud learning models. First, the scale distortion invalidates structures designed for 3D deep learning that
+
+Table 1: Ablation study for our BiPointNet on various tasks: ModelNet40 (classification), ShapeNet Parts (part segmentation), and S3DIS (semantic segmentation). EMA and LSR are complementary to each other, and they are useful across all three applications
+
+| Method | Bit-width | Aggr. | ModelNet40 OA | ShapeNet Parts mIoU | S3DIS mIoU | S3DIS OA |
+| --- | --- | --- | --- | --- | --- | --- |
+| Full Prec. | 32/32 | MAX | 88.2 | 84.3 | 54.4 | 83.5 |
+| Full Prec. | 32/32 | AVG | 86.5 | 84.0 | 51.5 | 81.5 |
+| BNN | 1/1 | MAX | 7.1 | 54.0 | 9.5 | 45.0 |
+| BNN-LSR | 1/1 | MAX | 4.1 | 58.7 | 2.0 | 25.4 |
+| BNN-EMA | 1/1 | EMA-avg | 11.3 | 53.0 | 9.9 | 46.8 |
+| BNN-EMA | 1/1 | EMA-max | 16.2 | 47.3 | 8.5 | 47.2 |
+| Ours | 1/1 | EMA-avg | 82.5 | 80.3 | 40.9 | 74.9 |
+| Ours | 1/1 | EMA-max | 86.4 | 80.6 | 44.3 | 76.7 |
+
+are sensitive to the scale of values. For example, the T-Net in PointNet is designed to predict an orthogonal transformation matrix for canonicalization of the input and intermediate features (Qi et al., 2017a). The predicted matrix is regularized by minimizing the loss term $L_{reg} = \left\| \mathbf{I} - \mathbf{Z}\mathbf{Z}^T\right\|_F^2$. However, this regularization is ineffective for a $\mathbf{Z}$ with huge variance, as shown in Figure 3.
+
+Second, the scale distortion leads to a saturation of forward-propagated activations and backward-propagated gradients (Ding et al., 2019). In binary neural networks, some modules (such as sign and Hardtanh) rely on the Straight-Through Estimator (STE) (Bengio et al., 2013) for feature binarization or feature balancing. When the scale of their input is amplified, the gradient is truncated instead of increased proportionally. Such saturation, as shown in Figure 4(c), hinders learning and can even lead to divergence.
+
+# 3.3.2 LSR FOR OUTPUT SCALE RECOVERY
+
+To recover the scale and adjustment ability of the output, we propose LSR for the bi-linear layers in our BiPointNet. We design a learnable layer-wise scaling factor $\alpha$ in LSR. $\alpha$ is initialized by the ratio of the standard deviations between the outputs of the bi-linear layer and its full-precision counterpart:
+
+$$
+\alpha_ {0} = \sigma (\mathbf {A} \otimes \mathbf {W}) / \sigma \left(\mathbf {B} _ {\mathbf {a}} \odot \mathbf {B} _ {\mathbf {w}}\right), \tag {9}
+$$
+
+where $\sigma$ denotes the standard deviation, and $\alpha$ is learnable during the training process. The forward computation and derivative of the bi-linear layer with our LSR are as follows:
+
+$$
+\text{Forward}: \mathbf{Z} = \alpha \left(\mathbf{B_{a}} \odot \mathbf{B_{w}}\right) \quad \text{Backward}: g_{\alpha} = g_{\mathbf{Z}} \left(\mathbf{B_{a}} \odot \mathbf{B_{w}}\right), \tag{10}
+$$
+
+where $g_{\alpha}$ and $g_{\mathbf{Z}}$ denote the gradients $\frac{\partial C}{\partial\alpha}$ and $\frac{\partial C}{\partial\mathbf{Z}}$, respectively. By applying LSR in BiPointNet, we mitigate the scale distortion of the output caused by binarization.
+
+Compared to existing methods, the advantages of LSR are twofold. First, LSR is efficient. It not only abandons the adjustment of input activations to avoid expensive inference-time computation, but also recovers the scale of all weight parameters in a layer collectively, instead of performing expensive restoration in a channel-wise manner (Rastegari et al., 2016). Second, LSR serves the purpose of scale recovery, which we show is more effective than other adaptations such as minimizing quantization errors (Qin et al., 2020b; Liu et al., 2018).
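
Put together, Eqs. (9) and (10) amount to a few lines. The NumPy sketch below is our own illustration of LSR (the released implementation may differ); note that the scalar gradient sums the chain-rule contributions over every output element:

```python
import numpy as np

def lsr_init(A, W, Ba, Bw):
    # Eq. (9): initialize alpha as the ratio of output standard deviations
    # between the full-precision layer and its binarized counterpart
    return np.std(A @ W) / np.std(Ba @ Bw)

def lsr_forward(Ba, Bw, alpha):
    # Eq. (10), forward: a single layer-wise factor recovers the output scale
    return alpha * (Ba @ Bw)

def lsr_grad_alpha(gZ, Ba, Bw):
    # Eq. (10), backward: dC/dalpha, accumulated over all output elements
    return float(np.sum(gZ * (Ba @ Bw)))
```

With this initialization, the binarized output's standard deviation matches the full-precision one at the start of training, at the cost of one extra parameter per layer.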
+
+# 4 EXPERIMENTS
+
+In this section, we conduct extensive experiments to validate the effectiveness of our proposed BiPointNet for efficient learning on point clouds. We first ablate our method and demonstrate the contributions of EMA and LSR on the three most fundamental tasks: classification on ModelNet40 (Wu et al., 2015), part segmentation on ShapeNet (Chang et al., 2015), and semantic segmentation on S3DIS (Armeni et al., 2016). Moreover, we compare BiPointNet with existing binarization methods, among which our designs stand out. Besides, BiPointNet is put to the test on real-world devices with limited computational power and achieves extremely high speedup $(14.7\times)$ and storage saving $(18.9\times)$. The details of the datasets and the implementations are included in Appendix E.
+
+Table 2: Comparison of binarization methods on PointNet. EMA is critical; even when all methods are equipped with our EMA, our LSR outperforms the others with the least number of scaling factors. OA: Overall Accuracy
+
+| Method | Bit-width | Aggr. | # Factors | OA |
+| --- | --- | --- | --- | --- |
+| Full Prec. | 32/32 | MAX | - | 88.2 |
+| Full Prec. | 32/32 | AVG | - | 86.5 |
+| BNN | 1/1 | MAX | 0 | 7.1 |
+| BNN | 1/1 | EMA-avg | 0 | 11.3 |
+| BNN | 1/1 | EMA-max | 0 | 16.2 |
+| IR-Net | 1/1 | MAX | 10097 | 7.3 |
+| IR-Net | 1/1 | EMA-avg | 10097 | 22.0 |
+| IR-Net | 1/1 | EMA-max | 10097 | 63.5 |
+| Bi-Real | 1/1 | MAX | 10097 | 4.0 |
+| Bi-Real | 1/1 | EMA-avg | 10097 | 77.0 |
+| Bi-Real | 1/1 | EMA-max | 10097 | 77.5 |
+| ABC-Net | 1/1 | MAX | 51 | 4.1 |
+| ABC-Net | 1/1 | EMA-avg | 51 | 68.9 |
+| ABC-Net | 1/1 | EMA-max | 51 | 77.8 |
+| XNOR++ | 1/1 | MAX | 18 | 4.1 |
+| XNOR++ | 1/1 | EMA-avg | 18 | 73.8 |
+| XNOR++ | 1/1 | EMA-max | 18 | 78.4 |
+| XNOR | 1/1 | MAX | 28529 | 64.9 |
+| XNOR | 1/1 | EMA-avg | 28529 | 78.2 |
+| XNOR | 1/1 | EMA-max | 28529 | 81.9 |
+| Ours | 1/1 | MAX | 18 | 4.1 |
+| Ours | 1/1 | EMA-avg | 18 | 82.5 |
+| Ours | 1/1 | EMA-max | 18 | 86.4 |
+
+Table 3: Our methods on mainstream backbones. We use XNOR as a strong baseline for comparison. The techniques in our BiPointNet are generic to point cloud learning. Hence, they are easily extendable to other backbones
+
+| Base Model | Method | Bit-width | Aggr. | OA |
+| --- | --- | --- | --- | --- |
+| PointNet (Vanilla) | Full Prec. | 32/32 | MAX | 86.8 |
+| PointNet (Vanilla) | XNOR | 1/1 | MAX | 61.0 |
+| PointNet (Vanilla) | Ours | 1/1 | EMA-max | 85.6 |
+| PointNet | Full Prec. | 32/32 | MAX | 88.2 |
+| PointNet | XNOR | 1/1 | MAX | 64.9 |
+| PointNet | Ours | 1/1 | EMA-max | 86.4 |
+| PointNet++ | Full Prec. | 32/32 | MAX | 90.0 |
+| PointNet++ | XNOR | 1/1 | MAX | 63.1 |
+| PointNet++ | Ours | 1/1 | EMA-max | 87.8 |
+| PointCNN | Full Prec. | 32/32 | AVG | 90.0 |
+| PointCNN | XNOR | 1/1 | AVG | 83.0 |
+| PointCNN | Ours | 1/1 | EMA-avg | 83.8 |
+| DGCNN | Full Prec. | 32/32 | MAX | 89.2 |
+| DGCNN | XNOR | 1/1 | MAX | 51.5 |
+| DGCNN | Ours | 1/1 | EMA-max | 83.4 |
+| PointConv | Full Prec. | 32/32 | - | 90.8 |
+| PointConv | XNOR | 1/1 | - | 83.1 |
+| PointConv | Ours | 1/1 | - | 87.9 |
+
+# 4.1 ABLATION STUDY
+
+As shown in Table 1, the binarized baseline suffers a catastrophic performance drop on the classification task. EMA and LSR improve performance considerably when used alone, and they further close the gap between the binarized model and the full-precision counterpart when used together.
+
+In Figure 4, we further validate the effectiveness of EMA and LSR. We show that BiPointNet with EMA has its information entropy maximized during training, whereas the vanilla binarized network with max pooling gives limited and highly fluctuating results. We also use the regularization loss $L_{reg} = \left\| \mathbf{I} - \mathbf{Z}\mathbf{Z}^T\right\|_F^2$ for the feature transformation matrix of T-Net in PointNet as an indicator. The $L_{reg}$ of BiPointNet with LSR is much smaller than that of the vanilla binarized network, demonstrating LSR's ability to reduce the scale distortion caused by binarization and allowing proper prediction of orthogonal transformation matrices.
+
+Moreover, we also include the results of two challenging tasks, part segmentation and semantic segmentation, in Table 1. As we follow the original PointNet design for segmentation, which concatenates point-wise features with the max-pooled global feature, segmentation suffers from the information loss caused by the aggregation function. EMA and LSR prove effective: BiPointNet approaches its full-precision counterpart with only a $\sim 4\%$ mIoU difference on part segmentation and a $\sim 10.4\%$ mIoU gap on semantic segmentation. The full segmentation results are presented in Appendix E.6.
+
+# 4.2 COMPARATIVE EXPERIMENTS
+
+In Table 2, we show that our BiPointNet outperforms other binarization methods such as BNN (Hubara et al., 2016), XNOR (Rastegari et al., 2016), Bi-Real (Liu et al., 2018), ABC-Net (Lin
+
+
+Figure 5: (a) Time cost comparison. Our BiPointNet achieves $14.7 \times$ speedup on ARM A72 CPU device. (b) Storage usage comparison. Our BiPointNet enjoys $18.9 \times$ storage saving on all devices. (c) Speed vs accuracy trade-off plot. We evaluate various binarization methods (with our EMA-max) upon PointNet architecture on ARM A72 CPU device, our BiPointNet is the leading method in both speed and accuracy
+
+et al., 2017), XNOR++ (Bulat & Tzimiropoulos, 2019), and IR-Net (Qin et al., 2020b). Although these methods have been proven effective in 2D vision, they are not readily transferable to point clouds due to aggregation-induced feature homogenization.
+
+Even if we equip these methods with our EMA to mitigate information loss, our BiPointNet still performs better. We argue that existing approaches, albeit having many scaling factors, focus on minimizing quantization errors instead of recovering feature scales, which is critical to effective learning on point clouds. Hence, BiPointNet stands out with a negligible increase of parameters that are designed to restore feature scales. The detailed analysis of the performance of XNOR is found in Appendix C. Moreover, we highlight that our EMA and LSR are generic, and Table 3 shows improvements across several mainstream categories of point cloud deep learning models, including PointNet (Qi et al., 2017a), PointNet++ (Qi et al., 2017b), PointCNN (Li et al., 2018), DGCNN (Wang et al., 2019a), and PointConv (Wu et al., 2019).
+
+# 4.3 DEPLOYMENT EFFICIENCY ON REAL-WORLD DEVICES
+
+To further validate the efficiency of BiPointNet when deployed on real-world edge devices, we implement our BiPointNet on a Raspberry Pi 4B with a 1.5 GHz 64-bit quad-core ARM Cortex-A72 CPU and a Raspberry Pi 3B with a 1.2 GHz 64-bit quad-core ARM Cortex-A53 CPU.
+
+We compare our BiPointNet with PointNet in Figure 5(a) and Figure 5(b). We highlight that BiPointNet achieves a $14.7 \times$ inference speedup and $18.9 \times$ storage reduction over PointNet, which is itself recognized as a fast and lightweight model. Moreover, we implement various binarization methods on the PointNet architecture and report their real speed performance on the ARM A72 CPU device. As shown in Figure 5(c), our BiPointNet surpasses all existing binarization methods in both speed and accuracy. Note that all binarization methods adopt our EMA and report their best accuracy, which is the important premise for them to be reasonably applied to binarize PointNet.
+
+# 5 CONCLUSION
+
+We propose BiPointNet, the first binarization approach for efficient learning on point clouds. We build a theoretical foundation to study the impact of binarization on point cloud learning models, and propose EMA and LSR in BiPointNet to improve performance. BiPointNet outperforms existing binarization methods, and it is easily extendable to a wide range of tasks and backbones, giving an impressive $14.7 \times$ speedup and $18.9 \times$ storage saving on resource-constrained devices. Our work demonstrates the great potential of binarization. We hope it can provide directions for future research.
+
+Acknowledgement This work was supported by National Natural Science Foundation of China (62022009, 61872021), Beijing Nova Program of Science and Technology (Z191100001119050), and State Key Lab of Software Development Environment (SKLSDE-2020ZX-06).
+
+# REFERENCES
+
+Iro Armeni, Ozan Sener, Amir R. Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3d semantic parsing of large-scale indoor spaces. In IEEE CVPR, 2016.
+Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
+Adrian Bulat and Georgios Tzimiropoulos. Xnor-net++: Improved binary neural networks. In BMVC, 2019.
+Angel X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Qixing Huang, Zimo Li, S. Savarese, M. Savva, Shuran Song, H. Su, J. Xiao, L. Yi, and F. Yu. Shapenet: An information-rich 3d model repository. CoRR, abs/1512.03012, 2015.
+Ruizhou Ding, Ting-Wu Chin, Zeye Liu, and Diana Marculescu. Regularizing activation distribution for training binarized deep networks. In IEEE CVPR, 2019.
+Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. CoRR, abs/1903.02428, 2019.
+Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE CVPR, 2014.
+Ross B. Girshick. Fast r-cnn. IEEE ICCV, 2015.
+Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In IEEE ICCV, 2019.
+Yulan Guo, Hanyun Wang, Qingyong Hu, Hao Liu, Li Liu, and Mohammed Bennamoun. Deep learning for 3d point clouds: A survey. IEEE TPAMI, 2020.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE CVPR, 2016.
+Lu Hou, Quanming Yao, and James T. Kwok. Loss-aware binarization of deep networks. *ICLR*, 2017.
+Qingyong Hu, Bo Yang, Linhai Xie, Stefano Rosa, Yulan Guo, Zhihua Wang, Niki Trigoni, and Andrew Markham. Randla-net: Efficient semantic segmentation of large-scale point clouds. In IEEE CVPR, 2020.
+Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In NeurIPS, 2016.
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NeurIPS, 2012.
+Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, X. Di, and B. Chen. Pointcnn: Convolution on x-transformed points. In NeurIPS, 2018.
+Xiaofan Lin, Cong Zhao, and Wei Pan. Towards accurate binary convolutional neural network. In NeurIPS, 2017.
+Chunlei Liu, Wenrui Ding, Xin Xia, Baochang Zhang, Jiaxin Gu, Jianzhuang Liu, Rongrong Ji, and David S. Doermann. Circulant binary convolutional networks: Enhancing the performance of 1-bit dnns with circulant back propagation. In IEEE CVPR, 2019a.
+Yongcheng Liu, Bin Fan, Shiming Xiang, and Chunhong Pan. Relation-shape convolutional neural network for point cloud analysis. IEEE CVPR, 2019b.
+Yongcheng Liu, Bin Fan, Shiming Xiang, and Chunhong Pan. Relation-shape convolutional neural network for point cloud analysis. In IEEE CVPR, pp. 8895-8904, 2019c.
+
+Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, and Kwang-Ting Cheng. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm. In ECCV, 2018.
+Zechun Liu, Zhiqiang Shen, Marios Savvides, and Kwang-Ting Cheng. Reactnet: Towards precise binary neural network with generalized activation functions. In ECCV, 2020.
+Zhijian Liu, Haotian Tang, Yujun Lin, and Song Han. Point-voxel cnn for efficient 3d deep learning. In NeurIPS, 2019d.
+Brais Martinez, Jing Yang, Adrian Bulat, and Georgios Tzimiropoulos. Training binary neural networks with real-to-binary convolutions. In ICLR, 2020.
+Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. IEEE CVPR, 2017a.
+Charles R. Qi, Li Yi, Hao Su, and Leonidas J. Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In NeurIPS, 2017b.
+Haotong Qin, Ruihao Gong, Xianglong Liu, Xiao Bai, Jingkuan Song, and Nicu Sebe. Binary neural networks: A survey. Pattern Recognition, 2020a.
+Haotong Qin, Ruihao Gong, Xianglong Liu, Mingzhu Shen, Ziran Wei, Fengwei Yu, and Jingkuan Song. Forward and backward information retention for accurate binary neural networks. In IEEE CVPR, 2020b.
+Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In ECCV, 2016.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV, 2015.
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In IEEE CVPR, 2015.
+Hugues Thomas, Charles R. Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, François Goulette, and Leonidas J. Guibas. Kpconv: Flexible and deformable convolution for point clouds. IEEE ICCV, 2019.
+Yiru Wang, Weihao Gan, Wei Wu, and Junjie Yan. Dynamic curriculum learning for imbalanced data classification. In IEEE ICCV, 2019a.
+Yunhe Wang, Chang Xu, Chao Xu, and Dacheng Tao. Packing convolutional neural networks in the frequency domain. IEEE TPAMI, 2019b.
+Ziwei Wang, Ziyi Wu, Jiwen Lu, and Jie Zhou. Bidet: An efficient binarized object detector. In IEEE CVPR, 2020.
+B. Wu, Y. Wang, P. Zhang, Yuandong Tian, P. Vajda, and K. Keutzer. Mixed precision quantization of convnets via differentiable neural architecture search. CoRR, abs/1812.00090, 2018.
+Wenxuan Wu, Zhongang Qi, and Fuxin Li. Pointconv: Deep convolutional networks on 3d point clouds. IEEE CVPR, 2019.
+Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In IEEE CVPR, 2015.
+Zizhao Wu, Ruyang Shou, Yunhai Wang, and Xinguo Liu. Interactive shape co-segmentation via label propagation. Computers & Graphics, 38:248-254, 2014.
+
+Bo Xie, Yingyu Liang, and Le Song. Diverse neural network learns true target functions. In Artificial Intelligence and Statistics, 2017.
+Chenfeng Xu, Bichen Wu, Zining Wang, Wei Zhan, Peter Vajda, Kurt Keutzer, and Masayoshi Tomizuka. Squeezesegv3: Spatially-adaptive convolution for efficient point-cloud segmentation. In ECCV, 2020a.
+Qiangeng Xu, Xudong Sun, Cho-Ying Wu, Panqu Wang, and U. Neumann. Grid-gcn for fast and scalable point cloud learning. In IEEE CVPR, 2020b.
+Yinghao Xu, Xin Dong, Yudian Li, and Hao Su. A main/subsidiary network framework for simplifying binary neural networks. In IEEE CVPR, 2019.
+Li Yi, Vladimir G Kim, Duygu Ceylan, I-Chao Shen, Mengyan Yan, Hao Su, Cewu Lu, Qixing Huang, Alla Sheffer, and Leonidas Guibas. A scalable active framework for region annotation in 3d shape collections. ACM Transactions on Graphics (ToG), 35(6):1-12, 2016.
+Haibao Yu, Q. Han, Jianbo Li, Jianping Shi, Guang-Liang Cheng, and Bin Fan. Search what you want: Barrier penalty nas for mixed precision quantization. In ECCV, 2020.
+Jianhao Zhang, Yingwei Pan, Ting Yao, He Zhao, and Tao Mei. dabnn: A super fast inference framework for binary neural networks on ARM devices. In ACM MM, 2019a.
+Xiangguo Zhang, Haotong Qin, Yifu Ding, Ruihao Gong, Qinghua Yan, Renshuai Tao, Yuhang Li, Fengwei Yu, and Xianglong Liu. Diversifying sample generation for data-free quantization. In IEEE CVPR, 2021.
+Zhiyuan Zhang, Binh-Son Hua, and Sai-Kit Yeung. Shellnet: Efficient point cloud convolutional neural networks using concentric shells statistics. In IEEE ICCV, 2019b.
+Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv, abs/1606.06160, 2016.
+Feng Zhu, Ruihao Gong, Fengwei Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, and Junjie Yan. Towards unified int8 training for convolutional neural network. In IEEE CVPR, 2020.
+Shilin Zhu, Xin Dong, and Hao Su. Binary ensemble neural network: More bits per network or more networks per bit? In IEEE CVPR, 2019.
+
+# APPENDIX FOR BIPOINTNET
+
+# A MAIN PROOFS AND DISCUSSION
+
+# A.1 PROOF OF ZERO CONDITIONAL ENTROPY
+
+In our BiPointNet, we hope that the binarized tensor $\mathbf{B}$ reflects the information in the original tensor $\mathbf{Y}$ as much as possible. From the perspective of information, our goal is equivalent to maximizing the mutual information $\mathcal{I}(Y; B)$ of the random variables $Y$ and $B$ :
+
+$$
+\begin{array}{l} \underset {Y, B} {\arg \max } \mathcal {I} (Y; B) (11) \\ = \sum_ {y \in \mathcal {Y}, b \in \mathcal {B}} p _ {(Y, B)} (y, b) \log \frac {p _ {(Y , B)} (y , b)}{p _ {Y} (y) p _ {B} (b)} (12) \\ = \sum_ {y \in \mathcal {Y}, b \in \mathcal {B}} p _ {(Y, B)} (y, b) \log \frac {p _ {(Y , B)} (y , b)}{p _ {Y} (y)} - \sum_ {y \in \mathcal {Y}, b \in \mathcal {B}} p _ {(Y, B)} (y, b) \log p _ {B} (b) (13) \\ = \sum_ {y \in \mathcal {Y}, b \in \mathcal {B}} p _ {Y} (y) p _ {B | Y = y} (b) \log p _ {B | Y = y} (b) - \sum_ {y \in \mathcal {Y}, b \in \mathcal {B}} p _ {(Y, B)} (y, b) \log p _ {B} (b) (14) \\ = \sum_ {y \in \mathcal {Y}} p _ {Y} (y) \left(\sum_ {b \in \mathcal {B}} p _ {B | Y = y} (b) \log p _ {B | Y = y} (b)\right) - \sum_ {b \in \mathcal {B}} \left(\sum_ {y} p _ {(Y, B)} (y, b)\right) \log p _ {B} (b) (15) \\ = - \sum_ {y \in \mathcal {Y}} p (y) \mathcal {H} (B \mid Y = y) - \sum_ {b \in \mathcal {B}} p _ {B} (b) \log p _ {B} (b) (16) \\ = - \mathcal {H} (B \mid Y) + \mathcal {H} (B) (17) \\ = \mathcal {H} (B) - \mathcal {H} (B \mid Y), (18) \\ \end{array}
+$$
+
+where $p_{(Y,B)}$ is the joint and $p_{Y}$, $p_{B}$ are the marginal probability mass functions of these discrete variables. $\mathcal{H}(B)$ is the information entropy, and $\mathcal{H}(B|Y)$ is the conditional entropy of $B$ given $Y$. According to Eq. (15) and Eq. (18), the conditional entropy $\mathcal{H}(B \mid Y)$ can be expressed as
+
+$$
+\mathcal {H} (B \mid Y) = - \sum_ {y \in \mathcal {Y}} p _ {Y} (y) \left(\sum_ {b \in \mathcal {B}} p _ {B \mid Y = y} (b) \log p _ {B \mid Y = y} (b)\right). \tag {19}
+$$
+
+Since we use the deterministic sign function as the quantizer in binarization, the value of $B$ fully depends on the value of $Y$, so $p_{B|Y=y}(b) \in \{0, 1\}$ in Eq. (4), i.e., every value $y$ has a fixed mapping to a binary value $b$. With the convention $0 \log 0 = 0$, we then have
+
+$$
+\mathcal {H} (B \mid Y) = \sum_ {y \in \mathcal {Y}} p _ {Y} (y) (0 + 0 + \dots + 0) = 0. \tag {20}
+$$
+
+Hence, the original objective function is equivalent to maximizing the information entropy $\mathcal{H}(B)$ :
+
+$$
+\underset {B} {\arg \max } \mathcal {H} _ {B} (B) = - \sum_ {b \in \mathcal {B}} p _ {B} (b) \log p _ {B} (b). \tag {21}
+$$
+
+# A.2 PROOFS OF THEOREM 1
+
+Theorem 1 For input $X_{\phi}$ of max pooling $\phi$ with arbitrary distribution, the information entropy of the binarized output tends to zero as $n$ tends to infinity, i.e., $\lim_{n\to +\infty}\mathcal{H}_B = 0$. Moreover, there exists a constant $c$ such that for any $n_1$ and $n_2$, if $n_1 > n_2 > c$, we have $\mathcal{H}_{B,n_1} < \mathcal{H}_{B,n_2}$, where $n$ is the number of aggregated elements.
+
+Proof. We first relate the probability mass function of the input $\mathbf{X}_{\phi}$ of max pooling to that of its output $\mathbf{Y}$: the maximum is negative if and only if all $n$ aggregated values are negative, so
+
+$$
+\sum_ {y < 0} p _ {Y} (y) = \left(\sum_ {x _ {\phi} < 0} p _ {X _ {\phi}} \left(x _ {\phi}\right)\right) ^ {n}. \tag {22}
+$$
+
+Since the sign function is applied as the quantizer, the $\mathcal{H}_B(B)$ of binarized feature can be expressed as Eq. (6).
+
+(1) When $X_{\phi}$ obeys an arbitrary distribution, the probability mass function $p_{X_{\phi}}(x_{\phi})$ must satisfy $\sum_{x_{\phi} < 0} p_{X_{\phi}}(x_{\phi}) \leq 1$. According to Eq. (6), let $t = \sum_{x_{\phi} < 0} p_{X_{\phi}}(x_{\phi})$ and assume $t < 1$ (when $t = 1$, $\mathcal{H}_B = 0$ holds trivially); we have
+
+$$
+\begin{array}{l} \lim _ {n \rightarrow \infty} \mathcal {H} _ {B} \left(X _ {\phi}\right) = \lim _ {n \rightarrow \infty} - t ^ {n} \log t ^ {n} - \left(1 - t ^ {n}\right) \log \left(1 - t ^ {n}\right) (23) \\ = - \left(\lim _ {n \rightarrow \infty} t ^ {n}\right) \log \left(\lim _ {n \rightarrow \infty} t ^ {n}\right) - \left(\lim _ {n \rightarrow \infty} \left(1 - t ^ {n}\right)\right) \log \left(\lim _ {n \rightarrow \infty} \left(1 - t ^ {n}\right)\right) (24) \\ = - 0 \log 0 - 1 \log 1 (25) \\ = 0 (26) \\ \end{array}
+$$
+
+(2) For any $n\geq 1$, the information entropy $\mathcal{H}_{B,n}(X_{\phi})$ can be written as
+
+$$
+\begin{array}{l} \mathcal {H} _ {B, n} (X _ {\phi}) = - \left(\sum_ {x _ {\phi} < 0} p _ {X _ {\phi}} (x _ {\phi})\right) ^ {n} \log \left(\sum_ {x _ {\phi} < 0} p _ {X _ {\phi}} (x _ {\phi})\right) ^ {n} \\ - \left(1 - \left(\sum_ {x _ {\phi} < 0} p _ {X _ {\phi}} \left(x _ {\phi}\right)\right) ^ {n}\right) \log \left(1 - \left(\sum_ {x _ {\phi} < 0} p _ {X _ {\phi}} \left(x _ {\phi}\right)\right) ^ {n}\right), \tag {27} \\ \end{array}
+$$
+
+Let $p_n = \left(\sum_{x_\phi < 0} p_{X_\phi}(x_\phi)\right)^n$; then $\mathcal{H}_{B,n}(p_n)$ can be expressed as
+
+$$
+\mathcal {H} _ {B, n} \left(p _ {n}\right) = - p _ {n} \log p _ {n} - \left(1 - p _ {n}\right) \log \left(1 - p _ {n}\right), \tag {28}
+$$
+
+and the derivative of $\mathcal{H}_{B,n}(p_n)$ is
+
+$$
+\frac {d \mathcal {H} _ {B , n} \left(p _ {n}\right)}{d p _ {n}} = \log \left(\frac {1 - p _ {n}}{p _ {n}}\right), \tag {29}
+$$
+
+so $\mathcal{H}_{B,n}(p_n)$ is maximized when $p_n = 0.5$, and it increases monotonically with $p_n$ when $p_n < 0.5$, since $\frac{d\mathcal{H}_{B,n}(p_n)}{dp_n} > 0$ in that range.
+
+Therefore, when the constant $c$ satisfies $p_c = \left(\sum_{x_\phi < 0} p_{X_\phi}(x_\phi)\right)^c \leq 0.5$, for any $n_1 > n_2 > c$ we have $p_{n_1} < p_{n_2} < p_c$, and hence $\mathcal{H}_{B,n_1}(X_\phi) < \mathcal{H}_{B,n_2}(X_\phi) < \mathcal{H}_{B,c}(X_\phi)$.
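Theorem 1 can be illustrated numerically (a sketch under the assumption $X_\phi \sim \mathcal{N}(0,1)$, so that $t = 0.5$ and $p_n = 0.5^n$): the entropy of the binarized max-pooling output collapses toward zero as the number of aggregated elements grows.

```python
import math

def binary_entropy(p):
    # H(p) = -p*log2(p) - (1-p)*log2(1-p), with the convention 0*log 0 = 0
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def pooled_entropy(t, n):
    # Entropy of the binarized output of max pooling over n elements,
    # where t = P(X_phi < 0); the maximum is negative iff all n inputs are.
    return binary_entropy(t ** n)

# For a zero-mean input distribution, t = 0.5: entropy decays toward 0 as n grows.
for n in (1, 4, 16, 64):
    print(n, pooled_entropy(0.5, n))
```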
+
+# A.3 PROOFS OF PROPOSITION 1
+
+Proposition 1 When the distribution of the random variable $Y$ satisfies $\sum_{y < 0} p_Y(y) = \sum_{y \geq 0} p_Y(y) = 0.5$ , the information entropy $\mathcal{H}_B$ is maximized.
+
+Proof. According to Eq. (5), we have
+
+$$
+\begin{array}{l} \mathcal {H} _ {B} (B) = - \sum_ {b \in \mathcal {B}} p _ {B} (b) \log p _ {B} (b) (30) \\ = - p _ {B} (- 1) \log p _ {B} (- 1) - p _ {B} (1) \log p _ {B} (1) (31) \\ = - p _ {B} (- 1) \log p _ {B} (- 1) - \left(1 - p _ {B} (- 1)\right) \log \left(1 - p _ {B} (- 1)\right). (32) \\ \end{array}
+$$
+
+Then we can get the derivative of $\mathcal{H}_B(B)$ with respect to $p_B(-1)$
+
+$$
+\begin{array}{l} \frac {d \mathcal {H} _ {B} (B)}{d p _ {B} (- 1)} = - \left(\log p _ {B} (- 1) + \frac {p _ {B} (- 1)}{p _ {B} (- 1) \ln 2}\right) + \left(\log \left(1 - p _ {B} (- 1)\right) + \frac {1 - p _ {B} (- 1)}{\left(1 - p _ {B} (- 1)\right) \ln 2}\right) (33) \\ = - \log p _ {B} (- 1) + \log (1 - p _ {B} (- 1)) - \frac {1}{\ln 2} + \frac {1}{\ln 2} (34) \\ = \log \left(\frac {1 - p _ {B} (- 1)}{p _ {B} (- 1)}\right). (35) \\ \end{array}
+$$
+
+When we let $\frac{d\mathcal{H}_B(B)}{dp_B(-1)} = 0$ to maximize $\mathcal{H}_B(B)$, we have $p_B(-1) = 0.5$. Since the deterministic sign function with zero threshold is applied as the quantizer, the probability mass function of $B$ is represented as
+
+$$
+p _ {B} (b) = \left\{ \begin{array}{l l} \sum_ {y < 0} p _ {Y} (y), & \text {if } b = - 1 \\ \sum_ {y \geq 0} p _ {Y} (y), & \text {if } b = 1, \end{array} \right. \tag {36}
+$$
+
+and when the information entropy is maximized, we have
+
+$$
+\sum_ {y < 0} p _ {Y} (y) = 0.5. \tag {37}
+$$
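As a quick numerical sanity check of Proposition 1 (an illustrative sketch, not part of the original proof), a grid search over $p_B(-1)$ confirms that the binary entropy peaks at $0.5$:

```python
import math

def entropy(p):
    # Binary information entropy of {p, 1 - p} in bits; 0*log 0 is taken as 0.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Sweep p_B(-1) over a fine grid and locate the maximizer.
grid = [i / 1000 for i in range(1001)]
best = max(grid, key=entropy)
print(best, entropy(best))  # the maximum is attained at p_B(-1) = 0.5
```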
+
+
+
+# A.4 DISCUSSION AND PROOFS OF THEOREM 2
+
+The bi-linear layers are widely used in our BiPointNet to model each point independently, and each linear layer outputs an intermediate feature. The calculation of the bi-linear layer is represented as Eq. (2). Since the random variable $B$ is sampled from $\mathbf{B}_{\mathbf{w}}$ or $\mathbf{B}_{\mathbf{a}}$ obeying Bernoulli distribution, the probability mass function of $B$ can be represented as
+
+$$
+p _ {B} (b) = \left\{ \begin{array}{l l} p, & \text {if } b = + 1 \\ 1 - p, & \text {if } b = - 1, \end{array} \right. \tag {38}
+$$
+
+where $p$ is the probability of taking the value $+1$ . The distribution of output $\mathbf{Z}$ can be represented by the probability mass function of $\mathbf{B}_{\mathbf{w}}$ and $\mathbf{B}_{\mathbf{a}}$ .
+
+Proposition 2 In bi-linear layer, for the binarized weight $\mathbf{B}_{\mathbf{w}} \in \{-1, +1\}^{m \times k}$ and activation $\mathbf{B}_{\mathbf{a}} \in \{-1, +1\}^{n \times m}$ with probability mass function $p_{B_w}(1) = p_w$ and $p_{B_a}(1) = p_a$ , the probability mass function for the distribution of output $\mathbf{Z}$ can be represented as $p_Z(2i - m) = C_m^i (1 - p_w - p_a + 2p_wp_a)^i (p_w + p_a - 2p_wp_a)^{m - i}, i \in \{0,1,2,\dots,m\}$ .
+
+Proof. To simplify the notation in the following statements, we define $\mathbf{A} = \mathbf{B}_{\mathbf{a}}$ and $\mathbf{W} = \mathbf{B}_{\mathbf{w}}$. Then, each element $x_{i,j}$ of the output $\mathbf{Z} \in \mathbb{Z}^{n \times k}$ is given by
+
+$$
+x _ {i, j} = \sum_ {l = 1} ^ {m} \mathbf {A} _ {i, l} \times \mathbf {W} _ {l, j}. \tag {39}
+$$
+
+Observe that $\mathbf{A}_{i,l}$ is independent of $\mathbf{W}_{l,j}$ and both variables take values in $\{-1, +1\}$. Therefore, the discrete probability distribution of $\mathbf{A}_{i,l} \times \mathbf{W}_{l,j}$ can be defined as
+
+$$
+p (x) = \left\{ \begin{array}{l l} p _ {w} p _ {a} + \left(1 - p _ {w}\right) \left(1 - p _ {a}\right), & \text {if } x = 1 \\ p _ {w} \left(1 - p _ {a}\right) + \left(1 - p _ {w}\right) p _ {a}, & \text {if } x = - 1 \\ 0, & \text {otherwise.} \end{array} \right. \tag {40}
+$$
+
+Simplifying the above equation,
+
+$$
+p (x) = \left\{ \begin{array}{l l} 1 - p _ {w} - p _ {a} + 2 p _ {w} p _ {a}, & \text {if } x = 1 \\ p _ {w} + p _ {a} - 2 p _ {w} p _ {a}, & \text {if } x = - 1 \\ 0, & \text {otherwise.} \end{array} \right. \tag {41}
+$$
+
+Notice that $x_{i,j}$ follows a shifted binomial distribution. Then we have
+
+$$
+\Pr \left(x _ {i, j} = l - (m - l)\right) = C _ {m} ^ {l} \left(1 - p _ {w} - p _ {a} + 2 p _ {w} p _ {a}\right) ^ {l} \left(p _ {w} + p _ {a} - 2 p _ {w} p _ {a}\right) ^ {m - l}. \tag {42}
+$$
+
+Observe that $p_Z$ obeys the same distribution as $x_{i,j}$ . Finally, we have
+
+$$
+p _ {Z} (2 i - m) = C _ {m} ^ {i} \left(1 - p _ {w} - p _ {a} + 2 p _ {w} p _ {a}\right) ^ {i} \left(p _ {w} + p _ {a} - 2 p _ {w} p _ {a}\right) ^ {m - i}, i \in \{0, 1, 2, \dots , m \}. \tag {43}
+$$
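Proposition 2 can be verified exactly for small $m$ by brute-force enumeration over all sign assignments (an illustrative sketch; the values $m = 3$, $p_w = 0.7$, $p_a = 0.4$ are arbitrary test settings):

```python
from itertools import product
from math import comb

def pz_formula(m, p_w, p_a):
    # Proposition 2: p_Z(2i - m) = C(m, i) * p1^i * (1 - p1)^(m - i),
    # where p1 = P(one product term equals +1).
    p1 = 1 - p_w - p_a + 2 * p_w * p_a
    return {2 * i - m: comb(m, i) * p1 ** i * (1 - p1) ** (m - i) for i in range(m + 1)}

def pz_bruteforce(m, p_w, p_a):
    # Enumerate all sign assignments of one activation row and one weight column.
    dist = {}
    for a in product([-1, 1], repeat=m):
        for w in product([-1, 1], repeat=m):
            prob = 1.0
            for s in a:
                prob *= p_a if s == 1 else 1 - p_a
            for s in w:
                prob *= p_w if s == 1 else 1 - p_w
            z = sum(x * y for x, y in zip(a, w))
            dist[z] = dist.get(z, 0.0) + prob
    return dist

m, p_w, p_a = 3, 0.7, 0.4
f, b = pz_formula(m, p_w, p_a), pz_bruteforce(m, p_w, p_a)
assert all(abs(f[z] - b.get(z, 0.0)) < 1e-12 for z in f)
```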
+
+
+
+Proposition 2 shows that the output distribution of the bi-linear layer depends on the probability mass functions of binarized weight and activation. Then we present the proofs of Theorem 2.
+
+Theorem 2 When we let $p_{B_w}(1) = 0.5$ and $p_{B_a}(1) = 0.5$ in bi-linear layer to maximize the mutual information, for the binarized weight $\mathbf{B}_{\mathbf{w}} \in \{-1, +1\}^{m \times k}$ and activation $\mathbf{B}_{\mathbf{a}} \in \{-1, +1\}^{n \times m}$ , the probability mass function for the distribution of output $\mathbf{Z}$ can be represented as $p_Z(2i - m) = 0.5^m C_m^i$ , $i \in \{0,1,2,\dots,m\}$ . The distribution of output is approximate normal distribution $\mathcal{N}(0,m)$ .
+
+Proof. First, we prove that the distribution of $Z$ can be approximated by a normal distribution. For bi-linear layers in our BiPointNet, all weights and activations are binarized, represented as $\mathbf{B}_{\mathbf{w}}$ and $\mathbf{B}_{\mathbf{a}}$, respectively. The value of an element $z_{(i,j)}$ in $\mathbf{Z}$ can be expressed as
+
+$$
+z _ {(i, j)} = \sum_ {k = 1} ^ {m} \left(b _ {w (i, k)} \times b _ {a (k, j)}\right),
+$$
+
+and the value of the element $b_{w(i,k)} \times b_{a(k,j)}$ can be expressed as
+
+$$
+b _ {w (i, k)} \times b _ {a (k, j)} = \left\{ \begin{array}{l l} 1, & \text {if } b _ {w (i, k)} = b _ {a (k, j)} \\ - 1, & \text {if } b _ {w (i, k)} \neq b _ {a (k, j)}. \end{array} \right. \tag {44}
+$$
+
+Each product $b_{w(i,k)} \times b_{a(k,j)}$ can take only two values, so it can be viewed as the outcome of one Bernoulli trial. Thus, for the random variable $Z$ sampled from the output tensor $\mathbf{Z}$, the probability mass function $p_Z$ can be expressed as
+
+$$
+p _ {Z} (2 i - m) = C _ {m} ^ {i} p _ {e} ^ {i} (1 - p _ {e}) ^ {m - i}, \tag {45}
+$$
+
+where $p_e$ denotes the probability that the element $b_{w(i,k)} \times b_{a(k,j)}$ takes the value 1. Note that Eq. (45) is completely equivalent to the representation in Proposition 2. According to the De Moivre-Laplace theorem, the normal distribution $\mathcal{N}(\mu, \sigma^2)$ can be used as an approximation of the binomial distribution under certain conditions, so $p_Z(2i - m)$ can be approximated as
+
+$$
+p _ {Z} (2 i - m) = C _ {m} ^ {i} p _ {e} ^ {i} \left(1 - p _ {e}\right) ^ {m - i} \simeq \frac {1}{\sqrt {2 \pi m p _ {e} \left(1 - p _ {e}\right)}} e ^ {- \frac {(i - m p _ {e}) ^ {2}}{2 m p _ {e} \left(1 - p _ {e}\right)}}, \tag {46}
+$$
+
+and then we can get the mean $\mu = 0$ and standard deviation $\sigma = \sqrt{m}$ of the approximated distribution $\mathcal{N}$ with the help of the equivalent representation of $p_Z$ in Proposition 2. We give the proof below.
+
+According to Proposition 2, when $p_w = p_a = 0.5$ , we can rewrite the equation as
+
+$$
+p _ {Z} (2 i - m) = 0. 5 ^ {m} C _ {m} ^ {i}, i \in \{0, 1, 2, \dots , m \}. \tag {47}
+$$
+
+Then we calculate the mean and standard deviation of this distribution. The mean is defined as
+
+$$
+\mu \left(p _ {Z}\right) = \sum (2 i - m) 0. 5 ^ {m} C _ {m} ^ {i}, i \in \{0, 1, 2, \dots , m \}. \tag {48}
+$$
+
+By the symmetry of the binomial coefficients ($C_m^{m-i} = C_m^i$), we have
+
+$$
+\begin{array}{l} (2 i - m) 0. 5 ^ {m} C _ {m} ^ {i} + (2 (m - i) - m) 0. 5 ^ {m} C _ {m} ^ {m - i} = 0. 5 ^ {m} \left(\left(2 i - m\right) C _ {m} ^ {i} + (m - 2 i) C _ {m} ^ {m - i}\right) (49) \\ = 0. 5 ^ {m} \left(\left(2 i - m\right) C _ {m} ^ {i} + \left(m - 2 i\right) C _ {m} ^ {i}\right) (50) \\ = 0. (51) \\ \end{array}
+$$
+
+Besides, when $m$ is an even number, the middle term $i = \frac{m}{2}$ gives $(2i - m)0.5^{m}C_{m}^{i} = 0$. These equations show that the terms $(2i - m)0.5^{m}C_{m}^{i}$ cancel in symmetric pairs. Finally, we have
+
+$$
+\begin{array}{l} \mu \left(p _ {Z}\right) = \sum (2 i - m) 0. 5 ^ {m} C _ {m} ^ {i}, i \in \{0, 1, 2, \dots , m \} (52) \\ = \sum ((2 i - m) 0. 5 ^ {m} C _ {m} ^ {i} + (2 (m - i) - m) 0. 5 ^ {m} C _ {m} ^ {m - i}), i \in \{0, 1, 2, \dots , \frac {m}{2} \} (53) \\ = 0. (54) \\ \end{array}
+$$
+
+The standard deviation of $p_Z$ is defined as
+
+$$
+\begin{array}{l} \sigma \left(p _ {Z}\right) = \sqrt {\left(\sum \left| 2 i - m \right| ^ {2} 0 . 5 ^ {m} C _ {m} ^ {i}\right)} (55) \\ = \sqrt {\sum \left(4 i ^ {2} - 4 i m + m ^ {2}\right) 0 . 5 ^ {m} C _ {m} ^ {i}} (56) \\ = \sqrt {0 . 5 ^ {m} \left(4 \sum i ^ {2} C _ {m} ^ {i} - 4 m \sum i C _ {m} ^ {i} + m ^ {2} \sum C _ {m} ^ {i}\right)}. (57) \\ \end{array}
+$$
+
+To calculate the standard deviation of $p_Z$, we use the binomial theorem to obtain several identities:
+
+$$
+\sum C _ {m} ^ {i} = (1 + 1) ^ {m} = 2 ^ {m} \tag {58}
+$$
+
+$$
+\sum i C _ {m} ^ {i} = m (1 + 1) ^ {m - 1} = m 2 ^ {m - 1} \tag {59}
+$$
+
+$$
+\sum i ^ {2} C _ {m} ^ {i} = m (m + 1) (1 + 1) ^ {m - 2} = m (m + 1) 2 ^ {m - 2}. \tag {60}
+$$
+
+These identities simplify Eq. (57):
+
+$$
+\begin{array}{l} \sigma \left(p _ {Z}\right) = \sqrt {0 . 5 ^ {m} \left(4 \sum i ^ {2} C _ {m} ^ {i} - 4 m \sum i C _ {m} ^ {i} + m ^ {2} \sum C _ {m} ^ {i}\right)} (61) \\ = \sqrt {0 . 5 ^ {m} (4 m (m + 1) 2 ^ {m - 2} - 4 m ^ {2} 2 ^ {m - 1} + m ^ {2} 2 ^ {m})} (62) \\ = \sqrt {0 . 5 ^ {m} ((m ^ {2} + m) 2 ^ {m} - 2 m ^ {2} 2 ^ {m} + m ^ {2} 2 ^ {m})} (63) \\ = \sqrt {0 . 5 ^ {m} \left(m 2 ^ {m}\right)} (64) \\ = \sqrt {m}. (65) \\ \end{array}
+$$
+
+We have thus proved that the output distribution is approximately the normal distribution $\mathcal{N}(0,m)$.
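The mean and standard deviation derived above can also be checked directly from the probability mass function $p_Z(2i - m) = 0.5^m C_m^i$ (a short numerical sketch):

```python
from math import comb, sqrt

def moments(m):
    # p_Z(2i - m) = 0.5^m * C(m, i), i = 0..m
    pmf = [(2 * i - m, comb(m, i) * 0.5 ** m) for i in range(m + 1)]
    mean = sum(z * p for z, p in pmf)
    var = sum((z - mean) ** 2 * p for z, p in pmf)
    return mean, sqrt(var)

for m in (8, 64, 256):
    mu, sigma = moments(m)
    print(m, mu, sigma)  # mean ~ 0 and std ~ sqrt(m), matching N(0, m)
```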
+
+# A.5 DISCUSSION OF THE OPTIMAL $\delta$ FOR EMA-MAX
+
+When $X_{\phi} \sim \mathcal{N}(0,1)$, the objective function of EMA-max for obtaining the optimal $\delta^{*}$ is given by Eq. (8). It is difficult to solve this objective directly; to circumvent the issue, we use Monte Carlo simulation to approximate the optimal $\delta_{\mathrm{max}}^{*}$, as shown in Algorithm 1.
+
+Algorithm 1 Monte Carlo Simulation for EMA-max
+Input: The number $n$ of points to be aggregated; the number of simulations $m$ (e.g., 10000)
+Output: Estimated optimal $\delta_{\max}^{*}$ for EMA-max
+1: Create an empty list $F$ (representing samples from the distribution of the aggregated feature)
+2: for $i = 0$ to $m$ do
+3: Create an empty list $T_i$ (representing one channel of the input feature)
+4: for $j = 0$ to $n$ do
+5: Sample an element $e_{ij}$ from the distribution $\mathcal{N}(0,1)$
+6: Add the sampled element $e_{ij}$ to the list $T_i$
+7: end for
+8: Add the aggregated feature $\mathrm{MAX}(T_i)$ to $F$
+9: end for
+10: Estimate the optimal threshold as $\delta_{\max}^{*} = \mathrm{Median}(F)$ (following Proposition 1)
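Algorithm 1 translates directly into a few lines of Python (a sketch using the standard library; the simulation count and seed are illustrative):

```python
import random
import statistics

def ema_max_delta(n, m=10000, seed=0):
    # Monte Carlo estimate of the optimal delta* for EMA-max: draw m channels
    # of n samples from N(0, 1), max-pool each channel, and return the median
    # of the pooled values (Proposition 1: the threshold splitting the
    # aggregated distribution 50/50 maximizes the entropy after binarization).
    rng = random.Random(seed)
    pooled = [max(rng.gauss(0.0, 1.0) for _ in range(n)) for _ in range(m)]
    return statistics.median(pooled)

print(ema_max_delta(1))    # close to 0: the median of N(0, 1) itself
print(ema_max_delta(1024)) # grows with n, since the max shifts right
```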
+
+# A.6 DISCUSSION OF THE OPTIMAL $\delta$ FOR EMA-AVG
+
+When $X_{\phi} \sim \mathcal{N}(\delta, 1)$, we have $Y \sim \mathcal{N}(\delta, n^{-1})$, and the objective function of EMA-avg for obtaining the optimal $\delta_{\mathrm{avg}}^{*}$ can be represented as
+
+$$
+\begin{array}{l} \arg \max _ {\delta} \mathcal {H} _ {B} (\delta) = - \left(\sum_ {y < 0} \frac {1}{\sqrt {2 \pi n ^ {- 1}}} e ^ {- \frac {\left(y - \delta\right) ^ {2}}{2 n ^ {- 1}}}\right) \log \left(\sum_ {y < 0} \frac {1}{\sqrt {2 \pi n ^ {- 1}}} e ^ {- \frac {\left(y - \delta\right) ^ {2}}{2 n ^ {- 1}}}\right) \tag {66} \\ - \left(\sum_ {y \geq 0} \frac {1}{\sqrt {2 \pi n ^ {- 1}}} e ^ {- \frac {\left(y - \delta\right) ^ {2}}{2 n ^ {- 1}}}\right) \log \left(\sum_ {y \geq 0} \frac {1}{\sqrt {2 \pi n ^ {- 1}}} e ^ {- \frac {\left(y - \delta\right) ^ {2}}{2 n ^ {- 1}}}\right). \\ \end{array}
+$$
+
+By symmetry, the solution of Eq. (66) is $\delta = 0$; we thus obtain $\delta_{\mathrm{avg}}^* = 0$. This means the solution does not depend on $n$.
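This can be checked numerically: since $Y \sim \mathcal{N}(\delta, n^{-1})$ gives $P(Y < 0) = \Phi(-\delta\sqrt{n})$, the objective of Eq. (66) has a closed form, and a grid search recovers $\delta_{\mathrm{avg}}^{*} = 0$ for any $n$ (an illustrative sketch):

```python
import math

def entropy_avg(delta, n):
    # H_B as a function of delta when Y ~ N(delta, 1/n):
    # P(Y < 0) = Phi(-delta * sqrt(n)); then take the binary entropy.
    p = 0.5 * (1.0 + math.erf(-delta * math.sqrt(n) / math.sqrt(2.0)))
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def best_delta(n):
    # Grid search for the delta maximizing H_B, over delta in [-5, 5].
    grid = [i / 100 - 5.0 for i in range(1001)]
    return max(grid, key=lambda d: entropy_avg(d, n))

print(best_delta(4), best_delta(64))  # the maximizer is delta = 0 for both n
```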
+
+# B IMPLEMENTATION OF BIPOINTNET ON ARM DEVICES
+
+# B.1 OVERVIEW
+
+We further implement our BiPointNet on a Raspberry Pi 4B with a 1.5 GHz 64-bit quad-core ARM Cortex-A72 and a Raspberry Pi 3B with a 1.2 GHz 64-bit quad-core ARM Cortex-A53, and test the real speed obtainable in practice. Although PointNet is already recognized as a highly efficient model, the inference of BiPointNet is much faster: compared to PointNet, BiPointNet enjoys up to $14.7 \times$ speedup and $18.9 \times$ storage saving.
+
+We utilize the SIMD instruction SSHL on ARM NEON to make the inference framework daBNN (Zhang et al., 2019a) compatible with our BiPointNet, and we further optimize the implementation for more efficient inference.
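The core trick behind such binary inference frameworks is replacing floating-point multiply-accumulate with bitwise operations over packed signs: for $\pm 1$ vectors, the dot product equals $m - 2\,\mathrm{popcount}(a \oplus w)$. A minimal Python sketch of this idea (illustrative only, not daBNN's actual ARM code; `pack` and `binary_dot` are hypothetical helpers):

```python
def pack(signs):
    # Pack a list of +1/-1 values into an integer bitmask (bit = 1 encodes +1).
    bits = 0
    for i, s in enumerate(signs):
        if s == 1:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, w_bits, m):
    # For +1/-1 vectors: dot = (#agreeing positions) - (#disagreeing positions)
    #                        = m - 2 * popcount(a XOR w).
    return m - 2 * bin(a_bits ^ w_bits).count("1")

a = [1, -1, -1, 1, 1, -1, 1, 1]
w = [1, 1, -1, -1, 1, -1, -1, 1]
assert binary_dot(pack(a), pack(w), len(a)) == sum(x * y for x, y in zip(a, w))
```

On real hardware this XOR-and-popcount pattern is carried out with SIMD vector instructions over many bit lanes at once, which is the source of the large speedup over 32-bit multiply-accumulate.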
+
+# B.2 IMPLEMENTATION DETAILS
+
+Figure 6 shows the detailed structures of six PointNet implementations. In the full-precision version (a), BN is merged into the subsequent fully connected layer for speedup, which is the common choice for deployment in real-world applications. In the binarized versions (b)(c)(d)(e), we have to keep BN unmerged due to the binarization of the later layers; instead, we merge the scaling factor of LSR into the BN layers. The HardTanh function is removed because it does not affect the binarized value of the input to the later layers. We test quantization of the first and last layers in variants (b)(c)(d)(e). In the last variant (f), we drop the BN layers during training. The scaling factor is ignored during deployment because it does not change the sign of the output.
+
+
+Figure 6: Structures of different PointNet implementations. Three types of fully connected blocks are used across the six variants: Full Precision FC, Binarization FC with BN, and Binarization FC w/o BN. Full Precision FC contains a full-precision fully connected layer and a ReLU layer, with the original BN merged into the later layer. Binarization FC with BN contains a quantized fully connected layer and a batch normalization layer. Binarization FC w/o BN consists of a single quantized fully connected layer
+
+# B.3 ABLATION ANALYSIS OF TIME COST AND QUANTIZATION SENSITIVITY
+
+| Setup | Bit-width | FL | LL | BN | OA | Storage & Saving Ratio | Time & Speedup (A72) | Time & Speedup (A53) |
+| (a) | 32/32 | 32/32 | 32/32 | Merged | 86.8 | 3.16MB / 1.0× | 131ms / 1.0× | 67ms / 1.0× |
+| (b) | 1/1 | 32/32 | 32/32 | Not Merged | 85.62 | 0.17MB / 18.9× | 9.0ms / 14.7× | 5.5ms / 12.1× |
+| (c) | 1/1 | 32/32 | 1/1 | Not Merged | 84.60 | 0.12MB / 26.3× | 9.0ms / 14.7× | 5.3ms / 12.6× |
+| (d) | 1/1 | 1/1 | 32/32 | Not Merged | 5.31 | 0.16MB / 19.7× | 11.5ms / 11.4× | 6.5ms / 10.3× |
+| (e) | 1/1 | 1/1 | 1/1 | Not Merged | 4.86 | 0.12MB / 26.3× | 11.4ms / 11.5× | 6.4ms / 10.4× |
+| (f) | 1/1 | 32/32 | 32/32 | Not Used | 85.13 | 0.15MB / 21.0× | 8.1ms / 16.1× | 4.8ms / 13.9× |
+
+Table 4: Comparison of different configurations in deployment on ARM devices. The storage-saving ratio and speedup ratio are calculated according to the full precision model as the first row illustrates. All the models use PointNet as the base model and EMA-max as the aggregation function. The accuracy performance is reported on the point cloud classification task with the ModelNet40 dataset. FL: First Layer; LL: Last Layer
+
+Table 4 shows the detailed configurations, including overall accuracy, storage usage, and time cost, of the six implementations described above. The results show that binarizing the middle fully connected layers dramatically speeds up the original model: we achieve $18.9 \times$ storage saving, $14.7 \times$ speedup on A72, and $12.1 \times$ speedup on A53. Quantizing the last layer further saves storage and improves speed with a slight performance drop. However, quantizing the first layer causes a drastic drop in accuracy without a discernible reduction in computational cost. Variant (f) without BN achieves performance comparable to variant (b), suggesting that our LSR method could be an ideal alternative to the original normalization layers for achieving a model that is fully quantized except for the first layer.
+
+# C COMPARISON BETWEEN LAYER-WISE SCALE RECOVERY AND OTHER METHODS
+
+In this section, we analyze the differences between the LSR method and other model binarization methods. Theorem 2 shows the significance of recovering scale in point cloud learning. However, IR-Net and Bi-Real only consider the scale of the weight and ignore the scale of the input features; therefore, these two methods cannot recover the scale of the output due to scale distortion on the input feature. The major difference between LSR and XNOR is that LSR opts for a layer-wise scaling factor while XNOR opts for a point-wise one. Point-wise scale recovery requires dynamic computation during inference, whereas our proposed LSR only has a layer-wise global scaling factor, which is independent of the input. As a result, our method achieves higher speed in practice.
+
+
+Figure 7: The information entropy of BNN, XNOR and our BiPointNet
+
+Table 3 shows that XNOR can alleviate the aggregation-induced feature homogenization. The point-wise scaling factor helps the model achieve adjustment capacity comparable to that of full-precision linear layers. Therefore, although XNOR suffers from feature homogenization at the beginning of the training process, it alleviates this problem as training progresses and achieves acceptable performance, as shown in Figure 7.
+
+# D COMPARISON WITH OTHER EFFICIENT LEARNING METHODS
+
+We compare our computation speedup and storage savings with several recently proposed methods for accelerating deep learning models on point clouds. Note that the comparison is for reference only: the tests are conducted on different hardware and for different tasks, so a direct comparison cannot yield definitive conclusions. In Table 5, we show that BiPointNet achieves the most impressive acceleration.
+
+# E EXPERIMENTS
+
+# E.1 DATASETS
+
+ModelNet40: ModelNet40 (Wu et al., 2015) for shape classification. ModelNet40 is the most frequently used benchmark for point cloud shape classification. It contains 12,311 CAD models from 40 representative object classes.
+
+Table 5: Comparison between BiPointNet and other approaches to efficient learning on point clouds. Grid-GCN (Xu et al., 2020b) leverages a novel data structuring strategy; RandLA-Net (Hu et al., 2020) designs a faster sampling method; PointVoxel (Liu et al., 2019d) proposes an efficient representation. These works, albeit achieving high performance, are not as effective as our binarization method in terms of model acceleration. The asterisk indicates the vanilla version.
+
+| Method | Hardware | Dataset | Base Model | Metric / Performance | Speedup |
+| --- | --- | --- | --- | --- | --- |
+| BiPointNet | ARM Cortex-A72 | ModelNet40 | PointNet* | OA / 85.6 | 12.1× |
+| BiPointNet | ARM Cortex-A53 | ModelNet40 | PointNet* | OA / 85.6 | 14.7× |
+| Grid-GCN | RTX 2080 GPU | S3DIS | PointNet | mIoU / 53.2 | 1.62× |
+| RandLA-Net | RTX 2080Ti GPU | S3DIS | PointNet* | mIoU / 70.0 | 1.04× |
+| PointVoxel | GTX 1080Ti GPU | ShapeNet | PointNet | mIoU / 46.9 | 2.46× |
+
+ShapeNet Parts: ShapeNet Parts (Chang et al., 2015) for part segmentation. ShapeNet contains 16,881 shapes from 16 categories, and 2,048 points are sampled from each training shape. Each shape is split into two to five parts depending on the category, for 50 parts in total.
+
+S3DIS: S3DIS (Armeni et al., 2016) for semantic segmentation. S3DIS includes 3D scan point clouds of 6 indoor areas comprising 272 rooms in total; each point belongs to one of 13 semantic categories. We follow the official code (Qi et al., 2017a) for training and testing.
+
+# E.2 IMPLEMENTATION DETAILS OF BIPOINTNET
+
+We follow the popular PyTorch implementation of PointNet and the recent geometric deep learning codebase (Fey & Lenssen, 2019) for our PointNet baselines. Our BiPointNet is built by binarizing the full-precision PointNet. All linear layers in PointNet except the first and last are binarized to bi-linear layers, and we select Hardtanh instead of ReLU as the activation function when we binarize the activations before a bi-linear layer. For the part segmentation task, we follow the convention (Wu et al., 2014; Yi et al., 2016) of training one model for each of the 16 classes. We also provide our PointNet baseline under this setting.
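One way to see why Hardtanh is preferred over ReLU before binarization: ReLU discards the negative half of the pre-activation, so a subsequent sign() collapses the feature into {0, +1}, while Hardtanh preserves both signs. A small numpy sketch of this effect:

```python
import numpy as np

def hardtanh(x):
    """Hardtanh activation: clips values to [-1, 1]."""
    return np.clip(x, -1.0, 1.0)

rng = np.random.default_rng(0)
pre_act = rng.standard_normal(1000)

# After ReLU, every surviving value is non-negative, so sign() maps the
# feature into {0, +1} only and half of the binary states are never used.
relu_signs = np.sign(np.maximum(pre_act, 0.0))
# Hardtanh preserves the sign of the pre-activation, keeping both states.
ht_signs = np.sign(hardtanh(pre_act))
```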
+
+Following previous works, we train for 200, 250, and 128 epochs on point cloud classification, part segmentation, and semantic segmentation, respectively. To train the binarized models stably, we use Adam with a learning rate of 0.001 and cosine annealing learning rate decay for all binarized models on all three tasks.
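The cosine annealing schedule mentioned above has a simple closed form; the sketch below reproduces it with the stated base learning rate of 0.001 (the minimum learning rate of 0 is an assumption, as the text does not specify it):

```python
import math

def cosine_annealing_lr(epoch, total_epochs, base_lr=0.001, min_lr=0.0):
    """Standard cosine annealing: decays base_lr to min_lr over total_epochs."""
    return min_lr + 0.5 * (base_lr - min_lr) * (
        1 + math.cos(math.pi * epoch / total_epochs)
    )

# Classification uses 200 epochs per the text; the smooth decay toward zero
# at the end is what helps stabilize training of the binarized weights.
lrs = [cosine_annealing_lr(e, 200) for e in range(200)]
```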
+
+# E.3 MORE BACKBONES
+
+We also propose four other models: BiPointCNN, BiPointNet++, BiDGCNN, and BiPointConv, which are binarized versions of PointCNN (Li et al., 2018), PointNet++ (Qi et al., 2017b), DGCNN (Wang et al., 2019a), and PointConv (Wu et al., 2019), respectively. This is possible because all these variants share common characteristics, such as linear layers for point-wise feature extraction and global pooling layers for feature aggregation (except PointConv, which does not have explicit aggregators). In PointNet++, DGCNN, and PointConv, we keep the first and last layers full precision and binarize all other layers. In PointCNN, we keep the first layer of every XConv and the last layer of the classifier full precision.
+
+# E.4 BINARIZATION METHODS
+
+For comparison, we implement various representative binarization methods from 2D vision, including BNN (Hubara et al., 2016), XNOR-Net (Rastegari et al., 2016), Bi-Real Net (Liu et al., 2018), XNOR++ (Bulat & Tzimiropoulos, 2019), ABC-Net (Lin et al., 2017), and IR-Net (Qin et al., 2020b), and apply them to 3D point clouds. Note that the Case 1 version of XNOR++, which applies layer-wise learnable scaling factors to minimize the quantization error, is used in our experiments for a fair comparison. These methods are implemented according to their open-source code or the descriptions in their papers, and we refer to their $3 \times 3$ convolution designs when implementing the corresponding bi-linear layers. We follow their training processes and hyperparameter settings, but note that the specific shortcut structure in Bi-Real and IR-Net is omitted since it only applies to the ResNet architecture.
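To make the contrast among these baselines concrete, the sketch below compares plain BNN-style binarization against an XNOR-Net-style scale, using the closed-form factor alpha = mean(|w|) from the XNOR-Net paper (shown per output channel here as an illustrative choice):

```python
import numpy as np

def binarize_bnn(w):
    """BNN-style: plain sign binarization, no scale recovery."""
    return np.sign(w)

def binarize_xnor(w):
    """XNOR-Net-style: sign plus alpha = mean(|w|), the closed-form scale
    that minimizes the L2 quantization error (here per output channel)."""
    alpha = np.abs(w).mean(axis=1, keepdims=True)
    return alpha * np.sign(w)

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 64))
err_bnn = np.linalg.norm(w - binarize_bnn(w))
err_xnor = np.linalg.norm(w - binarize_xnor(w))
# Scale recovery strictly reduces the quantization error for these weights.
```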
+
+# E.5 TRAINING DETAILS
+
+Our BiPointNet is trained from scratch (random initialization) without leveraging any pre-trained model. Across all experiments, we use Adam as our optimizer and a cosine annealing learning rate scheduler to optimize the networks stably. To evaluate BiPointNet on various network architectures, we mostly follow the hyper-parameter settings of the original papers (Qi et al., 2017a; Li et al., 2018; Qi et al., 2017b; Wang et al., 2019a).
+
+# E.6 DETAILED RESULTS OF SEGMENTATION
+
+We present the detailed results of part segmentation on ShapeNet Parts in Table 6 and of semantic segmentation on S3DIS in Table 7. The detailed results further support the conclusion of Section 4.1: EMA and LSR improve performance considerably in most categories, rather than producing large gains in only a few. This validates the effectiveness and robustness of our method.
+
+Table 6: Detailed results of our BiPointNet for part segmentation on ShapeNet Parts.
+
+| method | aggr. | mean | aero | bag | cap | car | chair | earphone | guitar | knife | lamp | laptop | motor | mug | pistol | rocket | skateboard | table |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| # shapes | | | 2690 | 76 | 55 | 898 | 3758 | 69 | 787 | 392 | 1547 | 451 | 202 | 184 | 283 | 66 | 152 | 5271 |
+| FP | max | 84.3 | 83.6 | 79.4 | 92.5 | 76.8 | 90.8 | 70.2 | 91.0 | 85.6 | 81.9 | 95.6 | 64.4 | 93.5 | 80.9 | 54.5 | 70.6 | 81.5 |
+| FP | avg | 84.0 | 83.4 | 78.5 | 90.8 | 76.3 | 90.0 | 73.1 | 90.8 | 84.3 | 80.8 | 95.5 | 61.7 | 93.8 | 81.6 | 56.2 | 72.2 | 81.8 |
+| BNN | max | 54.0 | 35.1 | 48.1 | 65.5 | 26.5 | 55.8 | 57.1 | 48.8 | 62.2 | 48.6 | 90.1 | 23.1 | 68.3 | 57.5 | 31.3 | 43.7 | 66.8 |
+| BNN | ema-avg | 53.0 | 39.8 | 46.5 | 57.5 | 24.1 | 58.2 | 56.2 | 44.0 | 50.0 | 53.0 | 81.0 | 16.9 | 48.8 | 36.3 | 25.7 | 43.7 | 63.3 |
+| BNN | ema-max | 47.3 | 37.9 | 46.2 | 44.6 | 24.1 | 61.3 | 38.2 | 33.5 | 42.6 | 50.8 | 48.6 | 16.9 | 49.0 | 25.2 | 26.8 | 43.7 | 50.3 |
+| LSR | max | 58.7 | 41.5 | 46.2 | 80.2 | 39.2 | 75.3 | 46.0 | 47.8 | 75.5 | 50.0 | 93.8 | 25.4 | 51.0 | 60.2 | 36.2 | 43.7 | 61.4 |
+| Ours | ema-avg | 80.3 | 79.3 | 71.9 | 85.5 | 66.1 | 87.7 | 65.6 | 84.1 | 82.8 | 76.0 | 94.8 | 42.7 | 91.8 | 75.9 | 47.2 | 59.1 | 79.7 |
+| Ours | ema-max | 80.6 | 79.5 | 69.7 | 86.1 | 67.4 | 88.6 | 68.5 | 87.4 | 83.0 | 74.9 | 95.1 | 44.8 | 91.6 | 76.3 | 47.7 | 56.9 | 79.5 |
+
+Table 7: Detailed results of our BiPointNet for semantic segmentation on S3DIS.
+
+| method | aggr. | overall mIoU | overall acc. | area1 (mIoU/acc.) | area2 (mIoU/acc.) | area3 (mIoU/acc.) | area4 (mIoU/acc.) | area5 (mIoU/acc.) | area6 (mIoU/acc.) | ceiling | floor | wall | beam | column | window | door | table | chair | sofa | bookcase | board | clutter |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| FP | max | 54.4 | 83.5 | 61.7/86.2 | 38.0/76.8 | 62.4/88.0 | 45.0/82.4 | 45.3/83.3 | 70.0/89.2 | 91.1 | 93.8 | 72.8 | 50.3 | 34.6 | 52.0 | 58.0 | 55.8 | 51.3 | 14.5 | 44.4 | 43.4 | 45.2 |
+| FP | avg | 51.5 | 81.5 | 59.9/84.6 | 35.4/72.4 | 61.2/87.2 | 43.8/81.2 | 42.0/81.2 | 68.2/88.3 | 90.1 | 89.1 | 71.7 | 46.1 | 33.7 | 53.5 | 53.8 | 53.8 | 47.8 | 9.4 | 40.4 | 38.7 | 41.8 |
+| BNN | max | 9.5 | 45.0 | 9.6/44.0 | 9.8/50.5 | 8.3/41.9 | 9.3/42.5 | 9.5/45.8 | 9.8/41.6 | 45.5 | 40.6 | 28.1 | 0 | 0 | 0 | 0 | 0 | 7.7 | 0 | 0 | 0 | 2.1 |
+| BNN | ema-avg | 9.9 | 46.8 | 7.6/36.6 | 11.2/51.2 | 7.1/36.5 | 9.8/46.0 | 11.4/54.8 | 8.6/41.6 | 51.5 | 35.1 | 32.1 | 0 | 0 | 0 | 0.6 | 9.3 | 0 | 0 | 0 | 0.6 | |
+| BNN | ema-max | 8.5 | 47.2 | 7.7/44.0 | 10.1/54.4 | 7.1/46.8 | 7.8/39.7 | 7.6/49.2 | 7.2/45.3 | 50.8 | 43.5 | 15.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
+| LSR | max | 2.0 | 25.4 | 2.0/26.0 | 2.1/27.0 | 2.0/25.7 | 1.8/22.8 | 2.0/25.8 | 1.9/24.5 | 25.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
+| Ours | ema-avg | 40.9 | 74.9 | 47.1/75.8 | 29.1/68.3 | 48.0/79.9 | 34.2/73.2 | 34.7/76.1 | 53.3/79.8 | 84.6 | 84.6 | 60.5 | 32.0 | 19.0 | 39.6 | 43.0 | 43.5 | 39.2 | 5.8 | 30.5 | 18.5 | 31.3 |
+| Ours | ema-max | 44.3 | 76.7 | 50.9/78.3 | 31.0/70.3 | 53.4/82.4 | 36.6/73.9 | 36.9/77.6 | 57.9/82.3 | 85.1 | 86.1 | 62.6 | 34.5 | 23.8 | 43.0 | 48.0 | 45.7 | 40.6 | 9.6 | 36.9 | 26.2 | 33.9 |
\ No newline at end of file
diff --git a/bipointnetbinaryneuralnetworkforpointclouds/images.zip b/bipointnetbinaryneuralnetworkforpointclouds/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..740cba06855e8767c42ba86fbea6338a2c60370f
--- /dev/null
+++ b/bipointnetbinaryneuralnetworkforpointclouds/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0176299e1401cd91025cfee1f0ca24ff5ac7eab43b791efbc5d299cf365a40df
+size 1216540
diff --git a/bipointnetbinaryneuralnetworkforpointclouds/layout.json b/bipointnetbinaryneuralnetworkforpointclouds/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..52d285ac73529d01beef35ef22bb1c498891376f
--- /dev/null
+++ b/bipointnetbinaryneuralnetworkforpointclouds/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63824a501f47c4ca43b178e63d9c35c3f9706e462b29bb484c2d7cfe7d5c581b
+size 810266
diff --git a/blendingmpcvaluefunctionapproximationforefficientreinforcementlearning/41f33825-1ec4-4dc0-a45d-c5ac71e6bdbf_content_list.json b/blendingmpcvaluefunctionapproximationforefficientreinforcementlearning/41f33825-1ec4-4dc0-a45d-c5ac71e6bdbf_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..34733da2a9c0e3036bac1d1eb9f09250cda0ff81
--- /dev/null
+++ b/blendingmpcvaluefunctionapproximationforefficientreinforcementlearning/41f33825-1ec4-4dc0-a45d-c5ac71e6bdbf_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ce869c89d0e5a9a277feb4a7ce6b65418695c5dfa28a063608b501e99efdf05
+size 110071
diff --git a/blendingmpcvaluefunctionapproximationforefficientreinforcementlearning/41f33825-1ec4-4dc0-a45d-c5ac71e6bdbf_model.json b/blendingmpcvaluefunctionapproximationforefficientreinforcementlearning/41f33825-1ec4-4dc0-a45d-c5ac71e6bdbf_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b72b9a121323687c276185381b1519b888b45784
--- /dev/null
+++ b/blendingmpcvaluefunctionapproximationforefficientreinforcementlearning/41f33825-1ec4-4dc0-a45d-c5ac71e6bdbf_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef3051290f6f73bf2d4bfbb38c34062a5e4e2276e510b63172fd32fc87175fa8
+size 128519
diff --git a/blendingmpcvaluefunctionapproximationforefficientreinforcementlearning/41f33825-1ec4-4dc0-a45d-c5ac71e6bdbf_origin.pdf b/blendingmpcvaluefunctionapproximationforefficientreinforcementlearning/41f33825-1ec4-4dc0-a45d-c5ac71e6bdbf_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2a6fa3cdf5ccf6942f3944efc0d54e1ae6231ec9
--- /dev/null
+++ b/blendingmpcvaluefunctionapproximationforefficientreinforcementlearning/41f33825-1ec4-4dc0-a45d-c5ac71e6bdbf_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e24846c365251d154b1dc26594eaddb2032f8d1aa4908a81506e08356225a57
+size 1556410
diff --git a/blendingmpcvaluefunctionapproximationforefficientreinforcementlearning/full.md b/blendingmpcvaluefunctionapproximationforefficientreinforcementlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e58e95736dc38f3394e7a6605ca35adc60726cb5
--- /dev/null
+++ b/blendingmpcvaluefunctionapproximationforefficientreinforcementlearning/full.md
@@ -0,0 +1,498 @@
+# BLENDING MPC & VALUE FUNCTION APPROXIMATION FOR EFFICIENT REINFORCEMENT LEARNING
+
+Mohak Bhardwaj1 Sanjiban Choudhury2 Byron Boots1
+
+1 University of Washington 2 Aurora Innovation Inc.
+
+# ABSTRACT
+
+Model-Predictive Control (MPC) is a powerful tool for controlling complex, real-world systems that uses a model to make predictions about future behavior. For each state encountered, MPC solves an online optimization problem to choose a control action that will minimize future cost. This is a surprisingly effective strategy, but real-time performance requirements warrant the use of simple models. If the model is not sufficiently accurate, then the resulting controller can be biased, limiting performance. We present a framework for improving on MPC with model-free reinforcement learning (RL). The key insight is to view MPC as constructing a series of local Q-function approximations. We show that by using a parameter $\lambda$, similar to the trace decay parameter in $\mathrm{TD}(\lambda)$, we can systematically trade off learned value estimates against the local Q-function approximations. We present a theoretical analysis that shows how error from inaccurate models in MPC and value function estimation in RL can be balanced. We further propose an algorithm that changes $\lambda$ over time to reduce the dependence on MPC as our estimates of the value function improve, and test the efficacy of our approach on challenging high-dimensional manipulation tasks with biased models in simulation. We demonstrate that our approach can obtain performance comparable to that of MPC with access to the true dynamics, even under severe model bias, and is more sample efficient than model-free RL.
+
+# 1 INTRODUCTION
+
+Model-free Reinforcement Learning (RL) is increasingly used in challenging sequential decision-making problems including high-dimensional robotics control tasks (Haarnoja et al., 2018; Schulman et al., 2017) as well as video and board games (Silver et al., 2016; 2017). While these approaches are extremely general, and can theoretically solve complex problems with little prior knowledge, they also typically require a large quantity of training data to succeed. In robotics and engineering domains, data may be collected from real-world interaction, a process that can be dangerous, time consuming, and expensive.
+
+Model-Predictive Control (MPC) offers a simpler, more practical alternative. While RL typically uses data to learn a global model offline, which is then deployed at test time, MPC solves for a policy online by optimizing an approximate model for a finite horizon at a given state. This policy is then executed for a single timestep and the process repeats. MPC is one of the most popular approaches for control of complex, safety-critical systems such as autonomous helicopters (Abbeel et al., 2010), aggressive off-road vehicles (Williams et al., 2016) and humanoid robots (Erez et al., 2013), owing to its ability to use approximate models to optimize complex cost functions with nonlinear constraints (Mayne et al., 2000; 2011).
+
+However, approximations in the model used by MPC can significantly limit performance. Specifically, model bias may result in persistent errors that eventually compound and become catastrophic. For example, in non-prehensile manipulation, practitioners often use a simple quasi-static model that assumes an object does not roll or slide away when pushed. For more dynamic objects, this can lead to aggressive pushing policies that perpetually over-correct, eventually driving the object off the surface.
+
+Recently, there have been several attempts to combine MPC with model-free RL, showing that the combination can improve over either approach alone. Many of these approaches use RL to learn a terminal cost function, thereby increasing the effective horizon of MPC (Zhong et al., 2013; Lowrey et al., 2018; Bhardwaj et al., 2020). However, the learned value function is only applied at the end of the MPC horizon; model errors would still persist within the horizon, leading to sub-optimal policies. Similar approaches have also been applied to great effect in discrete games with known models (Silver et al., 2016; 2017; Anthony et al., 2017), where value functions and policies learned via model-free RL are used to
+
+guide Monte-Carlo Tree Search. In this paper, we focus on a somewhat broader question: can machine learning be used to both increase the effective horizon of MPC and correct for model bias?
+
+One straightforward approach is to learn (or correct) the MPC model from real data encountered during execution; however, there are practical barriers to this strategy. Hand-constructed models are often crude approximations of reality and lack the expressivity to represent the dynamics actually encountered. Moreover, increasing the complexity of such models leads to computationally expensive updates that can harm MPC's online performance. Model-based RL approaches such as Chua et al. (2018); Nagabandi et al. (2018); Shyam et al. (2019) aim to learn general neural network models directly from data. However, learning globally consistent models is an exceptionally hard task due to issues such as covariate shift (Ross & Bagnell, 2012).
+
+We propose a framework, $\mathrm{MPQ}(\lambda)$, for weaving together MPC with learned value estimates to trade off errors in the MPC model and approximation error in a learned value function. Our key insight is to view MPC as tracing out a series of local Q-function approximations. We can then blend each of these Q-functions with value estimates from reinforcement learning. We show that by using a blending parameter $\lambda$, similar to the trace decay parameter in $\mathrm{TD}(\lambda)$, we can systematically trade off errors between these two sources. Moreover, by smoothly decaying $\lambda$ over learning episodes we can achieve the best of both worlds: a policy can depend on a prior model before it has encountered any data and then gradually become more reliant on learned value estimates as it gains experience.
+
+To summarize, our key contributions are:
+
+1. A framework that unifies MPC and Model-free RL through value function approximation.
+2. Theoretical analysis of finite horizon planning with approximate models and value functions.
+3. Empirical evaluation on challenging manipulation problems with varying degrees of model-bias.
+
+# 2 PRELIMINARIES
+
+# 2.1 REINFORCEMENT LEARNING
+
+We consider an agent acting in an infinite-horizon discounted Markov Decision Process (MDP). An MDP is defined by a tuple $\mathcal{M} = (\mathcal{S},\mathcal{A},c,P,\gamma ,\mu)$ where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $c(s,a)$ is the per-step cost function, $s_{t + 1}\sim P(\cdot |s_t,a_t)$ is the stochastic transition dynamics, $\gamma$ is the discount factor, and $\mu (s_0)$ is a distribution over initial states. A closed-loop policy $\pi (\cdot |s)$ outputs a distribution over actions given a state. Let $\mu_{\mathcal{M}}^{\pi}$ be the distribution over state-action trajectories obtained by running policy $\pi$ on $\mathcal{M}$. The value function for a given policy $\pi$ is defined as $V_{\mathcal{M}}^{\pi}(s) = \mathbb{E}_{\mu_{\mathcal{M}}^{\pi}}[\sum_{t = 0}^{\infty}\gamma^{t}c(s_{t},a_{t})|s_{0} = s]$ and the action-value function as $Q_{\mathcal{M}}^{\pi}(s,a) = \mathbb{E}_{\mu_{\mathcal{M}}^{\pi}}[\sum_{t = 0}^{\infty}\gamma^{t}c(s_{t},a_{t})|s_{0} = s,a_{0} = a]$. The objective is to find an optimal policy $\pi^{*} = \underset {\pi}{\mathrm{argmin}}\,\mathbb{E}_{s_{0}\sim \mu}[V_{\mathcal{M}}^{\pi}(s_{0})]$. We can also define the (dis)advantage function $A_{\mathcal{M}}^{\pi}(s,a) = Q_{\mathcal{M}}^{\pi}(s,a) - V_{\mathcal{M}}^{\pi}(s)$, which measures how good an action is compared to the action taken by the policy in expectation. It can be equivalently expressed in terms of the Bellman error as $A_{\mathcal{M}}^{\pi}(s,a) = c(s,a) + \gamma \mathbb{E}_{s^{\prime}\sim P,a^{\prime}\sim \pi}[Q_{\mathcal{M}}^{\pi}(s^{\prime},a^{\prime})] - \mathbb{E}_{a\sim \pi}[Q_{\mathcal{M}}^{\pi}(s,a)]$.
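A tiny worked example of the discounted value definition above, for a fixed trajectory of per-step costs under discount $\gamma$:

```python
# Discounted cost-to-go for one fixed trajectory of per-step costs.
gamma = 0.9
costs = [1.0, 1.0, 0.0, 2.0]

value = sum(gamma**t * c for t, c in enumerate(costs))
# 1 + 0.9*1 + 0.81*0 + 0.729*2 = 3.358
```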
+
+# 2.2 MODEL-PREDICTIVE CONTROL
+
+MPC is a widely used technique for synthesizing closed-loop policies for MDPs. Instead of trying to solve for a single, globally optimal policy, MPC follows a more pragmatic approach of optimizing simple, local policies online. At every timestep on the system, MPC uses an approximate model of the environment to search for a parameterized policy that minimizes cost over a finite horizon. An action is sampled from the policy and executed on the system. The process is then repeated from the next state, often by warm-starting the optimization from the previous solution.
+
+We formalize this process as solving a simpler surrogate MDP $\hat{\mathcal{M}} = (\mathcal{S},\mathcal{A},\hat{c},\hat{P},\gamma ,\hat{\mu},H)$ online, which differs from $\mathcal{M}$ by using an approximate cost function $\hat{c}$ and transition dynamics $\hat{P}$, and by limiting the horizon to $H$. Since it plans to a finite horizon, it is also common to use a terminal state-action value function $\hat{Q}$ that estimates the cost-to-go. The start state distribution $\hat{\mu}$ is a Dirac delta function centered on the current state, $s_0 = s_t$. MPC can be viewed as iteratively constructing an estimate of the Q-function of the original MDP $\mathcal{M}$, given policy $\pi_{\phi}$ at state $s$:
+
+$$
+Q_{H}^{\phi}(s,a) = \mathbb{E}_{\mu_{\mathcal{M}}^{\pi_{\phi}}}\left[\sum_{i=0}^{H-1}\gamma^{i}\hat{c}(s_{i},a_{i}) + \gamma^{H}\hat{Q}(s_{H},a_{H}) \,\middle|\, s_{0}=s, a_{0}=a\right] \tag{1}
+$$
+
+MPC then iteratively optimizes this estimate (at the current system state $s_t$) to update the policy parameters
+
+$$
+\phi_{t}^{*} = \underset{\phi}{\operatorname{argmin}}\; Q_{H}^{\phi}\left(s_{t}, \pi_{\phi}(s_{t})\right) \tag{2}
+$$
+
+Alternatively, we can also view the above procedure from the perspective of disadvantage minimization. Let us define an estimator for the 1-step disadvantage with respect to the potential function $\hat{Q}$ as $A(s_i,a_i) = c(s_i,a_i) + \gamma \hat{Q} (s_{i + 1},a_{i + 1}) - \hat{Q} (s_i,a_i)$ . We can then equivalently write the above optimization as minimizing the discounted sum of disadvantages over time via the telescoping sum trick
+
+$$
+\underset{\pi\in\Pi}{\operatorname{argmin}}\; \mathbb{E}_{\mu_{\mathcal{M}}^{\pi_{\phi}}}\left[\hat{Q}(s_{0},a_{0}) + \sum_{i=0}^{H-1}\gamma^{i}A(s_{i},a_{i}) \,\middle|\, s_{0}=s_{t}\right] \tag{3}
+$$
+
+Although the above formulation queries $\hat{Q}$ at every timestep, it is still exactly equivalent to the original problem and hence does not mitigate the effects of model bias. In the next section, we build a concrete method to address this issue by formulating a novel way to blend Q-estimates from MPC and a learned value function that can balance their respective errors.
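The telescoping equivalence between the estimate (1) and the disadvantage form (3) can be checked numerically on a toy rollout; the sketch below uses arbitrary synthetic costs and value estimates along a single deterministic trajectory:

```python
import numpy as np

gamma, H = 0.9, 5
rng = np.random.default_rng(0)
c_hat = rng.standard_normal(H)       # \hat{c}(s_i, a_i) along one rollout
q_hat = rng.standard_normal(H + 1)   # \hat{Q}(s_i, a_i) for i = 0..H

# Form (1): discounted costs plus terminal value estimate.
q_form1 = sum(gamma**i * c_hat[i] for i in range(H)) + gamma**H * q_hat[H]

# Form (3): initial value plus discounted 1-step disadvantages; the
# intermediate \hat{Q} terms cancel in a telescoping sum.
adv = [c_hat[i] + gamma * q_hat[i + 1] - q_hat[i] for i in range(H)]
q_form3 = q_hat[0] + sum(gamma**i * adv[i] for i in range(H))
```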
+
+# 3 MITIGATING BIAS IN MPC VIA REINFORCEMENT LEARNING
+
+In this section, we develop our approach to systematically deal with model bias in MPC by blending-in learned value estimates. First, we take a closer look at the different sources of error in the estimate in (1) and then propose an easy-to-implement, yet effective strategy for trading them off.
+
+# 3.1 SOURCES OF ERROR IN MPC
+
+The performance of MPC algorithms critically depends on the quality of the Q-function estimator $Q_H^\phi (s,a)$ in (1). There are three major sources of approximation error. First, model bias can cause compounding errors in predicted state trajectories, which biases the estimates of the costs of different action sequences. The effect of model error becomes more severe as $H\to \infty$ . Second, the error in the terminal value function gets propagated back to the estimate of the Q-function at the start state. With discounting, the effect of error due to inaccurate terminal value function diminishes as $H$ increases. Third, using a small $H$ with an inaccurate terminal value function can make the MPC algorithm greedy and myopic to rewards further out in the future.
+
+We can formally bound the performance of the policy with approximate models and approximate learned value functions. In Theorem 3.1, we show the loss in performance of the resulting policy as a function of the model error, value function error and the planning horizon.
+
+Theorem 3.1 (Proof Appendix A.1.2). Let the MDP $\hat{\mathcal{M}}$ be an $\alpha$-approximation of $\mathcal{M}$ such that $\forall (s,a)$ we have $\left\|\hat{P}(\cdot |s,a) - P(\cdot |s,a)\right\|_1 \leq \alpha$ and $|\hat{c}(s,a) - c(s,a)| \leq \alpha$. Let the learned value function $\hat{Q}(s,a)$ be an $\epsilon$-approximation of the true value function, $\left\|\hat{Q}(s,a) - Q_{\mathcal{M}}^*(s,a)\right\|_\infty \leq \epsilon$. Then the performance of the MPC policy is bounded w.r.t. the optimal policy as $\left\|V_{\mathcal{M}}^*(s) - V_{\mathcal{M}}^{\hat{\pi}}(s)\right\|_\infty$
+
+$$
+\leq 2\left(\frac{\gamma(1-\gamma^{H-1})}{(1-\gamma^{H})(1-\gamma)}\alpha H\left(\frac{c_{\max}-c_{\min}}{2}\right) + \frac{\gamma^{H}\alpha H}{1-\gamma^{H}}\left(\frac{V_{\max}-V_{\min}}{2}\right) + \frac{\alpha}{1-\gamma} + \frac{\gamma^{H}\epsilon}{1-\gamma^{H}}\right) \tag{4}
+$$
+
+This theorem generalizes various established results. Setting $H = 1, \epsilon = 0$ gives the 1-step simulation lemma of Kearns & Singh (2002) (Appendix A.1.1). Setting $\alpha = 0$, i.e. a true model, recovers the cost-shaping result of Sun et al. (2018). Inspecting the terms in (4) further, we see that the model error grows with horizon $H$ (the first two terms) while the learned value error shrinks with $H$, which matches our intuition.
+
+In practice, the errors in the model and value function are usually unknown and hard to estimate, making it impossible to set the MPC horizon to its optimal value. Instead, we next propose a strategy that blends the Q-estimates from MPC and the learned value function at every timestep along the horizon, rather than just at the terminal step, so that we can properly balance the different sources of error.
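The trade-off in Theorem 3.1 can be visualized by evaluating the bound in (4) over a range of horizons; the constants below are illustrative placeholders, not values from the paper's experiments:

```python
# Illustrative constants: discount, model error, value error, cost/value ranges.
gamma, alpha, eps = 0.95, 0.02, 0.5
c_range, v_range = 1.0, 10.0  # (c_max - c_min) and (V_max - V_min)

def perf_bound(H):
    """Right-hand side of the performance bound (4) for horizon H."""
    t1 = gamma * (1 - gamma**(H - 1)) / ((1 - gamma**H) * (1 - gamma)) \
        * alpha * H * c_range / 2          # model error within the horizon
    t2 = gamma**H * alpha * H / (1 - gamma**H) * v_range / 2
    t3 = alpha / (1 - gamma)
    t4 = gamma**H * eps / (1 - gamma**H)   # value error, shrinks with H
    return 2 * (t1 + t2 + t3 + t4)

bounds = {H: perf_bound(H) for H in range(1, 51)}
best_H = min(bounds, key=bounds.get)       # interior optimum balances errors
```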
+
+# 3.2 BLENDING MODEL PREDICTIVE CONTROL AND VALUE FUNCTIONS
+
+A naive way to blend Q-estimates from MPC with Q-estimates from the value function would be to consider a convex combination of the two
+
+$$
+(1-\lambda)\underbrace{\hat{Q}(s,a)}_{\text{model-free}} + \lambda\underbrace{Q_{H}^{\phi}(s,a)}_{\text{model-based}} \tag{5}
+$$
+
+where $\lambda \in [0,1]$ . Here, the value function is contributing to a residual that is added to the MPC output, an approach commonly used to combine model-based and model-free methods (Lee et al., 2020). However, this is solution is rather ad hoc. If we have at our disposal a value function, why invoke it at only at the first and last timestep? As the value function gets better, it should be useful to invoke it at all timesteps.
+
+Instead, consider the following recursive formulation for the Q-estimate. Given $(s_i,a_i)$, the state-action pair encountered at horizon $i$, the blended estimate $Q^{\lambda}(s_i,a_i)$ is expressed as
+
+$$
+\underbrace{Q^{\lambda}(s_{i},a_{i})}_{\text{current blended estimate}} = (1-\lambda)\underbrace{\hat{Q}(s_{i},a_{i})}_{\text{model-free}} + \lambda\left(\underbrace{\hat{c}(s_{i},a_{i})}_{\text{model-based}} + \gamma\underbrace{Q^{\lambda}(s_{i+1},a_{i+1})}_{\text{future blended estimate}}\right) \tag{6}
+$$
+
+where $\lambda \in [0,1]$ . The recursion ends at $Q^{\lambda}(s_H,a_H) = \hat{Q} (s_H,a_H)$ . In other words, the current blended estimate is a convex combination of the model-free value function and the one-step model-based return. The return in turn uses the future blended estimate. Note unlike (5), the model-free estimate is invoked at every timestep.
+
+We can unroll (6) in time to show that $Q_H^\lambda(s, a)$, the blended $H$-horizon estimate, is simply an exponentially weighted average of all horizon estimates
+
+$$
+Q_{H}^{\lambda}(s,a) = (1-\lambda)\sum_{i=0}^{H-1}\lambda^{i}Q_{i}^{\phi}(s,a) + \lambda^{H}Q_{H}^{\phi}(s,a) \tag{7}
+$$
+
+where $Q_{k}^{\phi}(s,a) = \mathbb{E}_{\mu_{\mathcal{M}}^{\pi_{\phi}}}\left[\sum_{i = 0}^{k - 1}\gamma^{i}\hat{c}(s_{i},a_{i}) + \gamma^{k}\hat{Q}(s_{k},a_{k})\mid s_{0} = s,a_{0} = a\right]$ is the $k$-horizon estimate. When $\lambda = 0$, the estimator reduces to just using $\hat{Q}$, and when $\lambda = 1$ we recover the original MPC estimate $Q_{H}^{\phi}$ in (1). For intermediate values of $\lambda$, we interpolate smoothly between the two by interpolating all $H$ estimates.
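The consistency of the recursion (6) with the weighted-average form (7) can be checked numerically on a synthetic rollout, using arbitrary costs and value estimates along one fixed trajectory:

```python
import numpy as np

gamma, lam, H = 0.9, 0.6, 5
rng = np.random.default_rng(1)
c_hat = rng.standard_normal(H)       # \hat{c}(s_i, a_i)
q_hat = rng.standard_normal(H + 1)   # \hat{Q}(s_i, a_i)

def q_phi(k):
    """k-horizon estimate Q_k^phi as defined under (7)."""
    return sum(gamma**i * c_hat[i] for i in range(k)) + gamma**k * q_hat[k]

# Exponentially weighted average over all horizon estimates, eq (7).
q_blend = (1 - lam) * sum(lam**i * q_phi(i) for i in range(H)) \
    + lam**H * q_phi(H)

# Recursive form (6), unrolled backwards from Q^lam(s_H) = \hat{Q}(s_H).
q_rec = q_hat[H]
for i in reversed(range(H)):
    q_rec = (1 - lam) * q_hat[i] + lam * (c_hat[i] + gamma * q_rec)
```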
+
+Implementing (7) naively would require running $H$ versions of MPC and combining their outputs, which is far too expensive. However, we can switch to the disadvantage formulation by applying a similar telescoping trick
+
+$$
+Q_{H}^{\lambda}(s,a) = \mathbb{E}_{\mu_{\mathcal{M}}^{\pi_{\phi}}}\left[\hat{Q}(s_{0},a_{0}) + \sum_{i=0}^{H-1}(\gamma\lambda)^{i}A(s_{i},a_{i})\right] \tag{8}
+$$
+
+This estimator has a form similar to the $\mathrm{TD}(\lambda)$ estimator for the value function. However, while $\mathrm{TD}(\lambda)$ uses the $\lambda$ parameter for a bias-variance trade-off, our blended estimator aims to trade off bias in the dynamics model against bias in the learned value function.
+
+Why use a blending parameter $\lambda$ when one can simply tune the horizon $H$? First, $H$ limits the resolution we can tune, since it is an integer: as $H$ gets smaller, the resolution becomes coarser. Second, the blended estimator $Q_{H}^{\lambda}(s,a)$ uses far more samples. Say we have access to the optimal horizon $H^{*}$. Even if both $Q_{H}^{\lambda}$ and $Q_{H^{*}}^{\phi}$ had the same bias, the latter uses a strict subset of the samples used by the former. Hence, with high probability, the variance of the blended estimator is lower.
+
+# 4 THE MPQ(λ) ALGORITHM
+
+We develop a simple variant of Q-Learning, called Model-Predictive Q-Learning with $\lambda$ Weights $(\mathrm{MPQ}(\lambda))$, that learns a parameterized Q-function estimate $\hat{Q}_{\theta}$. Our algorithm, presented in Alg. 1, modifies Q-learning to use the blended Q-estimates in (8) both for action selection and for generating value targets. The parameter $\lambda$ trades off the errors due to model bias and the learned Q-function $\hat{Q}_{\theta}$. This can be viewed as an extension of the MPQ algorithm of Bhardwaj et al. (2020) that explicitly deals with model bias by incorporating the learned Q-function at all timesteps. Unlike MPQ, we do not explicitly consider the entropy-regularized formulation, although our framework can be modified to incorporate soft-Q targets.
+
+Algorithm 1: MPQ(λ)
+
+Input: Initial Q-function weights $\theta$, approximate dynamics $\hat{P}$, approximate cost function $\hat{c}$
+Parameters: MPC horizon $H$, $\lambda$ schedule $[\lambda_1, \lambda_2, \dots]$, discount factor $\gamma$, minibatch size $K$, number of minibatches $N$, update frequency $t_{update}$
+Initialize replay buffer $\mathcal{D} \gets \emptyset$
+for $t = 1 \dots \infty$ do
+// Update $\lambda$
+$\lambda \gets \lambda_t$
+// Blended MPC action selection
+$\phi_t^{*} \gets \underset{\phi}{\mathrm{argmin}}\, \mathbb{E}_{\mu_{\mathcal{M}}^{\pi_\phi}} \left[ \hat{Q}_\theta(s_0, a_0) + \sum_{i=0}^{H-1} (\gamma \lambda)^i A(s_i, a_i) \mid s_0 = s_t \right]$
+Sample $a_t \sim \pi_{\phi_t^{*}}$
+Execute $a_t$ on the system and observe $(c_t, s_{t+1})$; add $(s_t, a_t, c_t, s_{t+1})$ to $\mathcal{D}$
+if $t \,\%\, t_{update} == 0$ then
+Sample $N$ minibatches $\left( \left\{ s_{k,n}, a_{k,n}, c_{k,n}, s_{k,n}' \right\}_{k=1}^K \right)_{n=1}^N$ from $\mathcal{D}$
+// Generate blended MPC value targets
+$\hat{y}_{k,n} = c_{k,n} + \gamma \min_{\phi} \mathbb{E}_{\mu_{\mathcal{M}}^{\pi_\phi}} \left[ \hat{Q}_\theta(s_0, a_0) + \sum_{i=0}^{H-1} (\gamma \lambda)^i A(s_i, a_i) \mid s_0 = s_{k,n}' \right]$
+Update $\theta$ with SGD on the loss $\mathcal{L} = \frac{1}{N} \frac{1}{K} \sum_{n=1}^{N} \sum_{k=1}^{K} \left( \hat{y}_{k,n} - \hat{Q}_\theta(s_{k,n}, a_{k,n}) \right)^2$
+
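+The blended estimate at the heart of Alg. 1 can be sketched in code. The following is a minimal illustration, not the authors' implementation; the `step` (approximate model) and `q_fn` (learned critic) interfaces are assumptions, and the $A(s_i, a_i)$ term is reconstructed here as the model-based one-step temporal-difference error, in the spirit of TD($\lambda$)-style estimators:

```python
def blended_q_estimate(s0, a_seq, step, q_fn, gamma, lam):
    """Sketch of the blended estimator used in Alg. 1.

    s0:    initial state
    a_seq: candidate action sequence a_0 .. a_{H-1} proposed by MPC
    step:  approximate model, (s, a) -> (s_next, cost)   [assumed interface]
    q_fn:  learned critic Q_theta(s, a) -> float         [assumed interface]
    """
    H = len(a_seq)
    estimate = q_fn(s0, a_seq[0])
    s = s0
    for i in range(H):
        s_next, c = step(s, a_seq[i])
        a_next = a_seq[min(i + 1, H - 1)]
        # One-step model-based TD term, standing in for A(s_i, a_i):
        # c + gamma * Q(s', a') - Q(s, a).
        td = c + gamma * q_fn(s_next, a_next) - q_fn(s, a_seq[i])
        estimate += (gamma * lam) ** i * td
        s = s_next
    return estimate
```

+Under this reconstruction, $\lambda = 0$ reduces the estimate to a one-step lookahead $c_0 + \gamma \hat{Q}_\theta(s_1, a_1)$, while $\lambda = 1$ recovers the full $H$-step model return with terminal value.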
+At every timestep $t$ , $\mathrm{MPQ}(\lambda)$ proceeds by using $H$ -horizon MPC from the current state $s_t$ to optimize a policy $\pi_{\phi}$ with parameters $\phi$ . We modify the MPC algorithm to optimize for the greedy policy with respect to the blended Q-estimator in (8), that is
+
+$$
+\phi_ {t} ^ {*} = \underset {\phi} {\operatorname {a r g m i n}} \mathbb {E} _ {\mu_ {\mathcal {M}}} ^ {\pi_ {\phi}} \left[ \hat {Q} _ {\theta} \left(s _ {0}, a _ {0}\right) + \sum_ {i = 0} ^ {H - 1} (\gamma \lambda) ^ {i} A \left(s _ {i}, a _ {i}\right) \mid s _ {0} = s _ {t} \right] \tag {9}
+$$
+
+An action sampled from the resulting policy is then executed on the system. A commonly used heuristic is to warm start the above optimization by shifting forward the solution from the previous timestep, which serves as a good initialization if the noise in the dynamics is small (Wagener et al., 2019). This can significantly cut computational cost by reducing the number of iterations required to optimize (9) at every timestep.
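+The shift-and-pad warm start amounts to a two-line array operation; a sketch assuming the plan is stored as an $(H, d)$ array (the padding choice of repeating the last action is one common convention, not something prescribed by the paper):

```python
import numpy as np

def warm_start(prev_plan):
    """Shift the previous MPC solution forward one timestep.

    prev_plan: (H, d) array of actions optimized at time t-1.
    Returns an (H, d) initialization for time t: the executed first
    action is dropped and the last slot is padded by repeating the
    final action (one common padding choice).
    """
    plan = np.empty_like(prev_plan)
    plan[:-1] = prev_plan[1:]   # shift forward: a_1..a_{H-1} fill slots 0..H-2
    plan[-1] = prev_plan[-1]    # pad the newly exposed last slot
    return plan
```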
+
+Periodically, the parameters $\theta$ are updated via stochastic gradient descent to minimize the following loss function with $N$ mini-batches of experience tuples of size $K$ sampled from the replay buffer
+
+$$
+\mathcal {L} (\theta) = \frac {1}{N} \frac {1}{K} \sum_ {n = 1} ^ {N} \sum_ {k = 1} ^ {K} \left(\hat {y} _ {k, n} - \hat {Q} _ {\theta} \left(s _ {k, n}, a _ {k, n}\right)\right) ^ {2} \tag {10}
+$$
+
+The $H$ -horizon MPC with blended Q-estimator is again invoked to calculate the targets
+
+$$
+\hat {y} _ {k, n} = c _ {k, n} + \gamma \min _ {\phi} \mathbb {E} _ {\mu_ {\mathcal {M}}} ^ {\pi_ {\phi}} \left[ \hat {Q} _ {\theta} \left(s _ {0}, a _ {0}\right) + \sum_ {i = 0} ^ {H - 1} \left(\gamma \lambda\right) ^ {i} A \left(s _ {i}, a _ {i}\right) \mid s _ {0} = s _ {k, n} ^ {\prime} \right] \tag {11}
+$$
+
+Using MPC to reduce error in Q-targets has been previously explored in the literature (Lowrey et al., 2018; Bhardwaj et al., 2020), where the model is either assumed to be perfect or model error is not explicitly accounted for. MPC with the blended Q-estimator and an appropriate $\lambda$ allows us to generate more stable Q-targets than using $\hat{Q}_{\theta}$ or model-based rollouts with a terminal Q-function alone. However, running $H$-horizon optimization for all samples in a mini-batch can be time-consuming, forcing the use of smaller batch sizes and sparse updates. In our experiments, we employ a practical modification where, during the action selection step, MPC is also queried for value targets, which are then stored in the replay buffer; this allows us to use larger batch sizes and updates at every timestep.
+
+Figure 1: Tasks for evaluating MPQ( $\lambda$ ). Left to right: cartpole, peg insertion with a 7DOF arm, and in-hand manipulation with a 24DOF dexterous hand to align the pen (blue) with the target (green).
+
+Finally, we also allow $\lambda$ to vary over time. In practice, $\lambda$ is decayed as more data is collected on the system. Intuitively, in the early stages of learning, the bias in $\hat{Q}_{\theta}$ dominates and hence we want to rely more on the model. A larger value of $\lambda$ is appropriate as it up-weights longer horizon estimates in the blended-Q estimator. As $\hat{Q}_{\theta}$ estimates improve over time, a smaller $\lambda$ is favorable to reduce the reliance on the approximate model.
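+A concrete form of such a schedule can be written in a few lines. The paper specifies its exact decay in Appendix A.2, so the linear interpolation and endpoint values below are only an illustrative stand-in:

```python
def lambda_schedule(t, total_steps, lam_init=1.0, lam_final=0.85):
    """Linearly decay lambda from lam_init to lam_final over training,
    then hold it fixed. The linear form and the endpoints here are
    illustrative assumptions, not the schedule used in the experiments."""
    frac = min(t / total_steps, 1.0)
    return lam_init + frac * (lam_final - lam_init)
```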
+
+# 5 EXPERIMENTS
+
+Task Details: We evaluate $\mathrm{MPQ}(\lambda)$ on simulated robot control tasks, including a complex manipulation task with a 7DOF arm and in-hand manipulation with a 24DOF anthropomorphic hand (Rajeswaran* et al., 2018), as shown in Fig. 1. For each task, we provide the agent with a biased version of the simulation that is used as the dynamics model for MPC. We use Model Predictive Path Integral Control (MPPI) (Williams et al., 2017), a state-of-the-art sampling-based algorithm, as our MPC method throughout.
+
+1. CARTPOLESWINGUP: A classic control task where the agent slides a cart along a rail to swing up the pole attached via an unactuated hinge joint. Model bias is simulated by providing the agent incorrect masses for the cart and pole. The masses are set lower than the true values to make the problem harder for MPC, as the algorithm will always apply smaller controls than desired, as also noted in Ramos et al. (2019). The initial positions of the cart and pole are randomized at every episode.
+2. SAWYERPEGINSERTION: The agent controls a 7DOF Sawyer arm to insert a peg attached to the end-effector into a hole at different locations on a table in front of the robot. We test the effects of inaccurate perception by simulating a sensor at the target location that provides noisy position measurements at every timestep. MPC uses a deterministic model that does not take sensor noise into account as commonly done in controls. This biases the cost of simulated trajectories, causing MPC to fail to reach the target.
+3. INHANDMANIPULATION: A challenging in-hand manipulation task with a 24DOF dexterous hand from Rajeswaran* et al. (2018). The agent must align the pen with a target orientation within a certain tolerance for success. The initial orientation of the pen is randomized at every episode. Here, we simulate bias by providing larger estimates of the mass and inertia of the pen as well as the friction coefficients, which causes the MPC algorithm to optimize overly aggressive policies and drop the pen.
+
+Please refer to Appendix A.2 for more details on the tasks, success criteria, and biased simulations.
+
+Baselines: We compare $\mathrm{MPQ}(\lambda)$ against both model-based and model-free baselines: MPPI with true dynamics and no value function, MPPI with biased dynamics and no value function, and Proximal Policy Optimization (PPO) (Schulman et al., 2017).
+
+Learning Details: We represent the Q-function with a feed-forward neural network. We bias simulation parameters such as mass or friction coefficients using the formula $m = (1 + b)m_{true}$ , where $b$ is a bias factor. We also employ a practical modification to Alg. 1 in order to speed up training, as discussed in Section 4. Instead of maintaining a large replay buffer and re-calculating targets for every experience tuple in a mini-batch, as done by approaches such as Bhardwaj et al. (2020); Lowrey et al. (2018), we simply query MPC for the value targets online and store them in a smaller buffer, which allows us to perform updates at every timestep. We use the publicly available implementation at https://bit.ly/38RcDrl for PPO. Refer to Appendix A.2 for more details.
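+The biasing scheme is simple enough to state in code; a sketch of applying $m = (1+b)\,m_{true}$ to a dictionary of simulator parameters (the parameter names are illustrative, not taken from the paper's configuration files):

```python
def bias_parameters(true_params, b):
    """Apply m = (1 + b) * m_true to every listed simulation parameter.

    b < 0 under-estimates a parameter (e.g. b = -0.5 halves it, as for
    the cartpole masses); b > 0 over-estimates it, as in the pen task.
    """
    return {name: (1.0 + b) * value for name, value in true_params.items()}
```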
+
+
+Figure 2: CARTPOLESWINGUP experiments. Panels: (a) Fixed $\lambda$ , (b) Fixed v/s decaying $\lambda$ , (c) Varying model bias, (d) $\lambda$ decay with $H = 64$ , (e) $\lambda$ decay with $H = 32$ , (f) Varying horizon v/s $\lambda$ , (g) Bias-variance trade-off. Solid lines show average rewards over 30 validation episodes (fixed start states) of length 100 steps, over 3 runs with different seeds. Dashed lines are the average reward of MPPI on the same validation episodes. Shaded regions depict the standard error of the mean, i.e. the confidence in the average reward estimated from finite samples. Training is performed for $100\mathrm{k}$ steps with validation after every $4\mathrm{k}$ steps. When decaying $\lambda$ as per a schedule, it is fixed to its current value during validation. In (b), (d), (e), (f), $\lambda_F$ denotes the $\lambda$ value at the end of training. PPO asymptotic performance is reported as the average reward of the last 10 validation iterations. (g) shows the best validation reward at the end of training for different horizon values and numbers of MPPI trajectory samples (particles), using $\lambda = 1.0$ and $\lambda = 0.8$ .
+
+# 5.1 ANALYSIS OF OVERALL PERFORMANCE
+
+# O1. MPQ( $\lambda$ ) is able to overcome model-bias in MPC for a wide range of $\lambda$ values.
+
+Fig. 2(a) shows a comparison of $\mathrm{MPQ}(\lambda)$ with MPPI using true and biased dynamics with $b = -0.5$ and $H = 64$ for various settings of $\lambda$ . There exists a wide range of $\lambda$ values for which $\mathrm{MPQ}(\lambda)$ can efficiently trade off model-bias against the bias in the learned Q-function and outperform MPPI with biased dynamics. However, setting $\lambda$ to a high value of 1.0 or 0.95, which weighs longer horizons heavily, leads to poor performance, as the compounding effects of model-bias are not compensated by $\hat{Q}_{\theta}$ . Performance also begins to drop as $\lambda$ decreases below 0.6. $\mathrm{MPQ}(\lambda)$ outperforms MPPI with access to the true dynamics and reaches close to the asymptotic performance of PPO. This is not surprising, as the learned Q-function adds global information to the optimization and $\lambda$ corrects for errors in optimizing longer horizons.
+
+# O2. Faster convergence can be achieved by decaying $\lambda$ over time.
+
+As more data is collected on the system, we expect the bias in $\hat{Q}_{\theta}$ to decrease, whereas model bias remains constant. A larger value of $\lambda$ that favors longer horizons is better during the initial steps of training, as the effect of a randomly initialized $\hat{Q}_{\theta}$ is diminished due to discounting and better exploration is achieved by forward lookahead. Conversely, as $\hat{Q}_{\theta}$ gets more accurate, model-bias begins to hurt performance and a smaller $\lambda$ is favorable. We test this by decaying $\lambda$ in $[1.0, \lambda_F]$ using a fixed schedule and observe that faster convergence is indeed obtained by reducing the dependence on the model over training steps, as shown in Fig. 2(b). Figures 2(d) and 2(e) present ablations showing that $\mathrm{MPQ}(\lambda)$ is robust to a wide range of decay rates with $H = 64$ and $H = 32$ respectively. When provided with true dynamics, MPPI with $H = 32$ performs better than $H = 64$ due to optimization issues with long horizons. $\mathrm{MPQ}(\lambda)$ reaches performance comparable to MPPI with $H = 32$ and the asymptotic performance of PPO in both cases, showing robustness to horizon values, which is important since in practice we wish to set the horizon as large as our computation budget permits. However, decaying $\lambda$ too fast or too slow can have adverse effects on performance. An interesting question for future work is whether $\lambda$ can be adapted in a state-dependent manner. Refer to Appendix A.2 for details on the decay schedule.
+
+Figure 3: Robustness and sample efficiency of $\mathrm{MPQ}(\lambda)$ . Panels: (a) INHANDMANIPULATION reward, (b) INHANDMANIPULATION success rate, (c) SAWYERPEGINSERTION reward, (d) SAWYERPEGINSERTION success rate. (a), (b) vary the bias factor over the mass, inertia and friction of the pen; (c), (d) show peg insertion with noisy perception. Total episode length is 75 steps for both tasks. The same bias factor $b$ is used for all altered properties per task. Curves depict average reward over 30 validation episodes with multiple seeds; shaded areas are the standard error of the mean. Validation is done after every 3k steps and $\lambda$ is decayed to 0.85 at the end of 75k training steps in both. Asymptotic performance of PPO is the average of the last 10 validation iterations. Refer to Appendix A.2 for details on tasks and success metrics.
+
+# O3. MPQ(λ) is much more sample-efficient than model-free RL on high-dimensional continuous control tasks, while maintaining asymptotic performance.
+
+Figures 2 and 3 show a comparison of $\mathrm{MPQ}(\lambda)$ with the model-free PPO baseline. In all cases, we observe that $\mathrm{MPQ}(\lambda)$ , through its use of approximate models, learned value functions, and a dynamically-varying $\lambda$ parameter to trade-off different sources of error, rapidly improves its performance and achieves average reward and success rate comparable to MPPI with access to ground truth dynamics and model-free RL in the limit. In INHANDMANIPULATION, PPO performance does not improve at all over $150\mathrm{k}$ training steps. In SAWYERPEGINSERTION, the small magnitude of reward difference between MPPI with true and biased models is due to the fact that despite model bias, MPC is able to get the peg close to the table, but sensor noise inhibits precise control to consistently insert it in the hole. Here, the value function learned by $\mathrm{MPQ}(\lambda)$ can adapt to sensor noise and allow for fine-grained control near the table.
+
+# O4. MPQ( $\lambda$ ) is robust to a large degree of model misspecification.
+
+Fig. 2(c) shows the effects of different values of the bias factor $b$ used to vary the mass of the cart and pole for MPQ( $\lambda$ ) with a fixed $\lambda$ decay schedule over $[1.0, 0.75]$ . MPQ( $\lambda$ ) achieves performance better than MPPI ( $H = 64$ ) with true dynamics and comparable to model-free RL in the limit for a wide range of bias factors $b$ , and convergence is generally faster for smaller bias. For large values of $b$ , MPQ( $\lambda$ ) either fails to improve or diverges, as the compounding effects of model-bias hurt learning, making model-free RL the more favorable alternative. A similar trend is observed in Figures 3(a) and 3(b), where $\mathrm{MPQ}(\lambda)$ outperforms MPPI with the corresponding bias in the mass, inertia and friction coefficients of the pen by a margin of over $30\%$ in terms of success rate. It also achieves performance comparable to MPPI with true dynamics and model-free RL in the limit, but is unable to do so for $b = 1.0$ . We conclude that while $\mathrm{MPQ}(\lambda)$ is robust to a large amount of model bias, if the model is extremely uninformative, relying on MPC can degrade performance.
+
+# O5. MPQ(λ) is robust to the planning horizon and the number of trajectory samples in sampling-based MPC.
+
+TD(λ)-based approaches are used for the bias-variance trade-off in value function estimation in model-free RL. In our framework, λ plays a similar role, but it trades off bias due to the dynamics model and learned value function against variance due to long-horizon rollouts. We empirically quantify this on the CARTPOLESWINGUP task by training MPQ(λ) with different values of horizon and number of particles for λ = 1.0 and λ = 0.8 respectively. Results in Fig. 2(g) show that (1) using λ can overcome the effects of model-bias irrespective of the planning horizon (except for very small values of H = 1 or 2), and (2) using λ can overcome variance due to a limited number of particles with long-horizon rollouts. The ablative study in Fig. 2(f) lends evidence to the fact that it is preferable to simply decay λ over time than to tune the discrete horizon value to balance model bias. Not only does decaying λ achieve a better convergence rate and asymptotic performance than tuning the horizon, the performance is also more robust to different decay rates (as evidenced by Fig. 2(d)), whereas the same does not hold for varying the horizon.
+
+# 6 CONCLUSION
+
+In this paper, we presented a general framework to mitigate model-bias in MPC by blending model-free value estimates using a parameter $\lambda$ to systematically trade off different sources of error. Our practical algorithm achieves performance close to MPC with access to the true dynamics and the asymptotic performance of model-free methods, while being sample efficient. An interesting avenue for future research is to vary $\lambda$ in a state-adaptive fashion. In particular, reasoning about model and value function uncertainty may allow us to vary $\lambda$ to rely more or less on the model in certain parts of the state space. Another promising direction is to extend the framework to explicitly incorporate constraints by leveraging different constrained MPC formulations.
+
+# ACKNOWLEDGMENTS
+
+This work was supported in part by ARL SARA CRA W911NF-20-2-0095. The authors would like to thank Aravind Rajeswaran for help with code for the peg insertion task.
+
+# REFERENCES
+
+Pieter Abbeel and Andrew Y Ng. Exploration and apprenticeship learning in reinforcement learning. In Proceedings of the 22nd international conference on Machine learning, pp. 1-8, 2005.
+Pieter Abbeel, Adam Coates, and Andrew Y Ng. Autonomous helicopter aerobatics through apprenticeship learning. The International Journal of Robotics Research, 29(13):1608-1639, 2010.
+Thomas Anthony, Zheng Tian, and David Barber. Thinking fast and slow with deep learning and tree search. In Advances in Neural Information Processing Systems, pp. 5360-5370, 2017.
+Mohak Bhardwaj, Ankur Handa, Dieter Fox, and Byron Boots. Information theoretic model predictive q-learning. In Learning for Dynamics and Control, pp. 840-850, 2020.
+Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pp. 4754-4765, 2018.
+Tom Erez, Kendall Lowrey, Yuval Tassa, Vikash Kumar, Svetoslav Kolev, and Emanuel Todorov. An integrated system for real-time model predictive control of humanoid robots. In 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), pp. 292-299. IEEE, 2013.
+Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018.
+
+Sham Kakade, Michael J Kearns, and John Langford. Exploration in metric state spaces. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 306-312, 2003.
+Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine learning, 49(2-3):209-232, 2002.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Gilwoo Lee, Brian Hou, Sanjiban Choudhury, and Siddhartha S Srinivasa. Bayesian residual policy optimization: Scalable bayesian reinforcement learning with clairvoyant experts. arXiv preprint arXiv:2002.03042, 2020.
+Kendall Lowrey, Aravind Rajeswaran, Sham Kakade, Emanuel Todorov, and Igor Mordatch. Plan online, learn offline: Efficient learning and exploration via model-based control. arXiv preprint arXiv:1811.01848, 2018.
+David Q Mayne, James B Rawlings, Christopher V Rao, and Pierre OM Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 36(6):789-814, 2000.
+David Q Mayne, Erric C Kerrigan, EJ Van Wyk, and Paola Falugi. Tube-based robust nonlinear model predictive control. International Journal of Robust and Nonlinear Control, 21(11):1341-1353, 2011.
+Anusha Nagabandi, Gregory Kahn, Ronald S Fearing, and Sergey Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 7559-7566. IEEE, 2018.
+Aravind Rajeswaran*, Vikash Kumar*, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, and Sergey Levine. Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations. In Proceedings of Robotics: Science and Systems (RSS), 2018.
+Fabio Ramos, Rafael Carvalhaes Possas, and Dieter Fox. Bayessim: adaptive domain randomization via probabilistic inference for robotics simulators. arXiv preprint arXiv:1906.01728, 2019.
+Stephane Ross and J Andrew Bagnell. Agnostic system identification for model-based reinforcement learning. arXiv preprint arXiv:1203.1007, 2012.
+John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+Pranav Shyam, Wojciech Jaskowski, and Faustino Gomez. Model-based active exploration. In International Conference on Machine Learning, pp. 5779-5788, 2019.
+David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484, 2016.
+David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017.
+Colin Summers, Kendall Lowrey, Aravind Rajeswaran, Siddhartha Srinivasa, and Emanuel Todorov. Lyceum: An efficient and scalable ecosystem for robot learning. arXiv preprint arXiv:2001.07343, 2020.
+Wen Sun, J Andrew Bagnell, and Byron Boots. Truncated horizon policy search: Combining reinforcement learning & imitation learning. arXiv preprint arXiv:1805.11240, 2018.
+Nolan Wagener, Ching-An Cheng, Jacob Sacks, and Byron Boots. An online learning approach to model predictive control. arXiv preprint arXiv:1902.08967, 2019.
+Grady Williams, Paul Drews, Brian Goldfain, James M Rehg, and Evangelos A Theodorou. Aggressive driving with model predictive path integral control. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1433-1440. IEEE, 2016.
+
+Grady Williams, Nolan Wagener, Brian Goldfain, Paul Drews, James M Rehg, Byron Boots, and Evangelos A Theodorou. Information theoretic mpc for model-based reinforcement learning. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 1714-1721. IEEE, 2017.
+
+Mingyuan Zhong, Mikala Johnson, Yuval Tassa, Tom Erez, and Emanuel Todorov. Value function approximation and model predictive control. In 2013 IEEE symposium on adaptive dynamic programming and reinforcement learning (ADPRL), pp. 100-107. IEEE, 2013.
+
+# A APPENDIX
+
+# A.1 PROOFS
+
+We present upper bounds on the performance of a greedy policy that uses approximate value functions and models. We also analyze the case of finite-horizon planning with an approximate dynamics model and terminal value function, which can be seen as a generalization of Sun et al. (2018). For simplicity, we switch to using $\hat{V}(s)$ to denote the learned model-free value function (instead of $\hat{Q}(s, a)$ ).
+
+Let $\hat{V}(s)$ be an $\epsilon$ -approximation of the optimal value function, $\left\| \hat{V}(s) - V_{\mathcal{M}}^{\pi^{*}}(s) \right\|_{\infty} \leq \epsilon$ . Let MDP $\hat{\mathcal{M}}$ be an $\alpha$ -approximation of $\mathcal{M}$ such that $\forall (s, a)$ we have $\left\| \hat{P}(s'|s, a) - P(s'|s, a) \right\|_{1} \leq \alpha$ and $|\hat{c}(s, a) - c(s, a)| \leq \alpha$ .
+
+# A.1.1 A GENTLE START: BOUND ON PERFORMANCE OF 1-STEP GREEDY POLICY
+
+Theorem A.1. Let the one-step greedy policy be
+
+$$
+\hat {\pi} (s) = \underset {a \in \mathcal {A}} {\operatorname {a r g m i n}} \hat {c} (s, a) + \gamma \Sigma_ {s ^ {\prime}} \hat {P} \left(s ^ {\prime} \mid s, a\right) \hat {V} \left(s ^ {\prime}\right) \tag {12}
+$$
+
+The performance loss of $\hat{\pi}(s)$ w.r.t optimal policy $\pi^{*}$ on MDP $\mathcal{M}$ is bounded by
+
+$$
+\left| \left| V _ {\mathcal {M}} ^ {\hat {\pi}} (s) - V _ {\mathcal {M}} ^ {\pi^ {*}} (s) \right| \right| _ {\infty} \leq \frac {2 \left(\gamma \epsilon + \alpha + \gamma \alpha \left(\frac {V _ {\operatorname* {m a x}} - V _ {\operatorname* {m i n}}}{2}\right)\right)}{1 - \gamma} \tag {13}
+$$
+
+Proof. From (12) we have $\forall s\in S$
+
+$$
+\hat {c} (s, \hat {\pi} (s)) + \gamma \sum_ {s ^ {\prime}} \hat {P} (s ^ {\prime} | s, \hat {\pi} (s)) \hat {V} (s ^ {\prime}) \leq \hat {c} (s, \pi^ {*} (s)) + \gamma \sum_ {s ^ {\prime}} \hat {P} (s ^ {\prime} | s, \pi^ {*} (s)) \hat {V} (s ^ {\prime})
+$$
+
+$$
+\hat {c} (s, \hat {\pi} (s)) - \hat {c} (s, \pi^ {*} (s)) \leq \gamma \left(\sum_ {s ^ {\prime}} \hat {P} (s ^ {\prime} | s, \pi^ {*} (s)) \hat {V} (s ^ {\prime}) - \sum_ {s ^ {\prime}} \hat {P} (s ^ {\prime} | s, \hat {\pi} (s)) \hat {V} (s ^ {\prime})\right)
+$$
+
+$$
+\left(\text {using } \left\| \hat {V} (s) - V _ {\mathcal {M}} ^ {\pi^ {*}} (s) \right\| _ {\infty} \leq \epsilon \right)
+$$
+
+$$
+\hat {c} (s, \hat {\pi} (s)) - \hat {c} (s, \pi^ {*} (s)) \leq \gamma \left(\sum_ {s ^ {\prime}} \hat {P} (s ^ {\prime} | s, \pi^ {*} (s)) V _ {\mathcal {M}} ^ {\pi^ {*}} (s ^ {\prime}) - \sum_ {s ^ {\prime}} \hat {P} (s ^ {\prime} | s, \hat {\pi} (s)) V _ {\mathcal {M}} ^ {\pi^ {*}} (s ^ {\prime})\right) + 2 \gamma \epsilon
+$$
+
+$$
+\left(\text {using } \left| \hat {c} (s, a) - c (s, a) \right| \leq \alpha \right)
+$$
+
+$$
+\left. c \left(s, \hat {\pi} (s)\right) - c \left(s, \pi^ {*} (s)\right) \leq 2 \gamma \epsilon + 2 \alpha + \gamma \left(\sum_ {s ^ {\prime}} \hat {P} \left(s ^ {\prime} \mid s, \pi^ {*} (s)\right) V _ {\mathcal {M}} ^ {\pi^ {*}} \left(s ^ {\prime}\right) - \sum_ {s ^ {\prime}} \hat {P} \left(s ^ {\prime} \mid s, \hat {\pi} (s)\right) V _ {\mathcal {M}} ^ {\pi^ {*}} \left(s ^ {\prime}\right)\right) \right. \tag {14}
+$$
+
+Now, let $s$ be the state with the max loss $V_{\mathcal{M}}^{\hat{\pi}}(s) - V_{\mathcal{M}}^{\pi^{*}}(s)$ ,
+
+$$
+V _ {\mathcal {M}} ^ {\hat {\pi}} (s) - V _ {\mathcal {M}} ^ {\pi^ {*}} (s) = c (s, \hat {\pi}) - c (s, \pi^ {*}) + \gamma \sum_ {s ^ {\prime}} \Big (P (s ^ {\prime} | s, \hat {\pi}) V _ {\mathcal {M}} ^ {\hat {\pi}} (s ^ {\prime}) - P (s ^ {\prime} | s, \pi^ {*}) V _ {\mathcal {M}} ^ {\pi^ {*}} (s ^ {\prime}) \Big)
+$$
+
+Substituting from (14)
+
+$$
+\begin{array}{l} V _ {\mathcal {M}} ^ {\hat {\pi}} (s) - V _ {\mathcal {M}} ^ {\pi^ {*}} (s) \leq 2 \gamma \epsilon + 2 \alpha + \gamma \sum_ {s ^ {\prime}} \hat {P} (s ^ {\prime} | s, \pi^ {*} (s)) V _ {\mathcal {M}} ^ {\pi^ {*}} (s ^ {\prime}) - \gamma \sum_ {s ^ {\prime}} \hat {P} (s ^ {\prime} | s, \hat {\pi} (s)) V _ {\mathcal {M}} ^ {\pi^ {*}} (s ^ {\prime}) \\ - \gamma \sum_ {s ^ {\prime}} P (s ^ {\prime} | s, \pi^ {*}) V _ {\mathcal {M}} ^ {\pi^ {*}} (s ^ {\prime}) + \gamma \sum_ {s ^ {\prime}} P (s ^ {\prime} | s, \hat {\pi}) V _ {\mathcal {M}} ^ {\hat {\pi}} (s ^ {\prime}) \\ \end{array}
+$$
+
+Add and subtract $\gamma \sum_{s'} P(s'|s, \hat{\pi}) V_{\mathcal{M}}^{\pi^*}(s')$ and re-arrange
+
+$$
+\begin{array}{l} V _ {\mathcal {M}} ^ {\hat {\pi}} (s) - V _ {\mathcal {M}} ^ {\pi^ {*}} (s) \leq 2 \gamma \epsilon + 2 \alpha + \gamma \sum_ {s ^ {\prime}} \Big (\hat {P} (s ^ {\prime} | s, \pi^ {*}) - P (s ^ {\prime} | s, \pi^ {*}) \Big) V _ {\mathcal {M}} ^ {\pi^ {*}} (s ^ {\prime}) \\ - \gamma \sum_ {s ^ {\prime}} \Bigl (\hat {P} (s ^ {\prime} | s, \hat {\pi}) - P (s ^ {\prime} | s, \hat {\pi}) \Bigr) V _ {\mathcal {M}} ^ {\pi^ {*}} (s ^ {\prime}) \\ + \gamma \sum_ {s ^ {\prime}} P (s ^ {\prime} | s, \hat {\pi}) \left(V _ {\mathcal {M}} ^ {\hat {\pi}} (s ^ {\prime}) - V _ {\mathcal {M}} ^ {\pi^ {*}} (s ^ {\prime})\right) \\ \leq 2 \gamma \epsilon + 2 \alpha + 2 \gamma \alpha \bigg (\frac {V _ {\mathrm {m a x}} - V _ {\mathrm {m i n}}}{2} \bigg) + \gamma \sum_ {s ^ {\prime}} P (s ^ {\prime} | s, \hat {\pi}) \bigg (V _ {\mathcal {M}} ^ {\hat {\pi}} (s ^ {\prime}) - V _ {\mathcal {M}} ^ {\pi^ {*}} (s ^ {\prime}) \bigg) \\ \end{array}
+$$
+
+Since $s$ is the state with largest loss
+
+$$
+\begin{array}{l} \left| \left| V _ {\mathcal {M}} ^ {\pi^ {*}} (s) - V _ {\mathcal {M}} ^ {\hat {\pi}} (s) \right| \right| _ {\infty} \leq 2 \gamma \epsilon + 2 \alpha + 2 \gamma \alpha \left(\frac {V _ {\max} - V _ {\min}}{2}\right) + \gamma \sum_ {s ^ {\prime}} P (s ^ {\prime} | s, \hat {\pi}) \left| \left| V _ {\mathcal {M}} ^ {\pi^ {*}} (s) - V _ {\mathcal {M}} ^ {\hat {\pi}} (s) \right| \right| _ {\infty} \\ \leq 2 \gamma \epsilon + 2 \alpha + 2 \gamma \alpha \left(\frac {V _ {\max} - V _ {\min}}{2}\right) + \gamma \left| \left| V _ {\mathcal {M}} ^ {\pi^ {*}} (s) - V _ {\mathcal {M}} ^ {\hat {\pi}} (s) \right| \right| _ {\infty} \\ \end{array}
+$$
+
+Re-arranging terms we get
+
+$$
+\left| \left| V _ {\mathcal {M}} ^ {\pi^ {*}} (s) - V _ {\mathcal {M}} ^ {\hat {\pi}} (s) \right| \right| _ {\infty} \leq \frac {2 \left(\gamma \epsilon + \alpha + \gamma \alpha \left(\frac {V _ {\operatorname* {m a x}} - V _ {\operatorname* {m i n}}}{2}\right)\right)}{1 - \gamma} \tag {15}
+$$
+
+which concludes the proof.
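+As a quick numeric sanity check, the right-hand side of (15) can be evaluated directly; a small helper (hypothetical, for illustration only) that makes the dependence on $\epsilon$ , $\alpha$ , and $\gamma$ concrete:

```python
def greedy_loss_bound(gamma, eps, alpha, v_max, v_min):
    """Evaluate the bound (15) on the performance loss of the one-step
    greedy policy, given value error eps and model error alpha."""
    return 2.0 * (gamma * eps + alpha
                  + gamma * alpha * (v_max - v_min) / 2.0) / (1.0 - gamma)
```

+With a perfect model ( $\alpha = 0$ ) the bound collapses to the classical $2\gamma\epsilon/(1-\gamma)$ term, and it blows up as $\gamma \to 1$ .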
+
+
+
+# A.1.2 BOUND ON PERFORMANCE OF H-STEP GREEDY POLICY
+
+Notation: For brevity let us define the following macro,
+
+$$
+\langle V, \pi , \mathcal {M} \rangle_ {H} = \mathbb {E} _ {\mu_ {\mathcal {M}} ^ {\pi}} \left[ \sum_ {i = 0} ^ {H - 1} \gamma^ {i} c \left(s _ {i}, a _ {i}\right) + \gamma^ {H} V \left(s _ {H}\right) \right] \tag {16}
+$$
+
+which represents the expected cost achieved when executing policy $\pi$ on $\mathcal{M}$ using $V$ as the terminal cost. We can substitute different policies, terminal costs and MDPs. For example, $\left\langle \hat{V}, \hat{\pi}, \hat{\mathcal{M}} \right\rangle_H$ is the expected cost obtained by running policy $\hat{\pi}$ on simulator $\hat{\mathcal{M}}$ for $H$ steps with approximate learned terminal value function $\hat{V}$ .
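+The macro admits a direct Monte Carlo estimate; a sketch with assumed callables for the policy, the MDP's stochastic step, and the terminal value (none of these interfaces come from the paper):

```python
def h_horizon_cost(s0, policy, step, terminal_v, H, gamma, n_rollouts=100):
    """Monte Carlo estimate of the macro <V, pi, M>_H from (16): the
    expected discounted H-step cost plus the discounted terminal value.

    policy:     s -> a                      [assumed interface]
    step:       (s, a) -> (s_next, cost)    [assumed interface]
    terminal_v: s -> float                  [assumed interface]
    """
    total = 0.0
    for _ in range(n_rollouts):
        s, ret = s0, 0.0
        for i in range(H):
            a = policy(s)
            s, c = step(s, a)
            ret += gamma ** i * c
        ret += gamma ** H * terminal_v(s)   # discounted terminal value V(s_H)
        total += ret
    return total / n_rollouts
```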
+
+Lemma A.1. For a given policy $\pi$ , the optimal value function $V_{\mathcal{M}}^{\pi^{*}}$ and MDPs $\mathcal{M},\hat{\mathcal{M}}$ the following performance difference holds
+
+$$
+\left| \left| \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi , \mathcal {M} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi , \hat {\mathcal {M}} \right\rangle_ {H} \right| \right| _ {\infty} \leq \gamma \left(\frac {1 - \gamma^ {H - 1}}{1 - \gamma}\right) \alpha H \left(\frac {c _ {\max } - c _ {\min }}{2}\right) + \gamma^ {H} \alpha H \left(\frac {V _ {\max } - V _ {\min }}{2}\right) + \frac {1 - \gamma^ {H}}{1 - \gamma} \alpha
+$$
+
+Proof. We temporarily introduce a new MDP $\mathcal{M}'$ that has the same cost function as $\mathcal{M}$ , but the transition function of $\hat{\mathcal{M}}$
+
+$$
+\begin{array}{l} \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi , \mathcal {M} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi , \hat {\mathcal {M}} \right\rangle_ {H} = \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi , \mathcal {M} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi , \mathcal {M} ^ {\prime} \right\rangle_ {H} \tag {17} \\ + \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi , \mathcal {M} ^ {\prime} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi , \hat {\mathcal {M}} \right\rangle_ {H} \\ \end{array}
+$$
+
+Let $\Delta P(s_0\ldots s_H) = P(s_0\ldots s_H) - \hat{P} (s_0\ldots s_H)$ represent the difference in distribution of states encountered by executing $\pi$ on $\mathcal{M}$ and $\hat{\mathcal{M}}$ respectively starting from state $s_0$ .
+
+Expanding the RHS of (17)
+
+$$
+= \sum_ {s _ {0}, \dots , s _ {H}} \Delta P \left(s _ {0} \dots s _ {H}\right) \left(\sum_ {i = 0} ^ {H - 1} \gamma^ {i} c \left(s _ {i}, a _ {i}\right) + \gamma^ {H} V _ {\mathcal {M}} ^ {\pi^ {*}} \left(s _ {H}\right)\right) + \mathbb {E} _ {\mu_ {\hat {\mathcal {M}}}} ^ {\pi} \left[ \sum_ {i = 0} ^ {H - 1} \gamma^ {i} \left(c \left(s _ {i}, a _ {i}\right) - \hat {c} \left(s _ {i}, a _ {i}\right)\right) \right] \tag {18}
+$$
+
+Since the first state $s_0$ is the same
+
+$$
+\begin{array}{l} = \sum_ {s _ {1}, \ldots , s _ {H}} \Delta P (s _ {1} \ldots s _ {H}) \left(\sum_ {i = 1} ^ {H - 1} \gamma^ {i} c (s _ {i}, a _ {i}) + \gamma^ {H} V _ {\mathcal {M}} ^ {\pi^ {*}} (s _ {H})\right) + \mathbb {E} _ {\mu_ {\hat {\mathcal {M}}} ^ {\pi}} \left[ \sum_ {i = 0} ^ {H - 1} \gamma^ {i} (c (s _ {i}, a _ {i}) - \hat {c} (s _ {i}, a _ {i})) \right] \\ \leq \left\| \sum_ {s _ {1}, \dots , s _ {H}} \Delta P (s _ {1} \dots s _ {H}) \left(\sum_ {i = 1} ^ {H - 1} \gamma^ {i} c (s _ {i}, a _ {i}) + \gamma^ {H} V _ {\mathcal {M}} ^ {\pi^ {*}} (s _ {H})\right) \right\| _ {\infty} + \left\| \mathbb {E} _ {\mu_ {\hat {\mathcal {M}}} ^ {\pi}} \left[ \sum_ {i = 0} ^ {H - 1} \gamma^ {i} (c (s _ {i}, a _ {i}) - \hat {c} (s _ {i}, a _ {i})) \right] \right\| _ {\infty} \\ \leq \left\| \sum_ {s _ {1}, \dots , s _ {H}} \Delta P \left(s _ {1} \dots s _ {H}\right) \left(\sum_ {i = 1} ^ {H - 1} \gamma^ {i} c \left(s _ {i}, a _ {i}\right) + \gamma^ {H} V _ {\mathcal {M}} ^ {\pi^ {*}} \left(s _ {H}\right)\right) \right\| _ {\infty} + \frac {1 - \gamma^ {H}}{1 - \gamma} \alpha \tag {19} \\ \end{array}
+$$
+
+where the first inequality follows from the triangle inequality, and the second from the triangle inequality together with the upper bound $\alpha$ on the cost-function error.
+
+$$
+\leq \left\| \sum_ {s _ {1}, \dots , s _ {H}} \Delta P \left(s _ {1} \dots s _ {H}\right) \right\| _ {\infty} \sup \left(\sum_ {i = 1} ^ {H - 1} \gamma^ {i} c \left(s _ {i}, a _ {i}\right) + \gamma^ {H} V _ {\mathcal {M}} ^ {\pi^ {*}} \left(s _ {H}\right) - K\right) + \frac {1 - \gamma^ {H}}{1 - \gamma} \alpha \tag {20}
+$$
+
+By choosing $K = \sum_{i=1}^{H-1} \gamma^i (c_{\max} + c_{\min}) / 2 + \gamma^H (V_{\max} + V_{\min}) / 2$ we can ensure that the term inside the sup is upper-bounded by $\gamma (1 - \gamma^{H-1}) / (1 - \gamma) \left((c_{\max} - c_{\min}) / 2\right) + \gamma^H (V_{\max} - V_{\min}) / 2$. Combining this with the fact that the divergence between the $H$-step state distributions is at most $\alpha H$ (since the per-step model error is bounded by $\alpha$), we get
+
+$$
+\leq \gamma \left(\frac {1 - \gamma^ {H - 1}}{1 - \gamma}\right) \alpha H \left(\frac {c _ {\max} - c _ {\min}}{2}\right) + \gamma^ {H} \alpha H \left(\frac {V _ {\max} - V _ {\min}}{2}\right) + \frac {1 - \gamma^ {H}}{1 - \gamma} \alpha \tag {21}
+$$
+
+The above lemma builds on similar results in (Kakade et al., 2003; Abbeel & Ng, 2005; Ross & Bagnell, 2012).
+
+We are now ready to prove our main theorem, i.e. the performance bound of an MPC policy that uses an approximate model and approximate value function.
+
+# Proof of Theorem 3.1
+
+Proof. Since $\hat{\pi}$ is the greedy policy when using $\hat{\mathcal{M}}$ and $\hat{V}$ ,
+
+$$
+\left\langle \hat {V}, \hat {\pi}, \hat {\mathcal {M}} \right\rangle_ {H} \leq \left\langle \hat {V}, \pi^ {*}, \hat {\mathcal {M}} \right\rangle_ {H} \tag {22}
+$$
+
+$$
+\left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \hat {\pi}, \hat {\mathcal {M}} \right\rangle_ {H} \leq \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi^ {*}, \hat {\mathcal {M}} \right\rangle_ {H} + 2 \gamma^ {H} \epsilon \quad \left(\text{using } \left\| \hat {V} - V _ {\mathcal {M}} ^ {\pi^ {*}} \right\| _ {\infty} \leq \epsilon\right)
+$$
+
+Also, we have
+
+$$
+\begin{array}{l} \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \hat {\pi}, \mathcal {M} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi^ {*}, \mathcal {M} \right\rangle_ {H} = \left(\left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \hat {\pi}, \mathcal {M} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \hat {\pi}, \hat {\mathcal {M}} \right\rangle_ {H}\right) \\ - \left(\left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi^ {*}, \mathcal {M} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi^ {*}, \hat {\mathcal {M}} \right\rangle_ {H}\right) \tag {23} \\ + \left(\left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \hat {\pi}, \hat {\mathcal {M}} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi^ {*}, \hat {\mathcal {M}} \right\rangle_ {H}\right) \\ \end{array}
+$$
+
+The first two terms can be bounded using Lemma A.1, and the third term using the inequality derived from Eq. (22), to get
+
+$$
+\begin{array}{l} \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \hat {\pi}, \mathcal {M} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi^ {*}, \mathcal {M} \right\rangle_ {H} \\ \leq 2 \left(\gamma \frac {1 - \gamma^ {H - 1}}{1 - \gamma} \alpha H \left(\frac {c _ {\max} - c _ {\min}}{2}\right) + \gamma^ {H} \alpha H \left(\frac {V _ {\max} - V _ {\min}}{2}\right) + \frac {1 - \gamma^ {H}}{1 - \gamma} \alpha + \gamma^ {H} \epsilon\right) \tag {24} \\ \end{array}
+$$
+
+Now, let $s$ be the state with maximum loss $V_{\mathcal{M}}^{\hat{\pi}}(s) - V_{\mathcal{M}}^{\pi^{*}}(s)$. Then
+
+$$
+\begin{array}{l} V _ {\mathcal {M}} ^ {\hat {\pi}} (s) - V _ {\mathcal {M}} ^ {\pi^ {*}} (s) = \left\langle V _ {\mathcal {M}} ^ {\hat {\pi}}, \hat {\pi}, \mathcal {M} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi^ {*}, \mathcal {M} \right\rangle_ {H} \\ = \left(\left\langle V _ {\mathcal {M}} ^ {\hat {\pi}}, \hat {\pi}, \mathcal {M} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \hat {\pi}, \mathcal {M} \right\rangle_ {H}\right) + \left(\left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \hat {\pi}, \mathcal {M} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi^ {*}, \mathcal {M} \right\rangle_ {H}\right) \\ = \gamma^ {H} \left(V _ {\mathcal {M}} ^ {\hat {\pi}} \left(s _ {H}\right) - V _ {\mathcal {M}} ^ {\pi^ {*}} \left(s _ {H}\right)\right) + \left(\left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \hat {\pi}, \mathcal {M} \right\rangle_ {H} - \left\langle V _ {\mathcal {M}} ^ {\pi^ {*}}, \pi^ {*}, \mathcal {M} \right\rangle_ {H}\right) \\ \leq \gamma^ {H} \left(V _ {\mathcal {M}} ^ {\hat {\pi}} (s) - V _ {\mathcal {M}} ^ {\pi^ {*}} (s)\right) \\ + 2 \left(\frac {\gamma (1 - \gamma^ {H - 1})}{1 - \gamma} \alpha H \left(\frac {c _ {\max} - c _ {\min}}{2}\right) + \gamma^ {H} \alpha H \left(\frac {V _ {\max} - V _ {\min}}{2}\right) + \frac {1 - \gamma^ {H}}{1 - \gamma} \alpha + \gamma^ {H} \epsilon\right) \tag {25} \\ \end{array}
+$$
+
+where the last inequality comes from applying Eq. (24) and the fact that $s$ is the state with maximum loss. The final expression follows from simple algebraic manipulation.
+
+# A.2 EXPERIMENT DETAILS
+
+# A.2.1 TASK DETAILS
+
+# CARTPOLESWINGUP
+
+- Reward function: $x_{\text{cart}}^{2} + \theta_{\text{pole}}^{2} + 0.01 v_{\text{cart}} + 0.01 \omega_{\text{pole}} + 0.01 a^{2}$
+- Observation: $[x_{\text{cart}}, \theta_{\text{pole}}, v_{\text{cart}}, \omega_{\text{pole}}]$ (4 dim)
+
+SAWYERPEGINSERTION We simulate sensor noise by placing a simulated position sensor at the target location in the MuJoCo physics engine that adds Gaussian noise with $\sigma = 10\,\mathrm{cm}$ to the observed 3D position vector. MPPI uses a deterministic model that does not take sensor noise into account for planning. Every episode lasts 75 steps with a timestep of 0.02 seconds between steps.
+
+- Reward function: $-1.0 * ||x_{ee} - x_{target}||_1 - 5.0 * ||x_{ee} - x_{target}||_2 + 5 * \mathbb{1}(||x_{ee} - x_{target}||_2 < 0.06)$
+- Observation: $\left[q_{pos}, q_{vel}, x_{ee}, x_{target}, x_{ee} - x_{target}, \left\|x_{ee} - x_{target}\right\|_1, \left\|x_{ee} - x_{target}\right\|_2\right]$ (25 dim)
+
+An episode is considered successful if the peg stays within the hole for at least 5 steps.
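The reward above is a direct function of the end-effector and target positions; a minimal sketch (the function and argument names are ours, not from the released code):

```python
import numpy as np

def peg_reward(x_ee, x_target, tol=0.06):
    # Weighted L1 + L2 distance penalties plus a sparse bonus
    # when the end-effector is within the tolerance radius.
    d = x_ee - x_target
    l1 = np.sum(np.abs(d))
    l2 = np.linalg.norm(d)
    return -1.0 * l1 - 5.0 * l2 + 5.0 * (l2 < tol)
```

At the target (`d = 0`) the reward is exactly the sparse bonus of 5.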
+
+INHANDMANIPULATION This environment was used without modification from the accompanying codebase for Rajeswaran* et al. (2018) and is available at https://bit.ly/3f6MNP
+
+- Reward function: $-||x_{obj} - x_{des}||_2 + z_{obj}^T z_{des} + \text{bonus for proximity to desired position and orientation}$, where $z_{obj}^{T}z_{des}$ is the dot product between the object axis and the target axis, measuring orientation similarity.
+- Observation: $[q_{pos}, x_{obj}, v_{obj}, z_{obj}, z_{des}, x_{obj} - x_{des}, z_{obj} - z_{des}]$ (45 dim)
+
+Every episode lasts 75 steps with a timestep of 0.01 seconds between steps. An episode is considered successful if the orientation of the pen stays within a specified range of the desired orientation for at least 20 steps. The orientation similarity is measured by the dot product between the pen's current longitudinal axis and the desired axis, with a threshold of 0.95.
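This success check can be sketched as follows (hypothetical helper; the `(T, 3)` trajectory layout of unit axis vectors is an assumption):

```python
import numpy as np

def episode_success(z_obj_traj, z_des, threshold=0.95, min_steps=20):
    # z_obj_traj: (T, 3) unit pen-axis vectors over an episode;
    # z_des: (3,) desired unit axis. Success if the orientation
    # similarity (dot product) exceeds the threshold for >= min_steps.
    sims = z_obj_traj @ z_des
    return int(np.sum(sims > threshold)) >= min_steps
```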
+
+Table 1: MPPI Parameters
+
+| Parameter | CARTPOLESWINGUP | SAWYERPEGINSERTION | INHANDMANIPULATION |
+| Horizon | 32 | 20 | 32 |
+| Num particles | 60 | 100 | 100 |
+| Covariance (Σ) | 0.45 | 0.25 | 0.3 |
+| Temperature (β) | 0.1 | 0.1 | 0.15 |
+| Filter coefs | [1.0, 0.0, 0.0] | [0.25, 0.8, 0.0] | [0.25, 0.8, 0.0] |
+| Step size | 1.0 | 0.9 | 1.0 |
+| γ | 0.99 | 0.99 | 0.99 |
+
+# A.2.2 LEARNING DETAILS
+
+Validation: Validation is performed every $N$ training episodes, for $N_{\mathrm{eval}}$ episodes, using a fixed set of start states that the environment is reset to. We ensure that the same start states are sampled at every validation iteration by setting the seed to a pre-defined validation seed, which is kept constant across different runs of the algorithm with different training seeds. This helps ensure consistency when evaluating different runs of the algorithm. For all our experiments we set $N = 40$ and $N_{\mathrm{eval}} = 30$.
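A minimal sketch of this fixed-seed validation scheme (the helper name, `VAL_SEED`, and the state layout are illustrative, not from the released code):

```python
import numpy as np

VAL_SEED = 1234  # fixed across runs; the training seed varies per run

def validation_start_states(n_eval, state_dim):
    # Re-seeding with the same validation seed yields identical start
    # states at every validation iteration and across training runs.
    rng = np.random.default_rng(VAL_SEED)
    return rng.standard_normal((n_eval, state_dim))
```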
+
+$\mathbf{MPQ}(\lambda)$ : For all tasks, we represent the Q-function using a 2-layer fully-connected neural network with 100 units in each layer and ReLU activations. We use ADAM (Kingma & Ba, 2014) for optimization with a learning rate of 0.001 and a discount factor $\gamma = 0.99$ . The buffer size is 1500 for CARTPOLESWINGUP and 3000 for the others, with a batch size of 64 for all. We smoothly decay $\lambda$ according to the following sublinear decay rate
+
+$$
+\lambda_ {t} = \frac {\lambda_ {0}}{1 + \kappa \sqrt {t}} \tag {26}
+$$
+
+where the decay rate $\kappa$ is calculated based on the desired final value of $\lambda$ . The batch size was searched over [16, 64] with a step size of 16, and the buffer size was chosen from {1500, 3000, 5000}. While the batch size was tuned for cartpole and then fixed for the remaining two environments, the buffer size was chosen independently for all three.
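A small sketch of Eq. (26), including one way to compute $\kappa$ from a desired final value of $\lambda$ after $T$ steps (the function names and the choice of `T` are ours):

```python
import math

def decay_rate(lam0, lam_final, T):
    # Solve lam_final = lam0 / (1 + kappa * sqrt(T)) for kappa (Eq. 26).
    return (lam0 / lam_final - 1.0) / math.sqrt(T)

def lam_schedule(lam0, kappa, t):
    # Sublinear decay of the blending parameter lambda at step t.
    return lam0 / (1.0 + kappa * math.sqrt(t))
```

For example, with `lam0 = 1.0`, `lam_final = 0.1`, and `T = 10000`, the schedule reaches 0.1 exactly at step 10000.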
+
+Proximal Policy Optimization (PPO): Both the policy and value functions are represented by feed-forward networks with 2 layers each, with 64 units per layer for the policy and 128 for the value function. All other parameters are left at their default values. The number of trajectories collected per iteration is set to match the number of samples collected between validation iterations for $\mathrm{MPQ}(\lambda)$ ; we collect 40 trajectories per iteration. Asymptotic performance is reported as the average of the last 10 validation iterations, after 500 training iterations in SAWYERPEGINSERTION and 2,000 in INHANDMANIPULATION.
+
+MPPI parameters Table 1 shows the MPPI parameters used for the different experiments. In addition to the standard MPPI parameters, in certain cases we also use a step-size parameter as introduced by Wagener et al. (2019). For INHANDMANIPULATION and SAWYERPEGINSERTION we also apply autoregressive filtering to the sampled MPPI trajectories to induce smoothness in the sampled actions, with tuned filter coefficients. This has been found useful in prior work (Summers et al., 2020; Lowrey et al., 2018) for getting $\mathrm{MPQ}(\lambda)$ to work on high-dimensional control tasks. The temperature, initial covariance, and step-size parameters for MPPI were tuned using a grid search with the true dynamics. Temperature and initial covariance were searched within [0.0, 1.0] and the step size within [0.5, 1.0], with a discretization of 0.05. The number of particles was searched over [40, 120] with a step size of 10, and the horizon was chosen from 4 different values [16, 20, 32, 64]. The best-performing parameters were then chosen based on the average reward over 30 episodes with a fixed seed value to ensure reproducibility. The same parameters were then used in the case of biased dynamics and $\mathrm{MPQ}(\lambda)$ , to clearly demonstrate that $\mathrm{MPQ}(\lambda)$ can overcome sources of error in the base MPC implementation.
\ No newline at end of file
diff --git a/boiltowardsrepresentationchangeforfewshotlearning/full.md b/boiltowardsrepresentationchangeforfewshotlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7ba7aec9829201e9738d827d87b9d49c81bb6b9c
--- /dev/null
+++ b/boiltowardsrepresentationchangeforfewshotlearning/full.md
@@ -0,0 +1,631 @@
+# BOIL: TOWARDS REPRESENTATION CHANGE FOR FEW-SHOT LEARNING
+
+Jaehoon Oh\*1, Hyungjun Yoo\*1, ChangHwan Kim & Se-Young Yun2
+
+1Graduate School of Knowledge Service Engineering, KAIST
+
+2Graduate School of Artificial Intelligence, KAIST
+
+{jaehoon.oh, yoohjun, kimbob, yunseyoung}@kaist.ac.kr
+
+# ABSTRACT
+
+Model Agnostic Meta-Learning (MAML) is one of the most representative of gradient-based meta-learning algorithms. MAML learns new tasks with a few data samples using inner updates from a meta-initialization point and learns the meta-initialization parameters with outer updates. It has recently been hypothesized that representation reuse, which makes little change in efficient representations, is the dominant factor in the performance of the meta-initialized model through MAML in contrast to representation change, which causes a significant change in representations. In this study, we investigate the necessity of representation change for the ultimate goal of few-shot learning, which is solving domain-agnostic tasks. To this aim, we propose a novel meta-learning algorithm, called BOIL (Body Only update in Inner Loop), which updates only the body (extractor) of the model and freezes the head (classifier) during inner loop updates. BOIL leverages representation change rather than representation reuse. This is because feature vectors (representations) have to move quickly to their corresponding frozen head vectors. We visualize this property using cosine similarity, CKA, and empirical results without the head. BOIL empirically shows significant performance improvement over MAML, particularly on cross-domain tasks. The results imply that representation change in gradient-based meta-learning approaches is a critical component.
+
+# 1 INTRODUCTION
+
+Meta-learning, also known as "learning to learn," is a methodology that imitates human intelligence that can adapt quickly with even a small amount of previously unseen data through the use of previous learning experiences. To this aim, meta-learning with deep neural networks has mainly been studied using metric- and gradient-based approaches. Metric-based meta-learning (Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018) compares the distance between feature embeddings using models as a mapping function of data into an embedding space, whereas gradient-based meta-learning (Ravi & Larochelle, 2016; Finn et al., 2017; Nichol et al., 2018) quickly learns the parameters to be optimized when the models encounter new tasks.
+
+Model-agnostic meta-learning (MAML) (Finn et al., 2017) is the most representative gradient-based meta-learning algorithm. MAML algorithm consists of two optimization loops: an inner loop and an outer loop. The inner loop learns task-specific knowledge, and the outer loop finds a universally good meta-initialized parameter allowing the inner loop to quickly learn any task from the initial point with only a few examples. This algorithm has been highly influential in the field of meta-learning, and numerous follow-up studies have been conducted (Oreshkin et al., 2018; Rusu et al., 2018; Zintgraf et al., 2018; Yoon et al., 2018; Finn et al., 2018; Triantafillou et al., 2019; Sun et al., 2019; Na et al., 2019; Tseng et al., 2020).
+
+Very recent studies (Raghu et al., 2020; Arnold et al., 2019) have attributed the success of MAML to high-quality features before the inner updates from the meta-initialized parameters. For instance, Raghu et al. (2020) claimed that MAML learns new tasks by updating the head (the last fully connected layer) with almost the same features (the output of the penultimate layer) from the meta-initialized network. In this paper, we categorize the learning patterns as follows: a small change in the representations during task learning is named representation reuse, whereas a large change is named representation change.$^{1}$ Representation reuse has thus been the common belief about MAML.
+
+
+(a) MAML/ANIL.
+
+
+(b) BOIL.
+Figure 1: Difference in task-specific (inner) updates between MAML/ANIL and BOIL. In the figure, the lines represent the decision boundaries defined by the head (classifier) of the network. Different shapes and colors mean different classes. (a) MAML mainly updates the head with a negligible change in the body (extractor); hence, representations on the feature space are almost identical. ANIL does not change the body at all during inner updates, so the representations are identical. However, (b) BOIL updates only the body without changing the head during inner updates; hence, representations on the feature space change significantly under the fixed decision boundaries. We visualize the representations from various data sets using UMAP (Uniform Manifold Approximation and Projection for dimension reduction) (McInnes et al., 2018) in Appendix B.
+
+Herein, we pose an intriguing question: Is representation reuse sufficient for meta-learning? We believe that the key to successful meta-learning is closer to representation change than to representation reuse. More importantly, representation change is crucial for cross-domain adaptation, which is considered the ultimate goal of meta-learning. By contrast, the MAML accomplished with representation reuse might be poorly trained for cross-domain adaptation since the success of representation reuse might rely heavily on the similarity between the source and the target domains.
+
+To answer this question, we propose a novel meta-learning algorithm that leverages representation change. Our contributions can be summarized as follows:
+
+- We emphasize the necessity of representation change for meta-learning through cross-domain adaptation experiments.
+- We propose a simple but effective meta-learning algorithm that learns the Body (extractor) of the model Only in the Inner Loop (BOIL). We empirically show that BOIL improves the performance over most of benchmark data sets and that this improvement is particularly noticeable in fine-grained data sets or cross-domain adaptation.
+- We interpret the connection between BOIL and the algorithm using preconditioning gradients (Flennerhag et al., 2020) and show their compatibility, improving performance.
+- We demonstrate that the BOIL algorithm enjoys representation layer reuse on the low-/mid-level body and representation layer change on the high-level body using the cosine similarity and the Centered Kernel Alignment (CKA). We visualize the features between before and after an adaptation, and empirically analyze the effectiveness of the body of BOIL through an ablation study on eliminating the head.
+- For ResNet architectures, we propose a disconnection trick that removes the backpropagation path of the last skip connection. The disconnection trick strengthens representation layer change on the high-level body.
+
+# 2 PROBLEM SETTING
+
+# 2.1 META-LEARNING FRAMEWORK (MAML)
+
+The MAML algorithm (Finn et al., 2017) attempts to meta-learn the best initialization of the parameters for a task-learner. It consists of two main optimization loops: an inner loop and an outer loop. First, we sample a batch of tasks from a data set distribution. Each task $\tau_{i}$ consists of a support set $S_{\tau_i}$ and a query set $Q_{\tau_i}$ . When we sample a support set for each task, we first sample $n$ labels from the label set and then sample $k$ instances for each label. Thus, each support set contains $n\times k$ instances. For the query set, we sample instances from the same labels as the support set.
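The $n$-way $k$-shot sampling described above can be sketched as follows (hypothetical helper; the `dataset` layout, a dict from label to examples, is an assumption):

```python
import random

def sample_task(dataset, n, k, q):
    # dataset: dict mapping label -> list of examples.
    # Returns an n-way k-shot support set and a query set with
    # q examples per class, drawn from the same n labels.
    labels = random.sample(sorted(dataset), n)
    support, query = [], []
    for cls in labels:
        # Sample k + q distinct examples so support and query are disjoint.
        examples = random.sample(dataset[cls], k + q)
        support += [(x, cls) for x in examples[:k]]
        query += [(x, cls) for x in examples[k:]]
    return support, query
```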
+
+With these tasks, the MAML algorithm conducts both meta-training and meta-testing. During meta-training, we first sample a meta-batch consisting of $B$ tasks from the meta-training data set. In the
+inner loops, we update the meta-initialized parameters $\theta$ to task-specific parameters $\theta_{\tau_i}$ using the task-specific loss $L_{S_{\tau_i}}(f_\theta)$ , where $f_\theta$ is a neural network parameterized by $\theta$ , as follows:
+
+$$
+\theta_ {\tau_ {i}} = \theta - \alpha \nabla_ {\theta} L _ {S _ {\tau_ {i}}} (f _ {\theta}) \tag {1}
+$$
+
+Using the query set of the corresponding task, we compute the loss $L_{Q_{\tau_i}}(f_{\theta_{\tau_i}})$ based on each inner updated parameter. By summing all these losses, the meta-loss of each meta-batch, $L_{meta}(\theta)$ , is computed. The meta-initialized parameters are then updated using the meta-loss in the outer loop through a gradient descent.
+
+$$
+\theta^ {\prime} = \theta - \beta \nabla_ {\theta} L _ {m e t a} (\theta), \quad \text{where } L _ {m e t a} (\theta) = \sum_ {i = 1} ^ {B} L _ {Q _ {\tau_ {i}}} \left(f _ {\theta_ {\tau_ {i}}}\right) \tag {2}
+$$
+
+In meta-testing, the inner loop, which can be interpreted as task-specific learning, is the same as in meta-training. However, the outer loop only computes the accuracy using a query set of tasks and does not perform a gradient descent; thus, it does not update the meta-initialization parameters.
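To make the two-loop structure of Eqs. (1)–(2) concrete, here is a minimal sketch on a toy linear-regression task (our own illustration, not the paper's implementation; MAML proper also differentiates through the inner update, whereas this sketch uses the first-order approximation for brevity):

```python
import numpy as np

def loss_and_grad(theta, X, y):
    # Squared-error loss of a linear model and its gradient w.r.t. theta.
    err = X @ theta - y
    return 0.5 * np.mean(err ** 2), X.T @ err / len(y)

def maml_step(theta, tasks, alpha, beta):
    # One outer update (Eq. 2) over a meta-batch of (support, query) pairs,
    # each task adapted with a single inner step (Eq. 1).
    meta_grad = np.zeros_like(theta)
    for (Xs, ys), (Xq, yq) in tasks:
        _, g = loss_and_grad(theta, Xs, ys)      # inner loop, Eq. (1)
        theta_i = theta - alpha * g              # task-specific parameters
        _, gq = loss_and_grad(theta_i, Xq, yq)   # query loss, Eq. (2)
        meta_grad += gq                          # first-order meta-gradient
    return theta - beta * meta_grad
```

Repeated `maml_step` calls drive the meta-initialization toward parameters from which one inner step solves each task well.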
+
+# 2.2 EXPERIMENTAL SETUP
+
+We used two backbone networks: the 4conv network with 64 channels from Vinyals et al. (2016) and ResNet-12 starting with 64 channels and doubling them after every block from Oreshkin et al. (2018). For batch normalization, we used batch statistics instead of the running statistics during meta-testing, following the original MAML (Finn et al., 2017). We trained the 4conv network and ResNet-12 for 30,000 and 10,000 epochs, respectively, and then used the model with the best accuracy on the meta-validation data set to verify the performance. We applied a single inner update for both meta-training and meta-testing. The outer learning rate was set to 0.001 and 0.0006 and the inner learning rate to 0.5 and 0.3 for the 4conv network and ResNet-12, respectively. All results were reproduced by our group and reported as the average and standard deviation of the accuracies over $5 \times 1,000$ tasks, and the values in parentheses in the algorithm name column of the tables are the number of shots. We validated both MAML/ANIL and BOIL on two general data sets, miniImageNet (Vinyals et al., 2016) and tieredImageNet (Ren et al., 2018), and two specific data sets, Cars (Krause et al., 2013) and CUB (Welinder et al., 2010). Note that our algorithm is not aimed at state-of-the-art performance but proposes a new learning scheme for meta-learning. Full details on the implementation and data sets are described in Appendix A. $^{3}$ In addition, the results on the other data sets at a size of $32 \times 32$ and using the 4conv network with 32 channels from Finn et al. (2017) (i.e., the original setting) are reported in Appendix C and Appendix D, respectively.
+
+# 3 BOIL (BODY ONLY UPDATE IN INNER LOOP)
+
+# 3.1 THE ULTIMATE GOAL OF META-LEARNING: DOMAIN-AGNOSTIC ADAPTATION
+
+Recently, Raghu et al. (2020) proposed two opposing hypotheses, representation reuse and representation change, and demonstrated that representation reuse is the dominant factor of MAML. We can discriminate two hypotheses according to which part of the neural network, body or head, is mostly updated through the inner loop. Here, the body indicates all convolutional layers, and the head indicates the remaining fully connected layer. In other words, the representation change hypothesis attributes the capability of MAML to the updates on the body, whereas the representation reuse hypothesis considers that the network body is already universal to various tasks before the inner loops. To demonstrate the representation reuse hypothesis of MAML, the authors proposed the ANIL (Almost No Inner Loop) algorithm, which only updates the head in the inner loops during training and testing, and showed that ANIL has a performance comparable to that of MAML. This implies that the representation trained by MAML/ANIL, even before updated task-specifically, is sufficient
+to achieve the desired performance. Furthermore, they proposed the NIL-testing (No Inner Loop) algorithm, which removes the head and performs unseen tasks using only the distance between the representations of a support set and those of a query set during testing to identify the capability of representation reuse. NIL-testing of MAML also achieves a performance comparable to MAML. Based on these results, it was claimed that the success of MAML is attributed to representation reuse.
+
+Here, we investigate the necessity of representation change. We believe that meta-trained models should achieve good performance in many other domains, which we refer to as domain-agnostic adaptation in this paper. To this end, representation reuse is not appropriate, since it relies on the similarity between the source and target domains: the higher the similarity, the higher the efficiency. Therefore, when there is no strong similarity between the source and target domains, good representations for the source domain could be imperfect representations for the target domain. Table 2, which lists our experimental results on cross-domain tasks, shows that MAML, which relies on representation reuse, performs worse than BOIL, which leverages representation change; this is discussed in detail in the next section.
+
+# 3.2 BOIL ALGORITHM
+
+Inspired by this necessity, we design an algorithm that updates only the body of the model and freezes the head during task learning, enforcing representation change through inner updates. Because the gradients must still be back-propagated through the head to update the body, we set the learning rate of the head to zero in the inner updates during both meta-training and meta-testing. Otherwise, the learning and evaluation procedures of BOIL are the same as those of MAML; the computational overhead therefore does not change.
+
+Formally speaking, with the notations used in Section 2.1, the meta-initialized parameters $\theta$ can be separated into body parameters $\theta_{b}$ and head parameters $\theta_{h}$ , that is, $\theta = \{\theta_{b},\theta_{h}\}$ . For a sample image $x\in \mathbb{R}^i$ , an output can be expressed as $\hat{y} = f_{\theta}(x) = f_{\theta_h}(f_{\theta_b}(x))\in \mathbb{R}^n$ , where $f_{\theta_b}(x)\in \mathbb{R}^d$ . The task-specific body parameters $\theta_{b,\tau_i}$ and head parameters $\theta_{h,\tau_i}$ through an inner loop given task $\tau_{i}$ are thus as follows:
+
+$$
+\theta_ {b, \tau_ {i}} = \theta_ {b} - \alpha_ {b} \nabla_ {\theta_ {b}} L _ {S _ {\tau_ {i}}} (f _ {\theta}) \quad \text{and} \quad \theta_ {h, \tau_ {i}} = \theta_ {h} - \alpha_ {h} \nabla_ {\theta_ {h}} L _ {S _ {\tau_ {i}}} (f _ {\theta}) \tag {3}
+$$
+
+where $\alpha_{b}$ and $\alpha_{h}$ are the inner loop learning rates corresponding to the body and head, respectively. MAML usually sets $\alpha = \alpha_{b} = \alpha_{h} (\neq 0)$ , ANIL sets $\alpha_{b} = 0$ and $\alpha_{h} \neq 0$ , and BOIL sets $\alpha_{b} \neq 0$ and $\alpha_{h} = 0$ .
+
+These simple differences force the change in the dominant factor of task-specific updates, from the head to the body. Figure 1 shows the main difference in the inner updates between MAML/ANIL and BOIL. To solve new tasks, the head mainly or only changes in MAML/ANIL (Raghu et al., 2020), whereas in BOIL, the body changes.
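The only difference among the three algorithms is which inner-loop learning rate in Eq. (3) is zeroed. A toy sketch with a scalar "body" and "head" (our own illustration, not the paper's network) shows that gradients still flow through the frozen head:

```python
import numpy as np

def inner_update(theta_b, theta_h, x, y, alpha_b, alpha_h):
    # Model: y_hat = theta_h * (theta_b * x); squared-error loss.
    # Gradients are back-propagated through the (possibly frozen) head.
    feat = theta_b * x
    err = theta_h * feat - y
    g_b = np.mean(err * theta_h * x)   # d loss / d theta_b
    g_h = np.mean(err * feat)          # d loss / d theta_h
    return theta_b - alpha_b * g_b, theta_h - alpha_h * g_h

# BOIL: alpha_h = 0, so only the body moves in the inner loop.
x = np.array([1.0, 2.0])
y = np.array([2.0, 4.0])
b_boil, h_boil = inner_update(1.0, 1.0, x, y, alpha_b=0.1, alpha_h=0.0)
```

Setting `alpha_b = alpha_h` recovers the MAML inner update and `alpha_b = 0` the ANIL update.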
+
+# 3.2.1 PERFORMANCE IMPROVEMENT ON BENCHMARK DATA SETS AND CROSS-DOMAIN TASKS
+
+Table 1: Test accuracy (%) of 4conv network on benchmark data sets. The values in parentheses in the algorithm name column of the tables are the number of shots.
+
+| Domain | General (Coarse-grained) | General (Coarse-grained) | Specific (Fine-grained) | Specific (Fine-grained) |
+| Dataset | miniImageNet | tieredImageNet | Cars | CUB |
| MAML(1) | 47.44 ± 0.23 | 47.44 ± 0.18 | 45.27 ± 0.26 | 56.18 ± 0.37 |
| ANIL(1) | 47.82 ± 0.20 | 49.35 ± 0.26 | 46.81 ± 0.24 | 57.03 ± 0.41 |
| BOIL(1) | 49.61 ± 0.16 | 48.58 ± 0.27 | 56.82 ± 0.21 | 61.60 ± 0.57 |
| MAML(5) | 61.75 ± 0.42 | 64.70 ± 0.14 | 53.23 ± 0.26 | 69.66 ± 0.03 |
| ANIL(5) | 63.04 ± 0.42 | 65.82 ± 0.12 | 61.95 ± 0.38 | 70.93 ± 0.28 |
| BOIL(5) | 66.45 ± 0.37 | 69.37 ± 0.12 | 75.18 ± 0.21 | 75.96 ± 0.17 |
+
+Table 1 and Table 2 display the superiority of BOIL on most benchmark data sets and on cross-domain adaptation tasks, where the source and target domains differ (i.e., the meta-training and meta-testing data sets are different). In Table 1, the performance improvement is particularly noticeable on the specific domain data sets Cars and CUB. The results demonstrate that representation change is necessary even if there is a similarity between the source and target domains. Table 2 shows that BOIL is closer to the ultimate goal of meta-learning, which is a domain-agnostic adaptation.
+
+Table 2: Test accuracy (%) of 4conv network on cross-domain adaptation.
+
+| adaptation | General → General | General → General | General → Specific | General → Specific | Specific → General | Specific → General | Specific → Specific | Specific → Specific |
+| meta-train → meta-test | tieredImageNet → miniImageNet | miniImageNet → tieredImageNet | miniImageNet → Cars | miniImageNet → CUB | Cars → miniImageNet | Cars → tieredImageNet | CUB → Cars | Cars → CUB |
| MAML(1) | 47.60 ± 0.24 | 51.61 ± 0.20 | 33.57 ± 0.14 | 40.51 ± 0.08 | 26.95 ± 0.15 | 28.46 ± 0.18 | 32.22 ± 0.30 | 29.64 ± 0.19 |
| ANIL(1) | 49.67 ± 0.31 | 52.82 ± 0.29 | 34.77 ± 0.31 | 41.12 ± 0.15 | 28.67 ± 0.17 | 29.41 ± 0.19 | 33.07 ± 0.43 | 28.32 ± 0.32 |
| BOIL(1) | 49.74 ± 0.26 | 53.23 ± 0.41 | 36.12 ± 0.29 | 44.20 ± 0.15 | 33.71 ± 0.13 | 34.06 ± 0.20 | 35.44 ± 0.46 | 34.79 ± 0.27 |
| MAML(5) | 65.22 ± 0.20 | 65.76 ± 0.27 | 44.56 ± 0.21 | 53.09 ± 0.16 | 30.64 ± 0.19 | 32.62 ± 0.21 | 41.24 ± 0.21 | 32.18 ± 0.13 |
| ANIL(5) | 66.47 ± 0.16 | 66.52 ± 0.28 | 46.55 ± 0.29 | 55.82 ± 0.21 | 35.38 ± 0.10 | 36.94 ± 0.10 | 43.05 ± 0.23 | 37.99 ± 0.15 |
| BOIL(5) | 69.33 ± 0.19 | 69.37 ± 0.23 | 50.64 ± 0.22 | 60.92 ± 0.11 | 44.51 ± 0.25 | 46.09 ± 0.23 | 47.30 ± 0.22 | 45.91 ± 0.28 |
+
+Recently, Guo et al. (2019) noted that existing meta-learning algorithms have weaknesses in terms of cross-domain adaptation. We divide the cross-domain adaptation into four cases: general to general, general to specific, specific to general, and specific to specific. Previous studies considered the cross-domain scenario from a general domain to a specific domain (Chen et al., 2019; Guo et al., 2019). In this paper, we also evaluate the reverse case. BOIL outperforms MAML/ANIL not only on the typical cross-domain adaptation scenario but also on the reverse one. In particular, the performance improvement, when the domain changes from birds (CUB as a meta-train set) to cars (Cars as a meta-test set), implies that the representation change in BOIL enables the model to adapt to an unseen target domain that is entirely different from the source domain.
+
+# 3.2.2 ABLATION STUDY ON THE LEARNING RATE OF THE HEAD
+
In this section, we vary the inner-loop learning rate of the head to verify the effect of training the head on performance. The results are shown in Table 3. The best performance is achieved when the learning rate is 0 (BOIL), and the accuracy drops sharply once the head is updated in the inner loop.

Table 3: 5-Way 5-Shot test accuracy according to the learning rate of the head.

| Head's learning rate ($\alpha_h$) | miniImageNet | Cars |
|---|---|---|
| 0.00 (BOIL) | 66.45 ± 0.37 | 75.18 ± 0.21 |
| 0.05 | 38.81 ± 0.21 | 68.67 ± 0.21 |
| 0.10 | 49.49 ± 0.16 | 68.86 ± 0.30 |
| 0.50 (MAML in ours) | 61.75 ± 0.42 | 53.23 ± 0.26 |

Even when the head's learning rate is $1/10$ of that of the other layers, the test accuracies are significantly degraded. We therefore conclude that freezing the head is crucial.
+
+# 3.2.3 BOIL AND PRECONDITIONING GRADIENTS
+
+Some aspects of BOIL can be explained by preconditioning gradients (Lee & Choi, 2018; Flennerhag et al., 2020). Preconditioning gradients occur when a particular layer is shared over all tasks, warping the spaces (e.g., rotating and scaling). For instance, one might consider the frozen head of BOIL to be a warp layer of the entire body (Flennerhag et al., 2020).
+
Preconditioning gradients can avoid overfitting in a high-capacity model (Flennerhag et al., 2020), and this benefit carries over to BOIL. Indeed, many prior studies have suffered from overfitting, making it challenging to train a backbone network larger than the 4conv network with 32 filters (Finn et al., 2017). By contrast, BOIL increases validation accuracy with larger networks: the accuracy of models with 32, 64, and 128 filters rises to 64.02, 66.72, and 69.23, respectively, without overfitting. In Appendix E, we report these results along with the training and validation accuracy curves of BOIL for the three network sizes, showing that the larger networks train well. We further hypothesize that the head is the most critical part of the overfitting problem, and that BOIL deals with it by simply ignoring the head in the inner loops.
+
+However, one essential difference between BOIL and the preconditioning gradients is whether the head is frozen. Prior studies did not freeze the last fully connected layer or used any additional fully connected layer to precondition the gradients, and hence representation reuse is still the major factor of their training. To the best of our knowledge, BOIL is the first approach that enforces representation change by freezing the head in the inner loops.
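The core of BOIL, zeroing the head's inner-loop learning rate, can be sketched on a toy linear model. This is a minimal numpy illustration with hand-derived gradients; `inner_loop_step`, `alpha_b`, and `alpha_h` are illustrative names standing in for the body and head of the real 4conv network, not the paper's code.

```python
import numpy as np

def forward(Wb, Wh, x):
    """Body produces features; the head maps features to outputs."""
    return Wh @ (Wb @ x)

def inner_loop_step(Wb, Wh, x, y, alpha_b, alpha_h):
    """One task-specific SGD step on 0.5 * ||Wh Wb x - y||^2.

    BOIL corresponds to alpha_h = 0 (frozen head), ANIL to alpha_b = 0,
    and MAML to alpha_b == alpha_h. Gradients are derived by hand.
    """
    z = Wb @ x
    err = Wh @ z - y                    # residual
    grad_Wh = np.outer(err, z)          # dL/dWh
    grad_Wb = np.outer(Wh.T @ err, x)   # dL/dWb (chain rule through the head)
    return Wb - alpha_b * grad_Wb, Wh - alpha_h * grad_Wh

rng = np.random.default_rng(0)
Wb, Wh = rng.normal(size=(3, 4)), rng.normal(size=(2, 3))
x, y = rng.normal(size=4), rng.normal(size=2)

loss_before = 0.5 * np.sum((forward(Wb, Wh, x) - y) ** 2)
Wb2, Wh2 = inner_loop_step(Wb, Wh, x, y, alpha_b=0.01, alpha_h=0.0)
loss_after = 0.5 * np.sum((forward(Wb2, Wh2, x) - y) ** 2)

assert np.array_equal(Wh, Wh2)   # BOIL: the head is untouched in the inner loop
assert loss_after < loss_before  # body adaptation alone reduces the task loss
```

Setting `alpha_b = 0` instead recovers the ANIL update, and `alpha_b = alpha_h` recovers MAML; under this toy view the three algorithms differ only in these two scalars.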
+
To investigate the gain from representation change, we adapt BOIL to WarpGrad (Flennerhag et al., 2020). Four different models are tested, the architectures of which are fully described in Appendix F.

Table 4: Test accuracy (%) of WarpGrad and BOIL-WarpGrad over $5 \times 100$ tasks.

| Model | Accuracy |
|---|---|
| WarpGrad w/ last warp head | 83.19 ± 0.79 |
| WarpGrad w/o last warp head | 83.16 ± 0.69 |
| BOIL-WarpGrad w/ last warp conv | 83.68 ± 0.82 |
| BOIL-WarpGrad w/o last warp conv | 84.88 ± 0.42 |

Table 4 shows the test accuracy of the four models, where each BOIL-WarpGrad model freezes the fully connected layer of the corresponding WarpGrad model. BOIL-WarpGrad improves on WarpGrad, and BOIL-WarpGrad without the last warp conv improves on BOIL-WarpGrad with it. The latter result indicates that, to support BOIL, the last convolution layer must not be fixed but rather updated during the inner loops.
+
+# 4 REPRESENTATION CHANGE IN BOIL
+
# 4.1 REPRESENTATION CHANGE BEFORE/AFTER ADAPTATION
+
+
Figure 2: Cosine similarity of 4conv network. (a) MAML; (b) ANIL; (c) BOIL.
+
+To analyze whether the learning scheme of BOIL is representation reuse or representation change, we explore the layer-wise alteration of the representations before and after adaptation. We compute the cosine similarities and CKA values of the convolution layers with the meta-trained 4conv network (as detailed in Appendix A). We first investigate the cosine similarity between the representations of a query set including 5 classes and 15 samples per class from miniImageNet after every convolution module. In Figure 2, the orange line represents the average of the cosine similarity between the samples having the same class, and the blue line represents the average of the cosine similarity between the samples having different classes. In Figure 2, the left panel of each algorithm is before the inner loop adaptation, and the right panel is after inner loop adaptation.
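The intra-/inter-class averages described above can be computed roughly as follows. This is a toy sketch, with synthetic 2-D features standing in for real conv activations; `class_cosine_similarities` is an illustrative helper, not the paper's analysis code.

```python
import numpy as np

def class_cosine_similarities(features, labels):
    """Average pairwise cosine similarity within and across classes.

    `features` is an (N, D) array of flattened layer activations and `labels`
    an (N,) array of class ids; returns (intra_class_mean, inter_class_mean).
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                                # all pairwise cosine similarities
    same = labels[:, None] == labels[None, :]    # same-class mask
    off_diag = ~np.eye(len(labels), dtype=bool)  # exclude self-similarity
    intra = sim[same & off_diag].mean()
    inter = sim[~same].mean()
    return intra, inter

# Toy check: two well-separated clusters should give intra >> inter.
rng = np.random.default_rng(0)
a = rng.normal(loc=(5.0, 0.0), scale=0.1, size=(15, 2))
b = rng.normal(loc=(0.0, 5.0), scale=0.1, size=(15, 2))
features = np.vstack([a, b])
labels = np.array([0] * 15 + [1] * 15)
intra, inter = class_cosine_similarities(features, labels)
assert intra > 0.99 and inter < 0.1
```

In the paper's setting, a large gap between the two averages at a layer (as in Figure 2 after conv4) indicates that the representations at that layer separate the classes.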
+
+The key observations from Figure 2, which are also discussed in Section 4.2 with other experiments, are as follows:
+
- The cosine similarities of MAML/ANIL (Figure 2a and Figure 2b) show similar patterns, supporting representation reuse. Their patterns show no noticeable difference before and after adaptation: the average cosine similarity decreases monotonically through the layers, and the representations become separable by class at the last convolution layer. These observations indicate that the effectiveness of MAML/ANIL leans heavily on the meta-initialized body, not on the task-specific adaptation.
- The cosine similarities of BOIL (Figure 2c) show a different pattern from those of MAML/ANIL, supporting representation change: BOIL's pattern changes to distinguish classes after adaptation. Before adaptation, BOIL reduces the average cosine similarity only up to conv3, and all representations are concentrated regardless of class after the last convolution layer; hence, BOIL's meta-initialized body cannot distinguish the classes. After adaptation, however, the similarity between different classes rapidly decreases at conv4, which means that the body learns to distinguish the classes through adaptation.
- The reason the change in BOIL before and after adaptation occurs only at conv4 is a peculiarity of convolutional bodies, analyzed by Zeiler & Fergus (2014). The general, low-level features produced by the earlier convolution layers (e.g., colors, lines, and shapes) change little under task-specific adaptation, whereas the discriminative representations produced by the last convolution layer (conv4) differ from class to class. The importance of the last convolution layer for performance in few-shot image classification tasks is also investigated by Arnold et al. (2019) and Chen et al. (2020). These changes before and after adaptation support the view that BOIL enjoys representation layer reuse at the low and mid levels of the body and representation layer change at the high level of the body. Nevertheless, the degree of representation layer reuse at the low and mid levels in BOIL is lower than in MAML/ANIL, as measured using gradient norms (Appendix G). We also report the cosine similarity including the head in Appendix H.
+
These observations lead us to believe that MAML follows the representation reuse training scheme, whereas BOIL follows the representation change training scheme: representation layer reuse before the last convolution layer and representation layer change at the last convolution layer.
+
+Next, we demonstrate that BOIL enjoys representation layer reuse on the low- and mid-level and representation layer change on the high-level of the body by computing the CKA (Kornblith et al., 2019) before and after adaptation. When the CKA between two representations is close to 1, the representations are almost identical. In Figure 3, as mentioned in Raghu et al. (2020), CKA shows that the MAML/ANIL algorithms do not change the representation in the body. However, BOIL changes the representation of the last convolution layer. This result indicates that the BOIL algorithm learns rapidly through representation change. In addition, the representation change on the Cars data set is described in Appendix I.
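A minimal implementation of linear CKA, following the formula of Kornblith et al. (2019); the toy data and the function name are ours, not the paper's analysis code.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices (Kornblith et al., 2019).

    X is (n, d1) and Y is (n, d2): activations of the same n examples at two
    layers, or at the same layer before/after adaptation. Returns a value in
    [0, 1]; 1 means the representations match up to rotation and scaling.
    """
    X = X - X.mean(axis=0)                       # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2   # ||Y^T X||_F^2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
# CKA is invariant to orthogonal transforms and isotropic scaling ...
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
assert np.isclose(linear_cka(X, 3.0 * X @ Q), 1.0)
# ... and low for unrelated representations.
assert linear_cka(X, rng.normal(size=(50, 8))) < 0.5
```

Under this measure, a CKA near 1 at conv1 through conv4 (MAML/ANIL in Figure 3) means adaptation barely moves the body's representations, while a low CKA at conv4 (BOIL) means the last layer's representations genuinely change.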
+
+
+Figure 3: CKA of 4conv.
+
+# 4.2 EMPIRICAL ANALYSIS OF REPRESENTATION CHANGE IN BOIL
+
+Table 5: Test accuracy (%) of 4conv network according to the head's existence before/after an adaptation.
+
| Adaptation | miniImageNet, w/ head: before | miniImageNet, w/ head: after | miniImageNet, w/o head (NIL): before | miniImageNet, w/o head (NIL): after | Cars, w/ head: before | Cars, w/ head: after |
|---|---|---|---|---|---|---|
| MAML(1) | 19.96 ± 0.25 | 47.44 ± 0.23 | 48.28 ± 0.20 | 47.87 ± 0.14 | 20.05 ± 0.16 | 33.57 ± 0.14 |
| ANIL(1) | 20.09 ± 0.19 | 47.92 ± 0.20 | 48.86 ± 0.12 | 48.86 ± 0.12 | 20.16 ± 0.05 | 34.77 ± 0.31 |
| BOIL(1) | 19.94 ± 0.13 | 49.61 ± 0.16 | 24.07 ± 0.19 | 46.73 ± 0.17 | 19.94 ± 0.06 | 36.12 ± 0.29 |
| MAML(5) | 20.04 ± 0.17 | 61.75 ± 0.42 | 64.61 ± 0.39 | 64.47 ± 0.39 | 19.97 ± 0.18 | 44.56 ± 0.21 |
| ANIL(5) | 20.09 ± 0.13 | 63.04 ± 0.42 | 66.11 ± 0.51 | 66.11 ± 0.51 | 20.08 ± 0.07 | 46.55 ± 0.29 |
| BOIL(5) | 20.04 ± 0.21 | 66.45 ± 0.37 | 32.03 ± 0.16 | 64.61 ± 0.27 | 20.06 ± 0.16 | 50.64 ± 0.22 |
+
Table 5 describes the test accuracy on miniImageNet and Cars of the model meta-trained on miniImageNet, before and after an inner-loop update and with or without the head. To evaluate performance without a classifier, we first create a template for each class by averaging the representations of its support-set samples. The class of a query sample is then predicted as that of the template with the highest cosine similarity to the sample's representation. This is identical to NIL-testing in Raghu et al. (2020).
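The template-matching procedure above can be sketched as follows; this is a toy illustration with 2-D features, and `nil_predict` is an illustrative name, not the paper's code.

```python
import numpy as np

def nil_predict(support_feats, support_labels, query_feats, n_way):
    """NIL-testing (Raghu et al., 2020): classify without any head.

    Each class template is the mean body representation of its support
    samples; a query is assigned to the template with the highest cosine
    similarity.
    """
    templates = np.stack([
        support_feats[support_labels == c].mean(axis=0) for c in range(n_way)
    ])
    t = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    return (q @ t.T).argmax(axis=1)  # nearest template by cosine similarity

# Toy 2-way episode with well-separated features.
rng = np.random.default_rng(0)
support = np.vstack([rng.normal((4, 0), 0.2, (5, 2)),
                     rng.normal((0, 4), 0.2, (5, 2))])
support_y = np.array([0] * 5 + [1] * 5)
query = np.vstack([rng.normal((4, 0), 0.2, (3, 2)),
                   rng.normal((0, 4), 0.2, (3, 2))])
pred = nil_predict(support, support_y, query, n_way=2)
assert (pred == np.array([0, 0, 0, 1, 1, 1])).all()
```

Because no head is involved, this evaluation isolates how discriminative the body's representations are on their own, which is exactly the quantity the "w/o head" columns of Table 5 measure.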
+
+The results provide some intriguing interpretations:
+
- With the head for all algorithms. Before adaptation, no algorithm can distinguish the classes (20%, i.e., chance level) on either the same or the cross domain. This status can be considered an optimum of the meta-initialization; we discuss it further in Appendix L. In BOIL, the representations have to move quickly toward their corresponding frozen head. After adaptation, BOIL clearly outperforms the other algorithms, which means that the representation change of BOIL is more effective than the representation reuse of MAML/ANIL.
+- Without the head in MAML/ANIL. In this setting, representations from the body are evaluated before and after adaptation. Before adaptation, MAML and ANIL already generate sufficient representations to classify, and adaptation makes little or no difference. The former observation matches the cosine similarity gap between an intra- and inter-class on each left panel in Figure 2a and Figure 2b, and the latter observation matches the CKA values of close to or exactly 1 on conv4 of MAML/ANIL in Figure 3.
- Without the head in BOIL. BOIL shows a steep performance improvement through adaptation on both the same and the cross domain. This result implies that the body of BOIL can be updated task-specifically. It matches Figure 2c, where the cosine similarity gap between intra- and inter-class pairs after conv4 is near zero before adaptation but increases after adaptation, implying that poor representations can be dramatically improved in BOIL. The low CKA value on conv4 of BOIL in Figure 3 is therefore natural.
+
+To summarize, the meta-initialization by MAML and ANIL provides efficient representations through the body even before adaptation. By contrast, although BOIL's meta-initialization provides less efficient representations compared to MAML and ANIL, the body can extract more efficient representations through task-specific adaptation based on representation change.
+
Note that the penultimate layer (i.e., conv4) of BOIL acts differently from the output layer (i.e., head) of MAML/ANIL. The penultimate layer of BOIL might seem like a pseudo-head, but it is not. The output layer of MAML/ANIL is adapted on top of well-represented features (i.e., features after conv4), whereas the penultimate layer of BOIL is adapted on top of poorly-represented features (i.e., features after conv3). This means that the role of the head layer in MAML/ANIL is to draw a simple decision boundary over the high-level features output by the last convolution layer, whereas the penultimate layer of BOIL acts as a non-linear transformation so that the fixed head can effectively serve as a classifier. We also report empirical analyses of the penultimate layer of BOIL and an ablation study on the learned layers in Appendix J and Appendix K.
+
+# 5 BOIL TO A LARGER NETWORK
+
+Many recent studies (Vuorio et al., 2019; Rusu et al., 2018; Sun et al., 2019) have used deeper networks such as ResNet (He et al., 2016), Wide-ResNet (Zagoruyko & Komodakis, 2016), and DenseNet (Huang et al., 2017) as a backbone network. The deeper networks, in general, use feature wiring structures to facilitate the feature propagation. We explore BOIL's applicability to a deeper network with the wiring structure, ResNet-12, and propose a simple trick to boost representation change by disconnecting the last skip connection. This trick is described in Section 5.1.
+
+Table 6: 5-Way 5-Shot test accuracy (%) of ResNet-12. LSC means Last Skip Connection.
+
| Model | miniImageNet → miniImageNet | miniImageNet → tieredImageNet | miniImageNet → Cars | Cars → Cars | Cars → miniImageNet | Cars → CUB |
|---|---|---|---|---|---|---|
| MAML w/ LSC | 68.51 ± 0.39 | 71.67 ± 0.13 | 43.46 ± 0.15 | 75.49 ± 0.20 | 34.42 ± 0.06 | 35.87 ± 0.19 |
| MAML w/o LSC | 67.87 ± 0.22 | 70.31 ± 0.10 | 41.40 ± 0.11 | 73.63 ± 0.26 | 37.65 ± 0.11 | 34.77 ± 0.26 |
| ANIL w/ LSC | 68.54 ± 0.34 | 71.93 ± 0.11 | 45.13 ± 0.15 | 79.45 ± 0.23 | 35.03 ± 0.07 | 35.09 ± 0.19 |
| ANIL w/o LSC | 67.20 ± 0.13 | 69.79 ± 0.24 | 43.46 ± 0.18 | 75.32 ± 0.15 | 38.15 ± 0.15 | 36.06 ± 0.14 |
| BOIL w/ LSC | 70.50 ± 0.28 | 71.86 ± 0.21 | 49.69 ± 0.17 | 80.98 ± 0.14 | 45.89 ± 0.32 | 43.34 ± 0.21 |
| BOIL w/o LSC | 71.30 ± 0.28 | 74.12 ± 0.30 | 49.71 ± 0.28 | 83.99 ± 0.20 | 48.41 ± 0.18 | 44.23 ± 0.18 |
+
+Table 6 shows the test accuracy results of ResNet-12, which is meta-trained and meta-tested with various data sets according to the fineness of the domains. This result indicates that BOIL can be applied to other general architectures by showing a better performance than MAML not only on standard benchmark data sets but also on cross-domain adaptation. Note that BOIL has achieved the best performance without the last skip connection in every experiment.
+
+# 5.1 DISCONNECTION TRICK
+
Connecting the two learning schemes with ResNet's wiring structure, we propose a simple trick, referred to as the disconnection trick, that removes the skip connection of the last residual block. In Section 4.1, we confirmed that a model trained with BOIL applies representation layer reuse at the low and mid levels of the body and representation layer change at the high level of the body.
+
+
+Figure 4: Cosine similarity of ResNet-12.
+
To investigate the effects of skip connections on the representation change learning scheme, we analyze the cosine similarity after every residual block in the same way as in Figure 2. Figure 4a shows that, with skip connections on all blocks, the representations change rapidly not only in the last block but also in the earlier blocks. Because skip connections strengthen gradient back-propagation, the scope of representation layer change extends toward the front of the network. Therefore, to retain representation layer reuse at the front blocks while keeping representation layer change at the last block, we weaken the gradient back-propagation from the loss by removing the skip connection of the last block. As shown in Figure 4b, this simple disconnection trick restores representation layer reuse at the front blocks and representation layer change at the last block, and it significantly improves performance, as described in Table 6.
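Structurally, the disconnection trick only toggles the additive skip of the last block. The following toy sketch, with flat features and random matrices standing in for the block's convolutions, illustrates that structural change; it is not the actual ResNet-12 code.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, weights, skip=True):
    """A simplified residual block on flat features.

    `weights` is a list of square matrices standing in for the block's conv
    layers. With skip=True the input is added to the output (standard
    ResNet); skip=False is the disconnection trick applied to the last block.
    """
    out = x
    for w in weights:
        out = relu(w @ out)
    return out + x if skip else out

rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 8)) * 0.1 for _ in range(3)]
x = rng.normal(size=8)

with_skip = residual_block(x, weights, skip=True)
without_skip = residual_block(x, weights, skip=False)  # last-block variant
assert np.allclose(with_skip - without_skip, x)        # the skip adds the input back
```

Removing the additive path in the last block means the loss gradient reaching the earlier blocks must pass through the block's weights rather than flowing through the identity shortcut, which is the weakening of back-propagation described above.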
+
We also report analyses of ResNet-12 conducted in the same way as for the 4conv network, together with the representation layer change in the last block, in Appendix M and Appendix N.
+
+# 6 RELATED WORK
+
MAML (Finn et al., 2017) is one of the most well-known algorithms in gradient-based meta-learning, achieving competitive performance on few-shot learning benchmark data sets (Vinyals et al., 2016; Ren et al., 2018; Bertinetto et al., 2018; Oreshkin et al., 2018). To tackle the task ambiguity caused by insufficient data in few-shot learning, numerous studies have sought to extend MAML in various ways. Some studies (Oreshkin et al., 2018; Sun et al., 2019; Vuorio et al., 2019) have proposed feature modulators that make task-specific adaptation more amenable by shifting and scaling the representations extracted from the network body. In response to the lack of data for task-specific updates, there have also been attempts to adapt a small number of additional parameters rather than all model parameters (Zintgraf et al., 2018; Rusu et al., 2018; Lee & Choi, 2018; Flennerhag et al., 2020). In a similar vein, some studies suggested updating only the head in the inner loop, which has been further improved by updating the head with linearly separable objectives (Raghu et al., 2020; Bertinetto et al., 2018; Lee et al., 2019). Grant et al. (2018); Finn et al. (2018); Yoon et al. (2018); Na et al. (2019) have taken a probabilistic approach using Bayesian modeling and variational inference. In addition, Chen et al. (2020) showed that allowing discovered task-specific modules (i.e., a (small) subset of a network) to adapt achieves better performance than allowing the whole network to adapt. Notably, in few-shot image classification tasks, the authors showed that the last convolution layer (i.e., the penultimate layer) is the most important. Similar results were observed by Arnold et al. (2019).
+
To tackle more realistic problems, few-shot learning has recently been expanding beyond standard $n$-way $k$-shot classification. Triantafillou et al. (2019) constructed a more scalable and realistic data set, called Meta-Dataset, which contains several data sets collected from different sources. In addition, Na et al. (2019) addressed $n$-way any-shot classification by considering the imbalanced data distributions of the real world. Furthermore, some studies (Cai & Shen, 2020; Chen et al., 2019) have recently explored few-shot learning under cross-domain adaptation, which is one of the ultimate goals of meta-learning. Guo et al. (2019) suggested a new cross-domain benchmark data set for few-shot learning and showed that current meta-learning algorithms (Finn et al., 2017; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Lee et al., 2019) underperform simple fine-tuning on cross-domain adaptation. We demonstrated that task-specific updates with representation change are efficient for cross-domain adaptation.
+
+# 7 CONCLUSION
+
In this study, we investigated the necessity of representation change for solving domain-agnostic tasks and proposed the BOIL algorithm, which enforces representation change by updating only the body of the model in the inner loop. We connected BOIL with preconditioning gradients and showed that the benefits of this connection, such as reduced overfitting and robustness to hyperparameter changes, remain valid. Furthermore, we adapted BOIL to WarpGrad and demonstrated improved performance; this result decouples the benefits of representation change from those of preconditioning gradients. Using cosine similarity and CKA, we then demonstrated that BOIL trains a model to follow the representation layer reuse scheme at the low and mid levels of the body and the representation layer change scheme at the high level of the body. We validated the BOIL algorithm on various data sets and on cross-domain adaptation using a standard 4conv network and ResNet-12. The experimental results showed a significant improvement over MAML/ANIL, particularly on cross-domain adaptation, implying that representation change should be considered for adaptation to unseen tasks.
+
We hope that our study inspires representation change in gradient-based meta-learning approaches. Our approach is the first to study representation change and focuses on classification tasks; however, we believe it is also applicable to other methods and fields because the algorithm imposes no restrictions. Furthermore, connecting representation change to the memorization overfitting addressed by Yin et al. (2019) and Rajendran et al. (2020) would be an interesting topic.
+
+# ACKNOWLEDGMENTS
+
+This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program(KAIST)) and by Korea Electric Power Corporation (Grant number: R18XA05).
+
+# REFERENCES
+
+Sebastien MR Arnold, Shariq Iqbal, and Fei Sha. Decoupling adaptation from modeling with meta-optimizers for meta learning. arXiv preprint arXiv:1910.13603, 2019.
+Luca Bertinetto, Joao F Henriques, Philip HS Torr, and Andrea Vedaldi. Meta-learning with differentiable closed-form solvers. arXiv preprint arXiv:1805.08136, 2018.
+John Cai and Sheng Mei Shen. Cross-domain few-shot learning with meta fine-tuning. arXiv preprint arXiv:2005.10544, 2020.
+Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. arXiv preprint arXiv:1904.04232, 2019.
+Yutian Chen, Abram L Friesen, Feryal Behbahani, Arnaud Doucet, David Budden, Matthew Hoffman, and Nando de Freitas. Modular meta-learning with shrinkage. Advances in Neural Information Processing Systems, 33, 2020.
+Tristan Deleu, Tobias Würfl, Mandana Samiei, Joseph Paul Cohen, and Yoshua Bengio. Torchmeta: A Meta-Learning library for PyTorch, 2019. URL https://arxiv.org/abs/1909.06576. Available at: https://github.com/tristandeleu/pytorch-meta.
+Rafael Rego Drumond, Lukas Brinkmeyer, Josif Grabocka, and Lars Schmidt-Thieme. Hidra: Head initialization across dynamic targets for robust architectures. In Proceedings of the 2020 SIAM International Conference on Data Mining, pp. 397-405. SIAM, 2020.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126-1135. JMLR.org, 2017.
+Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In Advances in Neural Information Processing Systems, pp. 9516–9527, 2018.
+Sebastian Flennerhag, Andrei A. Rusu, Razvan Pascanu, Francesco Visin, Hujun Yin, and Raia Hadsell. Meta-learning with warped gradient descent. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=rkeiQlBFPB.
+Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting gradient-based meta-learning as hierarchical bayes. arXiv preprint arXiv:1801.08930, 2018.
+Yunhui Guo, Noel CF Codella, Leonid Karlinsky, John R Smith, Tajana Rosing, and Rogerio Feris. A new benchmark for evaluation of cross-domain few-shot learning. arXiv preprint arXiv:1912.07200, 2019.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Nathan Hilliard, Lawrence Phillips, Scott Howland, Artem Yankov, Courtney D Corley, and Nathan O Hadas. Few-shot learning with metric-agnostic conditional embeddings. arXiv preprint arXiv:1802.04376, 2018.
+Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700-4708, 2017.
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
+Gregory Koch. Siamese neural networks for one-shot image recognition. 2015.
+Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. arXiv preprint arXiv:1905.00414, 2019.
+
+Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In Proceedings of the IEEE international conference on computer vision workshops, pp. 554-561, 2013.
+Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
+Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10657-10665, 2019.
+Yoonho Lee and Seungjin Choi. Gradient-based meta-learning with learned layerwise metric and subspace. In International Conference on Machine Learning, pp. 2927-2936, 2018.
+Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013.
+Leland McInnes, John Healy, Nathaniel Saul, and Lukas Großberger. Umap: Uniform manifold approximation and projection. Journal of Open Source Software, 3(29), 2018.
+Donghyun Na, Hae Beom Lee, Saehoon Kim, Minseop Park, Eunho Yang, and Sung Ju Hwang. Learning to balance: Bayesian meta-learning for imbalanced and out-of-distribution tasks. arXiv preprint arXiv:1905.12917, 2019.
+Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999, 2018.
+Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722-729. IEEE, 2008.
+Boris Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. Tadam: Task dependent adaptive metric for improved few-shot learning. In Advances in Neural Information Processing Systems, pp. 721-731, 2018.
+Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. Rapid learning or feature reuse? towards understanding the effectiveness of maml. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rkgMkCEtPB.
+Janarthanan Rajendran, Alexander Irpan, and Eric Jang. Meta-learning requires meta-augmentation. Advances in Neural Information Processing Systems, 33, 2020.
+Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016.
+Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B Tenenbaum, Hugo Larochelle, and Richard S Zemel. Meta-learning for semi-supervised few-shot classification. arXiv preprint arXiv:1803.00676, 2018.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015.
+Andrei A Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. arXiv preprint arXiv:1807.05960, 2018.
+Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in neural information processing systems, pp. 4077-4087, 2017.
+Qianru Sun, Yaoyao Liu, Tat-Seng Chua, and Bernt Schiele. Meta-transfer learning for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 403-412, 2019.
+Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1199-1208, 2018.
+
+Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, et al. Meta-dataset: A dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096, 2019.
+Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang, and Ming-Hsuan Yang. Cross-domain few-shot classification via learned feature-wise transformation. arXiv preprint arXiv:2001.08735, 2020.
+Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in neural information processing systems, pp. 3630-3638, 2016.
+Risto Vuorio, Shao-Hua Sun, Hexiang Hu, and Joseph J Lim. Multimodal model-agnostic meta-learning via task-aware modulation. In Advances in Neural Information Processing Systems, pp. 1-12, 2019.
+P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
+Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015.
+Mingzhang Yin, George Tucker, Mingyuan Zhou, Sergey Levine, and Chelsea Finn. Meta-learning without memorization. In International Conference on Learning Representations, 2019.
+Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian model-agnostic meta-learning. In Advances in Neural Information Processing Systems, pp. 7332-7342, 2018.
+Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
+Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pp. 818-833. Springer, 2014.
+Luisa M Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast context adaptation via meta-learning. arXiv preprint arXiv:1810.03642, 2018.
+
+# A IMPLEMENTATION DETAIL
+
+# A.1 $n$ -WAY $k$ -SHOT SETTING
+
We experiment in the 5-way 1-shot and 5-way 5-shot settings; the number of shots is marked in parentheses in the algorithm-name column of all tables. During meta-training, models are updated with a single inner-loop step, and the meta-batch size for the outer loop is set to 4. During meta-testing, the number of task-specific (inner-loop) updates is the same as in meta-training. All reported results are based on the model with the best validation accuracy.
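A minimal sketch of how such $n$-way $k$-shot episodes can be sampled; the function and the data layout are illustrative assumptions, not the paper's data pipeline.

```python
import random

def sample_episode(class_to_images, n_way=5, k_shot=5, n_query=15):
    """Sample one n-way k-shot task.

    `class_to_images` maps a class name to its image ids. Each episode draws
    n_way classes, then k_shot support and n_query query images per class,
    with episode-local labels 0..n_way-1.
    """
    classes = random.sample(sorted(class_to_images), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        imgs = random.sample(class_to_images[cls], k_shot + n_query)
        support += [(img, label) for img in imgs[:k_shot]]
        query += [(img, label) for img in imgs[k_shot:]]
    return support, query

# Toy pool shaped like miniImageNet meta-training (64 classes, 600 images each).
random.seed(0)
data = {f"class{i}": list(range(i * 1000, i * 1000 + 600)) for i in range(64)}
s, q = sample_episode(data, n_way=5, k_shot=5, n_query=15)
assert len(s) == 25 and len(q) == 75
assert len({lbl for _, lbl in s}) == 5
```

The inner loop adapts on the support set and the outer loop (or meta-test evaluation) uses the query set, matching the single inner-loop step and 15-query convention described above.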
+
+# A.2 MODEL IMPLEMENTATIONS
+
In our experiments, we employ the 4conv network and ResNet-12 for the MAML/ANIL and BOIL algorithms. The 4conv network consists of 4 convolution modules, each comprising a $3 \times 3$ convolution layer with 64 filters, batch normalization (Ioffe & Szegedy, 2015), a ReLU non-linearity, and a $2 \times 2$ max-pool. ResNet-12 (He et al., 2016) has the same structure as the feature extractor of TADAM (Oreshkin et al., 2018). It has four residual blocks, each consisting of 3 modules of convolution, batch normalization, and leaky ReLU (Xu et al., 2015). At the end of each residual block, a $2 \times 2$ max-pool is applied, and the number of convolution filters is doubled starting from 64. Each block also has a wiring structure known as a skip connection, an additive link between the block's input and output features that strengthens feature propagation.
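Assuming same-padded stride-1 convolutions and non-overlapping $2 \times 2$ pooling, the spatial sizes of the 4conv body on an $84 \times 84$ input work out as follows; this is a shape sanity-check sketch, not the paper's code.

```python
def conv_module_out(hw, kernel=3, pad=1, pool=2):
    """Spatial size after one 4conv module: 3x3 same-padded conv + 2x2 max-pool."""
    conv = hw + 2 * pad - kernel + 1  # stride-1 convolution output size
    return conv // pool               # non-overlapping 2x2 max-pool

hw = 84                               # 84x84 input resolution (Table 8)
for _ in range(4):                    # four conv modules: 84 -> 42 -> 21 -> 10 -> 5
    hw = conv_module_out(hw)
assert hw == 5                        # body output: a 64 x 5 x 5 feature map
assert 64 * hw * hw == 1600           # flattened feature dimension fed to the head
```

The linear head then maps this 1600-dimensional feature to the $n$-way logits.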
+
+Our proposed algorithm can be implemented simply by using separate learning rates for the body and the head. Table 7 shows the learning rates for each network and algorithm. $\alpha_{b}$ and $\alpha_{h}$ are the inner-loop learning rates of the body and the head, and $\beta_{b}$ and $\beta_{h}$ are the corresponding outer-loop learning rates.
+
+| | 4conv network | | | ResNet-12 | | |
| | MAML | ANIL | BOIL | MAML | ANIL | BOIL |
| $\alpha_b$ | 0.5 | 0.0 | 0.5 | 0.3 | 0.0 | 0.3 |
| $\alpha_h$ | 0.5 | 0.5 | 0.0 | 0.3 | 0.3 | 0.0 |
| $\beta_b$ | 0.001 | 0.001 | 0.001 | 0.0006 | 0.0006 | 0.0006 |
| $\beta_h$ | 0.001 | 0.001 | 0.001 | 0.0006 | 0.0006 | 0.0006 |
+
+Table 7: Learning rates according to the algorithms.
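The learning-rate split above suggests a direct implementation. A minimal first-order sketch (assuming the model exposes `body` and `head` submodules, which is our naming choice; a full second-order MAML would additionally pass `create_graph=True` to `torch.autograd.grad`):

```python
import torch
import torch.nn as nn

def inner_sgd_step(model, loss, alpha_b, alpha_h):
    """One inner-loop SGD step with separate body/head learning rates.
    alpha_h = 0 freezes the head (BOIL); alpha_b = 0 freezes the body (ANIL)."""
    for lr, module in ((alpha_b, model.body), (alpha_h, model.head)):
        if lr == 0.0:
            continue  # a zero learning rate is equivalent to freezing
        grads = torch.autograd.grad(loss, module.parameters(),
                                    retain_graph=True)
        with torch.no_grad():
            for p, g in zip(module.parameters(), grads):
                p -= lr * g

def make_outer_optimizer(model, beta_b, beta_h):
    # The outer loop is an ordinary optimizer with two parameter groups,
    # so beta_b and beta_h can differ (here they are equal in Table 7).
    return torch.optim.Adam([
        {"params": model.body.parameters(), "lr": beta_b},
        {"params": model.head.parameters(), "lr": beta_h},
    ])
```

Setting $\alpha_h = 0$ or $\alpha_b = 0$ reproduces the BOIL and ANIL columns of Table 7 without any architectural change.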
+
+# A.3 DATASET
+
+We validate the BOIL and MAML/ANIL algorithms on several data sets chosen by image size and fineness. Table 8 summarizes the data sets used.
+
+Table 8: Summary of data sets.
+
+| Data sets | miniImageNet | tieredImageNet | Cars | CUB |
| Source | Russakovsky et al. (2015) | Russakovsky et al. (2015) | Krause et al. (2013) | Welinder et al. (2010) |
| Image size | 84×84 | 84×84 | 84×84 | 84×84 |
| Fineness | Coarse | Coarse | Fine | Fine |
| # meta-training classes | 64 | 351 | 98 | 100 |
| # meta-validation classes | 16 | 97 | 49 | 50 |
| # meta-testing classes | 20 | 160 | 49 | 50 |
| Split setting | Vinyals et al. (2016) | Ren et al. (2018) | Tseng et al. (2020) | Hilliard et al. (2018) |
| Data sets | FC100 | CIFAR-FS | VGG-Flower | Aircraft |
| Source | Krizhevsky et al. (2009) | Krizhevsky et al. (2009) | Nilsback & Zisserman (2008) | Maji et al. (2013) |
| Image size | 32×32 | 32×32 | 32×32 | 32×32 |
| Fineness | Coarse | Coarse | Fine | Fine |
| # meta-training classes | 60 | 64 | 71 | 70 |
| # meta-validation classes | 20 | 16 | 16 | 15 |
| # meta-testing classes | 20 | 20 | 15 | 15 |
| Split setting | Bertinetto et al. (2018) | Oreshkin et al. (2018) | Na et al. (2019) | Na et al. (2019) |
+
+# B VISUALIZATION USING UMAP
+
+In Section 4.1, we show that conv4 of the 4conv network is the critical layer where representation change happens. We visualize these representations (the output of conv4) for samples from various data sets using UMAP (McInnes et al., 2018), an algorithm for general non-linear dimension reduction. Samples with the same color belong to the same class. Many examples are consistent with the intuition shown in Figure 1: when 1) similar instances from different classes are sampled together, or 2) representations learned on the meta-train data set cannot capture the meta-test data set, MAML/ANIL struggles to cluster samples in representation space because it relies on representation reuse.
+
+# B.1 BENCHMARK DATA SETS
+
+
+(a) MAML.
+
+
+(b) ANIL.
+
+
+(c) BOIL.
+
+
+Figure 5: UMAP of samples from miniImageNet using the model meta-trained on miniImageNet.
+(a) MAML.
+Figure 6: UMAP of samples from Cars using the model meta-trained on Cars.
+
+
+(b) ANIL.
+
+
+(c) BOIL.
+
+# B.2 CROSS-DOMAIN ADAPTATION
+
+
+(a) MAML.
+
+
+(b) ANIL.
+
+
+(c) BOIL.
+
+
+Figure 7: UMAP of samples from tieredImageNet using the model meta-trained on miniImageNet.
+(a) MAML.
+
+
+(b) ANIL.
+
+
+(c) BOIL.
+
+
+Figure 8: UMAP of samples from Cars using the model meta-trained on miniImageNet.
+(a) MAML.
+
+
+(b) ANIL.
+
+
+(c) BOIL.
+
+
+Figure 9: UMAP of samples from miniImageNet using the model meta-trained on Cars.
+(a) MAML.
+Figure 10: UMAP of samples from CUB using the model meta-trained on Cars.
+
+
+(b) ANIL.
+
+
+(c) BOIL.
+
+# C RESULTS ON OTHER DATA SETS
+
+We apply our algorithm to other data sets with an image size of $32 \times 32$. As in the analyses of Section 4, these data sets can be divided into two general data sets, CIFAR-FS (Bertinetto et al., 2018) and FC100 (Oreshkin et al., 2018), and two specific data sets, VGG-Flower (Nilsback & Zisserman, 2008) and Aircraft (Maji et al., 2013). Table 9, Table 10, and Table 11 generally show the superiority of BOIL even when the image size is very small.
+
+Table 9: Test accuracy (%) of 4conv network on benchmark dataset.
+
+| Domain | General (Coarse-grained) | Specific (Fine-grained) |
| Dataset | CIFAR-FS | FC100 | VGG-Flower | Aircraft |
| MAML(1) | 56.55 ± 0.45 | 35.99 ± 0.48 | 60.94 ± 0.35 | 52.27 ± 0.23 |
| ANIL(1) | 57.13 ± 0.47 | 36.37 ± 0.33 | 63.05 ± 0.30 | 54.54 ± 0.16 |
| BOIL(1) | 58.03 ± 0.43 | 38.93 ± 0.45 | 65.64 ± 0.26 | 53.37 ± 0.29 |
| MAML(5) | 70.10 ± 0.29 | 47.58 ± 0.30 | 75.13 ± 0.43 | 63.44 ± 0.26 |
| ANIL(5) | 69.87 ± 0.39 | 45.65 ± 0.44 | 72.07 ± 0.48 | 63.21 ± 0.16 |
| BOIL(5) | 73.61 ± 0.32 | 51.66 ± 0.32 | 79.81 ± 0.42 | 66.03 ± 0.14 |
+
+Table 10: Test accuracy $(\%)$ of 4conv network on cross-domain adaptation.
+
+| adaptation | general to general | general to specific | specific to general | specific to specific |
| meta-train | FC100 | CIFAR-FS | CIFAR-FS | CIFAR-FS | VGG-Flower | VGG-Flower | Aircraft | VGG-Flower |
| meta-test | CIFAR-FS | FC100 | VGG-Flower | Aircraft | CIFAR-FS | FC100 | VGG-Flower | Aircraft |
| MAML(1) | 62.58 ± 0.35 | 52.81 ± 0.28 | 49.69 ± 0.24 | 27.03 ± 0.18 | 34.38 ± 0.19 | 32.45 ± 0.23 | 37.05 ± 0.19 | 25.70 ± 0.19 |
| ANIL(1) | 63.05 ± 0.39 | 55.36 ± 0.47 | 50.61 ± 0.29 | 27.39 ± 0.09 | 35.90 ± 0.20 | 33.84 ± 0.30 | 31.59 ± 0.22 | 24.55 ± 0.14 |
| BOIL(1) | 60.71 ± 0.43 | 54.18 ± 0.35 | 56.77 ± 0.35 | 29.29 ± 0.10 | 39.15 ± 0.20 | 34.37 ± 0.14 | 49.85 ± 0.26 | 29.05 ± 0.16 |
| MAML(5) | 75.32 ± 0.34 | 63.00 ± 0.18 | 64.49 ± 0.23 | 33.85 ± 0.25 | 46.81 ± 0.11 | 42.06 ± 0.43 | 47.74 ± 0.07 | 30.65 ± 0.19 |
| ANIL(5) | 77.01 ± 0.51 | 63.89 ± 0.16 | 64.20 ± 0.10 | 33.24 ± 0.21 | 44.52 ± 0.25 | 40.51 ± 0.26 | 50.28 ± 0.12 | 28.74 ± 0.23 |
| BOIL(5) | 76.33 ± 0.30 | 68.55 ± 0.20 | 74.93 ± 0.11 | 39.96 ± 0.11 | 55.48 ± 0.21 | 47.17 ± 0.38 | 64.68 ± 0.23 | 39.81 ± 0.25 |
+
+Table 11: 5-Way 5-Shot test accuracy (%) of ResNet-12. The lsc means the last skip connection.
+
+| Meta-train | CIFAR-FS | VGG-Flower |
| Meta-test | CIFAR-FS | FC100 | VGG-Flower | VGG-Flower | CIFAR-FS | Aircraft |
| MAML w/ lsc | 75.30 ± 0.19 | 69.34 ± 0.35 | 65.82 ± 0.30 | 74.82 ± 0.29 | 42.91 ± 0.20 | 28.50 ± 0.12 |
| MAML w/o lsc | 71.72 ± 0.19 | 67.60 ± 0.34 | 59.20 ± 0.26 | 72.07 ± 0.29 | 39.27 ± 0.23 | 26.94 ± 0.18 |
| ANIL w/ lsc | 74.87 ± 0.11 | 75.34 ± 0.45 | 63.72 ± 0.40 | 77.02 ± 0.29 | 45.80 ± 0.32 | 27.24 ± 0.13 |
| ANIL w/o lsc | 71.39 ± 0.28 | 69.29 ± 0.32 | 52.70 ± 0.24 | 72.13 ± 0.39 | 38.99 ± 0.22 | 26.09 ± 0.08 |
| BOIL w/ lsc | 78.17 ± 0.14 | 77.22 ± 0.45 | 73.90 ± 0.38 | 82.00 ± 0.17 | 50.91 ± 0.35 | 35.54 ± 0.25 |
| BOIL w/o lsc | 77.38 ± 0.10 | 70.98 ± 0.34 | 73.96 ± 0.27 | 83.97 ± 0.17 | 55.82 ± 0.44 | 37.74 ± 0.21 |
+
+# D RESULTS UNDER THE ORIGINAL HYPERPARAMETERS
+
+We also evaluate our algorithm in the original setting (an inner learning rate 50 times smaller than ours) and confirm that BOIL is more robust to hyperparameter changes than MAML; such a characteristic is investigated in Lee & Choi (2018). Table 13 shows the test accuracy of BOIL and MAML/ANIL with the same hyperparameters optimized for MAML, and Figure 11 and Table 12 report accuracy according to the number of adaptation steps. BOIL is the best or near-best even though the hyperparameters are not optimized for it. Moreover, BOIL adapts rapidly, achieving considerable performance with just one adaptation step.
+
+
+Figure 11: Accuracy on miniImageNet according to the number of adaptation(s).
+
+Table 12: Test accuracy (%) according to the number of adaptation(s). Training and testing are on miniImageNet.
+
+| Adaptation # | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| MAML(1) | 32.01 ± 0.24 | 36.21 ± 0.17 | 42.74 ± 0.20 | 45.54 ± 0.18 | 46.04 ± 0.16 | 46.21 ± 0.19 | 46.17 ± 0.18 | 46.20 ± 0.18 | 46.22 ± 0.16 | 46.25 ± 0.18 |
| ANIL(1) | 20.97 ± 0.03 | 31.68 ± 0.25 | 41.41 ± 0.26 | 45.69 ± 0.20 | 46.78 ± 0.26 | 46.95 ± 0.30 | 47.05 ± 0.31 | 47.10 ± 0.30 | 47.17 ± 0.28 | 47.20 ± 0.27 |
| BOIL(1) | 45.79 ± 0.45 | 47.15 ± 0.30 | 47.46 ± 0.34 | 47.61 ± 0.34 | 47.67 ± 0.31 | 47.70 ± 0.32 | 47.70 ± 0.33 | 47.71 ± 0.34 | 47.74 ± 0.32 | 47.76 ± 0.31 |
| MAML(5) | 20.02 ± 0.00 | 20.15 ± 0.02 | 60.31 ± 0.34 | 63.78 ± 0.34 | 64.41 ± 0.35 | 64.55 ± 0.33 | 64.64 ± 0.31 | 64.72 ± 0.31 | 64.77 ± 0.30 | 64.83 ± 0.40 |
| ANIL(5) | 20.00 ± 0.00 | 24.52 ± 0.19 | 51.59 ± 0.17 | 58.84 ± 0.46 | 62.06 ± 0.37 | 62.34 ± 0.36 | 62.45 ± 0.38 | 62.51 ± 0.37 | 62.55 ± 0.38 | 62.59 ± 0.39 |
| BOIL(5) | 58.15 ± 0.23 | 62.42 ± 0.33 | 63.56 ± 0.26 | 64.04 ± 0.30 | 64.21 ± 0.28 | 64.27 ± 0.30 | 64.32 ± 0.30 | 64.35 ± 0.29 | 64.38 ± 0.28 | 64.40 ± 0.28 |
+
+Table 13: Test accuracy (\%) under the same architecture, learning rate, and the number of inner updates with (Finn et al., 2017; Raghu et al., 2020).
+
+| Meta-train | miniImageNet | Cars |
| Meta-test | miniImageNet | tieredImageNet | Cars | Cars | miniImageNet | CUB |
| MAML(1) | 46.25 ± 0.18 | 49.45 ± 0.14 | 34.78 ± 0.36 | 46.02 ± 0.33 | 28.87 ± 0.11 | 29.92 ± 0.23 |
| ANIL(1) | 47.20 ± 0.27 | 50.04 ± 0.13 | 32.87 ± 0.39 | 45.31 ± 0.27 | 29.12 ± 0.11 | 30.39 ± 0.21 |
| BOIL(1) | 47.76 ± 0.31 | 51.35 ± 0.18 | 34.89 ± 0.23 | 50.54 ± 0.41 | 32.40 ± 0.19 | 32.99 ± 0.29 |
| MAML(5) | 64.83 ± 0.30 | 67.06 ± 0.25 | 48.25 ± 0.24 | 69.27 ± 0.27 | 43.52 ± 0.20 | 45.12 ± 0.20 |
| ANIL(5) | 62.59 ± 0.39 | 65.55 ± 0.16 | 45.44 ± 0.18 | 62.67 ± 0.25 | 36.89 ± 0.16 | 40.38 ± 0.19 |
| BOIL(5) | 64.40 ± 0.28 | 65.81 ± 0.26 | 48.39 ± 0.25 | 68.56 ± 0.34 | 43.34 ± 0.21 | 46.32 ± 0.11 |
+
+# E OVERFITTING ISSUE
+
+We employ networks with various numbers of filters: 32, 64, and 128. The best validation scores of these models are 64.01, 66.72, and 69.23, respectively, which means that with BOIL, larger networks yield higher accuracy without overfitting.
+
+
+Figure 12: Training/Validation accuracy curve on miniImageNet according to filters in BOIL.
+
+# F WARPGRAD AND BOIL-WARPGRAD
+
+# F.1 IMPLEMENTATION DETAIL
+
+We follow the default settings of the public code of Flennerhag et al. (2020), except for the number of meta-training steps and the number of filters: we change the meta-training steps to 100 and the number of filters to 128. The task is 20-way 5-shot (in expectation) on Omniglot, where "in expectation" means that 100 samples are used for task-specific updates but the number of samples per class is not necessarily equal. Furthermore, this task supports the superiority of BOIL in long-horizon adaptation.
+
+# F.2 ARCHITECTURE
+
+Here are the architectures of WarpGrad and BOIL-WarpGrad. The WarpGrad w/o last warp head model is the default one in the original code.
+
+
+Figure 13: Architectures of WarpGrad and BOIL-WarpGrad.
+
+# G GRADIENT NORM
+
+
+Figure 14: Gradient norm.
+
+We calculate the norm of the gradients produced by an inner loop for each algorithm. Although the gradient norm on the head of BOIL is not exactly zero, we mark it as 0 because the learning rate on the head is zero. The gradient norms of the biases are negligible (about $10^{-8}$) and thus omitted. MAML/ANIL has extremely small or zero norms on all convolutional layers, implying that representations change little or not at all. In contrast, BOIL has a large norm on the conv4 layer, implying that representations change significantly. In addition, in the analysis of cosine similarity and CKA, we noted that BOIL enjoys representation reuse in the low- and mid-level layers of the body. Nevertheless, Figure 14 shows that the amount of representation change at low and mid levels in BOIL is still larger than in MAML/ANIL.
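The per-layer measurement behind Figure 14 can be sketched as follows (a hedged reconstruction of the procedure, not the authors' code):

```python
import torch
import torch.nn as nn

def layerwise_grad_norms(model, loss):
    """Return {parameter_name: L2 norm of its inner-loop gradient}."""
    named = [(n, p) for n, p in model.named_parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, [p for _, p in named],
                                allow_unused=True)
    # allow_unused covers parameters the loss does not depend on.
    return {n: (g.norm().item() if g is not None else 0.0)
            for (n, _), g in zip(named, grads)}
```

Averaging these norms over meta-test tasks, and grouping by layer, yields a figure like Figure 14.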
+
+# H COSINE SIMILARITIES INCLUDING HEAD
+
+Figure 15 is an extended version of Figure 2.
+
+
+Figure 15: Cosine similarity of 4conv network including head.
+
+# I REPRESENTATION CHANGE IN BOIL ON CARS
+
+This section describes representation change in BOIL on Cars. The structure of this section is the same as that of Section 4.
+
+
+(a) MAML.
+
+
+Figure 16: Cosine similarity of 4conv network on Cars.
+
+
+(b) ANIL.
+
+
+(c) BOIL.
+
+
+
+
+Figure 17: CKA of 4conv on Cars.
+
+Table 14: Test accuracy (%) of 4conv network according to the head's existence before/after an adaptation.
+
+| meta-train | Cars |
| meta-test | Cars | CUB |
| head | w/ head | w/o head (NIL-testing) | w/ head | w/o head (NIL-testing) |
| adaptation | before after | before after | before after | before after | before after | before after |
| MAML(1) | 20.03 ± 0.25 | 45.27 ± 0.26 | 47.87 ± 0.18 | 47.23 ± 0.24 | 20.01 ± 0.08 | 31.01 ± 0.26 |
| ANIL(1) | 20.01 ± 0.18 | 46.81 ± 0.24 | 49.45 ± 0.18 | 49.45 ± 0.18 | 20.02 ± 0.08 | 29.72 ± 0.27 |
| BOIL(1) | 20.19 ± 0.19 | 56.82 ± 0.21 | 25.46 ± 0.29 | 52.36 ± 0.13 | 19.96 ± 0.12 | 22.93 ± 0.17 |
| MAML(5) | 20.11 ± 0.16 | 53.23 ± 0.26 | 59.67 ± 0.22 | 59.38 ± 0.23 | 20.00 ± 0.22 | 36.12 ± 0.24 |
| ANIL(5) | 20.09 ± 0.17 | 61.95 ± 0.38 | 67.03 ± 0.36 | 67.03 ± 0.36 | 19.99 ± 0.18 | 43.27 ± 0.31 |
| BOIL(5) | 20.04 ± 0.08 | 75.18 ± 0.21 | 36.65 ± 0.11 | 71.52 ± 0.27 | 20.02 ± 0.05 | 29.04 ± 0.18 |
+
+# J OUTPUT LAYER OF MAML/ANIL AND PENULTIMATE LAYER OF BOIL
+
+To investigate the role of the penultimate layer (i.e., the last convolutional layer) of BOIL, we evaluate MAML/ANIL and BOIL through NIL-testing on conv3 in the same way as NIL-testing on conv4 (Table 5), except for the position of the representations. Table 15 shows that the input of the penultimate layer (i.e., the features after conv3) cannot be classified well (i.e., the desired performance cannot be achieved), and that this representation capacity is similar for MAML, ANIL, and BOIL. Therefore, we conclude that MAML/ANIL and BOIL act similarly up to conv3 and that BOIL is not simply a shifted version of MAML/ANIL.
+
+Table 15: Test accuracy (%) of NIL-testing on conv3 and conv4 of 4conv network before/after an adaptation.
+
+| meta-train | miniImageNet |
| meta-test | miniImageNet | Cars |
| head | NIL-testing on conv3 | NIL-testing on conv4 | NIL-testing on conv3 | NIL-testing on conv4 |
| adaptation | before after | before after | before after | before after | before after | before after |
| MAML(1) | 32.56 ± 0.14 | 48.28 ± 0.20 | 47.87 ± 0.14 | 30.86 ± 0.09 | 31.03 ± 0.08 | 34.47 ± 0.19 |
| ANIL(1) | 34.34 ± 0.10 | 48.86 ± 0.12 | 48.86 ± 0.12 | 31.09 ± 0.08 | 31.09 ± 0.08 | 35.48 ± 0.24 |
| BOIL(1) | 30.65 ± 0.07 | 24.07 ± 0.19 | 46.73 ± 0.17 | 27.53 ± 0.16 | 27.80 ± 0.15 | 23.30 ± 0.15 |
| MAML(5) | 51.03 ± 0.08 | 64.61 ± 0.39 | 64.47 ± 0.39 | 44.54 ± 0.21 | 45.19 ± 0.21 | 47.66 ± 0.28 |
| ANIL(5) | 55.55 ± 0.14 | 66.11 ± 0.51 | 66.11 ± 0.51 | 45.81 ± 0.14 | 45.81 ± 0.14 | 49.62 ± 0.20 |
| BOIL(5) | 49.42 ± 0.12 | 32.03 ± 0.16 | 64.61 ± 0.27 | 43.93 ± 0.32 | 44.52 ± 0.18 | 30.33 ± 0.18 |
+
+Furthermore, the input of the output layer before adaptation (i.e., the features after the penultimate layer) is already enough to achieve the desired performance in MAML/ANIL. From this result, we believe the role of the head layer in MAML/ANIL is to draw a simple decision boundary for the high-level features represented by the output of the last convolutional layer. In contrast, the penultimate layer of BOIL acts as a non-linear transformation so that the fixed head layer can effectively serve as a classifier.
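NIL-testing as used in this section can be sketched as a head-free classifier that labels each query by cosine similarity to the class-mean support features at the chosen layer (our own minimal reconstruction of the protocol; the exact implementation may differ):

```python
import numpy as np

def nil_predict(support_feats, support_labels, query_feats):
    """NIL-style testing: discard the head and label each query by cosine
    similarity between its features and the class-mean support features."""
    classes = np.unique(support_labels)
    # One prototype per class: mean of the support features for that class.
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in classes])
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    # Nearest prototype under cosine similarity.
    return classes[np.argmax(q @ protos.T, axis=1)]
```

Feeding it features taken after conv3 versus conv4 gives the two NIL-testing columns of Table 15.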
+
+# K ABLATION STUDY ON THE LEARNING LAYER
+
+In this section, we investigate whether training some representation layer during inner updates is better than training the output layer, by learning only one convolutional layer at a time. Figure 18 shows the test accuracy according to a single learning layer in the body. For instance, conv1 plotted in red indicates that the algorithm updates only the conv1 layer during inner updates, whereas conv1 plotted in blue indicates that the algorithm updates both the conv1 layer and the head layer.
+
+
+(a) miniImageNet.
+Figure 18: Test accuracy according to the learning layer in the body.
+
+
+(b) Cars.
+
+On both miniImageNet and Cars, training a higher-level representation layer (conv3, conv4) without updating the head (red line) performs better than training any single conv layer together with the output layer (blue line). We also observe that learning only a lower-level representation layer (conv1, conv2) can significantly decrease accuracy. These results reaffirm that representation change in higher-level layers of the body boosts performance, as discussed in the layer-wise analyses of Section 4. Training only a lower-level layer performs poorly because lower-level layers retain general representations (a related discussion is in Section 4.1, e.g., the third key observation).
+
+Table 16: Test accuracy (\%) of 4conv network according to the learning layer(s). Standard deviation is omitted.
+
+| Learning layer | 1 layer | 2 layers | 3 layers | 4 layers | all |
| conv1 | ✓ | | | | | ✓ | | | | ✓ | | | ✓ | | ✓ |
| conv2 | ✓ | | | | | ✓ | ✓ | | | ✓ | ✓ | | ✓ | ✓ | ✓ |
| conv3 | ✓ | | | | | ✓ | ✓ | | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| conv4 | ✓ | | | | | ✓ | ✓ | | | ✓ | ✓ | | ✓ | ✓ | ✓ |
| head | ✓ | | | | | ✓ | ✓ | | | ✓ | ✓ | | ✓ | ✓ | ✓ |
| Algorithm | ANIL | | | | | | | | BOIL | MAML |
| miniImageNet | 54.74 | 41.07 | 67.07 | 66.19 | 63.04 | 60.79 | 67.44 | 65.86 | 61.24 | 67.18 | 67.40 | 62.99 | 66.45 | 61.11 | 61.75 |
| tieredImageNet | 52.78 | 61.76 | 69.20 | 69.39 | 65.82 | 61.01 | 67.09 | 70.34 | 64.92 | 68.72 | 69.51 | 64.72 | 69.37 | 64.69 | 64.70 |
| Cars | 52.55 | 70.08 | 75.90 | 73.56 | 61.95 | 68.70 | 78.21 | 72.99 | 61.46 | 73.59 | 74.80 | 64.58 | 75.18 | 63.97 | 53.23 |
| CUB | 66.22 | 72.40 | 80.52 | 77.25 | 70.93 | 73.48 | 80.42 | 77.61 | 77.35 | 79.19 | 76.62 | 77.40 | 75.96 | 71.24 | 69.66 |
| CIFAR-FS | 71.58 | 70.47 | 71.01 | 70.88 | 69.87 | 72.15 | 71.85 | 74.43 | 71.43 | 72.93 | 75.56 | 72.07 | 73.61 | 71.73 | 70.10 |
| FC100 | 48.03 | 48.04 | 48.97 | 47.93 | 45.65 | 47.71 | 48.23 | 53.86 | 48.78 | 52.95 | 53.11 | 48.25 | 51.66 | 47.59 | 47.58 |
| VGG-Flower | 77.96 | 76.84 | 76.02 | 77.10 | 72.07 | 74.69 | 76.10 | 81.63 | 74.73 | 83.61 | 82.46 | 77.74 | 79.81 | 77.72 | 75.13 |
| Aircraft | 65.01 | 61.63 | 64.91 | 65.91 | 63.21 | 62.93 | 63.37 | 66.62 | 62.71 | 66.33 | 67.12 | 63.39 | 66.03 | 62.88 | 63.44 |
+
+We expand this ablation study to training multiple consecutive layers with and without the head. The results are reported in Table 16 and Table 17. In Table 16, we consistently observe that learning with the head is far from the best accuracy: all of the well-performing combinations do not train the head in the inner-loop update. We also find several settings that skip the lower-level layers in the inner loop and perform slightly better than BOIL. We believe each pair of network architecture and data set has its own best layer combination; when it is affordable to search for the best combination with large computing resources, BOIL can be further improved. However, the most important design policy is that the inner-loop update should freeze the head and encourage learning higher-level representation features while reusing lower-level representation features. BOIL follows this design rule by simply
+
+Table 17: Test accuracy $(\%)$ of ResNet-12 without last skip connection according to the learning block(s). Standard deviation is omitted.
+
+| Learning block | 1 block | 2 blocks | 3 blocks | 4 blocks | all |
| block1 | ✓ | | | | | ✓ | | | | ✓ | | | ✓ | | ✓ |
| block2 | | ✓ | | | | ✓ | ✓ | | | ✓ | ✓ | | ✓ | ✓ | ✓ |
| block3 | | | ✓ | | | | ✓ | ✓ | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| block4 | | | | ✓ | | | | ✓ | ✓ | | ✓ | ✓ | ✓ | ✓ | ✓ |
| head | | | | | ✓ | | | | ✓ | | | ✓ | | ✓ | ✓ |
| Algorithm | ANIL | | | | | | | | BOIL | MAML |
| miniImageNet | 19.94 | 64.20 | 69.95 | 70.52 | 67.20 | 63.08 | 69.19 | 67.75 | 66.19 | 70.12 | 69.76 | 69.08 | 71.30 | 66.44 | 67.87 |
| tieredImageNet | 20.10 | 47.64 | 68.57 | 70.41 | 72.22 | 55.22 | 69.69 | 70.11 | 72.38 | 67.64 | 69.61 | 71.67 | 73.44 | 71.02 | 71.25 |
| Cars | 19.98 | 55.86 | 74.71 | 74.45 | 75.32 | 65.30 | 70.41 | 74.06 | 69.51 | 68.16 | 58.43 | 69.78 | 83.99 | 71.75 | 73.63 |
| CUB | 20.02 | 74.84 | 79.26 | 81.58 | 74.66 | 75.07 | 80.12 | 82.09 | 74.24 | 80.26 | 82.75 | 74.94 | 83.22 | 75.66 | 76.23 |
| CIFAR-FS | 19.98 | 69.90 | 77.65 | 78.39 | 72.47 | 69.65 | 78.06 | 79.83 | 72.38 | 77.31 | 79.15 | 71.22 | 78.63 | 71.83 | 71.79 |
| FC100 | 20.15 | 48.32 | 50.86 | 49.60 | 45.61 | 47.44 | 51.75 | 51.82 | 46.93 | 50.72 | 50.55 | 45.21 | 49.87 | 46.29 | 44.90 |
| VGG-Flower | 19.98 | 80.32 | 84.68 | 82.22 | 73.77 | 79.80 | 85.13 | 80.75 | 72.14 | 83.53 | 82.69 | 71.93 | 82.17 | 72.00 | 72.43 |
| Aircraft | 20.06 | 71.48 | 76.97 | 77.22 | 78.62 | 72.17 | 78.47 | 77.61 | 79.51 | 76.75 | 76.89 | 78.16 | 78.85 | 78.79 | 77.15 |
+
+freezing the head in the inner loop, which is already nearly the best approach in most cases. In Table 17, there are cases where learning the classifier improves performance. We think this is because the ablation study on ResNet-12 is done at the block level: one block includes many layers, and this issue is discussed in Appendix N.
+
+# L ADDITIONAL CONSIDERATIONS OF THE HEAD OF BOIL
+
+We additionally discuss what the ideal meta-initialization is. Because few-shot classification tasks are constructed with freshly sampled classes each time, every task consists of different classes. Since the class indices are randomly assigned at the beginning of each task, the meta-initialized parameters cannot contain any prior information about the class indices; for instance, they are not allowed to encode class similarities between class $i$ and class $j$. Any biased initial guess could hinder task learning. The meta-initialized parameters should lie in between the (local) optima of the tasks, as depicted in Figure 19, so that the network can adapt to each task with a few task-specific updates.6
+
+
+Figure 19: Ideal meta-initialization.
+
+
+(a) Comparison with centering algorithm.
+
+
+Figure 20: Valid accuracy curves of (a) centering algorithm and (b) fix algorithm on Cars.
+
+
+(b) Comparison with fix algorithm.
+
+
+
+When the head parameters $\theta_h = [\theta_{h,1},\dots,\theta_{h,n}]^\top \in \mathbb{R}^{n\times d}$ have orthonormal rows (i.e., $\| \theta_{h,i}\| _2 = 1$ for all $i$ and $\theta_{h,i}^{\top}\theta_{h,j} = 0$ for all $i\neq j$), the meta-initialized model has an unbiased classifier. Here, $a^\top$ denotes the transpose of $a$ and $\| \cdot \| _2$ denotes the Euclidean norm. With orthonormal rows, each logit value $\theta_{h,j}^{\top}f_{\theta_b}(x)$ can therefore be controlled independently of the other logit values. Recall that the softmax probability $p_j$ for class $j$ of sample $x$ is computed as follows:
+
+$$
+p_{j}(x) = \frac{e^{\theta_{h,j}^{\top} f_{\theta_{b}}(x)}}{\sum_{i=1}^{n} e^{\theta_{h,i}^{\top} f_{\theta_{b}}(x)}} = \frac{1}{\sum_{i=1}^{n} e^{(\theta_{h,i} - \theta_{h,j})^{\top} f_{\theta_{b}}(x)}}. \tag{4}
+$$
+
+In Equation 4, the softmax probability depends only on the differences between the rows of the head parameters, $\theta_{h,i} - \theta_{h,j}$. Adding a vector to all rows (i.e., $\theta_{h,i} \gets \theta_{h,i} + c$ for all $i$) does not change the softmax vector. We can therefore expect an equally good meta-initialized model whenever a parallel shift of the rows of the head parameters yields orthonormal rows. To support this experimentally, we design a centering algorithm that performs a parallel shift of $\theta_{h}$ by subtracting the average of its row vectors after every outer update, for both MAML and BOIL, i.e., $[\theta_{h,1} - \bar{\theta}_{h}, \dots, \theta_{h,n} - \bar{\theta}_{h}]^{\top}$ where $\bar{\theta}_{h} = \frac{1}{n} \sum_{i=1}^{n} \theta_{h,i}$. Figure 20a shows that this parallel shift does not affect the performance of either algorithm on Cars.
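The shift invariance stated above is easy to verify numerically; the following sketch (with arbitrary random head rows and features) shows that adding the same vector $c$ to every row of $\theta_h$ leaves the softmax output unchanged:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
theta_h = rng.normal(size=(5, 8))     # head rows theta_{h,i}
f = rng.normal(size=8)                # body features f_{theta_b}(x)
c = rng.normal(size=8)                # arbitrary shift vector

p_orig = softmax(theta_h @ f)
p_shift = softmax((theta_h + c) @ f)  # theta_{h,i} <- theta_{h,i} + c
# Every logit is shifted by the same scalar c^T f, so softmax is unchanged.
assert np.allclose(p_orig, p_shift)
```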
+
+Next, we investigate the cosine similarity between $\theta_{h,i} - \theta_{h,k}$ and $\theta_{h,j} - \theta_{h,k}$ for all distinct $i$, $j$, and a fixed $k$. During the training of MAML and BOIL, the average cosine similarity between the two gaps stays near 0.5 throughout meta-training (Figure 21). Note that 0.5 is exactly the cosine similarity between $\theta_{h,i} - \theta_{h,k}$ and $\theta_{h,j} - \theta_{h,k}$ when $\theta_{h,i}$, $\theta_{h,j}$, and $\theta_{h,k}$ are orthonormal. From these results, we conclude that the orthonormality of $\theta_h$ is important for meta-initialization and that meta-learning algorithms naturally maintain it.
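The 0.5 value can be checked directly: for orthonormal $\theta_{h,i}$, $\theta_{h,j}$, $\theta_{h,k}$, the inner product of the two gaps is 1 while each gap has norm $\sqrt{2}$, giving $1/(\sqrt{2}\cdot\sqrt{2}) = 0.5$:

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Three orthonormal head rows (here: standard basis vectors).
e_i, e_j, e_k = np.eye(3)
# (e_i - e_k) . (e_j - e_k) = 1, and each gap has norm sqrt(2).
val = cosine(e_i - e_k, e_j - e_k)
```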
+
+
+Figure 21: Average of cosine similarities between gaps.
+
+Based on this observation, we design a fix algorithm that keeps $\theta_h$ orthonormal in the meta-initialized model: MAML-fix updates $\theta_h$ only in the inner loops, and BOIL-fix does not update $\theta_h$ at all. The fix algorithm is easy to implement: initialize $\theta_h$ to be orthonormal by applying the Gram-Schmidt method to a random matrix, and set the outer-loop learning rate for the head to zero.
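A minimal sketch of this initialization (using QR decomposition, which is numerically equivalent to Gram-Schmidt on a random matrix; the function name is our own):

```python
import numpy as np

def orthonormal_head(n_way, dim, seed=0):
    """Initialize theta_h with orthonormal rows via reduced QR.
    Requires n_way <= dim (e.g., 5 classes, 1600-d features)."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.normal(size=(dim, n_way)))  # orthonormal columns
    return q.T  # shape (n_way, dim), rows orthonormal
```

Setting the head's outer-loop learning rate to zero then keeps these rows fixed for the fix variants.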
+
+Figure 20b depicts the validation accuracy curves of the fix algorithm on Cars. The experiments substantiate that orthonormal rows of $\theta_h$ are important and that BOIL improves performance. (1) Comparing MAML to MAML-fix (left panel of Figure 20b), MAML-fix outperforms MAML, which means that computing the outer loop through a task-specific head, as MAML does, is detrimental because it adds unnecessary task-specific information to the model. (2) Comparing the vanilla models to the fix models (both panels of Figure 20b), a fixed orthonormal meta-initialized head overfits less. (3) Comparing BOIL to BOIL-fix (right panel of Figure 20b), although BOIL-fix can reach almost the same performance as BOIL given sufficient iterations, BOIL converges faster to a better local optimum, because $\theta_h$ is trained so that the inner loop can easily adapt $f_{\theta_b}(x)$ to each class.
+
+# M REPRESENTATION CHANGE IN RESNET-12
+
+Figure 22 shows the CKA of ResNet-12 for each algorithm. As with the 4conv network, the MAML/ANIL algorithms change the values only in the logit space (i.e., the space after the head), regardless of the last skip connection, whereas the BOIL algorithm changes the values in the representation spaces. By disconnecting the last skip connection, representation change is concentrated in the high-level representation space, i.e., the CKA of BOIL w/o LSC is smaller than that of BOIL w/ LSC after block4. Table 18 shows the empirical results for ResNet-12.
+
+
+Figure 22: CKA of ResNet12 on miniImageNet.
+
+Table 18: 5-Way 5-Shot test accuracy (\%) of ResNet-12 meta-trained on miniImageNet according to the head's existence before/after an adaptation.
+
+| meta-train | miniImageNet |
| meta-test | miniImageNet | Cars |
| head | w/ head | w/o head (NIL-testing) | w/ head | w/o head (NIL-testing) |
| adaptation | before after | before after | before after | before after | before after | |
| MAML w/LSC | 20.02 ± 0.21 | 68.51 ± 0.39 | 70.44 ± 0.30 | 70.37 ± 0.32 | 20.06 ± 0.07 | 43.46 ± 0.15 |
| MAML w/o LSC | 20.04 ± 0.36 | 67.87 ± 0.22 | 69.35 ± 0.15 | 69.28 ± 0.14 | 19.98 ± 0.16 | 41.40 ± 0.11 |
| ANIL w/LSC | 19.85 ± 0.19 | 68.54 ± 0.34 | 70.31 ± 0.34 | 70.31 ± 0.34 | 19.99 ± 0.15 | 45.13 ± 0.15 |
| ANIL w/o LSC | 19.97 ± 0.21 | 67.20 ± 0.13 | 68.47 ± 0.21 | 68.47 ± 0.21 | 20.03 ± 0.23 | 43.46 ± 0.18 |
| BOIL w/LSC | 20.01 ± 0.18 | 70.50 ± 0.28 | 44.65 ± 0.49 | 70.34 ± 0.31 | 20.02 ± 0.08 | 49.69 ± 0.17 |
| BOIL w/o LSC | 20.00 ± 0.00 | 71.30 ± 0.28 | 40.04 ± 0.33 | 71.18 ± 0.29 | 20.00 ± 0.00 | 49.71 ± 0.28 |
+
+Furthermore, we verify the representation change of BOIL with ResNet-12 meta-trained on Cars.
+
+
+
+
+
+
+
+
+
+
+(a) MAML w/ LSC.
+
+
+(d) MAML w/o LSC.
+
+
+(b) ANIL w/ LSC.
+
+
+(e) ANIL w/o LSC.
+
+
+(c) BOIL w/ LSC.
+(f) BOIL w/o LSC.
+
+
+Figure 23: Cosine similarity of ResNet-12 on Cars.
+Figure 24: CKA of ResNet-12 on Cars.
+
+Table 19: 5-Way 5-Shot test accuracy (%) of ResNet-12 meta-trained on Cars according to the head's existence before/after an adaptation.
+
+| meta-train | Cars |
| meta-test | Cars | CUB |
| head | w/ head | w/o head (NIL-testing) | w/ head | w/o head (NIL-testing) |
| adaptation | before after | before after | before after | before after | before after |
| MAML w/LSC | 20.11 ± 0.22 | 75.49 ± 0.20 | 77.01 ± 0.14 | 76.76 ± 0.17 | 20.13 ± 0.10 | 35.87 ± 0.19 |
| MAML w/o LSC | 20.04 ± 0.08 | 73.63 ± 0.26 | 75.81 ± 0.20 | 75.61 ± 0.25 | 20.03 ± 0.14 | 34.77 ± 0.26 |
| ANIL w/LSC | 20.06 ± 0.35 | 79.45 ± 0.23 | 80.88 ± 0.19 | 80.88 ± 0.19 | 20.12 ± 0.14 | 35.09 ± 0.19 |
| ANIL w/o LSC | 20.20 ± 0.33 | 75.32 ± 0.15 | 76.90 ± 0.16 | 76.90 ± 0.16 | 20.01 ± 0.18 | 36.06 ± 0.14 |
| BOIL w/LSC | 20.00 ± 0.00 | 80.98 ± 0.14 | 40.54 ± 0.11 | 81.11 ± 0.24 | 20.00 ± 0.00 | 43.84 ± 0.21 |
| BOIL w/o LSC | 20.00 ± 0.00 | 83.99 ± 0.20 | 50.42 ± 0.23 | 83.60 ± 0.18 | 20.00 ± 0.00 | 44.23 ± 0.18 |
+
+# N REPRESENTATION LAYER CHANGE IN THE LAST BLOCK OF RESNET-12
+
+In this section, we explore representation reuse and representation change within the last block of a deeper network. Figure 25 and Figure 26 show the cosine similarities between representations after each layer in the last block (i.e., block4) on miniImageNet and Cars. For MAML/ANIL, there is little or no representation change in any layer of the last block. In contrast, for BOIL, the gap between intra-class and inter-class similarity is enlarged through adaptation in some or all layers of the last block, indicating that representation reuse in the low-level layers and representation change in the high-level layers coexist even within the last block.
+
+Furthermore, representations at high-level layers in the last block change more when the last skip connection does not exist (e.g., Figure 25f and Figure 26f) than when it exists (e.g., Figure 25e and Figure 26e). This result confirms that the disconnection trick strengthens representation change at high-level layers by not directly propagating general representations from earlier blocks.
+
+
+
+
+
+
+(a) MAML w/ LSC.
+
+
+
+
+(b) MAML w/o LSC.
+
+
+
+
+(c) ANIL w/ LSC.
+
+
+
+
+(d) ANIL w/o LSC.
+
+
+
+
+(e) BOIL w/ LSC.
+(f) BOIL w/o LSC.
+
+
+Figure 25: Cosine similarity in block4 (the last block) of ResNet-12 on miniImagenet.
+
+
+
+
+(a) MAML w/ LSC.
+
+
+
+
+
+
+
+
+(b) MAML w/o LSC.
+
+
+
+
+(c) ANIL w/ LSC.
+
+
+
+
+(d) ANIL w/o LSC.
+
+
+
+
+(e) BOIL w/ LSC.
+(f) BOIL w/o LSC.
+
+
+Figure 26: Cosine similarity in block4 (the last block) of ResNet-12 on Cars.
\ No newline at end of file
diff --git a/boiltowardsrepresentationchangeforfewshotlearning/images.zip b/boiltowardsrepresentationchangeforfewshotlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..475160227612e4793d84eb582d000d8e6d73a88f
--- /dev/null
+++ b/boiltowardsrepresentationchangeforfewshotlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6359fc95ff93237fcaca5ced80c9e591d3717db975b8bf6c2f79f0973103b7c
+size 2308322
diff --git a/boiltowardsrepresentationchangeforfewshotlearning/layout.json b/boiltowardsrepresentationchangeforfewshotlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d5012bcc7dd55dda27938e9403213c3082e74f6c
--- /dev/null
+++ b/boiltowardsrepresentationchangeforfewshotlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82b20c868cea5bb32fd14811d3a1abaa9ebbcd0377f0a4dbbbdfc852b52b6463
+size 776341
diff --git a/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/08e96e39-c0bc-476f-ae90-bb99b0ba9d58_content_list.json b/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/08e96e39-c0bc-476f-ae90-bb99b0ba9d58_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4b836c02a5b2fe65445a4ee31ca32d7c0aee8347
--- /dev/null
+++ b/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/08e96e39-c0bc-476f-ae90-bb99b0ba9d58_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d7b3472b6aa985f4f4e1b767155c6e485fbeda8e607829d9d46113c3c098633
+size 103604
diff --git a/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/08e96e39-c0bc-476f-ae90-bb99b0ba9d58_model.json b/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/08e96e39-c0bc-476f-ae90-bb99b0ba9d58_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fdd5e432101b5312f2202b1c7e41319f78bd2ccb
--- /dev/null
+++ b/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/08e96e39-c0bc-476f-ae90-bb99b0ba9d58_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:225b377abff0cead0498157af6b1eacf4d9c3eaa5b2cc045502458f41a062891
+size 125703
diff --git a/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/08e96e39-c0bc-476f-ae90-bb99b0ba9d58_origin.pdf b/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/08e96e39-c0bc-476f-ae90-bb99b0ba9d58_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..672ad8b57aa9ceccbcf5e31a3511773513a2581b
--- /dev/null
+++ b/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/08e96e39-c0bc-476f-ae90-bb99b0ba9d58_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cd5c64cd6fda75edb3f8a4ffd8dda6313cd7d8dbb571deeb8f288b5b094ea4a9
+size 2058446
diff --git a/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/full.md b/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7643e454ba5b7fbd3cbe1c1f789ef04992186f7b
--- /dev/null
+++ b/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/full.md
@@ -0,0 +1,411 @@
+# BOOST THEN CONVOLVE: GRADIENT BOOSTING MEETS GRAPH NEURAL NETWORKS
+
+Sergei Ivanov
+
+Criteo AI Lab; Skoltech
+
+Paris, France
+
+s.ivanov@criteo.com
+
+Liudmila Prokhorenkova
+
+Yandex; HSE University; MIPT
+
+Moscow, Russia
+
+ostroumova-la@yandex-team.ru
+
+# ABSTRACT
+
+Graph neural networks (GNNs) are powerful models that have been successful in various graph representation learning tasks, while gradient boosted decision trees (GBDT) often outperform other machine learning methods when faced with heterogeneous tabular data. But what approach should be used for graphs with tabular node features? Previous GNN models have mostly focused on networks with homogeneous sparse features and, as we show, are suboptimal in the heterogeneous setting. In this work, we propose a novel architecture that trains GBDT and GNN jointly to get the best of both worlds: the GBDT model deals with heterogeneous features, while GNN accounts for the graph structure. Our model benefits from end-to-end optimization by allowing new trees to fit the gradient updates of GNN. With an extensive experimental comparison to the leading GBDT and GNN models, we demonstrate a significant increase in performance on a variety of graphs with tabular features. The code is available at https://github.com/nd7141/bgnn.
+
+# 1 INTRODUCTION
+
+Graph neural networks (GNNs) have shown great success in learning on graph-structured data with various applications in molecular design (Stokes et al., 2020), computer vision (Casas et al., 2019), combinatorial optimization (Mazyavkina et al., 2020), and recommender systems (Sun et al., 2020). The main driving force for progress is the existence of canonical GNN architecture that efficiently encodes the original input data into expressive representations, thereby achieving high-quality results on new datasets and tasks.
+
+Recent research has mostly focused on GNNs with sparse data representing either homogeneous node embeddings (e.g., one-hot encoded graph statistics) or bag-of-words representations. Yet tabular data with detailed information and rich semantics among nodes in the graph are more natural for many situations and abundant in real-world AI (Xiao et al., 2019). For example, in a social network, each person has socio-demographic characteristics (e.g., age, gender, date of graduation), which largely vary in data type, scale, and missing values. GNNs for graphs with tabular data remain unexplored, with gradient boosted decision trees (GBDTs) largely dominating in applications with such heterogeneous data (Bentéjac et al., 2020).
+
+GBDTs are so successful for tabular data because they possess certain properties: (i) they efficiently learn decision space with hyperplane-like boundaries that are common in tabular data; (ii) they are well-suited for working with variables of high cardinality, features with missing values, and of different scale; (iii) they provide qualitative interpretation for decision trees (e.g., by computing decrease in node impurity for every feature) or for ensembles via post-hoc analysis stage (Kaur et al., 2020); (iv) in practical applications, they mostly converge faster even for large amounts of data.
+
+In contrast, a crucial feature of GNNs is that they take into account both the neighborhood information of the nodes and the node features to make a prediction, unlike GBDTs that require additional preprocessing analysis to provide the algorithm with graph summary (e.g., through unsupervised graph embeddings (Hu et al., 2020a)). Moreover, it has been shown theoretically that message-passing GNNs can compute any function on its graph input that is computable by a Turing machine, i.e., GNN is known to be the only learning architecture that possesses universality properties on graphs (approximation (Keriven & Peyré, 2019; Maron et al., 2019) and computability (Loukas, 2020)).
+
+Furthermore, gradient-based learning of neural networks can have numerous advantages over the tree-based approach: (i) relational inductive bias imposed in GNNs alleviates the need to manually engineer features that capture the topology of the network (Battaglia et al., 2018); (ii) the end-to-end nature of training neural networks allows multi-stage (Fey et al., 2019) or multi-component (Wang et al., 2020) integration of GNNs in application-dependent solutions; (iii) pretraining representations with graph networks enriches transfer learning for many valuable tasks such as unsupervised domain adaptation (Wu et al., 2020), self-supervised learning (Hu et al., 2020b), and active learning regimes (Satorras & Estrach, 2018).
+
+Undoubtedly, there are major benefits in both GBDT and GNN methods. Is it possible to get advantages of both worlds? All previous approaches (Arik & Pfister, 2020; Popov et al., 2019; Badirli et al., 2020) that attempt to combine gradient boosting and neural networks are computationally heavy, do not consider graph-structured data, and suffer from the lack of relational bias imposed in GNN architectures, see Appendix A for a more detailed comparison with related literature. To the best of our knowledge, the current work is the first to explore using GBDT models for graph-structured data.
+
+In this paper, we propose a novel learning architecture for graphs with tabular data, BGNN, that combines GBDT's learning on tabular node features with GNN that refines the predictions utilizing the graph's topology. This allows BGNN to inherit the advantages of gradient boosting methods (heterogeneous learning and interpretability) and graph networks (representation learning and end-to-end training). Overall, our contributions are the following:
+
+(1) We design a novel generic architecture that combines GBDT and GNN into a unique pipeline. To the best of our knowledge, this is the first work that systematically studies the application of GBDT to graph-structured data.
+(2) We overcome the challenge of end-to-end training of GBDT by iteratively adding new trees that fit the gradient updates of GNN. This allows us to backpropagate the error signal from the topology of the network to GBDT.
+(3) We perform an extensive evaluation of our approach against strong baselines in node prediction tasks. Our results consistently demonstrate significant performance improvements on heterogeneous node regression and node classification tasks over a variety of real-world graphs with tabular data.
+(4) We show that our approach is also more efficient than the state-of-the-art GNN models due to much faster loss convergence during training. Furthermore, learned representations exhibit discernible structure in the latent space, which further demonstrates the expressivity of our approach.
+
+# 2 BACKGROUND
+
+Let $G = (V, E)$ be a graph with nodes having features and target labels. In node prediction tasks (classification or regression), some target labels are known, and the goal is to predict the remaining ones. Throughout the text, by lowercase variables $\mathbf{x}_v$ ( $v \in V$ ) or $\mathbf{x}$ we denote features of individual nodes, and $\mathbf{X}$ represents the matrix of all features for $v \in V$ . Individual target labels are denoted by $y_v$ , while $Y$ is the vector of known labels.
+
+Graph Neural Networks (GNNs) use both the network's connectivity and the node features to learn latent representations for all nodes $v \in V$ . Many popular GNNs use a neighborhood aggregation approach, also called the message-passing mechanism, where the representation of a node $v$ is updated by applying a non-linear aggregation function of $v$ 's neighbors representation (Fey & Lenssen, 2019). Formally, GNN is a differentiable, permutation-invariant function $g_{\theta} : (G, \mathbf{X}) \mapsto \widehat{Y}$ , where $\widehat{Y}$ is the vector of predicted labels. Similar to traditional neural networks, GNNs are composed of multiple layers, each representing a non-linear message-passing function:
+
+$$
+\mathbf{x}_{v}^{t} = \operatorname{COMBINE}^{t}\left(\mathbf{x}_{v}^{t-1}, \operatorname{AGGREGATE}^{t}\left(\left\{\left(\mathbf{x}_{w}^{t-1}, \mathbf{x}_{v}^{t-1}\right): (w, v) \in E\right\}\right)\right), \tag{1}
+$$
+
+where $\mathbf{x}_v^t$ is the representation of node $v$ at layer $t$ , and $\mathrm{COMBINE}^t$ and $\mathrm{AGGREGATE}^t$ are (parametric) functions that aggregate representations from the local neighborhood of a node. Then, the GNN mapping $g_{\theta}$ includes multiple layers of aggregation (1). Parameters of GNN model $\theta$ are optimized with gradient descent by minimizing an empirical loss function $L_{\mathrm{GNN}}(Y,g_{\theta}(G,\mathbf{X}))$ .
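As a concrete illustration of eq. (1), the sketch below implements one message-passing layer in NumPy. It is a hypothetical instantiation, not an architecture from this paper: AGGREGATE is a mean over in-neighbors, and COMBINE is a ReLU of two linear maps (`W_self`, `W_neigh` are illustrative parameter names).

```python
import numpy as np

def message_passing_layer(X, edges, W_self, W_neigh):
    """One layer of eq. (1): mean-AGGREGATE over in-neighbors, then COMBINE.

    X       : (n, d) node representations x_v^{t-1}
    edges   : list of directed edges (w, v); messages flow w -> v
    W_self  : (d, d') weight applied to the node's own representation
    W_neigh : (d, d') weight applied to the aggregated neighbor messages
    """
    n, _ = X.shape
    agg = np.zeros_like(X)
    deg = np.zeros(n)
    for w, v in edges:                      # AGGREGATE: sum incoming messages
        agg[v] += X[w]
        deg[v] += 1
    agg /= np.maximum(deg, 1)[:, None]      # mean over neighbors (guard isolated nodes)
    # COMBINE: non-linear function of own state and aggregated messages
    return np.maximum(X @ W_self + agg @ W_neigh, 0.0)

# toy usage: 3 nodes on an undirected path 0-1-2
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
H = message_passing_layer(X, edges, rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
```

Stacking several such layers (with different weights per layer $t$) yields the full GNN mapping $g_{\theta}$.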
+
+Gradient Boosted Decision Trees (GBDT) is a well-known and widely used algorithm that is defined on non-graph tabular data (Friedman, 2001) and is particularly successful for tasks containing heterogeneous features and noisy data.
+
+The core idea of gradient boosting is to construct a strong model by iteratively adding weak ones (usually decision trees). Formally, at each iteration $t$ of the gradient boosting algorithm, the model $f(\mathbf{x})$ is updated in an additive manner:
+
+$$
+f ^ {t} (\mathbf {x}) = f ^ {t - 1} (\mathbf {x}) + \epsilon h ^ {t} (\mathbf {x}), \tag {2}
+$$
+
+where $f^{t - 1}$ is a model constructed at the previous iteration, $h^t$ is a weak learner that is chosen from some family of functions $\mathcal{H}$ , and $\epsilon$ is a learning rate. The weak learner $h^t \in \mathcal{H}$ is chosen to approximate the negative gradient of a loss function $L$ w.r.t. the current model's predictions:
+
+$$
+h ^ {t} = \underset {h \in \mathcal {H}} {\arg \min } \sum_ {i} \left(- \frac {\partial L \left(f ^ {t - 1} \left(\mathbf {x} _ {i}\right) , y _ {i}\right)}{\partial f ^ {t - 1} \left(\mathbf {x} _ {i}\right)} - h \left(\mathbf {x} _ {i}\right)\right) ^ {2}. \tag {3}
+$$
+
+The gradient w.r.t. the current predictions indicates how one should change these predictions to improve the loss function. Informally, gradient boosting can be thought of as performing gradient descent in functional space.
+
+The set of weak learners $\mathcal{H}$ is usually formed by shallow decision trees. Decision trees are built by a recursive partition of the feature space into disjoint regions called leaves. This partition is usually constructed greedily to minimize the loss function (3). Each leaf $R_{j}$ of the tree is assigned to a value $a_{j}$ , which estimates the response $y$ in the corresponding region. In our case, $a_{j}$ is equal to the average negative gradient value in the leaf $R_{j}$ . To sum up, we can write $h(x) = \sum_{j}a_{j}1_{\{x\in R_{j}\}}$ .
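A minimal sketch of eqs. (2)-(3) for the squared loss, where the negative gradient is simply the residual $y - f(\mathbf{x})$ and the weak learners are depth-1 stumps whose leaf values $a_j$ are mean residuals. All names are hypothetical; production GBDT libraries such as CatBoost or LightGBM use deeper trees and many refinements.

```python
import numpy as np

def fit_stump(x, g):
    """Depth-1 tree h(x) fit to targets g: pick the split minimizing squared error;
    each leaf predicts the mean target in its region (the a_j of the text)."""
    best = (np.inf, None, g.mean(), g.mean())
    for s in np.unique(x)[:-1]:
        left, right = g[x <= s], g[x > s]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    _, s, a_left, a_right = best
    return lambda q: np.where(q <= s, a_left, a_right)

def gbdt(x, y, n_trees=100, eps=0.3):
    """Eqs. (2)-(3) for L = (f(x) - y)^2 / 2: the negative gradient is the residual."""
    pred = np.zeros_like(y)
    trees = []
    for _ in range(n_trees):
        h = fit_stump(x, y - pred)      # weak learner approximates -dL/df, eq. (3)
        trees.append(h)
        pred = pred + eps * h(x)        # additive update, eq. (2)
    return lambda q: eps * sum(h(q) for h in trees)

# toy 1-D regression task
x = np.linspace(0.0, 1.0, 64)
y = np.sin(4 * x)
f = gbdt(x, y)
```

Each round fits a stump to the current residuals and adds a damped copy of it, which is exactly the "gradient descent in functional space" view described above.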
+
+# 3 GBDT MEETS GNN
+
+
+Figure 1: Training of BGNN, steps for one epoch are numbered.
+
+# Algorithm 1 Training of BGNN
+
+Input: Graph $G$ , node features $\mathbf{X}$ , targets $Y$
+
+Initialize GBDT targets $\mathcal{Y} = Y$
+
+for epoch $i = 1$ to $N$ do
+
+Train $k$ trees of GBDT with eq. (2)-(3)
+
+$$
+\begin{array}{l} f^{i} \underset{k}{\leftarrow} \operatorname*{arg\,min}_{f^{i}} L_{\text{GBDT}}(f^{i}(\mathbf{X}), \mathcal{Y}) \\ f \leftarrow f + f^{i} \end{array}
+$$
+
+Train $l$ steps of GNN on new node features
+
+$$
+\begin{array}{l} \mathbf{X}' \leftarrow f(\mathbf{X}) \\ \theta, \mathbf{X}' \underset{l}{\leftarrow} \operatorname*{arg\,min}_{\theta, \mathbf{X}'} L_{\mathrm{GNN}}(g_{\theta}(G, \mathbf{X}'), Y) \end{array}
+$$
+
+Update targets for next iteration of GBDT
+
+$$
+\mathcal {Y} \leftarrow \mathbf {X} ^ {\prime} - f (\mathbf {X})
+$$
+
+end for
+
+Output: Models GBDT $f$ and GNN $g_{\theta}$
+
+The gradient boosting approach is successful for learning on tabular data; however, applying GBDT to graph-structured data poses two challenges: (i) how to propagate the relational signal, in addition to node features, to an otherwise inherently tabular model; and (ii) how to train it together with GNN in an end-to-end fashion. Indeed, the optimizations of GBDT and GNN follow different approaches: the parameters of GNN are optimized via gradient descent, while GBDT is constructed iteratively, and the decision trees remain fixed after being built (decision trees are based on hard splits of the feature space, which makes them non-differentiable).
+
+A straightforward approach would be to train the GBDT model only on the node features and then use the obtained predictions, jointly with the original input, as new node features for GNN. In this case, the graph-insensitive predictions of GBDT will further be refined by a graph neural network.
+
+This approach (which we call Res-GNN) can already boost the performance of GNN for some tasks. However, in this case, the GBDT model completely ignores the graph structure and may miss descriptive features of the graph, providing inaccurate input data to GNN.
+
+In contrast, we propose end-to-end training of GBDT and GNN called BGNN (for Boost-GNN). As before, we first apply GBDT and then GNN, but now we optimize both of them, taking into account the quality of final predictions. The training of BGNN is shown in Figure 1. Recall that one cannot tune already built decision trees due to their discrete structure, so we iteratively update the GBDT model by adding new trees that approximate the GNN loss function.
+
+In Algorithm 1, we present the training of BGNN that combines GBDT and GNN for any node-level prediction problem such as semi-supervised node regression or classification. In the first iteration, we build a GBDT model $f^{1}(\mathbf{x})$ with $k$ decision trees by minimizing the loss function $L_{\mathrm{GBDT}}(f^{1}(\mathbf{x}),y)$ (e.g., RMSE for regression or cross-entropy for classification) averaged over the training nodes, following the equations (2)-(3). Using all predictions $f^{1}(\mathbf{X})$, we update the node features to $\mathbf{X}'$ that we pass to GNN. Possible update functions that we experiment with include concatenation with the original node features and their replacement by $f^{1}(\mathbf{X})$. Next, we train a graph neural network $g_{\theta}$ on a graph $G$ with node features $\mathbf{X}'$ by minimizing $L_{\mathrm{GNN}}(g_{\theta}(G,\mathbf{X}'),Y)$ with $l$ steps of gradient descent. Importantly, we optimize both the parameters $\theta$ of GNN and the node features $\mathbf{X}'$. Then, we use the difference between the optimized node features $\mathbf{X}_{new}'$ and the input node features $\mathbf{X}' = f^{1}(\mathbf{X})$ as the target for the next decision trees built by GBDT. If $l = 1$, the difference $\mathbf{X}_{new}' - \mathbf{X}'$ exactly equals the negative gradient of the loss function w.r.t. the input features $\mathbf{X}'$ multiplied by the learning rate $\eta$:
+
+$$
+\mathbf {X} _ {n e w} ^ {\prime} = \mathbf {X} ^ {\prime} - \eta \frac {\partial L _ {\mathrm {G N N}} (g _ {\theta} (G , \mathbf {X} ^ {\prime}) , Y)}{\partial \mathbf {X} ^ {\prime}}.
+$$
+
+In the second iteration, we train a new GBDT model $f^2$ with the original input features $\mathbf{X}$ but new target labels: $\mathbf{X}_{new}' - \mathbf{X}'$ . Intuitively, $f^2$ fits the direction that would improve GNN prediction based on the first predictions $f^1(\mathbf{X})$ . In other words, GBDT approximates the gradient steps made by GNN for the node features $\mathbf{X}'$ . This is a regression problem, so here $L_{\mathrm{GBDT}}$ is the RMSE loss.
+
+After $f^2$ is trained, we combine the predictions $f(\mathbf{X}) = f^{1}(\mathbf{X}) + f^{2}(\mathbf{X})$ and pass the obtained values $\mathbf{X}'$ to GNN as node features. GNN model $g_{\theta}$ again does $l$ steps of backpropagation and passes the new difference $\mathbf{X}_{new}' - \mathbf{X}'$ as a target to the next iteration of GBDT. In total, the model is trained for $N$ epochs and outputs a GBDT model $f: \mathbf{X} \mapsto Y$ and GNN model $g_{\theta}: (G, \mathbf{X}) \mapsto Y$ , which can be used for downstream tasks.
+
+Intuitively, the BGNN model consists of two consecutive blocks, GBDT and GNN, which are trained end-to-end and can therefore be interpreted from two angles: GBDT as an embedding layer for GNN, or GNN as a parametric loss function for GBDT. In the former case, GBDT transforms the original input features $\mathbf{X}$ to new node features $\mathbf{X}'$, which are then passed to GNN. In the latter case, one can see BGNN as a standard gradient boosting procedure where GNN acts as a complex loss function that depends on the graph topology.
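The control flow of Algorithm 1 can be sketched as follows. This is only an illustration of the alternating updates under simplifying assumptions, not the paper's actual models: the GBDT component is replaced by a least-squares stand-in, the GNN by a fixed linear aggregation with a squared loss, and one-hot node features are used so the stand-in regressor can track the feature updates exactly. The key step is the last line of the loop, where the next round of "trees" fits $\mathbf{X}_{new}' - \mathbf{X}'$.

```python
import numpy as np

def fit_regressor(X, target):
    """Stand-in for 'train k trees of GBDT' (least squares here, for brevity)."""
    W, *_ = np.linalg.lstsq(X, target, rcond=None)
    return lambda Q: Q @ W

def gnn_loss_and_grad(Xp, A, Y):
    """Stand-in for the GNN: L = mean((A Xp - Y)^2), where A is a normalized
    adjacency playing the role of graph aggregation; returns L and dL/dXp."""
    resid = A @ Xp - Y
    return (resid ** 2).mean(), 2.0 * A.T @ resid / resid.size

def train_bgnn(X, A, Y, epochs=30, l=5, eta=0.5):
    ensemble = []                           # the additive model f
    target = Y.copy()                       # initialize GBDT targets to Y
    f_X = np.zeros_like(Y)
    losses = []
    for _ in range(epochs):
        g = fit_regressor(X, target)        # new "trees" fit the current target
        ensemble.append(g)
        f_X = f_X + g(X)                    # f <- f + f^i
        Xp = f_X.copy()                     # node features handed to the GNN
        for _ in range(l):                  # l gradient steps on the GNN input
            _, grad = gnn_loss_and_grad(Xp, A, Y)
            Xp = Xp - eta * grad
        losses.append(gnn_loss_and_grad(Xp, A, Y)[0])
        target = Xp - f_X                   # next GBDT target: X'_new - X'
    return ensemble, losses

# toy setup: 3-node path graph with self-loops, one-hot node features
A = np.array([[0.5, 0.5, 0.0], [1/3, 1/3, 1/3], [0.0, 0.5, 0.5]])
X = np.eye(3)
Y = np.array([[1.0], [2.0], [3.0]])
ensemble, losses = train_bgnn(X, A, Y)
```

On this toy problem the recorded loss shrinks over the epochs, since each boosting round absorbs the gradient displacement of the previous GNN phase.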
+
+# 4 EXPERIMENTS
+
+We have performed a comparative evaluation of BGNN and Res-GNN against a wide variety of strong baselines and previous approaches on heterogeneous node prediction problems, achieving significant improvement in performance across all of them. This section outlines our experimental setting, the results on node regression and classification problems, and extracted feature representations.
+
+In our first experiments, we want to answer two questions:
+
+Q1 Does combination of GBDT and GNN lead to better qualitative results in heterogeneous node regression and classification problems?
+Q2 Is the end-to-end training proposed in Algorithm 1 better than a combination of pretrained GBDT with GNN?
+
+To answer these questions, we consider several strong baselines among GBDTs, GNNs, and pure neural networks. CatBoost is a recent GBDT implementation (Prokhorenkova et al., 2018) that uses
+
+oblivious trees as weak learners. LightGBM is another GBDT model (Ke et al., 2017) that is used extensively in ML competitions. Among GNNs, we tested four state-of-the-art models that have shown superior performance in node prediction tasks: GAT (Veličković et al., 2018), GCN (Kipf & Welling, 2017), AGNN (Thekumparampil et al., 2018), APPNP (Klicpera et al., 2019). Additionally, we test the performance of a fully-connected neural network, FCNN, and its end-to-end combination with GNNs, FCNN-GNN.
+
+We compare these baselines against two proposed approaches: the end-to-end BGNN model and not end-to-end Res-GNN. The BGNN model follows Algorithm 1 and builds each tree approximating the GNN error in the previous iteration. In contrast, Res-GNN first trains a GBDT model on the training set of nodes and then either appends its predictions for all nodes to the original node features or replaces the original features with the GBDT predictions, after which GNN is trained on the updated features, and GNN's predictions are used to calculate metrics. Hence, Res-GNN is a two-stage approach where the training of GBDT is independent of GNN. On the other hand, BGNN trains GBDT and GNN simultaneously in an end-to-end fashion. In most of our experiments, the GNN-component of FCNN-GNN, Res-GNN, and BGNN is based on GAT, while in Section 4.3 we analyze consistency of improvements across different GNN models.
+
+We ensure that the comparison is done fairly by training each model until the convergence with a reasonable set of hyperparameters evaluated on the validation set. We run each hyperparameter setting three times and take the average of the results. Furthermore, we have five random splits of the data, and the final number represents the average performance of the model for all five random seeds. More details about hyperparameters can be found in Appendix B.
+
+# 4.1 NODE REGRESSION
+
+# 4.1.1 DATASETS
+
+We utilize five real-world node regression datasets with different properties, outlined in Table 1. Four of these datasets are heterogeneous, i.e., the input features are of different types, scales, and meanings. For example, in the VK dataset, the node features are both numerical (e.g., last time seen on the platform) and categorical (e.g., country of living and university). On the other hand, the Wiki dataset is homogeneous, i.e., the node features are interdependent and correspond to bag-of-words representations of Wikipedia articles. Additional details about the datasets can be found in Appendix C.
+
+Table 1: Summary of regression datasets.
+
+| | House | County | VK | Avazu | Wiki |
+| --- | --- | --- | --- | --- | --- |
+| Setting | Heterogeneous | Heterogeneous | Heterogeneous | Heterogeneous | Homogeneous |
+| # Nodes | 20640 | 3217 | 54028 | 1297 | 5201 |
+| # Edges | 182146 | 12684 | 213644 | 54364 | 198493 |
+| # Features/Node | 6 | 7 | 14 | 9 | 3148 |
+| Mean Target | 2.06 | 5.44 | 35.47 | 0.08 | 27923.86 |
+| Min Target | 0.14 | 1.7 | 13.48 | 0 | 16 |
+| Max Target | 5.00 | 24.1 | 118.39 | 1 | 849131 |
+| Median Target | 1.79 | 5 | 33.83 | 0 | 9225 |
+
+# 4.1.2 RESULTS
+
+The results of our comparative evaluation for node regression are summarized in Table 2. We report the mean RMSE (with standard deviation) on the test set and the relative gap between RMSE of the GAT model (Veličković et al., 2018) and other methods, i.e., $\mathrm{gap} = (r_m - r_{gnn}) / r_{gnn}$ , where $r_m$ and $r_{gnn}$ are RMSE of that model and of GAT, respectively.
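As a sanity check of the gap definition (using hypothetical round numbers; the gaps reported in Table 2 are presumably computed from unrounded RMSE values, so they can differ slightly from what the rounded table entries yield):

```python
def relative_gap(r_m, r_gnn):
    """gap = (r_m - r_gnn) / r_gnn, in percent; negative means better than GAT."""
    return 100.0 * (r_m - r_gnn) / r_gnn

# hypothetical round numbers, not the paper's exact unrounded values
gap = relative_gap(0.63, 0.54)   # a model with higher RMSE than GAT -> positive gap
```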
+
+Our results demonstrate a significant improvement of BGNN over the baselines. In particular, in the heterogeneous case, BGNN achieves an $8\%$, $14\%$, $4\%$, and $4\%$ reduction of the error for the House, County, VK, and Avazu datasets, respectively. The Res-GNN model, which uses a pretrained CatBoost model to produce the input of GNN, also decreases RMSE, although not as much as the end-to-end model
+
+Table 2: Summary of our results for node regression. Gap % is relative difference w.r.t. GAT RMSE (the smaller the better). Top-2 results are highlighted in bold.
+
+| | Method | House RMSE | Gap % | County RMSE | Gap % | VK RMSE | Gap % | Avazu RMSE | Gap % | Wiki RMSE | Gap % |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| GBDT | CatBoost | 0.63 ± 0.01 | 15.3 | 1.39 ± 0.07 | -4.32 | 7.16 ± 0.20 | -0.82 | 0.1172 ± 0.02 | 3.36 | 46359 ± 4508 | 0.97 |
+| GBDT | LightGBM | 0.63 ± 0.01 | 15.98 | 1.4 ± 0.07 | -3.93 | 7.2 ± 0.21 | -0.33 | 0.1171 ± 0.02 | 3.27 | 49915 ± 3643 | 8.71 |
+| GNN | GAT | 0.54 ± 0.01 | 0 | 1.45 ± 0.06 | 0 | 7.22 ± 0.19 | 0 | 0.1134 ± 0.01 | 0 | 45916 ± 4527 | 0 |
+| GNN | GCN | 0.63 ± 0.01 | 16.77 | 1.48 ± 0.08 | 2.06 | 7.25 ± 0.19 | 0.34 | 0.1141 ± 0.02 | 0.58 | 44936 ± 4083 | -2.14 |
+| GNN | AGNN | 0.59 ± 0.01 | 8.01 | 1.45 ± 0.08 | -0.19 | 7.26 ± 0.20 | 0.54 | 0.1134 ± 0.02 | -0.02 | 45982 ± 3058 | 0.14 |
+| GNN | APPNP | 0.69 ± 0.01 | 27.11 | 1.5 ± 0.11 | 3.39 | 13.23 ± 0.12 | 83.19 | 0.1127 ± 0.01 | -0.65 | 53426 ± 4159 | 16.36 |
+| NN | FCNN | 0.68 ± 0.02 | 25.49 | 1.48 ± 0.07 | 1.56 | 7.29 ± 0.21 | 1.02 | 0.118 ± 0.02 | 4.07 | 51662 ± 2983 | 12.51 |
+| NN | FCNN-GNN | 0.53 ± 0.01 | -2.48 | 1.39 ± 0.06 | -4.68 | 7.22 ± 0.20 | 0.01 | 0.1114 ± 0.02 | -1.82 | 48491 ± 7889 | 5.61 |
+| Ours | Res-GNN | 0.51 ± 0.01 | -6.39 | 1.33 ± 0.08 | -8.35 | 7.07 ± 0.20 | -2.04 | 0.1095 ± 0.01 | -3.42 | 46747 ± 4639 | 1.81 |
+| Ours | BGNN | 0.5 ± 0.01 | -8.15 | 1.26 ± 0.08 | -13.67 | 6.95 ± 0.21 | -3.8 | 0.109 ± 0.01 | -3.9 | 49222 ± 3743 | 7.2 |
+
+BGNN. In the homogeneous dataset Wiki, CatBoost and, subsequently, Res-GNN and BGNN are outperformed by the GNN model. Intuitively, when the features are homogeneous, neural network approaches are sufficient to attain the best results. This shows that BGNN leads to better qualitative results and its end-to-end training outperforms other approaches in node prediction tasks for graphs with heterogeneous tabular data.
+
+We can also observe that the end-to-end combination FCNN-GNN often leads to better performance than pure GNN. However, its improvement is smaller than for BGNN which uses the advantages of GBDT models. Moreover, CatBoost and LightGBM can be effective on their own, but their performance is not stable across all datasets. Overall, these experiments demonstrate the superiority of BGNN against other strong models.
+
+# 4.2 NODE CLASSIFICATION
+
+For node classification, we use five datasets with different properties. Due to the lack of publicly available datasets with heterogeneous node features, we adopt the datasets House_class and VK_class from the regression task by converting the target labels into several discrete classes. We additionally include two sparse node classification datasets SLAP and DBLP coming from heterogeneous information networks (HIN) with nodes having different types. We also include one homogeneous dataset OGB-ArXiv (Hu et al., 2020a). In this dataset, the node features correspond to a 128-dimensional feature vector obtained by averaging the embeddings of words in the title and abstract. Hence, the features are not heterogeneous, and therefore GBDT is not expected to outperform neural network approaches. More details about these datasets can be found in Appendix D.
+
+Table 3: Summary of our results for node classification. Gap % is the relative difference w.r.t. GAT accuracy (the higher the better). Top-2 results are highlighted in bold.
+
+| | Method | House_class Acc. | Gap % | VK_class Acc. | Gap % | Slap Acc. | Gap % | DBLP Acc. | Gap % | OGB-ArXiv Acc. | Gap % |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| GBDT | CatBoost | 0.52 ± 0.01 | -16.82 | 0.57 ± 0.01 | -1.26 | 0.922 ± 0.01 | 15.12 | 0.759 ± 0.03 | -5.42 | 0.45 | -36.35 |
+| GBDT | LightGBM | 0.55 ± 0.00 | -11.98 | 0.579 ± 0.01 | 0.26 | 0.963 ± 0.00 | 20.3 | 0.913 ± 0.01 | 13.73 | 0.51 | -26.97 |
+| GNN | GAT | 0.625 ± 0.00 | 0 | 0.577 ± 0.00 | 0 | 0.801 ± 0.01 | 0 | 0.802 ± 0.01 | 0 | 0.70 | 0 |
+| GNN | GCN | 0.6 ± 0.00 | -3.98 | 0.574 ± 0.00 | -0.6 | 0.878 ± 0.01 | 9.72 | 0.428 ± 0.04 | -46.6 | - | - |
+| GNN | AGNN | 0.614 ± 0.01 | -1.73 | 0.572 ± 0.00 | -0.79 | 0.892 ± 0.01 | 11.47 | 0.794 ± 0.01 | -1.02 | - | - |
+| GNN | APPNP | 0.619 ± 0.00 | -0.89 | 0.573 ± 0.00 | -0.67 | 0.895 ± 0.01 | 11.79 | 0.83 ± 0.02 | 3.47 | - | - |
+| NN | FCNN | 0.534 ± 0.01 | -14.53 | 0.567 ± 0.01 | -1.72 | 0.759 ± 0.04 | -5.24 | 0.623 ± 0.02 | -22.3 | 0.50 | -28.91 |
+| NN | FCNN-GNN | 0.64 ± 0.00 | 2.36 | 0.589 ± 0.00 | 2.13 | 0.89 ± 0.01 | 11.11 | 0.81 ± 0.01 | 0.94 | 0.71 | 0.54 |
+| Ours | Res-GNN | 0.625 ± 0.01 | -0.06 | 0.603 ± 0.00 | 4.45 | 0.905 ± 0.01 | 13.06 | 0.892 ± 0.01 | 11.11 | 0.70 | -0.33 |
+| Ours | BGNN | 0.682 ± 0.00 | 9.18 | 0.683 ± 0.00 | 18.3 | 0.95 ± 0.00 | 18.61 | 0.889 ± 0.01 | 10.77 | 0.67 | -4.36 |
+
+As can be seen from Table 3, on the datasets with heterogeneous tabular features (House_class and VK_class), BGNN outperforms other approaches by a significant margin. For example, for the
+
+VK_class dataset BGNN achieves more than an $18\%$ relative increase in accuracy. This demonstrates that the learned representations of GBDT together with GNN can be equally useful in the node classification setting on data with heterogeneous features.
+
+The other two datasets, Slap and DBLP, have sparse bag-of-words features that are particularly challenging for the GNN model. On these two datasets, GBDT is the strongest baseline. Moreover, since GBDT, which ignores the graph structure, outperforms GNN, we conclude that the graph structure does not help here; hence, BGNN is not expected to beat GBDT. This is indeed the case: the final accuracy of BGNN is slightly worse than that of GBDT.
+
+In the homogeneous OGB-ArXiv dataset, FCNN-GNN and GNN achieve the top performance, followed by the Res-GNN and BGNN models. In a nutshell, GBDT does not learn good predictions on the homogeneous input features and therefore reduces the discriminative power of GNN. Both cases, with sparse and with homogeneous features, show that the performance of BGNN is on par with or higher than that of GNN; however, the lack of heterogeneous structure in the data may make the joint training of GBDT and GNN redundant.
+
+# 4.3 CONSISTENCY ACROSS DIFFERENT GNN MODELS
+
+Seeing that our models perform significantly better than strong baselines on various datasets, we want to test whether the improvement is consistent if different GNN models are used. Thus, we ask:
+
+Q3 Do different GNN models benefit from our approach of combination with GBDT?
+
+To answer this question, we compare GNN models that include GAT (Veličković et al., 2018), GCN (Kipf & Welling, 2017), AGNN (Thekumparampil et al., 2018), and APPNP (Klicpera et al., 2019). We substitute each of these models into Res-GNN and BGNN and measure the relative change in performance with respect to the original GNN's performance.
+
+
+Figure 2: Relative difference for Res-GNN (yellow, diagonal) and BGNN (red, squared) for different GNN architectures w.r.t. GNN RMSE (the smaller the better). Panels: (a) House, (b) VK, (c) Avazu.
+
+In Figure 2 we report the relative RMSE gap between Res-GNN and BGNN for each of the GNN architectures, i.e., we compute $\text{gap} = (r_m - r_{gnn}) / r_{gnn}$, where $r_m$ and $r_{gnn}$ are RMSE of that model and of GNN, respectively. This experiment positively answers Q3 and shows that all tested GNN architectures significantly benefit from the proposed approach. For example, for the House dataset, the decrease in RMSE is $9\%$, $18\%$, $19\%$, and $17\%$ for the GAT, GCN, AGNN, and APPNP models, respectively. Additionally, one can see that the end-to-end training of BGNN (red, squared) leads to larger improvements than a naive combination of CatBoost and GNN in Res-GNN (yellow, diagonal). Exact metrics and training time are in Appendix E.
+
+# 4.4 TRAINING TIME
+
+As the previous experiments demonstrated superior quality across various datasets and GNN models, it is important to understand if the additional GBDT part can become a bottleneck in terms of efficiency for training this model on real-world datasets. Hence, we ask:
+
+Q4 Do BGNN and Res-GNN models incur a significant increase in training time?
+
+To answer this question, we measure the wall-clock time to train each model until convergence, taking early stopping into account. Table 4 presents the training time for each model. We can see that both BGNN and Res-GNN run faster than GNN in most cases. In other words, the BGNN and Res-GNN models do not incur an increase in training time but are actually more efficient than GNN. For example, for the VK dataset, BGNN and Res-GNN run $3\mathrm{x}$ and $2\mathrm{x}$ faster than GNN, respectively. Moreover, BGNN is consistently faster than the other end-to-end implementation, FCNN-GNN, which uses FCNN instead of CatBoost to preprocess the original input features.
+
+Table 4: Training time (s) for the node regression task.
+
+ | | Method | House | County | VK | Wiki | Avazu |
+ | --- | --- | --- | --- | --- | --- | --- |
+ | GBDT | CatBoost | 4 ± 1 | 2 ± 1 | 24 ± 4 | 10 ± 1 | 2 ± 2 |
+ | GBDT | LightGBM | 3 ± 0 | 1 ± 0 | 5 ± 3 | 3 ± 2 | 0 ± 0 |
+ | GNN | GAT | 35 ± 2 | 19 ± 6 | 42 ± 4 | 15 ± 1 | 9 ± 2 |
+ | GNN | GCN | 28 ± 0 | 18 ± 7 | 38 ± 0 | 13 ± 3 | 12 ± 6 |
+ | GNN | AGNN | 38 ± 5 | 28 ± 3 | 48 ± 3 | 19 ± 5 | 14 ± 8 |
+ | GNN | APPNP | 68 ± 1 | 34 ± 10 | 81 ± 3 | 49 ± 26 | 24 ± 15 |
+ | NN | FCNN | 16 ± 5 | 2 ± 1 | 109 ± 35 | 12 ± 2 | 2 ± 0 |
+ | NN | FCNN-GNN | 39 ± 1 | 21 ± 6 | 48 ± 2 | 16 ± 1 | 14 ± 3 |
+ | Ours | Res-GNN | 36 ± 7 | 7 ± 3 | 41 ± 7 | 31 ± 9 | 7 ± 2 |
+ | Ours | BGNN | 20 ± 4 | 2 ± 0 | 16 ± 0 | 21 ± 7 | 5 ± 1 |
+
+The reason for the improved efficiency is that BGNN and Res-GNN converge in far fewer iterations, as demonstrated in Figure 3, where we plot RMSE on the test set during training for all models (with winning hyperparameters). BGNN converges within the first ten iterations (for $k = 20$ ), leading to fast training. In contrast, Res-GNN is similar to GNN in terms of convergence for the first 100 epochs, but then continues decreasing RMSE, unlike GNN, which requires many more epochs to converge. This behavior is similar for other datasets (see Appendix F).
+
+
+Figure 3: RMSE on the test set during training for two node regression datasets: (a) House, (b) VK.
+
+# 4.5 VISUALIZING PREDICTIONS
+
+To investigate the performance of BGNN, we plot the final predictions of the trained models for observations in the training set. Our motivation is to scrutinize which points are correctly predicted by different models. Figure 4 displays the predictions of the GBDT, GNN, Res-GNN, and BGNN models as well as the true target values. To better understand the predictions of the BGNN model, in Figure 4(e) we additionally show the values predicted by the GBDT that was trained as a part of BGNN. This experiment is performed on the House dataset; the plots for the other datasets show similar trends and can be found in the supplementary materials.
+
+Several observations can be drawn from these figures. First, the true target values change quite smoothly within local neighborhoods; however, there are a few outliers: single red points among many blue points, and vice versa. These points can mislead the model during training, causing it to predict the wrong target value for many observations in the outliers' local neighborhoods. Hence, it is important for a model to make smoothed predictions in local neighborhoods.
+
+Figure 4: House dataset. True labels and predictions by the trained GBDT, GNN, Res-GNN, and BGNN models (training points only): (a) True, (b) GBDT, (c) GNN, (d) Res-GNN, (e) GBDT in BGNN, (f) BGNN. Point coordinates correspond to BGNN's learned representations in the first hidden layer. Color represents the final predictions made by each model.
+
+Second, comparing the prediction spaces of the GBDT, GNN, Res-GNN, and BGNN models, we observe that the predictions of GBDT are much grainier, with large variations within the neighborhoods of the vertices (high-quality images can be found in the supplementary materials). Intuitively, because the GBDT model does not have access to the graph structure, it cannot propagate its predictions within a node's vicinity. In contrast, GNN, Res-GNN, and BGNN can propagate outputs among local neighbors, smoothing out the final predictions, as seen in Figures 4(c), 4(d), and 4(f).
+
+Third, focusing on the values of the predictions (color bars on the right of each plot) of the GBDT, GNN, and BGNN models, we notice that the scale of the final predictions of GBDT and BGNN closely aligns with the true values, while GNN's predictions mismatch the true values by a large margin. Our intuition is that the expressive power of GBDT to learn the piecewise decision boundaries common in tabular datasets helps GBDT and BGNN properly tune their final predictions with respect to the true range of values. In contrast, GNN relies solely on neural layers to learn complex decision rules.
+
+Another observation comes from looking at the values predicted by the GBDT trained as a part of BGNN (see Figure 4(e)). While this GBDT model is initialized using the true target labels, it was not forced to predict the target during training. Interestingly, this model shows the same trend and clearly captures the regions of high/low target values. On the other hand, it is much more conservative: on all datasets, the range of its predicted values is significantly smaller than the true one. We hypothesize that this GBDT learns to scale its predictions to make them more suitable for further refinement by GNN.
+
+# 5 CONCLUSION
+
+We have presented BGNN, a novel architecture for learning on graphs with heterogeneous tabular node features. BGNN takes advantage of the GBDT model to build the hyperplane decision boundaries that are common for heterogeneous data, and then utilizes a GNN to refine the predictions using relational information. Our approach is end-to-end and can be combined with any message-passing neural network and gradient boosting method. Extensive experiments demonstrate that the proposed architecture is superior to strong existing competitors in terms of both prediction accuracy and training time. A possible direction for future research is to analyze whether this approach is profitable for graph-level predictions such as graph classification or subgraph detection.
+
+# ACKNOWLEDGMENTS
+
+The authors thank the anonymous reviewers for their reviews and Anton Tsitsulin for kindly sharing the VK data. Liudmila Prokhorenkova also acknowledges the financial support from the Ministry of Education and Science of the Russian Federation in the framework of MegaGrant 075-15-2019-1926 and from the Russian President grant supporting leading scientific schools of the Russian Federation NSh-2540.2020.1.
+
+# REFERENCES
+
+Sercan O Arik and Tomas Pfister. Tabnet: Attentive interpretable tabular learning. arXiv preprint arXiv:1908.07442, 2020.
+Sarkhan Badirli, Xuanqing Liu, Zhengming Xing, Avradeep Bhowmik, and Sathiya S Keerthi. Gradient boosting neural networks: Grownet. arXiv preprint arXiv:2002.07971, 2020.
+Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
+Candice Bentéjac, Anna Csörgő, and Gonzalo Martínez-Muñoz. A comparative analysis of gradient boosting algorithms. Artificial Intelligence Review, pp. 1-31, 2020.
+Sergio Casas, Cole Gulino, Renjie Liao, and Raquel Urtasun. Spatially-aware graph neural networks for relational behavior forecasting from sensor data. arXiv preprint arXiv:1910.08233, 2019.
+Djork-Arne Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). In 4th International Conference on Learning Representations, 2016.
+Ji Feng, Yang Yu, and Zhi-Hua Zhou. Multi-layered gradient boosting decision trees. In Advances in neural information processing systems, pp. 3551-3561, 2018.
+Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, 2019.
+Matthias Fey, Jan E Lenssen, Christopher Morris, Jonathan Masci, and Nils M Kriege. Deep graph matching consensus. In International Conference on Learning Representations, 2019.
+Jerome H Friedman. Greedy function approximation: a gradient boosting machine. Annals of statistics, pp. 1189-1232, 2001.
+Hussein Hazimeh, Natalia Ponomareva, Petros Mol, Zhenyu Tan, and Rahul Mazumder. The tree ensemble layer: Differentiability meets conditional computation. In 37th International Conference on Machine Learning (ICML 2020), 2020.
+Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. In Conference on Neural Information Processing Systems, 2020a.
+Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In International Conference on Learning Representations, 2020b.
+Junteng Jia and Austin Benson. Outcome correlation in graph neural network regression. arXiv preprint arXiv:2002.08274, 2020.
+Harmanpreet Kaur, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna Wallach, and Jennifer Wortman Vaughan. Interpreting interpretability: Understanding data scientists' use of interpretability tools for machine learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020.
+
+Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. Lightgbm: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems 30. 2017.
+Nicolas Keriven and Gabriel Peyré. Universal invariant and equivariant graph neural networks. In Advances in Neural Information Processing Systems, pp. 7092-7101, 2019.
+Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.
+Johannes Klicpera, Aleksandar Bojchevski, and Stephan Gunnemann. Predict then propagate: Graph neural networks meet personalized pagerank. In International Conference on Learning Representations, 2019.
+Pan Li, Zhen Qin, Xuanhui Wang, and Donald Metzler. Combining decision trees and neural networks for learning-to-rank in personal search. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2032-2040, 2019.
+Andreas Loukas. What graph neural networks cannot learn: depth vs width. In International Conference on Learning Representations, 2020.
+Haggai Maron, Ethan Fetaya, Nimrod Segol, and Yaron Lipman. On the universality of invariant networks. In International conference on machine learning, pp. 4363-4371. PMLR, 2019.
+Nina Mazyavkina, Sergey Sviridov, Sergei Ivanov, and Evgeny Burnaev. Reinforcement learning for combinatorial optimization: A survey. arXiv preprint arXiv:2003.03600, 2020.
+R Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters, 33 (3):291-297, 1997.
+Ben Peters, Vlad Niculae, and André FT Martins. Sparse sequence-to-sequence models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1504-1519, 2019.
+Sergei Popov, Stanislav Morozov, and Artem Babenko. Neural oblivious decision ensembles for deep learning on tabular data. In International Conference on Learning Representations, 2019.
+Liudmila Prokhorenkova, Gleb Gusev, Aleksandr Vorobev, Anna Veronika Dorogush, and Andrey Gulin. Catboost: unbiased boosting with categorical features. In Advances in neural information processing systems, pp. 6638-6648, 2018.
+Yuxiang Ren, Bo Liu, Chao Huang, Peng Dai, Liefeng Bo, and Jiawei Zhang. Heterogeneous deep graph infomax. arXiv preprint arXiv:1911.08538, 2019.
+Benedek Rozemberczki, Carl Allen, and Rik Sarkar. Multi-scale attributed node embedding. arXiv preprint arXiv:1909.13021, 2019.
+Victor Garcia Satorras and Joan Bruna Estrach. Few-shot learning with graph neural networks. In International Conference on Learning Representations, 2018.
+Weiping Song, Chence Shi, Zhiping Xiao, Zhijian Duan, Yewen Xu, Ming Zhang, and Jian Tang. Autoint: Automatic feature interaction learning via self-attentive neural networks. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pp. 1161–1170, 2019.
+Jonathan M Stokes, Kevin Yang, Kyle Swanson, Wengong Jin, Andres Cubillos-Ruiz, Nina M Donghia, Craig R MacNair, Shawn French, Lindsey A Carfrae, Zohar Bloom-Ackerman, et al. A deep learning approach to antibiotic discovery. Cell, 180(4):688-702, 2020.
+Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, and Mark Coates. A framework for recommending accurate and diverse items using bayesian graph convolutional neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2030-2039, 2020.
+
+Ke Sun, Zhouchen Lin, and Zhanxing Zhu. Adagcn: Adaboosting graph convolutional networks into deep models. arXiv preprint arXiv:1908.05081, 2019.
+Kiran K Thekumparampil, Chong Wang, Sewoong Oh, and Li-Jia Li. Attention-based graph neural network for semi-supervised learning. arXiv preprint arXiv:1803.03735, 2018.
+Anton Tsitsulin, Davide Mottin, Panagiotis Karras, and Emmanuel Müller. Verse: Versatile graph embeddings from similarity measures. In Proceedings of the 2018 World Wide Web Conference, pp. 539-548, 2018.
+Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph Attention Networks. International Conference on Learning Representations, 2018.
+Duo Wang, Mateja Jamnik, and Pietro Lio. Abstract diagrammatic reasoning with multiplex graph networks. In International Conference on Learning Representations, 2020.
+Man Wu, Shirui Pan, Chuan Zhou, Xiaojun Chang, and Xingquan Zhu. Unsupervised domain adaptive graph convolutional networks. In Proceedings of The Web Conference, 2020.
+Yuxin Xiao, Zecheng Zhang, Carl Yang, and Chengxiang Zhai. Non-local attention learning on large heterogeneous information networks. In 2019 IEEE International Conference on Big Data (Big Data), pp. 978-987. IEEE, 2019.
+Yongxin Yang, Irene Garcia Morillo, and Timothy M Hospedales. Deep neural decision trees. arXiv preprint arXiv:1806.06988, 2018.
+Zhi-Hua Zhou and Ji Feng. Deep forest. National Science Review, 6(1):74-86, 2019.
+
+# A FURTHER RELATED WORK
+
+To the best of our knowledge, there are no approaches combining the benefits of GBDT and GNN models for representation learning on graphs with tabular data. However, there are many attempts to adapt non-graph neural networks for tabular data or to combine them with gradient boosting in different ways.
+
+Several works (Popov et al., 2019; Yang et al., 2018; Zhou & Feng, 2019; Feng et al., 2018; Hazimeh et al., 2020) attempt to mitigate the non-differentiable nature of decision trees. For example, Popov et al. (2019) proposed to replace the hard choices of splitting features and splitting thresholds with their continuous counterparts, using the $\alpha$ -entmax transformation (Peters et al., 2019). While such an approach makes decision trees suitable for joint training with a GNN, the computational burden of training both end-to-end becomes a bottleneck for large graphs.
+
+Another method (Badirli et al., 2020) uses neural networks as weak learners for the GBDT model. For graph representation problems such as node regression, one could replace the standard neural networks with graph neural networks. However, training multiple GNNs as weak learners would be computationally expensive. Additionally, such a combination lacks some advantages of GBDT, like handling heterogeneous and categorical features and missing values. An approach called AdaGCN (Sun et al., 2019) incorporates AdaBoost ideas into the design of GNNs in order to construct deep models. Again, this method does not exploit the advantages of GBDT methods.
+
+Finally, Li et al. (2019) investigated different ways of combining decision-tree-based models and neural networks. While the motivation is similar to ours — get the benefits of both types of models — the paper focuses specifically on learning-to-rank problems. Additionally, while some of their methods are similar in spirit to Res-GNN, they do not update GBDT in an end-to-end manner, which is a substantial contribution of the current research.
+
+# B HYPERPARAMETERS
+
+Parameters in brackets $\{\}$ are selected by hyperparameter search on the validation set.
+
+LightGBM: number of leaves is $\{15, 63\}$ , $\ell_2$ regularization coefficient is 0, boosting type is gbdt, number of epochs is 1000, early stopping rounds is 100.
+
+CatBoost: depth is $\{4,6\}$ , $\ell_2$ regularization coefficient is 0, number of epochs is 1000, early stopping rounds is 100.
+
+FCNN: number of layers is $\{2, 3\}$ , dropout is $\{0., 0.5\}$ , hidden dimension is 64, number of epochs is 5000, early stopping rounds is 2000.
+
+GNN: dropout rate is $\{0., 0.5\}$ , hidden dimension is 64, number of epochs is 2000, early stopping rounds is 200. GAT, GCN, and AGNN models have two convolutional layers with dropout and ELU activation function (Clevert et al., 2016). APPNP has a two-layer fully-connected neural network with dropout and ELU activation followed by a convolutional layer with $k = 10$ and $\alpha = 0.1$ . We use eight heads with eight hidden neurons for GAT model.
+
+Res-GNN: dropout rate is $\{0., 0.5\}$ , hidden dimension is 64, number of epochs is 1000, early stopping rounds is 100. We also tune whether to use solely predictions of CatBoost model or append them to the input features. CatBoost model is trained for 1000 epochs.
+
+BGNN: dropout rate is $\{0., 0.5\}$ , hidden dimension is 64, number of epochs is 200, early stopping rounds is 10, number of trees and backward passes per epoch is $\{10, 20\}$ , depth of the tree is 6. We also tune whether to use solely predictions of CatBoost model or append them to the input features.
+
+For all models, we also perform a hyperparameter search on learning rate in $\{0.1, 0.01\}$ . Every hyperparameter setting is evaluated three times and an average is taken. We use five random splits for train/validation/test with $0.6 / 0.2 / 0.2$ ratio. The average across five seeds is reported in the tables.
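The search described above amounts to a small grid; a sketch of enumerating the BGNN configurations follows (the parameter names are ours for illustration, not from a released API):

```python
from itertools import product

# BGNN search space from the description above.
grid = {
    "lr": [0.1, 0.01],                  # learning rate
    "dropout": [0.0, 0.5],
    "k": [10, 20],                      # trees / backward passes per epoch
    "append_gbdt_pred": [True, False],  # GBDT output alone vs. appended to features
}

# Cartesian product of all hyperparameter values -> list of config dicts.
configs = [dict(zip(grid, combo)) for combo in product(*grid.values())]
```

Each of the resulting configurations would then be evaluated three times on the validation set, as described above.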
+
+# C REGRESSION DATASETS
+
+In the House dataset (Pace & Barry, 1997), nodes are properties, edges connect proximal nodes, and the target is the property's price. We use the publicly available data on all the block groups in California collected from the 1990 Census. We connect each block with at most five of its nearest neighbors if they lie within a ball of a certain radius, as measured by latitude and longitude. We keep the following node features: MedInc, HouseAge, AveRooms, AveBedrms, Population, AveOccup.
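The graph construction described above (at most five nearest neighbors, restricted to a fixed radius) can be sketched with NumPy; the coordinates and radius below are illustrative, not the dataset's actual values:

```python
import numpy as np

def knn_within_radius(coords: np.ndarray, k: int = 5, radius: float = 1.0):
    """Return directed edges (i, j): j is among i's k nearest neighbors
    and lies within `radius` of i (Euclidean distance on the coordinates)."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude self-loops
    edges = []
    for i in range(n):
        nearest = np.argsort(d[i])[:k]
        edges.extend((i, j) for j in nearest if d[i, j] <= radius)
    return edges

# Three nearby points and one distant outlier (toy lat/long values).
coords = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.2], [5.0, 5.0]])
edges = knn_within_radius(coords, k=5, radius=1.0)
```

Note that the radius cutoff leaves the distant point isolated even though it is technically among the nearest neighbors of the others.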
+
+The County dataset (Jia & Benson, 2020) is a county-level election map network. Each node is a county, and two nodes are connected if they share a border. We consider node features from the year 2016: DEM, GOP, MedianIncome, MigraRate, BirthRate, DeathRate, BachelorRate, UnemploymentRate. We follow the setup of the original paper and select UnemploymentRate as the target label. We filter out the nodes of the original data that do not have features.
+
+The VK dataset (Tsitsulin et al., 2018) comes from a popular social network where people are mutually connected based on friendship, and the regression problem is to predict the age of a person. We use an open-access subsample of the VK social network covering the first 1M users. The dataset has been preprocessed to keep only the users who opted in to share their demographic information and preferences: country, city, hasmobile, last_seenplatform, political, religion_id, alcohol, smoking, relation, sex, university.
+
+Wiki dataset (Rozemberczki et al., 2019) represents a page-page network on a specific topic (squirrels) with the task of predicting average monthly traffic. The features are bag-of-words for informative nouns (3148 in total) that appeared in the main text of the Wikipedia article. The target is the average monthly traffic between October 2017 and November 2018 for each article.
+
+The Avazu dataset (Song et al., 2019) represents a device-device network, with two devices being connected if they appear on the same site within the same application. For this dataset, the goal is to predict the click-through rate (CTR) of each device. We take the first 10M rows from the publicly available train log of user clicks. We compute the CTR for each device id and filter out ids with fewer than 10 ad displays. We connect two devices if they had ad displays on the same site id from the same application id. The node features are anonymized categories: $C1$ , $C14$ , $C15$ , $C16$ , $C17$ , $C18$ , $C19$ , $C20$ , $C21$ .
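The two preprocessing steps (per-device CTR and the shared-context edges) can be sketched in plain Python; the field names mirror the public Avazu log, while the toy rows are ours:

```python
from collections import defaultdict

# Toy click log: (device_id, site_id, app_id, click).
rows = [
    ("d1", "s1", "a1", 1), ("d1", "s1", "a1", 0), ("d1", "s2", "a1", 0),
    ("d2", "s1", "a1", 1), ("d2", "s3", "a2", 0),
]

# CTR per device: clicks / ad displays.
shows, clicks = defaultdict(int), defaultdict(int)
for dev, _, _, c in rows:
    shows[dev] += 1
    clicks[dev] += c
ctr = {dev: clicks[dev] / shows[dev] for dev in shows}

# Undirected edge between devices sharing a (site_id, app_id) context.
by_context = defaultdict(set)
for dev, site, app, _ in rows:
    by_context[(site, app)].add(dev)
edges = {tuple(sorted((u, v)))
         for devs in by_context.values()
         for u in devs for v in devs if u != v}
```

The real pipeline would additionally drop devices with fewer than 10 displays, as stated above.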
+
+# D CLASSIFICATION DATASETS
+
+For node classification, we consider three types of node features: heterogeneous (VK and House), sparse (Slap and DBLP), and homogeneous (OGB-ArXiv).
+
+For House and VK, we replace the original numerical target value with the bin it falls into. More specifically, for VK we consider the classes $< 20, 20 - 25, 25 - 30, \ldots, 45 - 50, > 50$ for the age attribute. Similarly, for the House dataset we bin the target value using the boundaries $[1, 1.5, 2, 2.5]$ . Hence, there are 7 and 5 classes for VK and House, respectively.
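This binning can be done with `numpy.digitize`; the House boundaries follow the text, while the sample target values are illustrative:

```python
import numpy as np

boundaries = [1.0, 1.5, 2.0, 2.5]            # House: 4 boundaries -> 5 classes
targets = np.array([0.8, 1.2, 1.7, 2.4, 3.0])  # toy target values
labels = np.digitize(targets, boundaries)      # class indices 0..4
```

Values below the first boundary get class 0, values above the last boundary get the final class.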
+
+For datasets with sparse features, we consider two datasets coming from heterogeneous information networks (HINs), where nodes are of several different types. A common way to represent a HIN is through meta-paths, i.e., collections of all possible paths between nodes of particular types. For example, for a citation network, one may specify paths of the type paper-author-paper (PAP) and the type paper-subject-paper (PSP). The original graph is then approximated by several adjacency matrices, one per meta-path type.
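A meta-path adjacency such as PAP can be obtained from the bipartite paper-author incidence matrix; a toy sketch (the incidence matrix below is illustrative):

```python
import numpy as np

# Bipartite incidence: A_pa[p, a] = 1 if author a wrote paper p.
A_pa = np.array([[1, 1, 0],
                 [0, 1, 1],
                 [0, 0, 1]])

# PAP meta-path: two papers are connected if they share at least one author.
pap = (A_pa @ A_pa.T > 0).astype(int)
np.fill_diagonal(pap, 0)  # drop self-loops
```

The same pattern with the paper-subject incidence matrix would yield the PSP adjacency.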
+
+Table 5: Summary of classification datasets.
+
+ | | SLAP | DBLP | OGB-ArXiv |
+ | --- | --- | --- | --- |
+ | # Nodes | 20419 | 14475 | 169343 |
+ | # Edges | 172248 | 40269 | 1166243 |
+ | # Features | 2701 | 5002 | 128 |
+ | Classes | 15 | 4 | 40 |
+ | Min Class | 103 | 745 | 29 |
+ | Max Class | 534 | 1197 | 27321 |
+
+The DBLP dataset (Ren et al., 2019) is a network with three node types (authors, papers, conferences) and four target classes for the authors (database, data mining, information retrieval, and machine learning). To obtain a single graph, we use the adjacency matrix of the relation APA, which closely reflects the relationships between authors. Each author has a bag-of-words representation (300 words) of all the abstracts published by that author. Furthermore, for every node we compute its degrees under all types of relations and append them as additional node features; namely, we add two node features corresponding to the node's degrees in the APA and APCPA adjacency matrices.
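The degree-feature augmentation described above can be sketched with NumPy (the feature matrix and adjacency below are toy-sized):

```python
import numpy as np

def append_degree_features(X: np.ndarray, adjacencies: list) -> np.ndarray:
    """Append one degree column per relation's adjacency matrix."""
    degs = [A.sum(axis=1, keepdims=True) for A in adjacencies]
    return np.hstack([X] + degs)

X = np.zeros((3, 2))                              # toy bag-of-words features
A_apa = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # toy APA adjacency
X_aug = append_degree_features(X, [A_apa])
```

In the DBLP setup above, the list would contain the APA and APCPA adjacency matrices, adding two degree columns.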
+
+The SLAP dataset (Xiao et al., 2019) is a multiple-hub network in bioinformatics that contains node types such as chemical compound, gene, disease, pathway, etc. The goal is to predict one of 15 gene types. To obtain a single graph, we use the adjacency matrix of the gene-gene (GG) relation. Each gene has 3000 features that correspond to extracted gene ontology (GO) terms. As with DBLP, we compute the degrees under all types of relations and append them as additional node features.
+
+As a dataset with homogeneous node features, we consider OGB-ArXiv (Hu et al., 2020a). The node features are 128-dimensional vectors obtained by averaging the embeddings of the words in the title and abstract. Note that for this particular dataset we used a publicly available GAT implementation, which at the time held the top place on the leaderboard, as the backbone architecture for the GNN, Res-GNN, and BGNN models. A summary of statistics for all classification datasets is outlined in Table 5.
+
+# E COMPARISON OF GNN MODELS
+
+In this section, we show the exact RMSE values and time for all tested GNN models on all regression datasets. We consider several state-of-the-art GNN models that include GAT (Velicković et al., 2018), GCN (Kipf & Welling, 2017), AGNN (Thekumparampil et al., 2018), and APPNP (Klicpera et al., 2019).
+
+Table 6 demonstrates that for all considered backbones, BGNN and Res-GNN achieve a significant increase in performance compared to the vanilla GNN. Additionally, end-to-end training of BGNN typically achieves better results than the straightforward combination used in Res-GNN.
+
+Table 6: Summary of our results for different GNN architectures for node regression.
+
+ | | Method | House RMSE | House Time (s) | County RMSE | County Time (s) | VK RMSE | VK Time (s) | Wiki RMSE | Wiki Time (s) | Avazu RMSE | Avazu Time (s) |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | GAT | GNN | 0.54 ± 0.01 | 35 ± 2 | 1.45 ± 0.06 | 19 ± 6 | 7.22 ± 0.19 | 42 ± 4 | 45916 ± 4527 | 15 ± 1 | 0.113 ± 0.01 | 9 ± 2 |
+ | GAT | Res-GNN | 0.51 ± 0.01 | 36 ± 7 | 1.33 ± 0.08 | 7 ± 3 | 7.07 ± 0.20 | 41 ± 7 | 46747 ± 4639 | 31 ± 9 | 0.109 ± 0.01 | 7 ± 2 |
+ | GAT | BGNN | 0.5 ± 0.01 | 20 ± 4 | 1.26 ± 0.08 | 2 ± 0 | 6.95 ± 0.21 | 16 ± 0 | 49222 ± 3743 | 21 ± 7 | 0.109 ± 0.01 | 5 ± 1 |
+ | GCN | GNN | 0.63 ± 0.01 | 28 ± 0 | 1.48 ± 0.08 | 18 ± 7 | 7.25 ± 0.19 | 38 ± 0 | 44936 ± 4083 | 13 ± 3 | 0.114 ± 0.02 | 12 ± 6 |
+ | GCN | Res-GNN | 0.59 ± 0.01 | 25 ± 2 | 1.35 ± 0.09 | 11 ± 5 | 7.03 ± 0.20 | 52 ± 6 | 44876 ± 3777 | 21 ± 5 | 0.111 ± 0.02 | 9 ± 6 |
+ | GCN | BGNN | 0.54 ± 0.01 | 41 ± 15 | 1.33 ± 0.13 | 12 ± 8 | 7.12 ± 0.21 | 76 ± 6 | 47426 ± 4112 | 22 ± 11 | 0.107 ± 0.01 | 4 ± 1 |
+ | AGNN | GNN | 0.59 ± 0.01 | 38 ± 5 | 1.45 ± 0.08 | 28 ± 3 | 7.26 ± 0.20 | 48 ± 3 | 45982 ± 3058 | 19 ± 5 | 0.113 ± 0.02 | 14 ± 8 |
+ | AGNN | Res-GNN | 0.52 ± 0.01 | 33 ± 4 | 1.3 ± 0.07 | 16 ± 4 | 7.08 ± 0.20 | 51 ± 15 | 46010 ± 2355 | 24 ± 3 | 0.111 ± 0.02 | 7 ± 2 |
+ | AGNN | BGNN | 0.49 ± 0.01 | 34 ± 4 | 1.28 ± 0.08 | 3 ± 1 | 6.89 ± 0.21 | 25 ± 4 | 53080 ± 5117 | 47 ± 37 | 0.108 ± 0.02 | 5 ± 1 |
+ | APPNP | GNN | 0.69 ± 0.01 | 68 ± 1 | 1.5 ± 0.11 | 34 ± 10 | 13.23 ± 0.12 | 81 ± 3 | 53426 ± 4159 | 49 ± 26 | 0.113 ± 0.01 | 24 ± 15 |
+ | APPNP | Res-GNN | 0.67 ± 0.01 | 58 ± 12 | 1.41 ± 0.12 | 19 ± 10 | 13.06 ± 0.17 | 76 ± 11 | 53206 ± 4593 | 66 ± 27 | 0.110 ± 0.01 | 15 ± 10 |
+ | APPNP | BGNN | 0.59 ± 0.01 | 21 ± 7 | 1.33 ± 0.10 | 17 ± 6 | 12.36 ± 0.14 | 50 ± 6 | 54359 ± 4734 | 30 ± 13 | 0.108 ± 0.01 | 6 ± 1 |
+
+# F LOSS CONVERGENCE
+
+In Figure 5, we plot RMSE on the test set during training for the remaining datasets: County, Wiki, and Avazu. These results confirm that BGNN converges to its optimal value within the first ten iterations (for $k = 20$ ). Note that on the Wiki dataset, similarly to Figure 3, the convergence of Res-GNN is similar to that of GNN for the first 100 iterations, after which the loss of Res-GNN decreases faster than that of GNN.
+
+
+Figure 5: RMSE on the test set during training for the remaining node regression datasets: (a) County, (b) Wiki, (c) Avazu.
\ No newline at end of file
diff --git a/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/images.zip b/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..09b258e54c681b41f47f3900890e765f45b5f8d4
--- /dev/null
+++ b/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30bffa3e386f6f97d8a49259ec644d33fe1e20ccf920cfc7e51469983840ed60
+size 627870
diff --git a/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/layout.json b/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c2b1ae221f9b6b0cdb9ff23f147c94df326c4c03
--- /dev/null
+++ b/boostthenconvolvegradientboostingmeetsgraphneuralnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da7f61b7b82ab472cbccd3c479efb3fceb1e0af6cb467ffc7db3fcb2429d84e8
+size 522615
diff --git a/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/7cc65188-62da-4efe-afad-bf32d2b7e352_content_list.json b/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/7cc65188-62da-4efe-afad-bf32d2b7e352_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..83463589d54b68762a5967e7b778a7b96417bad4
--- /dev/null
+++ b/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/7cc65188-62da-4efe-afad-bf32d2b7e352_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85f92dc1a4aeb1e5e226f2106a49142fc95048454f934fe575670bd12c1e9a95
+size 111761
diff --git a/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/7cc65188-62da-4efe-afad-bf32d2b7e352_model.json b/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/7cc65188-62da-4efe-afad-bf32d2b7e352_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..40036ae9507a0feaceafeb099fa709663573bc8b
--- /dev/null
+++ b/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/7cc65188-62da-4efe-afad-bf32d2b7e352_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:922d5e087b6b1f79fa75891dec934e408fd110d8ccf17ea311ee262bb8d43721
+size 141984
diff --git a/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/7cc65188-62da-4efe-afad-bf32d2b7e352_origin.pdf b/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/7cc65188-62da-4efe-afad-bf32d2b7e352_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c3c013033411306ae8cca399e9d8c5983c119f72
--- /dev/null
+++ b/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/7cc65188-62da-4efe-afad-bf32d2b7e352_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab5f00a43120906b774f5022d6033f2d4db2d49365eca4c6d2a369bdfffc5748
+size 8803863
diff --git a/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/full.md b/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6da24aaa8f9e0437c9d1b9e3ec162eebb874b3a3
--- /dev/null
+++ b/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/full.md
@@ -0,0 +1,427 @@
+# BOWTIE NETWORKS: GENERATIVE MODELING FOR JOINT FEW-SHOT RECOGNITION AND NOVEL-VIEW SYNTHESIS
+
+Zhipeng Bao$^{1}$
+
+Yu-Xiong Wang$^{2}$
+
+Martial Hebert$^{1}$
+
+$^{1}$ Carnegie Mellon University
+
+$^{2}$ University of Illinois at Urbana-Champaign
+
+{zbao, hebert}@cs.cmu.edu
+
+yxw@illinois.edu
+
+# ABSTRACT
+
+We propose a novel task of joint few-shot recognition and novel-view synthesis: given only one or a few images of a novel object from arbitrary views with only category annotation, we aim to simultaneously learn an object classifier and generate images of that type of object from new viewpoints. While existing work copes with two or more tasks mainly by multi-task learning of shareable feature representations, we take a different perspective. We focus on the interaction and cooperation between a generative model and a discriminative model, in a way that enables knowledge to flow across tasks in complementary directions. To this end, we propose bowtie networks that jointly learn 3D geometric and semantic representations with a feedback loop. Experimental evaluation on challenging fine-grained recognition datasets demonstrates that our synthesized images are realistic from multiple viewpoints and significantly improve recognition performance when used for data augmentation, especially in the low-data regime. Code and pre-trained models are released at https://github.com/zpbao/bowtie_networks.
+
+# 1 INTRODUCTION
+
+Given a never-before-seen object (e.g., a gadwall in Figure 1), humans are able to generalize even from a single image of this object in different ways, including recognizing new object instances and imagining what the object would look like from different viewpoints. Achieving similar levels of generalization for machines is a fundamental problem in computer vision and has been actively explored in areas such as few-shot object recognition (Fei-Fei et al., 2006; Vinyals et al., 2016; Wang & Hebert, 2016; Finn et al., 2017; Snell et al., 2017) and novel-view synthesis (Park et al., 2017; Nguyen-Phuoc et al., 2018; Sitzmann et al., 2019). However, such exploration has largely been confined to each area separately, with specialized algorithms, rather than pursued jointly.
+
We argue that synthesizing images and recognizing them are inherently interconnected. Being able to simultaneously address both tasks with a single model is a crucial step toward human-level generalization. This requires learning a richer, shareable internal representation for more comprehensive object understanding than could be learned within individual tasks. Such "cross-task" knowledge becomes particularly critical in the low-data regime, where identifying the 3D geometric structure of input images facilitates recognizing their semantic categories, and vice versa.
+
+Inspired by this insight, here we propose a novel task of joint few-shot recognition and novel-view synthesis: given only one or few images of a novel object from arbitrary views with only category annotation, we aim to simultaneously learn an object classifier and generate images of that type of object from new viewpoints. This joint task is challenging, because of its (i) weak supervision, where we do not have access to any 3D supervision, and (ii) few-shot setting, where we need to effectively learn both 3D geometric and semantic representations from minimal data.
+
While existing work copes with two or more tasks mainly by multi-task learning or meta-learning of a shared feature representation (Yu et al., 2020; Zamir et al., 2018; Lake et al., 2015), we take a different perspective in this paper. Motivated by the nature of our problem, we focus on the interaction and cooperation between a generative model (for view synthesis) and a discriminative model (for recognition), in a way that allows knowledge to flow across tasks in complementary directions, so that the tasks help each other. For example, the synthesized images produced by
+
+
+Figure 1: Left: Given a single image of a novel visual concept (e.g., a gadwall), a person can generalize in various ways, including imagining what this gadwall would look like from different viewpoints (top) and recognizing new gadwall instances (bottom). Right: Inspired by this, we introduce a general feedback-based bowtie network that facilitates the interaction and cooperation between a generative module and a discriminative module, thus simultaneously addressing few-shot recognition and novel-view synthesis in the low-data regime.
+
+the generative model provide viewpoint variations and could be used as additional training data to build a better recognition model; meanwhile, the recognition model ensures the preservation of the desired category information and deals with partial occlusions during the synthesis.
+
+To this end, we propose a feedback-based bowtie network (FBNet), as illustrated in Figure 1. The network consists of a view synthesis module and a recognition module, which are linked through feedback connections in a bowtie fashion. This is a general architecture that can be used on top of any view synthesis model and any recognition model. The view synthesis module explicitly learns a 3D geometric representation from 2D images, which is transformed to target viewpoints, projected to 2D features, and rendered to generate images. The recognition module then leverages these synthesized images from different views together with the original real images to learn a semantic feature representation and produce corresponding classifiers, leading to the feedback from the output of the view synthesis module to the input of the recognition module. The semantic features of real images extracted from the recognition module are further fed into the view synthesis module as conditional inputs, leading to the feedback from the output of the recognition module to the input of the view synthesis module.
+
+One potential difficulty, when combining the view synthesis and the recognition modules, lies in the mismatch in their level of image resolutions. Deep recognition models can benefit from high-resolution images, and the recognition performance greatly improves with increased resolution (Wang et al., 2016; Cai et al., 2019; He et al., 2016). By contrast, it is still challenging for modern generative models to synthesize very high-resolution images (Regmi & Borji, 2018; Nguyen-Phuoc et al., 2019). To address this challenge, while operating on a resolution consistent with state-of-the-art view synthesis models (Nguyen-Phuoc et al., 2019), we further introduce resolution distillation to leverage additional knowledge in a recognition model that is learned from higher-resolution images.
+
Our contributions are threefold. (1) We introduce a new problem of simultaneous few-shot recognition and novel-view synthesis, and address it from a novel perspective of cooperation between generative and discriminative modeling. (2) We propose feedback-based bowtie networks that jointly learn 3D geometric and semantic representations with feedback in the loop. We further address the mismatch issue between different modules by leveraging resolution distillation. (3) Our approach significantly improves both view synthesis and recognition performance, especially in the low-data regime, by enabling direct manipulation of view, shape, appearance, and semantics in generative image modeling.
+
+# 2 RELATED WORK
+
Few-Shot Recognition is a classic problem in computer vision (Thrun, 1996; Fei-Fei et al., 2006). Many algorithms have been proposed to address this problem (Vinyals et al., 2016; Wang & Hebert, 2016; Finn et al., 2017; Snell et al., 2017), including the recent efforts on leveraging generative models (Li et al., 2015; Wang et al., 2018; Schwartz et al., 2018; Zhang et al., 2018; Tsutsui et al., 2019; Chen et al., 2019b; Li et al., 2019; Zhang et al., 2019; Sun et al., 2019). A hallucinator is introduced to generate additional examples in a pre-trained feature space as data augmentation to help with low-shot classification (Wang et al., 2018). MetaGAN improves few-shot recognition by producing fake images as a new category (Zhang et al., 2018). However, these methods either do not synthesize images directly or use a pre-trained generative model that is not optimized towards the downstream task. By contrast, our approach performs joint training of recognition and view synthesis, and enables the two tasks to cooperate through feedback connections. In addition, while there has been work considering both classification and exemplar generation in the few-shot regime, such investigation focuses on simple domains like handwritten characters (Lake et al., 2015), whereas we address more realistic scenarios with natural images. Note that our effort is largely orthogonal to
+
+designing the best few-shot recognition or novel-view synthesis method; instead, we show that the joint model outperforms the original methods addressing each task in isolation.
+
+Novel-View Synthesis aims to generate a target image with an arbitrary camera pose from one given source image (Tucker & Snavely, 2020). It is also known as "multiview synthesis." For this task, some approaches are able to synthesize lifelike images (Park et al., 2017; Yin & Shi, 2018; Nguyen-Phuoc et al., 2018; Sitzmann et al., 2019; Iqbal et al., 2020; Yoon et al., 2020; Wiles et al., 2020; Wortsman et al., 2020). However, they heavily rely on pose supervision or 3D annotation, which is not applicable in our case. An alternative way is to learn a view synthesis model in an unsupervised manner. Pix2Shape learns an implicit 3D scene representation by generating a 2.5D surfel based reconstruction (Rajeswar et al., 2020). HoloGAN proposes an unsupervised approach to learn 3D feature representations and render 2D images accordingly (Nguyen-Phuoc et al., 2019). Nguyen-Phuoc et al. (2020) learn scene representations from 2D unlabeled images through foreground-background fragmenting. Different from them, not only can our view synthesis module learn from weakly labeled images, but it also enables conditional synthesis to facilitate recognition.
+
+Feedback-Based Architectures, where the full or partial output of a system is routed back into the input as part of an iterative cause-and-effect process (Ford, 1999), have been recently introduced into neural networks (Belagiannis & Zisserman, 2017; Zamir et al., 2017; Yang et al., 2018). Compared with prior work, our FBNet contains two complete sub-networks, and the output of each module is fed into the other as one of the inputs. Therefore, FBNet is essentially a bi-directional feedback-based framework which optimizes the two sub-networks jointly.
+
Multi-task Learning focuses on optimizing a collection of tasks jointly (Misra et al., 2016; Ruder, 2017; Kendall et al., 2018; Pal & Balasubramanian, 2019; Xiao & Marlet, 2020). Task relationships have also been studied (Zamir et al., 2018; Standley et al., 2020). Some recent work investigates the connection between recognition and view synthesis, and makes some attempt to combine them together (Sun et al., 2018; Wang et al., 2018; Xian et al., 2019; Santurkar et al., 2019; Xiong et al., 2020; Michalkiewicz et al., 2020). For example, Xiong et al. (2020) use multiview images to tackle fine-grained recognition tasks. However, their method needs strong pose supervision to train the view synthesis model, while we do not. Also, these approaches do not treat the two tasks as equally important, instead using one task as an auxiliary to facilitate the other. By contrast, our approach targets joint learning of the two tasks and improves the performance of both. Importantly, we focus on learning a shared generative model, rather than a shared feature representation as is normally the case in multi-task learning.
+
+Joint Data Augmentation and Task Model Learning leverage generative networks to improve other visual tasks (Peng et al., 2018; Hu et al., 2019; Luo et al., 2020; Zhang et al., 2020). A generative network and a discriminative pose estimation network are trained jointly through adversarial loss in Peng et al. (2018), where the generative network performs data augmentation to facilitate the downstream pose estimation task. Luo et al. (2020) design a controllable data augmentation method for robust text recognition, which is achieved by tracking and refining the moving state of the control points. Zhang et al. (2020) study and make use of the relationship among facial expression recognition, face alignment, and face synthesis to improve training. Mustikovela et al. (2020) leverage a generative model to boost viewpoint estimation. The main difference is that we focus on the joint task of synthesis and recognition and achieve bi-directional feedback, while existing work only considers optimizing the target discriminative task using adversarial training or with a feedforward network.
+
+# 3 OUR APPROACH
+
+# 3.1 JOINT TASK OF FEW-SHOT RECOGNITION AND NOVEL-VIEW SYNTHESIS
+
Problem Formulation: Given a dataset $\mathcal{D} = \{(x_i, y_i)\}$ , where $x_i \in \mathcal{X}$ is an image of an object and $y_i \in \mathcal{C}$ is the corresponding category label ( $\mathcal{X}$ and $\mathcal{C}$ are the image space and label space, respectively), we address the following two tasks simultaneously. (i) Object recognition: learning a discriminative model $R: \mathcal{X} \to \mathcal{C}$ that takes as input an image $x_i$ and predicts its category label. (ii) Novel-view synthesis: learning a generative model $G: \mathcal{X} \times \Theta \to \mathcal{X}$ that, given an image $x_i$ of category $y_i$ and an arbitrary 3D viewpoint $\theta_j \in \Theta$ , synthesizes an image in category $y_i$ viewed from $\theta_j$ . Note that we are interested in category-level consistency: $G$ should be able to generate images of not only the instance $x_i$ but also other objects of category $y_i$ from different viewpoints. This joint-task scenario requires us to improve the performance of both 2D and 3D tasks under weak supervision without any ground-truth 3D annotations. Hence, we need to exploit the cooperation between them.
+
+
+Figure 2: Architecture of our feedback-based bowtie network. The whole network consists of a view synthesis module and a recognition module, which are linked through feedback connections in a bowtie fashion.
+
Few-Shot Setting: The few-shot dataset consists of one or only a few images per category, which makes our problem even more challenging. To this end, following recent work on knowledge transfer and few-shot learning (Hariharan & Girshick, 2017; Chen et al., 2019a), we leverage a set of "base" classes $\mathcal{C}_{\mathrm{base}}$ with a large-sample dataset $\mathcal{D}_{\mathrm{base}} = \{(x_i,y_i): y_i\in \mathcal{C}_{\mathrm{base}}\}$ to train our initial model. We then fine-tune the pre-trained model on our target "novel" classes $\mathcal{C}_{\mathrm{novel}}$ ( $\mathcal{C}_{\mathrm{base}}\cap \mathcal{C}_{\mathrm{novel}} = \emptyset$ ) with its small-sample dataset $\mathcal{D}_{\mathrm{novel}} = \{(x_i,y_i): y_i\in \mathcal{C}_{\mathrm{novel}}\}$ (e.g., a $K$ -shot setting corresponds to $K$ images per class).
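The base/novel protocol above can be sketched in a few lines (a minimal numpy illustration; the helper name `kshot_indices` is ours, not from the paper):

```python
import numpy as np

def kshot_indices(labels, novel_classes, k, rng):
    """Pick K example indices per novel class to form the small-sample D_novel."""
    idx = []
    for c in novel_classes:
        candidates = np.flatnonzero(labels == c)
        idx.extend(rng.choice(candidates, size=k, replace=False))
    return np.asarray(idx)

labels = np.repeat(np.arange(5), 10)        # toy dataset: 5 classes, 10 images each
novel = [3, 4]                              # C_base = {0, 1, 2}, C_novel = {3, 4}
idx = kshot_indices(labels, novel, k=1, rng=np.random.default_rng(0))
print(labels[idx])                          # one sampled label per novel class: [3 4]
```

The base classes would be used for the initial training stage, and the sampled indices define the K-shot fine-tuning set.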
+
+# 3.2 FEEDBACK-BASED BOWTIE NETWORKS
+
+To address the joint task, we are interested in learning a generative model that can synthesize realistic images of different viewpoints, which are also useful for building a strong recognition model. We propose a feedback-based bowtie network (FBNet) for this purpose. This model consists of a view synthesis module and a recognition module, trained in a joint, end-to-end fashion. Our key insight is to explicitly introduce feedback connections between the two modules, so that they cooperate with each other, thus enabling the entire model to simultaneously learn 3D geometric and semantic representations. This general architecture can be used on top of any view synthesis model and any recognition model. Here we focus on a state-of-the-art view synthesis model - HoloGAN (Nguyen-Phuoc et al., 2019), and a widely adopted few-shot recognition model - prototypical network (Snell et al., 2017), as shown in Figure 2.
+
+# 3.2.1 VIEW SYNTHESIS MODULE
+
+The view synthesis module $V$ is shown in the blue shaded region in Figure 2. It is adapted from HoloGAN (Nguyen-Phuoc et al., 2019), a state-of-the-art model for unsupervised view synthesis. This module consists of a generator $G$ which first generates a 3D feature representation from a latent constant tensor (initial cube) through 3D convolutions. The feature representation is then transformed to a certain pose and projected to 2D with a projector. The final color image is then computed through 2D convolutions. This module takes two inputs: a latent vector input $z$ and a view input $\theta$ . $z$ characterizes the style of the generated image through adaptive instance normalization (AdaIN) (Huang & Belongie, 2017) units. $\theta = [\theta^x, \theta^y, \theta^z]$ guides the transformation of the 3D feature representation. This module also contains a discriminator $D$ to detect whether an image is real or fake (not shown in Figure 2). We use the standard GAN loss from DC-GAN (Radford et al., 2016), $\mathcal{L}_{\mathrm{GAN}}(G, D)$ . We make the following important modifications to make the architecture applicable to our joint task.
+
+Latent Vector Formulation: To allow the synthesis module to get feedback from the recognition module (details are shown in Section 3.2.3), we first change HoloGAN from unconditional to conditional. To this end, we model the latent input $z$ as: $z_{i} = f_{i} \oplus n_{i}$ , where $f_{i}$ is the conditional feature input derived from image $x_{i}$ and $n_{i}$ is a noise vector sampled from Gaussian distribution. $\oplus$ is the combination strategy (e.g., concatenation). By doing so, the synthesis module leverages
+
+additional semantic information, and thus maintains the category-level consistency with a target image and improves the diversity of the generated images.
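A minimal sketch of this conditional latent construction, assuming concatenation as the combination strategy $\oplus$ and illustrative dimensions (128-d features, 64-d noise are our choices, not the paper's exact configuration):

```python
import numpy as np

def make_latent(f_i, noise_dim, rng):
    """z_i = f_i (+) n_i: a conditional feature from the recognition module,
    combined (here: concatenated) with a Gaussian noise vector."""
    n_i = rng.standard_normal(noise_dim)
    return np.concatenate([f_i, n_i])

rng = np.random.default_rng(0)
f_i = np.ones(128)                       # stub semantic feature of image x_i
z_i = make_latent(f_i, noise_dim=64, rng=rng)
print(z_i.shape)                         # (192,)
```

The feature part keeps the latent anchored to the target category, while the noise part supplies the diversity an unconditional GAN latent would normally provide.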
+
Identity Regularizer: Inspired by Chen et al. (2016), we introduce an identity regularizer to ensure that the synthesis module simultaneously satisfies two critical properties: (i) the identity of the generated image remains unchanged when we only change the view input $\theta$ ; (ii) the orientation of the generated image is preserved when we only change the latent input $z$ , and this orientation should be consistent with the view input $\theta$ . Specifically, we leverage an encoding network $H$ to predict the reconstructed latent vector $z'$ and view input $\theta'$ : $H(G(z, \theta)) = [z', \theta']$ , where $G(z, \theta)$ is the generated image. Then we minimize the difference between the real and reconstructed inputs as
+
+$$
\mathcal{L}_{\mathrm{identity}}(G, H) = \mathbb{E}_{z} \|z - z'\|^{2} + \mathbb{E}_{\theta} \|\theta - \theta'\|^{2}. \tag{1}
+$$
+
+Here $H$ shares the majority of the convolution layers of the discriminator $D$ , but uses an additional fully-connected layer. Section A explains the detailed architecture of the view synthesis module.
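Eq. 1 reduces to two batch-averaged squared-reconstruction terms. A sketch (the encoding network $H$ and generator $G$ are stubbed out; only the loss itself is shown):

```python
import numpy as np

def identity_loss(z, theta, z_rec, theta_rec):
    """L_identity(G, H) = E||z - z'||^2 + E||theta - theta'||^2 (Eq. 1),
    where (z_rec, theta_rec) = H(G(z, theta)) would come from the encoder."""
    return (np.mean(np.sum((z - z_rec) ** 2, axis=-1))
            + np.mean(np.sum((theta - theta_rec) ** 2, axis=-1)))

z = np.ones((4, 192))        # batch of latent vectors
theta = np.zeros((4, 3))     # batch of view inputs [theta_x, theta_y, theta_z]
print(identity_loss(z, theta, z, theta))   # 0.0 for perfect reconstruction
```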
+
+# 3.2.2 RECOGNITION MODULE
+
+The recognition module $R$ (green shaded region in Fig. 2) consists of a feature extraction network $F$ which transforms images to latent features, and a prototypical classification network $P$ (Snell et al., 2017) which performs the final classification. Below we explain the design of these two components, focusing on how to address the technical challenges faced by joint training with view synthesis.
+
+Feature Extraction with Resolution Distillation: We use a ResNet (He et al., 2016) as our feature extraction network $F$ to transform images into latent features for the recognition module. One of the main obstacles to combining $F$ with the synthesis module is that state-of-the-art synthesis models and recognition models operate on different resolutions. Concretely, to the best of our knowledge, current approaches to unsupervised novel-view synthesis still cannot generate satisfactory high-resolution images (e.g., $224 \times 224$ ) (Nguyen-Phuoc et al., 2019). By contrast, the performance of current well-performing recognition models substantially degrades with low-resolution images (Wang et al., 2016; Cai et al., 2019). To reconcile the resolution incompatibility, we introduce a simple distillation technique inspired by the general concept of knowledge distillation (Hinton et al., 2014). Specifically, we operate on the resolution of the synthesis module (e.g., $64 \times 64$ ). But we benefit from an additional auxiliary feature extraction network $F_{\mathrm{highR}}$ that is trained on high-resolution images (e.g., $224 \times 224$ ). We first pre-train $F_{\mathrm{highR}}$ following the standard practice with a cross-entropy softmax classifier (Liu et al., 2016). We then train our feature extraction network $F_{\mathrm{lowR}}$ (the one used in the recognition module), under the guidance of $F_{\mathrm{highR}}$ through matching their features:
+
+$$
\mathcal{L}_{\mathrm{feature}}(F_{\mathrm{lowR}}) = \mathbb{E}_{x} \|F_{\mathrm{highR}}(x) - F_{\mathrm{lowR}}(x)\|^{2}, \tag{2}
+$$
+
+where $x$ is a training image. With the help of resolution distillation, the feature extraction network re-captures information in high-resolution images but potentially missed in low-resolution images.
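The distillation objective in Eq. 2 is a plain feature-matching loss against the frozen high-resolution teacher; a sketch with stub feature batches (the 512-d feature size is an illustrative assumption):

```python
import numpy as np

def feature_distill_loss(feat_high, feat_low):
    """L_feature(F_lowR) = E||F_highR(x) - F_lowR(x)||^2 (Eq. 2):
    the low-resolution student is trained to match the teacher's features."""
    return np.mean(np.sum((feat_high - feat_low) ** 2, axis=-1))

teacher = np.ones((8, 512))     # F_highR(x): precomputed and kept frozen
student = np.zeros((8, 512))    # F_lowR(x): trained to close this gap
print(feature_distill_loss(teacher, student))   # 512.0
```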
+
Prototypical Classification Network: We use the prototypical network $P$ (Snell et al., 2017) as our classifier. The network assigns class probabilities $\hat{p}$ based on the distance of the input feature vector from the class centers $\mu$ , where each $\mu$ is computed from the support images in the latent feature space:
+
+$$
\hat{p}_{c}(x) = \frac{e^{-d\left(P\left(F_{\mathrm{lowR}}(x)\right), \mu_{c}\right)}}{\sum_{j} e^{-d\left(P\left(F_{\mathrm{lowR}}(x)\right), \mu_{j}\right)}}, \quad \mu_{c} = \frac{\sum_{(x_{i}, y_{i}) \in S} P\left(F_{\mathrm{lowR}}(x_{i})\right) \mathbf{I}\left[y_{i} = c\right]}{\sum_{(x_{i}, y_{i}) \in S} \mathbf{I}\left[y_{i} = c\right]}, \tag{3}
+$$
+
where $x$ is a real query image, $\hat{p}_c$ is the probability of category $c$ , and $d$ is a distance metric (e.g., Euclidean distance). $S$ is the support dataset. $P$ operates on top of the feature extraction network $F$ , and consists of 3 fully-connected layers as additional feature embedding (the classifier is non-parametric). Another benefit of using the prototypical network is that it enables the recognition module to explicitly leverage the generated images as a form of data augmentation, i.e., $S$ contains both real and generated images to compute the class mean. Note, though, that the module parameters are updated based on the loss calculated on the real query images, which is a cross-entropy loss $\mathcal{L}_{\mathrm{rec}}(R)$ between their predictions $\hat{p}$ and ground-truth labels.
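Eq. 3 in code form: prototypes are class means over the (possibly augmented) support features, and class probabilities are a softmax over negative distances (a minimal numpy sketch with squared Euclidean distance; the embedding networks $P$ and $F_{\mathrm{lowR}}$ are assumed already applied):

```python
import numpy as np

def prototypes(feats, labels, num_classes):
    """mu_c: mean support feature per class; the support set may mix
    real and synthesized images."""
    return np.stack([feats[labels == c].mean(axis=0) for c in range(num_classes)])

def proto_probs(query_feat, protos):
    """p_hat_c(x): softmax over negative squared distances to each prototype."""
    logits = -np.sum((query_feat[None, :] - protos) ** 2, axis=-1)
    logits -= logits.max()                  # numerical stability
    e = np.exp(logits)
    return e / e.sum()

feats = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0]])   # toy support features
labels = np.array([0, 0, 1])
p = proto_probs(np.array([0.1, 0.1]), prototypes(feats, labels, 2))
print(p.argmax())                           # 0: the query is nearest class 0
```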
+
+# 3.2.3 FEEDBACK-BASED BOWTIE MODEL
+
+As shown in Figure 2, we leverage a bowtie architecture for our full model, where the output of each module is fed into the other module as one of its inputs. Through joint training, such connections work as explicit feedback to facilitate the communication and cooperation between different modules.
+
+Feedback Connections: We introduce two complementary feedback connections between the view synthesis module and the recognition module: (1) recognition output $\rightarrow$ synthesis input (green arrow in Figure 2), where the features of the real images extracted from the recognition module are fed into the synthesis module as conditional inputs to generate images from different views; (2) synthesis output $\rightarrow$ recognition input (blue arrow in Figure 2), where the generated images are used to produce an augmented set to train the recognition module.
+
+Categorical Loss for Feedback: The view synthesis module needs to capture the categorical semantics in order to further encourage the generated images to benefit the recognition. Therefore, we introduce a categorical loss to update the synthesis module with the prediction results of the generated images:
+
+$$
\mathcal{L}_{\mathrm{cat}}(G) = \mathbb{E}_{y_{i}} \left\| -\log\left(R\left(G\left(z_{i}, \theta_{i}\right)\right)\right) \right\|, \tag{4}
+$$
+
+where $y_{i}$ is the category label for the generated image $G(z_{i},\theta_{i})$ . This loss also implicitly increases the diversity and quality of the generated images.
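In practice Eq. 4 amounts to a cross-entropy term on the recognition module's predicted class probabilities for the generated images; a sketch (the index-based gather and the epsilon for log-stability are our additions):

```python
import numpy as np

def categorical_loss(pred_probs, labels):
    """L_cat(G): mean negative log-probability that the recognition module
    assigns to the intended category y_i of each generated image (Eq. 4)."""
    picked = pred_probs[np.arange(len(labels)), labels]
    return -np.mean(np.log(picked + 1e-12))

# Perfectly classified generated images give a (near-)zero loss.
probs = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = categorical_loss(probs, np.array([0, 1]))
```

The gradient of this term flows back into $G$, pushing the synthesis module toward images the classifier recognizes as the right category.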
+
+Final Loss Function: The final loss function is:
+
+$$
\mathcal{L}_{\mathrm{Total}} = \mathcal{L}_{\mathrm{GAN}} + \mathcal{L}_{\mathrm{rec}} + \mathcal{L}_{\mathrm{feature}} + \lambda_{\mathrm{id}} \mathcal{L}_{\mathrm{identity}} + \lambda_{\mathrm{cat}} \mathcal{L}_{\mathrm{cat}}, \tag{5}
+$$
+
+where $\lambda_{\mathrm{id}}$ and $\lambda_{\mathrm{cat}}$ are trade-off hyper-parameters.
+
Training Procedure: We first pre-train $F_{\mathrm{highR}}$ on the high-resolution dataset and save the computed features. These features are used to help train the feature extraction network $F_{\mathrm{lowR}}$ through $\mathcal{L}_{\mathrm{feature}}$ . The entire model is then trained on $\mathcal{C}_{\mathrm{base}}$ and fine-tuned on $\mathcal{C}_{\mathrm{novel}}$ ; the training on the two sets is similar. During each iteration, we randomly sample a few images per class as a support set and one image per class as a query set. The images in the support set, together with their features computed by the full recognition module, are fed into the view synthesis module to generate multiple images from different viewpoints. These synthesized images augment the original support set when computing the prototypes. The query images are then used to update the parameters of the recognition module through $\mathcal{L}_{\mathrm{rec}}$ ; the view synthesis module is updated through $\mathcal{L}_{\mathrm{GAN}}$ , $\mathcal{L}_{\mathrm{identity}}$ , and $\mathcal{L}_{\mathrm{cat}}$ . The entire model is trained in an end-to-end fashion. More details are in Section B.
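One training iteration can be sketched end to end with stub networks (all function names, toy dimensions, and the Gaussian toy data are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
C, D, V = 5, 16, 3          # classes per episode, feature dim, synthesized views

def extract(x):
    # Stub for the recognition features P(F_lowR(x)).
    return x

def synthesize(feat, n_views):
    # Stub for the view synthesis module: conditioned on a recognition
    # feature (feedback 1), it returns n_views jittered variants.
    return feat[None, :] + 0.01 * rng.standard_normal((n_views, feat.shape[0]))

def one_iteration(class_means):
    support = class_means + 0.1 * rng.standard_normal((C, D))   # 1-shot support
    query = class_means + 0.1 * rng.standard_normal((C, D))     # one query per class

    protos = []
    for c in range(C):
        real = extract(support[c])[None, :]
        synth = extract(synthesize(real[0], V))   # feedback 2: augment support
        protos.append(np.concatenate([real, synth]).mean(axis=0))
    protos = np.stack(protos)

    # L_rec: cross-entropy of query predictions (softmax over -distances).
    d = ((extract(query)[:, None, :] - protos[None]) ** 2).sum(-1)
    logp = -d - np.log(np.exp(-d).sum(axis=1, keepdims=True))
    return -logp[np.arange(C), np.arange(C)].mean()

loss = one_iteration(5.0 * np.eye(C, D))   # well-separated toy classes: loss near 0
```

In the real model this query loss updates the recognition module, while the synthesis module is updated through its own GAN, identity, and categorical losses in the same iteration.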
+
+# 4 EXPERIMENTAL EVALUATION
+
Datasets: We focus on two datasets here: the Caltech-UCSD Birds (CUB) dataset which contains 200 classes with 11,788 images (Welinder et al., 2010), and the CompCars dataset which contains 360 classes with 25,519 images (Yang et al., 2015). Please refer to Section C for more details of the datasets. These are challenging fine-grained recognition datasets for our joint task. The images are resized to $64 \times 64$ . We randomly split the entire dataset into $75\%$ as the training set and $25\%$ as the test set. For CUB, 150 classes are selected as base classes and 50 as novel classes. For CompCars, 240 classes are selected as base classes and 120 as novel classes. Note that we focus on simultaneous recognition and synthesis over all base or novel classes, which is significantly more challenging than typical 5-way classification over sampled classes in most few-shot classification work (Snell et al., 2017; Chen et al., 2019a). We also include evaluation on additional datasets in Section D.
+
+Implementation Details: We set $\lambda_{\mathrm{id}} = 10$ and $\lambda_{\mathrm{cat}} = 1$ via cross-validation. We use ResNet-18 (He et al., 2016) as the feature extraction network, unless otherwise specified. To match the resolution of our data, we change the kernel size of the first convolution layer of ResNet from 7 to 5. The training process requires hundreds of examples at each iteration, which may not fit in the memory of our device. Hence, inspired by Wang et al. (2018), we make a trade-off to first train the feature extraction network through resolution distillation. We then freeze its parameters and train the other parts of our model. Section C includes more implementation details.
+
+Compared Methods: Our feedback connections enable the two modules to cooperate through joint training. Therefore, to evaluate the effectiveness of the feedback connections, we focus on the following comparisons. (1) For the novel-view image synthesis task, we compare our approach FBNet with the state-of-the-art method HoloGAN (Nguyen-Phuoc et al., 2019). We also consider a
+
| Dataset | Model | Base | Novel $K=1$ | Novel $K=5$ |
| --- | --- | --- | --- | --- |
| CUB | FBNet-rec | 57.91 | 47.53 ± 0.14 | 71.26 ± 0.26 |
| CUB | FBNet-aug | 58.03 | 47.20 ± 0.19 | 71.51 ± 0.33 |
| CUB | FBNet | 59.43 | 48.39 ± 0.19 | 72.76 ± 0.24 |
| CompCars | FBNet-rec | 46.05 | 20.83 ± 0.03 | 50.52 ± 0.11 |
| CompCars | FBNet-aug | 47.41 | 21.59 ± 0.05 | 51.07 ± 0.14 |
| CompCars | FBNet | 49.63 | 23.28 ± 0.05 | 53.12 ± 0.09 |
+
+Table 1: Top-1 (%) recognition accuracy on the CUB and CompCars datasets. For base classes: 150-way classification on CUB and 240-way classification on CompCars; for $K$ -shot novel classes: 50-way classification on CUB and 120-way classification on CompCars. Our FBNet consistently achieves the best performance for both base and novel classes, and joint training significantly outperforms training each module individually.
+
| Dataset | Model | IS Base (↑) | IS Novel $K=1$ (↑) | IS Novel $K=5$ (↑) | FID Base (↓) | FID Novel $K=1$ (↓) | FID Novel $K=5$ (↓) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CUB | Real Images | 4.55 ± 0.30 | 3.53 ± 0.22 | 3.53 ± 0.22 | 0 | 0 | 0 |
| CUB | HoloGAN (Nguyen-Phuoc et al., 2019) | 3.55 ± 0.09 | 2.44 ± 0.07 | 2.58 ± 0.08 | 79.01 | 106.56 | 94.73 |
| CUB | FBNet-view | 3.60 ± 0.12 | 2.53 ± 0.03 | 2.64 ± 0.05 | 75.38 | 107.36 | 103.25 |
| CUB | FBNet | 3.69 ± 0.17 | 2.79 ± 0.06 | 2.83 ± 0.12 | 70.86 | 104.04 | 92.97 |
| CompCars | Real Images | 2.96 ± 0.12 | 2.80 ± 0.13 | 2.80 ± 0.13 | 0 | 0 | 0 |
| CompCars | HoloGAN (Nguyen-Phuoc et al., 2019) | 1.85 ± 0.08 | 1.41 ± 0.04 | 1.65 ± 0.07 | 51.49 | 93.48 | 83.17 |
| CompCars | FBNet-view | 2.03 ± 0.09 | 1.44 ± 0.05 | 1.71 ± 0.07 | 49.94 | 92.01 | 83.58 |
| CompCars | FBNet | 2.33 ± 0.14 | 1.89 ± 0.07 | 1.91 ± 0.10 | 44.70 | 89.39 | 78.38 |
+
+Table 2: Novel-view synthesis results under the FID and IS metrics. $\uparrow$ indicates that higher is better, and $\downarrow$ indicates that lower is better. As a reference, FID and IS of Real Images represent the best results we could expect. FBNet consistently outperforms the baselines, achieving $18\%$ improvements for FID and $19\%$ for IS.
+
+variant of our approach FBNet-view, which has the same architecture as our novel-view synthesis module, but takes the constant features extracted by a pre-trained ResNet-18 as latent input. FBNet-view can be also viewed as a conditional version of HoloGAN. (2) For the few-shot recognition task, we compare our full model FBNet with its two variants: FBNet-rec inherits the architecture of our recognition module, which is essentially a prototypical network (Snell et al., 2017); FBNet-aug uses the synthesized images from individually trained FBNet-view as data augmentation for the recognition module. Note that, while conducting comparisons with other few-shot recognition (e.g., Chen et al. (2019a); Finn et al. (2017)) or view synthesis models (e.g., Yoon et al. (2020); Wiles et al. (2020)) is interesting, it is not the main focus of this paper. We aim to validate that the feedback-based bowtie architecture outperforms the single-task models upon which it builds, rather than designing the best few-shot recognition or novel-view synthesis method. In Section F, we show that our framework is general and can be used on top of other single-task models and improve their performance. All the models are trained following the same few-shot setting described in Section 3.1.
+
+View Synthesis Facilitates Recognition: Table 1 presents the top-1 recognition accuracy for the base classes and the novel classes, respectively. We focus on the challenging 1, 5-shot settings, where the number of training examples per novel class $K$ is 1 or 5. For the novel classes, we run five trials for each setting of $K$ , and report the average accuracy and standard deviation for all the approaches. Table 1 shows that our FBNet consistently achieves the best few-shot recognition performance on the two datasets. Moreover, the significant improvement of FBNet over FBNet-aug (where the recognition model uses additional data from the conditional view synthesis model, but they are trained separately) indicates that the feedback-based joint training is the key to improve the recognition performance.
+
+Recognition Facilitates View Synthesis: We investigate the novel-view synthesis results under two standard metrics. The FID score computes the Fréchet distance between two Gaussians fitted to feature representations of the source (real) images and the target (synthesized) images (Dowson & Landau, 1982). The Inception Score (IS) uses an Inception network pre-trained on ImageNet (Deng et al., 2009) to predict the label of the generated image and calculate the entropy based on the predictions. IS seeks to capture both the quality and diversity of a collection of generated images (Salimans et al., 2016). A higher IS or a lower FID value indicates better realism of the generated images. A larger variance of IS indicates more diversity of the generated images. We generate images of random views in one-to-one correspondence with the training examples for all the models, and compute the IS and FID values based on these images. The results are reported in Table 2. As a reference, we also show the results of real images under the two metrics, which are the best results we could expect from synthesized images. Our FBNet consistently achieves the best performance under both metrics. Compared with HoloGAN, our method brings up to $18\%$ improvement under FID and $19\%$ under IS. Again, the significant performance gap between FBNet and FBNet-view shows that the feedback-based joint training substantially improves the synthesis performance.
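For intuition, the Fréchet distance between two Gaussians fitted to feature statistics has a closed form, $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2})$; a sketch for the diagonal-covariance special case (our simplification, since the full-covariance version needs a matrix square root):

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians, specialized to diagonal
    covariances so the matrix square root becomes elementwise sqrt."""
    return (np.sum((mu1 - mu2) ** 2)
            + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))

mu = np.array([0.0, 1.0]); var = np.array([1.0, 2.0])
print(fid_diagonal(mu, var, mu, var))     # 0.0: identical feature statistics
```

In the actual metric the means and (full) covariances are estimated from Inception-network features of the real and generated image sets.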
+
+IS and FID cannot effectively evaluate whether the generated images maintain the category-level identity and capture different viewpoints. Therefore, Figure 3 visualizes the synthesized multiview images. Note that, in our problem setting of limited training data under weak supervision, we could
+
+
+Figure 3: Synthesized images from multiple viewpoints. Images in the same row/column are from the same viewpoint/object. Our approach captures the shape and attributes well even in the extremely low-data regime.
+
+| Setting | Model | ResNet-10 | ResNet-18 | ResNet-34 | ResNet-50 |
+| --- | --- | --- | --- | --- | --- |
+| K=1 | FBNet-view | 46.28 | 47.53 | 46.79 | 45.68 |
+| K=1 | FBNet | 48.85 | 48.39 | 47.65 | 47.03 |
+| K=5 | FBNet-view | 71.66 | 71.26 | 70.69 | 70.00 |
+| K=5 | FBNet | 72.49 | 72.76 | 71.28 | 70.95 |
+
+Table 3: Few-shot recognition accuracy consistently improves with different feature extraction networks.
+
+| Model | Acc (K=1) | FID ↓ (K=1) | IS ↑ (K=1) | Acc (K=5) | FID ↓ (K=5) | IS ↑ (K=5) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Multitask-Feat (Ruder, 2017) | 34.71 | 110.03 | 2.19 ± 0.03 | 52.54 | 99.61 | 2.44 ± 0.04 |
+| FBNet w/o Dist | 22.47 | 108.73 | 2.31 ± 0.05 | 34.15 | 97.64 | 2.42 ± 0.07 |
+| FBNet w/o Proto | 44.62 | 105.81 | 2.61 ± 0.07 | 70.04 | 95.15 | 2.76 ± 0.10 |
+| FBNet | 48.39 | 104.04 | 2.79 ± 0.06 | 72.76 | 92.97 | 2.83 ± 0.12 |
+
+Table 4: Ablation studies on CUB regarding (i) learning a shared feature representation through standard multi-task learning, (ii) FBNet without resolution distillation, and (iii) FBNet using a regular classification network without prototypical classification. Our full model achieves the best performance.
+
+not expect that the quality of the synthesized images would match those generated based on large amounts of training data, e.g. Brock et al. (2019). This demonstrates the general difficulty of image generation in the few-shot setting, which is worth further exploration in the community.
+
+Notably, even in this challenging setting, our synthesized images are of significantly higher visual quality than the state-of-the-art baselines. Specifically, (1) our FBNet is able to perform controllable conditional generation, while HoloGAN cannot. Such conditional generation enables FBNet to better capture the shape information of different car models on CompCars, which is crucial to the recognition task. On CUB, FBNet captures both the shape and attributes well even in the extremely low-data regime (1-shot), thus generating images of higher quality and more diversity. (2) Our FBNet also better maintains the identity of the objects in different viewpoints. For both HoloGAN and FBNet-view, it is hard to tell whether they keep the identity, but FBNet synthesizes images well from all the viewpoints while maintaining the main color and shape. (3) In addition, we notice that there is just a minor improvement for the visual quality of the synthesis results from HoloGAN to FBNet-view, indicating that simply changing the view synthesis model from unconditional to conditional versions does not improve the performance. However, through our feedback-based joint training with recognition, the quality and diversity of the generated images significantly improve.
+
+Shared Generative Model vs. Shared Feature Representation: We further compare with a standard multi-task baseline (Ruder, 2017), which learns a shared feature representation across the joint tasks, denoted as 'Multitask-Feat' in Table 4. We treat the feature extraction network as a shared component between the recognition module and the view synthesis module, and update its parameters using both tasks without feedback connections. Table 4 shows that, through the feedback connections, our shared generative model captures the underlying image generation mechanism for more comprehensive object understanding, outperforming direct task-level shared feature representation.
+
+Ablation - Different Recognition Networks: While we used ResNet-18 as the default feature extraction network, our approach is applicable to different recognition models. Table 3 shows that the recognition performance with different feature extraction networks consistently improves. Interestingly, ResNet-10/18 outperform the deeper models, indicating that the deeper models might suffer from over-fitting in few-shot regimes, consistent with the observation in Chen et al. (2019a).
+
+
+Figure 4: Ablation on $\lambda_{\mathrm{cat}}$ on CUB ($K=5$). Panel annotations, for $\lambda_{\mathrm{cat}} \in \{0, 0.1, 1, 5\}$: Top-1 Acc $68.84 \pm 0.29$, $70.93 \pm 0.27$, $72.76 \pm 0.24$, $73.09 \pm 0.21$; FID 103.75, 99.50, 92.97, 114.85; IS $2.58$, $2.75 \pm 0.09$, $2.83 \pm 0.12$, $2.36 \pm 0.08$. The categorical loss trades off the performance between view synthesis and recognition.
+
+Figure 5: Synthesized images by HoloGAN and FBNet on CelebA-HQ. Few-shot attributes (left to right): Black Hair, Gray Hair, Bald, Wearing Hat, and Aging. FBNet synthesizes images of higher quality and diversity.
+Ablation - Categorical Loss: In addition to the feedback connections, our synthesis and recognition modules are linked by the categorical loss. To analyze its effect, we vary $\lambda_{\mathrm{cat}}$ among 0 (without the categorical loss), 0.1, 1, and 5. Figure 4 shows the quantitative and qualitative results on CUB. As $\lambda_{\mathrm{cat}}$ increases, the recognition performance improves gradually. Meanwhile, an overly large $\lambda_{\mathrm{cat}}$ reduces the visual quality of the generated images: checkerboard noise appears. While these images are not visually appealing, they still benefit the recognition task. This shows that the categorical loss trades off the performance between the two tasks, and there is a "sweet spot" between them.
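A hedged sketch of how $\lambda_{\mathrm{cat}}$ enters the synthesis objective: the categorical term is an ordinary softmax cross-entropy on the recognition module's predictions for synthesized images, scaled and added to the other generator terms. The helper names and exact loss composition below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cross_entropy(logits, labels, eps=1e-12):
    # Softmax cross-entropy of the recognition module's predictions on synthesized images.
    z = logits - logits.max(axis=1, keepdims=True)  # stabilize the softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(-np.log(probs[np.arange(len(labels)), labels] + eps).mean())

def generator_loss(l_gan, l_id, cat_logits, cat_labels, lam_id=10.0, lam_cat=1.0):
    """Total synthesis objective; lam_cat trades recognition benefit against visual quality."""
    return l_gan + lam_id * l_id + lam_cat * cross_entropy(cat_logits, cat_labels)
```

Raising `lam_cat` weights the classification signal more heavily during generator updates, which matches the observed trade-off: recognition accuracy rises while image quality eventually degrades.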
+
+Ablation - Resolution Distillation and Prototypical Classification: Our proposed resolution distillation reconciles the resolution inconsistency between the synthesis and recognition modules, and further benefits from a recognition model trained on high-resolution images. The prototypical network leverages the synthesized images, which constitutes one of the feedback connections. We evaluate their effect by building two variants of our model without these techniques: 'FBNet w/o Dist' trains the feature extraction network directly from low-resolution images; 'FBNet w/o Proto' uses a regular classification network instead of the prototypical network. Table 4 shows that the performance of full FBNet significantly outperforms these variants, verifying the importance of our techniques.
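Prototypical classification itself is compact: each class prototype is the mean embedding of its support examples (here, real plus synthesized), and a query is assigned to the nearest prototype. A minimal numpy sketch under these assumptions (the feature arrays are hypothetical placeholders for embeddings from the feature extraction network):

```python
import numpy as np

def prototypes(support_feats, support_labels, num_classes):
    """Class prototype = mean embedding of its (real + synthesized) support examples."""
    return np.stack([support_feats[support_labels == c].mean(axis=0)
                     for c in range(num_classes)])

def proto_classify(query_feats, protos):
    """Nearest-prototype prediction under squared Euclidean distance."""
    d2 = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```

Because synthesized images only shift the class means rather than adding parameters, this classifier can absorb augmented support sets without retraining, which is why it fits naturally into the feedback connection.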
+
+Qualitative Results on the CelebA-HQ Dataset: We further show that the visual quality of our synthesized images improves significantly on datasets with better-aligned poses. For this purpose, we conduct experiments on CelebA-HQ (Lee et al., 2020), which contains 30,000 aligned human face images annotated with 40 attributes in total. We randomly select 35 attributes as training attributes and 5 as few-shot test attributes. While CelebA-HQ does not provide pose annotations, the aligned faces mitigate the pose issue to some extent. Figure 5 shows that both the visual quality and diversity of our synthesized images substantially improve, while consistently outperforming HoloGAN.
+
+Discussion and Future Work: Our experimental evaluation has focused on fine-grained categories, mainly because state-of-the-art novel-view synthesis models still cannot address image generation for a wide spectrum of general images (Liu et al., 2019). Meanwhile, our feedback-based bowtie architecture is general. With advances in novel-view synthesis, such as the recent BlockGAN (Nguyen-Phuoc et al., 2020) and RGBD-GAN (Noguchi & Harada, 2020), our framework could potentially be extended to deal with broader types of images. Further directions include exploring more architecture choices and dealing with images containing more than one object.
+
+# 5 CONCLUSION
+
+This paper has proposed a feedback-based bowtie network for the joint task of few-shot recognition and novel-view synthesis. Our model consistently improves performance for both tasks, especially with extremely limited data. The proposed framework could be potentially extended to address more tasks, leading to a generative model useful and shareable across a wide range of tasks.
+
+Acknowledgement: This work was supported in part by ONR MURI N000014-16-1-2007 and by AFRL Grant FA23861714660. We also thank NVIDIA for donating GPUs and AWS Cloud Credits for Research program.
+
+# REFERENCES
+
+Vasileios Belagiannis and Andrew Zisserman. Recurrent human pose estimation. In International Conference on Automatic Face & Gesture Recognition (FG), 2017. 3
+Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In ICLR, 2019. 8
+Dingding Cai, Ke Chen, Yanlin Qian, and Joni-Kristian Kämäräinen. Convolutional low-resolution fine-grained classification. Pattern Recognition Letters, 2019. 2, 5
+Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. In ICLR, 2019a. 4, 6, 7, 8
+Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NeurIPS, 2016. 5
+Zitian Chen, Yanwei Fu, Yu-Xiong Wang, Lin Ma, Wei Liu, and Martial Hebert. Image deformation meta-networks for one-shot learning. In CVPR, 2019b. 2
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009. 7
+DC Dowson and BV Landau. The frechet distance between multivariate normal distributions. Journal of multivariate analysis, 1982. 7
+Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. PAMI, 2006. 1, 2
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017. 1, 2, 7
+Andrew Ford. Modeling the environment: an introduction to system dynamics models of environmental systems. Island press, 1999. 3
+Bharath Hariharan and Ross Girshick. Low-shot visual recognition by shrinking and hallucinating features. In ICCV, 2017. 4
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 2, 5, 6, 14
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NeurIPS Workshops, 2014. 5
+Zhiting Hu, Bowen Tan, Russ R Salakhutdinov, Tom M Mitchell, and Eric P Xing. Learning data manipulation for augmentation and weighting. In NeurIPS, 2019. 3
+Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017. 4
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, 2015. 14
+Umar Iqbal, Pavlo Molchanov, and Jan Kautz. Weakly-supervised 3D human pose learning via multi-view images in the wild. CVPR, 2020. 3
+Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In CVPR, 2018. 3
+Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In CVPR Workshops, 2011. 15
+Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015. 1, 2
+Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. Maskgan: Towards diverse and interactive facial image manipulation. In CVPR, 2020. 9
+
+Aoxue Li, Tiange Luo, Tao Xiang, Weiran Huang, and Liwei Wang. Few-shot learning with global class representations. In ICCV, 2019. 2
+Yujia Li, Kevin Swersky, and Rich Zemel. Generative moment matching networks. In ICML, 2015. 2
+Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, and Jan Kautz. Few-shot unsupervised image-to-image translation. In ICCV, 2019. 9
+Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. Large-margin softmax loss for convolutional neural networks. In ICML, 2016. 5
+Canjie Luo, Yuanzhi Zhu, Lianwen Jin, and Yongpan Wang. Learn to augment: Joint data augmentation and network optimization for text recognition. In CVPR, 2020. 3
+Mateusz Michalkiewicz, Sarah Parisot, Stavros Tsogkas, Mahsa Baktashmotlagh, Anders Eriksson, and Eugene Belilovsky. Few-shot single-view 3-D object reconstruction with compositional priors. In ECCV, 2020. 3
+Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. Cross-stitch networks for multi-task learning. In CVPR, 2016. 3
+Siva Karthik Mustikovela, Varun Jampani, Shalini De Mello, Sifei Liu, Umar Iqbal, Carsten Rother, and Jan Kautz. Self-supervised viewpoint learning from image collections. In CVPR, 2020. 3
+Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. HoloGAN: Unsupervised learning of 3D representations from natural images. In ICCV, 2019. 2, 3, 4, 5, 6, 7, 14, 16, 18
+Thu Nguyen-Phuoc, Christian Richardt, Long Mai, Yong-Liang Yang, and Niloy Mitra. BlockGAN: Learning 3D object-aware scene representations from unlabelled images. NeurIPS, 2020. 3, 9
+Thu H Nguyen-Phuoc, Chuan Li, Stephen Balaban, and Yongliang Yang. Rendernet: A deep convolutional network for differentiable rendering from 3D shapes. In NeurIPS, 2018. 1, 3
+Atsuhiro Noguchi and Tatsuya Harada. RGBD-GAN: Unsupervised 3D representation learning from natural image datasets via rgb image synthesis. In ICLR, 2020. 9
+Arghya Pal and Vineeth N Balasubramanian. Zero-shot task transfer. In CVPR, 2019. 3
+Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, and Alexander C Berg. Transformation-grounded image generation network for novel 3d view synthesis. In CVPR, 2017. 1, 3
+Xi Peng, Zhiqiang Tang, Fei Yang, Rogerio S Feris, and Dimitris Metaxas. Jointly optimize data augmentation and network training: Adversarial data augmentation in human pose estimation. In CVPR, 2018. 3
+Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016. 4
+Sai Rajeswar, Fahim Mannan, Florian Golemo, Jérôme Parent-Lévesque, David Vazquez, Derek Nowrouzezahrai, and Aaron Courville. Pix2Shape: Towards unsupervised learning of 3D scenes from images using a view-based representation. IJCV, 2020. 3
+Krishna Regmi and Ali Borji. Cross-view image synthesis using conditional gans. In CVPR, 2018. 2
+Sebastian Ruder. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098, 2017. 3, 8
+Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NeurIPS, 2016. 7
+Shibani Santurkar, Andrew Ilyas, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Image synthesis with a single (robust) classifier. In NeurIPS, 2019. 3
+Eli Schwartz, Leonid Karlinsky, Joseph Shtok, Sivan Harary, Mattias Marder, Abhishek Kumar, Rogerio Feris, Raja Giryes, and Alex Bronstein. Delta-encoder: an effective sample synthesis method for few-shot object recognition. In NeurIPS, 2018. 2
+
+Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, and Michael Zollhofer. Deepvoxels: Learning persistent 3D feature embeddings. In CVPR, 2019. 1, 3
+Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In NeurIPS, 2017. 1, 2, 4, 5, 6, 7, 17
+Trevor Standley, Amir R Zamir, Dawn Chen, Leonidas Guibas, Jitendra Malik, and Silvio Savarese. Which tasks should be learned together in multi-task learning? In ICML, 2020. 3
+Qianru Sun, Yaoyao Liu, Tat-Seng Chua, and Bernt Schiele. Meta-transfer learning for few-shot learning. In CVPR, 2019. 2
+Shao-Hua Sun, Minyoung Huh, Yuan-Hong Liao, Ning Zhang, and Joseph J Lim. Multi-view to novel view: Synthesizing novel views with self-learned confidence. In ECCV, 2018. 3
+Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In CVPR, 2018. 17
+Sebastian Thrun. Is learning the n-th thing any easier than learning the first? In NeurIPS, 1996. 2
+Satoshi Tsutsui, Yanwei Fu, and David Crandall. Meta-reinforced synthetic data for one-shot fine-grained visual recognition. In NeurIPS, 2019. 2
+Richard Tucker and Noah Snavely. Single-view view synthesis with multiplane images. In CVPR, 2020. 3
+Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016. 14
+Grant Van Horn, Steve Branson, Ryan Farrell, Scott Haber, Jessie Barry, Panos Ipeirotis, Pietro Perona, and Serge Belongie. Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection. In CVPR, 2015. 15
+Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In NeurIPS, 2016. 1, 2, 17
+Yu-Xiong Wang and Martial Hebert. Learning to learn: Model regression networks for easy small sample learning. In ECCV, 2016. 1, 2
+Yu-Xiong Wang, Ross Girshick, Martial Hebert, and Bharath Hariharan. Low-shot learning from imaginary data. In CVPR, 2018. 2, 3, 6, 17
+Zhangyang Wang, Shiyu Chang, Yingzhen Yang, Ding Liu, and Thomas S Huang. Studying very low resolution recognition using deep networks. In CVPR, 2016. 2, 5
+P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010. 6, 14, 17
+Olivia Wiles, Georgia Gkioxari, Richard Szeliski, and Justin Johnson. SynSin: End-to-end view synthesis from a single image. In CVPR, 2020. 3, 7
+Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, and Ali Farhadi. Supermasks in superposition. In NeurIPS, 2020. 3
+Yongqin Xian, Saurabh Sharma, Bernt Schiele, and Zeynep Akata. F-VAEGAN-D2: A feature generating framework for any-shot learning. In CVPR, 2019. 3
+Yang Xiao and Renaud Marlet. Few-shot object detection and viewpoint estimation for objects in the wild. In ECCV, 2020. 3
+Wei Xiong, Yutong He, Yixuan Zhang, Wenhan Luo, Lin Ma, and Jiebo Luo. Fine-grained image-to-image transformation towards visual recognition. In CVPR, 2020. 3
+Linjie Yang, Ping Luo, Chen Change Loy, and Xiaou Tang. A large-scale car dataset for fine-grained categorization and verification. In CVPR, 2015. 6, 14, 17
+
+Yibo Yang, Zhisheng Zhong, Tiancheng Shen, and Zhouchen Lin. Convolutional neural networks with alternately updated clique. In CVPR, 2018. 3
+Zhichao Yin and Jianping Shi. Geonet: Unsupervised learning of dense depth, optical flow and camera pose. In CVPR, 2018. 3
+Jae Shin Yoon, Kihwan Kim, Orazio Gallo, Hyun Soo Park, and Jan Kautz. Novel view synthesis of dynamic scenes with globally coherent depths from a monocular camera. In CVPR, 2020. 3, 7
+Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In CoRL, 2020. 1
+Amir R Zamir, Te-Lin Wu, Lin Sun, William B Shen, Bertram E Shi, Jitendra Malik, and Silvio Savarese. Feedback networks. In CVPR, 2017. 3
+Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In CVPR, 2018. 1, 3
+Feifei Zhang, Tianzhu Zhang, Qirong Mao, and Changsheng Xu. A unified deep model for joint facial expression recognition, face synthesis, and face alignment. TIP, 2020. 3
+Hongguang Zhang, Jing Zhang, and Piotr Koniusz. Few-shot learning via saliency-guided hallucination of samples. In CVPR, 2019. 2
+Ruixiang Zhang, Tong Che, Zoubin Ghahramani, Yoshua Bengio, and Yangqiu Song. MetaGAN: An adversarial approach to few-shot learning. In NeurIPS, 2018. 2, 17
+
+# APPENDIX
+
+
+Figure A: Architecture of ResBlock used in the view synthesis module. The default kernel size is 3 and the stride is 1.
+
+# A DETAILED ARCHITECTURE OF VIEW SYNTHESIS MODULE
+
+One of the central components in our view synthesis module is the ResBlock adapted from ResNet (He et al., 2016), where we use Instance Normalization instead of Batch Normalization (Ulyanov et al., 2016; Ioffe & Szegedy, 2015). Figure A shows the architecture of a 2D ResBlock. For all the convolution layers in this structure, the kernel size is 3 and the stride is 1. By changing the 2D convolution layers to 3D convolution layers, we obtain a 3D ResBlock. Figure B shows the architecture of the discriminator $D$; the kernel size is 3 and the stride is 2 for all its convolution layers. The structure of the generator $G$ is illustrated in Figure C. Note that the "Res-Up" module is a ResBlock followed by an upsampling layer. The kernel size is 3 and the stride is 1 for all the convolution layers.
+
+# B PSEUDOCODE OF TRAINING ALGORITHM
+
+To provide a better understanding of the training procedure of FBNet, Algorithm 1 shows the detailed training process on the base classes. Training on the novel classes follows a similar process, except for the number of support samples $n$ per class ($n = 1$ for 1-shot training).
+
+# C ADDITIONAL EXPERIMENTAL DETAILS
+
+Data Pre-processing: For the CUB (Welinder et al., 2010) dataset, we first square-crop the images with the given bounding boxes, and then resize the cropped images to $64 \times 64$ resolution. For the CompCars (Yang et al., 2015) dataset, slightly different from the original paper and HoloGAN (Nguyen-Phuoc et al., 2019), we follow the instructions of the publicly released CompCars dataset for classification tasks, and obtain 366 classes of car models after standard pre-processing. We then manually drop the 6 classes that have fewer than 15 images per class, and construct a proper dataset for our experiments.
+
+Additional Implementation Details: In the main paper, we set $\lambda_{\mathrm{id}} = 10$ and $\lambda_{\mathrm{cat}} = 1$ via cross-validation, and found that the performance is relatively stable to the setting of these trade-off hyper-parameters. We sample 5 images per class for $C_\mathrm{base}$ and 1 image for $C_\mathrm{novel}$ . We use Adam optimizer for all the networks. The learning rate is set to $5e - 5$ . The final dimension of the feature extraction network is 1,000. The hidden size of all the three fully-connected layers is 128, and the
+
+
+Figure B: Architecture of the discriminator in the view synthesis module. The kernel size of all the convolution layers is 3 and the stride is 2.
+
+
+Figure C: Architecture of the generator in the view synthesis module. Res-Up module is a combination of a ResBlock and an upsampling layer.
+
+final feature dimension of the prototypical classification network is also 128. The batch size is 64 for the view synthesis module. We train 1,400 iterations for $\mathcal{C}_{\mathrm{base}}$ and 100 iterations for $\mathcal{C}_{\mathrm{novel}}$ .
+
+Algorithm 1: Training process of FBNet on base classes.
+
+Notation: max_iter: maximum number of training iterations; $R$: recognition module; $V$: view synthesis module; $F$: feature extraction network; $F_{\mathrm{high}}$: feature extraction network trained on high-resolution images; $G$: generator of the view synthesis module; $D$: discriminator of the view synthesis module; $n$: number of support images per class ($n = 5$).
+
+```txt
+for iter ← 1 to max_iter do
+    S_support = {}, S_query = {}, S_augmented = {};
+    for c ∈ C_base do
+        supportIms ← sample n images from c;
+        queryIms   ← sample 1 image from c;
+        S_support  ← S_support ∪ supportIms;
+        S_query    ← S_query ∪ queryIms;
+    end
+    f_high ← F_high(S_support ∪ S_query);
+    f_low  ← F(S_support ∪ S_query);
+    for img in S_support do
+        f = R(img);
+        z = f ⊕ N;                       // concatenate feature with noise
+        θ ← sample a view angle;
+        img' ← G(z, θ);
+        y = D(img);
+        [y', z', θ'] = D(img');
+        L_GAN(y, img, y', img') → update G, D;
+        L_id(z, θ, z', θ')      → update D;
+        S_augmented ← S_augmented ∪ {img'};
+    end
+    S_whole ← S_support ∪ S_augmented;
+    L_rec(S_whole, S_query)       → update R;
+    L_feature(f_high, f_low)      → update F;
+    L_cat(S_support, S_augmented) → update G;
+end
+```
+
+# D EXPERIMENTS ON ADDITIONAL DATASETS
+
+To show the effectiveness and generality of our proposed FBNet, we conduct experiments on two additional datasets: the North American Birds (NAB) dataset (Van Horn et al., 2015) and the Stanford Dog dataset (DOG) (Khosla et al., 2011). The NAB dataset contains 555 classes with 48,527 images.
+
+| Dataset | Model | Base | Novel-K=1 | Novel-K=5 |
+| --- | --- | --- | --- | --- |
+| NAB | FBNet-rec | 44.56 | 23.97 ± 0.05 | 58.09 ± 0.19 |
+| NAB | FBNet-aug | 44.85 | 23.69 ± 0.08 | 58.40 ± 0.26 |
+| NAB | FBNet | 45.63 | 24.15 ± 0.07 | 58.98 ± 0.15 |
+| DOG | FBNet-rec | 51.13 | 53.33 ± 0.09 | 72.59 ± 0.17 |
+| DOG | FBNet-aug | 51.46 | 53.11 ± 0.11 | 72.68 ± 0.15 |
+| DOG | FBNet | 52.25 | 53.78 ± 0.08 | 73.21 ± 0.14 |
+
+Table A: Top-1 $(\%)$ recognition accuracy on the NAB and DOG datasets. For base classes: 350-way classification on NAB and 80-way classification on DOG; for $K$ -shot novel classes: 205-way classification on NAB and 40-way classification on DOG. Again, our FBNet consistently achieves the best performance for both base and novel classes.
+
+| Dataset | Model | IS ↑ (Base) | IS ↑ (Novel-K=1) | IS ↑ (Novel-K=5) | FID ↓ (Base) | FID ↓ (Novel-K=1) | FID ↓ (Novel-K=5) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| NAB | Real Images | 4.90 ± 0.31 | 3.47 ± 0.14 | 3.88 ± 0.19 | 0 | 0 | 0 |
+| NAB | HoloGAN (Nguyen-Phuoc et al., 2019) | 4.06 ± 0.08 | 2.52 ± 0.04 | 2.65 ± 0.06 | 47.52 | 85.74 | 76.38 |
+| NAB | FBNet | 4.13 ± 0.12 | 2.90 ± 0.04 | 3.05 ± 0.08 | 40.00 | 74.55 | 62.39 |
+| DOG | Real Images | 8.62 ± 0.36 | 4.18 ± 0.11 | 4.91 ± 0.19 | 0 | 0 | 0 |
+| DOG | HoloGAN (Nguyen-Phuoc et al., 2019) | 6.06 ± 0.29 | 3.35 ± 0.11 | 3.85 ± 0.17 | 53.85 | 82.64 | 73.21 |
+| DOG | FBNet | 6.42 ± 0.32 | 3.64 ± 0.16 | 4.02 ± 0.21 | 45.17 | 79.25 | 69.66 |
+
+Table B: Quantitative results of novel-view synthesis under the FID and IS metrics on the NAB and DOG datasets. $\uparrow$ indicates that higher is better, and $\downarrow$ indicates that lower is better. FBNet consistently outperforms the baselines.
+
+
+Figure D: Qualitative comparison of synthesized images from multiple viewpoints between our FBNet and the state-of-the-art HoloGAN on the NAB and DOG datasets. Images in the same row/column are from the same viewpoint/object. The overall quality of the synthesized images for both methods indicates the general difficulty of the task, due to weak supervision and lack of data. However, our FBNet still captures the shape and attributes well, even though the data is severely limited. This shows the strong adaptability of FBNet for few-shot learning.
+
+We randomly select 350 classes as base classes and 255 classes as novel classes. For the DOG dataset, there are 20,580 images belonging to 120 classes. We randomly select 80 classes as base and 40 classes as novel. For each class of the two datasets, we randomly select $75\%$ as training data and $25\%$ as test data. We pre-process these two datasets in a similar way as the CUB dataset.
+
+Slightly different from the evaluation in the main paper, we only compare our FBNet with HoloGAN (Nguyen-Phuoc et al., 2019) for the view synthesis task. For the recognition task, we compare our method with FBNet-rec and FBNet-aug. All the experimental setting remains the same as that in the main paper on the CUB and CompCars datasets.
+
+Quantitative Results: Table A shows the recognition performance of the three competing models. Again, our method achieves the best performance for both base and novel classes. Table B shows the results of view synthesis and our method also achieves the best performance.
+
+Qualitative Results: Figure D shows the synthesized images by HoloGAN and our FBNet on the two datasets. First, we note that the overall quality of the synthesized images for both methods becomes substantially worse than one would expect with large amounts of training images. This demonstrates the general difficulty of the task due to weak supervision and lack of data, indicating the need for the community to focus on such problems. Second, our FBNet significantly outperforms the state-of-the-art HoloGAN, especially in the diversity of the synthesized images. Additionally, even though the data is severely limited, FBNet still captures the shape and some detailed attributes of images well.
+
+| Setting | PN | RN | MN | PMN | PN w/ G | MetaGAN | FBNet |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Base | 46.05 | 45.89 | 46.72 | 47.10 | 48.57 | 39.91 | 49.63 |
+| Novel-K=1 | 20.83 | 22.14 | 21.62 | 22.78 | 22.71 | 18.59 | 23.28 |
+| Novel-K=5 | 50.52 | 50.26 | 50.59 | 51.07 | 52.65 | 44.20 | 53.12 |
+
+Table C: Top-1 $(\%)$ recognition accuracy for our approach and other state-of-the-art few-shot methods, including data hallucination-based methods on CompCars. Our FBNet consistently outperforms the other models.
+
+| Dataset | Model | Base | Novel-K=1 | Novel-K=5 |
+| --- | --- | --- | --- | --- |
+| CUB | PN (Snell et al., 2017) | 57.91 | 47.53 | 71.26 |
+| CUB | FBNet-PN | 59.43 | 48.39 | 72.76 |
+| CUB | RN (Sung et al., 2018) | 58.10 | 47.77 | 70.94 |
+| CUB | FBNet-RN | 59.19 | 48.46 | 72.60 |
+| CompCars | PN (Snell et al., 2017) | 46.05 | 20.83 | 50.52 |
+| CompCars | FBNet-PN | 49.63 | 23.28 | 53.12 |
+| CompCars | RN (Sung et al., 2018) | 45.89 | 22.14 | 50.26 |
+| CompCars | FBNet-RN | 48.99 | 24.83 | 52.72 |
+
+Table D: Top-1 $(\%)$ recognition accuracy for recognition modules with PN (prototypical network) and RN (Relation Network), respectively, on the CUB and CompCars datasets. For base classes: 150-way classification on CUB and 240-way classification on CompCars; for $K$ -shot novel classes: 50-way classification on CUB and 120-way classification on CompCars. The proposed FBNet consistently improves the performance of the single recognition modules for both PN and RN, indicating the generality of our framework.
+
+# E COMPARISON WITH OTHER FEW-SHOT RECOGNITION METHODS
+
+We further compare our FBNet with a variety of state-of-the-art few-shot recognition methods on the CompCars dataset (Yang et al., 2015), including prototypical network (PN) (Snell et al., 2017), relation network (RN) (Sung et al., 2018), matching network (MN) (Vinyals et al., 2016), and proto-matching network (PMN) (Wang et al., 2018). We also compare with two data hallucination-based methods: PN w/ G (Wang et al., 2018) and MetaGAN (Zhang et al., 2018). Table C shows that our FBNet consistently outperforms these methods. Importantly, note that these methods can only address few-shot recognition, while our FBNet is able to deal with the joint task.
+
+# F ADDITIONAL EXPERIMENTAL RESULTS WITH RELATION NETWORK
+
+The proposed feedback-based framework generalizes well. In the main paper, we have shown that it is flexible with different backbone feature extraction networks $F$. In addition, our FBNet can improve the performance of the recognition module with different classification networks $P$ through feedback connections. To show this, we replace the prototypical network (Snell et al., 2017) with the relation network (Sung et al., 2018) and report the recognition results on the CUB (Welinder et al., 2010) and CompCars (Yang et al., 2015) datasets in Table D. The experimental setting remains the same as that in the main paper. From the table, we can see that the full feedback-based models consistently outperform the stand-alone recognition modules for both the prototypical network and the relation network. This indicates that our proposed bowtie architecture is a general and robust framework that can improve the performance of different types of classification networks.
+
+
+Figure E: Synthesized 128-resolution images by our FBNet on the CUB dataset (panels: Base, K=5, and K=1 classes).
+
+
+Figure F: FBNet with an extended attribute transfer task. 'im1' is the source image corresponding to 3D AdaIN and 'im2' is the attribute image corresponding to 2D AdaIN. Images in the same row/column have the same identity/attributes.
+
+# G EXPERIMENTS WITH HIGHER-RESOLUTION IMAGES
+
+In the main paper, following the state-of-the-art novel-view synthesis model HoloGAN (Nguyen-Phuoc et al., 2019), we mainly focused on images of size $64 \times 64$ . Our FBNet can also operate at higher resolutions, by adding more 2D ResBlocks to the view synthesis module and more convolution layers to the recognition module. Here, we modify our model to operate on images of size $128 \times 128$ and show some representative generated images on the CUB dataset in Figure E. We see that the proposed method works effectively at 128-resolution. We also note that higher resolution requires higher-quality training data. The limited size and quality of the CUB dataset introduce additional challenges for synthesis, such as missing details, identity inconsistency across viewpoints, and noisy backgrounds. However, such problems could be further mitigated by improving the quantity and quality of the training data.
+
+# H FROM JOINT-TASK TO TRIPLE-TASK: ATTRIBUTE TRANSFER
+
+The proposed bowtie framework can also be extended to address more than two tasks, by either introducing additional feedback connections or changing the architectures of the two modules. Here, as an example, we introduce an additional "attribute transfer" task combined with the novel-view synthesis task. That is, instead of seeing one image at a time, the view synthesis module sees one source image ('im1') and one target image ('im2') simultaneously; it then generates images with the object of the source image and the attributes of the target image. We achieve this by feeding the source latent input to the 3D AdaIN units and the target latent input to the 2D AdaIN units in the view-synthesis module. Figure F shows the result of this additional task: images in the same row share the identity of im1, and images in the same column share the attributes of im2. Note that, for better visualization, we use the predicted views from the source images as the view input for all the generated images in Figure F, but the model can still synthesize images from different viewpoints.
\ No newline at end of file
diff --git a/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/images.zip b/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fafca1329cc2b37b3fa58396c9087b3c12670c55
--- /dev/null
+++ b/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c06dfc45242bb09520334202b0a3c7c30fdf01dcd1ef3b3f2b276c5376670cd9
+size 783064
diff --git a/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/layout.json b/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8ba7de1c54085200688215a49173cd59aa13189a
--- /dev/null
+++ b/bowtienetworksgenerativemodelingforjointfewshotrecognitionandnovelviewsynthesis/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b894ce39d6e608208295c070e1d82f5d9db39c2bcffcace655a0aff7e2927fc6
+size 609546
diff --git a/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/b5d6bcae-019f-479e-8c73-e8d492e550c8_content_list.json b/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/b5d6bcae-019f-479e-8c73-e8d492e550c8_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ffa5212595854ec059ba1374e528aaf58c944a9a
--- /dev/null
+++ b/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/b5d6bcae-019f-479e-8c73-e8d492e550c8_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17094d52bd40c36965c1c6a2f42d316d258b4b4b4201b6abf2499cea9c9856ef
+size 107165
diff --git a/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/b5d6bcae-019f-479e-8c73-e8d492e550c8_model.json b/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/b5d6bcae-019f-479e-8c73-e8d492e550c8_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fec646e8c63e6007123d58dfbee2037d6695005f
--- /dev/null
+++ b/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/b5d6bcae-019f-479e-8c73-e8d492e550c8_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee49ebbd48426e7105ec2845f85c5814254bd8be479cd4dd19cd39b575085c58
+size 127456
diff --git a/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/b5d6bcae-019f-479e-8c73-e8d492e550c8_origin.pdf b/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/b5d6bcae-019f-479e-8c73-e8d492e550c8_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..88550f4c248e4e91190ebf7215b2b66e4c6cc336
--- /dev/null
+++ b/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/b5d6bcae-019f-479e-8c73-e8d492e550c8_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6d3c74789f7b0b69f658646b0e8574ecf5469bf4b86c09a3694c666e2caeffa
+size 667398
diff --git a/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/full.md b/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..728c5fac73408a53f8ed3851aced0b6c0f895a33
--- /dev/null
+++ b/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/full.md
@@ -0,0 +1,401 @@
+# BRECQ: PUSHING THE LIMIT OF POST-TRAINING QUANTIZATION BY BLOCK RECONSTRUCTION
+
+Yuhang Li $^{1,2*}$ , Ruihao Gong $^{2*}$ , Xu Tan $^{2}$ , Yang Yang $^{2}$ , Peng Hu $^{2}$ , Qi Zhang $^{2}$ , Fengwei Yu $^{2}$ , Wei Wang, Shi Gu $^{1\boxtimes}$
+
+1University of Electronic Science and Technology of China, 2SenseTime Research liuyuhang699@gmail.com, gongruihao@sensetime.com, gus@uestc.edu.cn
+
+# ABSTRACT
+
+We study the challenging task of neural network quantization without end-to-end retraining, called Post-training Quantization (PTQ). PTQ usually requires a small subset of training data but produces less powerful quantized models than Quantization-Aware Training (QAT). In this work, we propose a novel PTQ framework, dubbed BRECQ, which pushes the limits of bitwidth in PTQ down to INT2 for the first time. BRECQ leverages the basic building blocks in neural networks and reconstructs them one-by-one. In a comprehensive theoretical study of the second-order error, we show that BRECQ achieves a good balance between cross-layer dependency and generalization error. To further exploit the power of quantization, the mixed precision technique is incorporated into our framework by approximating the inter-layer and intra-layer sensitivity. Extensive experiments on various handcrafted and searched neural architectures are conducted for both image classification and object detection tasks. For the first time, we prove that, without bells and whistles, PTQ can attain 4-bit ResNet and MobileNetV2 comparable with QAT and enjoy $240 \times$ faster production of quantized models. Codes are available at https://github.com/yhhhli/BRECQ.
+
+# 1 INTRODUCTION
+
+The past decade has witnessed the rapid development of deep learning in many tasks, such as computer vision and autonomous driving. However, the huge computation cost and memory footprint of deep learning have received considerable attention. Some works, such as neural architecture search (Zoph & Le, 2016), try to design and search for tiny networks, while others, like quantization (Hubara et al., 2017) and network pruning (Han et al., 2015), are designed to compress and accelerate off-the-shelf well-trained redundant networks.
+
+Many popular quantization and network pruning methods follow a simple pipeline: train the original model and then finetune the quantized/pruned model. However, this pipeline requires the full training dataset and substantial computation resources to perform end-to-end backpropagation, greatly delaying the production cycle of compressed models. Besides, training data are not always available, considering privacy concerns. Therefore, there is growing industrial demand for quantizing neural networks without retraining, which is called Post-training Quantization. Although PTQ is fast and light, it suffers from severe accuracy degeneration when the quantization precision is low. For example, DFQ (Nagel et al., 2019) can quantize ResNet-18 to 8-bit without accuracy loss (69.7% top-1 accuracy), but with 4-bit quantization it only achieves 39% top-1 accuracy. The primary reason is that approximation in parameter space is not equivalent to approximation in model space, so we cannot guarantee optimal minimization of the final task loss.
+
+Recent works like Nagel et al. (2020) recognized this problem and analyzed the loss degradation by Taylor series expansion. Analysis of the second-order error term indicates that we can reconstruct each layer's output to approximate the task loss degeneration. However, their work cannot further quantize the weights to INT2, because the cross-layer dependency in the Hessian matrix cannot be ignored when the perturbation on the weights is not small enough. In this work, we analyze the second-order error based on the Gauss-Newton matrix. We show that the second-order error can be transformed into the network's final outputs but suffers from poor generalization. To achieve the best tradeoff, we adopt an intermediate choice, block reconstruction. In addition, our contributions are threefold:
+
+1. Based on the second-order analysis, we define a set of reconstruction units and show that block reconstruction is the best choice with the support from theoretical and empirical evidence. We also use Fisher Information Matrix to assign each pre-activation with an importance measure during reconstruction.
+2. We incorporate a genetic algorithm and a well-defined intra-block sensitivity measure to generate latency- and size-guaranteed mixed precision quantized neural networks, which delivers a general improvement on both specialized hardware (FPGA) and general hardware (ARM CPU).
+3. We conduct extensive experiments to verify our proposed methods. We find that our method is applicable to a large variety of tasks and models. Moreover, we show that post-training quantization can quantize weights to INT2 without significant accuracy loss for the first time.
+
+# 2 PRELIMINARIES
+
+Notations Vectors are denoted by small bold letters and matrices (or tensors) are denoted by capital bold letters. For instance, $\mathbf{W}$ and $\mathbf{w}$ represent the weight tensor and its flattened version. Bar accent denotes the expectation over data points, e.g. $\bar{\mathbf{a}}$ . Bracketed superscript $\mathbf{w}^{(\ell)}$ indicates the layer index. For a convolutional or a fully-connected layer, we mark its input and output vectors by $\mathbf{x}$ and $\mathbf{z}$ . Thus given a feedforward neural network with $n$ layers, we can denote the forward process by
+
+$$
+\mathbf {x} ^ {(\ell + 1)} = h \left(\mathbf {z} ^ {(\ell)}\right) = h \left(\mathbf {W} ^ {(\ell)} \mathbf {x} ^ {(\ell)} + \mathbf {b} ^ {(\ell)}\right), \quad 1 \leq \ell \leq n, \tag {1}
+$$
+
+where $h(\cdot)$ indicates the activation function (ReLU in this paper). For simplicity, we omit the analysis of bias $\mathbf{b}^{(\ell)}$ as it can be merged into activation. $||\cdot||_F$ denotes the Frobenius norm.
+
+Quantization Background Uniform symmetric quantization maps the floating-point numbers to several fixed-points. These points (or grids) have the same interval and are symmetrically distributed. We denote the set that contains these grids as $\mathcal{Q}_b^{\mathrm{u,sym}} = s\times \{-2^{b - 1},\ldots ,0,\ldots ,2^{b - 1} - 1\}$ . Here, $s$ is the step size between two grids and $b$ is the bit-width. Quantization function, denoted by $q(\cdot):\mathcal{R}\to \mathcal{Q}_b^{\mathrm{u,sym}}$ , is generally designed to minimize the quantization error:
+
+$$
+\min \left\| \hat {\mathbf {w}} - \mathbf {w} \right\| _ {F} ^ {2}, \quad \mathrm {s.t.} \quad \hat {\mathbf {w}} \in \mathcal {Q} _ {b} ^ {\mathrm {u,sym}}. \tag {2}
+$$
+
+Solving this minimization problem, one can easily obtain $q(\cdot)$ by leveraging the rounding-to-nearest operation $\lfloor \cdot \rceil$ . Rounding-to-nearest is a prevalent method to perform quantization, e.g. PACT (Choi et al., 2018). However, recent empirical and theoretical evidence suggests that simply minimizing the quantization error in parameter space does not bring optimal task performance. Specifically, Esser et al. (2020) propose to learn the step size $s$ by gradient descent in quantization-aware training (QAT). LAPQ (Nahshan et al., 2019) finds the optimal step size by minimizing the loss function without re-training the weights. Their motivations all point toward minimizing a final objective, the task loss, i.e.,
+
+$$
+\min \mathbb {E} [ L (\hat {\mathbf {w}}) ], \quad \mathrm {s.t.} \quad \hat {\mathbf {w}} \in \mathcal {Q} _ {b} ^ {\mathrm {u,sym}}. \tag {3}
+$$
+
+While this optimization objective is simple and can be well optimized in QAT scenarios, it is hard to learn the quantized weights without end-to-end finetuning, sufficient training data, and computing resources. In the post-training quantization setting, we only have the full-precision weights $\mathbf{w}^{\star} = \arg \min_{\mathbf{w}}\mathbb{E}[L(\mathbf{w})]$ , where $\mathbf{w}\in \mathcal{R}$ , and a small subset of the training data for calibration.
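As a concrete illustration of Eq. (2), the sketch below quantizes a weight tensor with uniform symmetric quantization and rounding-to-nearest. The max-based choice of the step size $s$ is a common heuristic assumed here for illustration; it is not prescribed by the text.

```python
import numpy as np

def quantize_uniform_symmetric(w, bits=4):
    """Map w onto the grid Q_b = s * {-2^{b-1}, ..., 0, ..., 2^{b-1} - 1}."""
    qmin, qmax = -2 ** (bits - 1), 2 ** (bits - 1) - 1
    s = np.abs(w).max() / qmax                  # assumed max-based step size
    w_int = np.clip(np.round(w / s), qmin, qmax)
    return s * w_int, s                         # quantized weights and step size

w = np.array([0.31, -0.72, 0.05, 1.20])
w_hat, s = quantize_uniform_symmetric(w, bits=4)
```

Within the representable range, each quantized weight lies within half a step size of the original, which is exactly the per-element minimizer of Eq. (2).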
+
+Taylor Expansion It turns out that the quantization imposed on weights can be viewed as a special case of weight perturbation. To quantitatively analyze the loss degradation caused by quantization, Nagel et al. (2020) use a Taylor series expansion and approximate the loss degradation by
+
+$$
+\mathbb {E} [ L (\mathbf {w} + \Delta \mathbf {w}) ] - \mathbb {E} [ L (\mathbf {w}) ] \approx \Delta \mathbf {w} ^ {\top} \bar {\mathbf {g}} ^ {(\mathbf {w})} + \frac {1}{2} \Delta \mathbf {w} ^ {\top} \bar {\mathbf {H}} ^ {(\mathbf {w})} \Delta \mathbf {w}, \tag {4}
+$$
+
+where $\bar{\mathbf{g}}^{(\mathbf{w})} = \mathbb{E}[\nabla_{\mathbf{w}}L]$ and $\bar{\mathbf{H}}^{(\mathbf{w})} = \mathbb{E}[\nabla_{\mathbf{w}}^2 L]$ are the expected gradient and Hessian matrix, and $\Delta \mathbf{w}$ is the weight perturbation. Given that the pre-trained model has converged to a minimum, the gradients can safely be assumed to be close to 0. However, optimizing with the large-scale full Hessian is memory-infeasible on many devices, as the full Hessian requires terabytes of memory space. To tackle this problem, they make two assumptions:
+
+1. Layers are mutually independent, thus the Hessian is layer-diagonal and Kronecker-factored, i.e., $\bar{\mathbf{H}}^{(\mathbf{w}^{(\ell)})} = \mathbb{E}[\mathbf{x}^{(\ell)}\mathbf{x}^{(\ell),\top}\otimes \mathbf{H}^{(\mathbf{z}^{(\ell)})}]$ , where $\otimes$ is the Kronecker product.
+2. The second-order derivative of the pre-activations is a constant diagonal matrix $(\mathbf{H}^{(\mathbf{z}^{(\ell)})} = c\times \mathbf{I})$ that is independent of the input data points.
+
+At last, the objective is transformed into a practical proxy signal, the change in feature maps $(\mathbf{z} = \mathbf{W}\mathbf{x})$ , and the quantized model can be obtained by a layer-by-layer feature map reconstruction algorithm (with few calibration images). Recent works, like Bit-Split (Wang et al., 2020) and AdaQuant (Hubara et al., 2020), also adopt this layer-wise objective to improve post-training quantization. However, they fail to quantize weights to INT2. We think the inherent reason is that when $\Delta \mathbf{w}$ grows larger, the former assumptions no longer hold and a more accurate signal is required.
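To see why Eq. (4) is a useful proxy, consider a toy least-squares model: at converged weights the gradient term vanishes, and for a quadratic loss the remaining Hessian term reproduces the loss change exactly. This is a sketch on made-up data, not an experiment from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=64)

def L(w):                                        # task loss: mean squared error
    return np.mean((X @ w - y) ** 2)

w_star = np.linalg.lstsq(X, y, rcond=None)[0]    # converged weights, gradient ~ 0
H = 2.0 * X.T @ X / len(X)                       # exact Hessian of this quadratic loss
dw = np.array([0.05, -0.03, 0.02])               # stand-in for quantization noise

exact = L(w_star + dw) - L(w_star)
approx = 0.5 * dw @ H @ dw                       # Eq. (4) with the gradient term dropped
```

For this quadratic loss the second-order expansion is not an approximation at all, which makes the quality of the proxy for real networks hinge entirely on the Hessian assumptions discussed above.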
+
+# 3 PROPOSED METHOD
+
+# 3.1 CROSS-LAYER DEPENDENCY
+
+Denote the neural network output $\mathbf{z}^{(n)} = f(\theta)$ , the loss function can be represented by $L(f(\theta))$ where $\theta = \mathrm{vec}[\mathbf{w}^{(1),\top},\dots,\mathbf{w}^{(n),\top}]^{\top}$ is the stacked vector of weights in all $n$ layers. The Hessian matrix can be computed by
+
+$$
+\frac {\partial^ {2} L}{\partial \theta_ {i} \partial \theta_ {j}} = \frac {\partial}{\partial \theta_ {j}} \left(\sum_ {k = 1} ^ {m} \frac {\partial L}{\partial \mathbf {z} _ {k} ^ {(n)}} \frac {\partial \mathbf {z} _ {k} ^ {(n)}}{\partial \theta_ {i}}\right) = \sum_ {k = 1} ^ {m} \frac {\partial L}{\partial \mathbf {z} _ {k} ^ {(n)}} \frac {\partial^ {2} \mathbf {z} _ {k} ^ {(n)}}{\partial \theta_ {i} \partial \theta_ {j}} + \sum_ {k, l = 1} ^ {m} \frac {\partial \mathbf {z} _ {k} ^ {(n)}}{\partial \theta_ {i}} \frac {\partial^ {2} L}{\partial \mathbf {z} _ {k} ^ {(n)} \partial \mathbf {z} _ {l} ^ {(n)}} \frac {\partial \mathbf {z} _ {l} ^ {(n)}}{\partial \theta_ {j}}, \tag {5}
+$$
+
+where $\mathbf{z}^{(n)}\in \mathcal{R}^m$ . Since the pretrained full-precision model has converged to a local minimum, we can assume the Hessian is positive-semidefinite (PSD). Specifically, for the converged model, $\nabla_{\mathbf{z}^{(n)}}L$ is close to $\mathbf{0}$ , so the first term in Eq. (5) is neglected and the Hessian becomes the Gauss-Newton (GN) matrix $\mathbf{G}^{(\theta)}$ . The GN matrix can be written in matrix form (Botev et al., 2017) as
+
+$$
+\mathbf {H} ^ {(\theta)} \approx \mathbf {G} ^ {(\theta)} = \mathbf {J} _ {\mathbf {z} ^ {(n)}} (\theta) ^ {\top} \mathbf {H} ^ {(\mathbf {z} ^ {(n)})} \mathbf {J} _ {\mathbf {z} ^ {(n)}} (\theta), \tag {6}
+$$
+
+where $\mathbf{J}_{\mathbf{z}^{(n)}}(\theta)$ is the Jacobian matrix of the network output with respect to the network parameters. However, in practice, we cannot explicitly compute and store the Jacobian for each input data point in such a raw form. To reduce the computation and memory budget, we will transform the second-order error into the network output, as shown in the following theorem.
+
+Theorem 3.1. Consider an $n$ -layer feedforward neural network with ReLU activation function. Assuming all weights are quantized, the second-order error optimization can be transformed by:
+
+$$
+\underset {\hat {\theta}} {\arg \min } \Delta \theta^ {\mathsf {T}} \bar {\mathbf {H}} ^ {(\theta)} \Delta \theta \approx \underset {\hat {\theta}} {\arg \min } \mathbb {E} \left[ \Delta \mathbf {z} ^ {(n), \mathsf {T}} \mathbf {H} ^ {(\mathbf {z} ^ {(n)})} \Delta \mathbf {z} ^ {(n)} \right]. \tag {7}
+$$
+
+Remark 3.1. The same transformation is also applicable to activation quantization. The quadratic loss is defined as $\mathbb{E}[\Delta \gamma^{\top}\mathbf{H}^{(\gamma)}\Delta \gamma ]$ , where $\Delta \gamma = \operatorname {vec}[\Delta \mathbf{x}^{(1),\top},\dots ,\Delta \mathbf{x}^{(n),\top}]^{\top}$ .
+
+We prove the theorem using the quadratic form; details can be found in Appendix A.1. Here we provide a sketch of the proof in matrix form. The product of the perturbation and the Jacobian can be thought of as the first-order Taylor approximation of the change in the network output $\Delta \mathbf{z}^{(n)}$ :
+
+$$
+\Delta \mathbf {z} ^ {(n)} = \hat {\mathbf {z}} ^ {(n)} - \mathbf {z} ^ {(n)} \approx \mathbf {J} _ {\mathbf {z} ^ {(n)}} (\theta) \Delta \theta . \tag {8}
+$$
+
+Therefore, combining Eq. (8) and Eq. (6), we can transform the large-scale second-order error into the change in network outputs characterized by the output Hessian $\mathbf{H}^{(\mathbf{z}^{(n)})}$ . The theorem indicates a simple observation: given a well-trained teacher model and an initialized student model, we can minimize their discrepancy by reconstructing the network's final output $\mathbf{z}^{(n)}$ , which coincides with and generalizes distillation (Hinton et al., 2015; Polino et al., 2018). LAPQ (Nahshan et al., 2019) also considers the dependency, but its optimization does not rely on second-order information. However, we should emphasize that distillation requires the same computation and data resources as a normal training procedure, which is impractical for PTQ with limited data.
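Eq. (8) can be checked numerically on a tiny two-layer ReLU network: for a small weight perturbation that does not flip any ReLU activation, the change in the network output matches the Jacobian-vector product. This is a toy sketch with hand-picked numbers, not part of BRECQ.

```python
import numpy as np

x = np.array([1.0, -0.5, 2.0])
W1 = np.array([[0.2, 0.1, 0.3],
               [-0.4, 0.5, 0.1],
               [0.3, -0.2, -0.5],
               [0.1, 0.4, 0.2]])
W2 = np.array([[0.5, -0.3, 0.2, 0.4],
               [-0.1, 0.6, 0.3, -0.2]])

def net(theta):                          # z = W2 relu(W1 x), theta = vec(W1)
    return W2 @ np.maximum(theta.reshape(4, 3) @ x, 0.0)

theta = W1.ravel()
eps = 1e-6
J = np.zeros((2, theta.size))            # Jacobian by central finite differences
for i in range(theta.size):
    tp, tm = theta.copy(), theta.copy()
    tp[i] += eps
    tm[i] -= eps
    J[:, i] = (net(tp) - net(tm)) / (2 * eps)

dtheta = 1e-3 * np.arange(1, 13) / 12.0  # small perturbation, Delta theta
dz_exact = net(theta + dtheta) - net(theta)
dz_lin = J @ dtheta                      # Eq. (8): J_{z}(theta) * Delta theta
```

Because the perturbation is small enough to keep every pre-activation on the same side of zero, the network is locally linear in the weights and the two quantities agree; for large perturbations (e.g. INT2 quantization noise) this linearization degrades, which is exactly the regime the paper targets.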
+
+Figure 1: We define 4 kinds of reconstruction granularity, namely net-wise, stage-wise, block-wise and layer-wise optimization, each of which corresponds to an essential component of a CNN.
+
+(a) A typical structure of a CNN (taken from Radosavovic et al. (2020)). The network is composed of a stem layer (the first convolution on input images), a body, and a head layer (average pooling with a fully connected layer). The body contains several stages, and a stage contains several blocks. A representative block is the bottleneck block with a residual path.
+(b) An example illustration of the Hessian (or Fisher) matrix. Blue sub-blocks correspond to the layer-diagonal case, where layers are mutually independent; orange sub-blocks consider the dependency inside a building block; green parts measure all dependencies.
+
+# 3.2 BLOCK RECONSTRUCTION
+
+Although the network output reconstruction gives an accurate estimation of the second-order error, we find in practice that it is worse than layer-by-layer reconstruction in PTQ. The primary reason is that optimizing the whole network over 1024 calibration samples easily leads to over-fitting. As Jakubovitz et al. (2019) explained, networks can have perfect expressivity when the number of parameters exceeds the number of training samples, but lower training error does not ensure lower test error. We find that layer-wise reconstruction acts like a regularizer, reducing the generalization error by matching each layer's output distribution. In other words, layer-wise and network-wise output reconstruction both have their own drawbacks, and there should be a better bias-variance trade-off at an intermediate reconstruction granularity.
+
+The layer-wise optimization corresponds to a layer-diagonal Hessian (Fig. 1b, blue parts) and the network-wise optimization corresponds to the full Hessian (Fig. 1b, green parts). Similarly, we can define an intermediate block-diagonal Hessian. Formally, if layer $k$ to layer $\ell$ (where $1\leq k < \ell \leq n$ ) form a block, the weight vector is defined as $\tilde{\theta} = \mathrm{vec}[\mathbf{w}^{(k),\top},\dots,\mathbf{w}^{(\ell),\top}]^{\top}$ and the Hessian can also be transformed by $\Delta \tilde{\theta}^{\top}\bar{\mathbf{H}}^{(\tilde{\theta})}\Delta \tilde{\theta} = \mathbb{E}[\Delta \mathbf{z}^{(\ell),\top}\mathbf{H}^{(\mathbf{z}^{(\ell)})}\Delta \mathbf{z}^{(\ell)}]$ . Such a block-diagonal Hessian ignores the inter-block dependency while keeping the intra-block dependency, and it produces less generalization error. Then we can reconstruct the intermediate outputs block-by-block.
+
+To this end, we define 2 extra kinds of intermediate reconstruction granularity: Stage-wise reconstruction and Block-wise reconstruction. These 4 reconstruction granularities are described below:
+
+1. Layer-wise Reconstruction: Assume the Hessian matrix is layer-diagonal and optimize the layer outputs one-by-one. It does not consider cross-layer dependency and resembles existing methods (Nagel et al., 2020; Hubara et al., 2020; Wang et al., 2020).
+2. Block-wise Reconstruction: A block is the core component of a modern CNN, such as the Residual Bottleneck Block shown in Fig. 1a. This method assumes the Hessian matrix is block-diagonal and performs reconstruction block-by-block, which ignores inter-block dependencies.
+3. Stage-wise Reconstruction: A stage is where feature maps are downsampled and more channels are generated, which is believed to produce higher-level features. A typical CNN for the ImageNet dataset contains 4 or 5 stages. This method simultaneously optimizes all layers within a stage and thus considers more dependencies than the block-wise method.
+4. Network-wise Reconstruction: Optimize the whole quantized network by reconstructing the output of the final layer. This method resembles distillation but does not yield good performance with few images because of high generalization error.
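To make the four granularities concrete, the sketch below partitions a network's layer indices into reconstruction units. The helper and its arguments are hypothetical, for illustration only; they are not from the released BRECQ code.

```python
def reconstruction_units(n_layers, blocks, stages, granularity):
    """Partition layer indices 0..n_layers-1 into reconstruction units.

    `blocks` and `stages` are lists of (start, end) half-open index ranges
    describing the network topology; `granularity` selects one of the four
    schemes described in the text."""
    if granularity == "layer":
        return [[i] for i in range(n_layers)]          # one unit per layer
    if granularity == "block":
        return [list(range(a, b)) for a, b in blocks]  # BRECQ's choice
    if granularity == "stage":
        return [list(range(a, b)) for a, b in stages]
    return [list(range(n_layers))]                     # "net": the whole network

units = reconstruction_units(6, blocks=[(0, 2), (2, 4), (4, 6)],
                             stages=[(0, 4), (4, 6)], granularity="block")
```

Each returned unit is then reconstructed independently, with only the unit's final output matched against the full-precision model.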
+
+The relationship between network, stage, block, and layer is illustrated in Fig. 1a. We test these 4 kinds of reconstruction granularity and find that block-wise optimization outperforms the others. We think this is because the main off-diagonal loss in the Hessian is concentrated within each block, as the orange part of Fig. 1b illustrates, while the inter-block loss is small and can be ignored in the optimization. The shortcut connections proposed in (He et al., 2016) may also increase the dependencies within a block. Also, stage-wise and net-wise optimization suffer from poor generalization on the validation set and degrade the final performance. We report the quantitative comparison in Sec. 4.1. We name our algorithm BRECQ because we choose the block as our base reconstruction unit. It is necessary to point out that our analysis does not give the optimal configuration of the reconstruction granularity. The choice of block-wise optimization comes from our experiments, and we find this choice has two merits: (1) no hyper-parameters are involved, and (2) it is applicable to all models and tasks we tested.
+
+Algorithm 1: BRECQ optimization
+Input: Pretrained FP model; calibration dataset; iterations $T$
+1. For each block $i = 1,2,\dots ,N$ in the FP model:
+   (a) Collect the input data to the block $\mathbf{x}^{(i)}$ , the FP output $\mathbf{z}^{(i)}$ and its gradient $\mathbf{g}^{(\mathbf{z}^{(i)})}$ .
+   (b) For each iteration $j = 1,2,\ldots ,T$ :
+       i. Get the quantized output $\hat{\mathbf{z}}^{(i)}$ and compute $\Delta \mathbf{z}^{(i)} = \mathbf{z}^{(i)} - \hat{\mathbf{z}}^{(i)}$ .
+       ii. Descend Eq. (10) and update the rounding of all the weights in this block (Eq. (16)).
+       iii. If activation quantization is triggered, update the activation quantization step size (Eq. (18)).
+   (c) After optimization, compute the sensitivity for each layer and between layers (2-bit only).
+Output: Quantized model; sensitivities for mixed precision.
+
+# 3.3 APPROXIMATING PRE-ACTIVATION HESSIAN
+
+With the block-diagonal approximated Hessian matrix, we can measure the cross-layer dependency inside each block and transform any block's second-order error to the output of that block, $\mathbb{E}[\Delta \mathbf{z}^{(\ell),\top}\mathbf{H}^{(\mathbf{z}^{(\ell)})}\Delta \mathbf{z}^{(\ell)}]$ . This objective requires further knowledge from the rest of the network, i.e., the pre-activation Hessian $\mathbf{H}^{(\mathbf{z}^{(\ell)})}$ . One way is to follow Nagel et al. (2020) and assume $\mathbf{H}^{(\mathbf{z}^{(\ell)})} = c\times \mathbf{I}$ , so that the quadratic loss becomes $||\Delta \mathbf{z}^{(\ell)}||^2$ . This method is easy to implement but loses too much information.
+
+We use the diagonal Fisher Information Matrix (FIM) to replace the pre-activation Hessian. Formally, given a probabilistic model $p(x|\theta)$ , the FIM is defined as:
+
+$$
+\bar {\mathbf {F}} ^ {(\theta)} = \mathbb {E} \left[ \nabla_ {\theta} \log p _ {\theta} (y | x) \nabla_ {\theta} \log p _ {\theta} (y | x) ^ {\top} \right] = - \mathbb {E} \left[ \nabla_ {\theta} ^ {2} \log p _ {\theta} (y | x) \right] = - \bar {\mathbf {H}} _ {\log p (x | \theta)} ^ {(\theta)}. \tag {9}
+$$
+
+The FIM is equal to the negative expected Hessian of the log-likelihood function; therefore, a simple corollary is that the Hessian of the task loss becomes the FIM if the model distribution matches the true data distribution (LeCun et al., 2012). Although matching the true data distribution exactly seems impossible, this is the best approximation we can make since the pretrained model has converged.
+
+The diagonal of the pre-activation FIM is equal to the squared gradient of each element, which is successfully used in Adam (Kingma & Ba, 2014) as the second moment. The optimization objective becomes
+
+$$
+\min _ {\tilde {\mathbf {w}}} \mathbb {E} \left[ \Delta \mathbf {z} ^ {(\ell), \top} \mathbf {H} ^ {(\mathbf {z} ^ {(\ell)})} \Delta \mathbf {z} ^ {(\ell)} \right] = \min _ {\tilde {\mathbf {w}}} \mathbb {E} \left[ \Delta \mathbf {z} ^ {(\ell), \top} \operatorname {d i a g} \left(\left(\frac {\partial L}{\partial \mathbf {z} _ {1} ^ {(\ell)}}\right) ^ {2}, \dots , \left(\frac {\partial L}{\partial \mathbf {z} _ {a} ^ {(\ell)}}\right) ^ {2}\right) \Delta \mathbf {z} ^ {(\ell)} \right]. \tag {10}
+$$
+
+Compared with MSE minimization, the above objective incorporates the squared gradient information: if an output element has a larger absolute gradient, it receives more attention during reconstruction. A similar method for pruning pre-activations has been proposed in Theis et al. (2018).
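A minimal sketch of the objective in Eq. (10): the diagonal FIM reduces to per-output squared gradients, so the reconstruction loss is a gradient-weighted MSE. Function and variable names here are ours, not from the released code.

```python
import numpy as np

def fim_weighted_loss(z_fp, z_q, grad_z):
    """Eq. (10): diagonal-FIM (squared-gradient) weighted reconstruction loss.

    z_fp, z_q: full-precision and quantized block outputs, shape (batch, dim).
    grad_z:    dL/dz evaluated at the full-precision output, same shape."""
    dz = z_fp - z_q
    return np.mean(np.sum((grad_z ** 2) * dz ** 2, axis=-1))

z_fp = np.array([[1.0, 2.0], [0.5, -1.0]])
z_q = np.array([[0.9, 2.2], [0.4, -0.8]])
grad = np.array([[2.0, 0.1], [1.0, 0.1]])
loss = fim_weighted_loss(z_fp, z_q, grad)
```

With `grad_z` set to all ones this reduces to a plain sum-of-squares reconstruction loss, which makes the role of the gradient weighting easy to see.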
+
+Note that BRECQ is compatible with any optimization method, like STE (Hubara et al., 2017). Here we adopt adaptive rounding (Nagel et al., 2020) for the weights and learned step size (Esser et al., 2020) for the activation step size, because we observe they generally perform better in PTQ; see details in Appendix B.4.1. We formulate the overall calibration algorithm for a unified-precision model in Algorithm 1. We emphasize that we only need a small subset (1024 images in our experiments) of the whole training dataset to calibrate the quantized model, and we can obtain a quantized ResNet-18 within 20 minutes on a single GTX 1080TI GPU.
+
+# 3.4 MIXED PRECISION
+
+To further push the limit of post-training quantization, we employ mixed precision techniques, which can be formulated by
+
+$$
+\min _ {\mathbf {c}} L (\hat {\mathbf {w}}, \mathbf {c}), \text {s . t .} H (\mathbf {c}) \leq \delta , \mathbf {c} \in \{2, 4, 8 \} ^ {n}. \tag {11}
+$$
+
+Here, $\mathbf{c}$ is the bit-width vector with the shape of number of layers. $H(\cdot)$ is a hardware performance measurement function, which is used to ensure the mixed precision model has the same or lower hardware performance (e.g., memory and speed) than a predefined threshold $\delta$ . We choose 2, 4, 8-bit for mixed precision because they are most common in practical deployment.
+
+Regarding the training loss $L$ , we find that nearly all existing literature (Cai et al., 2020; Hubara et al., 2020; Dong et al., 2019) uses a layer-wise measurement. They all assume the sensitivities of layers are independent and can be summed together, so the mixed precision problem becomes an integer programming problem. However, we argue that the loss measurement should contain two parts: a diagonal loss and an off-diagonal loss. The first is the same as in previous works and measures the sensitivity of each layer independently, while the off-diagonal loss measures the cross-layer sensitivity. Theoretically, we should examine all combinations, which results in $3^n$ possibilities and prohibits the search algorithm. Our first attempt is to reduce the off-diagonal loss to the block level, since, as mentioned, the Hessian can be approximated by a block-diagonal matrix. Granted, the search space is still large; for example, if a block has four layers, we have to consider $3^4 = 81$ combinations for a single block. Based on our preliminary experiments, we find that 4-bit and 8-bit quantization barely drop the final accuracy. Hence, we only take 2-bit combinations into consideration, drastically reducing the search space. We use a genetic algorithm (Guo et al., 2020) to search for the optimal bitwidth configuration under a hardware performance threshold; the procedure is given in Algorithm 2. Due to space limits, we put related works in Appendix 5, where readers can find a brief discussion of quantization and second-order analysis.
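The reduced search problem can be sketched as follows: each layer either drops to 2-bit or stays at a higher precision (4-bit here), subject to a model-size constraint $H(\mathbf{c}) \le \delta$ . For a handful of layers an exhaustive scan suffices; the paper uses a genetic algorithm for realistic depths. All numbers below (parameter counts, sensitivities, budget) are made up for illustration.

```python
import itertools

params = [1000, 2000, 500, 1500]     # hypothetical per-layer parameter counts
sens2 = [0.90, 0.10, 0.50, 0.05]     # hypothetical loss increase if a layer -> 2-bit
budget = int(sum(params) * 4 * 0.8)  # H(c) <= 80% of the all-4-bit model size (bits)

best_loss, best_bits = float("inf"), None
for mask in itertools.product([0, 1], repeat=len(params)):
    bits = [2 if m else 4 for m in mask]
    size = sum(p * b for p, b in zip(params, bits))   # H(c): total weight bits
    loss = sum(s for s, m in zip(sens2, mask) if m)   # additive 2-bit sensitivity
    if size <= budget and loss < best_loss:
        best_loss, best_bits = loss, bits
```

In this toy instance the search drops the large, insensitive second layer to 2-bit and keeps the rest at 4-bit, which is exactly the kind of assignment the sensitivity-guided search is meant to find.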
+
+# 4 EXPERIMENTS
+
+In this section, we report experimental results on the ImageNet classification task and the MS COCO object detection task. The detailed implementation of the experiments can be found in Appendix B.4.4. The rest of this section contains an ablation study on reconstruction granularity, classification and detection results, mixed precision results, and a comparison with quantization-aware training. In Appendix B, we conduct more experiments, including the impact of the first and the last layer, and the impact of calibration dataset size and data source.
+
+# 4.1 ABLATION STUDY
+
+We test four kinds of reconstruction granularity: Net-wise, Stage-wise, Block-wise, and Layer-wise reconstruction. We conduct ImageNet experiments using MobileNetV2 and ResNet-18 with 2-bit weight quantization for all layers except the first and the last. It can be seen from Table 1 that Block-wise optimization outperforms the other methods. This result implies that the generalization error in Net-wise and Stage-wise optimization outweighs their off-diagonal loss. For ResNet-18 the differences are not significant, potentially because ResNet-18 has only 19 layers in its body, so the block size as well as the stage size is small, leading to less distinct results.
+
+Table 1: Ablation study on reconstruction granularity (ImageNet accuracy with 2-bit weight quantization).
+
+| Model | Layer | Block | Stage | Net |
| ResNet-18 | 65.19 | 66.39 | 66.01 | 54.15 |
| MobileNetV2 | 52.13 | 59.67 | 54.23 | 40.76 |
+
+# 4.2 IMAGENET
+
+We conduct experiments on a variety of modern deep learning architectures, including ResNet (He et al., 2016) with normal convolution, MobileNetV2 (Sandler et al., 2018) with depthwise separable convolution, and RegNet (Radosavovic et al., 2020) with group convolution. Last but not least, we also investigate a neural-architecture-searched (NAS) model, MnasNet (Tan et al., 2019). In Table 2, we quantize only the weights into low-bit integers and keep the activations at full precision. We compare against strong baselines including Bias Correction, OMSE, AdaRound, AdaQuant, and Bit-Split. Note that the first and the last layer are kept at 8-bit. While most existing methods perform well under 4-bit quantization, they cannot successfully quantize the model to 2-bit. Our method consistently achieves the lowest accuracy degradation for ResNets (within $5\%$ ) and other compact models. We further quantize activations to 4-bit so that the quantized model can run on integer-arithmetic hardware platforms. We find that 4-bit activation quantization has a large impact on RegNet and MobileNetV2; nonetheless, our method achieves higher performance than the other state-of-the-art methods. Notably, BRECQ is the first to push the 2-bit-weight/4-bit-activation (2W4A) accuracy of PTQ to a usable level, whereas all other existing methods collapse.
+
+Table 2: Accuracy comparison on weight-only quantized post-training models. Activations here are unquantized and kept at full precision. We also conduct a variance study for our experiments. Bold values indicate best results. * indicates our implementation based on open-source code.
+
+| Methods | Bits (W/A) | ResNet-18 | ResNet-50 | MobileNetV2 | RegNet-600MF | RegNet-3.2GF | MnasNet-2.0 |
| Full Prec. | 32/32 | 71.08 | 77.00 | 72.49 | 73.71 | 78.36 | 76.68 |
| Bias Correction* | 4/32 | 50.43 | 64.64 | 62.82 | 67.09 | 71.73 | 72.31 |
| OMSE (Choukroun et al., 2019) | 4/32 | 67.12 | 74.67 | - | - | - | - |
| AdaRound (Nagel et al., 2020) | 4/32 | 68.71 | 75.23 | 69.78 | 71.97* | 77.12* | 74.87* |
| AdaQuant (Hubara et al., 2020) | 4/32 | 68.82 | 75.22 | 44.78 | - | - | - |
| Bit-Split (Wang et al., 2020) | 4/32 | 69.11 | 75.58 | - | - | - | - |
| BRECQ (Ours) | 4/32 | 70.70±0.07 | 76.29±0.04 | 71.66±0.04 | 73.02±0.09 | 78.04±0.04 | 76.00±0.02 |
| Bias Correction* | 3/32 | 12.85 | 7.97 | 10.89 | 28.82 | 17.95 | 40.72 |
| AdaRound (Nagel et al., 2020)* | 3/32 | 68.07 | 73.42 | 64.33 | 67.71 | 72.31 | 69.33 |
| AdaQuant (Hubara et al., 2020)* | 3/32 | 58.12 | 67.61 | 12.56 | - | - | - |
| Bit-Split (Wang et al., 2020) | 3/32 | 66.75 | 73.24 | - | - | - | - |
| BRECQ (Ours) | 3/32 | 69.81±0.05 | 75.61±0.09 | 69.50±0.12 | 71.48±0.07 | 77.22±0.04 | 74.58±0.08 |
| Bias Correction* | 2/32 | 0.13 | 0.12 | 0.14 | 0.18 | 0.11 | 0.11 |
| AdaRound (Nagel et al., 2020)* | 2/32 | 55.96 | 47.95 | 32.54 | 25.66 | 24.70 | 30.60 |
| AdaQuant (Hubara et al., 2020)* | 2/32 | 0.30 | 0.49 | 0.11 | - | - | - |
| BRECQ (Ours) | 2/32 | 66.30±0.12 | 72.40±0.12 | 59.67±0.13 | 65.83±0.13 | 73.88±0.14 | 67.13±0.13 |
+
+Table 3: Accuracy comparison on fully quantized post-training models. Activations here are quantized to 4-bit. Notation follows Table 2.
+
+| Methods | Bits (W/A) | ResNet-18 | ResNet-50 | MobileNetV2 | RegNet-600MF | RegNet-3.2GF | MnasNet-2.0 |
| Full Prec. | 32/32 | 71.08 | 77.00 | 72.49 | 73.71 | 78.36 | 76.68 |
| ACIQ-Mix (Banner et al., 2019) | 4/4 | 67.0 | 73.8 | - | - | - | - |
| ZeroQ (Cai et al., 2020)* | 4/4 | 21.71 | 2.94 | 26.24 | 28.54 | 12.24 | 3.89 |
| LAPQ (Nahshan et al., 2019) | 4/4 | 60.3 | 70.0 | 49.7 | 57.71* | 55.89* | 65.32* |
| AdaQuant (Hubara et al., 2020) | 4/4 | 67.5 | 73.7 | 34.95* | - | - | - |
| Bit-Split (Wang et al., 2020) | 4/4 | 67.56 | 73.71 | - | - | - | - |
| BRECQ (Ours) | 4/4 | 69.60±0.04 | 75.05±0.09 | 66.57±0.67 | 68.33±0.28 | 74.21±0.19 | 73.56±0.24 |
| ZeroQ (Cai et al., 2020)* | 2/4 | 0.08 | 0.08 | 0.10 | 0.10 | 0.05 | 0.12 |
| LAPQ (Nahshan et al., 2019)* | 2/4 | 0.18 | 0.14 | 0.13 | 0.17 | 0.12 | 0.18 |
| AdaQuant (Hubara et al., 2020)* | 2/4 | 0.21 | 0.12 | 0.10 | - | - | - |
| BRECQ (Ours) | 2/4 | 64.80±0.08 | 70.29±0.23 | 53.34±0.15 | 59.31±0.49 | 67.15±0.11 | 63.01±0.35 |
+
+# 4.3 COMPARISON WITH QUANTIZATION-AWARE TRAINING
+
+Table 4: Performance and training cost comparison with quantization-aware training (QAT).
+
+| Models | Methods | Precision | Accuracy | Model Size | Training Data | GPU hours |
| ResNet-18 | ZeroQ (Cai et al., 2020) | 4/4 | 21.20 | 5.81 MB | 0 | 0.008 |
| (FP: 71.08) | BRECQ (Ours) | 4/4 | 69.60 | 5.81 MB | 1024 | 0.4 |
| | BRECQ (w/ distilled data) | 4/4 | 69.32 | 5.81 MB | 0 | 0.4 |
| | PACT (Choi et al., 2018) | 4/4 | 69.2 | 5.81 MB | 1.2 M | 100 |
| | DSQ (Gong et al., 2019) | 4/4 | 69.56 | 5.81 MB | 1.2 M | 100 |
| | LSQ (Esser et al., 2020) | 4/4 | 71.1 | 5.81 MB | 1.2 M | 100 |
| MobileNetV2 | BRECQ (Ours) | 4/4 | 66.57 | 2.26 MB | 1024 | 0.8 |
| (FP: 72.49) | PACT (Choi et al., 2018) | 4/4 | 61.40 | 2.26 MB | 1.2 M | 192 |
| | DSQ (Gong et al., 2019) | 4/4 | 64.80 | 2.26 MB | 1.2 M | 192 |
| | BRECQ (Ours) | Mixed/8 | 70.74 | 1.38 MB | 1024 | 3.2 |
| | HAQ (Wang et al., 2019) | Mixed/8 | 70.90 | 1.38 MB | 1.2 M | 384 |
+
+Table 5: Object detection task (MS COCO) comparison on fully quantized post-training models. Activations here are quantized to 8-bit. We report the bounding box mean Average Precision (mAP) metric.
+
+| Models | Backbone | Full Prec. 32/32 | Bias Corr.* 8/8 | Bias Corr.* 4/8 | AdaRound* 4/8 | AdaRound* 2/8 | ZeroQ 4MP/8 | BRECQ 8/8 | BRECQ 4/8 | BRECQ 2/8 |
| Faster R-CNN (Ren et al., 2015) | ResNet-18 | 34.55 | 34.30 | 0.84 | 33.96 | 23.01 | - | 34.53 | 34.34 | 31.82 |
| | ResNet-50 | 38.55 | 38.25 | 0.25 | 37.58 | 19.63 | - | 38.54 | 38.29 | 34.23 |
| | MobileNetV2 | 33.44 | 33.24 | 18.39 | 32.77 | 16.35 | - | 33.40 | 33.18 | 27.54 |
| RetinaNet (Lin et al., 2017) | ResNet-18 | 33.20 | 33.00 | 0.04 | 32.59 | 19.93 | - | 33.14 | 33.01 | 31.42 |
| | ResNet-50 | 36.82 | 36.68 | 0.07 | 36.00 | 19.97 | 33.7 | 36.73 | 36.65 | 34.75 |
| | MobileNetV2 | 32.63 | 32.60 | 18.47 | 31.89 | 14.10 | - | 32.57 | 32.31 | 27.59 |
+
+
+Figure 2: Mixed Precision results.
+
+In this section, we compare our algorithm (post-training quantization) with several quantization-aware training methods, including PACT (Choi et al., 2018), DSQ (Gong et al., 2019), and LSQ (Esser et al., 2020), as well as the mixed precision technique HAQ (Wang et al., 2019). Table 4 shows that although BRECQ is a PTQ method with limited available data, it achieves accuracy comparable to existing quantization-aware training models. In addition, our method surpasses them on 4-bit MobileNetV2 while using less than one GPU hour of training. Our method also achieves accuracy comparable to HAQ, a training-based mixed precision method. Note that our GPU hours cover three unified-precision trainings (2-, 4-, and 8-bit, respectively), and further mixed precision training only needs to consult the lookup table. In contrast, HAQ performs an end-to-end search from scratch for each hardware performance threshold.
+
+# 4.4 MS COCO
+
+To validate the effectiveness of BRECQ on other tasks, we conduct object detection experiments on the two-stage Faster R-CNN (Ren et al., 2015) and the one-stage RetinaNet (Lin et al., 2017). ResNet-18, ResNet-50, and MobileNetV2 are adopted as backbones for the detection models. The results in Table 5 demonstrate that our method causes almost no performance drop with 4-bit weight and 8-bit activation quantization. In particular, BRECQ decreases mAP by only $0.21\%$ on Faster R-CNN with a 4-bit ResNet-18 backbone. On RetinaNet with a 4-bit ResNet-50 backbone, our method outperforms the mixed-precision-based ZeroQ model by $3\%$ mAP. Even when the weight bitwidth decreases to 2, the model still achieves near-original mAP.
+
+# 4.5 MIXED PRECISION
+
+In this section, we test (1) model-size-guaranteed mixed precision and (2) FPGA-latency-guaranteed mixed precision² to unleash the potential of mixed precision and further push the limit of PTQ. We choose ResNet-18, MobileNetV2, and RegNetX-600MF to validate the efficacy of our algorithm. Note that in this section we keep activations at 8-bit, because we only compare the discrepancy between unified and mixed precision in the weights. We omit 3-bit weight quantization in unified precision because it is usually hardware-unfriendly. Latency settings can be found in Appendix B.4.3. From Fig. 2 we find that (1) mixed precision consistently outperforms unified precision, especially at extremely low bitwidths, e.g., up to a $10\%$ accuracy increase at the same latency as the 2-bit model; and (2) mixed precision produces many bit configurations that can adapt to a wide range of hardware requirements, whereas unified precision yields only two fixed models.
+
+# 5 RELATED WORKS
+
+Quantization Model quantization can be divided into two categories: Quantization-Aware Training (QAT) and Post-Training Quantization (PTQ). Rounding floating-point numbers to fixed-point numbers produces zero gradients almost everywhere, so most QAT methods employ the Straight-Through Estimator (STE) for gradient approximation. Gong et al. (2019) use a differentiable tanh function to gradually approach the step function. Choi et al. (2018) and Esser et al. (2020) introduce parameterized clipping thresholds learned via STE. Apart from uniform quantization, works such as Li et al. (2019) argue that non-uniform quantization outperforms uniform quantization while retaining its efficiency. Despite the promising results of QAT methods, they usually require more than 100 GPU hours of training. PTQ, the focus of this paper, therefore plays an important role. Generally, most deep learning models can be safely quantized to 8-bit without re-training; Data-Free Quantization (Nagel et al., 2019) even performs layer-wise 8-bit PTQ without any data. However, under 4-bit quantization, most parameter-space-based methods fail to obtain good performance. Recently, Nagel et al. (2020) proposed layer-wise calibration and made substantial progress on 4-bit quantization. Our work continues their Taylor expansion analysis and additionally considers the off-diagonal loss. Another perspective on quantization is the precision allocation scheme. Hardware-Aware Quantization (HAQ; Wang et al., 2019) leverages reinforcement learning to search for the optimal bitwidth configuration. Hessian-Aware Weight Quantization (HAWQ) (Dong et al., 2019) utilizes second-order information to decide the bitwidth. Mixed precision also appears in PTQ, such as the Pareto frontier method in ZeroQ (Cai et al., 2020) and the integer programming method in AdaQuant (Hubara et al., 2020).
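As background for the discussion above, the rounding-to-nearest baseline that works well at 8-bit but degrades sharply at low bitwidths can be sketched as follows; the max-based step-size choice is an illustrative assumption, not the method of any particular paper.

```python
import numpy as np

def uniform_quantize(w, num_bits=4):
    """Symmetric uniform quantization with rounding-to-nearest, using a
    naive max-based step size (a simple PTQ baseline, not BRECQ itself)."""
    qmax = 2 ** (num_bits - 1) - 1
    s = np.abs(w).max() / qmax                      # step size from the range
    w_int = np.clip(np.round(w / s), -qmax - 1, qmax)
    return s * w_int                                # fake-quantized weights

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)
for bits in (8, 4, 2):
    # Quantization error grows rapidly as the bitwidth shrinks.
    print(bits, np.mean((w - uniform_quantize(w, bits)) ** 2))
```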
+
+Second-order Analysis and Optimization The use of second-order information in perturbation analysis can be traced back to the 1990s, e.g., Optimal Brain Surgeon (Hassibi & Stork, 1993; Dong et al., 2017). The Hessian matrix is essential for pruning and quantization: as mentioned above, HAWQ uses the largest eigenvalue of the Hessian to determine sensitivity. The Hessian is also central to second-order optimization such as Newton's method, as it encodes curvature information. However, computing the full Hessian is prohibitive for today's deep learning architectures, so approximations are made to simplify the computation and reduce storage, e.g., Gauss-Newton optimization with Kronecker-factored recursive approximation (Botev et al., 2017). Hessian-free optimization (Martens, 2010) avoids the explicit computation of the Hessian by solving the linear system $\mathbf{H}\mathbf{v} = \mathbf{g}$ using only Hessian-vector products. Second-order optimization with the FIM is called Natural Gradient Descent (Amari, 1998). K-FAC (Martens & Grosse, 2015) utilizes a layer-diagonal FIM and an approximation of the expected Kronecker product to compute the curvature information.
+
+# 6 CONCLUSION
+
+In this paper, we propose BRECQ, a post-training quantization framework based on second-order error analysis. We show that quantization reconstruction at block granularity strikes a good balance between cross-layer dependency and first-order approximation error, especially in 2-bit weight quantization, where no prior work succeeds. BRECQ is compatible with mixed precision and reduces the search cost. To the best of our knowledge, BRECQ reaches the highest performance in post-training quantization and is the first PTQ method on par with quantization-aware training at 4-bit.
+
+# ACKNOWLEDGMENT
+
+We thank Markus Nagel and the anonymous reviewers for their kind help with this work. This project is primarily supported by NSFC 61876032.
+
+# REFERENCES
+
+Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural computation, 10(2):251-276, 1998.
+Haoli Bai, Jiaxiang Wu, Irwin King, and Michael Lyu. Few shot network compression via cross distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 3203-3210, 2020.
+Ron Banner, Yury Nahshan, and Daniel Soudry. Post training 4-bit quantization of convolutional networks for rapid-deployment. In Advances in Neural Information Processing Systems, 2019.
+Aleksandar Botev, Hippolyt Ritter, and David Barber. Practical gauss-newton optimisation for deep learning. arXiv preprint arXiv:1706.03662, 2017.
+Yaohui Cai, Zhewei Yao, Zhen Dong, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. Zeroq: A novel zero shot quantization framework. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13169-13178, 2020.
+Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018.
+Yoni Choukroun, Eli Kravchik, Fan Yang, and Pavel Kisilev. Low-bit quantization of neural networks for efficient inference. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 3009-3018. IEEE, 2019.
+Xin Dong, Shangyu Chen, and Sinno Pan. Learning to prune deep neural networks via layer-wise optimal brain surgeon. In Advances in Neural Information Processing Systems, 2017.
+Zhen Dong, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. Hawq: Hessian aware quantization of neural networks with mixed-precision. In Proceedings of the IEEE International Conference on Computer Vision, pp. 293–302, 2019.
+Steven K. Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, and Dharmendra S. Modha. Learned step size quantization. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rkg066VKDS.
+Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. arXiv preprint arXiv:1908.05033, 2019.
+Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In European Conference on Computer Vision, pp. 544-560. Springer, 2020.
+Qingchang Han, Yongmin Hu, Fengwei Yu, Hailong Yang, Bing Liu, Peng Hu, Ruihao Gong, Yanfei Wang, Rui Wang, Zhongzhi Luan, et al. Extremely low-bit convolution optimization for quantized neural network on modern computer architectures. In 49th International Conference on Parallel Processing-ICPP, pp. 1-12, 2020.
+Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
+Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in neural information processing systems, pp. 164-171, 1993.
+
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
+Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. The Journal of Machine Learning Research, 18(1):6869-6898, 2017.
+Itay Hubara, Yury Nahshan, Yair Hanani, Ron Banner, and Daniel Soudry. Improving post training neural quantization: Layer-wise calibration and integer programming. arXiv preprint arXiv:2006.10518, 2020.
+Daniel Jakubovitz, Raja Giryes, and Miguel RD Rodrigues. Generalization error in deep learning. In Compressed Sensing and Its Applications, pp. 153-193. Springer, 2019.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Yann A LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In Neural networks: Tricks of the trade, pp. 9-48. Springer, 2012.
+Yuhang Li, Xin Dong, and Wei Wang. Additive powers-of-two quantization: An efficient nonuniform discretization for neural networks. In International Conference on Learning Representations, 2019.
+Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980-2988, 2017.
+James Martens. Deep learning via hessian-free optimization. In International Conference on Machine Learning, 2010.
+James Martens and Roger Grosse. Optimizing neural networks with kronecker-factored approximate curvature. In International conference on machine learning, pp. 2408-2417, 2015.
+Markus Nagel, Mart van Baalen, Tijmen Blankevoort, and Max Welling. Data-free quantization through weight equalization and bias correction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1325-1334, 2019.
+Markus Nagel, Rana Ali Amjad, Mart van Baalen, Christos Louizos, and Tijmen Blankevoort. Up or down? adaptive rounding for post-training quantization. arXiv preprint arXiv:2004.10568, 2020.
+Yury Nahshan, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, Alex M Bronstein, and Avi Mendelson. Loss aware post-training quantization. arXiv preprint arXiv:1911.07190, 2019.
+Antonio Polino, Razvan Pascanu, and Dan Alistarh. Model compression via distillation and quantization. arXiv preprint arXiv:1802.05668, 2018.
+Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollár. Designing network design spaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10428-10436, 2020.
+Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91-99, 2015.
+Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4510-4520, 2018.
+
+Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Vikas Chandra, and Hadi Esmaeilzadeh. Bit fusion: Bit-level dynamically composable architecture for accelerating deep neural network. In 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA), pp. 764-775. IEEE, 2018.
+Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2820-2828, 2019.
+Lucas Theis, Iryna Korshunova, Alykhan Tejani, and Ferenc Huszár. Faster gaze prediction with dense networks and fisher pruning. arXiv preprint arXiv:1801.05787, 2018.
+Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, and Song Han. Haq: Hardware-aware automated quantization with mixed precision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8612-8620, 2019.
+Peisong Wang, Qiang Chen, Xiangyu He, and Jian Cheng. Towards accurate post-training network quantization via bit-split and stitching. In Proc. 37th Int. Conf. Mach. Learn. (ICML), 2020.
+Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
+
+# A MAIN PROOFS
+
+# A.1 PROOF OF THEOREM 3.1
+
+Proof. We prove the theorem using the quadratic form. Denote the weight vector by $\theta \in \mathbb{R}^d$ and the network output vector by $\mathbf{z}^{(n)} \in \mathbb{R}^m$ . The quadratic form $\Delta \theta^{\top}\mathbf{H}^{(\theta)}\Delta \theta$ can be expanded as:
+
+$$
+\Delta \theta^{\top} \mathbf{H}^{(\theta)} \Delta \theta = \sum_{i = 1}^{d} \Delta \theta_{i}^{2} \frac{\partial^{2} L}{\partial \theta_{i}^{2}} + 2 \sum_{i < j}^{d} \Delta \theta_{i} \Delta \theta_{j} \frac{\partial^{2} L}{\partial \theta_{i} \partial \theta_{j}} = \sum_{i = 1}^{d} \sum_{j = 1}^{d} \Delta \theta_{i} \Delta \theta_{j} \frac{\partial^{2} L}{\partial \theta_{i} \partial \theta_{j}}, \tag{12}
+$$
+
+where $L$ is the cross-entropy loss. Based on Eq. (5), we have
+
+$$
+\frac{\partial^{2} L}{\partial \theta_{i} \partial \theta_{j}} = \sum_{k = 1}^{m} \sum_{l = 1}^{m} \frac{\partial \mathbf{z}_{k}^{(n)}}{\partial \theta_{i}} \frac{\partial^{2} L}{\partial \mathbf{z}_{k}^{(n)} \partial \mathbf{z}_{l}^{(n)}} \frac{\partial \mathbf{z}_{l}^{(n)}}{\partial \theta_{j}} \tag{13}
+$$
+
+Substituting the above equation into Eq. (12), we have
+
+$$
+\begin{array}{rl} \Delta \theta^{\top} \mathbf{H}^{(\theta)} \Delta \theta & = \sum_{i = 1}^{d} \sum_{j = 1}^{d} \Delta \theta_{i} \Delta \theta_{j} \left(\sum_{k = 1}^{m} \sum_{l = 1}^{m} \frac{\partial \mathbf{z}_{k}^{(n)}}{\partial \theta_{i}} \frac{\partial^{2} L}{\partial \mathbf{z}_{k}^{(n)} \partial \mathbf{z}_{l}^{(n)}} \frac{\partial \mathbf{z}_{l}^{(n)}}{\partial \theta_{j}}\right) \quad (14a) \\ & = \sum_{i = 1}^{d} \sum_{j = 1}^{d} \sum_{k = 1}^{m} \sum_{l = 1}^{m} \Delta \theta_{i} \Delta \theta_{j} \frac{\partial \mathbf{z}_{k}^{(n)}}{\partial \theta_{i}} \frac{\partial^{2} L}{\partial \mathbf{z}_{k}^{(n)} \partial \mathbf{z}_{l}^{(n)}} \frac{\partial \mathbf{z}_{l}^{(n)}}{\partial \theta_{j}} \quad (14b) \\ & = \sum_{k = 1}^{m} \sum_{l = 1}^{m} \frac{\partial^{2} L}{\partial \mathbf{z}_{k}^{(n)} \partial \mathbf{z}_{l}^{(n)}} \left(\sum_{i = 1}^{d} \Delta \theta_{i} \frac{\partial \mathbf{z}_{k}^{(n)}}{\partial \theta_{i}}\right) \left(\sum_{j = 1}^{d} \Delta \theta_{j} \frac{\partial \mathbf{z}_{l}^{(n)}}{\partial \theta_{j}}\right) \quad (14c) \\ & = \left(\Delta \theta \mathbf{J}\left[\frac{\mathbf{z}^{(n)}}{\theta}\right]\right)^{\top} \mathbf{H}^{\left(\mathbf{z}^{(n)}\right)} \left(\Delta \theta \mathbf{J}\left[\frac{\mathbf{z}^{(n)}}{\theta}\right]\right), \quad (14d) \end{array}
+$$
+
+where $\mathbf{J}\left[\frac{x}{y}\right]$ denotes the Jacobian matrix of $x$ with respect to $y$ . Finally, we use the first-order Taylor expansion, as in Eq. (8), to approximate the change in the network output, i.e.,
+
+$$
+\Delta \mathbf {z} ^ {(n)} \approx \Delta \theta \mathbf {J} \left[ \frac {\mathbf {z} ^ {(n)}}{\theta} \right] \tag {15}
+$$
+
+Therefore, the final objective is transformed into $\Delta \mathbf{z}^{(n),\top}\mathbf{H}^{(\mathbf{z}^{(n)})}\Delta \mathbf{z}^{(n)}$ .
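The transformation above can be sanity-checked numerically in the special case of an exactly linear network, where the first-order expansion in Eq. (15) is exact; the dimensions and matrices below are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 6, 4                       # toy weight and output dimensions
A = rng.standard_normal((m, d))   # linear "network": z = A @ theta, so J = A
B = rng.standard_normal((m, m))
H_z = B @ B.T                     # a PSD output-space Hessian
H_theta = A.T @ H_z @ A           # chain rule: weight-space Hessian (Eq. 13)

dtheta = rng.standard_normal(d)   # a weight perturbation
lhs = dtheta @ H_theta @ dtheta             # Eq. (12)
rhs = (A @ dtheta) @ H_z @ (A @ dtheta)     # Eq. (14d) with dz = J dtheta
print(lhs, rhs)                             # the two quadratic forms agree
```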
+
+# B EXPERIMENTS
+
+# B.1 EFFECT OF THE FIRST AND THE LAST LAYER
+
+Many papers claim that the first and the last layer have a huge impact on the final accuracy. In this section, we investigate this phenomenon, as well as the impact of the first and the last layer on hardware performance. We test ResNet-18, MobileNetV2, and RegNet-600MF. Our observations are as follows:
+
+1. In terms of accuracy, 4-bit quantization is essentially benign: quantizing either of these two layers drops accuracy by at most $0.2\%$ . In 2-bit quantization, however, the last fully connected layer is far more important than the first layer. We also observe that the first layer in MobileNetV2 and RegNet ( $3\times 3$ kernels, 32 channels) is slightly more sensitive than that in ResNet-18 ( $7\times 7$ kernels, 64 channels).
+2. In terms of model size, the first layer has only a minor impact because the input images have just 3 channels, whereas the last layer contains many weight parameters and greatly affects the memory footprint. We should point out that a small weight size in the first layer does not imply a low memory burden, because the input image itself occupies considerable memory.
+3. In terms of latency, the situation depends on the architecture. For example, in ResNet-18 the first layer has a huge impact on latency, while in MobileNetV2 and RegNet-600MF the last layer matters more than the first. This is because latency is affected by multiple factors, such as the input featuremap size, the FLOPs, and the weight memory size. The arithmetic intensity (OPs/byte) greatly affects latency: operations with high arithmetic intensity, i.e., shallow layers in the network, show a smaller latency gap between different bitwidths.
+
+In conclusion, we find that keeping the first and the last layer at 8-bit is unnecessary. In particular, for ResNet-18, setting all layers to 4-bit results in $53.3~\mathrm{ms}$ latency, faster than the $59.8~\mathrm{ms}$ of 2-bit quantization (with the first and last layer at 8-bit), while the accuracy is $4\%$ higher. This phenomenon indicates the potential power of mixed precision.
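The arithmetic-intensity argument in observation 3 can be illustrated with a back-of-the-envelope count; stride, padding, and caching effects are ignored, so the numbers are only indicative.

```python
def conv_stats(h, w, cin, cout, k, w_bits, a_bits):
    """Rough MAC count and memory traffic for one conv layer (stride 1,
    'same' output size assumed); returns (MACs, arithmetic intensity)."""
    macs = h * w * cin * cout * k * k
    bytes_moved = (k * k * cin * cout * w_bits   # weights
                   + h * w * cin * a_bits        # input featuremap
                   + h * w * cout * a_bits) / 8  # output featuremap
    return macs, macs / bytes_moved              # OPs per byte

# A shallow ResNet-18-style first conv vs. its final FC layer (treated as a
# 1x1 conv). Shapes follow the standard architecture; conclusions are
# qualitative only.
print(conv_stats(112, 112, 3, 64, 7, 8, 8))   # shallow conv: high OPs/byte
print(conv_stats(1, 1, 512, 1000, 1, 8, 8))   # FC layer: weight-bound, ~1 OP/byte
```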
+
+# B.2 EFFECT OF DATA
+
+We evaluate the influence of the calibration dataset size and of the data source on ResNet-18. We test different numbers of input data points and find that the improvement in 4-bit quantization is marginal, yet in 2-bit quantization the accuracy increases by $5\%$ as the number of data points grows. We also test the distilled data introduced in ZeroQ (Cai et al., 2020). Distilled data is learned from the pretrained model's BN statistics, i.e., $\mathbf{x}_{\mathrm{distilled}} = \arg \min_{\mathbf{x}}\sum_{i = 1}^{n}\left((\mu_{i} - \hat{\mu}_{i})^{2} + (\sigma_{i} - \hat{\sigma}_{i})^{2}\right)$ , where $\mu_{i}$ and $\sigma_{i}$ are the original mean and standard deviation stored in the BN statistics of the $i$ -th layer, and $\hat{\mu}_{i}$ , $\hat{\sigma}_{i}$ are the corresponding statistics induced by $\mathbf{x}$ . We find that distilled data performs well in 4-bit quantization but still lags behind the original ImageNet data by a large margin in 2-bit quantization. We also find that the final accuracy does not benefit much from increasing the number of distilled data points, possibly because the distilled samples are optimized with the same objective and thus have low diversity.
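As an illustration of the BN-statistics matching idea above, the following toy sketch synthesizes a batch for a single fixed linear layer by plain gradient descent; the layer, target statistics, and hyperparameters are all hypothetical, and the gradient is derived by hand for this toy case only.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D_IN, D_OUT = 16, 8, 5                                 # toy batch/layer sizes
W = rng.standard_normal((D_IN, D_OUT)) / np.sqrt(D_IN)    # fixed "pretrained" layer
mu_t, var_t = np.zeros(D_OUT), np.ones(D_OUT)             # stored BN statistics

def loss_and_grad(x):
    """Statistics-matching loss (means and variances) and its gradient
    w.r.t. the synthetic input batch x."""
    z = x @ W
    mu, var = z.mean(0), z.var(0)
    loss = np.sum((mu - mu_t) ** 2 + (var - var_t) ** 2)
    # d mu/dz = 1/N; d var/dz_n = (2/N)(z_n - mu); chain rule through z = xW.
    g_z = (2.0 / N) * ((mu - mu_t) + 2.0 * (var - var_t) * (z - mu))
    return loss, g_z @ W.T

x = 0.1 * rng.standard_normal((N, D_IN))    # start from near-zero noise
first, _ = loss_and_grad(x)
for _ in range(500):                         # plain gradient descent
    loss, grad = loss_and_grad(x)
    x -= 0.05 * grad
print(first, loss)                           # loss shrinks as x mimics the stats
```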
+
+Table 6: Impact of the first and the last layer. A ✓ means the corresponding layer is kept at 8-bit; an ✗ means it is quantized to the network's low bitwidth.
+
+| Models | First 8-bit | Last 8-bit | Accuracy (4/8) | Model Size (4/8) | Latency (4/8) | Accuracy (2/8) | Model Size (2/8) | Latency (2/8) |
| ResNet-18 (FP: 71.08) | ✓ | ✓ | 70.76 | 5.81 MB | 70.72 ms | 66.30 | 3.15 MB | 59.84 ms |
| | ✗ | ✓ | 70.66 | 5.81 MB | 53.76 ms | 65.95 | 3.15 MB | 31.20 ms |
| | ✓ | ✗ | 70.64 | 5.57 MB | 70.08 ms | 64.87 | 2.79 MB | 58.72 ms |
| | ✗ | ✗ | 70.58 | 5.56 MB | 53.28 ms | 64.53 | 2.78 MB | 30.88 ms |
| MobileNetV2 (FP: 72.49) | ✓ | ✓ | 71.80 | 2.26 MB | 32.80 ms | 59.59 | 1.74 MB | 30.40 ms |
| | ✗ | ✓ | 71.69 | 2.26 MB | 32.64 ms | 59.13 | 1.74 MB | 30.24 ms |
| | ✓ | ✗ | 71.42 | 1.65 MB | 31.52 ms | 56.29 | 0.83 MB | 28.48 ms |
| | ✗ | ✗ | 71.42 | 1.65 MB | 31.36 ms | 55.58 | 0.82 MB | 28.32 ms |
| RegNet-600MF (FP: 73.71) | ✓ | ✓ | 72.98 | 3.19 MB | 31.84 ms | 65.66 | 1.84 MB | 23.20 ms |
| | ✗ | ✓ | 72.89 | 3.19 MB | 31.68 ms | 65.83 | 1.85 MB | 22.88 ms |
| | ✓ | ✗ | 72.69 | 2.94 MB | 31.20 ms | 62.93 | 1.47 MB | 22.40 ms |
| | ✗ | ✗ | 72.73 | 2.94 MB | 31.04 ms | 63.08 | 1.47 MB | 22.08 ms |
+
+
+Figure 3: Effect of #data points and data source.
+
+
+
+# B.3 MOBILE CPU LATENCY GUARANTEED MIXED PRECISION
+
+
+Figure 4: Mixed precision results on ResNet-18 and 50.
+
+
+
+In this section, we test mobile-CPU-latency-guaranteed mixed precision. The latency lookup table is measured using the technique in Gong et al. (2019). We only validate it on ResNet-18 and ResNet-50 because the current low-bit General Matrix Multiply (GEMM) implementation only supports normal convolution. The results concur with Fig. 2: below 4-bit, mixed precision achieves better task performance than the unified precision models. For ResNet-50, the improvement is smaller than for ResNet-18 and the other mixed precision models. We believe this is because the layer sensitivities in ResNet-50 are not distinct enough, so the improvement brought by mixed precision is marginal.
+
+# B.4 IMPLEMENTATION
+
+# B.4.1 LEARNING STRATEGIES
+
+In this work, we mainly focus on developing the optimization objective rather than optimization strategies. We observe that adaptive rounding performs well in post-training quantization. A brief introduction to AdaRound is given below; the detailed algorithm can be found in Nagel et al. (2020).
+
+The traditional quantization function is a rounding-to-nearest operation: $\hat{\mathbf{w}} = s \times \mathrm{clip}(\lfloor \mathbf{w} / s \rceil, n, p)$ , where $s$ is the step size and $n$ , $p$ are the integer clipping bounds. AdaRound instead optimizes the rounding policy in post-training quantization. Specifically, all weights are initially rounded by the floor operation, and a learnable variable $\mathbf{v}$ determines whether the final rounding is flooring or ceiling. A sigmoid-like function $\sigma(\cdot)$ keeps the learnable variable between 0 and 1, and a regularization term ensures that $\sigma(\mathbf{v})$ converges to either 0 or 1. The formulation is given by
+
+$$
+\hat {\mathbf {w}} = s \times \operatorname {c l i p} \left(\left\lfloor \frac {\mathbf {w}}{s} \right\rfloor + \sigma (\mathbf {v}), n, p\right) \tag {16}
+$$
+
+The minimization problem together with the regularization is given by
+
+$$
+\underset {\mathbf {v}} {\arg \min } \mathbb {E} \left[ \Delta \mathbf {z} ^ {(\ell), \top} \operatorname {d i a g} \left(\left(\frac {\partial L}{\partial \mathbf {z} _ {1} ^ {(\ell)}}\right) ^ {2}, \dots , \left(\frac {\partial L}{\partial \mathbf {z} _ {a} ^ {(\ell)}}\right) ^ {2}\right) \Delta \mathbf {z} ^ {(\ell)} \right] + \lambda \sum_ {i} \left(1 - \left| 2 \sigma \left(\mathbf {v} _ {i}\right) - 1 \right| ^ {\beta}\right), \tag {17}
+$$
+
+where progressively decreasing $\beta$ during calibration ensures that $\sigma(\mathbf{v})$ converges to binary values. Activations cannot be quantized with adaptive rounding because they vary with the input data; thus, we can only adjust their quantization step size (Esser et al., 2020). Denoting the quadratic loss in the above equation by $L_{q}$ , the gradient with respect to the step size is given by
+
+$$
+\frac{\partial L_{q}}{\partial s} = \left\{ \begin{array}{ll} \frac{\partial L_{q}}{\partial \hat{\mathbf{x}}}\, p & \text{if } \mathbf{x} \geq \alpha \\ \frac{\partial L_{q}}{\partial \hat{\mathbf{x}}} \left(\frac{\hat{\mathbf{x}}}{s} - \frac{\mathbf{x}}{s}\right) & \text{if } 0 \leq \mathbf{x} < \alpha \\ 0 & \text{if } \mathbf{x} < 0 \end{array} \right. \tag{18}
+$$
+
+where all step sizes in the block are optimized jointly.
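
A minimal NumPy sketch may make Eqs. (16) and (17) concrete. The plain logistic function stands in for the sigmoid-like $\sigma(\cdot)$ (AdaRound itself uses a rectified sigmoid), and all names and values below are illustrative, not the paper's implementation:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def soft_quantize(w, s, v, n, p):
    """Eq. (16): floor the scaled weights, then add sigma(v) in [0, 1],
    which decides between flooring (0) and ceiling (1)."""
    return s * np.clip(np.floor(w / s) + sigmoid(v), n, p)

def rounding_regularizer(v, beta):
    """Regularizer of Eq. (17): each term is 0 when sigma(v_i) is exactly
    0 or 1 and largest (1) when sigma(v_i) = 0.5, so annealing beta pushes
    sigma(v) toward binary values."""
    h = sigmoid(v)
    return np.sum(1.0 - np.abs(2.0 * h - 1.0) ** beta)
```

For example, with $s = 1$ a weight of 1.6 quantizes to 2 when $\sigma(v) \to 1$ and to 1 when $\sigma(v) \to 0$.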
+
+# B.4.2 GENETIC ALGORITHM FOR MIXED PRECISION
+
+Algorithm 2: Genetic algorithm
+Input: Randomly initialized population $P_0$ with population size $S$; iterations $T$; mutation probability $p$; hardware performance threshold $\delta$; hardware measurement function $H(\cdot)$; $TopK = \emptyset$;
+for $t = 1, 2, \ldots, T$ do
+  Evaluate the fitness value (Eq. (11)) of each individual;
+  Update and sort $TopK$ based on the fitness function;
+  repeat
+    New bit-width configuration by crossover: $\mathbf{c}_{cross} = \text{Crossover}(TopK)$;
+    $P_{crossover} := P_{crossover} + \mathbf{c}_{cross}$ if $H(\mathbf{c}_{cross}) < \delta$;
+  until size of $P_{crossover}$ equals $S/2$;
+  repeat
+    New bit-width configuration by mutation: $\mathbf{c}_{mutate} = \text{Mutate}(TopK, \text{probability} = p)$;
+    $P_{mutate} := P_{mutate} + \mathbf{c}_{mutate}$ if $H(\mathbf{c}_{mutate}) < \delta$;
+  until size of $P_{mutate}$ equals $S/2$;
+  $P_t = P_{crossover} \cup P_{mutate}$;
+  $P_{mutate} = \emptyset$, $P_{crossover} = \emptyset$;
+
+Get the best-fitted individual and then do the overall block reconstruction (cf. Algorithm 1);
+
+return the mixed-precision model
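
As a rough illustration, the loop above can be sketched in Python. The fitness and hardware-measurement functions are passed in as black boxes (in the paper these come from Eq. (11) and the lookup table); crossover, mutation, and the feasibility check $H(\mathbf{c}) < \delta$ follow the pseudocode, while details such as the top-$K$ size are illustrative choices:

```python
import random

BITS = [2, 4, 8]  # candidate bit-widths per layer

def crossover(top_k):
    # splice two parent configurations at a random cut point
    a, b = random.sample(top_k, 2)
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(top_k, prob):
    # re-draw each gene of a random parent with probability `prob`
    child = list(random.choice(top_k))
    for i in range(len(child)):
        if random.random() < prob:
            child[i] = random.choice(BITS)
    return child

def genetic_search(fitness, hw_cost, n_layers, size=50, iters=100,
                   prob=0.1, delta=1.0, k=10):
    pop = [[random.choice(BITS) for _ in range(n_layers)] for _ in range(size)]
    top_k = []
    for _ in range(iters):
        top_k = sorted(pop + top_k, key=fitness)[:k]  # lower fitness = better

        def fill(make_child):
            half = []
            while len(half) < size // 2:
                c = make_child()
                if hw_cost(c) < delta:  # keep only feasible configurations
                    half.append(c)
            return half

        pop = fill(lambda: crossover(top_k)) + fill(lambda: mutate(top_k, prob))
    return min(top_k, key=fitness)
```

Keeping the top-$K$ individuals across iterations makes the best fitness monotonically non-increasing, which is why the search converges quickly in practice.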
+
+# B.4.3 LATENCY ACQUISITION
+
+We test the latency of quantized neural networks on a self-developed simulator of a precision-variable NN accelerator. The basic architecture of this accelerator is inspired by typical systolic-array matrix multiplication, and it supports per-channel quantization parameters. The precision of each layer of a NN is highly configurable in this accelerator, supporting 9 precision combinations (activation: 2-, 4-, 8-bit $\times$ weight: 2-, 4-, 8-bit); see Fig. 5a. With the support of a scalable function unit (Sharma et al., 2018), the peak performance of the accelerator improves linearly as the precision decreases. For example, the peak performance is 256 GMAC/s at 8-bit $\times$ 8-bit precision, and it scales to 512 GMAC/s at 8-bit $\times$ 4-bit precision and 4 TMAC/s at 2-bit $\times$ 2-bit precision. Although this accelerator provides considerable computation resources, especially at low precision, the parallelism of specific layers (such as depthwise convolution) and the bandwidth of the on-chip buffer are limited. Consequently, actual performance may not scale exactly with the peak performance, and the final performance differs according to the size and type of each layer. The simulator performs cycle-accurate simulation and evaluation of a given NN executed on the accelerator, so we can obtain an equivalent evaluation by using this simulator. The simulator is available in the provided source code.
+
+Figure 5: FPGA-based and mobile CPU-based latency acquisition. (a) Accelerator design. (b) Low-bit GEMM implementation on ARM CPU.
+
+For the acquisition of mobile ARM CPU latency, we adopt the redesigned low-bit GEMM implementation of Han et al. (2020); Fig. 5b gives a brief overview. Since no ARM instruction supports bit-widths below 8, we cannot achieve higher computation efficiency for extremely low bit-widths such as 2-bit and 4-bit, but we can obtain better memory-access efficiency. The primary speedup comes from the reduction of data movement: we can perform more additions in the same 8-bit register before we have to move the result to a 16-bit register to avoid overflow, and the lower the bit-width, the less movement is needed. Together with optimized data packing and padding, we can run mixed-precision quantization on a Raspberry Pi 3B, which has a 1.2 GHz 64-bit quad-core ARM Cortex-A53. Note that this implementation is not optimized for depthwise-separable or group convolutions; therefore we only verify the latency on ResNets.
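
The register-pressure argument can be quantified with a small back-of-the-envelope helper (a sketch for intuition, not part of the actual GEMM kernel): with unsigned $b$-bit operands, each product is at most $(2^b - 1)^2$, which bounds how many products an 8-bit accumulator can absorb before it must be widened to 16 bits.

```python
def max_safe_accumulations(bits, acc_bits=8):
    """Worst-case number of unsigned `bits` x `bits` products that can be
    summed in an `acc_bits`-wide register before it can overflow."""
    max_product = (2 ** bits - 1) ** 2
    return (2 ** acc_bits - 1) // max_product

# 2-bit operands allow 28 additions in an 8-bit register before widening;
# 4-bit operands allow only 1, and an 8-bit product does not fit at all.
```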
+
+# B.4.4 IMPLEMENTATION DETAILS
+
+The ImageNet dataset consists of 1.2M training images and 50K test images. We follow the standard pre-processing (He et al., 2016) to obtain 1,024 input images of size $224 \times 224$ as the calibration dataset. We fold the batch normalization layers into convolutions and freeze the BN statistics before post-training quantization. We use the Adam optimizer (Kingma & Ba, 2014) to learn the weight rounding and activation range for reconstructing the block output. Note that some layers are not part of any block, such as the first convolutional layer, the last fully connected layer, and the last convolutional layer in MobileNetV2; these layers use naive layer-wise reconstruction. The batch size is set to 32 and each block is optimized for $2 \times 10^4$ iterations. The learning rate is set to $10^{-3}$ throughout the learning process. Other hyper-parameters, such as the temperature $\beta$, are kept the same as in Nagel et al. (2020). For the activation step size, we also use the Adam optimizer with a learning rate of $4 \times 10^{-5}$. Note that we do not implement the gradient scaling introduced in the original paper (Esser et al., 2020). After reconstruction, we store the sensitivity measured on the calibration dataset; for 2-bit quantization, we store the intra-block sensitivity. The sensitivities, as well as the hardware performance of each layer, are stored in a lookup table, which the genetic algorithm consults when calculating fitness values and determining hardware performance. For the genetic algorithm, we set the population size to 50 and evolve for 100 iterations to obtain the best individual. The first population is initialized from a Gaussian distribution, and we round the samples to integers in {0, 1, 2}, corresponding to bit-widths {2, 4, 8}. The mutation probability is set to 0.1. The genetic algorithm usually completes the evolution in only about 3 seconds.
+
+For object detection tasks, we use 256 training images from the MS COCO dataset for calibration. The image resolution is set to 800 (max size 1333) for ResNet-18 and ResNet-50, while the resolution for MobileNetV2 is set to 600 (max size 1000). Note that we only apply block reconstruction in the backbone, because other parts of the architecture, such as the Feature Pyramid Network, do not have a block structure; naive layer-wise reconstruction is applied to the rest of the network. The learning hyper-parameters are kept the same as in the ImageNet experiments.
\ No newline at end of file
diff --git a/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/images.zip b/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4a742b34b7e57b71d9e62c5a795e7cd29e1d6818
--- /dev/null
+++ b/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:072b65985fcc329114ecdb62e9bd6305e06d7a207016a9c59e45a09dbec1065b
+size 791632
diff --git a/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/layout.json b/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..4f22d16b2f9e5a0a73acdd685c34cad9224b0ec2
--- /dev/null
+++ b/brecqpushingthelimitofposttrainingquantizationbyblockreconstruction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e9d444e87e71e7093f6444986a5b56469e3fe64c79310cd1dfb6b9e69798b3d
+size 502749
diff --git a/breedsbenchmarksforsubpopulationshift/894090f6-e36b-4067-ac86-bf63136ca0da_content_list.json b/breedsbenchmarksforsubpopulationshift/894090f6-e36b-4067-ac86-bf63136ca0da_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0198cb2c67259ff0e5e3234df2a3aed9c7fecc32
--- /dev/null
+++ b/breedsbenchmarksforsubpopulationshift/894090f6-e36b-4067-ac86-bf63136ca0da_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78360957c8f02b0171300cfb02e57c4f3541ec4eb67f06c4c92e5e36c633c910
+size 151455
diff --git a/breedsbenchmarksforsubpopulationshift/894090f6-e36b-4067-ac86-bf63136ca0da_model.json b/breedsbenchmarksforsubpopulationshift/894090f6-e36b-4067-ac86-bf63136ca0da_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..9490b0fca31b7cd1a13a31bc2ef0abd660681e97
--- /dev/null
+++ b/breedsbenchmarksforsubpopulationshift/894090f6-e36b-4067-ac86-bf63136ca0da_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:57526fb1dfd2afbb48bd8ad95a382e6abe16d0bc6d68cbf563a9871ac80b937c
+size 185046
diff --git a/breedsbenchmarksforsubpopulationshift/894090f6-e36b-4067-ac86-bf63136ca0da_origin.pdf b/breedsbenchmarksforsubpopulationshift/894090f6-e36b-4067-ac86-bf63136ca0da_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d652bceb5b8084f6b2d9bd179795032dead939e7
--- /dev/null
+++ b/breedsbenchmarksforsubpopulationshift/894090f6-e36b-4067-ac86-bf63136ca0da_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec70d12a0e6995916fabf1893b41b3e82707f295670ece6809eca20087394be1
+size 3739457
diff --git a/breedsbenchmarksforsubpopulationshift/full.md b/breedsbenchmarksforsubpopulationshift/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9478ee64429072f42856a4386846c51f70030889
--- /dev/null
+++ b/breedsbenchmarksforsubpopulationshift/full.md
@@ -0,0 +1,540 @@
+# BREEDS: BENCHMARKS FOR SUBPOPULATION SHIFT
+
+Shibani Santurkar*
+MIT
+
+shibani@mit.edu
+
+Dimitris Tsipras*
+MIT
+
+tsipras@mit.edu
+
+Aleksander Madry
+MIT
+
+madry@mit.edu
+
+# ABSTRACT
+
+We develop a methodology for assessing the robustness of models to subpopulation shift—specifically, their ability to generalize to novel data subpopulations that were not observed during training. Our approach leverages the class structure underlying existing datasets to control the data subpopulations that comprise the training and test distributions. This enables us to synthesize realistic distribution shifts whose sources can be precisely controlled and characterized, within existing large-scale datasets. Applying this methodology to the ImageNet dataset, we create a suite of subpopulation shift benchmarks of varying granularity. We then validate that the corresponding shifts are tractable by obtaining human baselines. Finally, we utilize these benchmarks to measure the sensitivity of standard model architectures as well as the effectiveness of existing train-time robustness interventions.1
+
+# 1 INTRODUCTION
+
+Robustness to distribution shift has been the focus of a long line of work in machine learning (Schlimmer & Granger, 1986; Widmer & Kubat, 1993; Kelly et al., 1999; Shimodaira, 2000; Sugiyama et al., 2007; Quionero-Candela et al., 2009; Moreno-Torres et al., 2012; Sugiyama & Kawanabe, 2012). At a high-level, the goal is to ensure that models perform well not only on unseen samples from the datasets they are trained on, but also on the diverse set of inputs they are likely to encounter in the real world. However, building benchmarks for evaluating such robustness is challenging—it requires modeling realistic data variations in a way that is well-defined, controllable, and easy to simulate.
+
+Prior work in this context has focused on building benchmarks that capture distribution shifts caused by natural or adversarial input corruptions (Szegedy et al., 2014; Fawzi & Frossard, 2015; Fawzi et al., 2016; Engstrom et al., 2019b; Ford et al., 2019; Hendrycks & Dietterich, 2019; Kang et al., 2019), differences in data sources (Saenko et al., 2010; Torralba & Efros, 2011; Khosla et al., 2012; Tommasi & Tuytelaars, 2014; Recht et al., 2019), and changes in the frequencies of data subpopulations (Oren et al., 2019; Sagawa et al., 2020). While each of these approaches captures a different source of real-world distribution shift, we cannot expect any single benchmark to be comprehensive. Thus, to obtain a holistic understanding of model robustness, we need to keep expanding our testbed to encompass more natural modes of variation. In this work, we take another step in that direction by studying the following question:
+
+How well do models generalize to data subpopulations they have not seen during training?
+
+The notion of subpopulation shift this question refers to is quite pervasive. After all, our training datasets will inevitably fail to perfectly capture the diversity of the real world. Hence, during deployment, our models are bound to encounter unseen subpopulations—for instance, unexpected weather conditions in the self-driving car context or different diagnostic setups in medical applications.
+
+# OUR CONTRIBUTIONS
+
+The goal of our work is to create large-scale subpopulation shift benchmarks wherein the data subpopulations present during model training and evaluation differ. These benchmarks aim to assess how effectively models generalize beyond the limited diversity of their training datasets—e.g., whether models can recognize Dalmatians as “dogs” even when their training data for “dogs” comprises only Poodles and Terriers. We show how one can simulate such shifts, fairly naturally, within existing datasets, hence eliminating the need for (and the potential biases introduced by) crafting synthetic transformations or collecting additional data.
+
+BREEDS benchmarks. The crux of our approach is to leverage existing dataset labels and use them to identify superclasses—i.e., groups of semantically similar classes. This allows us to construct classification tasks over such superclasses, and repurpose the original dataset classes to be the subpopulations of interest. This, in turn, enables us to induce a subpopulation shift by directly making the subpopulations present in the training and test distributions disjoint. By applying this methodology to the ImageNet dataset (Deng et al., 2009), we create a suite of subpopulation shift benchmarks of varying difficulty. This involves modifying the existing ImageNet class hierarchy—WordNet (Miller, 1995)—to ensure that superclasses comprise visually coherent subpopulations. We conduct human studies to validate that the resulting benchmarks capture meaningful subpopulation shifts.
+
+Model robustness to subpopulation shift. In order to demonstrate the utility of our benchmarks, we employ them to evaluate the robustness of standard models to subpopulation shift. In general, we find that model performance drops significantly on the shifted distribution—even when this shift does not significantly affect humans. Still, models that are more accurate on the original distribution tend to also be more robust to these subpopulation shifts. Moreover, adapting models to the shifted domain, by retraining their last layer on this domain, only partially recovers the original model performance.
+
+Impact of robustness interventions. Finally, we examine whether various train-time interventions, designed to decrease model sensitivity to synthetic data corruptions (e.g., $\ell_2$ -bounded perturbations) make models more robust to subpopulation shift. We find that many of these methods offer small, yet non-trivial, improvements along this axis—at times, at the expense of performance on the original distribution. Often, these improvements become more pronounced after retraining the last layer of the model on the shifted distribution. Nevertheless, the increase in model robustness to subpopulation shifts due to these interventions is much smaller than what is observed for other families of input variations such as data corruptions (Hendrycks & Dietterich, 2019; Ford et al., 2019; Kang et al., 2019; Taori et al., 2020). This indicates that handling subpopulation shifts, such as those present in the BREEDS benchmarks, might require a different set of robustness tools.
+
+# 2 DESIGNING BENCHMARKS FOR DISTRIBUTION SHIFT
+
+When constructing distribution shift benchmarks, the key design choice lies in specifying the target distribution to be used during model evaluation. This distribution is meant to be a realistic variation of the source distribution, that was used for training. Typically, studies focus on variations due to:
+
+- Data corruptions: The target distribution is obtained by modifying inputs from the source distribution via a family of transformations that mimic real-world corruptions, as in Fawzi & Frossard (2015); Fawzi et al. (2016); Engstrom et al. (2019b); Hendrycks & Dietterich (2019); Ford et al. (2019); Kang et al. (2019); Shankar et al. (2019).
+- Differences in data sources: Here, the target distribution is an independent dataset for the same task (Saenko et al., 2010; Torralba & Efros, 2011; Tommasi & Tuytelaars, 2014; Recht et al., 2019)—e.g., collected at a different geographic location (Beery et al., 2018), time frame (Kumar et al., 2020) or user population (Caldas et al., 2018). For instance, this could involve using PASCAL VOC (Everingham et al., 2010) to evaluate Caltech101-trained classifiers (Fei-Fei et al., 2006). The goal is to test whether models are overly reliant on the idiosyncrasies of their training datasets (Ponce et al., 2006; Torralba & Efros, 2011).
+- Subpopulation representation: The source and target distributions differ in terms of how well-represented each subpopulation is. Work in this area typically studies whether models perform equally well across all subpopulations from the perspective of reliability (Meinshausen et al., 2015; Hu et al., 2018; Duchi & Namkoong, 2018; Caldas et al., 2018; Oren et al., 2019; Sagawa et al., 2020) or algorithmic fairness (Dwork et al., 2012; Kleinberg et al., 2017; Jurgens et al., 2017; Buolamwini & Gebru, 2018; Hashimoto et al., 2018).
+
+
+Figure 1: Illustration of our pipeline to create subpopulation shift benchmarks. Given a dataset, we define superclasses based on the semantic hierarchy of dataset classes. This allows us to treat the dataset labels as subpopulation annotations. Then, we construct a BREEDS task of specified granularity (i.e., depth in the hierarchy) by posing the classification task in terms of superclasses at that depth and then partitioning their respective subpopulations into the source and target domains.
+
+
+
+
+
+These variations simulate realistic ways in which the data encountered during deployment can deviate from training conditions. However, each of the aforementioned benchmarks capture only one facet of real-world distribution shifts. It is not clear a priori that robustness to any subset of these variations will necessarily translate to robustness with respect to the rest. Thus, to effectively assess and improve model robustness, we require a varied suite of distribution shift benchmarks.
+
+# 3 THE BREEDS METHODOLOGY
+
+In this work, we focus on modeling a pertinent, yet less studied, form of subpopulation shift: one wherein the target distribution (used for testing) contains subpopulations that are entirely absent from the source distribution that the model was trained on. To simulate such shifts, we need to precisely control the data subpopulations present in the source and target data distributions. Our procedure for doing this comprises two stages that are outlined below—see Figure 1 for an illustration and Appendix A.2 for pseudocode.
+
+Devising subpopulation structure. Typical datasets do not contain annotations for individual subpopulations. Since collecting such annotations would be challenging, we take an alternative approach: we bootstrap the existing dataset labels to simulate subpopulations. That is, we group semantically similar classes into broader superclasses which, in turn, allows us to re-purpose existing class labels as the desired subpopulation annotations. Moreover, we can group classes in a hierarchical manner, obtaining superclasses of different specificity. As we will see in Section 4, such class hierarchies are already present in large-scale benchmarks (Deng et al., 2009; Kuznetsova et al., 2018).
+
+Simulating subpopulation shifts. Given a set of superclasses, we can define a classification task over them: the inputs of each superclass correspond to pooling together the inputs of its subclasses (i.e., the original dataset classes). Within this setup, we can simulate subpopulation shift in a relatively straightforward manner. Specifically, for each superclass, we split its subclasses into two random and disjoint sets, and assign one of them to the source and the other to the target domain. Then, we can evaluate model robustness under subpopulation shift by simply training on the source domain and testing on the target domain. Note that the classification task remains identical between domains—both domains contain the same (super)classes but the subpopulations that comprise each (super)class differ. Intuitively, this corresponds to using different dog breeds to represent the class "dog" during training and testing—hence the name of our toolkit.
+
+This methodology is quite general and can be applied to a variety of settings to simulate realistic distribution shifts. Moreover, it has a number of additional benefits:
+
+- Flexibility: Different semantic groupings of a fixed set of classes lead to BREEDS tasks of varying granularity. For instance, by only grouping together classes that are quite similar one can reduce the severity of the subpopulation shift. Alternatively, one can consider broad superclasses, each having multiple subclasses, resulting in a more challenging benchmark.
+
+- Precise characterization: The exact subpopulation shift between the source and target domains is known. Since both domains are constructed from the same dataset, the impact of any external factors (e.g., differences in data collection pipelines) is minimized. Note that such external factors can significantly impact the difficulty of the task (Ponce et al., 2006; Torralba & Efros, 2011; Tsipras et al., 2020). In fact, minimizing these effects and ensuring that the shift between the source and target domain is caused solely by the intended input variations is one of the major challenges in building distribution shift benchmarks. For instance, recent work (Engstrom et al., 2020) demonstrates that statistical biases during data collection can significantly skew the intended target distribution.
+- Symmetry: Since subpopulations are split into the source and test domains randomly, we expect the resulting tasks to have comparable difficulty.
+- Reuse of existing datasets: No additional data collection or annotation is required other than choosing the class grouping. This approach can thus be used to also re-purpose other existing large-scale datasets—even beyond image recognition—with minimal effort.
+
+# 4 SIMULATING SUBPOPULATION SHIFTS WITHIN IMAGENET
+
+We now describe how our methodology can be applied to ImageNet (Deng et al., 2009)—specifically, the ILSVRC2012 subset (Russakovsky et al., 2015)—to create a suite of BREEDS benchmarks. ImageNet contains a large number of classes, making it particularly well-suited for our purpose.
+
+# 4.1 UTILIZING THE IMAGENET CLASS HIERARCHY
+
+Recall that creating BREEDS tasks requires grouping together similar classes. For ImageNet, such a semantic grouping already exists—ImageNet classes are a part of the WordNet hierarchy (Miller, 1995). However, WordNet is not a hierarchy of objects but rather one of word meanings. Thus, intermediate hierarchy nodes are not always well-suited for object recognition due to:
+
+- Abstract groupings: WordNet nodes often correspond to abstract concepts, e.g., related to the functionality of an object. Children of such nodes might thus share little visual similarity—e.g., "umbrella" and "roof" are visually different, despite both being "coverings."
+- Non-uniform categorization: The granularity of object categorization is vastly different across the WordNet hierarchy—e.g., the subtree rooted at "dog" is 25-times larger than the one rooted at "cat." Hence, the depth of a node in this hierarchy does not always reflect the specificity of the corresponding object category.
+- Lack of tree structure: Nodes in WordNet can have multiple parents and thus the resulting classification task would contain overlapping classes, making it inherently ambiguous.
+
+Due to these issues, we cannot directly use WordNet to identify superclasses that correspond to a well-calibrated classification task. To illustrate this, we present some of the superclasses that Huh et al. (2016) constructed by applying clustering algorithms directly to the WordNet hierarchy in Appendix Table 7. Even putting the issue of overlapping classes aside, a BREEDS task based on these superclasses would induce a very skewed subpopulation shift across classes—e.g., varying the types of "bread" is very different than doing the same for different "mammal" species.
+
+To better align the WordNet hierarchy with the task of object recognition in general, and BREEDS benchmarks in particular, we manually modify it according to the following two principles: (i) nodes should be grouped together based on their visual characteristics rather than abstract relationships like functionality, and (ii) nodes of similar specificity should be at the same distance from the root, irrespective of how detailed their categorization within WordNet is. Details of this procedure along with the resulting hierarchy are presented in Appendix A.4.
+
+
+Figure 3: Sample images from random object categories for the ENTITY-13 and LIVING-17 tasks. For each task, the top and bottom row correspond to the source and target distributions respectively.
+
+# 4.2 CREATING BREEDS TASKS
+
+Once the modified version of the WordNet hierarchy is in place, BREEDS tasks can be created in an automated manner. Specifically, we first choose the desired granularity of the task by specifying the distance from the root ("entity") and retrieving all superclasses at that distance in a top-down manner. Each resulting superclass corresponds to a subtree of our hierarchy, with ImageNet classes as its leaves. Note that these superclasses are roughly of the same specificity, due to our hierarchy restructuring process. Then, we randomly sample a fixed number of subclasses for each superclass to produce a balanced dataset (omitting superclasses with an insufficient number of subclasses). Finally, as described in Section 3, we randomly split these subclasses into the source and target domains.
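
In code, this amounts to cutting the (modified) hierarchy at a chosen depth and collecting each resulting node's leaves as its subpopulations. A minimal sketch, with the hierarchy represented as hypothetical `(name, children)` tuples:

```python
def nodes_at_depth(root, depth):
    """Return all nodes exactly `depth` edges below `root` (the superclasses)."""
    frontier = [root]
    for _ in range(depth):
        frontier = [child for _, children in frontier for child in children]
    return frontier

def leaves(node):
    """Collect the leaf class names (e.g., ImageNet classes) under `node`."""
    name, children = node
    if not children:
        return [name]
    return [leaf for child in children for leaf in leaves(child)]
```

With a toy hierarchy `("entity", [("living thing", [("dog", ...), ("cat", ...)])])`, cutting at depth 2 yields the superclasses "dog" and "cat", whose leaves become the candidate subpopulations.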
+
+For our analysis, we create four tasks (cf. Table 2) based on different levels/parts of the hierarchy. To illustrate what the corresponding subpopulation shifts look like, we present (random) image samples for a subset of the tasks in Figure 3. Note that while we focus on the tasks in Table 2 in our study, our methodology readily enables us to create other variants of these tasks in an automated manner.
+
+| Name | Subtree | Level | Subpopulations | Examples |
+| --- | --- | --- | --- | --- |
+| ENTITY-13 | “entity” (root) | 3 | 20 | “mammal”, “appliance” |
+| ENTITY-30 | “entity” (root) | 4 | 8 | “fruit”, “carnivore” |
+| LIVING-17 | “living thing” | 5 | 4 | “ape”, “bear” |
+| NON-LIVING-26 | “non-living thing” | 5 | 4 | “fence”, “ball” |
+
+Table 2: BREEDS benchmarks constructed using ImageNet. Here, "level" indicates the depth of the superclasses in the class hierarchy (task granularity), and the number of "subpopulations" (per superclass) is fixed to create balanced datasets. We also construct specialized tasks by focusing on subtrees of the hierarchy, e.g., only living (LIVING-17) or non-living (NON-LIVING-26) objects. Dataset names reflect the root of the subtree and the number of superclasses they contain.
+
+BREEDS benchmarks beyond ImageNet. It is worth noting that the methodology we described is not restricted to ImageNet and can be readily applied to other datasets as well. The only requirement is that we have access to a semantic grouping of the dataset classes, which is the case for many popular vision datasets—e.g., CIFAR-100 (Krizhevsky, 2009), Pascal-VOC (Everingham et al., 2010), OpenImages (Kuznetsova et al., 2018), COCO-Stuff (Caesar et al., 2018). Moreover, even when a class hierarchy is entirely absent, the needed semantic class grouping can be manually constructed with relatively little effort (proportional to the number of classes, not the number of datapoints).
+
+More broadly, the methodology of utilizing existing dataset annotations to construct data subpopulations goes beyond image classification tasks. In particular, by splitting inputs into a source and target domain based on some attribute, we can measure how well models generalize along this axis. Examples would include grouping by brand in Amazon reviews (McAuley et al., 2015), by location in Berkeley DeepDrive (Yu et al., 2020), and by facial attributes in CelebA (Liu et al., 2015).
+
+# 4.3 CALIBRATING BREEDS BENCHMARKS VIA HUMAN STUDIES
+
+For a distribution shift benchmark to be meaningful, it is essential that the source and target domains capture the same high-level task—otherwise generalizing from one domain to the other would be impossible. To ensure that this is the case for the BREEDS task, we assess how significant the resulting distribution shifts are for human annotators (crowd-sourced via MTurk).
+
+Annotator task. To obtain meaningful performance estimates, it is crucial that annotators perform the task based only on the visual content of the images, without leveraging prior knowledge. To achieve this, we design the following annotation task. First, annotators are shown images from the source domain, grouped by superclass, without being aware of the superclass name (i.e., the grouping it corresponds to). Then, they are presented with images from the target domain and are asked to assign each of them to one of the groups. For simplicity, we present two random superclasses at a time, effectively simulating binary classification. Annotator accuracy can be measured directly as the fraction of images that they assign to the superclass to which they belong. We perform this experiment for each of the BREEDS tasks constructed in Section 4.2. For comparison, we repeat this experiment without subpopulation shift (test images are sampled from the source domain) and for the superclasses constructed by Huh et al. (2016) using the WordNet hierarchy directly (cf. Appendix A.6).
+
+
+Figure 4: Human performance on (binary) BREEDS tasks. Annotators are provided with labeled images from the source distribution for a pair of (undisclosed) superclasses, and asked to classify samples from the target domain ('T') into one of the two groups. As a baseline we also measure annotator performance without subpopulation shift (i.e., on test images from the source domain, 'S') and on tasks created via the WordNet hierarchy (cf. Appendix A.6). We observe that annotators are fairly robust to subpopulation shift. Further, they consistently perform better on BREEDS tasks compared to those based on WordNet directly—indicating that our modified class hierarchy is indeed better calibrated for object recognition. (We discuss model performance in Section 5.)
+
+Human performance. We find that, across all tasks, annotators perform well on unseen data from the source domain, as expected. More importantly, annotators also appear to be quite robust to subpopulation shift, experiencing only a small accuracy drop between the source and target domains (cf. Figure 4). This indicates that the source and target domains are indeed perceptually similar for humans, making these benchmarks suitable for studying model robustness. Finally, across all benchmarks, annotators perform better on BREEDS tasks, compared to their WordNet equivalents—even on source domain samples. This indicates that our modified class hierarchy is indeed better aligned with the underlying visual recognition task.
+
+# 5 EVALUATING MODEL PERFORMANCE UNDER SUBPOPULATION SHIFT
+
+We can now use our suite of BREEDS tasks as a testbed for assessing model robustness to subpopulation shift as well as gauging the effectiveness of various train-time robustness interventions. Specifics of the evaluation setup and additional experimental results are provided in Appendices A.7 and C.2.
+
+# 5.1 STANDARD TRAINING
+
+We start by evaluating the performance of various model architectures trained in the standard fashion: empirical risk minimization (ERM) on the source distribution (cf. Appendix A.7.1). While models perform well on unseen inputs from the domain they are trained on, i.e., they achieve high source accuracy, their accuracy drops considerably under subpopulation shift—more than $30\%$ in most cases (cf. Figure 5). At the same time, models that are more accurate on the source domain also appear to be more robust to subpopulation shift. Specifically, the fraction of source accuracy that is preserved in the target domain typically increases with source accuracy. (If this were not the case, i.e., if model accuracy dropped by a constant fraction under distribution shift, the target accuracy would match the baseline in Figure 5.) This indicates that improvements in source accuracy do correlate with models generalizing better to variations in testing conditions.
+
+
+Figure 5: Robustness of standard models to subpopulation shifts. For each task, we plot the accuracy of various model architectures (denoted by different symbols) on the target domain as a function of their source accuracy. We find that model accuracy drops significantly between domains (orange vs. dashed line). Still, models that are more accurate on the source domain seem to also be more robust (the improvements exceed the baseline (grey) which would correspond to a constant accuracy drop relative to AlexNet). Moreover, the drop in model performance can be significantly (but not fully) reduced by retraining the final model layer with data from the target domain (green).
+
+Models vs. Humans. We compare the best performing model (DenseNet-121 in this case) to our previously obtained human baselines in Figure 4. To allow for a fair comparison, model accuracy is measured on pairwise superclass classification tasks (cf. Appendix A.7). We observe that models do exceedingly well on unseen samples from the source domain—significantly outperforming annotators under our task setup. At the same time, models also appear to be more brittle, performing worse than humans on the target domain of these binary BREEDS tasks, despite their higher source accuracy.
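The idea of restricting a multiclass model to pairwise superclass classification can be sketched as follows (an illustration of the general approach; the paper's exact evaluation protocol is specified in its Appendix A.7): keep only the samples belonging to a given superclass pair and compare the two corresponding logits head-to-head.

```python
import numpy as np

def pairwise_accuracy(logits, labels, class_a, class_b):
    """Binary accuracy of a multiclass model restricted to two classes:
    keep only samples belonging to the pair, and predict whichever of
    the two classes has the larger logit."""
    mask = np.isin(labels, [class_a, class_b])
    pair_logits = logits[mask][:, [class_a, class_b]]
    pair_labels = (labels[mask] == class_b).astype(int)  # 0 -> a, 1 -> b
    return float((pair_logits.argmax(axis=1) == pair_labels).mean())
```

Averaging this quantity over superclass pairs yields a number directly comparable to the binary annotation task above.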
+
+Adapting models to the target domain. Finally, we focus on the intermediate data representations learned by these models, to assess how suitable they are for distinguishing classes in the target domain. To evaluate this, we retrain the last (fully-connected) layer of models trained on the source domain with data from the target domain. We find that the target accuracy of these models increases significantly after retraining, indicating that the learned representations indeed generalize to the target domain. However, we cannot match the accuracy of models trained directly (end-to-end) on the target domain—see Figure 5—demonstrating that there is significant room for improvement.
+
+# 5.2 ROBUSTNESS INTERVENTIONS
+
+We now turn our attention to existing methods for decreasing model sensitivity to specific synthetic perturbations. Our goal is to assess if these methods enhance model robustness to subpopulation shift too. Concretely, we consider the following families of interventions (cf. Appendix A.7.3 for details): (i) adversarial training to enhance robustness to worst-case $\ell_p$ -bounded perturbations (in our case $\ell_2$ ) (Madry et al., 2018), (ii) training on a stylized version of ImageNet to encourage models to rely more on shape rather than texture (Geirhos et al., 2019), and (iii) training with random noise to make models robust to data corruptions (here, Gaussian and Erase noise (Zhong et al., 2020)).
+
+Note that these methods can be viewed as ways of imposing a prior on the features that the model relies on (Heinze-Deml & Meinshausen, 2017; Geirhos et al., 2019; Engstrom et al., 2019a). That is, by rendering certain features ineffective during training (e.g., texture) they incentivize the model to utilize alternative ones (e.g., shape). Since different feature families may manifest differently in the target domain, such interventions could significantly impact model robustness to subpopulation shift.
+
+
+Figure 6: Effect of train-time interventions on model robustness to subpopulation shift. We measure model performance in terms of relative accuracy, i.e., the ratio between its target and source accuracies. This allows us to visualize the accuracy-robustness trade-off along with the corresponding Pareto frontier (dashed). (Also shown are $95\%$ confidence intervals computed via bootstrapping.) We observe that some interventions do improve model robustness to subpopulation shift—specifically, erase noise and adversarial training—albeit by a small amount and often at the cost of source accuracy.
+
+Relative accuracy. To measure the impact of these interventions, we will focus on the models' relative accuracy—the ratio of target accuracy to source accuracy. This metric accounts for the fact that train-time interventions can impact model accuracy on the source domain itself. By measuring relative performance, we are able to compare different training methods on an equal footing.
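With hypothetical numbers (not taken from the paper), the metric is simply:

```python
def relative_accuracy(source_acc: float, target_acc: float) -> float:
    """Fraction of source-domain accuracy retained on the target domain."""
    return target_acc / source_acc

# e.g., a model at 90% source / 60% target accuracy retains roughly
# two thirds of its accuracy under subpopulation shift.
print(round(relative_accuracy(0.90, 0.60), 3))
```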
+
+We find that robustness interventions do have a small, yet non-trivial, impact on the robustness of a model to subpopulation shift—see Figure 6. Specifically, for the case of adversarial training and erase noise, models often retain a larger fraction of their accuracy on the target domain compared to standard training, hence lying on the Pareto frontier of a robustness-accuracy trade-off. In fact, for some of these interventions, the target accuracy is slightly higher than that of models obtained via standard training, even without adjusting for their lower source accuracy (raw accuracies are in Appendix C.2.2). Nonetheless, it is important to note that none of these methods offer significant subpopulation robustness—relative accuracy is not improved by more than a few percentage points.
+
+Adapting models to the target domain. The impact of these interventions is more pronounced if we consider the accuracy of models after their last layer is retrained on the target domain (cf. Appendix Figure 21). In particular, we find that for adversarially robust models, retraining significantly boosts accuracy on the target domain—e.g., for LIVING-17 it is almost comparable to the initial source accuracy. This suggests that the feature priors imposed by these interventions incentivize models to learn representations that generalize to other domains—in line with recent results of Utrera et al. (2020); Salman et al. (2020). Moreover, we observe that models trained on stylized inputs perform consistently worse, suggesting that texture might be an important feature for these tasks.
+
+# 6 RELATED WORK
+
+In Section 2, we surveyed prior work on distribution shift benchmarks. Here, we discuss further the benchmarks most closely related to ours and defer discussing additional related work to Appendix B.
+
+Our benchmarks can be viewed as an instance of domain generalization. However, we focus on generalizing between different distributions of real-world images (photographs). This is in contrast to typical domain generalization benchmarks that focus on generalizing between different stylistic representations, e.g., from cartoons to drawings. Hence, the only comparable benchmark would be VLCS (Ghifary et al., 2015), which is however significantly smaller in scale and granularity than our
+
+benchmarks. In a similar vein, datasets used in federated learning (Caldas et al., 2018) can be viewed as subpopulation shift benchmarks since the users present during training and testing might differ. However, to the best of our knowledge, there has been no large-scale vision benchmark in this setting.
+
+Hendrycks & Dietterich (2019), in Appendix G, also (manually) construct a classification task over superclasses and use ImageNet classes outside of ILSVRC2012 (ImageNet-1k) to measure "subtype robustness". (Unfortunately, these classes are no longer publicly available (Yang et al., 2019).) Compared to their work, we use a general methodology to create a broader suite of benchmarks. Also, our analysis of architectures and robustness interventions is significantly more extensive.
+
+# 7 CONCLUSION
+
+In this work, we develop a methodology for constructing large-scale subpopulation shift benchmarks. The motivation behind our BREEDS benchmarks is to test if models can generalize beyond the limited diversity of their training datasets—specifically, to novel data subpopulations. A major advantage of our approach is its generality. It can be applied to any dataset with a meaningful class structure—including tasks beyond classification (e.g., object detection) and domains other than computer vision (e.g., natural language processing). Moreover, the subpopulation shifts are induced in a manner that is both controlled and natural, without altering inputs synthetically or requiring new data.
+
+By applying this approach to the ImageNet dataset, we construct a suite of benchmarks of varying difficulty, that we then use to assess model robustness and the efficacy of various train-time interventions. Further, we obtain human baselines for these tasks to both put model performance in context and validate that the corresponding subpopulation shifts do not significantly affect humans.
+
+Overall, our results indicate that existing models still have a long way to go before they can fully tackle BREEDS subpopulation shifts, even using current robustness interventions. We thus believe that our methodology provides a useful tool for studying and improving model robustness to distribution shift—an increasingly pertinent topic for real-world deployments of machine learning models.
+
+# ACKNOWLEDGEMENTS
+
+We thank Andrew Ilyas and Sam Park for helpful discussions.
+
+Work supported in part by the NSF grants CCF-1553428, CNS-1815221, the Google PhD Fellowship, and the Microsoft Corporation. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0015.
+
+Research was sponsored by the United States Air Force Research Laboratory and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
+
+# REFERENCES
+
+Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
+Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In European Conference on Computer Vision (ECCV), 2018.
+Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In Neural Information Processing Systems (NeurIPS), 2007.
+Aharon Ben-Tal, Dick Den Hertog, Anja De Waegenaere, Bertrand Melenberg, and Gijs Rennen. Robust solutions of optimization problems affected by uncertain probabilities. In Management Science, 2013.
+
+Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (FAccT), 2018.
+Holger Caesar, Jasper Uijlings, and Vittorio Ferrari. COCO-Stuff: Thing and stuff classes in context. In Computer Vision and Pattern Recognition (CVPR), 2018.
+Sebastian Caldas, Peter Wu, Tian Li, Jakub Konečný, H Brendan McMahan, Virginia Smith, and Ameet Talwalkar. Leaf: A benchmark for federated settings. arXiv preprint arXiv:1812.01097, 2018.
+Nicolas Courty, Rémi Flamary, Devis Tuia, and Alain Rakotomamonjy. Optimal transport for domain adaptation. In Transactions on Pattern Analysis and Machine Intelligence, 2016.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), 2009.
+Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In International conference on machine learning (ICML), 2014.
+John Duchi and Hongseok Namkoong. Learning models with uniform performance via distributionally robust optimization. arXiv preprint arXiv:1810.08750, 2018.
+John Duchi, Peter Glynn, and Hongseok Namkoong. Statistics of robust optimization: A generalized empirical likelihood approach. Mathematics of Operations Research, 2016.
+Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In innovations in theoretical computer science conference (ITCS), 2012.
+Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, and Aleksander Madry. Adversarial robustness as a prior for learned representations. arXiv preprint arXiv:1906.00945, 2019a.
+Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. Exploring the landscape of spatial robustness. In International Conference on Machine Learning (ICML), 2019b.
+Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, and Aleksander Madry. Identifying statistical bias in dataset replication. In International Conference on Machine Learning (ICML), 2020.
+Peyman Mohajerin Esfahani and Daniel Kuhn. Data-driven distributionally robust optimization using the wasserstein metric: Performance guarantees and tractable reformulations. Mathematical Programming, 2018.
+M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The Pascal visual object classes (voc) challenge. In International Journal of Computer Vision, 2010.
+Alhussein Fawzi and Pascal Frossard. Manitest: Are classifiers really invariant? In British Machine Vision Conference (BMVC), 2015.
+Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: from adversarial to random noise. In Advances in Neural Information Processing Systems (NeurIPS), 2016.
+Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. pattern analysis and machine intelligence (PAMI), 2006.
+Nic Ford, Justin Gilmer, Nicolas Carlini, and Dogus Cubuk. Adversarial examples are a natural consequence of test error in noise. arXiv preprint arXiv:1901.10513, 2019.
+Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Marc'Aurelio Ranzato, and Tomas Mikolov. Devise: A deep visual-semantic embedding model. In neural information processing systems (NeurIPS), 2013.
+
+Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning (ICML), 2015.
+Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations (ICLR), 2019.
+Muhammad Ghifary, W Bastiaan Kleijn, Mengjie Zhang, and David Balduzzi. Domain generalization for object recognition with multi-task autoencoders. In Proceedings of the IEEE international conference on computer vision, pp. 2551-2559, 2015.
+Mingming Gong, Kun Zhang, Tongliang Liu, Dacheng Tao, Clark Glymour, and Bernhard Scholkopf. Domain adaptation with conditional transferable components. In International conference on machine learning (ICML), 2016.
+Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. Fairness without demographics in repeated loss minimization. In International Conference on Machine Learning (ICML), 2018.
+Christina Heinze-Deml and Nicolai Meinshausen. Conditional variance penalties and domain shift robustness. arXiv preprint arXiv:1710.11469, 2017.
+Dan Hendrycks and Thomas G. Dietterich. Benchmarking neural network robustness to common corruptions and surface variations. In International Conference on Learning Representations (ICLR), 2019.
+Weihua Hu, Gang Niu, Issei Sato, and Masashi Sugiyama. Does distributionally robust supervised learning give robust classifiers? In International Conference on Machine Learning (ICML), 2018.
+Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. What makes imagenet good for transfer learning? arXiv preprint arXiv:1608.08614, 2016.
+David Jurgens, Yulia Tsvetkov, and Dan Jurafsky. Incorporating dialectal variability for socially equitable language identification. In Association for Computational Linguistics (ACL), 2017.
+Daniel Kang, Yi Sun, Dan Hendrycks, Tom Brown, and Jacob Steinhardt. Testing robustness against unforeseen adversaries. arXiv preprint arXiv:1908.08016, 2019.
+Mark G Kelly, David J Hand, and Niall M Adams. The impact of changing populations on classifier performance. In international conference on Knowledge discovery and data mining (SIGKDD), 1999.
+Aditya Khosla, Tinghui Zhou, Tomasz Malisiewicz, Alexei A Efros, and Antonio Torralba. Undoing the damage of dataset bias. In European Conference on Computer Vision (ECCV), 2012.
+Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-offs in the fair determination of risk scores. In Innovations in Theoretical Computer Science (ITCS), 2017.
+Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
+Ananya Kumar, Tengyu Ma, and Percy Liang. Understanding self-training for gradual domain adaptation. In International Conference on Machine Learning (ICML), 2020.
+Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig, et al. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. arXiv preprint arXiv:1811.00982, 2018.
+Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In Computer Vision and Pattern Recognition (CVPR), 2009.
+Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In international conference on computer vision (ICCV), 2017.
+
+Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In International Conference on Computer Vision (ICCV), 2015.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018.
+Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. Image-based recommendations on styles and substitutes. In Research and development in Information Retrieval (SIGIR), 2015.
+Nicolai Meinshausen. Causality from a distributional robustness point of view. In Data Science Workshop (DSW), 2018.
+Nicolai Meinshausen, Peter Buhlmann, et al. Maximin effects in inhomogeneous large-scale data. The Annals of Statistics, 2015.
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (NeurIPS), pp. 3111-3119, 2013.
+George A Miller. Wordnet: a lexical database for english. Communications of the ACM, 1995.
+Jose G Moreno-Torres, Troy Raeder, Rocío Alaiz-Rodríguez, Nitesh V Chawla, and Francisco Herrera. A unifying view on dataset shift in classification. Pattern recognition, 2012.
+Krikamol Muandet, David Balduzzi, and Bernhard Scholkopf. Domain generalization via invariant feature representation. In International Conference on Machine Learning (ICML), 2013.
+Hongseok Namkoong and John C Duchi. Stochastic gradient methods for distributionally robust optimization with f-divergences. In neural information processing systems (NeurIPS), 2016.
+Yonatan Oren, Shiori Sagawa, Tatsunori Hashimoto, and Percy Liang. Distributionally robust language modeling. In Empirical Methods in Natural Language Processing (EMNLP), 2019.
+Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In International Conference on Computer Vision (ICCV), 2019.
+Jean Ponce, Tamara L Berg, Mark Everingham, David A Forsyth, Martial Hebert, Svetlana Lazebnik, Marcin Marszalek, Cordelia Schmid, Bryan C Russell, Antonio Torralba, et al. Dataset issues in object recognition. In Toward category-level object recognition, 2006.
+Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. Dataset shift in machine learning. The MIT Press, 2009.
+Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do ImageNet classifiers generalize to ImageNet? In International Conference on Machine Learning (ICML), 2019.
+Bernardino Romera-Paredes and Philip Torr. An embarrassingly simple approach to zero-shot learning. In International Conference on Machine Learning (ICML), 2015.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. In International Journal of Computer Vision (IJCV), 2015.
+Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In European conference on computer vision (ECCV), 2010.
+Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. In International Conference on Learning Representations, 2020.
+
+Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust imagenet models transfer better? In Advances in Neural Information Processing Systems (NeurIPS), 2020.
+Jeffrey C Schlimmer and Richard H Granger. Beyond incremental processing: Tracking concept drift. In AAAI, 1986.
+Vaishaal Shankar, Achal Dave, Rebecca Roelofs, Deva Ramanan, Benjamin Recht, and Ludwig Schmidt. Do image classifiers generalize across time? arXiv preprint arXiv:1906.02168, 2019.
+Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. Cnn features off-the-shelf: an astounding baseline for recognition. In conference on computer vision and pattern recognition (CVPR) workshops, 2014.
+Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 2000.
+Richard Socher, Milind Ganjoo, Christopher D Manning, and Andrew Ng. Zero-shot learning through cross-modal transfer. In neural information processing systems (NeurIPS), 2013.
+Masashi Sugiyama and Motoaki Kawanabe. Machine learning in non-stationary environments: Introduction to covariate shift adaptation. MIT press, 2012.
+Masashi Sugiyama, Matthias Krauledat, and Klaus-Robert Müller. Covariate shift adaptation by importance weighted cross validation. Journal of Machine Learning Research (JMLR), 2007.
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014.
+Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. Measuring robustness to natural distribution shifts in image classification. arXiv preprint arXiv:2007.00644, 2020.
+Tatiana Tommasi and Tinne Tuytelaars. A testbed for cross-dataset analysis. In European Conference on Computer Vision (ECCV), 2014.
+Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. In CVPR 2011, 2011.
+Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Andrew Ilyas, and Aleksander Madry. From imagenet to image classification: Contextualizing progress on benchmarks. In International Conference on Machine Learning (ICML), 2020.
+Francisco Utrera, Evan Kravitz, N. Benjamin Erichson, Rajiv Khanna, and Michael W. Mahoney. Adversarially-trained deep nets transfer better. arXiv preprint arXiv:2007.05869, 2020.
+Gerhard Widmer and Miroslav Kubat. Effective learning in dynamic environments by explicit context tracking. In European Conference on Machine Learning, 1993.
+Yongqin Xian, Bernt Schiele, and Zeynep Akata. Zero-shot learning-the good, the bad and the ugly. In Computer Vision and Pattern Recognition (CVPR), 2017.
+Kaiyu Yang, Clint Qinami, Li Fei-Fei, Jia Deng, and Olga Russakovsky. Towards fairer datasets: Filtering and balancing the distribution of the people subtree in the imagenet hierarchy. http://image-net.org update-sep-17-2019, 2019. Accessed: 2020-10-01.
+Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In Computer Vision and Pattern Recognition (CVPR), 2020.
+Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In AAAI, 2020.
+
+# A EXPERIMENTAL SETUP
+
+# A.1 DATASET
+
+We perform our analysis on the ILSVRC2012 dataset (Russakovsky et al., 2015). This dataset contains a thousand classes from the ImageNet dataset (Deng et al., 2009) with an independently collected validation set. The classes are part of the broader hierarchy, WordNet (Miller, 1995), through which words are organized based on their semantic meaning. We use this hierarchy as a starting point of our investigation but modify it as described in Appendix A.5.
+
+For all the BREEDS superclass classification tasks, the train and validation sets are obtained by aggregating the train and validation sets of the descendant ImageNet classes (i.e., subpopulations). Specifically, for a given subpopulation, the training and test splits from the original ImageNet dataset are used as is.
+
+# A.2 PIPELINE FORMALIZATION
+
+Recall that our process for evaluating model robustness under subpopulation shift (cf. Section 3) is as follows. We present the pseudocode for this process in Algorithm 1.
+
+1. Choose a level in the hierarchy and use it to define a set of superclasses by grouping the corresponding dataset classes together. Note that the original dataset classes form the subpopulations of the superclasses.
+2. For every superclass, select a (random) set of subpopulations (i.e., classes in the original dataset) and use them to train the model to distinguish between superclasses (we call this the source domain).
+3. For every superclass, use the remaining unseen subpopulations (i.e., classes in the original dataset) to test how well the model can distinguish between the superclasses (we call this the target domain).
+
+Algorithm 1 The BREEDS methodology. Evaluating the training method train on level $L$ of the hierarchy $H$ —restricted to the subtree under root—using $N_{sub}$ subpopulations per superclass.
+
+```txt
+function createDatasets(H, L, N_sub, root):
+    source, target ← [], []
+    for node ∈ H do
+        if node.depth = L and root ∈ node.ancestors and len(node.leaves) ≥ N_sub then
+            y ← node.label
+            subclasses ← random.choice(node.leaves, N_sub)
+            for (i, c) ∈ enumerate(subclasses) do
+                if i ≤ N_sub / 2 then
+                    domain ← source
+                else
+                    domain ← target
+                for x ∈ c.inputs do
+                    domain.append((x, y))
+    return (source, target)
+```
+
+```txt
+function evaluateMethod(train, H, L, N_sub, root):
+    source, target ← createDatasets(H, L, N_sub, root)
+    model ← train(source)
+    correct, total ← 0, 0
+    for (x, y) ∈ target do
+        correct += (model(x) = y)
+        total += 1
+    targetAccuracy ← correct / total
+    return targetAccuracy
+```
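A minimal runnable rendering of this procedure in Python (illustrative `Node` structure and toy data; the actual implementation operates on the modified WordNet hierarchy and may differ in details such as how the sampled subclasses are split between domains):

```python
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                                   # superclass name
    depth: int                                   # distance from the hierarchy root
    ancestors: list                              # labels of all ancestor nodes
    leaves: list = field(default_factory=list)   # subpopulation (leaf) names
    inputs: dict = field(default_factory=dict)   # leaf name -> list of inputs

def create_datasets(nodes, L, n_sub, root, seed=0):
    """Split each depth-L superclass's sampled subpopulations between domains."""
    rng = random.Random(seed)
    source, target = [], []
    for node in nodes:
        if node.depth == L and root in node.ancestors and len(node.leaves) >= n_sub:
            subclasses = rng.sample(node.leaves, n_sub)
            for i, c in enumerate(subclasses):
                domain = source if i < n_sub // 2 else target
                domain.extend((x, node.label) for x in node.inputs[c])
    return source, target

def evaluate_method(train, nodes, L, n_sub, root):
    """Train on the source domain, report accuracy on the target domain."""
    source, target = create_datasets(nodes, L, n_sub, root)
    model = train(source)
    return sum(model(x) == y for x, y in target) / len(target)
```

Here `i < n_sub // 2` sends half of the sampled subclasses to each domain, a simplification of Algorithm 1's `i ≤ N_sub / 2` threshold.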
+
+# A.3 WORDNET ISSUES
+
+As discussed in Section 4, WordNet is a semantic rather than a visual hierarchy. That is, object classes are arranged based on their meaning rather than their visual appearance. Thus, using intermediate nodes for a visual object recognition task is not straightforward. To illustrate this, we examine a sample superclass grouping created by Huh et al. (2016) via automated bottom-up clustering in Table 7.
+
+| Superclass | Random ImageNet classes |
| --- | --- |
| instrumentality | fire engine, basketball, electric fan, wok, thresher, horse cart, harvester, balloon, ratchet, can opener, carton, gong, unicycle, toilet seat, carousel, hard disc, cello, mousetrap, neck brace, barrel |
| man-made structure | beacon, yurt, picket fence, barbershop, fountain, steel arch bridge, library, cinema, stone wall, worm fence, palace, suspension bridge, planetarium, monastery, mountain tent, sliding door, dam, bakery, megalith, pedestal |
| covering | window shade, vestment, running shoe, diaper, sweatshirt, breastplate, shower curtain, shoji, miniskirt, knee pad, apron, pajama, military uniform, theater curtain, jersey, football helmet, book jacket, bow tie, suit, cloak |
| commodity | espresso maker, maillot, iron, bath towel, lab coat, bow tie, washer, jersey, mask, waffle iron, mortarboard, diaper, bolo tie, seat belt, cowboy hat, wig, knee pad, vacuum, microwave, abaya |
| organism | thunder snake, stingray, grasshopper, barracouta, Newfoundland, Mexican hairless, Welsh springer spaniel, bluetick, golden retriever, keeshond, African chameleon, jacamar, water snake, Staffordshire bull-terrier, Old English sheepdog, pelican, sea lion, wire-haired fox terrier, flamingo, green mamba |
| produce | spaghetti squash, fig, cardoon, mashed potato, pineapple, zucchini, broccoli, cauliflower, butternut squash, custard apple, pomegranate, strawberry, Granny Smith, lemon, head cabbage, artichoke, cucumber, banana, bell pepper, acorn squash |
+
+Table 7: Superclasses constructed by Huh et al. (2016) via bottom-up clustering of WordNet to obtain 36 superclasses—for brevity, we only show superclasses with at least 20 ImageNet classes each.
+
+First, we can notice that these superclasses have vastly different granularities. For instance, "organism" contains the entire animal kingdom, hence being much broader than "produce". Moreover, "covering" is a rather abstract class, and hence its subclasses often share little visual similarity (e.g., "window shade", "pajama"). Finally, due to the abstract nature of these superclasses, a large number of subclasses overlap—"covering" and "commodity" share 49 ImageNet descendants.
+
+# A.4 MANUAL CALIBRATION
+
+We manually modify the WordNet hierarchy according to the following two principles so as to make it better aligned for visual object recognition.
+
+1. Nodes should be grouped together based on their visual characteristics, rather than abstract relationships like functionality—e.g., we eliminate nodes that do not convey visual information such as "covering".
+2. Nodes of similar specificity should be at the same distance from the root, irrespective of how detailed their categorization within WordNet is—for instance, we placed "dog" at the same level as "cat" and "flower", even though the "dog" sub-tree in WordNet is much larger.
+
+Finally, we removed a number of ImageNet classes that did not naturally fit into the hierarchy. Concretely, we modified the WordNet hierarchy by applying the following operations:
+
+- Collapse node: Delete a node from the hierarchy and add edges from each parent to each child. Allows us to remove redundant or overly specific categorization while preserving the overall structure.
+- Insert node above: Add a dummy parent to push a node further down the hierarchy. Allows us to ensure that nodes of similar granularity are at the same level.
+- Delete node: Remove a node and all of its edges. Used to remove abstract nodes that do not reveal visual characteristics.
+- Add edge: Connect a node to a parent. Used to reassign the children of nodes deleted by the operation above.
+
+We manually examined the hierarchy and implemented these actions in order to produce superclasses that are calibrated for classification. The resulting hierarchy contains nodes of comparable granularity at the same level. Moreover, as a result of this process, each node ends up having a single parent and thus the resulting hierarchy is a tree. The full hierarchy can be explored using the notebooks provided with the hierarchy in the Supplementary Material.
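
For concreteness, the four operations can be sketched on a parent-to-children adjacency map. This is a minimal illustration with made-up node names, not the released implementation:

```python
# The hierarchy as a parent -> children adjacency map (toy node names).
def collapse_node(h, node):
    """Delete a node; connect each of its parents to each of its children."""
    kids = h.pop(node, [])
    for cs in h.values():
        if node in cs:
            cs.remove(node)
            cs.extend(k for k in kids if k not in cs)

def insert_node_above(h, node, dummy):
    """Add a dummy parent to push `node` one level further down."""
    for cs in h.values():
        if node in cs:
            cs[cs.index(node)] = dummy
    h[dummy] = [node]

def delete_node(h, node):
    """Remove a node and all of its edges."""
    h.pop(node, None)
    for cs in h.values():
        if node in cs:
            cs.remove(node)

def add_edge(h, parent, child):
    """Connect a node to a parent (e.g., to reassign orphaned children)."""
    h.setdefault(parent, []).append(child)

h = {"root": ["covering", "animal"], "covering": ["pajama"], "animal": ["dog"]}
delete_node(h, "covering")      # abstract node removed...
add_edge(h, "root", "pajama")   # ...and its child reattached to the root
```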
+
+# A.5 RESULTING HIERARCHY
+
+The parameters for constructing the BREEDS benchmarks (hierarchy level, number of subclasses, and tree root) are given in Table 2. The resulting tasks—obtained by sampling disjoint ImageNet classes (i.e., subpopulations) for the source and target domain—are shown in Tables 8, 9, 10, and 11. Recall that we randomly sample a fixed number of subclasses per superclass to ensure that the dataset is approximately balanced.
+
+| Superclass | Source | Target |
| garment | trench coat, abaya, gown, poncho, military uniform, jersey, cloak, bikini, miniskirt, swimming trunks | lab coat, brassiere, hoopskirt, cardigan, pajama, academic gown, apron, diaper, sweatshirt, sarong |
| bird | African grey, bee eater, coucal, American coot, indigo bunting, king penguin, spoonbill, limpkin, quail, kite | prairie chicken, red-breasted merganser, albatross, water ouzel, goose, oystercatcher, American egret, hen, lorikeet, ruffed grouse |
| reptile | Gila monster, agama, triceratops, African chameleon, thunder snake, Indian cobra, green snake, mud turtle, water snake, loggerhead | sidewinder, leatherback turtle, boa constrictor, garter snake, terrapin, box turtle, ringneck snake, rock python, American chameleon, green lizard |
| arthropod | rock crab, black and gold garden spider, tiger beetle, black widow, barn spider, leafhopper, ground beetle, fiddler crab, bee, walking stick | cabbage butterfly, admiral, lacewing, trilobite, sulphur butterfly, cicada, garden spider, leaf beetle, long-horned beetle, fly |
| mammal | Siamese cat, ibex, tiger, hippopotamus, Norwegian elkhound, dugong, colobus, Samoyed, Persian cat, Irish wolfhound | English setter, llama, lesser panda, armadillo, indri, giant schnauzer, pug, Doberman, American Staffordshire terrier, beagle |
| accessory | bib, feather boa, stole, plastic bag, bathing cap, cowboy boot, necklace, crash helmet, gasmask, maillot | hair slide, umbrella, pickelhaube, mitten, sombrero, shower cap, sock, running shoe, mortarboard, handkerchief |
| craft | catamaran, speedboat, fireboat, yawl, airliner, container ship, liner, trimaran, space shuttle, aircraft carrier | schooner, gondola, canoe, wreck, warplane, balloon, submarine, pirate, lifeboat, airship |
| equipment | volleyball, notebook, basketball, hand-held computer, tripod, projector, barbell, monitor, croquet ball, balance beam | cassette player, snorkel, horizontal bar, soccer ball, racket, baseball, joystick, microphone, tape player, reflex camera |
| furniture | wardrobe, toilet seat, file, mosquito net, four-poster, bassinet, chiffonier, folding chair, fire screen, shoji | studio couch, throne, crib, rocking chair, dining table, park bench, chest, window screen, medicine chest, barber chair |
| instrument | upright, padlock, lighter, steel drum, parking meter, cleaver, syringe, abacus, scale, corkscrew | maraca, saltshaker, magnetic compass, accordion, digital clock, screw, can opener, odometer, organ, screwdriver |
| man-made structure | castle, bell cote, fountain, planetarium, traffic light, breakwater, cliff dwelling, monastery, prison, water tower | suspension bridge, worm fence, turnstile, tile roof, beacon, street sign, maze, chainlink fence, bakery, drilling platform |
| wheeled vehicle | snowplow, trailer truck, racer, shopping cart, unicycle, motor scooter, passenger car, minibus, jeep, recreational vehicle | jinrikisha, golfcart, tow truck, ambulance, bullet train, fire engine, horse cart, streetcar, tank, Model T |
| produce | broccoli, corn, orange, cucumber, spaghetti squash, butternut squash, acorn squash, cauliflower, bell pepper, fig | pomegranate, mushroom, strawberry, lemon, head cabbage, Granny Smith, hip, ear, banana, artichoke |
+
+Table 8: Superclasses used for the ENTITY-13 task, along with the corresponding subpopulations that comprise the source and target domains.
+
+| Superclass | Source | Target |
| serpentes | green mamba, king snake, garter snake, thunder snake | boa constrictor, green snake, ringneck snake, rock python |
| passerine | goldfinch, brambling, water ouzel, chickadee | magpie, house finch, indigo bunting, bulbul |
| saurian | alligator lizard, Gila monster, American chameleon, green lizard | Komodo dragon, African chameleon, agama, banded gecko |
| arachnid | harvestman, barn spider, scorpion, black widow | wolf spider, black and gold garden spider, tick, tarantula |
| aquatic bird | albatross, red-backed sandpiper, crane, white stork | goose, dowitcher, limpkin, drake |
| crustacean | crayfish, spiny lobster, hermit crab, Dungeness crab | king crab, rock crab, American lobster, fiddler crab |
| carnivore | Italian greyhound, black-footed ferret, Bedlington terrier, basenji | flat-coated retriever, otterhound, Shih-Tzu, Boston bull |
| insect | lacewing, fly, grasshopper, sulphur butterfly | long-horned beetle, leafhopper, dung beetle, admiral |
| ungulate | llama, gazelle, zebra, ox | hog, hippopotamus, hartebeest, warthog |
| primate | baboon, howler monkey, Madagascar cat, chimpanzee | siamang, indri, capuchin, patas |
| bony fish | coho, tench, lionfish, rock beauty | sturgeon, puffer, eel, gar |
| barrier | breakwater, picket fence, turnstile, bannister | chainlink fence, stone wall, dam, worm fence |
| building | bookshop, castle, mosque, butcher shop | grocery store, toyshop, palace, beacon |
| electronic equipment | printer, pay-phone, microphone, computer keyboard | modem, cassette player, monitor, dial telephone |
| footwear | clog, Loafer, maillot, running shoe | sandal, knee pad, cowboy boot, Christmas stocking |
| garment | academic gown, apron, miniskirt, fur coat | jean, vestment, sarong, swimming trunks |
| headdress | pickelhaube, hair slide, shower cap, bonnet | bathing cap, cowboy hat, bearskin, crash helmet |
| home appliance | washer, microwave, Crock Pot, vacuum | toaster, espresso maker, space heater, dishwasher |
| kitchen utensil | measuring cup, cleaver, coffeepot, spatula | frying pan, cocktail shaker, tray, caldron |
| measuring instrument | digital watch, analog clock, parking meter, magnetic compass | barometer, wall clock, hourglass, digital clock |
| motor vehicle | limousine, school bus, moped, convertible | trailer truck, beach wagon, police van, garbage truck |
| musical instrument | French horn, maraca, grand piano, upright | acoustic guitar, organ, electric guitar, violin |
| neckwear | feather boa, neck brace, bib, Windsor tie | necklace, stole, bow tie, bolo tie |
| sports equipment | ski, dumbbell, croquet ball, ratchet | rugby ball, balance beam, horizontal bar, tennis ball |
| tableware | mixing bowl, water jug, beer glass, water bottle | goblet, wine bottle, coffee mug, plate |
| tool | quill, combination lock, padlock, screw | fountain pen, screwdriver, shovel, torch |
| vessel | container ship, lifeboat, aircraft carrier, trimaran | liner, wreck, catamaran, yawl |
| dish | potpie, mashed potato, pizza, cheeseburger | burrito, hot pot, meat loaf, hotdog |
| vegetable | zucchini, cucumber, butternut squash, artichoke | cauliflower, spaghetti squash, acorn squash, cardoon |
| fruit | strawberry, pineapple, jackfruit, Granny Smith | buckeye, corn, ear, acorn |
+
+Table 9: Superclasses used for the ENTITY-30 task, along with the corresponding subpopulations that comprise the source and target domains.
+
+| Superclass | Source | Target |
| salamander | eft, axolotl | common newt, spotted salamander |
| turtle | box turtle, leatherback turtle | loggerhead, mud turtle |
| lizard | whiptail, alligator lizard | African chameleon, banded gecko |
| snake | night snake, garter snake | sea snake, boa constrictor |
| spider | tarantula, black and gold garden spider | garden spider, wolf spider |
| grouse | ptarmigan, prairie chicken | ruffed grouse, black grouse |
| parrot | macaw, lorikeet | African grey, sulphur-crested cockatoo |
| crab | Dungeness crab, fiddler crab | rock crab, king crab |
| dog | bloodhound, Pekinese | Great Pyrenees, papillon |
| wolf | coyote, red wolf | white wolf, timber wolf |
| fox | grey fox, Arctic fox | red fox, kit fox |
| domestic cat | tiger cat, Egyptian cat | Persian cat, Siamese cat |
| bear | sloth bear, American black bear | ice bear, brown bear |
| beetle | dung beetle, rhinoceros beetle | ground beetle, long-horned beetle |
| butterfly | sulphur butterfly, admiral | cabbage butterfly, ringlet |
| ape | gibbon, orangutan | gorilla, chimpanzee |
| monkey | marmoset, titi | spider monkey, howler monkey |
+
+Table 10: Superclasses used for the LIVING-17 task, along with the corresponding subpopulations that comprise the source and target domains.
+
+| Superclass | Source | Target |
| bag | plastic bag, purse | mailbag, backpack |
| ball | volleyball, punching bag | ping-pong ball, soccer ball |
| boat | gondola, trimaran | catamaran, canoe |
| body armor | bulletproof vest, breastplate | chain mail, cuirass |
| bottle | pop bottle, beer bottle | wine bottle, water bottle |
| bus | trolleybus, minibus | school bus, recreational vehicle |
| car | racer, Model T | police van, ambulance |
| chair | folding chair, throne | rocking chair, barber chair |
| coat | lab coat, fur coat | kimono, vestment |
| digital computer | laptop, desktop computer | notebook, hand-held computer |
| dwelling | palace, monastery | mobile home, yurt |
| fence | worm fence, chainlink fence | stone wall, picket fence |
| hat | bearskin, bonnet | sombrero, cowboy hat |
| keyboard instrument | grand piano, organ | upright, accordion |
| mercantile establishment | butcher shop, barbershop | shoe shop, grocery store |
| outbuilding | greenhouse, apiary | barn, boathouse |
| percussion instrument | steel drum, marimba | drum, gong |
| pot | teapot, Dutch oven | coffeepot, caldron |
| roof | dome, vault | thatch, tile roof |
| ship | schooner, pirate | aircraft carrier, liner |
| skirt | hoopskirt, miniskirt | overskirt, sarong |
| stringed instrument | electric guitar, banjo | violin, acoustic guitar |
| timepiece | digital watch, stopwatch | parking meter, digital clock |
| truck | fire engine, pickup | tractor, forklift |
| wind instrument | oboe, sax | flute, bassoon |
| squash | spaghetti squash, acorn squash | zucchini, butternut squash |
+
+Table 11: Superclasses used for the NON-LIVING-26 task, along with the corresponding subpopulations that comprise the source and target domains.
+
+# A.6 ANNOTATOR TASK
+
+As described in Section 4.3, the goal of our human studies is to understand whether humans can classify images into superclasses even without knowing the semantic grouping. Thus, the task involved showing annotators two groups of images, each sampled from the source domain of a random superclass. Then, annotators were shown a new set of images from the target domain (or the source domain in the case of control) and were asked to assign each of them to one of the two groups. A screenshot of a (random) instance of our annotator task is shown in Figure 12.
+
+Each task contained 20 images from the source domain of each superclass and 12 images for annotators to classify (the images were rescaled and center-cropped to size $224 \times 224$ to match the input size used for model predictions). The two superclasses were randomly permuted at load time. To ensure good concentration of our accuracy estimates, for every superclass, we performed binary classification tasks w.r.t. 3 other (randomly chosen) superclasses. Further, we used 3 annotators per task and annotators were compensated $0.15 per task.
+
+Comparing with the original hierarchy. In order to compare our superclasses with those obtained by Huh et al. (2016) via WordNet clustering, we need to define a correspondence between them. To do so, for each of our tasks, we selected the clustering (either top-down or bottom-up) that had the closest number of superclasses. Following the terminology from that work, this mapping is: ENTITY-13 $\rightarrow$ DOWNUP-36, ENTITY-30 $\rightarrow$ UPDOWN-127, LIVING-17 $\rightarrow$ DOWNUP-753 (restricted to "living" nodes), and NON-LIVING-26 $\rightarrow$ DOWNUP-345 (restricted to "non-living" nodes).
+
+[Screenshot text from the annotation interface: "Classify images into one of the following groups. Below you are shown example images from two groups. Your task will be to look at new images and determine the group each of them belongs to." Example images are shown for Group 1 and Group 2, followed by new images, each with a "Grp 1 / Grp 2" choice.]
+Figure 12: Sample MTurk annotation task to obtain human baselines for BREEDs benchmarks.
+
+# A.7 EVALUATING MODEL PERFORMANCE
+
+# A.7.1 MODEL ARCHITECTURES AND TRAINING
+
+The model architectures used in our analysis are listed in Table 13, for which we used standard implementations from the PyTorch library (https://pytorch.org/docs/stable/torchvision/models.html). For training, we use a batch size of 128, weight decay of $10^{-4}$, and the learning rates listed in Table 13. Models were trained until convergence: on ENTITY-13 and ENTITY-30, this required a total of 300 epochs, with 10-fold drops in learning rate every 100 epochs, while on LIVING-17 and NON-LIVING-26, models required a total of 450 epochs, with 10-fold learning rate drops every 150 epochs. To adapt models, we retrained the last (fully-connected) layer on the train split of the target domain, starting from the parameters of the source-trained model. We trained that layer using SGD with a batch size of 128 for 40,000 steps and chose the best learning rate out of [0.01, 0.1, 0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 8.0, 10.0, 11.0, 12.0], based on test accuracy.
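
The learning-rate sweep for last-layer retraining amounts to fitting the layer once per candidate rate and keeping the best. A self-contained toy version of this protocol (a logistic "last layer" on synthetic data, with a shortened grid; not the actual experiment code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(int)               # linearly separable toy labels
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

def retrain(lr, steps=200):
    """SGD on a logistic 'last layer'; returns test accuracy."""
    w = np.zeros(5)
    for _ in range(steps):
        i = rng.integers(len(X_tr))
        z = np.clip(X_tr[i] @ w, -30, 30)      # avoid overflow in exp
        p = 1.0 / (1.0 + np.exp(-z))
        w += lr * (y_tr[i] - p) * X_tr[i]
    return np.mean((X_te @ w > 0) == y_te)

lrs = [0.01, 0.1, 0.25, 0.5, 1.0]              # shortened version of the grid
accs = {lr: retrain(lr) for lr in lrs}
best_lr = max(accs, key=accs.get)              # selected by test accuracy
```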
+
+| Model | Learning Rate |
| alexnet | 0.01 |
| vgg11 | 0.01 |
| resnet18 | 0.1 |
| resnet34 | 0.1 |
| resnet50 | 0.1 |
| densenet121 | 0.1 |
+
+# A.7.2 MODEL PAIRWISE ACCURACY
+
+In order to make a fair comparison between the performance of models and human annotators on the BREEDS tasks, we evaluate model accuracy on pairs of superclasses. On images from that pair, we determine the model prediction to be the superclass for which the model's predicted probability is higher. A prediction is deemed correct if it matches the superclass label for the image. Repeating this process over random pairs of superclasses allows us to estimate model accuracy on the average-case binary classification task.
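
This evaluation can be written down in a few lines. A sketch with a toy probability matrix (one row of superclass probabilities per image); ties are broken toward the first class of the pair, which is an assumption not specified above:

```python
import numpy as np

probs = np.array([[0.5, 0.3, 0.2],   # per-image predicted probabilities
                  [0.1, 0.2, 0.7],
                  [0.4, 0.4, 0.2]])
labels = np.array([0, 2, 1])         # ground-truth superclass per image

def pairwise_accuracy(probs, labels, a, b):
    """Binary accuracy on images whose label is superclass a or b."""
    mask = np.isin(labels, [a, b])
    pred = np.where(probs[mask, a] >= probs[mask, b], a, b)
    return np.mean(pred == labels[mask])

acc = pairwise_accuracy(probs, labels, 0, 2)   # only images 0 and 1 qualify
```

Averaging this quantity over random pairs gives the binary-task accuracy that is compared against the human annotators.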
+
+# A.7.3 ROBUSTNESS INTERVENTIONS
+
+For model training, we use the hyperparameters provided in Appendix A.7.1. Additional intervention-specific hyperparameters are listed in Appendix Table 14. Due to computational constraints, we trained a restricted set of model architectures with robustness interventions—ResNet-18 and ResNet-50 for adversarial training, and ResNet-18 and ResNet-34 for all others. Adversarial training was implemented using the robustness library, while random erasing was implemented using PyTorch transforms.
+
+Table 13: Models used in our analysis.
+
+| Eps | Step size | #Steps |
| 0.5 | 0.4 | 3 |
| 1 | 0.8 | 3 |
+
+(a) PGD-training (Madry et al., 2018)
+
+
+
+(b) Gaussian noise
+
+| Probability | Scale | Ratio |
| 0.5 | 0.02 - 0.33 | 0.3 - 3.3 |
+
+(c) Random erasing
+
+Table 14: Additional hyperparameters for robustness interventions.
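
As a reference for the Table 14c parameters, here is a simplified NumPy rendition of the transform: with probability `p`, a rectangle whose area fraction lies in `scale` and aspect ratio in `ratio` is filled with random noise. The experiments themselves use torchvision's `RandomErasing`; this sketch only illustrates the semantics of the hyperparameters.

```python
import numpy as np

def random_erase(img, p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3), rng=None):
    """Erase a random rectangle of `img` with Gaussian noise (simplified)."""
    rng = rng or np.random.default_rng()
    if rng.random() > p:                       # skip with probability 1 - p
        return img
    H, W = img.shape[:2]
    for _ in range(10):                        # retry until the box fits
        area = rng.uniform(*scale) * H * W     # erased area, in pixels
        r = rng.uniform(*ratio)                # aspect ratio h / w
        h, w = int(round((area * r) ** 0.5)), int(round((area / r) ** 0.5))
        if 0 < h < H and 0 < w < W:
            y, x = rng.integers(H - h), rng.integers(W - w)
            out = img.copy()
            out[y:y + h, x:x + w] = rng.normal(size=(h, w) + img.shape[2:])
            return out
    return img

erased = random_erase(np.zeros((32, 32)), p=1.0, rng=np.random.default_rng(0))
```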
+
+# B ADDITIONAL RELATED WORK
+
+In Section 2, we provide an overview of prior work that is focused on evaluating model robustness to distribution shift. In Section 6, we discuss existing benchmarks that are most similar to our work. Here, we discuss other research directions related to model robustness and generalization.
+
+Distributional robustness. Distribution shifts that are small with respect to some $f$-divergence have been studied in prior theoretical work (Ben-Tal et al., 2013; Duchi et al., 2016; Esfahani & Kuhn, 2018; Namkoong & Duchi, 2016). However, this notion of robustness is typically too pessimistic to capture realistic data variations (Hu et al., 2018). Distributional robustness has also been connected to causality (Meinshausen, 2018): here, the typical approach is to inject spurious correlations into the dataset, and assess to what extent models rely on them for their predictions (Heinze-Deml & Meinshausen, 2017; Arjovsky et al., 2019; Sagawa et al., 2020).
+
+Domain adaptation and transfer learning. The goal here is to adapt models to the target domain with relatively few samples from it (Ben-David et al., 2007; Saenko et al., 2010; Ganin & Lempitsky, 2015; Courty et al., 2016; Gong et al., 2016; Donahue et al., 2014; Sharif Razavian et al., 2014). In domain adaptation, the task is the same in both domains, while in transfer learning, the task itself could vary. In a similar vein, the field of domain generalization aims to generalize to samples from a different domain (e.g., from ClipArt to photos) by training on a number of explicitly annotated domains (Muandet et al., 2013; Li et al., 2017; Peng et al., 2019).
+
+Zero-shot learning. Work in this domain focuses on learning to recognize previously unseen classes (Lampert et al., 2009; Xian et al., 2017), typically described via a semantic embedding (Lampert et al., 2009; Mikolov et al., 2013; Socher et al., 2013; Frome et al., 2013; Romera-Paredes & Torr, 2015). This differs from our setup, where the focus is on generalization to unseen subpopulations for the same set of classes.
+
+# C ADDITIONAL EXPERIMENTAL RESULTS
+
+# C.1 HUMAN BASELINES FOR BREEDS TASKS
+
+In Section 4.3, we evaluate human performance on binary versions of our BREEDS tasks. Appendix Figures 15a and 15b show the distribution of annotator accuracy over different pairs of superclasses for test data sampled from the source and target domains respectively.
+
+(a) Source domain (no subpopulation shift)
+
+(b) Target domain (with subpopulation shift)
+
+
+Figure 15: Distribution of annotator accuracy over pairwise superclass classification tasks. We observe that human annotators consistently perform better on tasks constructed using our modified ImageNet class hierarchy (i.e., BREEDS) as opposed to those obtained directly from WordNet.
+
+# C.2 MODEL EVALUATION
+
+In Figures 16-18, we visualize model performance over BREEDS superclasses for different model architectures. We observe that, in general, models perform fairly uniformly over classes when the test data is drawn from the source domain. This indicates that the tasks are well-calibrated—the various superclasses are of comparable difficulty. At the same time, we see that model robustness to subpopulation shift, i.e., the drop in accuracy on the target domain, varies widely over superclasses. This could be either due to some superclasses being broader by construction or due to models being more sensitive to subpopulation shift for some classes.
+
+Figure 16: Per-class source and target accuracies for AlexNet on BREEDS tasks.
+
+Figure 17: Per-class source and target accuracies for ResNet-50 on BREEDS tasks.
+
+Figure 18: Per-class source and target accuracies for DenseNet-121 on BREEDS tasks.
+
+# C.2.1 EFFECT OF DIFFERENT SPLITS
+
+As described in Section 3, to create BREEDS tasks, we first identify a set of relevant superclasses (at the chosen depth in the hierarchy), and then partition their subpopulations between the source and target domains. For all the tasks listed in Table 2, the superclasses are balanced—each comprises the same number of subpopulations. To ensure this is the case, the desired number of subpopulations is chosen at random from among each superclass's subpopulations. These subpopulations are then randomly split between the source and target domains.
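
The balanced construction and the random split can be sketched in a few lines (toy superclass membership; the actual benchmark uses the calibrated hierarchy and the full ImageNet class lists):

```python
import random

def make_split(superclasses, n_sub, seed=0):
    """Sample n_sub subclasses per superclass; split them across domains."""
    rng = random.Random(seed)
    source, target = {}, {}
    for sc, subs in superclasses.items():
        chosen = rng.sample(subs, n_sub)       # balance every superclass
        source[sc] = chosen[: n_sub // 2]      # random disjoint halves
        target[sc] = chosen[n_sub // 2:]
    return source, target

toy = {"dog": ["beagle", "pug", "papillon", "basenji", "bloodhound", "Pekinese"],
       "cat": ["tabby", "Persian cat", "Siamese cat", "Egyptian cat"]}
src, tgt = make_split(toy, n_sub=4)
```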
+
+Instead of randomly partitioning subpopulations (of a given superclass) between the two domains, we could instead craft partitions to be more/less adversarial as illustrated in Figure 19. Specifically, we could control how similar the subpopulations in the target domain are to those in the source domain. For instance, a split would be less adversarial (good) if subpopulations in the source and target domain share a common parent. On the other hand, we could make a split more adversarial (bad) by ensuring a greater degree of separation (in terms of distance in the hierarchy) between the source and target domain subpopulations.
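
The two extremes can be illustrated concretely. Assuming subclasses are grouped by their parent node in the hierarchy (toy groups below), a less adversarial split draws source and target subpopulations from every parent, while a more adversarial one sends each parent's children entirely to one domain:

```python
# Children of one superclass ("dog"), grouped by their hierarchy parent.
groups = {"terrier": ["Boston bull", "Scotch terrier"],
          "hound":   ["beagle", "bloodhound"]}

def good_split(groups):
    """Source and target each get a child of every parent (similar domains)."""
    src = [kids[0] for kids in groups.values()]
    tgt = [kids[1] for kids in groups.values()]
    return src, tgt

def bad_split(groups):
    """Each parent's children go entirely to one domain (distant domains)."""
    parents = list(groups)
    return list(groups[parents[0]]), list(groups[parents[1]])

g_src, g_tgt = good_split(groups)   # one terrier and one hound in each domain
b_src, b_tgt = bad_split(groups)    # terriers vs. hounds
```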
+
+
+Figure 19: Different ways to partition the subpopulations of a given superclass into the source and target domains. Depending on how closely related the subpopulations in the two domains are, we can construct splits that are more/less adversarial.
+
+We now evaluate model performance under such variations in the nature of the splits themselves—see Figure 20. As expected, models perform comparably well on test data from the source domain, independent of how the subpopulations are partitioned into the two domains. However, model robustness to subpopulation shift varies considerably based on the nature of the split—it is lowest for the most adversarially chosen split. Finally, we observe that retraining the linear layer on data from the target domain recovers a considerable fraction of the accuracy drop in all cases—indicating that even for the more adversarial splits, models do learn features that transfer well to unknown subpopulations.
+
+# C.2.2 ROBUSTNESS INTERVENTIONS
+
+In Tables 22 and 23, we present the raw accuracies of models trained using various train-time robustness interventions.
+
+
+(a) ENTITY-13 task
+
+(b) ENTITY-30 task
+
+
+Figure 20: Model robustness as a function of the nature of subpopulation shift within specific BREEDS tasks. We vary how the underlying subpopulations of each superclass are split between the source and target domain—we compare random splits (used in the majority of our analysis), to ones that are more (bad) or less adversarial (good). When models are tested on samples from the source domain, they perform equally well across different splits, as one might expect. However, under subpopulation shift (i.e., on samples from the target domain), model robustness varies drastically, and is considerably worse when the split is more adversarial. Yet, for all the splits, models have comparable target accuracy after retraining their final layer.
+
+
+Figure 21: Target accuracy of models after they have been retrained (only the final linear layer) on data from the target domain (with $95\%$ bootstrap confidence intervals). Models trained with robustness interventions often have higher target accuracy than standard models post retraining.
+
+| ResNet-18 |
| Task | ε | Source Acc. (%) | Target Acc. (%) | Target-RT Acc. (%) |
| ENTITY-13 | 0 | 90.91 ± 0.73 | 61.52 ± 1.23 | 76.71 ± 1.09 |
| 0.5 | 89.23 ± 0.80 | 61.10 ± 1.23 | 74.92 ± 1.04 |
| 1.0 | 88.45 ± 0.81 | 58.53 ± 1.26 | 73.35 ± 1.11 |
| ENTITY-30 | 0 | 87.88 ± 0.89 | 49.96 ± 1.31 | 73.05 ± 1.17 |
| 0.5 | 85.68 ± 0.91 | 48.93 ± 1.34 | 71.34 ± 1.14 |
| 1.0 | 84.23 ± 0.91 | 47.66 ± 1.23 | 70.27 ± 1.17 |
| LIVING-17 | 0 | 92.01 ± 1.30 | 58.21 ± 2.32 | 83.38 ± 1.79 |
| 0.5 | 90.35 ± 1.35 | 55.79 ± 2.44 | 83.00 ± 1.89 |
| 1.0 | 88.56 ± 1.50 | 53.89 ± 2.36 | 80.90 ± 1.92 |
| NON-LIVING-26 | 0 | 88.09 ± 1.28 | 41.87 ± 2.01 | 73.52 ± 1.71 |
| 0.5 | 86.28 ± 1.32 | 41.02 ± 1.91 | 72.41 ± 1.71 |
| 1.0 | 85.19 ± 1.38 | 40.23 ± 1.92 | 70.61 ± 1.73 |
+
+| ResNet-50 |
| Task | ε | Source Acc. (%) | Target Acc. (%) | Target-RT Acc. (%) |
| ENTITY-13 | 0 | 91.54 ± 0.64 | 62.48 ± 1.16 | 79.32 ± 1.01 |
| 0.5 | 89.87 ± 0.80 | 63.01 ± 1.15 | 80.14 ± 1.00 |
| 1.0 | 89.71 ± 0.74 | 61.21 ± 1.22 | 78.58 ± 0.98 |
| ENTITY-30 | 0 | 89.26 ± 0.78 | 51.18 ± 1.24 | 77.60 ± 1.17 |
| 0.5 | 87.51 ± 0.88 | 50.72 ± 1.28 | 78.92 ± 1.06 |
| 1.0 | 86.63 ± 0.88 | 50.99 ± 1.27 | 78.63 ± 1.03 |
| LIVING-17 | 0 | 92.40 ± 1.28 | 58.22 ± 2.42 | 85.96 ± 1.72 |
| 0.5 | 90.79 ± 1.55 | 55.97 ± 2.38 | 87.22 ± 1.66 |
| 1.0 | 89.64 ± 1.47 | 54.64 ± 2.48 | 85.63 ± 1.73 |
| NON-LIVING-26 | 0 | 88.13 ± 1.30 | 41.82 ± 1.86 | 76.58 ± 1.69 |
| 0.5 | 88.20 ± 1.20 | 42.57 ± 2.03 | 78.84 ± 1.62 |
| 1.0 | 86.17 ± 1.36 | 41.69 ± 1.96 | 76.16 ± 1.61 |
+
+Table 22: Effect of adversarial training on model robustness to subpopulation shift. All models are trained on samples from the source domain—either using standard training ( $\varepsilon = 0.0$ ) or using adversarial training. Models are then evaluated in terms of: (a) source accuracy, (b) target accuracy and (c) target accuracy after retraining the linear layer of the model with data from the target domain. Confidence intervals (95%) obtained via bootstrapping. Maximum task accuracy over $\varepsilon$ (taking into account confidence interval) shown in bold.
+
+| ResNet-18 |
| Task | Intervention | Source Acc. (%) | Target Acc. (%) | Target-RT Acc. (%) |
| ENTITY-13 | Standard | 90.91 ± 0.73 | 61.52 ± 1.23 | 76.71 ± 1.09 |
| Erase Noise | 91.01 ± 0.68 | 62.79 ± 1.27 | 78.10 ± 1.09 |
| Gaussian Noise | 77.00 ± 1.04 | 47.90 ± 1.21 | 70.37 ± 1.17 |
| Stylized ImageNet | 76.85 ± 1.00 | 50.18 ± 1.21 | 65.91 ± 1.17 |
| ENTITY-30 | Standard | 87.88 ± 0.89 | 49.96 ± 1.31 | 73.05 ± 1.17 |
| Erase Noise | 88.09 ± 0.80 | 49.98 ± 1.31 | 74.27 ± 1.15 |
| Gaussian Noise | 74.12 ± 1.16 | 35.79 ± 1.21 | 65.62 ± 1.28 |
| Stylized ImageNet | 70.96 ± 1.16 | 37.67 ± 1.21 | 60.45 ± 1.22 |
| LIVING-17 | Standard | 92.01 ± 1.30 | 58.21 ± 2.32 | 83.38 ± 1.79 |
| Erase Noise | 93.09 ± 1.27 | 59.60 ± 2.40 | 85.12 ± 1.71 |
| Gaussian Noise | 80.13 ± 1.99 | 46.16 ± 2.57 | 77.31 ± 2.08 |
| Stylized ImageNet | 79.21 ± 1.85 | 43.96 ± 2.38 | 72.74 ± 2.09 |
| NON-LIVING-26 | Standard | 88.09 ± 1.28 | 41.87 ± 2.01 | 73.52 ± 1.71 |
| Erase Noise | 88.68 ± 1.18 | 43.17 ± 2.10 | 73.91 ± 1.78 |
| Gaussian Noise | 78.14 ± 1.60 | 35.13 ± 1.94 | 67.79 ± 1.79 |
| Stylized ImageNet | 71.43 ± 1.73 | 30.56 ± 1.75 | 61.83 ± 1.98 |
+
+| ResNet-34 |
| Task | Intervention | Source Acc. (%) | Target Acc. (%) | Target-RT Acc. (%) |
| ENTITY-13 | Standard | 91.75 ± 0.70 | 63.45 ± 1.13 | 78.07 ± 1.02 |
| Erase Noise | 91.76 ± 0.70 | 62.71 ± 1.25 | 77.43 ± 1.06 |
| Gaussian Noise | 81.60 ± 0.97 | 50.69 ± 1.28 | 71.50 ± 1.13 |
| Stylized ImageNet | 78.66 ± 0.94 | 51.05 ± 1.30 | 67.38 ± 1.16 |
| ENTITY-30 | Standard | 88.81 ± 0.81 | 51.68 ± 1.28 | 75.12 ± 1.11 |
| Erase Noise | 89.07 ± 0.82 | 51.04 ± 1.27 | 74.88 ± 1.08 |
| Gaussian Noise | 75.05 ± 1.11 | 38.31 ± 1.26 | 67.47 ± 1.22 |
| Stylized ImageNet | 72.51 ± 1.10 | 38.98 ± 1.22 | 61.65 ± 1.25 |
| LIVING-17 | Standard | 92.83 ± 1.19 | 59.74 ± 2.27 | 85.46 ± 1.83 |
| Erase Noise | 92.96 ± 1.32 | 61.13 ± 2.30 | 85.66 ± 1.78 |
| Gaussian Noise | 84.06 ± 1.71 | 48.38 ± 2.44 | 78.79 ± 1.91 |
| Stylized ImageNet | 80.94 ± 2.00 | 44.16 ± 2.43 | 72.77 ± 2.18 |
| NON-LIVING-26 | Standard | 89.64 ± 1.17 | 43.03 ± 1.99 | 74.99 ± 1.66 |
| Erase Noise | 89.62 ± 1.31 | 43.53 ± 1.89 | 75.04 ± 1.70 |
| Gaussian Noise | 79.26 ± 1.61 | 34.89 ± 1.91 | 68.07 ± 1.78 |
| Stylized ImageNet | 71.49 ± 1.65 | 31.10 ± 1.80 | 62.94 ± 1.90 |
+
+Table 23: Effect of various train-time interventions on model robustness to subpopulation shift. All models are trained on samples from the source domain. Models are then evaluated in terms of: (a) source accuracy, (b) target accuracy and (c) target accuracy after retraining the linear layer of the model with data from the target domain. Confidence intervals (95%) obtained via bootstrapping. Maximum task accuracy over interventions (taking into account confidence interval) shown in bold.
\ No newline at end of file
diff --git a/breedsbenchmarksforsubpopulationshift/images.zip b/breedsbenchmarksforsubpopulationshift/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d531b15c1e5a81a68a3ddfc4a72e709ee6be5252
--- /dev/null
+++ b/breedsbenchmarksforsubpopulationshift/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43383c45be25e53144096b6e990ee6b1427eeb868f119677564988b1b69456c5
+size 2622071
diff --git a/breedsbenchmarksforsubpopulationshift/layout.json b/breedsbenchmarksforsubpopulationshift/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6153393ffac24d05b491d29df9c345b6c474bbd8
--- /dev/null
+++ b/breedsbenchmarksforsubpopulationshift/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a87083c9535108b8d561c4c06ccce00b97cecaab1fab715688f447ad89a1fd8d
+size 672713
diff --git a/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/2242de17-1ee1-4bb6-8fe4-09ad170c3986_content_list.json b/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/2242de17-1ee1-4bb6-8fe4-09ad170c3986_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..73ee497a22f441310e433708646c8ac3291027de
--- /dev/null
+++ b/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/2242de17-1ee1-4bb6-8fe4-09ad170c3986_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4bf6ed0844c4b77c1112a4761b01503b07317c40b7d3765fb315ac9d768b7e4a
+size 94901
diff --git a/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/2242de17-1ee1-4bb6-8fe4-09ad170c3986_model.json b/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/2242de17-1ee1-4bb6-8fe4-09ad170c3986_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2903318437f6f9f6635b476210d6c4dc65bf25b9
--- /dev/null
+++ b/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/2242de17-1ee1-4bb6-8fe4-09ad170c3986_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7c04eefe0c2b9f6cb0c3d46ea1048a66c4cab01f1b4b18ecd98e20aedc2db13
+size 109985
diff --git a/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/2242de17-1ee1-4bb6-8fe4-09ad170c3986_origin.pdf b/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/2242de17-1ee1-4bb6-8fe4-09ad170c3986_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..156b418ccb8bf2031f185cc734fec818ab902e05
--- /dev/null
+++ b/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/2242de17-1ee1-4bb6-8fe4-09ad170c3986_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:58d3998ef03cb7ecf1da02df9faf81c2c93b243bbe449a6658fd059eef37413f
+size 842512
diff --git a/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/full.md b/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f1a585725b875c0c9eed683f4f86c8f7e0926a46
--- /dev/null
+++ b/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/full.md
@@ -0,0 +1,276 @@
+# BSQ: EXPLORING BIT-LEVEL SPARSITY FOR MIXED-PRECISION NEURAL NETWORK QUANTIZATION
+
+Huanrui Yang, Lin Duan, Yiran Chen & Hai Li
+
+Department of Electrical and Computer Engineering
+
+Duke University
+
+Durham, NC 27708, USA
+
+{huanrui.yang, lin.duan, yiran.chen, hai.li}@duke.edu
+
+# ABSTRACT
+
+Mixed-precision quantization can potentially achieve the optimal tradeoff between performance and compression rate of deep neural networks, and thus has been widely investigated. However, it lacks a systematic method to determine the exact quantization scheme. Previous methods either examine only a small manually-designed search space or utilize a cumbersome neural architecture search to explore the vast search space. These approaches cannot lead to an optimal quantization scheme efficiently. This work proposes bit-level sparsity quantization (BSQ) to tackle mixed-precision quantization from the new angle of inducing bit-level sparsity. We consider each bit of quantized weights as an independent trainable variable and introduce a differentiable bit-sparsity regularizer. BSQ can induce all-zero bits across a group of weight elements and realize dynamic precision reduction, leading to a mixed-precision quantization scheme for the original model. Our method enables the exploration of the full mixed-precision space with a single gradient-based optimization process, with only one hyperparameter to trade off performance and compression. BSQ achieves both higher accuracy and higher bit reduction on various model architectures on the CIFAR-10 and ImageNet datasets compared to previous methods.
+
+# 1 INTRODUCTION
+
+Numerous deep neural network (DNN) models have been designed to tackle real-world problems and achieved beyond-human performance. DNN models commonly demand extremely high computation cost and large memory consumption, making the deployment and real-time processing on embedded and edge devices difficult (Han et al., 2015b; Wen et al., 2016). To address this challenge, model compression techniques, such as pruning (Han et al., 2015b; Wen et al., 2016; Yang et al., 2020), factorization (Jaderberg et al., 2014; Zhang et al., 2015) and fixed-point quantization (Zhou et al., 2016; Wu et al., 2019; Dong et al., 2019), have been extensively studied. Among them, fixed-point quantization works directly on the data representation by converting weight parameters originally in the 32-bit floating-point form to low-precision values in a fixed-point format. For a DNN model, its quantized version requires much less memory for weight storage. Moreover, it can better utilize fixed-point processing units in mobile and edge devices to run much faster and more efficiently.
+
+Typically, model compression techniques aim to reduce a DNN model size while maintaining its performance. The two optimization objectives in this tradeoff, however, have a contrary nature: the performance can be formulated as a differentiable loss function $\mathcal{L}(W)$ w.r.t. the model's weights $W$ ; yet the model size, typically measured by the number of non-zero parameters or operations, is a discrete function determined mainly by the model architecture. To co-optimize the performance and model size, some previous pruning and factorization methods relax the representation of model size as a differentiable regularization term $R(W)$ . For example, group Lasso (Wen et al., 2016) and DeepHoyer (Yang et al., 2020) induce weight sparsity for pruning, and the attractive force regularizer (Wen et al., 2017) and nuclear norm (Xu et al., 2018) are utilized to induce low rank. The combined objective $\mathcal{L}(W) + \alpha R(W)$ can be directly minimized with a gradient-based optimizer for optimizing the performance and model size simultaneously. Here, the hyperparameter $\alpha$ controls the strength of the regularization and governs the performance-size tradeoff of the compressed model.
+
+Unlike for pruning and factorization, there is no well-defined differentiable regularization term that can effectively induce quantization schemes. Early works in quantization mitigate the tradeoff exploration complexity by applying the same precision to the entire model. This line of research focuses on improving the accuracy of ultra-low-precision DNN models, e.g., quantizing all the weights to 3 or fewer bits (Zhou et al., 2016; Zhang et al., 2018), or even to 1 bit (Rastegari et al., 2016). These models commonly incur significant accuracy loss, even after integrating emerging training techniques like the straight-through estimator (Bengio et al., 2013; Zhou et al., 2016), dynamic range scaling (Polino et al., 2018) and non-linear trainable quantizers (Zhang et al., 2018). As different layers of a DNN model have different sensitivities to precision reduction, a mixed-precision quantization scheme would be ideal for the performance-size tradeoff (Dong et al., 2019). There have also been accelerator designs that support the efficient inference of mixed-precision DNN models (Sharma et al., 2018). However, achieving the optimal layer-wise precision configuration requires exhaustively exploring the aforementioned discrete search space, whose size grows exponentially with the number of layers. Moreover, the dynamic change of each layer's precision cannot be formulated as a differentiable objective, which hinders the efficiency of the design space exploration. Prior studies (Wu et al., 2019; Wang et al., 2019) utilize neural architecture search (NAS), which suffers from extremely high search cost due to the large space of mixed-precision quantization schemes. Recently, Dong et al. (2019) proposed to rank each layer based on its Hessian information and then determine the relative precision order of the layers from this ranking. The method, however, still requires manually selecting the precision level for each layer.
+
+Here, we propose to revisit the fixed-point quantization process from a new angle of bit-level sparsity: decreasing the precision of a fixed-point number can be taken as forcing one or a few bits, most likely the least significant bit (LSB), to be zero; and reducing the precision of a layer is equivalent to zeroing out a specific bit of all the weight parameters of the layer. In other words, precision reduction can be viewed as increasing the layer-wise bit-level sparsity. By considering the bits of fixed-point DNN parameters as continuous trainable variables during DNN training, we can utilize a sparsity-inducing regularizer to explore the bit-level sparsity with gradient-based optimization, dynamically reduce the layer precision and produce a series of mixed-precision quantization schemes. More specifically, we propose the Bit-level Sparsity Quantization $(BSQ)$ method with the following contributions:
+
+- We propose a gradient-based training algorithm for bit-level quantized DNN models. The algorithm considers each bit of quantized weights as an independent trainable variable and enables gradient-based optimization with a straight-through estimator (STE).
+- We propose a bit-level group Lasso regularizer to dynamically reduce the weight precision of every layer and therefore induce mixed-precision quantization schemes.
+- BSQ uses only one hyperparameter, the strength of the regularizer, to trade-off the model performance and size, making the exploration more efficient.
+
+This work exclusively focuses on layer-wise mixed-precision quantization, which is the granularity considered in most previous works. However, the flexibility of BSQ enables it to explore mixed-precision quantization of any granularity with the same cost regardless of the search space size.
+
+# 2 RELATED WORKS ON DNN QUANTIZATION
+
+Quantization techniques convert floating-point weight parameters to low-precision fixed-point representations. Directly quantizing a pre-trained model inevitably introduces significant accuracy loss, so many early studies focus on how to finetune quantized models in low-precision configurations. As the quantized weights adopt discrete values, conventional gradient-based methods that are designed for continuous spaces cannot be directly used for training quantized models. To mitigate this problem, algorithms like DoReFa-Net utilize a straight-through estimator (STE) to approximate the quantized model training with trainable floating-point parameters (Zhou et al., 2016). As shown in Equation (1), a floating-point weight element $w$ is kept throughout the entire training process. Along the forward pass, the STE quantizes $w$ to the $n$-bit fixed-point representation $w_{q}$, which is used to compute the model output and loss $\mathcal{L}$. During the backward pass, the STE directly passes the gradient w.r.t. $w_{q}$ onto $w$, which enables $w$ to be updated with a standard gradient-based optimizer.
+
+$$
+\text{Forward}: w_{q} = \frac{1}{2^{n} - 1}\operatorname{Round}\left[(2^{n} - 1)\, w\right]; \quad \text{Backward}: \frac{\partial\mathcal{L}}{\partial w} = \frac{\partial\mathcal{L}}{\partial w_{q}}. \tag{1}
+$$
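
As a concrete illustration, the forward rounding of Equation (1) can be sketched in a few lines of Python (a toy scalar version with our own function name, not the authors' implementation):

```python
def dorefa_quantize(w: float, n: int) -> float:
    """Forward pass of the STE in Eq. (1): snap a weight in [0, 1]
    onto the uniform n-bit grid {0, 1/(2^n - 1), ..., 1}."""
    levels = 2 ** n - 1
    return round(levels * w) / levels

# With n = 3 the grid has 2^3 - 1 = 7 steps, so 0.40 snaps to 3/7.
w_q = dorefa_quantize(0.40, 3)
```

During the backward pass the rounding is treated as the identity, so the gradient w.r.t. $w_q$ is applied to $w$ unchanged.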
+
+Early studies revealed that the weights of different layers have different dynamic ranges. It is important to keep the dynamic range of each layer for maintaining the model performance, especially for quantized models. He et al. (2016b) and Polino et al. (2018) propose to explicitly keep track of the dynamic range of each layer by scaling all the weight elements in a layer to the range of [0, 1] at every training step, before applying the quantization STE. Other techniques, such as a learnable nonlinear quantizer function (Zhang et al., 2018) and incremental quantization (Zhou et al., 2017), are also useful in improving the performance of quantized models. However, it is still very difficult to quantize the entire DNN model to a unified ultra-low precision without incurring significant accuracy loss.
+
+Recent research shows that different layers in a DNN model contribute to the overall performance to varying extents. Therefore, a mixed-precision quantization scheme that assigns different precision to different layers (Wu et al., 2019; Dong et al., 2019) presents a better accuracy-compression tradeoff. The challenge lies in how to determine the quantization scheme, i.e., the precision of each layer, as it requires exploring a large, discrete search space. Some works design quantization criteria based on concepts like "noise gain" (Sakr & Shanbhag, 2018; 2019) to constrain the relationship between the precisions of the layers and thus largely reduce the search space; yet those criteria are often heuristic, preventing these methods from reaching ultra-low precision and finding the optimal tradeoff point between model size and accuracy. Other works utilize neural architecture search (NAS). For example, Wang et al. (2019) consider the precision assignment of each layer as an action and search for the optimal design policy via reinforcement learning. Wu et al. (2019) combine all possible design choices into a "stochastic super net" and approximate the optimal scheme via sampling. However, the cost of NAS methods scales up quickly as the quantization search space grows exponentially with the number of layers. Common practices for constraining the search cost include limiting the precision choices or designing the quantization scheme at a coarse granularity. A recent line of work attempts to rank layers by their importance, measured by sensitivity or Hessian information, and assign higher precision to more important layers (Dong et al., 2019). The exact precision of each layer, however, still needs to be manually selected, so these methods cannot adequately explore the whole search space for the optimal quantization scheme.
+
+# 3 THE BSQ METHOD
+
+BSQ aims to obtain an optimal mixed-precision quantization scheme through a single-pass training process of a quantized model. In this section, we first introduce how to convert a DNN model to the bit representation and propose a gradient-based algorithm for training the resulting bit-level model. A bit-level group Lasso regularizer is then proposed to induce precision reduction. Finally, we elaborate on the overall training objective of BSQ and the dynamic precision adjustment procedure.
+
+# 3.1 TRAINING THE BIT REPRESENTATION OF DNN
+
+Figure 1: An example of DNN training under the bit representation with precision $n = 3$. (a) Pipeline of converting the floating-point weight $W$ to the bit representation; (b) Training the bit-level model weight with STE.
+
+As illustrated in Figure 1(a), we convert a floating-point weight matrix $W$ of a pretrained network to its bit representation through a pipeline of scaling, quantization and binary conversion. Similar to the practice in (He et al., 2016b; Polino et al., 2018), we retain the dynamic range of $W$ by scaling all the elements to the range of [0, 1] before applying quantization. However, these prior works always scale the largest element to 1 at every training step to fully utilize all the quantized bins, which makes dynamic precision reduction impossible. Instead, our method conducts the scaling only once, right before the bit representation training. Formally, before converting $W$ to its bit representation, we first extract its dynamic range as $W = s \cdot W_{s}$, where $s = \max |W|$ is the scaling factor and $W_{s}$ is the scaled weight matrix. The absolute value of any element $w_{s}$ in $W_{s}$ is within the range of [0, 1]. We then apply an $n$-bit uniform quantization to the absolute value of $w_{s}$ as $w_{q} = \text{Round}[\left|w_{s}\right|\times (2^{n} - 1)] / (2^{n} - 1)$. Then $w_{q}$ can be exactly represented by an $n$-bit binary number as $w_{q} = \left[\sum_{b = 0}^{n - 1}w_{s}^{(b)}2^{b}\right] / (2^{n} - 1)$, where $w_{s}^{(b)}$ denotes the $b^{th}$ bit of the binary representation. At this point, $W$ in the floating-point form is replaced with
+
+$$
+W \equiv \operatorname{sign}(W) \odot s W_{q} \equiv \operatorname{sign}(W) \odot \frac{s}{2^{n} - 1}\sum_{b = 0}^{n - 1} W_{s}^{(b)} 2^{b}, \tag{2}
+$$
+
+where $\odot$ denotes the element-wise Hadamard product. We consider the bit representation $W_{s}^{(b)}$, where $b\in [0,n - 1]$, and the scaling factor $s$ as independent trainable variables in the training process.
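
The scale-quantize-binarize pipeline above can be sketched as follows (a minimal Python version working on a flat list of weights; the helper names are ours, and we assume $W$ is not all zero):

```python
def to_bit_representation(W, n):
    """Sketch of the Fig. 1(a) pipeline: extract s = max|W|, quantize |w|/s
    to n bits, and expand each weight into its n binary bits (LSB = bit 0)."""
    s = max(abs(w) for w in W)                         # scaling factor
    levels = 2 ** n - 1
    signs = [1 if w >= 0 else -1 for w in W]
    q_ints = [round(abs(w) / s * levels) for w in W]   # integers in [0, 2^n - 1]
    # bits[b][i] is bit b of weight i
    bits = [[(q >> b) & 1 for q in q_ints] for b in range(n)]
    return signs, s, bits

def reconstruct(signs, s, bits):
    """Rebuild W per Eq. (2): sign(W) * s/(2^n - 1) * sum_b W_s^(b) 2^b."""
    n = len(bits)
    levels = 2 ** n - 1
    return [signs[i] * s / levels * sum(bits[b][i] << b for b in range(n))
            for i in range(len(signs))]

W = [0.7, -0.3, 0.05]
signs, s, bits = to_bit_representation(W, 3)
W_hat = reconstruct(signs, s, bits)   # matches W up to 3-bit rounding error
```

The reconstruction error per weight is at most half a quantization step, i.e. $s/(2(2^n-1))$.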
+
+Note that $W_{s}^{(b)}$ is composed of binary values by definition and $\text{sign}(W)$ is a discrete function. Neither of them can be directly trained with gradient descent. To mitigate the binary constraint of $W_{s}^{(b)}$ , we adopt the STE proposed by Bengio et al. (2013) during the training process. As shown in Equation (1), STE enables a quantized model to be trained with continuous floating-point weights. Specifically, the STE for the bit representation training is defined as:
+
+$$
+\text{Forward}: W_{q} = \frac{1}{2^{n} - 1}\operatorname{Round}\left[\sum_{b = 0}^{n - 1} W_{s}^{(b)} 2^{b}\right]; \quad \text{Backward}: \frac{\partial\mathcal{L}}{\partial W_{s}^{(b)}} = \frac{2^{b}}{2^{n} - 1}\frac{\partial\mathcal{L}}{\partial W_{q}}. \tag{3}
+$$
+
+STE relaxes the binary constraint and allows gradient updates for the elements in $W_{s}^{(b)}$ . As illustrated in Figure 1(b), during the forward pass, $s \cdot W_{q}$ will be used to reconstruct the model weight $W$ and compute the loss, which demonstrates the performance of the current model after quantization. The gradient w.r.t. $W_{q}$ from the back-propagation will be passed through the rounding function and updated on the continuous values of $W_{s}^{(b)}$ . The proposed bit representation can therefore be trained with any gradient-based optimizer.
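
The backward rule of Equation (3) is just a per-bit rescaling of the gradient w.r.t. $W_q$; a sketch (with a flat list standing in for the weight matrix, helper name ours):

```python
def bit_ste_backward(grad_wq, n):
    """Backward pass of Eq. (3): the gradient w.r.t. bit plane b equals
    the gradient w.r.t. W_q scaled by 2^b / (2^n - 1)."""
    return [[2 ** b / (2 ** n - 1) * g for g in grad_wq] for b in range(n)]

# For n = 3, a unit gradient on W_q spreads as 1/7, 2/7, 4/7 over the bits.
grads = bit_ste_backward([1.0], 3)
```

Higher-order bits thus receive proportionally larger updates, mirroring their larger contribution to $W_q$ in the forward pass.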
+
+The proposed bit representation training only incurs minimal computational and run-time memory overhead compared to the normal back-propagation procedure. From the memory consumption perspective, the bit representation training treats each bit as a separate floating-point trainable variable, so an $N$-bit model in bit representation has $N$ times more parameters and gradients to store than the baseline training. For actual run-time memory consumption, however, the hidden features between layers consume significantly more memory than weights and gradients. As the bit representation does not affect the hidden features, the increase in trainable variables does not lead to a significant increase in run-time memory consumption. From the perspective of computation cost, note that the gradient w.r.t. each $W_{s}^{(b)}$ can be computed as the gradient w.r.t. the corresponding $W_{q}$ scaled by a power of 2. So under an $N$-bit scheme there are only $N$ additional scaling operations per parameter compared to normal training. These additional computations are very cheap compared to the floating-point operations involved in back-propagation. The proposed bit representation training therefore only incurs minimal computational overhead compared to normal back-propagation.
+
+We restrict the value of $W_{s}^{(b)}$ within [0, 2] throughout the training, so that the corresponding $W_{q}$ has the chance to increase or decrease its precision in the "precision adjustment" step, which will be discussed in Section 3.3. This is enforced by trimming $W_{s}^{(b)}$ to 0 or 2 if it exceeds the range after a training step.
+
+To enable the dynamic update of $\text{sign}(W)$ during training, we separate the positive and negative elements in $W_s$ as $W_s = (W_p - W_n)$ before quantization. Here $W_p = W_s \odot \mathbb{1}(W_s \geq 0)$ contains all the positive elements and $W_n = -W_s \odot \mathbb{1}(W_s < 0)$ includes the absolute value of all the negative weight elements. $W_p$ and $W_n$ will be respectively converted to $W_p^{(b)}$ and $W_n^{(b)}$ by following the process in Equation (2), so that $W_s^{(b)} = W_p^{(b)} - W_n^{(b)}$ . Note that the replacement of $W_s^{(b)}$ with $W_p^{(b)} - W_n^{(b)}$ does not introduce any non-differentiable function. Therefore all elements in $W_p^{(b)}$ and $W_n^{(b)}$ can take continuous values between [0, 2] and be trained with the bit representation STE in Equation (3). As such, the original weight matrix $W$ is converted into trainable variables $W_p^{(b)}, W_n^{(b)}$ and $s$ throughout the BSQ training process.
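
The positive/negative split can be sketched as (our own helper, elementwise over a flat list):

```python
def split_signed(Ws):
    """Split scaled weights into nonnegative parts so that Ws == Wp - Wn,
    with Wp holding the positive elements and Wn the magnitudes of the
    negative ones (Sec. 3.1)."""
    Wp = [w if w >= 0 else 0.0 for w in Ws]
    Wn = [-w if w < 0 else 0.0 for w in Ws]
    return Wp, Wn
```

Both parts are then expanded into bit planes exactly as in Equation (2), so a weight can flip sign during training when its $W_n$ part outgrows its $W_p$ part.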
+
+# 3.2 BIT-LEVEL GROUP LASSO
+
+To induce the mixed-precision quantization scheme of a DNN model during training, we propose a bit-level group Lasso $(B_{GL})$ regularizer based on the group Lasso (Hastie et al., 2015) and apply it to $W_{p}^{(b)}$ and $W_{n}^{(b)}$ that are converted from a group of weights $W^{g}$ . The regularizer is defined as:
+
+$$
+B_{GL}\left(W^{g}\right) = \sum_{b = 0}^{n - 1}\left\|\left[W_{p}^{(b)}; W_{n}^{(b)}\right]\right\|_{2}, \tag{4}
+$$
+
+where $W_{p}^{(b)}$ and $W_{n}^{(b)}$ are bit representations converted from $W^{g}$, and $[\cdot ;\cdot ]$ denotes the concatenation of matrices. $B_{GL}$ can make a certain bit $b$ of all elements in both $W_{p}^{(b)}$ and $W_{n}^{(b)}$ zero simultaneously. The bit can thus be safely removed for precision reduction. Note that the granularity of the quantization scheme induced by $B_{GL}$ is determined by how $W^{g}$ is grouped. Our experiments organize $W^{g}$ in a layer-wise fashion, so all elements in a layer have the same precision, which is a common setting in previous mixed-precision quantization work. $W^{g}$ can also be arranged as any group of weight elements, such as block-wise, filter-wise or even element-wise if needed. Accordingly, the formulation of the regularizer needs to be revised to assist the exploration of mixed-precision quantization at the given granularity. The cost of evaluating and optimizing the regularizer remains the same under different granularity settings.
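
Equation (4) amounts to one L2 norm per bit plane, summed over bits; a minimal sketch (bit planes stored as flat lists, names ours):

```python
def bit_group_lasso(Wp_bits, Wn_bits):
    """Bit-level group Lasso of Eq. (4): each bit plane b forms one group
    [W_p^(b); W_n^(b)] whose L2 norm is added to the penalty."""
    total = 0.0
    for pb, nb in zip(Wp_bits, Wn_bits):
        total += sum(x * x for x in pb + nb) ** 0.5
    return total

# A bit plane that is zero in both W_p and W_n contributes nothing, so the
# regularizer pushes whole planes (not individual weights) toward zero.
```

As with the standard group Lasso, the non-squared group norms are what drive entire bit planes to exactly zero rather than merely shrinking them.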
+
+# 3.3 OVERALL TRAINING PROCESS
+
+The overall training process starts with converting each layer of a pretrained floating-point model to the bit representation with a relatively high initial precision (e.g., 8-bit fixed-point). BSQ training is then performed on the obtained bit representation with the bit-level group Lasso integrated into the training objective. Re-quantization steps are conducted periodically to identify the bit-level sparsity induced by the regularizer and allow dynamic precision adjustment. Once the mixed-precision quantization scheme is finalized, the achieved model is further finetuned for higher accuracy.
+
+Objective of BSQ training. For higher memory efficiency, it is desirable to find a mixed-precision quantization scheme that minimizes the total number of bits in the model. Thus, in BSQ training we penalize layers with more bits more heavily by applying a memory consumption-aware reweighing to $B_{GL}$ across layers. Specifically, the overall objective of training an $L$-layer DNN model with BSQ is formulated as:
+
+$$
+\mathcal{L} = \mathcal{L}_{CE}\left(W_{q}^{(1:L)}\right) + \alpha\sum_{l = 1}^{L}\frac{\#\text{Para}\left(W^{l}\right)\times\#\text{Bit}\left(W^{l}\right)}{\#\text{Para}\left(W^{(1:L)}\right)}\, B_{GL}\left(W^{l}\right). \tag{5}
+$$
+
+Here $\mathcal{L}_{CE}(W_q^{(1:L)})$ is the original cross entropy loss evaluated with the quantized weight $W_{q}$ acquired from the STE in Equation (3), $\alpha$ is a hyperparameter controlling the regularization strength, and $\# Para(W^l)$ and $\# Bit(W^l)$ respectively denote the parameter number and precision of layer $l$ . The loss function in Equation (5) enables a layer-wise adjustment in the regularization strength by applying a stronger regularization on a layer with higher memory usage.
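
The reweighed objective of Equation (5) can be sketched in plain Python (the per-layer triple layout is our own bookkeeping, not the authors' code):

```python
def bsq_objective(ce_loss, layers, alpha):
    """Total BSQ loss of Eq. (5). Each entry of `layers` is a triple
    (num_params, num_bits, b_gl_value) for one layer; layers holding more
    parameter-bits receive a proportionally stronger penalty."""
    total_params = sum(p for p, _, _ in layers)
    reg = sum(p * n_bits / total_params * b_gl for p, n_bits, b_gl in layers)
    return ce_loss + alpha * reg
```

Note how the weight on each layer's $B_{GL}$ term scales with that layer's share of the model's total parameter-bit budget, so large high-precision layers are pressured hardest.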
+
+Figure 2: Quantization schemes achieved with or without layer-wise regularization reweighing. The compression rate and the accuracy after finetuning are listed in the legend.
+
+Re-quantization and precision adjustment. As BSQ trains the bit representation of the model with floating-point variables, we perform re-quantization to convert $W_{p}^{(b)}$ and $W_{n}^{(b)}$ to exact binary values and identify the all-zero bits that can be removed for precision reduction. The re-quantization step reconstructs the quantized scaled weight $W_{q}^{\prime}$ from $W_{p}^{(b)}$ and $W_{n}^{(b)}$ as: $W_{q}^{\prime} = \text{Round}\left[\sum_{b=0}^{n-1}W_{p}^{(b)}2^{b}-\sum_{b=0}^{n-1}W_{n}^{(b)}2^{b}\right]$. As we allow the values of $W_{p}^{(b)}$ and $W_{n}^{(b)}$ to be within [0, 2], the reconstructed $W_{q}^{\prime}$ has a maximum absolute value of $2^{n+1}-2$. $W_{q}^{\prime}$ is therefore represented as an $(n+1)$-bit binary number, where each bit is denoted by $W_{q}^{(b)}$. After the re-quantization, we adjust the precision of each layer. Specifically, we first check $W_{q}^{(b)}$ from the MSB down to the LSB and remove the bits with all-zero elements until the first non-zero bit. The scaling factor $s$ of the layer remains unchanged during this process. A similar check is then conducted from the LSB up to the MSB. $s$ needs to be doubled each time a bit is removed from the LSB side, as all elements in $W_{q}^{\prime}$ are shifted right by one bit. Assuming the precision adjustment changes a layer's precision from $n$ to $n^{\prime}$, the scaling factor is further updated as $s^{\prime}=s\frac{2^{n^{\prime}}-1}{2^{n}-1}$. In this way, the bit representations of $W$ before and after the precision adjustment are equivalent, as indicated in Equation (6). The precision of an $n$-bit layer may change between 0 and $(n + 1)$ bits after the precision adjustment.
+
+$$
+W \equiv \frac{s}{2^{n} - 1}\sum_{b = 0}^{n - 1} W_{q}^{(b)} 2^{b} \equiv \frac{s^{\prime}}{2^{n^{\prime}} - 1}\sum_{b = 0}^{n^{\prime} - 1} W_{q}^{(b)} 2^{b}. \tag{6}
+$$
+
+As formulated in Equation (5), the regularization strength assigned to each layer will change with the quantization scheme of the model. The re-quantization and precision adjustment step will be performed periodically during the training process, with an interval of several training epochs. After each precision adjustment, we separate the positive elements and negative elements in $W_{q}^{\prime}$ to form the new $W_{p}^{(b)}$ and $W_{n}^{(b)}$ , respectively. The training can then resume with the newly adjusted $W_{p}^{(b)}$ and $W_{n}^{(b)}$ and scaling factor $s'$ . It is worth mentioning that $sW_{q}$ from the forward pass STE remains unchanged before and after the re-quantization and precision adjustment, so the model performance and the gradient from the loss $\mathcal{L}_{CE}$ will not be affected. The interval between re-quantizations needs to be carefully chosen: it shall promptly and properly adjust the regularization strength for stable convergence. The ablation study on re-quantization interval selection is presented in Appendix B.1.
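
The precision adjustment on one layer's rounded integer weights can be sketched as follows (a simplified version with the scaling-factor update folded in; names are ours, and $n \geq 1$ is assumed):

```python
def adjust_precision(q_ints, s, n):
    """Drop all-zero bit planes of one layer: from the MSB down, then from
    the LSB up, keeping the represented weight s/(2^n - 1) * q unchanged
    per Eq. (6). `q_ints` holds the nonnegative rounded integer weights."""
    unit = s / (2 ** n - 1)          # weight value of one integer step
    # Remove all-zero MSBs: shrink n while the top bit is zero everywhere.
    while n > 0 and all((q >> (n - 1)) & 1 == 0 for q in q_ints):
        n -= 1
    # Remove all-zero LSBs: right-shifting every weight doubles the step size.
    while n > 0 and all(q & 1 == 0 for q in q_ints):
        q_ints = [q >> 1 for q in q_ints]
        unit *= 2
        n -= 1
    return q_ints, unit * (2 ** n - 1), n   # new integers, new s', new precision

# E.g. [0b0100, 0b0110, 0b0010] at 4 bits shrinks to [0b10, 0b11, 0b01] at 2 bits.
```

Returning `unit * (2**n - 1)` as the new scaling factor realizes both the doubling per removed LSB and the $s^{\prime}=s\frac{2^{n^{\prime}}-1}{2^{n}-1}$ update in one step.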
+
+Activation quantization. Since BSQ modifies only the precision of weights and does not affect the precision of activations, we predetermine the activation precision and fix it throughout the BSQ training process. The activations are quantized in the same way as proposed by Polino et al. (2018). For training stability, we use the ReLU-6 activation function for layers with 4-bit or higher activation precision, and use PACT (Choi et al., 2018) for layers with lower activation precision.
+
+Post-training finetuning. At the end of the BSQ training, we perform a final re-quantization and precision adjustment to get the final mixed-quantization scheme. The achieved model can be further finetuned under the obtained precision for improving the overall accuracy. As the quantization scheme is fixed, we adopt the quantization-aware training method proposed by Polino et al. (2018) for finetuning in our experiment.
+
+# 4 ABLATION STUDY
+
+We perform the ablation studies on key design choices of the BSQ algorithm. This section presents the effectiveness of layer-wise memory consumption-aware regularization reweighing and the model size-accuracy tradeoff under different regularization strengths. All experiments are conducted with ResNet-20 models (He et al., 2016a) with 4-bit activation on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). Detailed experiment setup and hyperparameter choices can be found in Appendix A.
+
+# 4.1 EFFECT OF LAYER-WISE REGULARIZATION REWEIGHING
+
+As stated in Equation (5), we propose to apply layer-wise memory consumption-aware reweighing on the $B_{GL}$ regularizer to penalize larger layers more during the BSQ training. Figure 2 compares the quantization scheme and the model performance achieved when performing the BSQ training with or without such a reweighing term. Here we set the regularization strength $\alpha$ to 5e-3 when training with the reweighing, and to 2e-3 when training without it, to achieve comparable compression rates. All the other hyperparameters are kept the same. As shown in the figure, training without the reweighing term leads to over-penalization of earlier layers with fewer parameters, while later layers with more parameters are not compressed enough. The achieved quantized model therefore has lower accuracy despite a smaller compression rate compared to the model achieved with layer-wise regularization reweighing. Although we only show one pair of comparisons here, the difference between BSQ training with and without the reweighing term is consistent when the regularization strength $\alpha$ is varied. Additional results with other $\alpha$ values are shown in Appendix B.2.
+
+Figure 3: Layer-wise precision comparison of the quantization schemes achieved under different regularization strengths.
+
+Table 1: Accuracy-#Bits tradeoff under different regularization strengths. "FT" stands for finetuning. The last row reports training from scratch with the quantization schemes achieved by BSQ.
+
+| Strength α | 3e-3 | 5e-3 | 7e-3 | 1e-2 | 2e-2 |
+| --- | --- | --- | --- | --- | --- |
+| #Bits per Para / Comp (×) | 3.02 / 10.60 | 2.25 / 14.24 | 1.66 / 19.24 | 1.37 / 23.44 | 0.87 / 36.63 |
+| BSQ acc before / after FT (%) | 91.30 / 92.60 | 90.98 / 92.32 | 90.42 / 91.48 | 90.35 / 91.16 | 85.77 / 89.49 |
+| Train from scratch acc (%) | 91.72 | 91.45 | 91.12 | 89.57 | 89.14 |
+
+# 4.2 ACCURACY-#BITS TRADEOFF UNDER DIFFERENT REGULARIZATION STRENGTHS
+
+We fix all the other hyperparameters while varying only the regularization strength $\alpha$ from 3e-3 to 2e-2 to control the tradeoff between the model size and accuracy achieved by BSQ. The quantization schemes achieved by running BSQ with different $\alpha$'s are shown in Figure 3, and the detailed comparison of the compression rate relative to the 32-bit floating-point model (denoted as "Comp") and the validation accuracy (denoted as "Acc") is summarized in Table 1. As shown in Figure 3, the relative ranking of the precision assignment is mostly consistent under different $\alpha$'s, in line with the previous observation that more important layers should be assigned higher precision. This effect is further illustrated in Appendix B.3, where we compare the quantization scheme achieved by BSQ with the layer importance measured in HAWQ (Dong et al., 2019). Furthermore, as $\alpha$ increases, the overall bit reduction increases at the cost of a small performance loss. This tradeoff is also observed on models trained with 2-bit or 3-bit activation, whose quantization schemes and performances are shown in Appendix B.4. Note that some layers reach 0-bit precision under large regularization strength, indicating that all their weights become zero and the layer can be skipped. This is possible because the shortcut connections in the ResNet architecture allow information to pass even if the weights of some layers are all zero. We also note that BSQ not only finds the desired mixed-precision quantization scheme, but also provides a model with higher performance under the same quantization scheme. As shown in Table 1, when training a model with the same quantization scheme as achieved by BSQ using the DoReFa-Net algorithm (Zhou et al., 2016) from scratch, the resulting accuracy is always lower than that of the BSQ model after finetuning.
+
+# 5 EXPERIMENTAL RESULTS
+
+In this section we compare BSQ with previous state-of-the-art methods. Here, ResNet-20 models are used for the comparison on the CIFAR-10 dataset, and ResNet-50 and Inception-V3 models (Szegedy et al., 2016) are utilized for the experiments on the ImageNet dataset (Russakovsky et al., 2015). The hyperparameters used for BSQ training and finetuning are listed in Appendix A. All the compression
+
+Table 2: Quantization results of ResNet-20 models on the CIFAR-10 dataset. BSQ is compared with DoReFa-Net (Zhou et al., 2016), PACT (Choi et al., 2018), LQ-Net (Zhang et al., 2018), DNAS (Wu et al., 2019) and HAWQ (Dong et al., 2019). "MP" denotes mixed-precision quantization.
+
+| Act. Prec. | Method | Weight Prec. | Comp (×) | Acc (%) | BSQ α | BSQ Comp (×) | BSQ Acc (%) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 32-bit | Baseline | 32 | 1.00 | 92.62 | | | |
+| 32-bit | LQ-Nets | 3 | 10.67 | 92.00 | 5e-3 | 14.24 | 92.77 |
+| 32-bit | DNAS | MP | 11.60 | 92.72 | 7e-3 | 19.24 | 91.87 |
+| 32-bit | LQ-Nets | 2 | 16.00 | 91.80 | | | |
+| 4-bit | HAWQ | MP | 13.11 | 92.22 | 5e-3 | 14.24 | 92.32 |
+| 3-bit | LQ-Nets | 3 | 10.67 | 91.60 | 2e-3 | 11.04 | 92.16 |
+| 3-bit | PACT | 3 | 10.67 | 91.10 | 5e-3 | 16.37 | 91.72 |
+| 3-bit | DoReFa | 3 | 10.67 | 89.90 | | | |
+| 2-bit | LQ-Nets | 2 | 16.00 | 90.20 | | | |
+| 2-bit | PACT | 2 | 16.00 | 89.70 | 5e-3 | 18.85 | 90.19 |
+| 2-bit | DoReFa | 2 | 16.00 | 88.20 | | | |
+
+Table 3: Quantization results of ResNet-50 and Inception-V3 models on the ImageNet dataset. BSQ is compared with DoReFa-Net (Zhou et al., 2016), PACT (Choi et al., 2018), LSQ (Esser et al., 2019), LQ-Net (Zhang et al., 2018), Deep Compression (DC) (Han et al., 2015a), Integer (Jacob et al., 2018), RVQ (Park et al., 2018), HAQ (Wang et al., 2019) and HAWQ (Dong et al., 2019).
+
+ResNet-50:
+
+| Method | Prec. | Comp (×) | Top1 (%) |
+| --- | --- | --- | --- |
+| Baseline | 32 | 1.00 | 76.13 |
+| DoReFa | 3 | 10.67 | 69.90 |
+| PACT | 3 | 10.67 | 75.30 |
+| LQ-Nets | 3 | 10.67 | 74.20 |
+| DC | 3 | 10.41 | 75.10 |
+| HAQ | MP | 10.57 | 75.30 |
+| LSQ | 3 | 10.67 | 75.80 |
+| BSQ 5e-3 | MP | 11.90 | 75.29 |
+| BSQ 7e-3 | MP | 13.90 | 75.16 |
+
+Inception-V3:
+
+| Method | Prec. | Comp (×) | Top1 (%) |
+| --- | --- | --- | --- |
+| Baseline | 32 | 1.00 | 77.21 |
+| Integer | 8 | 4.00 | 75.40 |
+| Integer | 7 | 4.57 | 75.00 |
+| RVQ | MP | 10.67 | 74.14 |
+| HAWQ | MP | 12.04 | 75.52 |
+| BSQ 1e-2 | MP | 11.38 | 76.60 |
+| BSQ 2e-2 | MP | 12.89 | 75.90 |
+
+rates reported in Table 2 and Table 3 are relative to the 32-bit floating-point model, and all reported accuracies are testing accuracies evaluated on models after finetuning.
+
+Table 2 reports the quantization results of ResNet-20 models on the CIFAR-10 dataset. Here we set the activation of the first convolutional layer and the final FC layer to 8 bits, while all the other activations are set to 4, 3 or 2 bits respectively to match the settings of previous methods. The reported 32-bit activation model performance is achieved by finetuning the 4-bit activation model under full-precision activation. The exact BSQ quantization schemes of the 4-bit activation models are listed in Figure 3, while those of the 2-bit and 3-bit activation models can be found in Appendix B.4. Compared to previous mixed-precision quantization methods, the model obtained by BSQ with 4-bit activation and $\alpha = 5\mathrm{e}{-3}$ has slightly higher accuracy than the model achieved by HAWQ (Dong et al., 2019), at a higher compression rate ($14.24\times$ vs. $13.11\times$). The same model with 32-bit activation obtains a $23\%$ higher compression rate with the same accuracy as the model found by DNAS (Wu et al., 2019), at a much lower training cost, as our method does not involve a costly neural architecture search. The advantage of BSQ is even larger compared to single-precision quantization methods (Zhou et al., 2016; Choi et al., 2018; Zhang et al., 2018): BSQ achieves both a higher compression rate and higher accuracy than all methods with the same activation precision.
+
+The results of BSQ and previous quantization methods on the ImageNet dataset are summarized in Table 3. The exact BSQ quantization schemes can be found in Appendix C. For ResNet models, the activations of the first and the final layer are set to 8 bits while all the other activations are set to 4 bits. For Inception-V3 models the activations of all the layers are set to 6 bits. On ResNet-50 models, BSQ with $\alpha = 5\mathrm{e} - 3$ achieves the same top-1 accuracy as PACT (Choi et al., 2018) and $0.5\%$ lower top-1 accuracy than the best available method LSQ (Esser et al., 2019), with a higher compression rate $(11.90\times$ vs. $10.67\times)$, showing a competitive accuracy-compression tradeoff. BSQ can further increase the compression rate of ResNet-50 to $13.90\times$ with $\alpha = 7\mathrm{e} - 3$, at only $0.13\%$ top-1 accuracy loss over the "5e-3" model. On Inception-V3 models, BSQ with $\alpha = 2\mathrm{e} - 2$ achieves both higher accuracy $(75.90\%$ vs. $75.52\%)$ and a higher compression rate $(12.89\times$ vs. $12.04\times)$ compared to the best previous method HAWQ (Dong et al., 2019). Adopting a smaller $\alpha = 1\mathrm{e} - 2$ lets BSQ achieve a $0.7\%$ accuracy improvement while trading off $\sim 10\%$ of the compression rate compared to the "2e-2" model.
+
+# 6 CONCLUSIONS
+
+In this work, we propose BSQ, which fully explores the accuracy-model size tradeoff of DNN mixed-precision quantization schemes with a differentiable training algorithm that uses the DNN's bit representation as trainable variables. A bit-level group Lasso regularizer with memory consumption-aware layer-wise reweighing is applied to induce bit-level sparsity, which leads to the dynamic adjustment of each layer's precision and finally a mixed-precision quantization scheme through a single-pass gradient-based training process. This enables BSQ to dynamically produce a series of quantization schemes trading off accuracy and model size, and to provide models with higher accuracy compared to training from scratch under the same quantization scheme. We apply BSQ to training ResNet-20 models on the CIFAR-10 dataset and ResNet-50 and Inception-V3 models on the ImageNet dataset. In all the experiments, BSQ demonstrates the ability to reach both better accuracy and a higher compression rate compared to previous quantization methods. Our results show that BSQ successfully fills the gap of inducing a mixed-precision quantization scheme with a differentiable regularizer, so as to effectively explore the tradeoff between accuracy and compression rate and find DNN models with both higher accuracy and fewer bits.
+
+# ACKNOWLEDGMENTS
+
+This work is supported in part by NSF CCF-1910299 and NSF CNS-1822085.
+
+# REFERENCES
+
+Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
+Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. PACT: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018.
+Zhen Dong, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. HAWQ: Hessian aware quantization of neural networks with mixed-precision. In Proceedings of the IEEE International Conference on Computer Vision, pp. 293-302, 2019.
+Steven K Esser, Jeffrey L McKinstry, Deepika Bablani, Rathinakumar Appuswamy, and Dharmendra S Modha. Learned step size quantization. arXiv preprint arXiv:1902.08153, 2019.
+Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015a.
+Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pp. 1135-1143, 2015b.
+Trevor Hastie, Robert Tibshirani, and Martin Wainwright. Statistical learning with sparsity: the lasso and generalizations. CRC press, 2015.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016a.
+Qinyao He, He Wen, Shuchang Zhou, Yuxin Wu, Cong Yao, Xinyu Zhou, and Yuheng Zou. Effective quantization methods for recurrent neural networks. arXiv preprint arXiv:1611.10176, 2016b.
+
+Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2704-2713, 2018.
+Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014.
+Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, CiteSeer, 2009.
+Eunhyeok Park, Sungjoo Yoo, and Peter Vajda. Value-aware quantization for training and inference of neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 580-595, 2018.
+A. Polino, R. Pascanu, and D. Alistarh. Model compression via distillation and quantization. arXiv preprint arXiv:1802.05668, 2018.
+Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In European conference on computer vision, pp. 525-542. Springer, 2016.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
+Charbel Sakr and Naresh Shanbhag. An analytical method to determine minimum per-layer precision of deep neural networks. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1090-1094. IEEE, 2018.
+Charbel Sakr and Naresh Shanbhag. Per-tensor fixed-point quantization of the back-propagation algorithm. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rkxaNja9Ym.
+Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Vikas Chandra, and Hadi Esmaeilzadeh. Bit fusion: Bit-level dynamically composable architecture for accelerating deep neural network. In 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA), pp. 764-775. IEEE, 2018.
+Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pp. 1139-1147, 2013.
+Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818-2826, 2016.
+Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, and Song Han. HAQ: Hardware-aware automated quantization with mixed precision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8612-8620, 2019.
+Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in neural information processing systems, pp. 2074-2082, 2016.
+Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Coordinating filters for faster deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 658-666, 2017.
+Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. FBNet: Hardware-aware efficient convnet design via differentiable neural architecture search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10734-10742, 2019.
+Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, and Hongkai Xiong. Trained rank pruning for efficient deep neural networks. arXiv preprint arXiv:1812.02402, 2018.
+Huanrui Yang, Wei Wen, and Hai Li. DeepHoyer: Learning sparser neural network with differentiable scale-invariant sparsity measures. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rylBK34FDS.
+
+Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, and Gang Hua. LQ-Nets: Learned quantization for highly accurate and compact deep neural networks. In Proceedings of the European conference on computer vision (ECCV), pp. 365-382, 2018.
+Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating very deep convolutional networks for classification and detection. IEEE transactions on pattern analysis and machine intelligence, 38(10): 1943-1955, 2015.
+Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quantization: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044, 2017.
+Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
+
+# A HYPERPARAMETER CHOICES IN THE EXPERIMENTS
+
+# A.1 CIFAR-10 EXPERIMENTS
+
+We use ResNet-20 models on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009) for all of our ablation studies and to evaluate the performance of BSQ. The CIFAR-10 dataset can be directly accessed through the dataset API provided in the "torchvision" Python package. We do not change the split between the training and the test set. Standard preprocessing procedures, including random crop with a padding of 4, random horizontal flip and normalization, are used on the training set to train the model. The validation set is normalized with the same mean and variance as the training set. We implement ResNet-20 models following the description in He et al. (2016a), and pretrain the model for 350 epochs. The learning rate is set to 0.1 initially, and decayed by 0.1 at epochs 150, 250 and 325. The weights of all the layers except batch normalization are then quantized to 8 bits before the BSQ training. The batch normalization layers are kept in floating-point format throughout the training process. Similar to previous quantization works, we also apply activation quantization during training. For 4-bit or higher activation precision we replace all the ReLU activation functions in the model with the ReLU6 activation function. For lower activation precision we use the trainable PACT activation (Choi et al., 2018) with weight decay 0.0001. These changes help achieve higher accuracy and better training stability when the activation is quantized, as they eliminate extremely large activation values. As BSQ does not consider activation quantization as an objective, we fix the activation precision throughout the BSQ training and the finetuning process.
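
To make the activation handling concrete, here is a minimal scalar sketch of k-bit uniform quantization of a ReLU6-clipped activation; the actual implementation operates on tensors and follows the DoReFa/PACT formulations cited above:

```python
def quantize_activation(x, bits, clip=6.0):
    """Uniformly quantize a ReLU6-style activation to `bits` bits.

    The input is first clipped to [0, clip] (ReLU6 when clip = 6.0),
    then rounded onto a uniform grid of 2**bits levels.
    """
    levels = 2 ** bits - 1
    x = min(max(x, 0.0), clip)               # ReLU6-style clipping
    return round(x / clip * levels) * clip / levels

# With 4 bits the grid step is 6 / 15 = 0.4, so 1.23 snaps to the
# nearest multiple of 0.4, and anything above 6 is clipped to 6.
print(quantize_activation(1.23, bits=4))
print(quantize_activation(9.0, bits=4))
```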
+
+We start the BSQ training with the 8-bit quantized pretrained model following the process described in Section 3.3. The BSQ training is done for 350 epochs, with the first 250 epochs using learning rate 0.1 and the rest using learning rate 0.01. Unless otherwise specified, the re-quantization and precision adjustment is done every 100 epochs, as well as after the BSQ training is finished, to adjust and finalize the quantization scheme. Different regularization strengths $\alpha$ are tried to explore the tradeoff between accuracy and compression rate. The exact $\alpha$ used for each set of experiments is reported alongside the results in the main article. For comparison with previous methods, we further finetune the achieved mixed-precision model with the DoReFa-Net algorithm (Zhou et al., 2016) while fixing the quantization scheme. The finetuning is performed for 300 epochs with an initial learning rate of 0.01, decayed by 0.1 at epochs 150 and 250. The "train from scratch" accuracy reported in Table 1 is achieved by first quantizing a pretrained floating-point model to the mixed-precision quantization scheme achieved by BSQ, then performing DoReFa-Net quantization-aware training on the model. The training is done for 350 epochs, with an initial learning rate of 0.1, decayed by 0.1 at epochs 150, 250 and 325. All the training tasks are optimized with the SGD optimizer (Sutskever et al., 2013) with momentum 0.9 and weight decay 0.0001, and the batch size is set to 128. All the training processes are done on a single Titan XP GPU.
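
The step-decay learning-rate schedules used above can be sketched with a small helper (a simplified stand-in for PyTorch's `MultiStepLR`; the milestone values are the ones stated in this appendix):

```python
def step_lr(epoch, base_lr=0.1, milestones=(150, 250, 325), gamma=0.1):
    """Step-decay schedule: multiply the base learning rate by `gamma`
    once for every milestone epoch that has already been passed."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Pretraining schedule: 0.1, decayed by 0.1 at epochs 150, 250 and 325.
print(step_lr(0))     # before any decay
print(step_lr(200))   # after the first decay
# The finetuning schedule reuses the same helper with its own milestones:
print(step_lr(200, base_lr=0.01, milestones=(150, 250)))
```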
+
+# A.2 IMAGENET EXPERIMENTS
+
+The ImageNet dataset is used to further compare BSQ with previous methods in Table 3. The ImageNet dataset is a large-scale color-image dataset containing 1.2 million images of 1,000 categories (Russakovsky et al., 2015), which has long been utilized as an important benchmark for image classification. In this paper, we use the "ILSVRC2012" version of the dataset, which can be found at http://www.image-net.org/challenges/LSVRC/2012/nonpub-downloads. We use all the data in the provided training set to train our model, and use the provided validation set to evaluate our model and report the testing accuracy. We follow the data reading and preprocessing pipeline suggested by the official PyTorch ImageNet example. For training images, we first perform a random sized crop with the desired input size, then apply random horizontal flipping and finally normalize the images before feeding them into the network. We use an input size of $224 \times 224$ for the ResNet-50 experiments, and a size of $299 \times 299$ for the Inception-V3 experiments. Validation images are resized, then center cropped to the desired input size and normalized before being used for testing. For both the ResNet-50 and the Inception-V3 model, the model architecture and the pretrained model provided in the "torchvision" package are directly utilized. The first convolutional layer of the ResNet-50 model and the first 5 convolutional layers of the Inception-V3 model are quantized to 8 bits, while all the other layers are quantized to 6 bits before the BSQ training. Similar to the CIFAR-10 experiments, the batch normalization layers are kept as floating-point.
+
+
+Figure 4: Range of testing accuracy and bit reduction rate achieved over 5 repeated runs with different random seeds. The solid line links the average performance; error bars mark the maximal and minimal performance achieved with each set of hyperparameters.
+
+We start the BSQ training with the quantized pretrained model. For both ResNet-50 and Inception-V3 models, the BSQ training is done for 90 epochs, with the first 30 epochs using learning rate 0.01 and the rest using learning rate 0.001. The re-quantization interval is set to 10 epochs for all the ImageNet experiments. The regularization strength $\alpha$ used is reported alongside the results in Table 3. The model after BSQ training is further finetuned with DoReFa-Net for 90 epochs, with an initial learning rate of 0.001 decayed by 0.1 after 30 epochs. All the models are optimized with the SGD optimizer with momentum 0.9 and weight decay 0.0001, and the batch size is set to 256 for all the experiments. Two Titan RTX GPUs are used in parallel for the BSQ training and finetuning of both ResNet-50 and Inception-V3 models.
+
+# B ADDITIONAL ABLATION STUDY RESULTS
+
+# B.1 CHOICE OF RE-QUANTIZATION INTERVAL
+
+We propose the layer-wise regularization reweighing in Section 3.3 and show its importance in Section 4.1. This reweighing can be more effective if we adjust the precision of each layer regularly throughout the BSQ training routine. The precision adjustment is done through periodic re-quantization. On the one hand, a smaller re-quantization interval helps the precision to be adjusted in time. On the other hand, it may make the training unstable due to the frequent change in the bit representation and the regularizer values. So here we gradually increase the re-quantization interval to find the best choice that reaches high and stable performance. Figure 4 demonstrates the stability and performance under re-quantization intervals of 20, 50 and 100 epochs, and compares them with the performance achieved without re-quantization during training. Each point in the figure corresponds to the averaged compression rate and accuracy over 5 repeated BSQ training runs with a fixed regularization strength $\alpha$ but different random seeds. The observations in the figure support our analysis: although re-quantization is important for reaching a better accuracy-#bits tradeoff, applying it too frequently makes the training unstable and hinders the overall performance. Compared to not performing re-quantization and applying it every 20 epochs, re-quantizing every 50 or 100 epochs yields a similarly better tradeoff between accuracy and compression rate. A re-quantization interval of 100 leads to a higher accuracy over a wider range of compression rates compared to the "Int 50" model, and the performance is more stable throughout the repeated trials. Therefore in all the other CIFAR-10 experiments we set the re-quantization interval to 100 epochs.
+
+# B.2 ADDITIONAL RESULTS ON REGULARIZATION REWEIGHING
+
+Figure 5 and Figure 6 compare the quantization schemes and the model performance achieved when performing the BSQ training with or without the memory consumption-aware reweighing of the bit-level group Lasso regularizer under additional choices of regularization strength $\alpha$. The $\alpha$ used for each set of experiments is chosen so that comparable compression rates are achieved with or without reweighing; the exact values are listed in the captions of the figures. All the other hyperparameters are kept the same. From both figures we can observe a consistent trend: training without the reweighing term leads to less precision assigned to earlier layers with fewer parameters, while later layers with more parameters are not compressed enough. Therefore the achieved quantized model has lower accuracy and a smaller compression rate compared to the model achieved with layer-wise regularization reweighing. This observation is consistent with the results shown in Section 4.1. All these results show that the memory consumption-aware reweighing proposed in BSQ training is crucial for generating models with both a higher compression rate and higher accuracy.
+
+Figure 5: Quantization schemes achieved with or without layer-wise regularization reweighing. The compression rate and the accuracy after finetuning are listed in the legend. $\alpha = 6\mathrm{e} - 3$ with reweighing and $\alpha = 3\mathrm{e} - 3$ without reweighing.
+
+Figure 6: Quantization schemes achieved with or without layer-wise regularization reweighing. The compression rate and the accuracy after finetuning are listed in the legend. $\alpha = 0.015$ with reweighing and $\alpha = 5\mathrm{e} - 3$ without reweighing.
+
+# B.3 QUANTIZATION SCHEME COMPARISON WITH HAWQ
+
+As discussed in Section 4.2 and shown in Figure 3, for the same model architecture the relative ranking of the precision assignment by BSQ is mostly consistent under different $\alpha$'s. Here we compare the quantization schemes achieved by BSQ with the "layer importance ranking" measured in HAWQ (Dong et al., 2019) to further analyze this consistency. HAWQ proposes to rank all the layers in the model with an importance score $S_{i} = \lambda_{i} / n_{i}$, where $\lambda_{i}$ denotes the top eigenvalue of the Hessian matrix of layer $i$, and $n_i$ represents the number of parameters in layer $i$. A layer with a higher $S_{i}$ is assigned a higher precision in the mixed-precision quantization scheme. The quantization schemes achieved by BSQ and HAWQ are compared in Figure 7, where the black dotted line shows the HAWQ scheme and the colored solid lines show the schemes achieved by BSQ under different $\alpha$. It can be observed that the relative ranking of BSQ's precision assignment is consistent with the ranking of precision in HAWQ, which to some extent shows that BSQ can dynamically identify the important layers during training and assign higher precision to them. Note that HAWQ can only produce the precision ranking of the layers, while the exact precision is designed manually. BSQ, on the other hand, is able to explicitly assign the precision of each layer during a single training process, and can dynamically trade off model size and accuracy by only changing $\alpha$. Thus BSQ can easily find better tradeoff points with both higher accuracy and a higher compression rate compared to HAWQ and other quantization methods, as discussed in Section 5.
+
+Figure 7: Layer-wise precision comparison between the quantization schemes achieved with BSQ and the scheme achieved with HAWQ (Dong et al., 2019) on the ResNet-20 model.
+
+Figure 8: Layer-wise precision comparison of the quantization schemes achieved under different regularization strengths with 2-bit activation.
+
+Table 4: Accuracy-#Bits tradeoff with 2-bit activation. "FT" stands for finetuning.
+
+| Strength α | 1e-3 | 2e-3 | 3e-3 | 5e-3 |
| #Bits per Para / Comp (×) | 3.77 / 8.48 | 2.86 / 11.20 | 2.26 / 14.13 | 1.70 / 18.85 |
| BSQ acc before / after FT (%) | 91.03 / 91.21 | 90.19 / 90.70 | 89.54 / 90.39 | 88.13 / 90.19 |
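
HAWQ's importance score $S_i = \lambda_i / n_i$ used in this comparison can be sketched in a few lines; the eigenvalues and layer sizes below are made up for illustration:

```python
def hawq_ranking(top_eigenvalues, param_counts):
    """Rank layers by the HAWQ importance score S_i = lambda_i / n_i.

    Layers with larger scores are considered more sensitive and get
    higher precision. Returns layer indices, most important first.
    """
    scores = [lam / n for lam, n in zip(top_eigenvalues, param_counts)]
    return sorted(range(len(scores)), key=lambda i: -scores[i])

# Toy example: layer 1 has the largest Hessian eigenvalue but also by far
# the most parameters, so the small layer 0 ends up ranked first.
order = hawq_ranking([2.0, 4.0, 0.5], [100, 10000, 1000])
print(order)  # [0, 2, 1]
```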
+
+# B.4 MODELS ACHIEVED UNDER DIFFERENT ACTIVATION PRECISION
+
+Having discussed the BSQ quantization schemes and model performance under 4-bit activation in Figure 3 and Table 1, here we show the tradeoff between model size and accuracy under different regularization strengths $\alpha$ with 2-bit activation in Figure 8 and Table 4, as well as with 3-bit activation in Figure 9 and Table 5. In both cases we observe that the relative ranking of the precision assignment is mostly consistent under different $\alpha$'s. As $\alpha$ increases, fewer bits are assigned to each layer, leading to an increasing overall bit reduction at the cost of a small performance loss. This tradeoff is consistent with our previous observations on the 4-bit activation models.
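
As a sanity check on the numbers in Tables 4 and 5, the reported compression rate is (up to rounding of the reported averages) simply 32 divided by the average bits per parameter:

```python
# (bits per parameter, reported compression) pairs from Table 4.
table4 = {"1e-3": (3.77, 8.48), "2e-3": (2.86, 11.20),
          "3e-3": (2.26, 14.13), "5e-3": (1.70, 18.85)}

for alpha, (bits, reported) in table4.items():
    predicted = 32 / bits
    # Small discrepancies come from rounding in the reported columns.
    print(f"alpha={alpha}: predicted {predicted:.2f}x, reported {reported:.2f}x")
```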
+
+# C DETAILED QUANTIZATION SCHEMES FOR IMAGENET EXPERIMENTS
+
+The quantization schemes of the reported ResNet-50 and Inception-V3 models can be found in Table 6 and Table 7 respectively.
+
+
+Figure 9: Layer-wise precision comparison of the quantization schemes achieved under different regularization strengths with 3-bit activation.
+
+Table 5: Accuracy-#Bits tradeoff with 3-bit activation. "FT" stands for finetuning.
+
+| Strength α | 2e-3 | 5e-3 | 8e-3 | 1e-2 |
| #Bits per Para / Comp (×) | 2.90 / 11.04 | 1.95 / 16.37 | 1.39 / 23.04 | 1.28 / 25.06 |
| BSQ acc before / after FT (%) | 90.45 / 92.16 | 90.44 / 91.72 | 89.01 / 90.93 | 88.41 / 90.51 |
+
+Table 6: Quantization schemes of ResNet-50 models on the ImageNet dataset achieved by BSQ in Table 3. The scheme on the left is achieved with $\alpha = 5\mathrm{e} - 3$ and the one on the right with $\alpha = 7\mathrm{e} - 3$. Except for the first row for the leading convolutional layer and the last row for the FC layer, each row in the table reports the precision assigned to the 3 layers in a residual block, with layers 1-3 listed from left to right.
+
+ | BSQ 5e-3 | BSQ 7e-3 |
| Conv 1 | 7 | | | 7 | | |
| Block 1-0 | 7 | 6 | 6 | 7 | 6 | 6 |
| Block 1-1 | 6 | 6 | 6 | 6 | 6 | 6 |
| Block 1-2 | 6 | 6 | 6 | 6 | 5 | 6 |
| Block 2-0 | 4 | 3 | 4 | 4 | 3 | 4 |
| Block 2-1 | 4 | 4 | 4 | 4 | 3 | 4 |
| Block 2-2 | 4 | 4 | 4 | 4 | 3 | 4 |
| Block 2-3 | 4 | 3 | 4 | 3 | 3 | 4 |
| Block 3-0 | 4 | 3 | 3 | 4 | 3 | 3 |
| Block 3-1 | 3 | 3 | 4 | 3 | 3 | 3 |
| Block 3-2 | 3 | 3 | 3 | 3 | 3 | 3 |
| Block 3-3 | 3 | 3 | 3 | 3 | 2 | 3 |
| Block 3-4 | 3 | 3 | 3 | 3 | 3 | 3 |
| Block 3-5 | 3 | 3 | 3 | 3 | 3 | 3 |
| Block 4-0 | 3 | 2 | 2 | 3 | 2 | 2 |
| Block 4-1 | 2 | 2 | 3 | 2 | 2 | 2 |
| Block 4-2 | 2 | 3 | 3 | 2 | 2 | 2 |
| FC | 3 | | | 2 | | |
+
+Table 7: Quantization schemes of the Inception-V3 model on the ImageNet dataset achieved by BSQ in Table 3. The scheme on the left is achieved with $\alpha = 1\mathrm{e} - 2$ and the one on the right with $\alpha = 2\mathrm{e} - 2$. Except for the first 5 convolutional layers and the final FC layer, each row in the table reports the precision assigned to the layers within the inception block. The order from left to right follows the parameter definition order provided in the torchvision package implementation (https://github.com/pytorch/vision/blob/master/torchvision/models/inception.py).
+
+ | BSQ 1e-2 | BSQ 2e-2 |
| Conv 1a | 8 | | | | | | | | | | 8 | | | | | | | | |
| Conv 2a | 7 | | | | | | | | | | 7 | | | | | | | | |
| Conv 2b | 6 | | | | | | | | | | 6 | | | | | | | | |
| Conv 3b | 8 | | | | | | | | | | 8 | | | | | | | | |
| Conv 4a | 5 | | | | | | | | | | 4 | | | | | | | | |
| Mixed 5b | 4 | 4 | 4 | 4 | 4 | 3 | 4 | | | | 4 | 4 | 3 | 4 | 3 | 3 | 4 | | |
| Mixed 5c | 4 | 4 | 3 | 4 | 3 | 3 | 4 | | | | 4 | 4 | 3 | 4 | 3 | 3 | 4 | | |
| Mixed 5d | 4 | 4 | 4 | 4 | 4 | 3 | 4 | | | | 4 | 4 | 3 | 4 | 3 | 3 | 4 | | |
| Mixed 6a | 2 | 4 | 4 | 3 | | | | | | | 2 | 4 | 3 | 3 | | | | | |
| Mixed 6b | 4 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| Mixed 6c | 4 | 4 | 3 | 3 | 4 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 2 | 2 | 3 |
| Mixed 6d | 5 | 3 | 3 | 3 | 4 | 3 | 3 | 3 | 3 | 3 | 5 | 3 | 3 | 3 | 4 | 3 | 3 | 2 | 3 |
| Mixed 6e | 5 | 4 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 4 | 4 | 3 | 2 | 2 | 3 | 3 | 3 | 3 | 3 |
| Mixed 7a | 3 | 3 | 4 | 3 | 3 | 2 | | | | | 3 | 3 | 3 | 3 | 3 | 2 | | | |
| Mixed 7b | 2 | 3 | 3 | 3 | 2 | 2 | 3 | 2 | 3 | | 2 | 2 | 3 | 3 | 2 | 1 | 2 | 2 | 2 |
| Mixed 7c | 2 | 2 | 3 | 3 | 3 | 2 | 3 | 3 | 2 | | 2 | 2 | 3 | 3 | 2 | 2 | 3 | 3 | 2 |
| FC | 3 | | | | | | | | | | 3 | | | | | | | | |
\ No newline at end of file
diff --git a/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/images.zip b/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..71a7f623e85a6495d0e0f63d27cba703ba7ccf02
--- /dev/null
+++ b/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a55927ad5143371024439d6ab0b0def1522906c59830aa282be52283845fa34
+size 699055
diff --git a/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/layout.json b/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c7eeae725ec0ec1a44dad6c1a869080573b1d49d
--- /dev/null
+++ b/bsqexploringbitlevelsparsityformixedprecisionneuralnetworkquantization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f8fbb7364f096fb70012669035eacd196114fb56969e8765be21b09405f7cbd
+size 466352
diff --git a/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/14225477-9cc0-4511-860c-1ee1ef27e18a_content_list.json b/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/14225477-9cc0-4511-860c-1ee1ef27e18a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..fc4df53ab469f4135f658c5bc0c2ccf264256590
--- /dev/null
+++ b/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/14225477-9cc0-4511-860c-1ee1ef27e18a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83d10d061699ab1c98098e9a5b8823bf248c93b4794a0b97a8f17a982379ea82
+size 208646
diff --git a/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/14225477-9cc0-4511-860c-1ee1ef27e18a_model.json b/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/14225477-9cc0-4511-860c-1ee1ef27e18a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..66f05c7e388dd01317450fbd08c12e48083ee0c1
--- /dev/null
+++ b/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/14225477-9cc0-4511-860c-1ee1ef27e18a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b4dcdeef83e97211b1d8782a7a33389d7678be6bce9a4ed5737a79a3a46d2f6
+size 244108
diff --git a/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/14225477-9cc0-4511-860c-1ee1ef27e18a_origin.pdf b/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/14225477-9cc0-4511-860c-1ee1ef27e18a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5ac6190b99de12b71b8bf1e6d34907a8499bbeb7
--- /dev/null
+++ b/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/14225477-9cc0-4511-860c-1ee1ef27e18a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0eeb0c6791e3eadc6abac2776915330e7b027f9e7e58c37a3db92a56dc8f0981
+size 3643859
diff --git a/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/full.md b/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9c96d50cf46f83e8e8777dd1f9f9e4a2371231cb
--- /dev/null
+++ b/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/full.md
@@ -0,0 +1,961 @@
+# BYPASSING THE AMBIENT DIMENSION: PRIVATE SGD WITH GRADIENT SUBSPACE IDENTIFICATION
+
+Yingxue Zhou†, Zhiwei Steven Wu‡, Arindam Banerjee§
+
+† Department of Computer Science & Engineering, University of Minnesota
+‡ School of Computer Science, Carnegie Mellon University
+§ Department of Computer Science, University of Illinois Urbana-Champaign
+† zhou0877@umn.edu, ‡ zstevenwu@cmu.edu, § arindamb@illinois.edu
+
+# ABSTRACT
+
+Differentially private SGD (DP-SGD) is one of the most popular methods for solving differentially private empirical risk minimization (ERM). Due to its noisy perturbation on each gradient update, the error rate of DP-SGD scales with the ambient dimension $p$ , the number of parameters in the model. Such dependence can be problematic for over-parameterized models where $p \gg n$ , the number of training samples. Existing lower bounds on private ERM show that such dependence on $p$ is inevitable in the worst case. In this paper, we circumvent the dependence on the ambient dimension by leveraging a low-dimensional structure of gradient space in deep networks—that is, the stochastic gradients for deep nets usually stay in a low dimensional subspace in the training process. We propose Projected DP-SGD that performs noise reduction by projecting the noisy gradients to a low-dimensional subspace, which is given by the top gradient eigenspace on a small public dataset. We provide a general sample complexity analysis on the public dataset for the gradient subspace identification problem and demonstrate that under certain low-dimensional assumptions the public sample complexity only grows logarithmically in $p$ . Finally, we provide a theoretical analysis and empirical evaluations to show that our method can substantially improve the accuracy of DP-SGD in the high privacy regime (corresponding to low privacy loss $\epsilon$ ).
+
+# 1 INTRODUCTION
+
+Many fundamental machine learning tasks involve solving empirical risk minimization (ERM): given a loss function $\ell$ , find a model $\mathbf{w} \in \mathbb{R}^p$ that minimizes the empirical risk $\hat{L}_n(\mathbf{w}) = \frac{1}{n}\sum_{i=1}^{n}\ell(\mathbf{w},z_i)$ , where $z_1,\ldots,z_n$ are i.i.d. examples drawn from a distribution $\mathcal{P}$ . In many applications, the training data may contain highly sensitive information about some individuals. When the models are given by deep neural networks, their rich representation can potentially reveal fine details of the private data.
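
The ERM objective above is just an average of per-example losses. A minimal sketch (with a toy 1-D squared loss for illustration, not the deep-network losses considered in the paper):

```python
def empirical_risk(w, data, loss):
    """Empirical risk L_n(w) = (1/n) * sum_i loss(w, z_i)."""
    return sum(loss(w, z) for z in data) / len(data)

# Toy 1-D example: squared loss l(w, (x, y)) = (w*x - y)^2.
sq_loss = lambda w, z: (w * z[0] - z[1]) ** 2
data = [(1.0, 2.0), (2.0, 4.0)]
print(empirical_risk(2.0, data, sq_loss))  # w = 2 fits both points: 0.0
print(empirical_risk(1.0, data, sq_loss))  # ((1-2)^2 + (2-4)^2) / 2 = 2.5
```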
+
+Differential privacy (DP) (Dwork et al., 2006) has by now become the standard approach to providing principled and rigorous privacy guarantees in machine learning. Roughly speaking, DP is a stability notion requiring that no individual example have a significant influence on the trained model. One of the most commonly used algorithms for solving private ERM is differentially-private stochastic gradient descent (DP-SGD) (Abadi et al., 2016; Bassily et al., 2014; Song et al., 2013)—a private variant of SGD that perturbs each gradient update with a random noise vector drawn from an isotropic Gaussian distribution $\mathcal{N}(\mathbf{0},\sigma^2\mathbb{I}_p)$ , with appropriately chosen variance $\sigma^2$ .
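As a concrete illustration, one DP-SGD update can be sketched in a few lines of NumPy. This is an illustrative sketch, not the exact implementation: the per-example clipping (used in our experiments later, following Abadi et al. (2016)) stands in for the gradient norm bound, and the function name is our own.

```python
import numpy as np

def dp_sgd_step(w, grads, sigma, clip=1.0, lr=0.1, rng=None):
    """One DP-SGD update: clip per-example gradients, average them,
    then perturb every coordinate with isotropic Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12)) for g in grads]
    g_bar = np.mean(clipped, axis=0)
    # Noise std sigma * clip on the gradient sum, i.e. sigma * clip / |B| on the mean.
    noise = rng.normal(0.0, sigma * clip / len(grads), size=w.shape)
    return w - lr * (g_bar + noise)
```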
+
+Due to the gradient perturbation drawn from an isotropic Gaussian distribution, the error rate of DP-SGD depends on the ambient dimension $p$ — the number of parameters in the model. In the case of a convex loss $\ell$ , Bassily et al. (2014) show that DP-SGD achieves the optimal empirical excess risk of $\tilde{O}\left(\sqrt{p} / (n\epsilon)\right)$ . For a non-convex loss $\ell$ , which is more common in neural network training, minimizing $\hat{L}_n(\mathbf{w})$ is in general intractable. However, many (non-private) gradient-based optimization methods are effective in practice and can provably find approximate stationary points with vanishing gradient norm $\| \nabla \hat{L}_n(\mathbf{w}) \|_2$ (see e.g. Nesterov (2014); Ghadimi and Lan (2013)). Moreover, for a wide family of loss functions $\hat{L}_n$ satisfying the Polyak-Lojasiewicz condition (Polyak, 1963), minimizing the gradient norm implies reaching a global optimum. Under a privacy constraint, Wang and Xu (2019) recently showed that DP-SGD drives the empirical gradient norm down to $\tilde{O}\left(p^{1/4} / \sqrt{n\epsilon}\right)$ when the loss function $\ell$ is smooth. Furthermore, existing lower bound results on private ERM (Bassily et al., 2014) show that such dependence on $p$ is inevitable in the worst case. However, many modern machine learning tasks now involve training extremely large models, with the number of parameters substantially larger than the number of training samples. For these large models, the error dependence on $p$ can be a barrier to practical private ERM.
+
+Figure 1 (panels: (a) SGD; (b) DP-SGD, $\sigma = 1.0$ ; (c) DP-SGD, $\sigma = 2.0$ ): Top-500 eigenvalue spectrum of the gradient second moment matrix along the training trajectory of SGD and DP-SGD with $\sigma = 1, 2$ . Dataset: MNIST; model: 2-layer ReLU network with 128 nodes per layer (roughly 130,000 parameters). The Y-axis is the eigenvalue and the X-axis is the order of the eigenvalues from largest to smallest.
+
+In this paper, we aim to overcome this dependence on the ambient dimension $p$ by leveraging the structure of the gradient space in the training of neural networks. We take inspiration from the empirical observation of Li et al. (2020); Gur-Ari et al. (2018); Papyan (2019) that even though the ambient dimension of the gradients is large, the set of sample gradients at most iterations along the optimization trajectory is often contained in a much lower-dimensional subspace. While this observation has been made mostly for the non-private SGD algorithm, we also provide our own empirical evaluation of this structure (in terms of eigenvalues of the gradient second moment matrix) in Figure 1. Based on this observation, we provide a modular private ERM optimization framework with two components. At each iteration $t$ , the algorithm performs the following two steps:
+
+1) Gradient dimension reduction. Let $\mathbf{g}_t$ be the mini-batch gradient at iteration $t$ . In general, this subroutine solves the following problem: given any $k < p$ , find a linear projection $\hat{V}_k(t) \in \mathbb{R}^{p \times k}$ such that the reconstruction error $\| \mathbf{g}_t - \hat{V}_k(t)\hat{V}_k(t)^\top \mathbf{g}_t\|$ is small. To implement this subroutine, we follow a long line of work that studies private data analysis with access to an auxiliary public dataset $S_h$ drawn from the same distribution $\mathcal{P}$ , for which we don't need to provide a formal privacy guarantee (Bassily et al., 2019b; 2020; Feldman et al., 2018; Avent et al., 2017; Papernot et al., 2017). In our case, we compute $\hat{V}_k(t)$ as the top- $k$ eigenspace of the gradients evaluated on $S_h$ . Alternatively, this subroutine can potentially be implemented through private subspace identification on the private dataset. However, to the best of our knowledge, all existing methods have reconstruction error scaling with $\sqrt{p}$ (Dwork et al., 2014), which would be propagated to the optimization error.
+
+2) Projected DP-SGD (PDP-SGD). Given the projection $\hat{V}_k(t)$ , we perturb the gradient in the projected subspace: $\tilde{\mathbf{g}}_t = \hat{V}_k(t)\hat{V}_k(t)^\intercal (\mathbf{g}_t + \mathbf{b}_t)$ , where $\mathbf{b}_t$ is a $p$ -dimensional Gaussian noise vector. The projection mapping provides a large reduction of the noise and enables higher accuracy for PDP-SGD.
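The noise reduction from the projection can be checked directly: for an orthonormal $\hat{V}_k$ , the projected noise $\hat{V}_k\hat{V}_k^\intercal \mathbf{b}_t$ has expected squared norm $k\sigma^2$ instead of $p\sigma^2$ . A small NumPy experiment, with a random orthonormal basis standing in for the estimated eigenspace:

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, sigma = 10_000, 50, 1.0

# Random orthonormal basis standing in for the top-k gradient eigenspace.
V, _ = np.linalg.qr(rng.standard_normal((p, k)))

b = rng.normal(0.0, sigma, size=p)       # isotropic noise in R^p
b_proj = V @ (V.T @ b)                   # noise surviving the projection

# E||b||^2 = p*sigma^2 while E||V V^T b||^2 = k*sigma^2: a p/k reduction.
print(np.sum(b**2) / np.sum(b_proj**2))  # roughly p / k = 200
```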
+
+Our results. We provide both theoretical analyses and empirical evaluations of PDP-SGD:
+
+Uniform convergence for projections. A key step in our theoretical analysis is to bound the reconstruction error on the gradients from the projection $\hat{V}_k(t)$ . This reduces to bounding the deviation $\| \hat{V}_k(t) \hat{V}_k(t)^{\mathsf{T}} - V_k(t) V_k(t)^{\mathsf{T}} \|_2$ , where $V_k(t)$ denotes the top- $k$ eigenspace of the population second moment matrix $\mathbb{E}[\nabla \ell(\mathbf{w}_t, z) \nabla \ell(\mathbf{w}_t, z)^{\mathsf{T}}]$ . To handle the adaptivity of the sequence of iterates, we provide a uniform deviation bound for all $\mathbf{w} \in \mathcal{W}$ , where the set $\mathcal{W}$ contains all of the iterates. By leveraging generic chaining techniques, we obtain a deviation bound that scales linearly with a complexity measure—the $\gamma_2$ function due to Talagrand (2014)—of the set $\mathcal{W}$ . We provide low-complexity examples of $\mathcal{W}$ that are supported by empirical observations and show that their $\gamma_2$ function scales only logarithmically with $p$ .
+
+Convergence for convex and non-convex optimization. Building on the reconstruction error bound, we provide convergence and sample complexity results for PDP-SGD for two types of loss functions: 1) smooth and non-convex, and 2) Lipschitz convex. Under suitable assumptions on the gradient space, our rates scale only logarithmically with $p$ .
+
+Empirical evaluation. We provide an empirical evaluation of PDP-SGD on two real datasets. In our experiments, we construct the "public" datasets by taking very small random sub-samples of these two datasets (100 samples). While these two public datasets are not sufficient for training an accurate predictor, we demonstrate that they provide useful gradient subspace projection and substantial accuracy improvement over DP-SGD.
+
+Related work. Beyond the aforementioned work, there has been recent work on private ERM that also leverages the low-dimensional structure of the problem. Jain and Thakurta (2014); Song et al. (2020) show dimension independent excess empirical risk bounds for convex generalized linear problems, when the input data matrix is low-rank. Kairouz et al. (2020) study unconstrained convex empirical risk minimization and provide a noisy AdaGrad method that achieves dimension-free excess risk bound, provided that the gradients along the optimization trajectory lie in a low-dimensional subspace. In comparison, our work studies both convex and non-convex problems and our analysis applies to more general low-dimensional structures that can be characterized by small $\gamma_{2}$ functions (Talagrand, 2014; Gunasekar et al., 2015) (e.g., low-rank gradients and fast decay in the magnitude of the gradient coordinates). Recently, Tramer and Boneh (2021) show that private learning with features learned on public data from a similar domain can significantly improve the utility. Zhang et al. (2021) leverage the sparsity of the gradients in deep nets to improve the dependence on dimension in the error rate. We also note a recent work (Yu et al., 2021) that proposes an algorithm similar to PDP-SGD. However, in addition to perturbing the projected gradient in the top eigenspaces in the public data, their algorithm also adds noise to the residual gradient. Their error rate scales with dimension $p$ in general due to the noise added to the full space. To achieve a dimension independent error bound, their analyses require fresh public samples drawn from the same distribution at each step, which consequently requires a large public data set with size scaling linearly with $T$ . In comparison, our analysis does not require fresh public samples at each iteration, and our experiments demonstrate that a small public data set of size no more than 150 suffices. $^{1}$
+
+# 2 PRELIMINARIES
+
+Given a private dataset $S = \{z_{1},\dots,z_{n}\}$ drawn i.i.d. from the underlying distribution $\mathcal{P}$ , we want to solve the following empirical risk minimization (ERM) problem subject to differential privacy: $\min_{\mathbf{w}}\hat{L}_n(\mathbf{w}) = \frac{1}{n}\sum_{i = 1}^{n}\ell (\mathbf{w},z_i)$ , where $\mathbf{w}\in \mathbb{R}^p$ is the parameter vector. We optimize this objective with an iterative algorithm. At each step $t$ , we write $\mathbf{w}_t$ for the algorithm's iterate, use $\mathbf{g}_t$ to denote the mini-batch gradient, and use $\nabla \hat{L}_n(\mathbf{w}_t) = \frac{1}{n}\sum_{i = 1}^{n}\nabla \ell (\mathbf{w}_t,z_i)$ to denote the empirical gradient. In addition to the private dataset, the algorithm can also freely access a small public dataset $S_{h} = \{\tilde{z}_{1},\ldots ,\tilde{z}_{m}\}$ drawn from the same distribution $\mathcal{P}$ , without any privacy constraint.
+
+Notation. We write $M_t \in \mathbb{R}^{p \times p}$ to denote the second moment matrix of gradients evaluated on the public dataset $S_h$ , i.e., $M_t = \frac{1}{m} \sum_{i=1}^m \nabla \ell(\mathbf{w}_t, \tilde{z}_i) \nabla \ell(\mathbf{w}_t, \tilde{z}_i)^\top$ , and write $\Sigma_t \in \mathbb{R}^{p \times p}$ to denote the population second moment matrix, i.e., $\Sigma_t = \mathbb{E}_{z \sim \mathcal{P}}[\nabla \ell(\mathbf{w}_t, z) \nabla \ell(\mathbf{w}_t, z)^\top]$ . We use $V(t) \in \mathbb{R}^{p \times p}$ for the full eigenspace of $\Sigma_t$ . We use $\hat{V}_k(t) \in \mathbb{R}^{p \times k}$ for the top- $k$ eigenspace of $M_t$ and $V_k(t) \in \mathbb{R}^{p \times k}$ for the top- $k$ eigenspace of $\Sigma_t$ . To present our results in the subsequent sections, we introduce the eigen-gap notation $\alpha_t$ : let $\lambda_1(\Sigma_t) \geq \ldots \geq \lambda_p(\Sigma_t)$ be the eigenvalues of $\Sigma_t$ ; we use $\alpha_t$ to denote the eigen-gap between $\lambda_k(\Sigma_t)$ and $\lambda_{k+1}(\Sigma_t)$ , i.e., $\lambda_k(\Sigma_t) - \lambda_{k+1}(\Sigma_t) \geq \alpha_t$ . We also define $\mathcal{W} \subseteq \mathbb{R}^p$ as the set that contains all possible iterates, i.e., $\mathbf{w}_t \in \mathcal{W}$ for $t \in [T]$ . Throughout, for any matrix $A$ and vector $\mathbf{v}$ , $\|A\|_2$ denotes the spectral norm and $\|\mathbf{v}\|_2$ denotes the $\ell_2$ norm.
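For concreteness, $M_t$ and its top- $k$ eigenspace can be computed as follows. This is an illustrative NumPy sketch; `topk_eigenspace` is our own helper name.

```python
import numpy as np

def topk_eigenspace(public_grads, k):
    """Form the empirical second moment matrix M_t from public
    gradients and return its top-k eigenvectors as columns."""
    G = np.stack(public_grads)      # shape (m, p)
    M = G.T @ G / G.shape[0]        # M_t = (1/m) sum_i g_i g_i^T
    vals, vecs = np.linalg.eigh(M)  # eigh returns ascending eigenvalues
    return vecs[:, ::-1][:, :k]     # reorder descending, keep top-k
```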
+
+Definition 1 (Differential Privacy (Dwork et al., 2006)) A randomized algorithm $\mathcal{R}$ is $(\epsilon, \delta)$ -differentially private if for any pair of datasets $D$ , $D'$ that differ in exactly one data point and for all events
+
+$\mathcal{Y} \subseteq \operatorname{Range}(\mathcal{R})$ in the output range of $\mathcal{R}$ , we have $P\{\mathcal{R}(D) \in \mathcal{Y}\} \leq \exp(\epsilon) P\{\mathcal{R}(D') \in \mathcal{Y}\} + \delta$ , where the probability is taken over the randomness of $\mathcal{R}$ .
+
+To establish the privacy guarantee of our algorithm, we will combine three standard tools in differential privacy, including 1) the Gaussian mechanism (Dwork et al., 2006) that releases an aggregate statistic (e.g., the empirical average gradient) by Gaussian perturbation, 2) privacy amplification via subsampling (Kasiviswanathan et al., 2008) that reduces the privacy parameters $\epsilon$ and $\delta$ by running the private computation on a random subsample, and 3) advanced composition theorem (Dwork et al., 2010) that tracks the cumulative privacy loss over the course of the algorithm.
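For the first of these tools, the classical calibration of the Gaussian mechanism for a single release with $\ell_2$ -sensitivity $\Delta$ is $\sigma \geq \Delta\sqrt{2\ln(1.25/\delta)}/\epsilon$ (valid for $\epsilon < 1$ ). A minimal sketch of this one-shot calibration; note that Algorithm 1 additionally relies on subsampling amplification and composition across the $T$ iterations, which are not reproduced here.

```python
import math

def gaussian_mechanism_sigma(sensitivity, eps, delta):
    """Noise scale for a single (eps, delta)-DP release via the
    Gaussian mechanism (classical calibration, valid for eps < 1)."""
    assert 0 < eps < 1 and 0 < delta < 1
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / eps
```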
+
+We analyze our method under two assumptions on the gradients of $\ell$ .
+
+Assumption 1 For any $\mathbf{w} \in \mathbb{R}^p$ and example $z$ , $\|\nabla \ell(\mathbf{w}, z)\|_2 \leq G$ .
+
+Assumption 2 For any example $z$ , the gradient $\nabla \ell(\mathbf{w}, z)$ is $\rho$ -Lipschitz with respect to a suitable pseudo-metric $d: \mathbb{R}^p \times \mathbb{R}^p \mapsto \mathbb{R}$ , i.e., $\| \nabla \ell(\mathbf{w}, z) - \nabla \ell(\mathbf{w}', z) \|_2 \leq \rho d(\mathbf{w}, \mathbf{w}')$ , $\forall \mathbf{w}, \mathbf{w}' \in \mathbb{R}^p$ .
+
+Note that Assumption 1 implies that $\hat{L}_n(\mathbf{w})$ is $G$ -Lipschitz, and Assumption 2 implies that $\hat{L}_n(\mathbf{w})$ is $\rho$ -smooth when $d$ is the $\ell_2$ -distance. We will discuss additional assumptions regarding the structure of the stochastic gradients and the error rates for different types of functions in Section 3.
+
+# 3 PROJECTED PRIVATE GRADIENT DESCENT
+
+PDP-SGD follows the classical noisy gradient descent algorithm DP-SGD (Wang et al., 2017; Wang and Xu, 2019; Bassily et al., 2014). DP-SGD adds isotropic Gaussian noise $\mathbf{b}_t \sim \mathcal{N}(0, \sigma^2 \mathbb{I}_p)$ to the gradient $\mathbf{g}_t$ , i.e., each coordinate of the gradient $\mathbf{g}_t$ is perturbed by Gaussian noise. Since the gradient is $p$ -dimensional, this method ends up incurring a factor of $p$ in the error rate (Bassily et al., 2014; 2019a). Our algorithm is inspired by the recent observations that stochastic gradients stay in a low-dimensional space in the training of deep nets (Li et al., 2020; Gur-Ari et al., 2018). This observation also holds for the private training algorithm DP-SGD (Figure 1 (b) and (c)). Intuitively, most of the information needed for gradient descent is embedded in the top eigenspace of the stochastic gradients. Thus, PDP-SGD performs noise reduction by projecting the noisy gradient $\mathbf{g}_t + \mathbf{b}_t$ onto an approximation of this subspace given by a public dataset $S_h$ .
+
+# Algorithm 1 Projected DP-SGD (PDP-SGD)
+
+1: Input: Training set $S$ , public set $S_{h}$ , certain loss $\ell(\cdot)$ , initial point $\mathbf{w}_0$
+2: Set: Noise parameter $\sigma$ , number of iterations $T$ , step size $\eta_t$ .
+3: for $t = 0, \dots, T$ do
+4: Compute top- $k$ eigenspace $\hat{V}_k(t)$ of $M_{t} = \frac{1}{|S_{h}|}\sum_{\tilde{z}_{i}\in S_{h}}\nabla \ell (\mathbf{w}_{t},\tilde{z}_{i})\nabla \ell (\mathbf{w}_{t},\tilde{z}_{i})^{\intercal}$ .
+5: $\mathbf{g}_t = \frac{1}{|B_t|}\sum_{z_i\in B_t}\nabla \ell (\mathbf{w}_t,z_i)$ , with $B_{t}$ uniformly sampled from $S$ with replacement.
+6: Project noisy gradient using $\hat{V}_k(t)$ : $\tilde{\mathbf{g}}_t = \hat{V}_k(t)\hat{V}_k(t)^\intercal (\mathbf{g}_t + \mathbf{b}_t)$ , where $\mathbf{b}_t\sim \mathcal{N}(0,\sigma^2\mathbb{I}_p)$ .
+7: Update parameter using projected noisy gradient: $\mathbf{w}_{t + 1} = \mathbf{w}_t - \eta_t\tilde{\mathbf{g}}_t$
+8: end for
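Algorithm 1 can be summarized in a short NumPy sketch. This is illustrative only: gradient clipping, mini-batch sampling, and the privacy accounting are omitted, and `private_grads_fn` / `public_grads_fn` are hypothetical callbacks returning lists of per-example gradients.

```python
import numpy as np

def pdp_sgd(w0, private_grads_fn, public_grads_fn, k, sigma, eta, T, rng=None):
    """Sketch of Algorithm 1: at each step, estimate the top-k gradient
    eigenspace on public data, then project the noisy private gradient."""
    rng = rng or np.random.default_rng(0)
    w = w0.copy()
    for t in range(T):
        G_pub = np.stack(public_grads_fn(w))      # (m, p) public gradients
        M = G_pub.T @ G_pub / G_pub.shape[0]      # second moment matrix M_t
        _, vecs = np.linalg.eigh(M)
        Vk = vecs[:, -k:]                         # top-k eigenspace of M_t
        g = np.mean(private_grads_fn(w), axis=0)  # mini-batch gradient
        b = rng.normal(0.0, sigma, size=w.shape)  # isotropic Gaussian noise
        w = w - eta * (Vk @ (Vk.T @ (g + b)))     # projected noisy update
    return w
```

With `sigma=0` and `k=p` this reduces to plain gradient descent, which gives a quick sanity check.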
+
+Thus, our algorithm involves two steps at each iteration: subspace identification and noisy gradient projection. The pseudo-code of PDP-SGD is given in Algorithm 1. At each iteration $t$ , in order to obtain an approximate subspace without leaking information about the private dataset $S$ , we evaluate the second moment matrix $M_t$ on $S_h$ and compute the top- $k$ eigenvectors $\hat{V}_k(t)$ of $M_t$ (line 4 in Algorithm 1). Then we project the noisy gradient $\mathbf{g}_t + \mathbf{b}_t$ onto the top- $k$ eigenspace, i.e., $\tilde{\mathbf{g}}_t = \hat{V}_k(t)\hat{V}_k(t)^\top (\mathbf{g}_t + \mathbf{b}_t)$ (line 6 in Algorithm 1). Finally, PDP-SGD uses the projected noisy gradient $\tilde{\mathbf{g}}_t$ to update the parameter: $\mathbf{w}_{t + 1} = \mathbf{w}_t - \eta_t\tilde{\mathbf{g}}_t$ . Let us first state its privacy guarantee.
+
+Theorem 1 (Privacy) Under Assumption 1, there exist constants $c_1$ and $c_2$ so that given the number of iterations $T$ , for any $\epsilon \leq c_1 q^2 T$ , where $q = \frac{|B_t|}{n}$ , PDP-SGD (Algorithm 1) is $(\epsilon, \delta)$ -differentially private for any $\delta > 0$ , if $\sigma^2 \geq c_2 \frac{G^2 T \ln\left(\frac{1}{\delta}\right)}{n^2 \epsilon^2}$ .
+
+Figure 2 (panels: (a) DP-SGD, $\sigma = 1.0$ ; (b) DP-SGD, $\sigma = 2.0$ ; (c) DP-SGD, $\sigma = 4.0$ ): Sorted components of the population gradients for DP-SGD with $\sigma = 1.0, 2.0, 4.0$ . Dataset: MNIST; model: 2-layer ReLU network with 128 nodes per layer (roughly 130,000 parameters). The Y-axis is the absolute value of the sorted gradient coordinates, i.e., $|\mathbf{m}_t(j)|$ , and the X-axis is the order of the sorted gradient components.
+
+The privacy proof essentially follows the proof for DP-SGD (Abadi et al., 2016). At each iteration, the update step of PDP-SGD is post-processing of the Gaussian mechanism that computes a noisy estimate of the gradient, $\mathbf{g}_t + \mathbf{b}_t$ . The privacy guarantee of releasing the sequence $\{\mathbf{g}_t + \mathbf{b}_t\}_t$ is then exactly the same as in the proof of Theorem 1 of Abadi et al. (2016).
+
+# 3.1 GRADIENT SUBSPACE IDENTIFICATION
+
+We now analyze the gradient deviation between the approximated subspace $\hat{V}_k(t)\hat{V}_k(t)^\intercal$ and the true (population) subspace $V_{k}(t)V_{k}(t)^{\intercal}$ , i.e., $\| \hat{V}_k(t)\hat{V}_k(t)^\intercal - V_k(t)V_k(t)^\intercal \|_2$ . To bound $\| \hat{V}_k(t)\hat{V}_k(t)^\intercal - V_k(t)V_k(t)^\intercal \|_2$ , we first bound the deviation between the second moment matrices, $\| M_t - \Sigma_t\|$ (Dwork et al., 2014; McSherry, 2004). Note that if $M_t = \frac{1}{m}\sum_{i=1}^m \nabla \ell(\mathbf{w}_t, \tilde{z}_i) \nabla \ell(\mathbf{w}_t, \tilde{z}_i)^\intercal$ is evaluated on fresh public samples, then $\Sigma_t$ is the expectation of $M_t$ , and the deviation of $M_t$ from $\Sigma_t$ can be easily analyzed by the Ahlswede-Winter inequality (Horn and Johnson, 2012; Wainwright, 2019): at any iteration $t$ , if we have fresh public samples $\{\tilde{z}_1(t), \dots, \tilde{z}_m(t)\}$ drawn i.i.d. from the distribution $\mathcal{P}$ , then under suitable assumptions, for any $u \in [0,1]$ , the event $\left\| \frac{1}{m}\sum_{i=1}^m \nabla \ell(\mathbf{w}_t, \tilde{z}_i(t)) \nabla \ell(\mathbf{w}_t, \tilde{z}_i(t))^\intercal - \mathbb{E}[\nabla \ell(\mathbf{w}_t, \tilde{z}_i(t))\nabla \ell(\mathbf{w}_t, \tilde{z}_i(t))^\intercal] \right\|_2 > u$ holds with probability at most $p\exp(-mu^2/4G)$ , where $G$ is as in Assumption 1.
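The fresh-sample rate can be illustrated numerically. In the toy example below (our own construction, not actual gradients) the "gradients" are i.i.d. $\mathcal{N}(0, \mathbb{I}_p/p)$ vectors, so the population second moment is known exactly and the spectral deviation should shrink roughly like $1/\sqrt{m}$ .

```python
import numpy as np

rng = np.random.default_rng(0)
p = 200

def deviation(m):
    """Spectral deviation ||M - Sigma||_2 for m sampled 'gradients'
    drawn from N(0, I_p/p), whose population second moment is I_p/p."""
    G = rng.standard_normal((m, p)) / np.sqrt(p)
    M = G.T @ G / m
    return np.linalg.norm(M - np.eye(p) / p, 2)

# The deviation shrinks roughly like 1/sqrt(m), matching the theory.
print(deviation(100), deviation(10_000))
```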
+
+However, this concentration bound does not hold for $\mathbf{w}_t$ , $\forall t > 0$ in general, since the public dataset $S_h$ is reused over the iterations and the parameter $\mathbf{w}_t$ depends on $S_h$ . To handle the dependency issue, we bound $\| M_t - \Sigma_t\|_2$ uniformly over all iterations $t \in [T]$ to bound the worst-case counterparts that consider all possible iterates. Our uniform bound analysis is based on generic chaining (GC) (Talagrand, 2014), an advanced tool from probability theory. Eventually, the error bound is expressed in terms of a complexity measure called $\gamma_2$ function (Talagrand, 2014). Note that one may consider the idea of sample splitting to bypass the dependency issue by splitting $m$ public samples into $T$ disjoint subsets for each iteration. Based on Ahlswede-Winter Inequality, the deviation error scales with $O\left(\frac{\sqrt{T}}{\sqrt{m}}\right)$ leading to a worse trade-off between the subspace construction error and optimization error due to the dependence on $T$ .
+
+Definition 2 ( $\gamma_{2}$ function (Talagrand, 2014)) For a metric space $(\mathcal{A}, d)$ , an admissible sequence of $\mathcal{A}$ is a collection of subsets of $\mathcal{A}$ , $\Gamma = \{\mathcal{A}_n : n \geq 0\}$ , with $|\mathcal{A}_0| = 1$ and $|\mathcal{A}_n| \leq 2^{2^n}$ for all $n \geq 1$ , the $\gamma_{2}$ functional is defined by $\gamma_{2}(\mathcal{A}, d) = \inf_{\Gamma} \sup_{A \in \mathcal{A}} \sum_{n \geq 0} 2^{n/2} d(A, \mathcal{A}_n)$ , where the infimum is over all admissible sequences of $\mathcal{A}$ .
+
+In Theorem 2, we show that the uniform convergence bound of $\| M_t - \Sigma_t \|_2$ scales with $\gamma_2(\mathcal{W}, d)$ , where $d$ is the pseudo-metric in Assumption 2 and $\mathcal{W} \subseteq \mathbb{R}^p$ is the set that contains all possible iterates of the algorithm, i.e., $\mathbf{w}_t \in \mathcal{W}$ for all $t \in [T]$ .
+
+Based on the majorizing measure theorem (e.g., Theorem 2.4.1 in Talagrand (2014)), if the metric $d$ is the $\ell_2$ -norm, $\gamma_2(\mathcal{W}, d)$ can be expressed as the Gaussian width (Vershynin, 2018; Wainwright, 2019) of the set $\mathcal{W}$ , i.e., $w(\mathcal{W}) = \mathbb{E}_{\mathbf{v}}[\sup_{\mathbf{w} \in \mathcal{W}} \langle \mathbf{w}, \mathbf{v} \rangle]$ where $\mathbf{v} \sim \mathcal{N}(0, \mathbb{I}_p)$ , which depends only on the size of $\mathcal{W}$ . In Appendix A.2, we show that the complexity measure $\gamma_2(\mathcal{W}, d)$ can be expressed as a $\gamma_2$ function on the gradient space by mapping the parameter space $\mathcal{W}$ to the gradient space via $f: \mathcal{W} \mapsto \mathcal{M}$ , where $f(\mathbf{w}) = \mathbb{E}_{z \sim \mathcal{P}} [\nabla \ell(\mathbf{w}, z)]$ . To simplify the notation, we write $\mathbf{m} = \mathbb{E}_{z\sim \mathcal{P}}[\nabla \ell(\mathbf{w},z)]$ for the population gradient at $\mathbf{w}$ and $\mathcal{M}$ for the space of population gradients. Taking $d(\mathbf{m},\mathbf{m}^{\prime}) = \| \mathbf{m} - \mathbf{m}^{\prime}\|_{2}$ for $\mathbf{m},\mathbf{m}^{\prime}\in \mathcal{M}$ , $\gamma_{2}(\mathcal{M},d)$ is of the same order as the Gaussian width $w(\mathcal{M})$ .
+
+To gauge the value of $\gamma_{2}(\mathcal{M},d)$ , we empirically explore the gradient space $\mathcal{M}$ for deep nets. Figure 2 gives an example of the population gradient along the training trajectory of DP-SGD with $\sigma \in \{1,2,4\}$ for training a 2-layer ReLU network on the MNIST dataset. Figure 2 shows that each coordinate of the gradient has small magnitude and the gradient components decay very fast (Li and Banerjee, 2021). Thus, it is reasonable to model the gradient space $\mathcal{M}$ as a union of ellipsoids, i.e., there exists $\mathbf{e} \in \mathbb{R}^p$ such that $\mathcal{M} = \{\mathbf{m} \in \mathbb{R}^p \mid \sum_{j=1}^{p} \mathbf{m}(j)^2 / \mathbf{e}(j)^2 \leq 1\}$ , where $j$ denotes the $j$ -th coordinate. Then we have $\gamma_{2}(\mathcal{M},d) \leq c_{1} w(\mathcal{M}) \leq c_{2} \| \mathbf{e} \|_{2}$ (Talagrand, 2014), where $c_{1}$ and $c_{2}$ are absolute constants. If the elements of $\mathbf{e}$ , sorted in decreasing order, satisfy $\mathbf{e}(j) \leq c_{3} / \sqrt{j}$ for all $j \in [p]$ , then $\gamma_{2}(\mathcal{M},d) \leq O\left(\sqrt{\log p}\right)$ . Now we give the uniform convergence bound of $\| M_{t} - \Sigma_{t} \|_{2}$ .
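The $O\left(\sqrt{\log p}\right)$ bound above is simply the harmonic-sum estimate: under the decay condition $\mathbf{e}(j) \leq c_3/\sqrt{j}$ ,

$$
\|\mathbf{e}\|_2^2 = \sum_{j=1}^{p}\mathbf{e}(j)^2 \leq \sum_{j=1}^{p}\frac{c_3^2}{j} \leq c_3^2\left(1 + \ln p\right), \quad \text{so} \quad \gamma_2(\mathcal{M}, d) \leq c_2\|\mathbf{e}\|_2 = O\left(\sqrt{\log p}\right).
$$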
+
+Theorem 2 (Second Moment Concentration) Under Assumptions 1 and 2, the second moment matrix of the public gradients $M_{t} = \frac{1}{m}\sum_{i = 1}^{m}\nabla \ell (\mathbf{w}_{t},\tilde{z}_{i})\nabla \ell (\mathbf{w}_{t},\tilde{z}_{i})^{\intercal}$ approximates the population second moment matrix $\Sigma_{t} = \mathbb{E}_{z\sim \mathcal{P}}[\nabla \ell (\mathbf{w}_{t},z)\nabla \ell (\mathbf{w}_{t},z)^{\intercal}]$ uniformly over all iterations, i.e., for any $u > 0$ ,
+
+$$
+\sup_{t \in [T]} \| M_t - \Sigma_t \|_2 \leq O\left(\frac{u G \rho \sqrt{\ln p}\, \gamma_2(\mathcal{W}, d)}{\sqrt{m}}\right), \tag{1}
+$$
+
+with probability at least $1 - c\exp \left(-u^{2} / 4\right)$ , where $c$ is an absolute constant.
+
+Theorem 2 shows that $M_{t}$ approximates the population second moment matrix $\Sigma_t$ uniformly over all iterations. This uniform bound is derived via generic chaining, which develops sharp upper bounds on suprema of stochastic processes indexed by a set with a metric structure, in terms of $\gamma_2$ functions. In our case, $\| M_t - \Sigma_t\|$ is treated as a stochastic process indexed by the set $\mathcal{W} \subseteq \mathbb{R}^p$ of all possible iterates, i.e., $\mathbf{w}_t \in \mathcal{W}$ for all $t$ . The metric $d$ is the pseudo-metric $d: \mathbb{R}^p \times \mathbb{R}^p \mapsto \mathbb{R}$ defined in Assumption 2. To obtain a more practical bound, following the discussion above, instead of working with $\mathcal{W}$ over parameters, one can work with the set $\mathcal{M}$ of population gradients by defining the pseudo-metric as $d(\mathbf{w}, \mathbf{w}') = d(f(\mathbf{w}), f(\mathbf{w}')) = d(\mathbf{m}, \mathbf{m}')$ , where $f(\mathbf{w}) = \mathbb{E}_{z \sim \mathcal{P}}[\nabla \ell(\mathbf{w}, z)]$ maps the parameter space to the gradient space. Thus, the complexity measure $\gamma_2(\mathcal{W}, d)$ can be expressed as the $\gamma_2$ function on the population gradient space, i.e., $\gamma_2(\mathcal{M}, d)$ . As discussed above, using the $\ell_2$ -norm as $d$ , $\gamma_2(\mathcal{M}, d)$ is a constant if the gradient space is a union of ellipsoids, and the uniform bound depends only logarithmically on $p$ .
+
+Using the result in Theorem 2 and the Davis-Kahan $\sin\theta$ theorem (McSherry, 2004), we bound the subspace construction error $\| \hat{V}_k(t)\hat{V}_k(t)^\top - V_k(t)V_k(t)^\top \|_2$ in the following theorem.
+
+Theorem 3 (Subspace Closeness) Under Assumptions 1 and 2, with $V_{k}(t)$ the top- $k$ eigenvectors of the population second moment matrix $\Sigma_{t}$ and $\alpha_{t}$ the eigen-gap at the $t$ -th iterate such that $\lambda_{k}\left(\Sigma_{t}\right) - \lambda_{k + 1}\left(\Sigma_{t}\right)\geq \alpha_{t}$ , for the $\hat{V}_k(t)$ in Algorithm 1, if $m\geq \frac{O\left(G\rho\sqrt{\ln p}\gamma_2(\mathcal{W},d)\right)^2}{\min_t\alpha_t^2}$ , we have
+
+$$
+\mathbb{E}\left[\|\hat{V}_k(t)\hat{V}_k(t)^{\top} - V_k(t)V_k(t)^{\top}\|_2\right] \leq O\left(\frac{G \rho \sqrt{\ln p}\, \gamma_2(\mathcal{W}, d)}{\alpha_t \sqrt{m}}\right), \quad \forall t \in [T]. \tag{2}
+$$
+
+Theorem 3 gives the sample complexity for the public sample size and the reconstruction error, i.e., the difference between $\hat{V}_k(t)\hat{V}_k(t)^\top$ evaluated on the public dataset $S_h$ and $V_{k}(t)V_{k}(t)^{\top}$ given by the population second moment matrix $\Sigma_t$ . The sample complexity and the reconstruction error both depend on the $\gamma_{2}$ function and the eigen-gap $\alpha_{t}$ ; a small eigen-gap $\alpha_{t}$ requires a larger public sample size $m$ .
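The mechanism behind Theorem 3 can be checked numerically via the Davis-Kahan $\sin\theta$ theorem: the projector error is at most of order $\|M_t - \Sigma_t\|_2/\alpha_t$ . A synthetic NumPy example (our own construction, standing in for the gradient second moment matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 100, 5

def top_k(A, k):
    _, vecs = np.linalg.eigh(A)   # ascending eigenvalues
    return vecs[:, -k:]           # top-k eigenvectors

# Population second moment with an eigen-gap between lambda_k and lambda_{k+1}.
Sigma = np.diag(np.concatenate([np.linspace(2.0, 1.5, k), 0.1 * np.ones(p - k)]))
E = rng.standard_normal((p, p)) * 0.01
E = (E + E.T) / 2                 # small symmetric perturbation, E ~ M_t - Sigma_t
M = Sigma + E

V, Vhat = top_k(Sigma, k), top_k(M, k)
gap = 1.5 - 0.1                   # alpha = lambda_k - lambda_{k+1}
err = np.linalg.norm(Vhat @ Vhat.T - V @ V.T, 2)
# Davis-Kahan: err is at most of order ||E||_2 / gap.
print(err, np.linalg.norm(E, 2) / gap)
```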
+
+# 3.2 EMPIRICAL RISK CONVERGENCE ANALYSIS
+
+In this section, we present the error rate of PDP-SGD for non-convex (smooth) functions; the error rate for convex functions is deferred to Appendix C. For the non-convex case, we first give the error rate of the $\ell_2$ -norm of the principal component of the gradient, i.e., $\| V_k(t)V_k(t)^{\mathsf{T}}\nabla \hat{L}_n(\mathbf{w}_t)\| _2$ . We then show that the gradient norm also converges if the principal component dominates the residual component of the gradient, as suggested by Figure 1 and recent observations (Papyan, 2019; Li et al., 2020). To present our results, we introduce some new notation. We write $[\nabla \hat{L}_n(\mathbf{w}_t)]^{\parallel} = V_k(t)V_k(t)^\intercal\nabla \hat{L}_n(\mathbf{w}_t)$ for the principal component of the gradient and $[\nabla \hat{L}_n(\mathbf{w}_t)]^\perp = \nabla \hat{L}_n(\mathbf{w}_t) - V_k(t)V_k(t)^\intercal\nabla \hat{L}_n(\mathbf{w}_t)$ for the residual component.
+
+Theorem 4 (Smooth and Non-convex) For $\rho$ -smooth function $\hat{L}_n(\mathbf{w})$ , under Assumptions 1 and 2, let $\Lambda = \frac{\sum_{t=1}^{T} 1 / \alpha_t^2}{T}$ , for any $\epsilon, \delta > 0$ , with $T = O(n^2 \epsilon^2)$ and $\eta_t = \frac{1}{\sqrt{T}}$ , PDP-SGD achieves:
+
+$$
+\frac{1}{T}\sum_{t=1}^{T} \mathbb{E}\|V_k(t)V_k(t)^{\intercal}\nabla \hat{L}_n(\mathbf{w}_t)\|_2^2 \leq \tilde{O}\left(\frac{k \rho G^2}{n\epsilon}\right) + O\left(\frac{\Lambda G^4 \rho^2 \gamma_2^2(\mathcal{W}, d) \ln p}{m}\right). \tag{3}
+$$
+
+Additionally, assuming the principal component of the gradient dominates, i.e., there exists $c > 0$ such that $\frac{1}{T}\sum_{t=1}^{T}\|[\nabla\hat{L}_n(\mathbf{w}_t)]^\perp\|_2^2 \leq c\,\frac{1}{T}\sum_{t=1}^{T}\|[\nabla\hat{L}_n(\mathbf{w}_t)]^\parallel\|_2^2$ , we have
+
+$$
+\mathbb{E}\|\nabla \hat{L}_n(\mathbf{w}_R)\|_2^2 \leq \tilde{O}\left(\frac{k \rho G^2}{n\epsilon}\right) + O\left(\frac{\Lambda G^4 \rho^2 \gamma_2^2(\mathcal{W}, d) \ln p}{m}\right), \tag{4}
+$$
+
+where $\mathbf{w}_R$ is uniformly sampled from $\{\mathbf{w}_1, \dots, \mathbf{w}_T\}$ .
+
+Theorem 4 shows that, compared with existing results for non-convex smooth functions (Wang and Xu, 2019), PDP-SGD reduces the dimension factor in the error rate from $p$ to $k$ . The error rate also includes a term depending on the $\gamma_{2}$ function and the eigen-gap $\alpha_{t}$ through $\Lambda = \frac{1}{T}\sum_{t=1}^{T} 1 / \alpha_{t}^{2}$ ; this term comes from the subspace reconstruction error. As discussed in the previous section, when the gradients stay in a union of ellipsoids, the $\gamma_{2}$ function is a constant. The term $\Lambda$ depends on the eigen-gap $\alpha_{t}$ , i.e., $\lambda_{k}(\Sigma_{t}) - \lambda_{k+1}(\Sigma_{t}) \geq \alpha_{t}$ . As shown in Figure 1, along the training trajectory there are a few dominant eigenvalues and the eigen-gap stays significant (even at the last epoch); in this case $\Lambda$ is a constant and the bound scales logarithmically with $p$ . If instead the eigen-gap $\alpha_{t}$ decays as training proceeds, e.g., $\alpha_{t} = \frac{1}{t^{1/4}}$ for $t > 0$ , then $\Lambda = O(\sqrt{T})$ . In this case, with $T = n^{2}\epsilon^{2}$ , PDP-SGD requires a public data size of $m = O(n\epsilon)$ .
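The claim $\Lambda = O(\sqrt{T})$ under the decay $\alpha_t = t^{-1/4}$ is a direct sum estimate, easy to verify numerically:

```python
import numpy as np

# With alpha_t = t^{-1/4}: Lambda = (1/T) sum_t 1/alpha_t^2 = (1/T) sum_t sqrt(t).
# The sum is ~ (2/3) T^{3/2}, so Lambda ~ (2/3) sqrt(T).
for T in (10_000, 1_000_000):
    Lam = np.sum(np.sqrt(np.arange(1, T + 1))) / T
    print(T, Lam / np.sqrt(T))   # tends to 2/3
```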
+
+# 4 EXPERIMENTS
+
+We empirically evaluate PDP-SGD on training neural networks on two datasets: MNIST (LeCun et al., 1998) and Fashion MNIST (Xiao et al., 2017). We compare the performance of PDP-SGD with the baseline DP-SGD at various privacy levels $\epsilon$ . In addition, we also explore a heuristic method, DP-SGD with random projection, obtained by replacing the projector with a Gaussian random projector in $\mathbb{R}^{k\times p}$ (Bingham and Mannila, 2001; Blocki et al., 2012); we call this method randomly projected DP-SGD (RPDP-SGD). We present the experimental results after discussing the experimental setup. More details and additional results are in Appendix D.
+
+Datasets and Network Structure. The MNIST and Fashion MNIST datasets both consist of 60,000 training examples and 10,000 test examples. To construct the private training set, we randomly sample 10,000 samples from the original training set of MNIST and Fashion MNIST, then we randomly sample 100 samples from the rest to construct the public dataset. Note that the smaller private datasets make the private learning problem more challenging. For both datasets, we use a convolutional neural network that follows the structure in Papernot et al. (2020).
+
+Training and Hyper-parameter Setting. Cross-entropy is used as the loss function throughout the experiments. The mini-batch size is set to 250 for both MNIST and Fashion MNIST. For the step size, we follow a grid search; the search space and the step sizes used for DP-SGD, PDP-SGD and RPDP-SGD are listed in Appendix D. For training, a fixed budget of 30 epochs is assigned to each task. We repeat each experiment 3 times and report the mean and standard deviation of the accuracy on the training and test sets. For PDP-SGD, we use the Lanczos algorithm to compute the top- $k$ eigenspace of the gradient second moment matrix on the public dataset. We use $k = 50$ for MNIST and $k = 70$ for Fashion MNIST. For RPDP-SGD, we use $k = 800$ for both datasets. Instead of applying the projection for all epochs, we also explored a starting point for the projection, i.e., executing the projection from the 1st epoch or the 15th epoch. We found that for Fashion MNIST, PDP-SGD and RPDP-SGD perform better when starting the projection at the 15th epoch.
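The Lanczos computation avoids forming the $p \times p$ matrix $M_t$ explicitly. A sketch using SciPy (our own helper name, assuming `scipy` is available):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def topk_lanczos(G, k):
    """Top-k eigenvectors of M = G^T G / m via Lanczos, given the
    (m, p) matrix G of stacked gradients, without ever forming M."""
    m, p = G.shape
    matvec = lambda v: G.T @ (G @ v) / m     # applies M to a vector
    op = LinearOperator((p, p), matvec=matvec, dtype=G.dtype)
    _, vecs = eigsh(op, k=k, which='LM')     # Lanczos, largest magnitude
    return vecs
```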
+
+Figure 3: Training and test accuracy for DP-SGD, PDP-SGD, and RPDP-SGD at different privacy levels for (a) MNIST and (b) Fashion MNIST. The X-axis is $\epsilon$ and the Y-axis is the train/test accuracy. In the small-$\epsilon$ regime, which is more favorable for privacy, PDP-SGD outperforms DP-SGD.
+
+Figure 4: Training dynamics of DP-SGD, PDP-SGD, and RPDP-SGD for (a) MNIST $(\epsilon = 0.23)$ and (b) Fashion MNIST $(\epsilon = 0.30)$ . The X-axis is the number of epochs and the Y-axis is the train/test accuracy. For Fashion MNIST, PDP-SGD and RPDP-SGD start the projection at the 15th epoch.
+
+Privacy Parameter Setting. We consider different choices of the noise scale, i.e., $\sigma = \{18, 14, 10, 8, 6, 4\}$ for MNIST and $\sigma = \{18, 14, 10, 6, 4, 2\}$ for Fashion MNIST. Since the gradient norm bound $G$ is unknown for deep learning, we follow the gradient clipping method of Abadi et al. (2016) to guarantee privacy, with a gradient clip size of 1.0 for both datasets. We follow the Moments Accountant (MA) method (Abadi et al., 2016; Bu et al., 2019) to calculate the accumulated privacy cost, which depends on the number of epochs, the batch size, $\delta$ , and the noise scale $\sigma$ . With 30 epochs, batch size 250, 10,000 training samples, and $\delta = 10^{-5}$ fixed, $\epsilon$ is $\{2.41, 1.09, 0.72, 0.42, 0.30, 0.23\}$ for $\sigma \in \{2, 4, 6, 10, 14, 18\}$ on Fashion MNIST. For MNIST, $\epsilon$ is $\{1.09, 0.72, 0.53, 0.42, 0.30, 0.23\}$ for $\sigma \in \{4, 6, 8, 10, 14, 18\}$ . Note that the $\epsilon$ reported in this paper is w.r.t. a subset, i.e., 10,000 samples from MNIST and Fashion MNIST.
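The per-step mechanism described above (clip each per-example gradient, average, add Gaussian noise calibrated to the clipped sensitivity, then project) can be sketched as follows. This is an illustrative sketch, not the exact training code: the subspace `V_k`, batch size, and noise parameterization are assumed inputs.

```python
import numpy as np

def pdp_sgd_step(per_example_grads, V_k, clip=1.0, sigma=10.0, rng=None):
    """One projected DP-SGD update direction (sketch): clip each per-example
    gradient to L2 norm `clip`, average, add Gaussian noise, then project
    onto the public top-k subspace spanned by the columns of V_k."""
    rng = rng if rng is not None else np.random.default_rng()
    n, p = per_example_grads.shape
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    noisy = clipped.mean(axis=0) + rng.normal(0.0, sigma * clip / n, size=p)
    return V_k @ (V_k.T @ noisy)   # projection: V_k V_k^T applied to the noisy mean

rng = np.random.default_rng(0)
grads = rng.standard_normal((250, 100))                  # batch of 250, p = 100
V_k = np.linalg.qr(rng.standard_normal((100, 20)))[0]    # orthonormal 100 x 20
g = pdp_sgd_step(grads, V_k, rng=rng)
```

The projection is applied after the noise is added, so the privacy guarantee of the underlying Gaussian mechanism is unchanged (post-processing), while the noise component orthogonal to the subspace is removed.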
+
+Experimental Results. The training and test accuracy for different $\epsilon$ are reported in Figure 3. In the small-$\epsilon$ regime, i.e., $\epsilon \leq 0.42$ for MNIST (Figure 3 (a)) and $\epsilon \leq 0.72$ for Fashion MNIST (Figure 3 (b)), PDP-SGD outperforms DP-SGD. For large $\epsilon$ (small noise scale), DP-SGD performs better than PDP-SGD, which we attribute to the subspace reconstruction error dominating the error from the injected noise. For most choices of $\epsilon$ , RPDP-SGD fails to improve the accuracy over DP-SGD because the subspace reconstruction error introduced by the random projector is larger than the noise error removed by the projection. We also note that when $\epsilon < 1$ for MNIST, PDP-SGD and DP-SGD perform better than the benchmark reported in Papernot et al. (2020), even though we only use a subset of MNIST (Figure 3 (a)); we acknowledge that Papernot et al. (2020) report the test accuracy as a training dynamic in terms of the privacy loss $\epsilon$ . Figure 4 provides two examples of the training dynamics, i.e., MNIST with $\epsilon = 0.23$ and Fashion MNIST with $\epsilon = 0.30$ , showing that PDP-SGD outperforms DP-SGD for large noise scales since PDP-SGD efficiently reduces the noise. We also validate this observation on larger training sets in Appendix D.
+
+We also study the role of the projection dimension $k$ and the public sample size $m$ . Figures 5(a) and 5(b) present the training and test accuracy of PDP-SGD with $k \in \{10, 20, 30, 50\}$ and with $m \in \{50, 100, 150\}$ , respectively, for MNIST with $\epsilon = 0.23$ . Among the choices of $k$ , PDP-SGD with $k = 50$ achieves the best accuracy. PDP-SGD with $k = 10$ proceeds more slowly than the rest, due to the larger reconstruction error introduced by projecting the gradient onto a much smaller subspace. However, compared to the gradient dimension $p \approx 25{,}000$ , it is remarkable that PDP-SGD with $k = 50$ can achieve better accuracy than DP-SGD for a certain range of $\epsilon$ . Figure 5(b) shows that the accuracy of PDP-SGD improves as $m$ increases from 50 to 150, consistent with the theoretical analysis that increasing $m$ helps to reduce the subspace reconstruction error. Also, PDP-SGD with $m = 100$ performs similarly to PDP-SGD with $m = 150$ . These results suggest that while a small public dataset is not sufficient for training an accurate predictor, it provides a useful gradient subspace projection and an accuracy improvement over DP-SGD.
+
+Figure 5: Training and test accuracy for (a) PDP-SGD with $k \in \{10,20,30,50\}$ and (b) PDP-SGD with $m \in \{50,100,150\}$ for MNIST ( $\epsilon = 0.23$ ). The X-axis and Y-axis are as in Figure 4. The performance of PDP-SGD increases as the projection dimension $k$ and the public sample size $m$ increase.
+
+Figure 6: Training and test accuracy for PDP-SGD with different frequencies of eigen-space computation for (a) MNIST $(\epsilon = 0.23)$ and (b) Fashion MNIST $(\epsilon = 0.23)$ . $s \in \{1, 10, 20\}$ is the frequency of the subspace update, i.e., the eigen-space is computed every $s$ iterations. The X-axis and Y-axis are as in Figure 4. For Fashion MNIST, PDP-SGD starts the projection at the 15th epoch. PDP-SGD with reduced eigen-space computation also improves the accuracy over DP-SGD.
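The interplay between the projection dimension $k$ and the subspace reconstruction error can be illustrated on synthetic data; the low-rank-plus-noise gradient model below is our own assumption for illustration, not the paper's gradient distribution:

```python
import numpy as np

def reconstruction_error(grads, V_k):
    """Relative Frobenius error of projecting gradient rows onto span(V_k)."""
    proj = grads @ V_k @ V_k.T
    return np.linalg.norm(grads - proj) / np.linalg.norm(grads)

rng = np.random.default_rng(0)
# Synthetic gradients: a dominant rank-30 component plus small isotropic noise.
basis = np.linalg.qr(rng.standard_normal((200, 30)))[0]        # p = 200
grads = rng.standard_normal((100, 30)) @ basis.T \
        + 0.05 * rng.standard_normal((100, 200))
_, _, Vt = np.linalg.svd(grads, full_matrices=False)
errs = [reconstruction_error(grads, Vt[:k].T) for k in (5, 10, 30)]
# The error shrinks as k approaches the effective rank of the gradients.
```

When the gradients concentrate in a low-dimensional subspace, a $k$ far smaller than $p$ already captures most of the gradient energy, which mirrors the $k = 50$ versus $p \approx 25{,}000$ observation above.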
+
+To reduce the computational cost introduced by the eigen-value decomposition, we explored PDP-SGD with sparse eigen-space computation, i.e., updating the projector every $s$ iterations. Note that PDP-SGD with $s = 1$ computes the top eigen-space at every iteration. Figure 6 reports PDP-SGD with $s = \{1, 10, 20\}$ for (a) MNIST and (b) Fashion MNIST, showing that PDP-SGD with reduced eigen-space computation still outperforms DP-SGD, with only a mild decay in accuracy as the eigen-space is computed less often.
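The sparse eigen-space schedule amounts to a simple skeleton of the training loop; `compute_subspace` and `step_fn` below are hypothetical placeholders standing in for the eigen-space computation and the noisy projected update:

```python
def train_with_sparse_projection(num_steps, s, compute_subspace, step_fn):
    """Run num_steps updates, recomputing the projector only every s steps."""
    V_k = None
    for t in range(num_steps):
        if t % s == 0:                  # refresh the eigen-space every s iterations
            V_k = compute_subspace(t)
        step_fn(t, V_k)                 # projected noisy update with the cached V_k

calls = []
train_with_sparse_projection(
    num_steps=20, s=10,
    compute_subspace=lambda t: calls.append(t) or t,   # record refresh times
    step_fn=lambda t, V: None)
# With s = 10 and 20 steps, the subspace is computed only at t = 0 and t = 10.
```

Caching the projector trades a slightly stale subspace for a factor-$s$ reduction in eigen-space computations, matching the mild accuracy decay reported in Figure 6.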
+
+# 5 CONCLUSION
+
+While DP-SGD and its variants have been well studied for private ERM, the error rate of DP-SGD depends on the ambient dimension $p$ . In this paper, we aim to bypass this dependence by leveraging the low-dimensional structure of the gradients observed in the training of deep networks. We propose PDP-SGD, which projects the noisy gradient onto an approximate subspace computed on a public dataset. We show theoretically that PDP-SGD can obtain a (near) dimension-independent error rate. We evaluate the proposed algorithm on two popular deep learning tasks and demonstrate the empirical advantages of PDP-SGD.
+
+# ACKNOWLEDGEMENT
+
+The research was supported by NSF grants IIS-1908104, OAC-1934634, IIS-1563950, a Google Faculty Research Award, a J.P. Morgan Faculty Award, and a Mozilla research grant. We would like to thank the Minnesota Super-computing Institute (MSI) for providing computational resources and support.
+
+# REFERENCES
+
+M. Abadi, A. Chu, I. Goodfellow, B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In 23rd ACM Conference on Computer and Communications Security, pages 308-318, 2016. URL https://arxiv.org/abs/1607.00133.
+B. Avent, A. Korolova, D. Zeber, T. Hovden, and B. Livshits. BLENDER: enabling local search with a hybrid differential privacy model. In 26th USENIX Security Symposium, pages 747-764, 2017. URL https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/avent.
+R. Bassily, A. Smith, and A. Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, pages 464-473. IEEE, 2014.
+R. Bassily, K. Nissim, A. D. Smith, T. Steinke, U. Stemmer, and J. Ullman. Algorithmic stability for adaptive data analysis. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, pages 1046-1059, 2016. doi: 10.1145/2897518.2897566. URL https://doi.org/10.1145/2897518.2897566.
+R. Bassily, V. Feldman, K. Talwar, and A. G. Thakurta. Private stochastic convex optimization with optimal rates. In Advances in Neural Information Processing Systems, pages 11282-11291, 2019a.
+R. Bassily, S. Moran, and N. Alon. Limits of private learning with access to public data. In Advances in Neural Information Processing Systems, pages 10342-10352, 2019b. URL http://papers.nips.cc/paper/9222-limits-of-private-learning-with-access-to-public-data.
+R. Bassily, A. Cheu, S. Moran, A. Nikolov, J. Ullman, and Z. S. Wu. Private query release assisted by public data. CoRR, abs/2004.10941, 2020. URL https://arxiv.org/abs/2004.10941.
+E. Bingham and H. Mannila. Random projection in dimensionality reduction: applications to image and text data. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 245-250, 2001.
+J. Blocki, A. Blum, A. Datta, and O. Sheffet. The Johnson-Lindenstrauss transform itself preserves differential privacy. In 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science, pages 410-419. IEEE, 2012.
+Z. Bu, J. Dong, Q. Long, and W. J. Su. Deep learning with gaussian differential privacy. arXiv preprint arXiv:1911.11607, 2019.
+C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265-284. Springer, 2006.
+C. Dwork, G. N. Rothblum, and S. P. Vadhan. Boosting and differential privacy. In 51th Annual IEEE Symposium on Foundations of Computer Science, pages 51-60. IEEE Computer Society, 2010. doi: 10.1109/FOCS.2010.12. URL https://doi.org/10.1109/FOCS.2010.12.
+C. Dwork, K. Talwar, A. Thakurta, and L. Zhang. Analyze gauss: optimal bounds for privacy-preserving principal component analysis. In Symposium on Theory of Computing, pages 11-20. ACM, 2014. doi: 10.1145/2591796.2591883. URL https://doi.org/10.1145/2591796.2591883.
+C. Dwork, V. Feldman, M. Hardt, T. Pitassi, O. Reingold, and A. L. Roth. Preserving statistical validity in adaptive data analysis. In Proceedings of the 47th Annual ACM on Symposium on Theory of Computing, pages 117-126. ACM, 2015. doi: 10.1145/2746539.2746580. URL https://doi.org/10.1145/2746539.2746580.
+V. Feldman, I. Mironov, K. Talwar, and A. Thakurta. Privacy amplification by iteration. In 59th IEEE Annual Symposium on Foundations of Computer Science, pages 521-532, 2018. doi: 10.1109/FOCS.2018.00056. URL https://doi.org/10.1109/FOCS.2018.00056.
+
+S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341-2368, 2013. doi: 10.1137/120880811. URL https://doi.org/10.1137/120880811.
+G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, third edition, 1996.
+S. Gunasekar, A. Banerjee, and J. Ghosh. Unified view of matrix completion under general structural constraints. In Proceedings of the 28th International Conference on Neural Information Processing Systems-Volume 1, pages 1180-1188, 2015.
+G. Gur-Ari, D. A. Roberts, and E. Dyer. Gradient descent happens in a tiny subspace. CoRR, abs/1812.04754, 2018. URL http://arxiv.org/abs/1812.04754.
+R. A. Horn and C. R. Johnson. Matrix analysis. Cambridge university press, 2012.
+P. Jain and A. G. Thakurta. (near) dimension independent risk bounds for differentially private learning. volume 32 of Proceedings of Machine Learning Research, pages 476-484, Beijing, China, 22-24 Jun 2014. PMLR. URL http://proceedings.mlr.press/v32/jain14.html.
+C. Jung, K. Ligett, S. Neel, A. Roth, S. Sharifi-Malvajerdi, and M. Shenfeld. A new analysis of differential privacy's generalization guarantees. volume 151, pages 31:1-31:17, 2020. doi: 10.4230/LIPIcs.ITCS.2020.31. URL https://doi.org/10.4230/LIPIcs.ITCS.2020.31.
+P. Kairouz, M. Ribero, K. Rush, and A. Thakurta. Dimension independence in unconstrained private ERM via adaptive preconditioning. CoRR, abs/2008.06570, 2020. URL https://arxiv.org/abs/2008.06570.
+S. P. Kasiviswanathan, H. K. Lee, K. Nissim, S. Raskhodnikova, and A. Smith. What can we learn privately? In 2008 49th Annual IEEE Symposium on Foundations of Computer Science, 2008.
+Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
+X. Li and A. Banerjee. Experiments with rich regime training for deep learning. arXiv preprint arXiv:2102.13522, 2021.
+X. Li, Q. Gu, Y. Zhou, T. Chen, and A. Banerjee. Hessian based analysis of SGD for deep nets: Dynamics and generalization. In Proceedings of the 2020 SIAM International Conference on Data Mining, pages 190-198. SIAM, 2020. doi: 10.1137/1.9781611976236.22. URL https://doi.org/10.1137/1.9781611976236.22.
+F. McSherry. Spectral methods for data analysis. PhD thesis, University of Washington, 2004.
+Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer Publishing Company, Incorporated, 1 edition, 2014. ISBN 1461346916.
+N. Papernot, M. Abadi, Ü. Erlingsson, I. J. Goodfellow, and K. Talwar. Semi-supervised knowledge transfer for deep learning from private training data. In 5th International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=HkwoSDPgg.
+N. Papernot, A. Thakurta, S. Song, S. Chien, and Ü. Erlingsson. Tempered sigmoid activations for deep learning with differential privacy, 2020.
+V. Papyan. Measurements of three-level hierarchical structure in the outliers in the spectrum of deepnet Hessians. In International Conference on Machine Learning, pages 5012-5021, 2019.
+B. Polyak. Gradient methods for the minimisation of functionals. USSR Computational Mathematics and Mathematical Physics, 3:864-878, 12 1963. doi: 10.1016/0041-5553(63)90382-3.
+
+S. Song, K. Chaudhuri, and A. D. Sarwate. Stochastic gradient descent with differentially private updates. In IEEE Global Conference on Signal and Information Processing, pages 245-248. IEEE, 2013. doi: 10.1109/GlobalSIP.2013.6736861. URL https://doi.org/10.1109/GlobalSIP.2013.6736861.
+S. Song, O. Thakkar, and A. Thakurta. Characterizing private clipped gradient descent on convex generalized linear problems. arXiv preprint arXiv:2006.06783, 2020.
+M. Talagrand. Upper and Lower Bounds for Stochastic Processes. Springer, 2014.
+F. Tramer and D. Boneh. Differentially private learning needs better features (or much more data). In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=YTWGvpFOQD-.
+R. Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge University Press, 2018. doi: 10.1017/9781108231596.
+M. J. Wainwright. High-dimensional statistics: A non-asymptotic viewpoint, volume 48. Cambridge University Press, 2019.
+D. Wang and J. Xu. Differentially private empirical risk minimization with smooth non-convex loss functions: A non-stationary view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 1182-1189, 2019.
+D. Wang, M. Ye, and J. Xu. Differentially private empirical risk minimization revisited: Faster and more general. In Advances in Neural Information Processing Systems, pages 2722-2731, 2017.
+H. Xiao, K. Rasul, and R. Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
+D. Yu, H. Zhang, W. Chen, and T.-Y. Liu. Do not let privacy overbill utility: Gradient embedding perturbation for private learning. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=7aogOj_VY00.
+H. Zhang, I. Mironov, and M. Hejazinia. Wide network learning with differential privacy. arXiv preprint arXiv:2103.01294, 2021.
+
+# A UNIFORM CONVERGENCE FOR SUBSPACES: PROOFS FOR SECTION 3.1
+
+In this section, we provide the proofs for Section 3.1. We first show that the second moment matrix $M_{t}$ converges to the population second moment matrix $\Sigma_{t}$ uniformly over all iterations $t\in [T]$ , i.e., we bound $\sup_{t\in [T]}\| M_t - \Sigma_t\|$ . Then we show that the top- $k$ subspace of $M_{t}$ uniformly converges to the top- $k$ subspace of $\Sigma_{t}$ , i.e., we bound $\| \hat{V}_k(t)\hat{V}_k(t)^{\mathsf{T}} - V_k(t)V_k(t)^{\mathsf{T}}\|$ for all $t\in [T]$ . Our bound depends on $\gamma_{2}(\mathcal{W},d)$ , where $\mathcal{W}$ is the set of all possible parameters along the training trajectory. In Section A.2, we show that the bound can be derived in terms of $\gamma_{2}(\mathcal{M},d)$ as well, where $\mathcal{M}$ is the set of population gradients along the training trajectory. We then provide examples of the set $\mathcal{M}$ and the corresponding values of $\gamma_{2}(\mathcal{M},d)$ .
+
+# A.1 UNIFORM CONVERGENCE BOUND
+
+Our proof of Theorem 2 relies heavily on an advanced probabilistic tool, Generic Chaining (GC) (Talagrand, 2014). Results in generic chaining are typically characterized by the so-called $\gamma_{2}$ functional (see Definition 2). Talagrand (2014) shows that for a process $(X_{t})_{t\in T}$ on a metric space $(T,d)$ , if $(X_{t})_{t\in T}$ satisfies the increment condition
+
+$$
+\forall u > 0, \mathbb {P} \left(\left| X _ {s} - X _ {t} \right| \geq u\right) \leq 2 \exp \left(- \frac {u ^ {2}}{2 d (s , t) ^ {2}}\right), \tag {5}
+$$
+
+then the size of the process can be bounded as
+
+$$
+\mathbb {E} \sup _ {t \in T} X _ {t} \leq c \gamma_ {2} (T, d), \tag {6}
+$$
+
+where $c$ is an absolute constant.
+
+To apply the GC result to establish Theorem 2, we treat $\| M_t - \Sigma_t\|_2$ as the process $X_t$ over the iterations. In detail, since $M_t = \frac{1}{m}\sum_{i=1}^{m}\nabla\ell(\mathbf{w}_t,\tilde{z}_i)\nabla\ell(\mathbf{w}_t,\tilde{z}_i)^\top$ and $\Sigma_t = \mathbb{E}_{z\sim\mathcal{P}}[\nabla\ell(\mathbf{w}_t,z)\nabla\ell(\mathbf{w}_t,z)^\top]$ , the quantity $\| M_t - \Sigma_t\|_2$ is a random process indexed by $\mathbf{w}_t \in \mathcal{W}$ , where $\mathcal{W}$ is the set of all possible iterates obtained by the algorithm.
+
+We first show in Lemma 1 that the variable $\| M_t - \Sigma_t\|_2$ satisfies the increment condition stated in equation 5. Before presenting the proof of Lemma 1, we introduce the Ahlswede-Winter inequality (Horn and Johnson, 2012; Wainwright, 2019), which will be used in the proof. The Ahlswede-Winter inequality shows that a positive semi-definite random matrix with bounded spectral norm concentrates around its expectation with high probability.
+
+Theorem 5 (Ahlswede-Winter Inequality) Let $Y$ be a random, symmetric, positive semi-definite $p \times p$ matrix such that $\| \mathbb{E}[Y] \| \leq 1$ . Suppose $\| Y \| \leq R$ for some fixed scalar $R \geq 1$ . Let $\{Y_1, \ldots, Y_m\}$ be independent copies of $Y$ (i.e., independently sampled matrices with the same distribution as $Y$ ). For any $u \in [0,1]$ , we have
+
+$$
+\mathbb {P} \left(\left\| \frac {1}{m} \sum_ {i = 1} ^ {m} Y _ {i} - \mathbb {E} [ Y _ {i} ] \right\| _ {2} > u\right) \leq 2 p \cdot \exp (- m u ^ {2} / 4 R). \tag {7}
+$$
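As a quick numerical illustration of the kind of concentration this inequality guarantees, consider rank-one copies $Y_i = v_i v_i^{\intercal}$ with unit $v_i$ (so $0 \preceq Y_i \preceq \mathbb{I}_p$ and $R = 1$ ); the unit-vector construction and dimensions are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
p, m = 10, 5000
# Independent copies Y_i = v_i v_i^T with unit v_i; for v_i uniform on the
# sphere, E[Y_i] = I/p by symmetry, and each Y_i satisfies 0 <= Y_i <= I.
vs = rng.standard_normal((m, p))
vs /= np.linalg.norm(vs, axis=1, keepdims=True)
avg = (vs.T @ vs) / m                        # (1/m) * sum_i Y_i
deviation = np.linalg.norm(avg - np.eye(p) / p, 2)   # spectral-norm deviation
```

With $m = 5000$ the spectral deviation is already far below the trivial bound of 1, in line with the $\exp(-mu^2/4R)$ tail.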
+
+To make the argument clear, we use more explicit notation for $M_{t}$ and $\Sigma_t$ . Recall that
+
+$$
+M _ {t} = \frac {1}{m} \sum_ {i = 1} ^ {m} \nabla \ell \left(\mathbf {w} _ {t}, \tilde {z} _ {i}\right) \nabla \ell \left(\mathbf {w} _ {t}, \tilde {z} _ {i}\right) ^ {\intercal}, \tag {8}
+$$
+
+and
+
+$$
+\Sigma_ {t} = \mathbb {E} _ {z \sim \mathcal {P}} \left[ \nabla \ell \left(\mathbf {w} _ {t}, z\right) \nabla \ell \left(\mathbf {w} _ {t}, z\right) ^ {\mathrm {T}} \right], \tag {9}
+$$
+
+Given the dataset $S_{h} = \{\tilde{z}_{1},\dots,\tilde{z}_{m}\}$ and the distribution $\mathcal{P}$ , where $\tilde{z}_i\sim \mathcal{P}$ for $i\in [m]$ , both $M_t$ and $\Sigma_t$ are functions of the parameter $\mathbf{w}_t$ . We therefore use $M(\mathbf{w}_t)$ and $\Sigma (\mathbf{w}_t)$ interchangeably with $M_{t}$ and $\Sigma_t$ in the rest of this section, i.e.,
+
+$$
+M (\mathbf {w}) = \frac {1}{m} \sum_ {i = 1} ^ {m} \nabla \ell \left(\mathbf {w}, \tilde {z} _ {i}\right) \nabla \ell \left(\mathbf {w}, \tilde {z} _ {i}\right) ^ {\intercal} \tag {10}
+$$
+
+and
+
+$$
+\Sigma (\mathbf {w}) = \mathbb {E} _ {z \sim \mathcal {P}} \left[ \nabla \ell (\mathbf {w}, z) \nabla \ell (\mathbf {w}, z) ^ {\mathsf {T}} \right]. \tag {11}
+$$
+
+Lemma 1 Suppose Assumptions 1 and 2 hold. For any $\mathbf{w},\mathbf{w}^{\prime}\in \mathcal{W}$ and any $u > 0$ , we have
+
+$$
+\mathbb {P} \left(\| M (\mathbf {w}) - \Sigma (\mathbf {w}) \| _ {2} - \| M \left(\mathbf {w} ^ {\prime}\right) - \Sigma \left(\mathbf {w} ^ {\prime}\right) \| _ {2} \geq \frac {u}{\sqrt {m}} \cdot 4 G \rho d (\mathbf {w}, \mathbf {w} ^ {\prime})\right) \leq 2 p \cdot \exp \left(- u ^ {2} / 4\right), \tag {12}
+$$
+
+where $d:\mathcal{W}\times \mathcal{W}\mapsto \mathbb{R}$ is the pseudo-metric in Assumption 2.
+
+Proof: We consider random variable
+
+$$
+X _ {i} = \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) ^ {\intercal} - \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) ^ {\intercal} + 2 G \rho d (\mathbf {w}, \mathbf {w} ^ {\prime}) \mathbb {I} _ {p}, \tag {13}
+$$
+
+where $\mathbb{I}_p\in \mathbb{R}^{p\times p}$ is the identity matrix.
+
+Note that $2G\rho d(\mathbf{w},\mathbf{w}^{\prime})\mathbb{I}_p$ is deterministic and the randomness of $X_{i}$ comes from $\nabla \ell (\mathbf{w},\tilde{z}_i)$ and $\nabla \ell (\mathbf{w}^{\prime},\tilde{z}_{i})$ .
+
+By the triangle inequality and the construction of $X_{i}$ , we have
+
+$$
+\begin{array}{l} \left\| M (\mathbf {w}) - \Sigma (\mathbf {w}) \right\| _ {2} - \left\| M \left(\mathbf {w} ^ {\prime}\right) - \Sigma \left(\mathbf {w} ^ {\prime}\right) \right\| _ {2} \\ = \| M (\mathbf {w}) - \mathbb {E} [ M (\mathbf {w}) ] \| _ {2} - \| M (\mathbf {w} ^ {\prime}) - \mathbb {E} [ M (\mathbf {w} ^ {\prime}) ] \| _ {2} \\ \leq \| M (\mathbf {w}) - \mathbb {E} [ M (\mathbf {w}) ] - (M (\mathbf {w} ^ {\prime}) - \mathbb {E} [ M (\mathbf {w} ^ {\prime}) ]) \| _ {2} \\ = \| M (\mathbf {w}) - M \left(\mathbf {w} ^ {\prime}\right) - \mathbb {E} \left[ M (\mathbf {w}) - M \left(\mathbf {w} ^ {\prime}\right) \right] \| _ {2} \\ = \left\| \frac {1}{m} \sum_ {i = 1} ^ {m} \left(\nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) ^ {\intercal} - \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) ^ {\intercal}\right) - \mathbb {E} \left[ \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) ^ {\intercal} - \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) ^ {\intercal} \right] \right\| _ {2} \\ = \left\| \frac {1}{m} \sum_ {i = 1} ^ {m} \left(\nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) ^ {\intercal} - \nabla \ell (\mathbf {w} ^ {\prime}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w} ^ {\prime}, \tilde {z} _ {i}) ^ {\intercal} + 2 G \rho d (\mathbf {w}, \mathbf {w} ^ {\prime}) \mathbb {I} _ {p}\right) - \mathbb {E} \left[ \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) ^ {\intercal} - \nabla \ell (\mathbf {w} ^ {\prime}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w} ^ {\prime}, \tilde {z} _ {i}) ^ {\intercal} + 2 G \rho d (\mathbf {w}, \mathbf {w} ^ {\prime}) \mathbb {I} _ {p} \right] \right\| _ {2} \\ = \left\| \frac {1}{m} \sum_ {i = 1} ^ {m} X _ {i} - \mathbb {E} [ X _ {i} ] \right\| _ {2} \tag {14} \end{array}
+$$
+
+To apply Theorem 5 for $\left\| \frac{1}{m}\sum X_i - \mathbb{E}[X_i]\right\|_2$ , we first show that the random symmetric matrix $X_{i}$ is positive semi-definite.
+
+By Assumptions 1 and 2 and the definition of the spectral norm, we have
+
+$$
+\begin{array}{l} \left\| \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) ^ {\intercal} - \nabla \ell (\mathbf {w} ^ {\prime}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w} ^ {\prime}, \tilde {z} _ {i}) ^ {\intercal} \right\| _ {2} \\ = \sup _ {\mathbf {x}: \| \mathbf {x} \| = 1} \mathbf {x} ^ {\intercal} \left(\nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) ^ {\intercal} - \nabla \ell (\mathbf {w} ^ {\prime}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w} ^ {\prime}, \tilde {z} _ {i}) ^ {\intercal}\right) \mathbf {x} \\ = \sup _ {\mathbf {x}: \| \mathbf {x} \| = 1} \mathbf {x} ^ {\mathsf {T}} \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) ^ {\mathsf {T}} \mathbf {x} - \mathbf {x} ^ {\mathsf {T}} \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) ^ {\mathsf {T}} \mathbf {x} \\ = \sup _ {\mathbf {x}: \| \mathbf {x} \| = 1} \left\langle \mathbf {x}, \nabla \ell \left(\mathbf {w}, \tilde {z} _ {i}\right) \right\rangle^ {2} - \left\langle \mathbf {x}, \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) \right\rangle^ {2} \\ = \sup _ {\mathbf {x}: \| \mathbf {x} \| = 1} \left(\langle \mathbf {x}, \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \rangle + \langle \mathbf {x}, \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) \rangle\right) \left(\langle \mathbf {x}, \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \rangle - \langle \mathbf {x}, \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) \rangle\right) \\ \leq \sup _ {\mathbf {x}: \| \mathbf {x} \| = 1} 2 G \left(\left\langle \mathbf {x}, \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \right\rangle - \left\langle \mathbf {x}, \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) \right\rangle\right) \\ = 2 G \| \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) - \nabla \ell 
\left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) \| _ {2} \\ \leq 2 G \rho d \left(\mathbf {w}, \mathbf {w} ^ {\prime}\right) \tag {15} \\ \end{array}
+$$
+
+For any non-zero vector $\mathbf{x} \in \mathbb{R}^p$ , we have
+
+$$
+\begin{array}{l} \mathbf {x} ^ {\intercal} X _ {i} \mathbf {x} = \mathbf {x} ^ {\intercal} \left(\nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) ^ {\intercal} - \nabla \ell (\mathbf {w} ^ {\prime}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w} ^ {\prime}, \tilde {z} _ {i}) ^ {\intercal} + 2 G \rho d (\mathbf {w}, \mathbf {w} ^ {\prime}) \mathbb {I} _ {p}\right) \mathbf {x} \\ = \mathbf {x} ^ {\intercal} \left(\nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) ^ {\intercal} - \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) ^ {\intercal}\right) \mathbf {x} + 2 G \rho d (\mathbf {w}, \mathbf {w} ^ {\prime}) \| \mathbf {x} \| _ {2} ^ {2} \\ \geq - \left\| \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) ^ {\top} - \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) \nabla \ell \left(\mathbf {w} ^ {\prime}, \tilde {z} _ {i}\right) ^ {\top} \right\| _ {2} \| \mathbf {x} \| _ {2} ^ {2} + 2 G \rho d (\mathbf {w}, \mathbf {w} ^ {\prime}) \| \mathbf {x} \| _ {2} ^ {2} \\ = \left(2 G \rho d (\mathbf {w}, \mathbf {w} ^ {\prime}) - \| \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w}, \tilde {z} _ {i}) ^ {\intercal} - \nabla \ell (\mathbf {w} ^ {\prime}, \tilde {z} _ {i}) \nabla \ell (\mathbf {w} ^ {\prime}, \tilde {z} _ {i}) ^ {\intercal} \| _ {2}\right) \| \mathbf {x} \| _ {2} ^ {2} \\ \overset {(a)}{\geq} 0, \tag {16} \end{array}
+$$
+
+where (a) is true because $\| \nabla \ell (\mathbf{w},\tilde{z}_i)\nabla \ell (\mathbf{w},\tilde{z}_i)^{\intercal} - \nabla \ell (\mathbf{w}',\tilde{z}_i)\nabla \ell (\mathbf{w}',\tilde{z}_i)^{\intercal}\| _2\leq 2G\rho d(\mathbf{w},\mathbf{w}')$ as shown in equation 15.
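The spectral-norm bound from equation 15 admits a quick numerical sanity check, with random vectors standing in for the gradients (an illustration, not part of the proof):

```python
import numpy as np

def spectral_gap(a, b):
    """Return (‖a aᵀ − b bᵀ‖₂, 2 G ‖a − b‖₂) with G = max(‖a‖, ‖b‖),
    the two sides of the bound in equation 15."""
    G = max(np.linalg.norm(a), np.linalg.norm(b))
    lhs = np.linalg.norm(np.outer(a, a) - np.outer(b, b), 2)  # spectral norm
    return lhs, 2 * G * np.linalg.norm(a - b)

rng = np.random.default_rng(0)
checks = [spectral_gap(rng.standard_normal(20), rng.standard_normal(20))
          for _ in range(200)]
# The left side never exceeds the 2 G ‖a − b‖ bound, for every random draw.
```

The bound itself follows from writing $aa^{\intercal} - bb^{\intercal} = a(a-b)^{\intercal} + (a-b)b^{\intercal}$ and applying the triangle inequality.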
+
+Let $Y_{i} = \frac{X_{i}}{4G\rho d(\mathbf{w},\mathbf{w}^{\prime})}$ . By equation 15, we have
+
+$$
+\begin{array}{l} \| Y _ {i} \| _ {2} = \frac {\| \nabla \ell (\mathbf {w} , \tilde {z} _ {i}) \nabla \ell (\mathbf {w} , \tilde {z} _ {i}) ^ {\intercal} - \nabla \ell (\mathbf {w} ^ {\prime} , \tilde {z} _ {i}) \nabla \ell (\mathbf {w} ^ {\prime} , \tilde {z} _ {i}) ^ {\intercal} + 2 G \rho d (\mathbf {w} , \mathbf {w} ^ {\prime}) \mathbb {I} _ {p} \| _ {2}}{4 G \rho d (\mathbf {w} , \mathbf {w} ^ {\prime})} \\ \leq \frac {\| \nabla \ell (\mathbf {w} , \tilde {z} _ {i}) \nabla \ell (\mathbf {w} , \tilde {z} _ {i}) ^ {\intercal} - \nabla \ell (\mathbf {w} ^ {\prime} , \tilde {z} _ {i}) \nabla \ell (\mathbf {w} ^ {\prime} , \tilde {z} _ {i}) ^ {\intercal} \| _ {2} + \| 2 G \rho d (\mathbf {w} , \mathbf {w} ^ {\prime}) \mathbb {I} _ {p} \| _ {2}}{4 G \rho d (\mathbf {w} , \mathbf {w} ^ {\prime})} \\ \leq \frac {2 G \rho d (\mathbf {w} , \mathbf {w} ^ {\prime}) + 2 G \rho d (\mathbf {w} , \mathbf {w} ^ {\prime})}{4 G \rho d (\mathbf {w} , \mathbf {w} ^ {\prime})} \\ = 1. \tag {17} \\ \end{array}
+$$
+
+Thus $\| Y_i\| _2\leq 1$ and $\| \mathbb{E}[Y_i]\| _2\leq 1$ . Then, applying Theorem 5 with $R = 1$ , for any $u\in [0,1]$ we have
+
+$$
+\mathbb {P} \left(\left\| \frac {1}{m} \sum_ {i = 1} ^ {m} Y _ {i} - \mathrm {E} \left[ Y _ {i} \right] \right\| > u\right) \leq 2 p \cdot \exp (- m u ^ {2} / 4). \tag {18}
+$$
+
+Note that $\left\| \frac{1}{m}\sum_{i = 1}^{m}Y_{i} - \mathbb{E}[Y_{i}]\right\|$ is always bounded by 1, since $0 \preceq Y_i \preceq \mathbb{I}_p$ and $0 \preceq \mathbb{E}[Y_i] \preceq \mathbb{I}_p$ . Hence for any $u > 1$ the event above has probability 0, which is trivially bounded by $2p\cdot \exp \left(-mu^2 /4\right)$ . Therefore, for any $u > 0$ ,
+
+$$
+\mathbb {P} \left(\left\| \frac {1}{m} \sum_ {i = 1} ^ {m} Y _ {i} - \mathrm {E} \left[ Y _ {i} \right] \right\| > u\right) \leq 2 p \cdot \exp (- m u ^ {2} / 4). \tag {19}
+$$
+
+Rescaling by $4G\rho d(\mathbf{w},\mathbf{w}^{\prime})$ and replacing $u$ by $u/\sqrt{m}$ , we obtain, for any $u > 0$ ,
+
+$$
+\mathbb {P} \left(\left\| \frac {1}{m} \sum_ {i = 1} ^ {m} X _ {i} - \operatorname {E} \left[ X _ {i} \right] \right\| > \frac {u}{\sqrt {m}} \cdot 4 G \rho d \left(\mathbf {w}, \mathbf {w} ^ {\prime}\right)\right) \leq 2 p \cdot \exp (- u ^ {2} / 4). \tag {20}
+$$
+
+Combining equation 20 and equation 14, we have
+
+$$
+\begin{array}{l} \mathbb {P} \left(\| M (\mathbf {w}) - \Sigma (\mathbf {w}) \| _ {2} - \| M (\mathbf {w} ^ {\prime}) - \Sigma (\mathbf {w} ^ {\prime}) \| _ {2} \geq \frac {u}{\sqrt {m}} \cdot 4 G \rho d (\mathbf {w}, \mathbf {w} ^ {\prime})\right) \\ = \mathbb {P} \left(\| M (\mathbf {w}) - \mathbb {E} [ M (\mathbf {w}) ] \| _ {2} - \| M (\mathbf {w} ^ {\prime}) - \mathbb {E} [ M (\mathbf {w} ^ {\prime}) ] \| _ {2} > \frac {u}{\sqrt {m}} \cdot 4 G \rho d (\mathbf {w}, \mathbf {w} ^ {\prime})\right) \\ \leq \mathbb {P} \left(\left\| \frac {1}{m} \sum_ {i = 1} ^ {m} X _ {i} - \operatorname {E} \left[ X _ {i} \right] \right\| > \frac {u}{\sqrt {m}} \cdot 4 G \rho d (\mathbf {w}, \mathbf {w} ^ {\prime})\right) \\ \leq 2 p \cdot \exp (- u ^ {2} / 4) \tag {21} \\ \end{array}
+$$
+
+That completes the proof.
+
+Based on the above result, we now turn to the proof of Theorem 2. The proof follows the generic chaining argument, i.e., Chapter 2 of Talagrand (2014).
+
+Theorem 2 (Second Moment Concentration) Under Assumption 1, 2, the second moment matrix of the public gradient $M_{t} = \frac{1}{m}\sum_{i = 1}^{m}\nabla \ell (\mathbf{w}_{t},\tilde{z}_{i})\nabla \ell (\mathbf{w}_{t},\tilde{z}_{i})^{\intercal}$ approximates the population second moment matrix $\Sigma_{t} = \mathbb{E}_{z\sim \mathcal{P}}[\nabla \ell (\mathbf{w}_{t},z)\nabla \ell (\mathbf{w}_{t},z)^{\intercal}]$ uniformly over all iterations, i.e., for any $u > 0$ ,
+
+$$
+\sup _ {t \in [ T ]} \| M _ {t} - \Sigma_ {t} \| _ {2} \leq O \left(\frac {u G \rho \sqrt {\ln p} \gamma_ {2} (\mathcal {W} , d)}{\sqrt {m}}\right), \tag {1}
+$$
+
+with probability at least $1 - c \exp \left( -u^2 / 4 \right)$ , where $c$ is an absolute constant.
+
+Proof: Note that equation 1 is a uniform bound over the iterations $t \in [T]$ . To bound $\sup_{t \in [T]} \| M_t - \Sigma_t \|_2$ , it is sufficient to bound
+
+$$
+\sup _ {\mathbf {w} \in \mathcal {W}} \| M (\mathbf {w}) - \Sigma (\mathbf {w}) \| _ {2}, \tag {22}
+$$
+
+where $\mathcal{W}$ contains all possible trajectories of $\mathbf{w}_1,\dots,\mathbf{w}_T$ .
+
+We consider a sequence of subsets $\mathcal{W}_n$ of $\mathcal{W}$ with cardinalities satisfying
+
+$$
+\operatorname {card} \mathcal {W} _ {n} \leq N _ {n}, \tag {23}
+$$
+
+where $N_0 = 1; N_n = 2^{2^n}$ if $n \geq 1$ .
+
+Let $\pi_n(\mathbf{w})\in \mathcal{W}_n$ be the approximation of any $\mathbf{w}\in \mathcal{W}$ . We decompose $\| M(\mathbf{w}) - \Sigma (\mathbf{w})\| _2$ as
+
+$$
+\begin{array}{l} \left\| M (\mathbf {w}) - \Sigma (\mathbf {w}) \right\| _ {2} - \left\| M \left(\pi_ {0} (\mathbf {w})\right) - \Sigma \left(\pi_ {0} (\mathbf {w})\right) \right\| _ {2} \\ = \sum_ {n \geq 1} \left(\| M \left(\pi_ {n} (\mathbf {w})\right) - \Sigma \left(\pi_ {n} (\mathbf {w})\right) \| _ {2} - \| M \left(\pi_ {n - 1} (\mathbf {w})\right) - \Sigma \left(\pi_ {n - 1} (\mathbf {w})\right) \| _ {2}\right), \tag {24} \\ \end{array}
+$$
+
+which holds since $\pi_n(\mathbf{w}) = \mathbf{w}$ for $n$ large enough.
+
+Based on Lemma 1, for any $u > 0$ , we have
+
+$$
+\left\| M \left(\pi_ {n} (\mathbf {w})\right) - \Sigma \left(\pi_ {n} (\mathbf {w})\right) \right\| _ {2} - \left\| M \left(\pi_ {n - 1} (\mathbf {w})\right) - \Sigma \left(\pi_ {n - 1} (\mathbf {w})\right) \right\| _ {2} \geq \frac {u}{\sqrt {m}} \cdot 4 G \rho d \left(\pi_ {n} (\mathbf {w}), \pi_ {n - 1} (\mathbf {w})\right), \tag {25}
+$$
+
+with probability at most $2p\exp \left(-\frac{u^2}{4}\right)$ .
+
+For any $n > 0$ and $\mathbf{w} \in \mathcal{W}$ , the number of possible pairs $(\pi_n(\mathbf{w}), \pi_{n-1}(\mathbf{w}))$ is
+
+$$
+\operatorname {card} \mathcal {W} _ {n} \cdot \operatorname {card} \mathcal {W} _ {n - 1} \leq N _ {n} N _ {n - 1} \leq N _ {n + 1} = 2 ^ {2 ^ {n + 1}}. \tag {26}
+$$
+
+Applying a union bound over all possible pairs of $(\pi_n(\mathbf{w}), \pi_{n-1}(\mathbf{w}))$ , following Talagrand (2014) (Chapter 2.2), for any $n > 0$ , $u > 0$ , and $\mathbf{w} \in \mathcal{W}$ , we have
+
+$$
+\left\| M \left(\pi_ {n} (\mathbf {w})\right) - \Sigma \left(\pi_ {n} (\mathbf {w})\right) \right\| _ {2} - \left\| M \left(\pi_ {n - 1} (\mathbf {w})\right) - \Sigma \left(\pi_ {n - 1} (\mathbf {w})\right) \right\| _ {2} \geq u 2 ^ {n / 2} d \left(\pi_ {n} (\mathbf {w}), \pi_ {n - 1} (\mathbf {w})\right) \cdot \frac {4 G \rho}{\sqrt {m}} \tag {27}
+$$
+
+with probability at most
+
+$$
+\sum_ {n \geq 1} 2 p \cdot 2 ^ {2 ^ {n + 1}} \exp \left(- u ^ {2} 2 ^ {n - 2}\right) \leq c ^ {\prime} p \exp \left(- \frac {u ^ {2}}{4}\right), \tag {28}
+$$
+
+where $c^\prime$ is a universal constant.
+
+Then we have
+
+$$
+\begin{array}{l} \sum_ {n \geq 1} \left(\| M \left(\pi_ {n} (\mathbf {w})\right) - \Sigma \left(\pi_ {n} (\mathbf {w})\right) \| _ {2} - \| M \left(\pi_ {n - 1} (\mathbf {w})\right) - \Sigma \left(\pi_ {n - 1} (\mathbf {w})\right) \| _ {2}\right) \\ \geq \sum_ {n \geq 1} u 2 ^ {n / 2} d \left(\pi_ {n} (\mathbf {w}), \pi_ {n - 1} (\mathbf {w})\right) \cdot \frac {4 G \rho}{\sqrt {m}} \\ \geq \sum_ {n \geq 0} u 2 ^ {n / 2} d \left(\mathbf {w}, \mathcal {W} _ {n}\right) \cdot \frac {4 G \rho}{\sqrt {m}} \tag {29} \\ \end{array}
+$$
+
+with probability at most $c'p \exp \left(-\frac{u^2}{4}\right)$ .
+
+From Theorem 5, let $Y_{i} = \frac{\nabla\ell(\pi_{0}(\mathbf{w}),\tilde{z}_{i})\nabla\ell(\pi_{0}(\mathbf{w}),\tilde{z}_{i})^{\intercal}}{G}$ , so that $\| Y_{i}\| \leq 1$ and $\| \mathbb{E}[Y_i]\| \leq 1$ . Then we have
+
+$$
+\left\| M \left(\pi_ {0} (\mathbf {w})\right) - \Sigma \left(\pi_ {0} (\mathbf {w})\right) \right\| _ {2} \geq u \frac {G}{\sqrt {m}}, \tag {30}
+$$
+
+with probability at most $2p\exp \left(-\frac{u^2}{4}\right)$ .
+
+Combining equation 24, equation 29 and equation 30, we have
+
+$$
+\begin{array}{l} \sup _ {\mathbf {w} \in \mathcal {W}} \| M (\mathbf {w}) - \Sigma (\mathbf {w}) \| _ {2} \geq \sup _ {\mathbf {w} \in \mathcal {W}} \sum_ {n \geq 0} u 2 ^ {n / 2} d (\mathbf {w}, \mathcal {W} _ {n}) \cdot \frac {4 G \rho}{\sqrt {m}} + u \frac {G}{\sqrt {m}} \\ = u \left(\frac {G \left(4 \rho \gamma_ {2} (\mathcal {W} , d) + 1\right)}{\sqrt {m}}\right) \tag {31} \\ \end{array}
+$$
+
+with probability at most $(c' + 2)p\exp \left(-\frac{u^2}{4}\right)$ .
+
+That completes the proof.
+
+Now we provide the proof of Theorem 3.
+
+Theorem 3 (Subspace Closeness) Under Assumptions 1 and 2, with $V_{k}(t)$ the top- $k$ eigenvectors of the population second moment matrix $\Sigma_{t}$ and $\alpha_{t}$ the eigen-gap at the $t$ -th iterate such that $\lambda_{k}\left(\Sigma_{t}\right) - \lambda_{k + 1}\left(\Sigma_{t}\right)\geq \alpha_{t}$ , for the $\hat{V}_k(t)$ in Algorithm 1, if $m\geq \frac{O\left(G\rho\sqrt{\ln p}\gamma_2(\mathcal{W},d)\right)^2}{\min_t\alpha_t^2}$ , we have
+
+$$
+\mathbb {E} \left[ \| \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\top} - V _ {k} (t) V _ {k} (t) ^ {\top} \| _ {2} \right] \leq O \left(\frac {G \rho \sqrt {\ln p} \gamma_ {2} (\mathcal {W} , d)}{\alpha_ {t} \sqrt {m}}\right), \forall t \in [ T ]. \tag {2}
+$$
+
+Proof: Recall that $\hat{V}_k(t)$ is the top- $k$ eigenspace of $M_t$ . Let $V_k(t)$ be the top- $k$ eigenspace of $\Sigma_t$ .
+
+$$
+\begin{array}{l} \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} - V _ {k} (t) V _ {k} (t) ^ {\intercal} = \Pi_ {M _ {t}} ^ {(k)} (\mathbb {I} - \Pi_ {\Sigma_ {t}} ^ {(k)}) - (\mathbb {I} - \Pi_ {M _ {t}} ^ {(k)}) \Pi_ {\Sigma_ {t}} ^ {(k)} \tag {32} \\ \Rightarrow \quad \| \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} - V _ {k} (t) V _ {k} (t) ^ {\intercal} \| _ {2} \leq \| \Pi_ {M _ {t}} ^ {(k)} (\mathbb {I} - \Pi_ {\Sigma_ {t}} ^ {(k)}) \| _ {2} + \| (\mathbb {I} - \Pi_ {M _ {t}} ^ {(k)}) \Pi_ {\Sigma_ {t}} ^ {(k)} \| _ {2}. \tag {33} \\ \end{array}
+$$
+
+where $\Pi_{M_t}^{(k)} = \hat{V}_k(t)\hat{V}_k(t)^\intercal$ denotes the projection onto the top- $k$ subspace of the symmetric PSD matrix $M_t$ and $\Pi_{\Sigma_t}^{(k)} = V_k(t)V_k(t)^\intercal$ denotes the projection onto the top- $k$ subspace of the symmetric PSD matrix $\Sigma_t$ . Then, from Davis-Kahan (Corollary 8 in McSherry (2004)) and using the fact that for symmetric PSD matrices eigenvalues and singular values coincide, we have
+
+$$
+\left\| \Pi_ {M _ {t}} ^ {(k)} \left(\mathbb {I} - \Pi_ {\Sigma_ {t}} ^ {(k)}\right) \right\| _ {2} \leq \frac {\left\| M _ {t} - \Sigma_ {t} \right\| _ {2}}{\lambda_ {k} \left(M _ {t}\right) - \lambda_ {k + 1} \left(\Sigma_ {t}\right)} \tag {34}
+$$
+
+$$
+\left\| \left(\mathbb {I} - \Pi_ {M _ {t}} ^ {(k)}\right) \Pi_ {\Sigma_ {t}} ^ {(k)} \right\| _ {2} \leq \frac {\left\| M _ {t} - \Sigma_ {t} \right\| _ {2}}{\lambda_ {k} (\Sigma_ {t}) - \lambda_ {k + 1} (M _ {t})}. \tag {35}
+$$
+
+Recall, from Horn and Johnson (2012) (Section 4.3) and Golub and Van Loan (1996) (Section 8.1.2), e.g., Corollary 8.1.6, we have
+
+$$
+\left| \lambda_ {k} \left(M _ {t}\right) - \lambda_ {k} \left(\Sigma_ {t}\right) \right| \leq \left\| M _ {t} - \Sigma_ {t} \right\| _ {2}. \tag {36}
+$$
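+The Davis-Kahan and Weyl inequalities above can be checked numerically. The following is a small illustrative sketch (the matrices and constants are synthetic, not from the paper): $\Sigma$ is PSD with a clear eigen-gap at $k = 3$, and $M = \Sigma + E$ is a small symmetric perturbation.
+
```python
import numpy as np

# Numerical sanity check of equations 34 and 36 (illustration only; the
# matrices are synthetic). Sigma is PSD with a clear eigen-gap at k = 3;
# M = Sigma + E is a small symmetric perturbation.
rng = np.random.default_rng(1)
p, k = 12, 3
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
Sigma = Q @ np.diag([10.0, 9.0, 8.0] + [1.0] * (p - 3)) @ Q.T
E = rng.standard_normal((p, p))
E = 0.05 * (E + E.T) / 2
M = Sigma + E

def top_k(S, k):
    """Return the top-k projector and eigenvalues (descending) of symmetric S."""
    w, V = np.linalg.eigh(S)            # eigh returns ascending eigenvalues
    return V[:, -k:] @ V[:, -k:].T, w[::-1]

P_M, lam_M = top_k(M, k)
P_S, lam_S = top_k(Sigma, k)
err = np.linalg.norm(M - Sigma, 2)

# Weyl (eq. 36): each eigenvalue moves by at most ||M - Sigma||_2.
weyl_ok = np.all(np.abs(lam_M - lam_S) <= err + 1e-10)
# Davis-Kahan style (eq. 34): projector mismatch is controlled by err / gap.
gap = lam_M[k - 1] - lam_S[k]
dk_lhs = np.linalg.norm(P_M @ (np.eye(p) - P_S), 2)
print(weyl_ok, dk_lhs <= err / gap)
```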
+
+From Theorem 2 and Lemma 2, with $Y = \| M_t - \Sigma_t\|_2$ , $A = c$ , and $B = \frac{4G\rho\sqrt{\ln p}\gamma_2(\mathcal{W},d)}{\sqrt{m}}$ , we have
+
+$$
+\mathbb {E} \left[ \| M _ {t} - \Sigma_ {t} \| _ {2} \right] \leq O \left(\frac {G \rho \sqrt {\ln p} \gamma_ {2} (\mathcal {W} , d)}{\sqrt {m}}\right). \tag {37}
+$$
+
+Let $c_{0} = O\left(G\rho \sqrt{\ln p}\gamma_{2}(\mathcal{W},d)\right)$ . For $m\geq \frac{c_0^2}{\alpha_t^2}$ , we have
+
+$$
+\mathbb {E} \left[ \| M _ {t} - \Sigma_ {t} \| _ {2} \right] \leq \frac {c _ {0}}{\sqrt {m}} \leq \frac {\alpha_ {t}}{2}. \tag {38}
+$$
+
+Then, for equation 34, we have
+
+$$
+\begin{array}{l} \left\| \Pi_ {M _ {t}} ^ {(k)} (\mathbb {I} - \Pi_ {\Sigma_ {t}} ^ {(k)}) \right\| _ {2} \leq \frac {\left\| M _ {t} - \Sigma_ {t} \right\| _ {2}}{\lambda_ {k} (M _ {t}) - \lambda_ {k + 1} (\Sigma_ {t})} \\ = \frac {\left\| M _ {t} - \Sigma_ {t} \right\| _ {2}}{\left(\lambda_ {k} (\Sigma_ {t}) - \lambda_ {k + 1} (\Sigma_ {t})\right) - \left(\lambda_ {k} (\Sigma_ {t}) - \lambda_ {k} (M _ {t})\right)} \\ \leq \frac {\left\| M _ {t} - \Sigma_ {t} \right\| _ {2}}{\alpha_ {t} - \left\| M _ {t} - \Sigma_ {t} \right\| _ {2}}. \tag {39} \\ \end{array}
+$$
+
+Then, for equation 35, we have
+
+$$
+\begin{array}{l} \| (\mathbb {I} - \Pi_ {M _ {t}} ^ {(k)}) \Pi_ {\Sigma_ {t}} ^ {(k)} \| _ {2} \leq \frac {\| M _ {t} - \Sigma_ {t} \| _ {2}}{\lambda_ {k} (\Sigma_ {t}) - \lambda_ {k + 1} (M _ {t})} \\ = \frac {\left\| M _ {t} - \Sigma_ {t} \right\| _ {2}}{\left(\lambda_ {k} \left(\Sigma_ {t}\right) - \lambda_ {k + 1} \left(\Sigma_ {t}\right)\right) + \left(\lambda_ {k + 1} \left(\Sigma_ {t}\right) - \lambda_ {k + 1} \left(M _ {t}\right)\right)} \\ \leq \frac {\left\| M _ {t} - \Sigma_ {t} \right\| _ {2}}{\alpha_ {t} - \left\| M _ {t} - \Sigma_ {t} \right\| _ {2}}. \tag {40} \\ \end{array}
+$$
+
+Combining these two bounds and equation 37, with $c_{0} = O\left(G\rho \sqrt{\ln p}\gamma_{2}(\mathcal{W},d)\right)$ , we have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \| \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} - V _ {k} (t) V _ {k} (t) ^ {\intercal} \| _ {2} \right] \leq \frac {2 \alpha_ {t}}{\alpha_ {t} - \mathbb {E} \left[ \| M _ {t} - \Sigma_ {t} \| _ {2} \right]} - 2 \\ \leq \frac {\frac {2 c _ {0}}{\sqrt {m}}}{\alpha_ {t} - \frac {c _ {0}}{\sqrt {m}}} \tag {41} \\ \end{array}
+$$
+
+Using equation 38, which gives $\frac{c_0}{\sqrt{m}} \leq \frac{\alpha_t}{2}$ , we have
+
+$$
+\mathbb {E} \left[ \| \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\top} - V _ {k} (t) V _ {k} (t) ^ {\top} \| _ {2} \right] \leq O \left(\frac {G \rho \sqrt {\ln p} \gamma_ {2} (\mathcal {W} , d)}{\alpha_ {t} \sqrt {m}}\right). \tag {42}
+$$
+
+That completes the proof.
+
+Lemma 2 (Lemma 2.2.3 in Talagrand (2014)) Consider a random variable $Y \geq 0$ which satisfies
+
+$$
+\forall u > 0, \mathbb {P} (Y \geq u) \leq A \exp \left(- \frac {u ^ {2}}{B ^ {2}}\right) \tag {43}
+$$
+
+for certain numbers $A \geq 2$ and $B > 0$ . Then
+
+$$
+\mathbb {E} Y \leq C B \sqrt {\log A}, \tag {44}
+$$
+
+where $C$ denotes a universal constant.
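+For completeness, the tail-to-expectation step behind Lemma 2 can be sketched as follows (a standard integration of the tail bound; constants are not optimized):
+
+$$
+\mathbb {E} Y = \int_ {0} ^ {\infty} \mathbb {P} (Y \geq u) \, d u \leq u _ {0} + \int_ {u _ {0}} ^ {\infty} A \exp \left(- \frac {u ^ {2}}{B ^ {2}}\right) d u,
+$$
+
+choosing $u_0 = B\sqrt{\log A}$ . Since $u / u_0 \geq 1$ on the integration range,
+
+$$
+\int_ {u _ {0}} ^ {\infty} A e ^ {- u ^ {2} / B ^ {2}} \, d u \leq A \int_ {u _ {0}} ^ {\infty} \frac {u}{u _ {0}} e ^ {- u ^ {2} / B ^ {2}} \, d u = \frac {A B ^ {2}}{2 u _ {0}} e ^ {- u _ {0} ^ {2} / B ^ {2}} = \frac {B}{2 \sqrt {\log A}},
+$$
+
+which, using $A \geq 2$ , is itself at most a constant multiple of $B\sqrt{\log A}$ , so $\mathbb{E} Y \leq C B \sqrt{\log A}$ for a universal constant $C$ .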
+
+# A.2 GEOMETRY OF GRADIENTS AND $\gamma_{2}$ FUNCTIONS.
+
+In this section, we provide more intuitions and explanations of $\gamma_{2}$ functions. We justify our assumptions about the gradient space and provide more examples of the gradient space structure and the corresponding $\gamma_{2}$ functions.
+
+At a high level, for a metric space $(M,d)$ , $\gamma_{2}(M,d)$ is related to $\sqrt{\log N(M,d,\epsilon)}$ , where $N(M,d,\epsilon)$ is the covering number of $M$ by $\epsilon$ -balls in the metric $d$ , but it is considerably sharper. Such sharpening has happened in two stages in the literature: first, based on chaining, which integrates over all $\epsilon$ and yields the Dudley bound, and subsequently, based on generic chaining, which considers a hierarchical covering, developed by Talagrand and colleagues, and which yields the sharpest bounds of this type. The canonical perspective of generic chaining is to view $\gamma_{2}(M,d)$ as an upper (and lower) bound on suprema of Gaussian processes indexed by $M$ with metric $d$ [Theorem 2.4.1 in Talagrand (2014)].
+
+Considering $d$ to be the $\ell_2$ norm distance, $\gamma_{2}(M,d)$ will be the same order as the Gaussian width of $M$ (Vershynin, 2018), which is a scaled version of the mean width of $M$ . Structured sets (of gradients) have small Gaussian widths, e.g., a $L_{1}$ unit ball in $\mathbb{R}^p$ has a Gaussian width of $O(\sqrt{\log p})$ whereas a $L_{2}$ unit ball in $\mathbb{R}^p$ has a Gaussian width of $O(\sqrt{p})$ .
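+The $\ell_1$ versus $\ell_2$ comparison above can be checked by a short Monte Carlo computation (an illustrative sketch; the dimensions and trial counts are our choices). For the $\ell_1$ unit ball the supremum $\sup_{m} \langle m, v\rangle$ equals $\|v\|_\infty$ , while for the $\ell_2$ unit ball it equals $\|v\|_2$ .
+
```python
import numpy as np

# Monte Carlo sketch of the Gaussian-width comparison (illustrative numbers):
# w(M) = E[sup_{m in M} <m, v>] with v ~ N(0, I_p). For the l1 unit ball the
# supremum is ||v||_inf, giving width ~ sqrt(2 log p); for the l2 unit ball
# it is ||v||_2, giving width ~ sqrt(p).
rng = np.random.default_rng(0)
p, trials = 1000, 2000
V = rng.standard_normal((trials, p))
w_l1 = np.abs(V).max(axis=1).mean()       # Gaussian width of the l1 ball
w_l2 = np.linalg.norm(V, axis=1).mean()   # Gaussian width of the l2 ball
print(w_l1, np.sqrt(2 * np.log(p)))       # same order
print(w_l2, np.sqrt(p))                   # same order
```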
+
+To utilize the structure of gradients as shown in Figure 2, instead of focusing on $\gamma_{2}(\mathcal{W},d)$ , we can also derive the uniform convergence bound using the metric $d$ on the set of population gradients $\mathbf{m} = \mathbb{E}_{z\sim \mathcal{P}}[\nabla \ell (\mathbf{w},z)]$ . We consider a mapping $f:\mathcal{W}\mapsto \mathcal{M}$ from the parameter space $\mathcal{W}$ to the gradient space $\mathcal{M}$ , where $f$ can be taken as $f(\mathbf{w}) = \mathbb{E}_{z\sim \mathcal{P}}[\nabla \ell (\mathbf{w},z)]$ . With $\mathbf{m}_t = \mathbb{E}_{z\sim \mathcal{P}}[\nabla \ell (\mathbf{w}_t,z)]$ and $\mathcal{M}$ the space of population gradients $\mathbf{m}_t\in \mathcal{M}$ , the pseudo metric $d(\mathbf{w},\mathbf{w}^{\prime})$ can be written as $d(\mathbf{w},\mathbf{w}^{\prime}) = d(f(\mathbf{w}),f(\mathbf{w}^{\prime})) = d(\mathbf{m},\mathbf{m}^{\prime})$ with $\mathbf{m} = \mathbb{E}_{z\sim \mathcal{P}}[\nabla \ell (\mathbf{w},z)]$ and $\mathbf{m}^{\prime} = \mathbb{E}_{z\sim \mathcal{P}}[\nabla \ell (\mathbf{w}^{\prime},z)]$ . With such a mapping $f$ , the admissible sequence $\Gamma_{\mathcal{W}} = \{\mathcal{W}_n:n\geq 0\}$ of $\mathcal{W}$ in the proof of Theorem 2 corresponds to the admissible sequence $\Gamma_{\mathcal{M}} = \{\mathcal{M}_n:n\geq 0\}$ of $\mathcal{M}$ . Then $\gamma_{2}(\mathcal{W},d) = \inf_{\Gamma_{\mathcal{W}}}\sup_{\mathbf{w}\in \mathcal{W}}\sum_{n\geq 0}2^{n / 2}d(\mathbf{w},\mathcal{W}_n) = \inf_{\Gamma_{\mathcal{M}}}\sup_{\mathbf{m}\in \mathcal{M}}\sum_{n\geq 0}2^{n / 2}d(\mathbf{m},\mathcal{M}_n) = \gamma_{2}(\mathcal{M},d)$ . Considering $d(\mathbf{m},\mathbf{m}^{\prime}) = \| \mathbf{m} - \mathbf{m}^{\prime}\| _2$ , $\gamma_{2}(\mathcal{M},d)$ will be of the same order as the Gaussian width of $\mathcal{M}$ , i.e., $w(\mathcal{M}) = \mathbb{E}_{\mathbf{v}}[\sup_{\mathbf{m}\in \mathcal{M}}\langle \mathbf{m},\mathbf{v}\rangle ]$ where $\mathbf{v}\sim N(0,\mathbb{I}_p)$ . Below, we provide more examples of the gradient space structure and the $\gamma_{2}$ functions.
+
+Ellipsoid. The Gaussian width $w(\mathcal{M})$ depends on the structure of the gradient $\mathbf{m}$ . In Figure 2, we observe that each coordinate of the gradient is small along the training trajectory, and thus $\mathcal{M}$ includes all gradients living in an ellipsoid, i.e., $\mathcal{M} = \{\mathbf{m}_t \in \mathbb{R}^p \mid \sum_{j=1}^p \mathbf{m}_t(j)^2 / \mathbf{e}(j)^2 \leq 1, \mathbf{e} \in \mathbb{R}^p\}$ . Then we have $\gamma_2(\mathcal{M}, d) \leq c_1 w(\mathcal{M}) \leq c_2 \| \mathbf{e} \|_2$ (Talagrand, 2014), where $c_1$ and $c_2$ are absolute constants. If the elements of $\mathbf{e}$ sorted in decreasing order satisfy $\mathbf{e}(j) \leq c_3 / \sqrt{j}$ for all $j \in [p]$ , then $\gamma_2(\mathcal{M}, d) \leq O\left(\sqrt{\log p}\right)$ .
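+The ellipsoid bound can be seen concretely: the support function of the ellipsoid is $\sup_{\mathbf{m}\in\mathcal{M}} \langle \mathbf{m}, \mathbf{v}\rangle = \sqrt{\sum_j \mathbf{e}(j)^2 \mathbf{v}(j)^2}$ , so by Jensen's inequality the Gaussian width is at most $\|\mathbf{e}\|_2$ . A small Monte Carlo sketch (illustrative dimensions and trial counts):
+
```python
import numpy as np

# Sketch of the ellipsoid example (illustration only): the support function
# of M = {m : sum_j m_j^2 / e_j^2 <= 1} is sqrt(sum_j e_j^2 v_j^2), so by
# Jensen the Gaussian width is at most ||e||_2. With e(j) = 1/sqrt(j),
# ||e||_2^2 is a harmonic sum ~ log p, matching the O(sqrt(log p)) bound.
rng = np.random.default_rng(0)
p, trials = 1000, 2000
e = 1.0 / np.sqrt(np.arange(1, p + 1))
V = rng.standard_normal((trials, p))
width = np.sqrt((V ** 2) @ (e ** 2)).mean()   # Monte Carlo Gaussian width
print(width, np.linalg.norm(e))               # width <= ||e||_2
```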
+
+Composition. Based on the composition properties of $\gamma_{2}$ functions (Talagrand, 2014), one can construct additional examples of gradient spaces. If $\mathcal{M} = \mathcal{M}_1 + \mathcal{M}_2 = \{\mathbf{m}_1 + \mathbf{m}_2, \mathbf{m}_1 \in \mathcal{M}_1, \mathbf{m}_2 \in \mathcal{M}_2\}$ , the Minkowski sum, then $\gamma_{2}(\mathcal{M}, d) \leq c(\gamma_{2}(\mathcal{M}_{1}, d) + \gamma_{2}(\mathcal{M}_{2}, d))$ (Theorem 2.4.15 in Talagrand (2014)), where $c$ is an absolute constant. If $\mathcal{M}$ is a union of several subsets, i.e., $\mathcal{M} = \cup_{h=1}^{D} \mathcal{M}_h$ , then by using a union bound in Theorem 2, we have $\gamma_{2}(\mathcal{M}, d) \leq \sqrt{\log D} \max_{h} \gamma_{2}(\mathcal{M}_{h}, d)$ . Thus, if $\mathcal{M}$ is a union of $D = p^s$ ellipsoids, i.e., polynomial in $p$ , then $\gamma_{2}(\mathcal{M}, \| \cdot \|_2) \leq O(\sqrt{s} \log p)$ .
+
+# B PROOFS FOR SECTION 3.2
+
+In this section, we present the proofs for Section 3.2. Then we present the error rate for convex problems in the subsequent section.
+
+Theorem 4 (Smooth and Non-convex) For $\rho$ -smooth function $\hat{L}_n(\mathbf{w})$ , under Assumptions 1 and 2, let $\Lambda = \frac{\sum_{t=1}^{T} 1 / \alpha_t^2}{T}$ , for any $\epsilon, \delta > 0$ , with $T = O(n^2 \epsilon^2)$ and $\eta_t = \frac{1}{\sqrt{T}}$ , PDP-SGD achieves:
+
+$$
+\frac {1}{T} \sum_ {t = 1} ^ {T} \mathbb {E} \| V _ {k} (t) V _ {k} (t) ^ {\intercal} \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \| _ {2} ^ {2} \leq \tilde {O} \left(\frac {k \rho G ^ {2}}{n \epsilon}\right) + O \left(\frac {\Lambda G ^ {4} \rho^ {2} \gamma_ {2} ^ {2} (\mathcal {W} , d) \ln p}{m}\right). \tag {3}
+$$
+
+Additionally, assuming the principal component of the gradient dominates, i.e., there exists $c > 0$ such that $\frac{1}{T}\sum_{t=1}^{T}\left\|\left[\nabla\hat{L}_n(\mathbf{w}_t)\right]^{\perp}\right\|_2^2 \leq c\,\frac{1}{T}\sum_{t=1}^{T}\left\|\left[\nabla\hat{L}_n(\mathbf{w}_t)\right]^{\parallel}\right\|_2^2$ , we have
+
+$$
+\mathbb {E} \| \nabla \hat {L} _ {n} (\mathbf {w} _ {R}) \| _ {2} ^ {2} \leq \tilde {O} \left(\frac {k \rho G ^ {2}}{n \epsilon}\right) + O \left(\frac {\Lambda G ^ {4} \rho^ {2} \gamma_ {2} ^ {2} (\mathcal {W} , d) \ln p}{m}\right), \tag {4}
+$$
+
+where $\mathbf{w}_R$ is uniformly sampled from $\{\mathbf{w}_1, \dots, \mathbf{w}_T\}$ .
+
+Proof: Recall that $\mathbf{g}_t = \frac{1}{|B_t|}\sum_{z_i\in B_t}\nabla \ell (\mathbf{w}_t,z_i)$ . With $B_{t}$ uniformly sampled from $S$ , we have
+
+$$
+\mathbb {E} _ {t} \left[ \mathbf {g} _ {t} \right] = \nabla \hat {L} _ {n} \left(\mathbf {w} _ {t}\right). \tag {45}
+$$
+
+Recall that the update of Algorithm 1 is
+
+$$
+\tilde {\mathbf {g}} _ {t} = \hat {V} _ {k} \hat {V} _ {k} ^ {\intercal} \left(\mathbf {g} _ {t} + \mathbf {b} _ {t}\right), \quad \text {and} \quad \mathbf {w} _ {t + 1} = \mathbf {w} _ {t} - \eta_ {t} \tilde {\mathbf {g}} _ {t}. \tag {46}
+$$
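+The update in equation 46 can be sketched in a few lines of numpy. This is an illustration only: the names (`pdp_sgd_step`, `public_grads`, `sigma`, `eta`) are our assumptions, not the paper's code, and a plain eigendecomposition of the public second moment stands in for Algorithm 1's subspace step.
+
```python
import numpy as np

# Minimal sketch of the update in equation 46 (illustrative; names are ours,
# not the paper's code). The private gradient plus Gaussian noise is projected
# onto the top-k eigenspace of the public second moment matrix.
def pdp_sgd_step(w, private_grad, public_grads, k, sigma, eta, rng):
    M = public_grads.T @ public_grads / len(public_grads)  # M_t from public data
    _, V = np.linalg.eigh(M)                               # ascending eigenvalues
    Vk = V[:, -k:]                                         # top-k eigenvectors \hat{V}_k(t)
    b = rng.normal(0.0, sigma, size=w.shape)               # privacy noise b_t
    g_tilde = Vk @ (Vk.T @ (private_grad + b))             # projected noisy gradient
    return w - eta * g_tilde

rng = np.random.default_rng(0)
p = 10
w = np.zeros(p)
public_grads = rng.standard_normal((50, p))
w_next = pdp_sgd_step(w, rng.standard_normal(p), public_grads,
                      k=3, sigma=0.1, eta=0.05, rng=rng)
```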
+
+Let $\bar{\mathbf{g}}_t = \mathbf{g}_t + \hat{V}_k\hat{V}_k^\top \mathbf{b}_t$ , and $\Delta_{t} = \hat{V}_{k}\hat{V}_{k}^{\top}\mathbf{g}_{t} - \mathbf{g}_{t}$ . Then we have
+
+$$
+\tilde {\mathbf {g}} _ {t} = \bar {\mathbf {g}} _ {t} + \Delta_ {t}. \tag {47}
+$$
+
+Since $\mathbf{b}_t$ is a zero mean Gaussian vector, we have
+
+$$
+\mathbb {E} _ {t} \left[ \bar {\mathbf {g}} _ {t} \right] = \mathbf {g} _ {t}. \tag {48}
+$$
+
+For $\rho$ -smooth $^5$ function $\hat{L}_n(\mathbf{w})$ , conditioned on $\mathbf{w}_t$ , we have
+
+$$
+\begin{array}{l} \mathbb {E} _ {t} \left[ \hat {L} _ {n} (\mathbf {w} _ {t + 1}) \right] \leq \hat {L} _ {n} (\mathbf {w} _ {t}) + \mathbb {E} _ {t} \left[ \left\langle \nabla \hat {L} _ {n} (\mathbf {w} _ {t}), \mathbf {w} _ {t + 1} - \mathbf {w} _ {t} \right\rangle \right] + \frac {\rho}{2} \eta_ {t} ^ {2} \mathbb {E} _ {t} \left[ \| \tilde {\mathbf {g}} _ {t} \| _ {2} ^ {2} \right] \\ = \hat {L} _ {n} (\mathbf {w} _ {t}) - \eta_ {t} \mathbb {E} _ {t} \left[ \left\langle \nabla \hat {L} _ {n} (\mathbf {w} _ {t}), \bar {\mathbf {g}} _ {t} + \Delta_ {t} \right\rangle \right] + \frac {\rho}{2} \eta_ {t} ^ {2} \mathbb {E} _ {t} \left[ \| \tilde {\mathbf {g}} _ {t} \| _ {2} ^ {2} \right] \\ = \hat {L} _ {n} (\mathbf {w} _ {t}) - \eta_ {t} \left\langle \nabla \hat {L} _ {n} (\mathbf {w} _ {t}), \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) + \mathbb {E} _ {t} [ \Delta_ {t} ] \right\rangle + \frac {\rho}{2} \eta_ {t} ^ {2} \mathbb {E} _ {t} \left[ \| \tilde {\mathbf {g}} _ {t} \| _ {2} ^ {2} \right] \\ = \hat {L} _ {n} \left(\mathbf {w} _ {t}\right) - \eta_ {t} \left\| \nabla \hat {L} _ {n} \left(\mathbf {w} _ {t}\right) \right\| _ {2} ^ {2} - \eta_ {t} \left\langle \nabla \hat {L} _ {n} \left(\mathbf {w} _ {t}\right), \mathbb {E} _ {t} \left[ \Delta_ {t} \right] \right\rangle + \frac {\rho}{2} \eta_ {t} ^ {2} \mathbb {E} _ {t} \left[ \| \tilde {\mathbf {g}} _ {t} \| _ {2} ^ {2} \right] \tag {49} \\ \end{array}
+$$
+
+Rearranging the above inequality, we have
+
+$$
+\eta_ {t} \left\| \nabla \hat {L} _ {n} \left(\mathbf {w} _ {t}\right)\right\| _ {2} ^ {2} + \eta_ {t} \underbrace {\mathbb {E} _ {t} \left\langle \nabla \hat {L} _ {n} \left(\mathbf {w} _ {t}\right) , \Delta_ {t} \right\rangle} _ {D _ {t, 1}} \leq \hat {L} _ {n} \left(\mathbf {w} _ {t}\right) - \mathbb {E} _ {t} \left[ \hat {L} _ {n} \left(\mathbf {w} _ {t + 1}\right)\right] + \frac {\rho}{2} \eta_ {t} ^ {2} \underbrace {\mathbb {E} _ {t} \left[ \| \tilde {\mathbf {g}} _ {t} \| _ {2} ^ {2} \right]} _ {D _ {t, 2}} \tag {50}
+$$
+
+For $D_{t,1}$ , let
+
+$$
+\nabla \hat {L} _ {n} (\mathbf {w} _ {t}) = \Pi_ {Q _ {t}} [ \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) ] + \Pi_ {Q _ {t} ^ {\perp}} [ \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) ] \tag {51}
+$$
+
+where $Q_{t}$ is $\hat{V}_k(t)\hat{V}_k(t)^\top$ and $\Pi_{Q_t}$ is the projection. $Q_{t}^{\perp}$ is the null space of $Q_{t}$ .
+
+$$
+\Pi_ {Q _ {t}} \left[ \nabla \hat {L} _ {n} \left(\mathbf {w} _ {t}\right) \right] = \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \nabla \hat {L} _ {n} \left(\mathbf {w} _ {t}\right) \tag {52}
+$$
+
+and
+
+$$
+\Pi_ {Q _ {t} ^ {\perp}} [ \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) ] = \left[ \mathbb {I} - \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \right] \nabla \hat {L} _ {n} (\mathbf {w} _ {t}). \tag {53}
+$$
+
+We have
+
+$$
+\begin{array}{l} \Delta_ {t} = \Pi_ {Q _ {t}} [ \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \mathbf {g} _ {t} - \mathbf {g} _ {t} ] + \Pi_ {Q _ {t} ^ {\perp}} [ \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \mathbf {g} _ {t} - \mathbf {g} _ {t} ] \\ = - \Pi_ {Q _ {t} ^ {\perp}} [ \mathbf {g} _ {t} ]. \tag {54} \\ \end{array}
+$$
+
+So that we have
+
+$$
+\begin{array}{l} D _ {t, 1} = \mathbb {E} _ {t} \left\langle \nabla \hat {L} _ {n} (\mathbf {w} _ {t}), \Delta_ {t} \right\rangle \\ = - \mathbb {E} _ {t} \left\langle \Pi_ {Q _ {t}} [ \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) ] + \Pi_ {Q _ {t} ^ {\perp}} [ \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) ], \Pi_ {Q _ {t} ^ {\perp}} [ \mathbf {g} _ {t} ] \right\rangle \\ = - \mathbb {E} _ {t} \left\langle \Pi_ {Q _ {t} ^ {\perp}} [ \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) ], \left[ \mathbb {I} - \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \right] \mathbf {g} _ {t} \right\rangle \\ = - \left\langle \left[ \mathbb {I} - \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \right] \nabla \hat {L} _ {n} (\mathbf {w} _ {t}), \left[ \mathbb {I} - \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \right] \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \right\rangle \\ = - \nabla \hat {L} _ {n} \left(\mathbf {w} _ {t}\right) ^ {\intercal} \left[ \mathbb {I} - \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \right] \nabla \hat {L} _ {n} \left(\mathbf {w} _ {t}\right). \tag {55} \\ \end{array}
+$$
+
+Bringing the above into equation 50, the left-hand side of equation 50 becomes
+
+$$
+\begin{array}{l} \eta_ {t} \| \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \| _ {2} ^ {2} + \eta_ {t} D _ {t, 1} = \eta_ {t} \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) ^ {\intercal} [ \mathbb {I} ] \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) - \eta_ {t} \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) ^ {\intercal} [ \mathbb {I} - \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} ] \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \\ = \eta_ {t} \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) ^ {\intercal} \left[ \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \right] \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \\ = \eta_ {t} \| \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \| _ {2} ^ {2}. \tag {56} \\ \end{array}
+$$
+
+So that we have
+
+$$
+\eta_ {t} \| \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \| _ {2} ^ {2} \leq \hat {L} _ {n} (\mathbf {w} _ {t}) - \mathbb {E} _ {t} \left[ \hat {L} _ {n} (\mathbf {w} _ {t + 1}) \right] + \frac {\rho}{2} \eta_ {t} ^ {2} \underbrace {\mathbb {E} _ {t} \left[ \| \tilde {\mathbf {g}} _ {t} \| _ {2} ^ {2} \right]} _ {D _ {t, 2}}. \tag {57}
+$$
+
+For $D_{t,2}$ , we have
+
+$$
+\begin{array}{l} \mathbb {E} _ {t} \left\| \tilde {\mathbf {g}} _ {t} \right\| _ {2} ^ {2} = \mathbb {E} _ {t} \left[ \left\| \hat {V} _ {k} \hat {V} _ {k} ^ {T} \mathbf {g} _ {t} + \hat {V} _ {k} \hat {V} _ {k} ^ {T} \mathbf {b} _ {t} \right\| _ {2} ^ {2} \right] \\ = \mathbb {E} _ {t} \left[ \left\| \hat {V} _ {k} \hat {V} _ {k} ^ {T} \mathbf {g} _ {t} \right\| _ {2} ^ {2} + \left\| \hat {V} _ {k} \hat {V} _ {k} ^ {T} \mathbf {b} _ {t} \right\| _ {2} ^ {2} \right] \\ \leq G ^ {2} + k \sigma^ {2}. \tag {58} \\ \end{array}
+$$
+
+Thus,
+
+$$
+D _ {t, 2} = \mathbb {E} _ {t} \left[ \| \bar {\mathbf {g}} _ {t} \| _ {2} ^ {2} + \| \Delta_ {t} \| _ {2} ^ {2} \right] \leq G ^ {2} + k \sigma^ {2}. \tag {59}
+$$
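+The $k\sigma^2$ term above reflects a simple fact: projecting an isotropic Gaussian onto a rank- $k$ subspace leaves expected squared norm $k\sigma^2$ , independent of the ambient dimension $p$ . A quick numerical check (illustration only; the dimensions are arbitrary):
+
```python
import numpy as np

# Quick numerical check of the k * sigma^2 term in equation 58 (illustration
# only): projecting b ~ N(0, sigma^2 I_p) onto a rank-k subspace leaves
# expected squared norm k * sigma^2, independent of p.
rng = np.random.default_rng(0)
p, k, sigma, trials = 50, 5, 2.0, 20000
Q, _ = np.linalg.qr(rng.standard_normal((p, k)))   # orthonormal basis of a k-dim subspace
b = rng.normal(0.0, sigma, size=(trials, p))
proj_sq = ((b @ Q) ** 2).sum(axis=1)               # ||Q Q^T b||^2 = ||Q^T b||^2
print(proj_sq.mean(), k * sigma ** 2)              # both close to k * sigma^2
```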
+
+Bringing the upper bound of $D_{t,2}$ into equation 57, setting $\eta_t = \frac{1}{\sqrt{T}}$ , using a telescoping sum and taking the expectation over all iterations, we have
+
+$$
+\frac {1}{T} \sum_ {t = 1} ^ {T} \mathbb {E} \| \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \| _ {2} ^ {2} \leq \frac {\hat {L} _ {n} (\mathbf {w} _ {1}) - \hat {L} _ {n} ^ {\star}}{\sqrt {T}} + \frac {\rho (G ^ {2} + k \sigma^ {2})}{2 \sqrt {T}} \tag {60}
+$$
+
+With triangle inequality and Theorem 3, we have
+
+$$
+\begin{array}{l} \left\| V _ {k} (t) V _ {k} (t) ^ {\intercal} \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \right\| _ {2} ^ {2} \leq 2 \left\| \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \right\| _ {2} ^ {2} + 2 \left\| \left(V _ {k} (t) V _ {k} (t) ^ {\intercal} - \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal}\right) \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \right\| _ {2} ^ {2} \\ \leq 2 \left\| \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \nabla \hat {L} _ {n} \left(\mathbf {w} _ {t}\right) \right\| _ {2} ^ {2} + O \left(\frac {G ^ {4} \rho^ {2} \ln p \gamma_ {2} ^ {2} (\mathcal {W} , d)}{\alpha_ {t} ^ {2} m}\right). \tag {61} \\ \end{array}
+$$
+
+With equation 60, we have
+
+$$
+\frac {1}{T} \sum_ {t = 1} ^ {T} \mathbb {E} \| V _ {k} (t) V _ {k} (t) ^ {\intercal} \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \| _ {2} ^ {2} \leq \frac {\hat {L} _ {n} (\mathbf {w} _ {1}) - \hat {L} _ {n} ^ {\star}}{\sqrt {T} / 2} + \frac {\rho \left(G ^ {2} + k \sigma^ {2}\right)}{\sqrt {T}} + O \left(\frac {\Lambda G ^ {4} \rho^ {2} \gamma_ {2} ^ {2} (\mathcal {W} , d) \ln p}{m}\right). \tag {62}
+$$
+
+where $\Lambda = \frac{1}{T}\sum_{t=1}^{T} \frac{1}{\alpha_t^2}$ .
+
+Let $\left[\nabla \hat{L}_n(\mathbf{w}_t)\right]^\parallel = V_k(t)V_k(t)^\intercal\nabla \hat{L}_n(\mathbf{w}_t)$ and $\left[\nabla \hat{L}_n(\mathbf{w}_t)\right]^\perp = \nabla \hat{L}_n(\mathbf{w}_t) - V_k(t)V_k(t)^\intercal\nabla \hat{L}_n(\mathbf{w}_t)$ .
+
+Assuming there exists $c > 0$ such that
+
+$$
+\sum_ {t = 1} ^ {T} \mathbb {E} \left\| \left[ \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \right] ^ {\perp} \right\| _ {2} ^ {2} \leq c \sum_ {t = 1} ^ {T} \mathbb {E} \left\| \left[ \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \right] ^ {\parallel} \right\| _ {2} ^ {2}. \tag {63}
+$$
+
+Then we have
+
+$$
+\frac {1}{T} \sum_ {t = 1} ^ {T} \mathbb {E} \| \nabla \hat {L} _ {n} (\mathbf {w} _ {t}) \| _ {2} ^ {2} \leq (1 + c) \left(\frac {\hat {L} _ {n} (\mathbf {w} _ {1}) - \hat {L} _ {n} ^ {\star}}{\sqrt {T} / 2} + \frac {\rho \left(G ^ {2} + k \sigma^ {2}\right)}{\sqrt {T}} + O \left(\frac {\Lambda G ^ {4} \rho^ {2} \gamma_ {2} ^ {2} (\mathcal {W} , d) \ln p}{m}\right)\right). \tag {64}
+$$
+
+Taking $T = n^2\epsilon^2$ , with $\mathbb{E}\left[\| \nabla \hat{L}_n(\mathbf{w}_R)\| _2^2\right] = \frac{1}{T}\sum_{t = 1}^{T}\mathbb{E}\| \nabla \hat{L}_n(\mathbf{w}_t)\| _2^2$ , we have
+
+$$
+\mathbb {E} \| \nabla \hat {L} _ {n} (\mathbf {w} _ {R}) \| _ {2} ^ {2} \leq \tilde {O} \left(\frac {k \rho G ^ {2}}{n \epsilon}\right) + O \left(\frac {\Lambda G ^ {4} \rho^ {2} \gamma_ {2} ^ {2} (\mathcal {W} , d) \ln p}{m}\right), \tag {65}
+$$
+
+where $\mathbf{w}_R$ is uniformly sampled from $\{\mathbf{w}_1,\dots ,\mathbf{w}_T\}$ . That completes the proof.
+
+# C ERROR RATE OF CONVEX PROBLEMS
+
+For convex and Lipschitz functions, we consider the low-rank structure of the gradient space, i.e., the population gradient second moment $\Sigma_{t}$ is of rank $k$ , which is a special case of the principal gradient dominance assumption when $\left\|\left[\nabla \hat{L}_n(\mathbf{w}_t)\right]^\perp \right\|_2 = 0$ .
+
+Theorem 6 (Convex and Lipschitz) For a $G$ -Lipschitz and convex function $\hat{L}_n(\mathbf{w})$ , under Assumptions 1 and 2 and assuming $\Sigma_t$ is of rank $k$ , let $\Lambda = \frac{\sum_{t=1}^{T} 1 / \alpha_t}{T}$ , for any $\epsilon, \delta > 0$ , with $T = O(n^2 \epsilon^2)$ and step size $\eta_t = \frac{1}{\sqrt{T}}$ , PDP-SGD achieves
+
+$$
+\mathbb {E} \left[ \hat {L} _ {n} (\bar {\mathbf {w}}) \right] - \hat {L} _ {n} \left(\mathbf {w} ^ {\star}\right) \leq O \left(\frac {k G ^ {2}}{n \epsilon}\right) + O \left(\frac {\Lambda G \rho \gamma_ {2} (\mathcal {W} , d) \ln p}{\sqrt {m}}\right), \tag {66}
+$$
+
+where $\bar{\mathbf{w}} = \frac{\sum_{t=1}^{T} \mathbf{w}_t}{T}$ , and $\mathbf{w}^{\star}$ is the minima of $\hat{L}_n(\mathbf{w})$ .
+
+PDP-SGD also demonstrates an improvement from a factor of $p$ to $k$ compared to the error rate of DP-SGD for convex functions (Bassily et al., 2014; 2019a). PDP-SGD also incurs the subspace reconstruction error, depending on the $\gamma_{2}$ function and the eigen-gap term $\Lambda = \frac{1}{T}\sum_{t=1}^{T} 1 / \alpha_{t}$ . From previous discussions, $\gamma_{2}$ is a constant under suitable assumptions on the gradient structure. For the eigen-gap term, if $\alpha_{t}$ remains constant during training as shown in Figure 1, $\Lambda$ will be a constant and the bound scales logarithmically with $p$ . If one instead assumes the eigen-gap $\alpha_{t}$ decays as training proceeds, e.g., $\alpha_{t} = \frac{1}{t^{1/2}}$ for $t > 0$ , then we have $\Lambda = O(\sqrt{T})$ . In this case, with $T = O(n^{2}\epsilon^{2})$ , PDP-SGD requires public data size $m = O(n\epsilon)$ .
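+The decaying eigen-gap case above is easy to verify arithmetically: with $\alpha_t = t^{-1/2}$ , $\Lambda = \frac{1}{T}\sum_t \sqrt{t} \approx \frac{2}{3}\sqrt{T}$ , i.e., $\Lambda = O(\sqrt{T})$ as claimed (a quick check, not from the paper):
+
```python
import numpy as np

# Arithmetic check of the decaying eigen-gap case: with alpha_t = t^{-1/2},
# Lambda = (1/T) * sum_t 1/alpha_t = (1/T) * sum_t sqrt(t) ~ (2/3) * sqrt(T).
ratios = []
for T in (10_000, 40_000):
    t = np.arange(1, T + 1)
    ratios.append(np.sqrt(t).sum() / T / np.sqrt(T))
print(ratios)  # both near 2/3
```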
+
+Recently, Kairouz et al. (2020) proposed a noisy version of the AdaGrad algorithm for unconstrained convex empirical risk minimization. Their algorithm operates with only private data and also achieves a dimension-free excess risk bound, i.e., $\tilde{O}(r / n\epsilon)$ , by assuming that the gradients along the optimization path lie in a low-dimensional subspace of constant rank $r$ . The bounds are not directly comparable since the assumptions in Kairouz et al. (2020) differ from those in this paper: they assume that the accumulated gradients along the training process lie in a constant subspace, which requires that the rank of the gradient space not grow when adding more stochastic gradients. Our work does not impose such a constant subspace assumption on $\Sigma_t$ , allowing the subspace of $\Sigma_t$ to change along the training process. In other words, our bounds hold even when the rank of the accumulated stochastic gradient space increases as training proceeds.
+
+Proof of Theorem 6: By the convexity of $\hat{L}_n(\mathbf{w})$ , we have
+
+$$
+\hat {L} _ {n} \left(\mathbf {w} _ {t}\right) - \hat {L} _ {n} \left(\mathbf {w} ^ {\star}\right) \leq \langle \mathbf {w} _ {t} - \mathbf {w} ^ {\star}, \nabla \hat {L} _ {n} \left(\mathbf {w} _ {t}\right) \rangle . \tag {67}
+$$
+
+At iteration $t$ , we have $\mathbf{w}_{t + 1} = \mathbf{w}_t - \eta_t\tilde{\mathbf{g}}_t$ , where $\tilde{\mathbf{g}}_t = \hat{V}_k\hat{V}_k^\top \mathbf{g}_t + \hat{V}_k\hat{V}_k^\top \mathbf{b}_t$ .
+
+Let $\bar{\mathbf{g}}_t = \mathbf{g}_t + \hat{V}_k\hat{V}_k^\intercal \mathbf{b}_t$ and $\Delta_t = \hat{V}_k\hat{V}_k^\intercal \mathbf{g}_t - \mathbf{g}_t$ . Then we have
+
+$$
+\tilde {\mathbf {g}} _ {t} = \bar {\mathbf {g}} _ {t} + \Delta_ {t}. \tag {68}
+$$
+
+Recall that $\mathbf{g}_t = \frac{1}{|B_t|}\sum_{z_i\in B_t}\nabla \ell (\mathbf{w}_t,z_i)$ . With $B_{t}$ uniformly sampled from $S$ , we have
+
+$$
+\mathbb {E} _ {t} \left[ \mathbf {g} _ {t} \right] = \nabla \hat {L} _ {n} \left(\mathbf {w} _ {t}\right). \tag {69}
+$$
+
+Since $\mathbf{b}_t$ is a zero mean Gaussian vector, we have $\mathbb{E}_t[\bar{\mathbf{g}}_t] = \mathbf{g}_t$ .
+
+By convexity, conditioned on $\mathbf{w}_t$ , we have
+
+$$
+\begin{array}{l} \mathbb {E} _ {t} \left[ \hat {L} _ {n} (\mathbf {w} _ {t}) - \hat {L} _ {n} (\mathbf {w} ^ {\star}) \right] \\ \leq \mathbb {E} _ {t} \left\langle \mathbf {w} _ {t} - \mathbf {w} ^ {\star}, \mathbf {g} _ {t} \right\rangle \\ = \mathbb {E} _ {t} \left\langle \mathbf {w} _ {t} - \mathbf {w} ^ {\star}, \bar {\mathbf {g}} _ {t} \right\rangle \\ = \frac {1}{\eta_ {t}} \mathbb {E} _ {t} \left\langle \mathbf {w} _ {t} - \mathbf {w} ^ {\star}, \mathbf {w} _ {t} - \mathbf {w} _ {t + 1} - \eta_ {t} \Delta_ {t} \right\rangle \\ = \frac {1}{2 \eta_ {t}} \mathbb {E} _ {t} \left(\| \mathbf {w} _ {t} - \mathbf {w} ^ {\star} \| _ {2} ^ {2} + \eta_ {t} ^ {2} \| \bar {\mathbf {g}} _ {t} \| _ {2} ^ {2} - \| \mathbf {w} _ {t + 1} - \mathbf {w} ^ {\star} - \eta_ {t} \Delta_ {t} \| _ {2} ^ {2}\right) \\ = \frac {1}{2 \eta_ {t}} \mathbb {E} _ {t} \left(\| \mathbf {w} _ {t} - \mathbf {w} ^ {\star} \| _ {2} ^ {2} + \eta_ {t} ^ {2} \| \bar {\mathbf {g}} _ {t} \| _ {2} ^ {2} - \| \mathbf {w} _ {t + 1} - \mathbf {w} ^ {\star} \| _ {2} ^ {2} - \eta_ {t} ^ {2} \| \Delta_ {t} \| _ {2} ^ {2} + 2 \langle \mathbf {w} _ {t + 1} - \mathbf {w} ^ {\star}, \eta_ {t} \Delta_ {t} \rangle\right) \\ = \frac {1}{2 \eta_ {t}} \mathbb {E} _ {t} \left(\| \mathbf {w} _ {t} - \mathbf {w} ^ {\star} \| _ {2} ^ {2} - \| \mathbf {w} _ {t + 1} - \mathbf {w} ^ {\star} \| _ {2} ^ {2}\right) + \frac {\eta_ {t}}{2} \mathbb {E} _ {t} \left[ \| \bar {\mathbf {g}} _ {t} \| _ {2} ^ {2} \right] - \frac {\eta_ {t}}{2} \mathbb {E} _ {t} \left[ \| \Delta_ {t} \| _ {2} ^ {2} \right] \\ + \mathbb {E} _ {t} \left\langle \mathbf {w} _ {t + 1} - \mathbf {w} ^ {\star}, \Delta_ {t} \right\rangle \\ \end{array}
+$$
+
+$$
+\stackrel {(a)} {\leq} \frac {1}{2 \eta_ {t}} \mathbb {E} _ {t} \left(\| \mathbf {w} _ {t} - \mathbf {w} ^ {\star} \| _ {2} ^ {2} - \| \mathbf {w} _ {t + 1} - \mathbf {w} ^ {\star} \| _ {2} ^ {2}\right) + \frac {\eta_ {t}}{2} \left(G ^ {2} + k \sigma^ {2}\right) + B \mathbb {E} _ {t} \| \Delta_ {t} \| _ {2}, \tag {70}
+$$
+
+where $(a)$ is true since
+
+$$
+\begin{array}{l} \mathbb {E} _ {t} \| \bar {\mathbf {g}} _ {t} \| _ {2} ^ {2} \leq \mathbb {E} _ {t} [ \| \mathbf {g} _ {t} \| _ {2} ^ {2} + \| \hat {V} _ {k} \hat {V} _ {k} ^ {\intercal} \mathbf {b} _ {t} \| _ {2} ^ {2} ] \\ \leq G ^ {2} + k \sigma^ {2}, \tag {71} \\ \end{array}
+$$
+
+and $\| \mathbf{w}_{t + 1} - \mathbf{w}^{\star}\|_{2}\leq B$ $^{6}$ .
+
+Let $\eta_t = \frac{1}{\sqrt{T}}$ . Taking the expectation over all iterations and summing over $t = 1, \dots, T$ , we have
+
+$$
+\frac {1}{T} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \hat {L} _ {n} \left(\mathbf {w} _ {t}\right) - \hat {L} _ {n} \left(\mathbf {w} ^ {\star}\right) \right] \leq \frac {\left\| \mathbf {w} _ {0} - \mathbf {w} ^ {\star} \right\| _ {2} ^ {2} + G ^ {2} + k \sigma^ {2}}{2 \sqrt {T}} + \frac {B \sum_ {t = 1} ^ {T} \mathbb {E} _ {t} \left\| \Delta_ {t} \right\| _ {2}}{T}. \tag {72}
+$$
+
+From Theorem 3, we have
+
+$$
+\begin{array}{l} \mathbb {E} _ {t} \left[ \| \Delta_ {t} \| _ {2} \right] = \mathbb {E} \left[ \left\| \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\intercal} \mathbf {g} _ {t} - V (t) V (t) ^ {\intercal} \mathbf {g} _ {t} \right\| _ {2} \right] \\ \leq \mathbb {E} \left[ \left\| \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\top} - V (t) V (t) ^ {\top} \right\| _ {2} G \right] \\ \leq G \mathbb {E} \left[ \left\| \hat {V} _ {k} (t) \hat {V} _ {k} (t) ^ {\top} - V _ {k} (t) V _ {k} (t) ^ {\top} \right\| _ {2} + \| V _ {k} (t) V _ {k} (t) ^ {\top} - V (t) V (t) ^ {\top} \| _ {2} \right] \\ \leq O \left(\frac {G \rho \ln p \gamma_ {2} (\mathcal {W} , d)}{\alpha_ {t} \sqrt {m}}\right), \tag {73} \\ \end{array}
+$$
+
+where the last inequality holds because $\Sigma_{t}$ has rank $k$ and thus $V(t) = V_{k}(t)$ .
+
+Substituting this into Equation 72, using the fact that $\| \mathbf{w}_0 - \mathbf{w}^\star \|_2 \leq B$ , and applying Jensen's inequality, we have
+
+$$
+\mathbb {E} \left[ \hat {L} _ {n} (\bar {\mathbf {w}}) \right] - \hat {L} _ {n} \left(\mathbf {w} ^ {\star}\right) \leq \frac {B ^ {2} + G ^ {2} + k \sigma^ {2}}{2 \sqrt {T}} + O \left(\frac {\Lambda G \rho \ln p \gamma_ {2} (\mathcal {W} , d)}{\sqrt {m}}\right). \tag {74}
+$$
+
+where $\Lambda = \sum_{t=1}^{T} \frac{1}{\alpha_t}$ .
+
+With $\sigma^2 = \frac{G^2T}{n^2\epsilon^2}$ and $T = n^2\epsilon^2$ , we have
+
+$$
+\mathbb {E} \left[ \hat {L} _ {n} (\bar {\mathbf {w}}) \right] - \hat {L} _ {n} \left(\mathbf {w} ^ {\star}\right) \leq O \left(\frac {k G ^ {2} B ^ {2}}{n \epsilon}\right) + O \left(\frac {\Lambda G \rho \ln p \gamma_ {2} (\mathcal {W} , d)}{\sqrt {m}}\right). \tag {75}
+$$
+
+That completes the proof.
+
+# D EXPERIMENTAL SETUP AND ADDITIONAL RESULTS
+
+Datasets and Network Structure. The MNIST and Fashion MNIST datasets each consist of 60,000 training examples and 10,000 test examples. To construct the private training set, we randomly sample 10,000 examples from the original training set of MNIST; we then randomly sample 100 examples from the remainder to form the public dataset $^{7}$ . See Table 2 for details. For both MNIST and Fashion MNIST, we use a convolutional neural network that follows the structure in Papernot et al. (2020), whose architecture is described in Table 1. All experiments were run on NVIDIA Tesla K40 GPUs.
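The private/public split described above can be sketched as follows. This is a minimal index-level sketch; the function name and the use of a fixed seed are illustrative, not the authors' implementation:

```python
import numpy as np

def split_private_public(n_total=60000, n_private=10000, n_public=100, seed=0):
    """Sample disjoint private and public index sets from a training pool.

    The private set plays the role of the sensitive training data; the
    public set (sampled from the remaining examples) is only used to
    estimate the gradient subspace.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_total)
    private_idx = perm[:n_private]
    # Public examples come from the rest of the pool, so the two sets
    # are disjoint by construction.
    public_idx = rng.choice(perm[n_private:], size=n_public, replace=False)
    return private_idx, public_idx
```

The returned index arrays would then be used to slice the actual MNIST arrays.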
+
+Table 1: Network architecture for MNIST and Fashion MNIST.
+
+| Layer | Parameters |
+| --- | --- |
+| Convolution | 16 filters of 8 × 8, strides 2 |
+| Max-Pooling | 2 × 2 |
+| Convolution | 32 filters of 4 × 4, strides 2 |
+| Max-Pooling | 2 × 2 |
+| Fully connected | 32 units |
+| Softmax | 10 units |
+
+Table 2: Neural network and datasets setup.
+
+| Dataset | Model | Features | Classes | Training size | Public size | Test size |
+| --- | --- | --- | --- | --- | --- | --- |
+| MNIST | CNN | 28 × 28 | 10 | 10,000 | 100 | 10,000 |
+| Fashion MNIST | CNN | 28 × 28 | 10 | 10,000 | 100 | 10,000 |
+
+Hyper-parameter Setting. We consider different choices of the noise scale, i.e., $\sigma = \{18, 14, 10, 8, 6, 4\}$ for MNIST and $\sigma = \{18, 14, 10, 6, 4, 2\}$ for Fashion MNIST. Cross-entropy is used as the loss function throughout the experiments. The mini-batch size is set to 250 for both MNIST and Fashion MNIST. For the step size, we perform a grid search over $\{0.01, 0.05, 0.1, 0.2\}$ for MNIST and over $\{0.01, 0.02, 0.05, 0.1, 0.2\}$ for Fashion MNIST, choosing the step size based on the training accuracy at the last epoch. The best step sizes for DP-SGD and PDP-SGD at different privacy levels are presented in Table 3 and Table 4 for MNIST and Fashion MNIST, respectively. For training, a fixed budget of 30 epochs is assigned to each task. We repeat each experiment 3 times and report the mean and standard deviation of the accuracy on the training and test sets. For PDP-SGD, the projection dimension $k$ is a hyper-parameter that trades off the reconstruction error against the noise reduction: a smaller $k$ removes more noise but introduces a larger reconstruction error. We explored $k = \{20, 30, 50\}$ for MNIST and $k = \{30, 50, 70\}$ for Fashion MNIST, and found that $k = 50$ and $k = 70$ achieve the best performance for MNIST and Fashion MNIST, respectively, among the values considered. Instead of projecting in all epochs, we also explored a starting point for the projection, i.e., executing the projection from the 1st epoch or the 15th epoch. The projection dimension $k$ and the starting epoch for projection are also given in Table 3 and Table 4 for MNIST and Fashion MNIST, respectively.
+
+Privacy Parameter Setting. Since the gradient norm bound $G$ is unknown for deep learning, we follow the gradient clipping method of Abadi et al. (2016) to guarantee privacy. We implement the micro-batch clipping method in PyTorch $^8$ . We use micro-batch $= 1$ and micro-batch $= 5$ for MNIST and Fashion MNIST, respectively. Note that training with micro-batch clipping requires the noise to be scaled by the micro-batch size to guarantee the same privacy, but it takes less time than training with per-sample clipping, i.e., micro-batch $= 1$ . We follow the Moments Accountant (MA) method (Bu et al., 2019) to calculate the accumulated privacy cost, which depends on the number of epochs, the batch size, $\delta$ , and the noise variance $\sigma$ . With 30 epochs, batch size 250,
+
+Table 3: Hyper-parameter settings for DP-SGD and PDP-SGD for MNIST.
+
+| | σ = 18 (ε = 0.23) | σ = 14 (ε = 0.30) | σ = 10 (ε = 0.42) | σ = 8 (ε = 0.53) | σ = 6 (ε = 0.72) | σ = 4 (ε = 1.09) |
+| --- | --- | --- | --- | --- | --- | --- |
+| **DP-SGD** | | | | | | |
+| Step size | 0.05 | 0.05 | 0.05 | 0.05 | 0.1 | 0.1 |
+| **PDP-SGD** | | | | | | |
+| Step size | 0.1 | 0.2 | 0.2 | 0.1 | 0.1 | 0.2 |
+| Starting epoch for projection | 1 | 1 | 1 | 15 | 15 | 15 |
+| Projection dimension k | 50 | 50 | 50 | 50 | 50 | 50 |
+
+10,000 training samples, and $\delta$ fixed at $10^{-5}$ , $\epsilon$ is $\{2.41, 1.09, 0.72, 0.42, 0.30, 0.23\}$ for $\sigma \in \{2, 4, 6, 10, 14, 18\}$ for Fashion MNIST. For MNIST, $\epsilon$ is $\{1.09, 0.72, 0.53, 0.42, 0.30, 0.23\}$ corresponding to $\sigma = \{4, 6, 8, 10, 14, 18\}$ . Note that the $\epsilon$ presented in this paper is w.r.t. a subset, i.e., 10,000 samples from MNIST and Fashion MNIST. Alternatively, one could fix the value of $\epsilon$ and search over the number of epochs, the batch size, and the noise scale to boost the performance at a fixed privacy level $\epsilon$ . We omit such complicated hyper-parameter tuning since it carries a high risk of privacy leakage.
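The micro-batch clipping step described above can be sketched as follows, following the clipping scheme of Abadi et al. (2016). This is a simplified sketch over flattened gradients; the function name and the flat-vector representation are illustrative, not the authors' PyTorch implementation:

```python
import numpy as np

def clipped_noisy_gradient(micro_grads, clip_norm, sigma, rng):
    """Clip each micro-batch gradient to `clip_norm` in L2 norm, sum,
    add Gaussian noise calibrated to the clipping bound, and average.

    `micro_grads` is a list of flattened per-micro-batch gradients.
    With micro-batch size > 1, the noise scale must be increased
    accordingly to keep the same privacy guarantee, as noted in the text.
    """
    clipped = []
    for g in micro_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip_norm, size=total.shape)
    return (total + noise) / len(micro_grads)
```

With `sigma = 0`, the returned vector is simply the average of the clipped micro-batch gradients, so its norm never exceeds `clip_norm`.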
+
+Table 4: Hyper-parameter settings for DP-SGD and PDP-SGD for Fashion MNIST.
+
+| | σ = 18 (ε = 0.23) | σ = 14 (ε = 0.30) | σ = 10 (ε = 0.42) | σ = 6 (ε = 0.72) | σ = 4 (ε = 1.09) | σ = 2 (ε = 2.41) |
+| --- | --- | --- | --- | --- | --- | --- |
+| **DP-SGD** | | | | | | |
+| Step size | 0.01 | 0.01 | 0.01 | 0.02 | 0.02 | 0.05 |
+| **PDP-SGD** | | | | | | |
+| Step size | 0.01 | 0.01 | 0.02 | 0.02 | 0.02 | 0.05 |
+| Starting epoch for projection | 15 | 15 | 15 | 15 | 15 | 15 |
+| Projection dimension k | 70 | 70 | 70 | 70 | 70 | 70 |
+
+Additional Experimental Results. The training dynamics of DP-SGD and PDP-SGD at different privacy levels are presented in Figure 7 and Figure 8 for MNIST and Fashion MNIST, respectively. The results suggest that for small $\epsilon$ , PDP-SGD can efficiently reduce the noise variance injected into the gradient, which improves the training and test accuracy over DP-SGD.
+
+In order to understand the role of the projection dimension $k$ , we run PDP-SGD with projection starting from the first epoch. Figure 9 reports PDP-SGD with $k \in \{10,20,30,50\}$ for MNIST with $\epsilon = 0.30$ (Figure 9(a)) and $\epsilon = 0.53$ (Figure 9(b)). Among these choices of $k$ , PDP-SGD with $k = 50$ performs better than the others in terms of training and test accuracy. PDP-SGD with $k = 10$ proceeds more slowly than PDP-SGD with $k = 20$ and $k = 50$ , due to the larger reconstruction error introduced by projecting the gradient to a much smaller subspace, i.e., $k = 10$ . However, compared to the gradient dimension $p \approx 25{,}000$ , it is impressive that PDP-SGD with $k = 50$ , which projects the gradient to a much smaller subspace, can achieve better accuracy than DP-SGD.
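The role of $k$ can be made concrete with a small sketch of the projection step: the top-$k$ subspace is estimated from a stack of public gradients and the isotropic noise is restricted to it, so the injected noise energy scales with $k$ rather than with the ambient dimension $p$. This is a simplified sketch under the assumption that the projector is applied to the noise; the exact placement of the projector in PDP-SGD follows the analysis above:

```python
import numpy as np

def project_noisy_gradient(grad, public_grads, k, sigma, rng):
    """Estimate the top-k gradient subspace from public gradients
    (rows of `public_grads`), then add Gaussian noise projected onto it.

    Projecting p-dimensional isotropic noise onto a k-dimensional
    subspace reduces its expected squared norm from p*sigma^2 to
    k*sigma^2, at the cost of a subspace reconstruction error.
    """
    # Top-k right singular vectors of the public gradient matrix.
    _, _, vt = np.linalg.svd(public_grads, full_matrices=False)
    Vk = vt[:k].T                       # shape (p, k)
    noise = rng.normal(0.0, sigma, size=grad.shape)
    return grad + Vk @ (Vk.T @ noise)   # noise lives in the k-dim subspace
```

With `sigma = 0` the update reduces to the clean gradient, and for small `k` the added noise occupies only a `k`-dimensional slice of the parameter space.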
+
+We also empirically evaluate the effect of the public sample size $m$ . Figures 11(a) and 11(b) present the training and test accuracy of PDP-SGD with $m \in \{50, 150, 200\}$ for $\epsilon = 0.23$ and $\epsilon = 1.09$ on the Fashion MNIST dataset. The training and test accuracy of PDP-SGD increases as the public sample size grows from 50 to 150. This is consistent with the theoretical bound, which suggests that increasing $m$ reduces the subspace reconstruction error. PDP-SGD with $m = 150$ and $m = 200$ performs slightly better than $m = 50$ in terms of training and test accuracy. The results suggest that while a small public dataset is not sufficient for training an accurate predictor, it provides a useful gradient subspace projection and an accuracy improvement over DP-SGD.
+
+We also compare PDP-SGD and DP-SGD for different numbers of training samples, i.e., MNIST with 20,000 samples (Figure 12(a)) and Fashion MNIST with 50,000 samples (Figure 12(b)), with 100 public samples in both cases. The observation in Figure 3 that PDP-SGD outperforms DP-SGD in the small $\epsilon$ regime also holds for these other numbers of training samples.
+
+We also explore PDP-SGD with sparser eigen-space computation, i.e., updating the projector every $s$ iterations. Note that PDP-SGD with $s = 1$ means computing the top eigen-space at every iteration. Figure 13 reports PDP-SGD with $s = \{1,10,20\}$ for (a) MNIST with 50,000 samples and (b) Fashion MNIST with 50,000 samples, showing only a mild decay for PDP-SGD with fewer eigen-space computations. PDP-SGD with reduced eigen-space computation still improves the accuracy over DP-SGD.
+
+[Figure 7 panels: (a) training accuracy, $\epsilon = 0.23$; (b) training accuracy, $\epsilon = 0.30$; (c) training accuracy, $\epsilon = 0.42$; (d) test accuracy, $\epsilon = 0.23$; (e) test accuracy, $\epsilon = 0.30$; (f) test accuracy, $\epsilon = 0.42$.]
+
+Figure 7: Comparison of DP-SGD and PDP-SGD for MNIST. (a-c) report the training accuracy and (d-f) report the test accuracy for $\epsilon = \{0.23, 0.30, 0.42\}$ . The X-axis is the number of epochs, and the Y-axis is the train/test accuracy. PDP-SGD outperforms DP-SGD for small $\epsilon$ .
+
+[Figure 8 panels: (a) training accuracy, $\epsilon = 0.23$; (b) training accuracy, $\epsilon = 0.30$; (c) test accuracy, $\epsilon = 0.23$; (d) test accuracy, $\epsilon = 0.30$.]
+
+Figure 8: Comparison of DP-SGD and PDP-SGD for Fashion MNIST. (a-b) report the training accuracy and (c-d) report the test accuracy for $\epsilon = \{0.23, 0.30\}$ . The learning rate is 0.01 for both PDP-SGD and DP-SGD. PDP-SGD starts the projection at the 15th epoch. The X-axis is the number of epochs, and the Y-axis is the train/test accuracy. PDP-SGD outperforms DP-SGD for small $\epsilon$ .
+
+[Figure 9 panels: (a) MNIST, $\epsilon = 0.30$; (b) MNIST, $\epsilon = 0.53$.]
+
+Figure 9: Training and test accuracy for PDP-SGD with $k = \{10,20,30,50\}$ for (a) MNIST with $\epsilon = 0.30$ ; (b) MNIST with $\epsilon = 0.53$ . The X-axis and Y-axis refer to Figure 4. PDP-SGD with $k = 50$ performs better than the others in terms of training and test accuracy.
+
+[Figure 10 panels: (a) MNIST, $\epsilon = 0.30$; (b) MNIST, $\epsilon = 0.42$.]
+
+Figure 10: Training and test accuracy for PDP-SGD with $m = \{50, 100, 150\}$ for (a) MNIST with $\epsilon = 0.30$ ; (b) MNIST with $\epsilon = 0.53$ . The X-axis and Y-axis refer to Figure 4. PDP-SGD with $m = 150$ and $m = 100$ perform better than the others in terms of training and test accuracy.
+
+[Figure 11 panels: (a) Fashion MNIST, $\epsilon = 0.23$; (b) Fashion MNIST, $\epsilon = 1.09$.]
+
+Figure 11: Training and test accuracy for PDP-SGD with $m = \{50,150,200\}$ for (a) Fashion MNIST with $\epsilon = 0.23$ ; (b) Fashion MNIST with $\epsilon = 1.09$ . The X-axis and Y-axis refer to Figure 4. PDP-SGD with $m = 150$ and $m = 200$ performs slightly better than $m = 50$ in terms of training and test accuracy.
+
+[Figure 12 panels: (a) MNIST; (b) Fashion MNIST.]
+
+Figure 12: Training and test accuracy for DP-SGD and PDP-SGD at different privacy levels for (a) MNIST with 20,000 samples and (b) Fashion MNIST with 50,000 samples. The X-axis and Y-axis refer to Figure 3. For small privacy loss $\epsilon$ , PDP-SGD outperforms DP-SGD.
+
+[Figure 13 panels: (a) MNIST, $\epsilon = 0.06$; (b) Fashion MNIST, $\epsilon = 0.07$.]
+
+Figure 13: Training and test accuracy for PDP-SGD with different frequencies of eigen-space computation for (a) MNIST with 50,000 samples and (b) Fashion MNIST with 50,000 samples. $s = \{5,10,20\}$ is the frequency of the subspace update, i.e., the eigen-space is computed every $s$ iterations. The X-axis and Y-axis refer to Figure 4. PDP-SGD with reduced eigen-space computation still improves the accuracy over DP-SGD.
\ No newline at end of file
diff --git a/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/images.zip b/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b4c7a697e987f9e89d322f1c9f895ed5c2badb59
--- /dev/null
+++ b/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d93f8c22c7b32c032d82df1e754c338581e2740c10170bde3f3252310185466e
+size 1626307
diff --git a/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/layout.json b/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d4fbedf1aebf6587b643a8ab4625be83052e16d6
--- /dev/null
+++ b/bypassingtheambientdimensionprivatesgdwithgradientsubspaceidentification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9613c8b0f5c17573d80e743ee488d50fad6c0cacb379bd901a120abb3b4d5e6f
+size 1492498
diff --git a/byzantineresilientnonconvexstochasticgradientdescent/ddb45c75-10fb-4e2c-8a30-563034a8aa17_content_list.json b/byzantineresilientnonconvexstochasticgradientdescent/ddb45c75-10fb-4e2c-8a30-563034a8aa17_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9d68c0e6437caf5a5d5e0a91b4d69a09e2841e0d
--- /dev/null
+++ b/byzantineresilientnonconvexstochasticgradientdescent/ddb45c75-10fb-4e2c-8a30-563034a8aa17_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5e4bad44757a5198767525ea63649d55f03df031301be5edd3367c7b618649de
+size 188257
diff --git a/byzantineresilientnonconvexstochasticgradientdescent/ddb45c75-10fb-4e2c-8a30-563034a8aa17_model.json b/byzantineresilientnonconvexstochasticgradientdescent/ddb45c75-10fb-4e2c-8a30-563034a8aa17_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d3b52c9302857cee12700f26de093f78211eaee6
--- /dev/null
+++ b/byzantineresilientnonconvexstochasticgradientdescent/ddb45c75-10fb-4e2c-8a30-563034a8aa17_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d84a2f9eb26eeb75a24fd9a9f1bcdf2674f047fdab1af4c33cf8ddb342489c4
+size 231258
diff --git a/byzantineresilientnonconvexstochasticgradientdescent/ddb45c75-10fb-4e2c-8a30-563034a8aa17_origin.pdf b/byzantineresilientnonconvexstochasticgradientdescent/ddb45c75-10fb-4e2c-8a30-563034a8aa17_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..abc04faea63155fd6c9a16b3be332f30e1cb7e92
--- /dev/null
+++ b/byzantineresilientnonconvexstochasticgradientdescent/ddb45c75-10fb-4e2c-8a30-563034a8aa17_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:102472ced6a555e15a3c41205d398504f77092af2c12fe456a65de3aceced906
+size 1968749
diff --git a/byzantineresilientnonconvexstochasticgradientdescent/full.md b/byzantineresilientnonconvexstochasticgradientdescent/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ba7cb4bb6bf87b6e8ac2d42d8d5bea3ce9377ad2
--- /dev/null
+++ b/byzantineresilientnonconvexstochasticgradientdescent/full.md
@@ -0,0 +1,883 @@
+# BYZANTINE-RESILIENT NON-CONVEX STOCHASTIC GRADIENT DESCENT*
+
+Zeyuan Allen-Zhu† Faeze Ebrahimian‡ Jerry Li§ Dan Alistarh¶
+
+# ABSTRACT
+
+We study adversary-resilient stochastic distributed optimization, in which $m$ machines can independently compute stochastic gradients, and cooperate to jointly optimize over their local objective functions. However, an $\alpha$ -fraction of the machines are Byzantine, in that they may behave in arbitrary, adversarial ways. We consider a variant of this procedure in the challenging non-convex case. Our main result is a new algorithm SafeguardSGD which can provably escape saddle points and find approximate local minima of the non-convex objective. The algorithm is based on a new concentration filtering technique, and its sample and time complexity bounds match the best known theoretical bounds in the stochastic, distributed setting when no Byzantine machines are present.
+
+Our algorithm is very practical: it improves upon the performance of all prior methods when training deep neural networks, it is relatively lightweight, and it is the first method to withstand two recently-proposed Byzantine attacks.
+
+# 1 INTRODUCTION
+
+Motivated by the pervasiveness of large-scale distributed machine learning, there has recently been significant interest in providing distributed optimization algorithms with strong fault-tolerance guarantees. In this context, the strongest, most stringent fault model is that of Byzantine faults (Lamport et al., 1982): given $m$ machines, each having access to private data, at most an $\alpha$ fraction of the machines can behave in arbitrary, possibly adversarial ways, with the goal of breaking or slowing down the algorithm. Although extremely harsh, this fault model is the "gold standard" in distributed computing (Lynch, 1996; Lamport et al., 1982; Castro et al., 1999), as algorithms proven to be correct in this setting are guaranteed to converge under arbitrary system behaviour.
+
+A setting of particular interest in this context has been that of distributed stochastic optimization. Here, the task is to minimize some stochastic function $f(x) = \mathbb{E}_{s\sim \mathcal{D}}[f_s(x)]$ over a distribution $\mathcal{D}$ , where $f_{s}(\cdot)$ can be viewed as the loss function for sample $s\sim \mathcal{D}$ . We assume there are $m$ machines (workers) and an honest master, and $\alpha < 1 / 2$ fraction of the workers may be Byzantine. In each iteration $t$ , each worker has access to a version of the global iterate $x_{t}$ , which is maintained by the master. The worker can independently sample $s\sim \mathcal{D}$ , compute $\nabla f_{s}(x_{t})$ , and then synchronously send this stochastic gradient to the master. The master aggregates the workers' messages, and sends an updated iterate $x_{t + 1}$ to all the workers. Eventually, the master has to output an approximate minimizer of $f$ . Clearly, the above description only applies to honest workers; Byzantine workers may deviate arbitrarily and return adversarial "gradient" vectors to the master in every iteration.
+
+This distributed framework is quite general and well studied. One of the first references in this setting studied distributed PCA and regression (Feng et al., 2014). Other early approaches (Blanchard et al., 2017; Chen et al., 2017; Su & Vaidya, 2016a;b; Xie et al., 2018a) relied on defining generalizations of the geometric median. These approaches can withstand up to half of the nodes being malicious, but can have relatively high local computational cost $\Omega(m^2 d)$ (Blanchard et al., 2017; Chen et al., 2017), where $m$ is the number of nodes and $d$ is the problem dimension, and usually have suboptimal sample and iteration complexities.
+
+Follow-up work resolved this last issue when the objective $f(\cdot)$ is convex, leading to tight sample
+
+*The full and future editions of this paper can be found on https://arxiv.org/abs/2012.14368.
+$^{\dagger}$ Microsoft Research Redmond, zeyuan@csail.mit.edu
+$^{\ddagger}$ University of Waterloo, faezeeb75@gmail.com
+$^{\S}$ Microsoft Research Redmond, jerrl@microsoft.com
+$^{\P}$ IST Austria, dan.alistarh@ist.ac.at
+
+complexity bounds. Specifically, Yin et al. (2018) provided bounds for gradient descent-type algorithms, and showed that the bounds are tight when the dimension is constant. Alistarh et al. (2018) provided a stochastic gradient descent (SGD) type algorithm and showed that its sample and time complexities are asymptotically optimal even when the dimension is large.
+
+Non-convex Byzantine-resilient stochastic optimization. In this paper, we focus on the more challenging non-convex setting, and shoot for the strong goal of finding approximate local minima (a.k.a. second-order critical points). In a nutshell, our main result is the following. Fix $d$ to denote the dimension, and let the objective $f:\mathbb{R}^d\to \mathbb{R}$ be Lipschitz smooth and second-order smooth. We have $m$ worker machines, each having access to unbiased, bounded estimators of the gradient of $f$ . Given an initial point $x_0$ , the SafeguardSGD algorithm ensures that, even if at most $\alpha < 1 / 2$ fraction of the machines are Byzantine, after
+
+$$
+T = \widetilde {O} \left(\left(\alpha^ {2} + \frac {1}{m}\right) \frac {d (f (x _ {0}) - \min f (x))}{\varepsilon^ {4}}\right) \quad \text {parallel iterations},
+$$
+
+for at least a constant fraction of the indices $t \in [T]$ , the following hold:
+
+$$
+\| \nabla f (x _ {t}) \| \leq \varepsilon \quad \text {a n d} \quad \nabla^ {2} f (x _ {t}) \succeq - \sqrt {\varepsilon} \mathbf {I}.
+$$
+
+If the goal is simply $\| \nabla f(x_{t})\| \leq \varepsilon$ , then $T = \widetilde{O}\big(\left(\alpha^{2} + \frac{1}{m}\right)\frac{(f(x_{0}) - \min f(x))}{\varepsilon^{4}}\big)$ iterations suffice. Here, the $\widetilde{O}$ notation serves to hide logarithmic factors for readability. We spell out these factors in the detailed analysis.
+
+- When $\alpha < 1/\sqrt{m}$ , our sample complexity ( $= mT$ ) matches the best known result in the non-Byzantine case (Jin et al., 2019) without additional assumptions, and enjoys linear parallel speed-up: with $m$ workers of which $< \sqrt{m}$ are Byzantine, the parallel speedup is $\widetilde{\Omega}(m)$ .1
+- For $\alpha \in [1/\sqrt{m}, 1/2)$ , our parallel time complexity is $\widetilde{O}(\alpha^2)$ times that needed when no parallelism is used. This still gives parallel speedup. This $\alpha^2$ factor appears in convex Byzantine distributed optimization, where it is tight (Yin et al., 2018; Alistarh et al., 2018).
+- The Lipschitz and second-order smoothness assumptions are the minimal assumptions needed to derive convergence rates for finding second-order critical points (Jin et al., 2019).
+
+Comparison with prior bounds. The closest known bounds are by Yin et al. (2019), who derived three gradient descent-type of algorithms (based on median, mean, and iterative filtering) to find a weaker type of approximate local minima. Since it relies on full gradients, their algorithm is arguably less practical, and their time complexities are generally higher than ours (see Section 2.1).
+
+Other prior works consider a weaker goal, namely finding approximate stationary points with $\| \nabla f(x)\| \leq \varepsilon$ only: Bulusu et al. (2020) additionally assumed there is a guaranteed good (i.e. non-Byzantine) worker known to the master, Xie et al. (2018b) gave a practical algorithm for the case where the Byzantine attackers have no information about the loss function or its gradient, Yang et al. (2019); Xie et al. (2018a); Blanchard et al. (2017) derived eventual convergence without an explicit complexity bound, and the non-convex result obtained in Yin et al. (2018) is subsumed by Yin et al. (2019), discussed above.
+
+Our algorithm and techniques. The structure of our algorithm is deceptively simple. The master node keeps track of the sum of gradients produced by each worker across time. It labels (allegedly) good workers as those whose sum of gradients "concentrate" well with respect to a surrogate of the median vector, and labels bad workers otherwise. Once a worker is labelled bad, it is removed from consideration forever. The master then performs the vanilla SGD, by moving in the negative direction of the average gradients produced by those workers currently labelled as good.
+
+We call our algorithm SafeguardSGD, since it behaves like having a safe guard to filter away bad workers. Its processing overhead at the master is $O(md)$ , negligible compared to standard SGD.
+
+As the astute reader may have guessed, the key non-trivial technical ingredient is to identify the right quantity to check for concentration, and make it compatible with the task of non-convex optimization. In particular, we manage to construct such quantities so that (1) good non-Byzantine workers never get mislabelled as bad ones; (2) Byzantine workers may be labelled as good ones (which is inevitable) but when they do, the convergence rates are not impacted significantly; and (3) the notion does not require additional assumptions or running time overhead.
+
+The idea of using concentration (for each worker across time) to filter out Byzantine machines
+
+traces back to the convex setting (Alistarh et al., 2018). However, the quantities used in (Alistarh et al., 2018) to check for concentration are necessarily different from this paper, and our analysis is completely new, as deriving non-convex rates is known to be much more delicate and challenging. Recently, Bulusu et al. (2020) used similar concentration filters to Alistarh et al. (2018) in the nonconvex setting, but under stronger assumptions, and for the simpler task of finding stationary points.
+
+Many other algorithms do not rely on concentration filters. In each iteration, they ask each worker to compute a batch of stochastic gradients, and then use coordinate-wise median or mean over the batch average (e.g. Yin et al. (2018; 2019); Yang et al. (2019)) or iterative filtering (e.g. Su & Xu (2018); Yin et al. (2019)) by the master to derive a "robust mean." These works fundamentally rely on each iteration to calculate an almost precise full gradient, so that they can apply a surrogate of full gradient descent. Such algorithms can introduce higher sample and time complexities (see Section 2), are less practical than stochastic gradient schemes, require additional restrictions on the resilience factor $\alpha$ , e.g. $\alpha < 1/4$ (Su & Xu, 2018), and, critically, have been shown to be vulnerable to recent attacks (Baruch et al., 2019; Xie et al., 2020).
+
+Attack resilience and experimental validation. There is a growing literature on customized attacks against Byzantine-resilient algorithms, showing that many defenses can be entirely circumvented in real-world scenarios (Baruch et al., 2019; Xie et al., 2020). Our algorithm is provably correct against these attacks, a fact we also validate experimentally. We implemented SafeguardSGD to examine its practical performance against a range of prior works (Xie et al., 2018b; Blanchard et al., 2017; Chen et al., 2017; Yin et al., 2018; 2019), and against recent attacks on the distributed task of training deep neural networks. Our experiments show that SafeguardSGD generally outperforms previous methods in convergence speed and final accuracy, sometimes by a wide accuracy margin. This is true not only against known Byzantine attacks, but also against attack variants we fine-crafted to specifically slow down our algorithm, and against transient node failures.
+
+# 2 STATEMENT OF OUR THEORETICAL RESULT
+
+We denote by $\| \cdot \|$ the Euclidean norm and $[n] \coloneqq \{1,2,\dots ,n\}$ . Given symmetric matrices $\mathbf{A}, \mathbf{B}$ , we let $\| \mathbf{A}\|_2$ denote the spectral norm of $\mathbf{A}$ . We use $\succeq$ to denote Loewner ordering, i.e. $\mathbf{A} \succeq \mathbf{B}$ if $\mathbf{A} - \mathbf{B}$ is positive semi-definite. We denote by $\lambda_{\min}(\mathbf{A})$ the minimum eigenvalue of matrix $\mathbf{A}$ .
+
+We consider arbitrary $d$ -dimensional non-convex functions $f \colon \mathbb{R}^d \to \mathbb{R}$ satisfying the following:
+
+- $f(x)$ is $L$ -Lipschitz smooth: meaning $\| \nabla f(x) - \nabla f(y) \| \leq L \| x - y \|$ for any $x, y \in \mathbb{R}^d$ ;
+- $f(x)$ is $L_{2}$ -second-order smooth: $\| \nabla^2 f(x) - \nabla^2 f(y) \|_2 \leq L_2 \cdot \| x - y \|$ for any $x, y \in \mathbb{R}^d$ ;
+
+For notational simplicity of the proofs, we assume $L = L_{2} = \mathcal{V} = 1$ . Note that we have also assumed the domain of $f$ is the entire space $\mathbb{R}^d$ . If instead there is a compact domain $\mathcal{X} \subset \mathbb{R}^d$ , then one can use projected SGD and re-derive similar results of this paper. We choose to present our result in the simplest setting to convey our main ideas.
+
+Byzantine non-convex stochastic distributed optimization. We let $m$ be the number of worker machines and assume at most an $\alpha$ fraction of them are Byzantine for $\alpha \in \left[0, \frac{1}{2}\right)$ . We denote by good $\subseteq [m]$ the set of good (i.e. non-Byzantine) machines, and the algorithm does not know good.
+
+Assumption 2.1. In each iteration $t$ , the algorithm (on the master) is allowed to specify a point $x_{t}$ and query $m$ machines. Each machine $i \in [m]$ gives back a vector $\nabla_{t,i} \in \mathbb{R}^d$ satisfying
+
+- If $i \in \mathrm{good}$ , the stochastic gradient $\nabla_{t,i}$ satisfies $\mathbb{E}[\nabla_{t,i}] = \nabla f(x_t)$ and $\| \nabla f(x_t) - \nabla_{t,i} \| \leq \mathcal{V}$ .3
+- If $i \in [m] \setminus \text{good}$ , then $\nabla_{t,i}$ can be arbitrary (w.l.o.g. we assume $\| \nabla f(x_t) - \nabla_{t,i} \| \leq \mathcal{V}$ ).4
+
+Remark 2.2. For each $t$ and $i \notin \mathrm{good}$ , the vector $\nabla_{t,i}$ can be adversarially chosen and may depend
+
+Algorithm 1 SafeguardSGD: perturbed SGD with double safe guard
+Input: point $x_0\in \mathbb{R}^d$ , rate $\eta >0$ , lengths $T\geq T_{1}\geq T_{0}\geq 1$ , thresholds $\mathfrak{T}_1 > \mathfrak{T}_0 > 0$ .
+1: $\mathrm{good}_0 \leftarrow [m]$ .
+2: for $t\gets 0$ to $T - 1$ do
+3:   $\mathrm{last}_1 \leftarrow \max\{t_1\in [t]\colon t_1 \text{ is a multiple of } T_1\}$ ;
+4:   $\mathrm{last}_0 \leftarrow \max\{t_0\in [t]\colon t_0 \text{ is a multiple of } T_0\}$ ;
+5:   for each $i \in \mathrm{good}_t$ do
+6:     receive $\nabla_{t,i}\in \mathbb{R}^d$ from machine $i$ ;
+7:     $A_{i}\gets \sum_{k = \mathrm{last}_1}^{t}\frac{\nabla_{k,i}}{|\mathrm{good}_{k}|}$ and $B_{i}\gets \sum_{k = \mathrm{last}_0}^{t}\frac{\nabla_{k,i}}{|\mathrm{good}_{k}|}$ ;
+8:   $A_{\mathrm{med}}\gets A_{i}$ where $i \in \mathrm{good}_t$ is any machine s.t. $|\{j \in \mathrm{good}_t\colon \|A_j - A_i\| \leq \mathfrak{T}_1\}| > m/2$ ;
+9:   $B_{\mathrm{med}}\gets B_{i}$ where $i \in \mathrm{good}_t$ is any machine s.t. $|\{j \in \mathrm{good}_t\colon \|B_j - B_i\| \leq \mathfrak{T}_0\}| > m/2$ ;
+10:  $\mathrm{good}_{t+1} \leftarrow \{i\in \mathrm{good}_{t}\colon \| A_{i} - A_{\mathrm{med}}\| \leq 2\mathfrak{T}_{1}\wedge \| B_{i} - B_{\mathrm{med}}\| \leq 2\mathfrak{T}_{0}\}$ .
+11:  $x_{t + 1} = x_t - \eta \left(\xi_t + \frac{1}{|\mathrm{good}_t|}\sum_{i\in \mathrm{good}_t}\nabla_{t,i}\right)$ ;  Gaussian noise $\xi_t\sim \mathcal{N}(0,\nu^2\mathbf{I})$
+
+on $\{\nabla_{t',i}\}_{t' \leq t, i \in [m]}$ . In particular, the Byzantine machines can even collude during an iteration.
+
+# 2.1 OUR ALGORITHM AND THEOREM
+
Our algorithm is based on arguably the simplest possible method for achieving this goal: (perturbed) stochastic gradient descent (SGD) (Ge et al., 2015). Our techniques apply more broadly to more complicated methods (e.g. at least to Allen-Zhu (2018a;b)), but we choose to analyze the simplest variant of SGD, since it is the most widely applied method in modern non-convex machine learning.
+
+As illustrated in Algorithm 1, in each iteration $t = 0, 1, \dots, T - 1$ , we maintain a set of (allegedly) good machines $\mathrm{good}_t \subseteq [m]$ . We begin with $\mathrm{good}_0 = [m]$ and start to detect malicious machines and remove them from the set. We choose a learning rate $\eta > 0$ , and perform the SGD update
+
+$$
x_{t+1} = x_t - \eta \left(\xi_t + \frac{1}{|\mathrm{good}_t|} \sum_{i \in \mathrm{good}_t} \nabla_{t,i}\right)
+$$
+
where $\xi_t \sim \mathcal{N}(0, \nu^2\mathbf{I})$ is a random Gaussian perturbation added for theoretical purposes.
+
For each machine $i \in [m]$ , we keep track of the history of its stochastic gradients over two windows. Namely, $A_i \gets \sum_{k=\mathrm{last}_1}^{t} \frac{\nabla_{k,i}}{|\mathrm{good}_k|}$ and $B_i \gets \sum_{k=\mathrm{last}_0}^{t} \frac{\nabla_{k,i}}{|\mathrm{good}_k|}$ , for window sizes $T_0 \leq T_1 \leq T$ . We compare among the remaining machines in $\mathrm{good}_t$ , and kick out those whose $A_i$ or $B_i$ deviates "more than usual", to construct $\mathrm{good}_{t+1}$ .
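The master-side filtering described above can be sketched in a few lines of numpy (an illustrative sketch, not the authors' implementation; all names are ours):

```python
import numpy as np

def safeguard_step(grads, A, B, good, m, thr1, thr0):
    """One master-side filtering step of SafeguardSGD (illustrative sketch).

    grads: {machine id -> gradient received this iteration}
    A, B:  windowed gradient sums (long window T1, short window T0)
    good:  ids currently believed honest; m: total number of machines
    """
    for i in good:  # accumulate both windowed statistics
        A[i] = A[i] + grads[i] / len(good)
        B[i] = B[i] + grads[i] / len(good)

    def majority_point(stats, thr):
        # any machine whose statistic is thr-close to a strict majority
        for i in good:
            if sum(np.linalg.norm(stats[j] - stats[i]) <= thr for j in good) > m / 2:
                return stats[i]
        raise RuntimeError("no majority point found")

    A_med = majority_point(A, thr1)
    B_med = majority_point(B, thr0)
    # keep only the machines within 2*threshold of both "medians"
    return {i for i in good
            if np.linalg.norm(A[i] - A_med) <= 2 * thr1
            and np.linalg.norm(B[i] - B_med) <= 2 * thr0}
```

A machine that drifts away from the majority in either window is dropped permanently, which is what gives the algorithm its memory.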
+
Our theory ensures that, when the "window sizes" and the thresholds for "more than usual" are chosen properly, $\mathrm{good}_t$ shall always include good, and the algorithm shall proceed to find approximate local minima. Formally, we have (letting the $\widetilde{O}$ notation hide polylogarithmic factors)
+
+Theorem 2.3. Let $C_3 = \alpha^2 +\frac{1}{m}$ . Suppose we choose $\nu^{2} = \widetilde{\Theta} (C_{3})$ , $\eta = \widetilde{\Theta} (\frac{\varepsilon^2}{dC_3})$ , $T_{0} = \widetilde{\Theta} (\frac{1}{\eta})$ , $T_{1} = \widetilde{\Theta} (\frac{1}{\eta\sqrt{\varepsilon}})$ , $\mathfrak{T}_0 = \widetilde{\Theta} (\sqrt{T_0})$ , and $\mathfrak{T}_1 = \widetilde{\Theta} (\sqrt{T_1})$ , then after
+
+$$
T = \widetilde{O}\left(\frac{(f(x_0) - \min f(x))\, d}{\varepsilon^4} \left(\alpha^2 + \frac{1}{m}\right)\right)
+$$
+
iterations, with high probability, at least a constant fraction of the indices $t \in [T]$ satisfy
+
+$$
\| \nabla f(x_t) \| \leq \varepsilon \quad \text{and} \quad \nabla^2 f(x_t) \succeq -\sqrt{\varepsilon}\, \mathbf{I}.
+$$
+
+Remark 2.4. If one only wishes to achieve a significantly simpler goal — finding first-order critical points $\| \nabla f(x_{t})\| \leq \varepsilon$ — the analysis becomes much easier (see Section 3.1). In particular, having one safe guard without perturbation (i.e. $\nu = 0$ ) suffices, and the iteration complexity reduces to $T = \widetilde{O}\left(\frac{f(x_0) - \min f(x)}{\varepsilon^4} (\alpha^2 +\frac{1}{m})\right)$ . Bulusu et al. (2020) achieves this easier goal but requires an additional assumption: there is one guaranteed good worker known by the master.
+
Our contribution. We reiterate our theoretical contributions from three perspectives. 1) When $\alpha < 1 / \sqrt{m}$ , our algorithm requires $mT = \widetilde{O}\big(\frac{(f(x_0) - \min f(x))d}{\varepsilon^4}\big)$ stochastic gradient computations. This matches the best known result (Jin et al., 2019) under our minimal assumptions on the non-convex objective. (There exist other works in the stochastic setting that break the $\varepsilon^{-4}$ barrier and get rid of the dimension dependence $d$ under stronger assumptions.) 2) When $\alpha < 1/\sqrt{m}$ , our algorithm enjoys linear parallel speed-up: the parallel time complexity reduces by a factor of $\Theta(m)$ . When $\alpha \in [1/\sqrt{m}, 1/2)$ , our parallel time complexity is $\widetilde{O}(\alpha^2)$ times that needed when no parallelism is used, still giving a noticeable speedup. The $\alpha^2$ factor also appeared in convex Byzantine distributed optimization (and is known to be tight there) (Yin et al., 2018; Alistarh et al., 2018).
+
Comparison to (Yin et al., 2019). Yin et al. (2019) derived three gradient descent-type algorithms to find points with a weaker (and less standard) guarantee: $\| \nabla f(x)\| \leq \varepsilon$ and $\nabla^2 f(x)\succeq -(\varepsilon^2 d)^{1 / 5}\mathbf{I}$ . Despite practical differences (namely, gradient descent may be less favorable compared to stochastic gradient descent, especially in deep learning applications), the parallel time complexities derived from their result are also generally larger than ours.
+
Their paper focuses on bounding the number of sampled stochastic functions, as opposed to the number of stochastic gradient evaluations as we do. When translated to our language, each of the workers in their setting needs to evaluate $T$ stochastic gradients, where (1) $T = \widetilde{O}\left(\frac{\alpha^2d}{\varepsilon^4} +\frac{d^2}{\varepsilon^4m} +\frac{\sqrt{d}}{\varepsilon^3}\right)$ if using coordinate-wise median, (2) $T = \widetilde{O}\left(\frac{\alpha^2d^2}{\varepsilon^4} +\frac{d^2}{\varepsilon^4m}\right)$ if using trimmed mean, and (3) $T = \widetilde{O}\left(\frac{\alpha}{\varepsilon^4} +\frac{d}{\varepsilon^4m}\right)$ if using iterative filtering. The complexities (1) and (2) are larger than ours (also with a weaker guarantee); the complexity (3) seems incomparable to ours, but when translated to the more standard $(\varepsilon ,\sqrt{\varepsilon})$ guarantee, it becomes $T = \widetilde{O}\left(\frac{\alpha d^2}{\varepsilon^5} +\frac{d^3}{\varepsilon^5m}\right)$ and so is also larger than ours. It is worth noting that (3) requires $\alpha < 1 / 4$ , so it cannot withstand half of the machines being Byzantine.
+
+Resilience against practical attacks. Our algorithm's filtering is based upon tracking $B_{i}$ (resp. $A_{i}$ ), the stochastic gradients of each machine $i$ averaged over a window of $T_{0}$ (resp. $T_{1}$ ) iterations. This is a departure from previous defenses, most of which are history-less, and enables us to be provably Byzantine-resilient against state-of-the-art attacks (Baruch et al., 2019; Xie et al., 2020).
+
In Baruch et al. (2019), Byzantine workers collude to shift the gradient mean by a factor $\beta$ times the standard deviation of the (true stochastic) gradient, while staying within the population variance. They noticed that $\beta$ can be quite large, especially in neural network training. Their attack circumvents existing defenses because those defense algorithms are "historyless": the attack is statistically indistinguishable from an honest execution in any single iteration. However, our algorithm can provably defend against this attack since it has memory: Byzantine workers following their strategy will progressively diverge from the (honest) "median" $B_{\mathrm{med}}$ (by an amount growing as $\Omega(T)$ over $T$ iterations, as opposed to $\sqrt{T}$ ), and be marked as malicious by our algorithm. (See Figure 2(a).) In Xie et al. (2020), Byzantine workers deviate in the negative direction of the gradient. However, to avoid being caught by our algorithm, the maximum "magnitude" of this attack has to stay within our thresholds. We implemented both attacks and showed our algorithm's robustness experimentally.
+
+Finally, we note that prior "historyless" schemes, such as Krum or median-based schemes, could be thought of as providing stronger guarantees, as they in theory allow Byzantine nodes to change IDs during the computation: such schemes only require an upper bound on the number of Byzantine agents in each round. However, the attack of Baruch et al. (2019) essentially shows that all such schemes are vulnerable to variance attacks, and that such attacks are eminently plausible in practice. Thus, this suggests that the use of historical information, which requires that Byzantine nodes cannot change their IDs during the execution, may be necessary for Byzantine resilience.
+
Tolerating transient failures and node ID relabeling. Our algorithm can also withstand transient node failures and some degree of node ID relabeling, by resetting the set of good nodes $\mathrm{good}_t$ to include all nodes every $T_1$ steps. The algorithm then proceeds as usual. The key observation behind this relaxation is that our analysis only requires the attack conditions to hold inside the current window. (Please see Theorem B.1 for details.) We validate this experimentally in Section 5.
+
+# 3 WARMUP: SINGLE SAFE GUARD
+
+As a warmup, let us first analyze the behavior of perturbed SGD with a single safe guard. Consider Algorithm 2, where we start with a point $w_0$ , a set $\mathrm{good}_0 \supseteq$ good, and perform $T$ steps of perturbed SGD. (We use the $w_t$ sequence instead of the $x_t$ sequence to emphasize that we are in Algorithm 2.)
+
+Algorithm 2 Perturbed SGD with single safe guard (for analysis purpose only)
+Input: point $w_0 \in \mathbb{R}^d$ , set $\mathrm{good}_0 \supseteq$ good, rate $\eta > 0$ , length $T \geq 1$ , threshold $\mathfrak{T} > 0$ ;
+1: for $t \gets 0$ to $T - 1$ do
+2: for each $i \in \mathrm{good}_t$ do
+3: receive $\nabla_{t,i} \in \mathbb{R}^d$ from machine $i$ ;
+4: $B_i \gets \sum_{k=0}^{t} \frac{\nabla_{k,i}}{|\mathrm{good}_k|}$ ;
+5: $B_{\mathrm{med}} \gets B_i$ where $i \in \mathrm{good}_t$ is any machine s.t. $\left|\{j \in \mathrm{good}_t : \|B_j - B_i\| \leq \mathfrak{T}\}\right| > m/2$ .
+6: $\mathrm{good}_{t+1} \gets \left\{i \in \mathrm{good}_t : \|B_i - B_{\mathrm{med}}\| \leq 2\mathfrak{T}\right\}$ ;
+7: $w_{t+1} = w_t - \eta\left(\xi_t + \frac{1}{|\mathrm{good}_t|} \sum_{i \in \mathrm{good}_t} \nabla_{t,i}\right)$ ;
+ $\diamond$ Gaussian noise $\xi_t \sim \mathcal{N}(0, \nu^2\mathbf{I})$
+
Definition 3.1. We make the following definition to simplify notation: let $\Xi_t \coloneqq \sigma_t + \Delta_t$ where
+
+$\sigma_t \coloneqq \frac{1}{|\mathbf{good}_t|} \sum_{i \in \mathbf{good}} (\nabla_{t,i} - \nabla f(w_t))$
+$\Delta_t \coloneqq \frac{1}{|\mathbf{good}_t|} \sum_{i \in \mathbf{good}_t \setminus \mathbf{good}} (\nabla_{t,i} - \nabla f(w_t))$
+
+Therefore, we can re-write the SGD update as $w_{t + 1} = w_t - \eta (\nabla f(w_t) + \xi_t + \Xi_t)$ .
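Indeed, using $\mathrm{good} \subseteq \mathrm{good}_t$ (which the safeguard maintains), splitting the averaged gradient over honest and Byzantine machines recovers this rewrite:

```latex
\frac{1}{|\mathrm{good}_t|} \sum_{i \in \mathrm{good}_t} \nabla_{t,i}
  = \nabla f(w_t)
  + \underbrace{\frac{1}{|\mathrm{good}_t|} \sum_{i \in \mathrm{good}} \bigl(\nabla_{t,i} - \nabla f(w_t)\bigr)}_{\sigma_t}
  + \underbrace{\frac{1}{|\mathrm{good}_t|} \sum_{i \in \mathrm{good}_t \setminus \mathrm{good}} \bigl(\nabla_{t,i} - \nabla f(w_t)\bigr)}_{\Delta_t}.
```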
+
+The following lemma is fairly immediate to prove:
+
+Lemma 3.2 (single safe guard). In Algorithm 2, suppose we choose $\mathfrak{T} = 8\sqrt{T\log(16mT / p)}$ . Then, with probability at least $1 - p / 4$ , for every $t = 0,\dots ,T - 1$
+
- $\mathrm{good}_t \supseteq \mathrm{good}$ ;
- $\| \sigma_t\| ^2\leq O(\frac{\log(T / p)}{m})$ and $\| \sigma_0 + \dots +\sigma_{t - 1}\| ^2\leq O(\frac{T\log(T / p)}{m})$ ;
- $\| \Delta_t\|^2 \leq \alpha^2$ and $\| \Delta_0 + \dots + \Delta_{t - 1}\|^2 \leq O(\alpha^2 T\log (mT / p))$ ;
- $\left|\langle \nabla f(w_t), \xi_t\rangle\right| \leq \| \nabla f(w_t)\| \cdot O\left(\nu \sqrt{\log(T/p)}\right)$ ;
- $\| \xi_t\|^2 \leq O(\nu^2 d\log (T / p))$ and $\| \xi_0 + \dots + \xi_{t - 1}\|^2 \leq O(\nu^2 dT\log (T / p))$ .
+
+We call this probabilistic event $\text{Event}_T^{\text{single}}(w_0)$ and $\Pr[\text{Event}_T^{\text{single}}(w_0)] \geq 1 - p/4$ .
+
+(The third property above is ensured by our choice of $\mathfrak{T}$ and the use of safe guard, and the rest of the properties follow from simple martingale concentration arguments. Details are in Appendix A.1.)
+
+# 3.1 CORE TECHNICAL LEMMA 1: OBJECTIVE DECREASE
+
+Our first main technical lemma is the following:
+
Lemma 3.3. Suppose we choose $\mathfrak{T}$ as in Lemma 3.2. Denote by $C_1 = \log (T / p)$ and $C_2 = \alpha^2\log \frac{mT}{p} +\frac{\log(T / p)}{m}$ . Suppose $\eta \leq 0.01\min \{1,\frac{1}{C_2}\}$ and $T = \frac{1}{100\eta(1 + \sqrt{C_2})}$ , and we start from $w_{0}$ and apply Algorithm 2. Under the event $\mathsf{Event}_T^{\mathsf{single}}(w_0)$ , it satisfies
+
+$$
f(w_0) - f(w_T) \geq 0.7\eta \sum_{t=0}^{T-1} \left(\| \nabla f(w_t) \|^2 - \eta \cdot O\big(C_2 + (C_2)^{1.5}\big) - O\big(C_1 \nu^2 \eta (d + \sqrt{C_2})\big)\right)
+$$
+
+Lemma 3.3 says after $T \approx \frac{1}{\eta}$ steps of perturbed SGD, the objective value decreases by, up to some small additive error and up to logarithmic factors, $f(w_0) - f(w_T) \geq 0.7\eta \sum_{t=0}^{T-1} (\|\nabla f(w_t)\|^2 - \eta C_2)$ . This immediately implies, if we choose $\eta \approx \frac{\varepsilon^2}{C_2}$ , then by repeating this analysis for $O\left(\frac{C_2}{\varepsilon^4}\right) = O\left(\frac{\alpha^2 + 1/m}{\varepsilon^4}\right)$ iterations, we can find approximate critical point $x$ with $\| \nabla f(x) \| \leq \varepsilon$ .
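To spell out the last step (a routine calculation, ignoring logarithmic factors so that $C_2 \approx \alpha^2 + 1/m$ ):

```latex
\eta = \frac{\varepsilon^2}{2 C_2}
\;\Longrightarrow\;
\|\nabla f(w_t)\|^2 - \eta C_2 \geq \varepsilon^2 - \frac{\varepsilon^2}{2} = \frac{\varepsilon^2}{2}
\quad \text{whenever } \|\nabla f(w_t)\| > \varepsilon,

f(x_0) - \min f \;\geq\; 0.7\,\eta \cdot \frac{\varepsilon^2}{2} \cdot \#\bigl\{t : \|\nabla f(w_t)\| > \varepsilon\bigr\}
\;\Longrightarrow\;
\#\bigl\{t : \|\nabla f(w_t)\| > \varepsilon\bigr\}
\;\leq\; O\!\Bigl(\frac{(f(x_0) - \min f)\, C_2}{\varepsilon^4}\Bigr)
 = O\!\Bigl(\frac{(f(x_0) - \min f)(\alpha^2 + 1/m)}{\varepsilon^4}\Bigr).
```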
+
+Proof sketch of Lemma 3.3. The full proof is in Appendix A.2 but we illustrate the main idea and difficulties below. After simple manipulations, it is not hard to derive that
+
+$$
f(w_0) - f(w_T) \gtrsim 0.9\eta \sum_{t=0}^{T-1} \left(\| \nabla f(w_t) \|^2 - \eta\right) + \underbrace{\eta \sum_{t=0}^{T-1} \langle \nabla f(w_t), \Xi_t \rangle}_{\text{remainder terms}}
+$$
+
where recall that $\Xi_t = \sigma_t + \Delta_t$ . When there are no Byzantine machines, we have $\mathbb{E}[\Xi_t] = \mathbb{E}[\sigma_t] = 0$ , so the remainder terms must be small by martingale concentration. The main technical difficulty is therefore to deal with the Byzantine machines, who can adversarially design their $\nabla_{t,i}$ (even by collusion) so as to negatively correlate with $\nabla f(w_t)$ and "maximally destroy" the above inequality.
+
Our main idea is to use second-order smoothness to write $\nabla f(w_{t})\approx \nabla f(w_{0}) + \nabla^{2}f(w_{0})\cdot (w_{t} - w_{0})$ . To illustrate our idea, let us ignore the constant vector and assume that the Hessian is the identity: that is, imagine as if $\nabla f(w_t)\approx w_t - w_0$ . Using $w_{t} - w_{0} = -\sum_{k < t}(\Xi_{k} + \xi_{k})$ (absorbing the factor $\eta$ ), we immediately have
+
+$$
- \left\langle \nabla f(w_t), \Xi_t \right\rangle \approx -\left\langle w_t - w_0, \Xi_t \right\rangle = \sum_{k < t} \left\langle \Xi_k, \Xi_t \right\rangle + \sum_{k < t} \left\langle \xi_k, \Xi_t \right\rangle \tag{3.1}
+$$
+
+For the first partial sum $\langle \sum_{k < t} \Xi_k, \Xi_t \rangle$ in (3.1), it is easy to bound its magnitude using our safeguard. Indeed, we have $\left| \sum_{t} \langle \sum_{k < t} \Xi_k, \Xi_t \rangle \right| \leq \| \sum_{t} \Xi_t \|^2 + \sum_{t} \| \Xi_t \|^2$ so we can apply Lemma 3.2. For the second partial sum $\sum_{t} \sum_{k < t} \langle \xi_k, \Xi_t \rangle$ , we can apply the concentration Proposition 3.4 below.
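The bound on the first partial sum uses the following elementary expansion (standard algebra, spelled out here for clarity):

```latex
\Big\| \sum_{t} \Xi_t \Big\|^2
  = \sum_{t} \|\Xi_t\|^2 + 2 \sum_{t} \Big\langle \sum_{k<t} \Xi_k,\, \Xi_t \Big\rangle
\quad\Longrightarrow\quad
\Big| \sum_{t} \Big\langle \sum_{k<t} \Xi_k,\, \Xi_t \Big\rangle \Big|
  \leq \frac{1}{2} \Big\| \sum_{t} \Xi_t \Big\|^2 + \frac{1}{2} \sum_{t} \|\Xi_t\|^2 ,
```

and Lemma 3.2 controls both terms on the right-hand side.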
+
+Proposition 3.4. Fix the dimension parameter $d \geq 1$ . Suppose $\xi_0, \ldots, \xi_{T-1} \in \mathbb{R}^d$ are i.i.d. drawn from $\mathcal{N}(0,\mathbf{I})$ , and that $\Delta_1, \ldots, \Delta_{T-1}$ are arbitrary vectors in $\mathbb{R}^d$ . Here, each vector $\Delta_t$ with $t = 1, \ldots, T-1$ can depend on $\xi_0, \ldots, \xi_{t-1}$ but not on $\xi_t, \ldots, \xi_{T-1}$ . Suppose that these vectors satisfy $\| \Delta_1 + \dots + \Delta_t \|^2 \leq \mathfrak{T}$ for every $t = 1, \ldots, T-1$ . Then, with probability at least $1 - p$ ,
+
+$$
\left| \sum_{t=1}^{T-1} \left\langle \xi_0 + \dots + \xi_{t-1}, \Delta_t \right\rangle \right| \leq O\left(\sqrt{dT \mathfrak{T} \log(T/p)}\right).
+$$
+
+# 3.2 CORE TECHNICAL LEMMA 2: RANDOMNESS COUPLING
+
Our next technical lemma shows that, if we run Algorithm 2 from a point $w_0$ such that the Hessian $\nabla^2 f(w_0)$ has an eigenvalue less than $-\delta$ (think of $w_0$ as a saddle point), then with good probability, after sufficiently many iterations, the sequence $w_1, w_2, \ldots, w_T$ escapes from $w_0$ to distance at least $R$ , for some parameter $R \approx \delta$ . To prove this, motivated by Jin et al. (2017), we study two executions of Algorithm 2 whose randomness is coupled. We then argue that at least one of them has to escape from $w_0$ . For any vector $v$ , let $[v]_i$ denote the $i$ -th coordinate of $v$ .
+
+Lemma 3.5. Suppose we choose $\mathfrak{T}$ as in Lemma 3.2 and $C_1, C_2$ as in Lemma 3.3. Suppose $w_0 \in \mathbb{R}^d$ satisfies $\lambda_{\min}(\nabla^2 f(w_0)) = -\delta$ for some $\delta \geq 0$ . Without loss of generality let $\mathbf{e}_1$ be the eigenvector of $\nabla^2 f(w_0)$ with smallest eigenvalue. Consider now two executions of Algorithm 2, both starting from $w_0^{\mathrm{a}} = w_0^{\mathrm{b}} = w_0$ , and suppose their randomness $\{\xi_t^{\mathrm{a}}\}_{t}$ and $\{\xi_t^{\mathrm{b}}\}_{t}$ are coupled so that $[\xi_t^{\mathrm{a}}]_1 = -[\xi_t^{\mathrm{b}}]_1$ but $[\xi_t^{\mathrm{a}}]_i = [\xi_t^{\mathrm{b}}]_i$ for $i > 1$ . In words, the randomness is the same orthogonal to $\mathbf{e}_1$ , but along $\mathbf{e}_1$ , the two have opposite signs. Now, suppose we perform $T = \Theta\left(\frac{1}{\eta\delta}\log \frac{R^2\delta}{\eta\nu^2}\right)$ steps of perturbed SGD from $w_0^{\mathrm{a}}, w_0^{\mathrm{b}}$ respectively using Algorithm 2. Suppose
+
+$$
R \leq O\left(\frac{\delta}{\sqrt{C_1} \log(R^2\delta / \eta\nu^2)}\right) \quad \text{and} \quad \nu^2 \geq \Omega\left(C_2 \log \frac{R^2\delta}{\eta\nu}\right).
+$$
+
+Then, under events $\mathsf{Event}_T^{\mathsf{single}}(w_0^{\mathsf{a}})$ and $\mathsf{Event}_T^{\mathsf{single}}(w_0^{\mathsf{b}})$ , with probability at least 0.98, either $\| w_t^{\mathsf{a}} - w_0\| > R$ or $\| w_t^{\mathsf{b}} - w_0\| > R$ for some $t \in [T]$ .
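The coupling itself is easy to sketch in numpy (an illustrative sketch; coordinate 1 stands in for the $\mathbf{e}_1$ direction):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, nu = 8, 1000, 0.5  # dimension, steps, perturbation scale

# Noise for run "a"; run "b" reuses it with the first coordinate
# sign-flipped. Since N(0, nu^2 I) is coordinate-wise symmetric, both
# runs are valid executions of Algorithm 2, yet their e1-components
# are exact mirror images.
xi_a = nu * rng.standard_normal((T, d))
xi_b = xi_a.copy()
xi_b[:, 0] = -xi_b[:, 0]

assert np.allclose(xi_a[:, 1:], xi_b[:, 1:])  # identical orthogonal to e1
assert np.allclose(xi_a[:, 0], -xi_b[:, 0])   # opposite signs along e1
```

The difference of the two coupled trajectories then evolves only along $\mathbf{e}_1$ , where it behaves like a (noisy) power method driven by the $-\delta$ eigenvalue, which is why at least one trajectory must move far from $w_0$ .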
+
+Proof details in Appendix A.4. The main proof difficulty is to analyze a noisy version of the power method, where the noise comes from (1) Gaussian perturbation (which is the good noise), (2) stochastic gradients (which has zero mean), and (3) Byzantine workers (which can be adversarial).
+
+# 4 FROM WARMUP TO FINAL THEOREM WITH DOUBLE SAFE GUARDS
+
At a high level, Lemma 3.3 ensures that if we keep encountering points with large gradient $\| \nabla f(w_{t})\|$ , then the objective must sufficiently decrease; in contrast, Lemma 3.5 says that if we keep encountering points with negative Hessian directions (i.e., $\lambda_{\mathrm{min}}(\nabla^2 f(w_t)) < -\delta$ ), then the points must move a lot (i.e., by more than $R$ in $T$ iterations), which can also lead to sufficient objective decrease (see Lemma B.4). Combined, the two lemmas tell us that we cannot encounter points where $\| \nabla f(x)\|$ is large, or where $\lambda_{\mathrm{min}}(\nabla^{2}f(x))$ is very negative, for too many iterations. Therefore, the algorithm finds approximate local minima.
+
The reason we need two safe guards is that the numbers of rounds $T$ required by Lemma 3.3 and Lemma 3.5 differ by a factor. We need two safe guards with different window sizes to ensure the two lemmas hold simultaneously. We encourage the reader to examine the full analysis in Appendix B.
+
+# 5 EXPERIMENTAL VALIDATION
+
+We evaluate the convergence of SafeguardSGD to examine its practical performance against prior works. We perform the non-convex task of training a residual network ResNet-20 (He et al., 2016) on the CIFAR-10/100 datasets (Krizhevsky et al., 2014). More details are given in Appendix C.
+
+
Figure 1: Convergence comparison (CIFAR-10 test accuracy) under different attacks: (a) variance attack; (b) sign-flipping attack; (c) label-flipping attack; (d) delayed-gradient attack; (e) safeguard(x0.6) attack; (f) safeguard(x0.7) attack. (In Appendix C.2, one can find additional CIFAR-100 experiments, more discussions, and bigger plots.)
+
We instantiate $m = 10$ workers and one master node executing data-parallel SGD for 140 passes (i.e. epochs) over the training dataset. The results for higher numbers of workers and epochs are similar, and are therefore omitted. We compare against Geometric Median (Chen et al., 2017), Coordinate-wise Median (Yin et al., 2018; 2019), Krum (Blanchard et al., 2017), and Zeno (Xie et al., 2018b). Overall, our experimental setup is very similar to Zeno (Xie et al., 2018b) but with additional attacks. We implemented the approach of Yang et al. (2019), but found it very sensitive to hyper-parameter values and were unable to make it converge across all attacks even after significant tuning of its $\gamma$ parameter. We also implemented the convex algorithm of Alistarh et al. (2018), and executed it in our non-convex setting. We found their algorithm can be easily attacked on our ResNet training tasks. There exists a simple attack, described in Appendix C.4, which causes their algorithm to either mislabel most good workers as Byzantine, or diverge, or converge to very poor solutions. This is not surprising, since their algorithm is designed for, and only guaranteed to work in, the convex setting. To make the comparison stronger, when implementing SafeguardSGD, we have chosen fixed window sizes $T_0 = 1$ epoch and $T_1 = 6$ epochs across all experiments, and adopted an automated process to select $\mathfrak{T}_0, \mathfrak{T}_1$ . Determining these thresholds requires being able to pre-run the task on an honest worker. We have also implemented a single safeguard variant of SafeguardSGD, with window size $T = 3$ epochs.
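The automated threshold selection is not specified in this section; one plausible sketch, assuming access to an honest pre-run, is to record the largest windowed deviation an honest worker exhibits and add a safety margin (the function name and the margin value below are hypothetical, not the paper's recipe):

```python
import numpy as np

def calibrate_threshold(honest_grads, window, margin=1.5):
    """Pick a safeguard threshold from an honest pre-run (hypothetical recipe).

    honest_grads: array of shape (T, d), gradients from one honest worker.
    Returns margin * max over windows of || windowed sum minus its mean ||.
    """
    T = honest_grads.shape[0]
    mean = honest_grads.mean(axis=0)  # stand-in for the honest "median" path
    worst = 0.0
    for start in range(0, T - window + 1, window):
        win = honest_grads[start:start + window]
        dev = np.linalg.norm((win - mean).sum(axis=0))
        worst = max(worst, dev)
    return margin * worst
```

Any honest worker then stays below the threshold on the pre-run trace by construction, while the margin leaves room for run-to-run variation.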
+
+Attacks. We set $\alpha = 0.4$ , which means that there are 4 Byzantine workers. (This exceeds the fault-tolerance of Krum, and so we also tested Krum with only 3 Byzantine workers.)
+
- LABEL-FLIPPING ATTACK: each Byzantine worker computes its gradient based on the cross-entropy loss with flipped labels: for CIFAR-10, label $\ell \in \{0,\dots ,9\}$ is flipped to $9 - \ell$ .
+- DELAYED-GRADIENT ATTACK: each Byzantine worker sends an old gradient to master. In our experiments, the delay is of $D = 1000$ iterations.
+- VARIANCE ATTACK (Baruch et al., 2019): Byzantine workers measure the mean and the standard-deviation of gradients at each round, and collude to move the mean by the largest value which still operates within population variance. (For our parameter settings, this is 0.3 times the standard deviation. We discuss results for additional parameter values in the Appendix.)
+- SIGN-FLIPPING ATTACK: each Byzantine worker sends the negative gradient to the master.
- SAFEGUARD ATTACK: each Byzantine worker sends a negative but re-scaled gradient to the master. We use re-scale factors 0.6 and 0.7 in our experiments. The factor 0.6 avoids triggering the safe-guard conditions at the master, while the factor 0.7 occasionally triggers them. This attack is an instantiation of the inner-product attack (Xie et al., 2020), customized specifically to maximally affect our SafeguardSGD algorithm.
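Several of these attacks are one-line gradient transformations; a sketch follows (illustrative only: the label-flipping attack alters training labels, $\ell \mapsto 9-\ell$ , rather than gradients, so it is omitted, and the variance-attack coefficient is simplified relative to Baruch et al. (2019)):

```python
import numpy as np

def sign_flip(g):
    return -g                       # sign-flipping attack

def rescaled_negative(g, factor=0.6):
    return -factor * g              # safeguard attack: negative, re-scaled

def delayed(history, D=1000):
    # delayed-gradient attack: send a gradient from D iterations ago
    return history[-D] if len(history) >= D else history[0]

def variance_attack(worker_grads, beta=0.3):
    # colluding workers shift the coordinate-wise mean by beta standard
    # deviations, staying within the honest population's variance
    mu = np.mean(worker_grads, axis=0)
    sigma = np.std(worker_grads, axis=0)
    return mu - beta * sigma
```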
+
Main experimental results. The ideal test accuracy is $91.7\%$ , which corresponds to applying SGD using only the stochastic gradients from the honest workers. Figure 1 compares the performances in test accuracy. Below we summarize our main findings for the experiments, and we defer detailed discussions (and additional experiments for CIFAR-100) to Appendix C.

Figure 2: (a) $\| B_{i} - B_{\mathrm{med}} \|$ between a good node (blue) and a bad node (red), which pretends to be honest and then starts to apply the variance attack. (b) Convergence of our safeguard algorithms under the variance attack, after periodically resetting the set of good nodes.
+
- SafeguardSGD generally outperforms all the previous methods in test accuracy. The gap between our algorithm and the best prior work can be as large as “90% vs. < 40%”.
+- The variance attack is indeed very strong, in that it severely affects the accuracy of all prior works (test accuracy $< 35\%$ ). This is because these defenses are "historyless." By contrast, our algorithm not only provably but also empirically defends against it.
+- Our safeguard attack (especially with re-scale factor 0.7) is as strong as the variance attack, and even stronger on the CIFAR-100 dataset; please see the results in Appendix C.2.5.
+- The label-flipping attack is rather weak: although some defenses, such as Zeno, did not determine which of the workers are malicious, they still converge well under this attack.
- The sign-flipping and delayed-gradient attacks are moderate: the best prior works achieve accuracy $60\% \sim 70\%$ . It is worth noting that the sign-flipping attack already nullifies the Zeno defence (test accuracy $20\%$ ). The issue seems to be that it is very hard for Zeno to determine, from relatively few samples, whether the gradient direction has been flipped.
- SafeguardSGD can easily catch all the bad workers under the sign-flipping and variance attacks, and thus gives ideal performance. It cannot catch any bad worker under the label-flipping and delayed-gradient attacks, but there is no performance loss anyway when such bad gradients are used.
- The safeguard attacks, designed to maximally impact the performance of our SafeguardSGD, can indeed affect our performance. Specifically, under re-scale factor 0.6, the test accuracy drops from $91.7\%$ to $89.3\%$ because SafeguardSGD cannot catch any bad worker; however, under re-scale factor 0.7, the test accuracy no longer drops because SafeguardSGD begins to catch some bad workers (it can catch between 0 and 4 bad workers, depending on the randomness).
+- In most cases, the single-safeguard algorithm is close to double-safeguard, except for the safeguard(x0.7) attack, in which using double-safeguard one can more easily catch bad workers. (This is more apparent in the CIFAR-100 experiment, see Appendix C.2.5.)
+
+We conclude that SafeguardSGD can be practical, and outperforms previous approaches.
+
+A deeper dive: how the algorithm works. Let us explain the inner workings of our algorithm in the context of a "delayed" attack, where the Byzantine nodes collude to execute an attack only after a specific, given point in the execution (in this case, the first half-epoch). Figure 2(a) presents the results from the perspective of the value of $\| B_i - B_{\mathrm{med}} \|$ registered at the master server, for two nodes, an honest one, and a Byzantine one. The value of $\| B_i - B_{\mathrm{med}} \|$ increases for all the nodes (at a rate of roughly $\sqrt{t}$ at step $t$ ); but, once the attack starts, the statistic for the Byzantine node grows linearly in $t$ , leading to fast detection.
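This $\sqrt{t}$ -versus-linear separation is easy to reproduce in a toy one-dimensional simulation (a hypothetical setup, not the paper's experiment; the honest "median" path is idealized as the true mean):

```python
import numpy as np

rng = np.random.default_rng(0)
T, shift = 4000, 0.3  # iterations; attacker's mean shift, in std units

honest = rng.standard_normal(T)            # zero-mean stochastic gradient noise
attacker = rng.standard_normal(T) + shift  # colluding mean shift within variance

# With the honest "median" idealized as the true mean (zero), the
# windowed statistic ||B_i - B_med|| is just |cumulative sum of noise|.
dev_honest = np.abs(np.cumsum(honest))
dev_attacker = np.abs(np.cumsum(attacker))

# The honest deviation concentrates around sqrt(t), while the attacker's
# grows like shift * t and eventually crosses any fixed threshold.
assert dev_honest[-1] < 10 * np.sqrt(T)    # O(sqrt(T)) for an honest node
assert dev_attacker[-1] > 5 * np.sqrt(T)   # Omega(T) for the attacker
```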
+
Transient attacks and node ID relabeling. Finally, in Figure 2(b) we analyze the behaviour of our algorithm when it periodically (every 3 epochs for single safeguard and 6 epochs for double safeguard) resets the set of good nodes to include all nodes, restarting the detection process from scratch. Our theoretical result still applies after this relaxation. The relaxation has two benefits. First, it tolerates workers with transient failures (e.g., a node fails for 10 epochs but resumes correct operation afterwards), and thus still benefits from the data stored on such workers. Second, it can defend against a certain degree of node ID relabeling: it supports the case where good and bad workers exchange their IDs every 6 epochs. In Figure 2(b), we see that even under the (very strong) variance attack, the relaxed safeguard maintains good performance.
+
+# ACKNOWLEDGMENTS
+
+F. E. and D. A. were supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML).
+
+# REFERENCES
+
+Dan Alistarh, Zeyuan Allen-Zhu, and Jerry Li. Byzantine stochastic gradient descent. In Advances in Neural Information Processing Systems, pp. 4613-4623, 2018.
+Zeyuan Allen-Zhu. Natasha 2: Faster Non-Convex Optimization Than SGD. In NeurIPS, 2018a. Full version available at http://arxiv.org/abs/1708.08694.
+Zeyuan Allen-Zhu. How To Make the Gradients Small Stochastically. In NeurIPS, 2018b. Full version available at http://arxiv.org/abs/1801.02982.
+Zeyuan Allen-Zhu and Yuanzhi Li. Feature purification: How adversarial training performs robust deep learning. arXiv preprint arXiv:2005.10190, 2020.
+Gilad Baruch, Moran Baruch, and Yoav Goldberg. A little is enough: Circumventing defenses for distributed learning. In Advances in Neural Information Processing Systems, pp. 8635-8645, 2019.
+Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, and Julien Stainer. Machine learning with adversaries: Byzantine tolerant gradient descent. In NIPS, pp. 118-128, 2017.
+Saikiran Bulusu, Prashant Khanduri, Pranay Sharma, and Pramod K Varshney. On distributed stochastic gradient descent for nonconvex functions in the presence of byzantines. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3137-3141. IEEE, 2020.
+Miguel Castro, Barbara Liskov, et al. Practical byzantine fault tolerance. In OSDI, 1999.
+Yudong Chen, Lili Su, and Jiaming Xu. Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 1(2):1-25, 2017.
+Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. In Advances in Neural Information Processing Systems, pp. 689-699, 2018.
+Jiashi Feng, Huan Xu, and Shie Mannor. Distributed robust learning. arXiv preprint arXiv:1409.5937, 2014.
+Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. In Proceedings of the 28th Annual Conference on Learning Theory, COLT 2015, 2015.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. In Advances in Neural Information Processing Systems, pp. 125-136, 2019.
+Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M Kakade, and Michael I Jordan. How to escape saddle points efficiently. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1724-1732. JMLR.org, 2017.
+Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M Kakade, and Michael I. Jordan. On nonconvex optimization for machine learning: Gradients, stochasticity, and saddle points. arXiv preprint arXiv:1902.04811, 2019.
+Jakub Konečný, H. Brendan McMahan, Felix X Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016.
+Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10/100 dataset. https://www.cs.toronto.edu/~kriz/cifar.html, 55, 2014.
+Leslie Lamport, Robert Shostak, and Marshall Pease. The byzantine generals problem. ACM Transactions on Programming Languages and Systems (TOPLAS), 4(3):382-401, 1982.
+Lihua Lei, Cheng Ju, Jianbo Chen, and Michael I Jordan. Nonconvex Finite-Sum Optimization Via SCSG Methods. In NIPS, 2017.
+Nancy A Lynch. Distributed algorithms. Elsevier, 1996.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR. arXiv preprint arXiv:1706.06083, 2018.
+Lam M Nguyen, Jie Liu, Katya Scheinberg, and Martin Takáč. Sarah: A novel method for machine learning problems using stochastic recursive gradient. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2613-2621. JMLR.org, 2017.
+Iosif Pinelis. Optimum bounds for the distributions of martingales in banach spaces. The Annals of Probability, pp. 1679-1706, 1994.
+Lili Su and Nitin H Vaidya. Fault-tolerant multi-agent optimization: optimal iterative distributed algorithms. In PODC, pp. 425-434. ACM, 2016a.
+Lili Su and Nitin H Vaidya. Defending non-bayesian learning against adversarial attacks. ISDC, 2016b.
+Lili Su and Jiaming Xu. Securing distributed machine learning in high dimensions. arXiv preprint arXiv:1804.10140, 2018.
+Nilesh Tripuraneni, Mitchell Stern, Chi Jin, Jeffrey Regier, and Michael I Jordan. Stochastic cubic regularization for fast nonconvex optimization. arXiv preprint arXiv:1711.02838, 2017.
+Cong Xie, Oluwasanmi Koyejo, and Indranil Gupta. Generalized Byzantine-tolerant SGD. arXiv preprint arXiv:1802.10116, 2018a.
+Cong Xie, Oluwasanmi Koyejo, and Indranil Gupta. Zeno: Byzantine-suspicious stochastic gradient descent. arXiv preprint arXiv:1805.10032, 2018b.
+Cong Xie, Oluwasanmi Koyejo, and Indranil Gupta. Fall of empires: Breaking byzantine-tolerant SGD by inner product manipulation. In Uncertainty in Artificial Intelligence, pp. 261-270. PMLR, 2020.
+Haibo Yang, Xin Zhang, Minghong Fang, and Jia Liu. Byzantine-resilient stochastic gradient descent for distributed learning: A lipschitz-inspired coordinate-wise median approach. arXiv preprint arXiv:1909.04532, 2019.
+Dong Yin, Yudong Chen, Kanna Ramchandran, and Peter Bartlett. Byzantine-robust distributed learning: Towards optimal statistical rates. arXiv preprint arXiv:1803.01498, 2018.
+Dong Yin, Yudong Chen, Kanna Ramchandran, and Peter Bartlett. Defending against saddle point attack in byzantine-robust distributed learning. In International Conference on Machine Learning, pp. 7074-7084, 2019.
+
+# APPENDIX
+
+# A MISSING PROOFS FOR SECTION 3
+
+# A.1 PROOF OF LEMMA 3.2
+
+Recall the following useful inequality.
+
+Lemma A.1 (Pinelis' 1994 inequality (Pinelis, 1994)). Let $X_{1},\ldots ,X_{T}\in \mathbb{R}^{d}$ be a random process satisfying $\mathbb{E}[X_t|X_1,\dots,X_{t - 1}] = 0$ and $\| X_{t}\| \leq M$. Then, $\operatorname *{Pr}\left[\| X_1 + \dots +X_T\| ^2 >2\log (2 / \delta)M^2 T\right]\leq \delta$.
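This concentration bound is easy to sanity-check by simulation. Below is a small Monte Carlo sketch (not from the paper; the parameters `d`, `T`, `M`, `delta` and the choice of process are our own illustrative assumptions):

```python
import numpy as np

# Monte Carlo sanity check of Pinelis' inequality (Lemma A.1).
# All parameters here are illustrative choices, not from the paper.
rng = np.random.default_rng(0)
d, T, M, delta = 5, 200, 1.0, 0.01
threshold = 2 * np.log(2 / delta) * M**2 * T

trials, failures = 2000, 0
for _ in range(trials):
    # An i.i.d. (hence martingale-difference) process with E[X_t] = 0 and
    # ||X_t|| <= M: points drawn uniformly from the ball of radius M.
    X = rng.normal(size=(T, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    X *= M * rng.uniform(size=(T, 1)) ** (1.0 / d)
    if np.sum(X.sum(axis=0) ** 2) > threshold:
        failures += 1

print(failures / trials <= delta)  # the bound holds with room to spare here
```

For this bounded i.i.d. process the empirical failure probability stays far below $\delta$, consistent with the lemma.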
+
+Lemma 3.2 is in fact a direct corollary of the following claim, whose proof is classical. Denote $C = \log(16mT / p)$ and define
+
+$$
+B_i^{(t)} := \frac{\nabla_{0,i}}{|\mathsf{good}_0|} + \dots + \frac{\nabla_{t-1,i}}{|\mathsf{good}_{t-1}|} \quad \text{and} \quad B_\star^{(t)} := \frac{\nabla f(w_0)}{|\mathsf{good}_0|} + \dots + \frac{\nabla f(w_{t-1})}{|\mathsf{good}_{t-1}|}.
+$$
+
+Recall at iteration $t - 1$ , Algorithm 2 computes $\{B_1^{(t)},\ldots ,B_m^{(t)}\}$ as well as some $B_{\mathrm{med}}^{(t)} = B_i^{(t)}$ where $i$ is any machine in $\mathrm{good}_{t - 1}$ such that at least half of $j\in [m]$ satisfies $\| B_j^{(t)} - B_i^{(t)}\| \leq 8\sqrt{tC} /m$ .
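In code, this median-selection rule might look like the following sketch (the function name `select_med` and the toy data are our own, not from the paper; the threshold $8\sqrt{tC}/m$ is passed in as a plain number):

```python
import numpy as np

def select_med(B, candidates, threshold):
    """Return (i, B[i]) for some candidate machine i such that at least
    half of all m machines j satisfy ||B[j] - B[i]|| <= threshold.
    B: (m, d) array, one row of accumulated normalized gradients per machine.
    candidates: indices to try (e.g. the machines in good_{t-1})."""
    m = B.shape[0]
    for i in candidates:
        dists = np.linalg.norm(B - B[i], axis=1)
        if 2 * np.sum(dists <= threshold) >= m:  # "at least half of j in [m]"
            return i, B[i]
    raise RuntimeError("no valid median found")

# Toy usage: five honest machines clustered at the origin, one outlier.
B = np.zeros((6, 3))
B[5] = 100.0
i, B_med = select_med(B, candidates=range(5), threshold=1.0)
print(i)  # the scan returns the first clustered machine, i.e. 0
```

Any machine in the honest cluster qualifies; the outlier can never be selected because fewer than half the machines lie within its ball.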
+
+Claim A.2. Let $C = \log(16mT / p)$ . Then, with probability at least $1 - p / 4$ , we have
+
+(a) for all $i\in \mathbf{good}$ and $t\in [T]$, $\| B_i^{(t)} - B_\star^{(t)}\| \leq 4\sqrt{tC} /m$.
+(b) for all $t \in [T]$, each $i \in \mathbf{good}$ is a valid choice for $B_{\mathbf{med}}^{(t)} = B_i^{(t)}$.
+(c) for all $i \in \mathbf{good}$ and $t \in [T]$, $\| B_i^{(t)} - B_{\mathbf{med}}^{(t)}\| \leq 16\sqrt{tC} / m$ and $\| B_\star^{(t)} - B_{\mathbf{med}}^{(t)}\| \leq 12\sqrt{tC} / m$.
+(d) for all $i \in \mathbf{good}$ and $t \in [T]$ , we also have $i \in \mathbf{good}_{t+1}$ .
+(e) $\left\| \sum_{i\in \mathrm{good}}\left(B_i^{(t)} - B_\star^{(t)}\right)\right\| \leq O(\sqrt{t\log(T / p)} /\sqrt{m}).$
+
+Proof of Claim A.2. We prove by induction. Suppose the statements hold for $t - 1$ and we now move to $t$ .
+
+(a) For each $i \in \mathbf{good}$ , note $\mathbb{E}[\nabla_{t,i}] = \nabla_t$ and $\| \nabla_{t,i} - \nabla_t\| \leq 1$ . Let $X_t = \frac{\nabla_{t,i} - \nabla_t}{|\mathrm{good}_t|}$ , so that $\| X_t\| \leq \frac{1}{|\mathrm{good}_t|} \leq \frac{1}{(1 - \alpha)m} \leq \frac{2}{m}$ . We can thus apply Lemma A.1 to the $X_t$ and then take a union bound over all $i \in \mathbf{good}$ . Thus, with probability at least $1 - \frac{p}{8T}$ we have $\| B_i^{(t)} - B_\star^{(t)}\| \leq 4\sqrt{tC}/m$ for all $i \in \mathbf{good}$ . The result follows from a further union bound over $t \in [T]$ .
+(b) Claim A.2a implies for every $i, j \in \mathbf{good}$ we have $\| B_i^{(t)} - B_j^{(t)}\| \leq 8\sqrt{tC} / m$ . Therefore each $i \in \mathbf{good}$ is a valid choice for setting $B_{\mathbf{med}}^{(t)} = B_i^{(t)}$ .
+(c) This is a consequence of the previous items and the definition of $B_{\mathrm{med}}^{(t)}$
+(d) This is a consequence of the previous item.
+(e) We can apply Lemma A.1 with $\{X_1, X_2, \ldots, X_{t|\mathrm{good}|}\} = \{\frac{\nabla_{k,i} - \nabla f(w_k)}{|\mathrm{good}_k|}\}_{k\in [t],i\in \mathrm{good}}$ . It holds with probability at least $1 - \frac{p}{8T}$ that $\left\| \sum_{i\in \mathrm{good}}\left(B_i^{(t)} - B_\star^{(t)}\right)\right\| \leq O(\sqrt{t\log(T / p)} /\sqrt{m})$
+
+
+
+Proof of Lemma 3.2. The property $\mathrm{good}_t\supseteq \mathrm{good}$ is from Claim A.2d.
+
+The property $\| \sigma_t\|^2 \leq O\left(\frac{\log(T / p)}{m}\right)$ is by standard concentration inequalities for sums of bounded random vectors.
+
+The property $\| \sigma_0 + \dots +\sigma_{t - 1}\| ^2\leq O\big(\frac{T\log(T / p)}{m}\big)$ is from Claim A.2e.
+
+The property $\| \Delta_t\| \leq \alpha$ is immediate since at most an $\alpha$ fraction of the machines are bad.
+
+The bound on $\|\Delta_0 + \cdots + \Delta_{t-1}\|^2$ can be derived as follows. For every $i \in [m] \setminus \mathrm{good}$, let $t$ be the last iteration at which $i \in \mathrm{good}_t$. Then, by the triangle inequality,
+
+$$
+\left\| B _ {i} ^ {(t + 1)} - B _ {\star} ^ {(t + 1)} \right\| \leq \frac {2}{m} + \left\| B _ {i} ^ {(t)} - B _ {\star} ^ {(t)} \right\|
+$$
+
+On the other hand, $i \in \mathrm{good}_t$ implies $\| B_i^{(t)} - B_{\mathrm{med}}^{(t)} \| \leq 16 \sqrt{tC} / m$ by the algorithm; combining this with $\| B_\star^{(t)} - B_{\mathrm{med}}^{(t)} \| \leq 12 \sqrt{tC} / m$, and summing over all such bad machines $i$, finishes the proof.
+
+The final two properties follow from standard facts about Gaussian random vectors.
+
+
+
+# A.2 PROOF OF LEMMA 3.3
+
+Proof of Lemma 3.3. Using the Lipschitz smoothness of $f(\cdot)$ , we have
+
+$$
+\begin{array}{l} f (w _ {t}) - f (w _ {t + 1}) \geq \langle \nabla f (w _ {t}), w _ {t} - w _ {t + 1} \rangle - \frac {1}{2} \| w _ {t} - w _ {t + 1} \| ^ {2} \\ = \eta \| \nabla f (w _ {t}) \| ^ {2} + \eta \langle \nabla f (w _ {t}), \Xi_ {t} \rangle - \frac {1}{2} \| w _ {t} - w _ {t + 1} \| ^ {2} + \eta \langle \nabla f (w _ {t}), \xi_ {t} \rangle \\ \end{array}
+$$
+
+We first show:
+
+$$
+\left\| w _ {t} - w _ {t + 1} \right\| ^ {2} = \eta^ {2} \left\| \nabla f (w _ {t}) + \Xi_ {t} - \xi_ {t} \right\| ^ {2} \leq 3 \eta^ {2} \left(\left\| \nabla f (w _ {t}) \right\| ^ {2} + \left\| \Xi_ {t} \right\| ^ {2} + \left\| \xi_ {t} \right\| ^ {2}\right)
+$$
+
+$$
+\left| \sum_ {t = 0} ^ {T - 1} \eta \langle \nabla f (w _ {t}), \xi_ {t} \rangle \right| \leq \eta \sqrt {\sum_ {t = 0} ^ {T - 1} \| \nabla f (w _ {t}) \| ^ {2}} \cdot O (\nu \sqrt {C _ {1}}) \leq \left(0. 0 5 \eta \sum_ {t = 0} ^ {T - 1} \| \nabla f (w _ {t}) \| ^ {2}\right) + O (C _ {1} \nu^ {2} \eta)
+$$
+
+The first follows since $(a + b + c)^2 \leq 3(a^2 + b^2 + c^2)$ for any $a, b, c \in \mathbb{R}$ , and the second follows from Lemma 3.2. Combining them, and also using that $\| \Xi_t \|^2 \leq O(C_2)$ , $\| \xi_t \|^2 \leq O(d\nu^2 C_1)$ , and $\eta \leq 0.01$ , we have
+
+$$
+f \left(w _ {0}\right) - f \left(w _ {T}\right) \geq 0. 9 \eta \sum_ {t = 0} ^ {T - 1} \left(\left\| \nabla f \left(w _ {t}\right) \right\| ^ {2} - O \left(\eta C _ {2}\right)\right) + \eta \sum_ {t = 0} ^ {T - 1} \left\langle \nabla f \left(w _ {t}\right), \Xi_ {t} \right\rangle - O \left(\eta T \nu^ {2} C _ {1} \left(\eta d + \frac {1}{T}\right)\right) \tag {A.1}
+$$
+
+For the inner product on the right-hand side of (A.1), we have that
+
+$$
+\eta \sum_ {t = 0} ^ {T - 1} \left\langle \nabla f \left(w _ {t}\right), \Xi_ {t} \right\rangle = \underbrace {\frac {\eta}{T} \sum_ {q = 0} ^ {T - 1} \left\langle \nabla f \left(w _ {q}\right) , \sum_ {t = 0} ^ {T - 1} \Xi_ {t} \right\rangle} _ {\spadesuit} + \underbrace {\frac {\eta}{T} \sum_ {q = 0} ^ {T - 1} \sum_ {t = 0} ^ {T - 1} \left\langle \nabla f \left(w _ {t}\right) - \nabla f \left(w _ {q}\right) , \Xi_ {t} \right\rangle} _ {\clubsuit} \tag {A.2}
+$$
+
+For the first term $\spadesuit$ , we have
+
+$$
+\begin{array}{l} | \spadesuit | \leq \frac {\eta}{T} \sum_ {q = 0} ^ {T - 1} \left| \left\langle \nabla f (w _ {q}), \sum_ {t = 0} ^ {T - 1} \Xi_ {t} \right\rangle \right| \leq \frac {\eta}{T} \sum_ {q = 0} ^ {T - 1} \| \nabla f (w _ {q}) \| \cdot \left\| \sum_ {t = 0} ^ {T - 1} \Xi_ {t} \right\| \\ \leq 0. 1 \eta \sum_ {q = 0} ^ {T - 1} \| \nabla f (w _ {q}) \| ^ {2} + \frac {O (\eta)}{T ^ {2}} \sum_ {q = 0} ^ {T - 1} \left\| \sum_ {t = 0} ^ {T - 1} \Xi_ {t} \right\| ^ {2} \\ \leq 0. 1 \eta \sum_ {q = 0} ^ {T - 1} \| \nabla f (w _ {q}) \| ^ {2} + O (\eta C _ {2}) \\ \end{array}
+$$
+
+where the last inequality follows from Lemma 3.2.
+
+For the second term $\clubsuit$ , we have
+
+$$
+\begin{array}{l} | \clubsuit | \leq \frac {\eta}{T} \sum_ {q = 0} ^ {T - 1} \left| \sum_ {t = 0} ^ {T - 1} \langle \nabla f (w _ {t}) - \nabla f (w _ {q}), \Xi_ {t} \rangle \right| \leq \underbrace {\frac {\eta}{T} \sum_ {q = 0} ^ {T - 1} \left| \sum_ {t = 0} ^ {T - 1} \langle \nabla^ {2} f (w _ {0}) (w _ {t} - w _ {q}) , \Xi_ {t} \rangle \right|} _ {\diamondsuit} \\ + \underbrace {\frac {\eta}{T} \sum_ {q = 0} ^ {T - 1} \sum_ {t = 0} ^ {T - 1} (\| w _ {t} - w _ {0} \| + \| w _ {q} - w _ {0} \|) \| w _ {t} - w _ {q} \| \| \Xi_ {t} \|} _ {\heartsuit} \\ \end{array}
+$$
+
+Using $\| w_{t} - w_{q}\| \leq \| w_{t} - w_{0}\| +\| w_{q} - w_{0}\|$ , one can derive
+
+$$
+\begin{array}{l} \heartsuit \leq \frac {\eta}{T} \sum_ {q = 0} ^ {T - 1} \sum_ {t = 0} ^ {T - 1} \left(\| w _ {t} - w _ {0} \| + \| w _ {q} - w _ {0} \|\right) ^ {2} \cdot O (\sqrt {C _ {2}}) \\ \leq \eta \sum_ {t = 0} ^ {T - 1} \| w _ {t} - w _ {0} \| ^ {2} \cdot O (\sqrt {C _ {2}}) \\ \leq \eta^ {3} \sum_ {t = 0} ^ {T - 1} \| \nabla f (w _ {0}) + \dots + \nabla f (w _ {t - 1}) + \Xi_ {0} + \dots + \Xi_ {t - 1} + \xi_ {0} + \dots + \xi_ {t - 1} \| ^ {2} \cdot O (\sqrt {C _ {2}}) \\ \leq O (\sqrt {C _ {2}} \eta^ {3} T ^ {2}) \sum_ {t = 0} ^ {T - 1} \| \nabla f (w _ {t}) \| ^ {2} + O (\sqrt {C _ {2}} C _ {2} \eta^ {3} T ^ {2}) + O (\eta^ {3} \nu^ {2} T ^ {2} d C _ {1} \sqrt {C _ {2}}) \\ \end{array}
+$$
+
+As for $\diamondsuit$,
+
+$$
+\Bigl|\sum_{t = 0}^{T - 1}\langle \nabla^{2}f(w_{0})(w_{t} - w_{q}),\Xi_{t}\rangle \Bigr|\leq \Bigl|\sum_{t = q + 1}^{T - 1}\langle \nabla^{2}f(w_{0})(w_{t} - w_{q}),\Xi_{t}\rangle \Bigr| + \Bigl|\sum_{t = 0}^{q - 1}\langle \nabla^{2}f(w_{0})(w_{t} - w_{q}),\Xi_{t}\rangle \Bigr|
+$$
+
+For the first term (and the second term is analogous), we have
+
+$$
+\begin{array}{l} \Big | \sum_ {t = q + 1} ^ {T - 1} \langle \nabla^ {2} f (w _ {0}) (w _ {t} - w _ {q}), \Xi_ {t} \rangle \Big | \\ = \eta \Big | \sum_ {t = q + 1} ^ {T - 1} \langle \nabla^ {2} f (w _ {0}) (\nabla f (w _ {q}) + \dots + \nabla f (w _ {t - 1}) + \Xi_ {q} + \dots + \Xi_ {t - 1} + \xi_ {q} + \dots + \xi_ {t - 1}), \Xi_ {t} \rangle \Big | \\ \leq \eta \Big | \sum_ {t = q + 1} ^ {T - 1} \left\langle \nabla^ {2} f (w _ {0}) (\xi_ {q} + \dots + \xi_ {t - 1}), \Xi_ {t} \right\rangle \Big | + \\ \eta \Big| \sum_ {t = q + 1} ^ {T - 1} \langle \nabla^ {2} f (w _ {0}) (\nabla f (w _ {q}) + \dots + \nabla f (w _ {t - 1})), \Xi_ {t} \rangle \Big | + \eta \Big | \sum_ {t = q + 1} ^ {T - 1} \langle \nabla^ {2} f (w _ {0}) (\Xi_ {q} + \dots + \Xi_ {t - 1}), \Xi_ {t} \rangle \Big | \\ \stackrel {1} {\leq} \eta \cdot O (\sqrt {d \nu^ {2} T C _ {1}} \cdot \sqrt {T C _ {2}}) + \\ \eta \Bigl|\sum_{t = q}^{T - 2}\langle \nabla^{2}f(w_{0})\nabla f(w_{t}),\Xi_{t + 1} + \dots +\Xi_{T - 1}\rangle \Bigr| + \frac{\eta}{2}\Bigl\langle \nabla^{2}f(w_{0})(\Xi_{q} + \dots + \Xi_{T - 1}),(\Xi_{q} + \dots + \Xi_{T - 1})\Bigr\rangle \\ \leq \eta \cdot O (\sqrt {d \nu^ {2} T C _ {1}} \cdot \sqrt {T C _ {2}}) + \eta \sum_ {t = q} ^ {T - 2} \| \nabla f (w _ {t}) \| \| \Xi_ {t + 1} + \dots + \Xi_ {T - 1} \| + \frac {\eta}{2} \| \Xi_ {q} + \dots + \Xi_ {T - 1} \| ^ {2} \\ \overset {2} {\leq} O \left(\eta \sqrt {T C _ {2}}\right) \cdot \sum_ {t = 0} ^ {T - 1} \| \nabla f (w _ {t}) \| + O \left(T \eta C _ {2} + T \eta \nu^ {2} d C _ {1}\right). \\ \end{array}
+$$
+
+Above, inequality ① uses $\| \Xi_0 + \dots + \Xi_t\| \leq O(\sqrt{TC_2})$ for $C_2 = \alpha^2\log \frac{mT}{p} +\frac{\log(T / p)}{m}$ (see Lemma 3.2) and a delicate application of Azuma's inequality that we state at the end of this subsection (see Proposition 3.4); inequality ② uses Young's inequality and Lemma 3.2.
+
+Substituting this back into the bound on $\diamondsuit$, we have
+
+$$
+\begin{array}{l} \diamondsuit \leq O \left(\eta^ {2} \sqrt {T C _ {2}}\right) \cdot \sum_ {t = 0} ^ {T - 1} \| \nabla f (w _ {t}) \| + O \left(T \eta^ {2} C _ {2} + T \eta^ {2} \nu^ {2} d C _ {1}\right) \\ \leq 0. 1 \eta \sum_ {t = 0} ^ {T - 1} \| \nabla f (w _ {t}) \| ^ {2} + O \left(\eta^ {3} T ^ {2} C _ {2} + T \eta^ {2} C _ {2} + T \eta^ {2} \nu^ {2} d C _ {1}\right) \\ \end{array}
+$$
+
+Finally, substituting $\diamondsuit$ and $\heartsuit$ back into $\clubsuit$, and $\clubsuit$ and $\spadesuit$ back into (A.2) and (A.1), we have
+
+$$
+\begin{array}{l} f (w _ {0}) - f (w _ {T}) \geq 0. 8 \eta \sum_ {t = 0} ^ {T - 1} \| \nabla f (w _ {t}) \| ^ {2} - O (\sqrt {C _ {2}} \eta^ {3} T ^ {2}) \sum_ {t = 0} ^ {T - 1} \| \nabla f (w _ {t}) \| ^ {2} \\ - C _ {2} \cdot O \left(\eta + \eta^ {2} T + \eta^ {3} T ^ {2} + \sqrt {C _ {2}} \eta^ {3} T ^ {2}\right) - C _ {1} \cdot O \left(T \eta^ {2} \nu^ {2} d + T ^ {2} \eta^ {3} \nu^ {2} \sqrt {C _ {2}} + \eta T \nu^ {2} \left(\eta d + \frac {1}{T}\right)\right) \\ \end{array}
+$$
+
+Together with $T = \frac{1}{100\eta(1 + \sqrt{C_2})}$ and $\eta \leq 0.01\min \{1,\frac{1}{C_2}\}$, this gives
+
+$$
+\begin{array}{l} f \left(w _ {0}\right) - f \left(w _ {T}\right) \geq 0. 7 \eta \sum_ {t = 0} ^ {T - 1} \| \nabla f \left(w _ {t}\right) \| ^ {2} - C _ {2} \cdot O \left(\eta + \eta^ {2} T + \eta^ {3} T ^ {2} + \sqrt {C _ {2}} \eta^ {3} T ^ {2}\right) - C _ {1} \cdot O \left(T \eta \nu^ {2} \eta \left(d + \sqrt {C _ {2}}\right)\right) \\ = 0. 7 \eta \sum_ {t = 0} ^ {T - 1} \left(\| \nabla f (w _ {t}) \| ^ {2} - C _ {2} \cdot O \left(\frac {1}{T} + \eta + \eta^ {2} T + \sqrt {C _ {2}} \eta^ {2} T\right) - C _ {1} \cdot O \left(\eta T \nu^ {2} \eta (d + \sqrt {C _ {2}})\right)\right) \\ \geq 0. 7 \eta \sum_ {t = 0} ^ {T - 1} \left(\| \nabla f (w _ {t}) \| ^ {2} - \eta \cdot O \left(C _ {2} + \left(C _ {2}\right) ^ {1. 5}\right) - O \left(C _ {1} \nu^ {2} \eta \left(d + \sqrt {C _ {2}}\right)\right)\right). \\ \end{array}
+$$
+
+
+
+# A.3 PROOF OF PROPOSITION 3.4
+
+Proposition 3.4. Fix the dimension parameter $d \geq 1$ . Suppose $\xi_0, \ldots, \xi_{T-1} \in \mathbb{R}^d$ are i.i.d. drawn from $\mathcal{N}(0, \mathbf{I})$ , and that $\Delta_1, \ldots, \Delta_{T-1}$ are arbitrary vectors in $\mathbb{R}^d$ . Here, each vector $\Delta_t$ with $t = 1, \ldots, T-1$ can depend on $\xi_0, \ldots, \xi_{t-1}$ but not on $\xi_t, \ldots, \xi_{T-1}$ . Suppose that these vectors satisfy $\| \Delta_1 + \dots + \Delta_t \|^2 \leq \mathfrak{T}$ for every $t = 1, \ldots, T-1$ . Then, with probability at least $1 - p$ ,
+
+$$
+\left| \sum_ {t = 1} ^ {T - 1} \left\langle \xi_ {0} + \dots + \xi_ {t - 1}, \Delta_ {t} \right\rangle \right| \leq O \left(\sqrt {d T \mathfrak {T} \log (T / p)}\right).
+$$
+
+Proof of Proposition 3.4. Using the identity formula
+
+$$
+\sum_ {t = 1} ^ {T - 1} \left\langle \xi_ {0} + \dots + \xi_ {t - 1}, \Delta_ {t} \right\rangle = \left\langle \sum_ {t = 0} ^ {T - 2} \xi_ {t}, \sum_ {t = 1} ^ {T - 1} \Delta_ {t} \right\rangle - \sum_ {t = 1} ^ {T - 2} \left\langle \xi_ {t}, \Delta_ {1} + \dots + \Delta_ {t} \right\rangle
+$$
+
+we have
+
+$$
+\begin{array}{l} \left| \sum_ {t = 1} ^ {T - 1} \langle \xi_ {0} + \dots + \xi_ {t - 1}, \Delta_ {t} \rangle \right| \leq \left\| \sum_ {t = 0} ^ {T - 2} \xi_ {t} \right\| \cdot \left\| \sum_ {t = 1} ^ {T - 1} \Delta_ {t} \right\| + \left| \sum_ {t = 1} ^ {T - 2} \langle \xi_ {t}, \Delta_ {1} + \dots + \Delta_ {t} \rangle \right| \\ \leq O \left(\sqrt {d T \mathfrak {T} \log (T / p)}\right) + \left| \sum_ {t = 1} ^ {T - 2} \langle \xi_ {t}, \Delta_ {1} + \dots + \Delta_ {t} \rangle \right|, \\ \end{array}
+$$
+
+where the last inequality uses $\| \xi_0 + \dots +\xi_{T - 2}\| \leq O(\sqrt{dT\log(1 / p)})$ with probability at least $1 - p / 2$ . Furthermore, we note that $\xi_t$ is independent of $\xi_0,\ldots ,\xi_{t - 1},\Delta_1,\ldots ,\Delta_t$ and $\mathbb{E}[\xi_t] = 0$ . Therefore, letting $S_{t} = \langle \xi_{t},\Delta_{1} + \dots +\Delta_{t}\rangle$ , we have $\mathbb{E}[S_t|\xi_0,\dots ,\xi_{t - 1}] = 0$ ; furthermore, with probability at least $1 - p / 2$ , it satisfies $|S_t|\leq O(\sqrt{d\mathfrak{T}}\log (T / p))$ for every $t$ . Finally, by Azuma's inequality, we have
+
+$$
+\left| \sum_ {t = 1} ^ {T - 2} \left\langle \xi_ {t}, \Delta_ {1} + \dots + \Delta_ {t} \right\rangle \right| \leq O \left(\sqrt {d T \mathfrak {T} \log (T / p)}\right).
+$$
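The summation-by-parts identity used at the start of this proof is easy to verify numerically. Below is a quick sketch (our own, with random vectors standing in for $\xi_t$ and $\Delta_t$):

```python
import numpy as np

# Numerical check of the summation-by-parts identity in Proposition 3.4:
# sum_{t=1}^{T-1} <xi_0+...+xi_{t-1}, Delta_t>
#   = <sum_{t=0}^{T-2} xi_t, sum_{t=1}^{T-1} Delta_t>
#     - sum_{t=1}^{T-2} <xi_t, Delta_1+...+Delta_t>
rng = np.random.default_rng(1)
T, d = 20, 4
xi = rng.normal(size=(T, d))      # stands in for xi_0, ..., xi_{T-1}
Delta = rng.normal(size=(T, d))   # Delta[0] is unused (sums start at t = 1)

lhs = sum(xi[:t].sum(axis=0) @ Delta[t] for t in range(1, T))
rhs = xi[:T - 1].sum(axis=0) @ Delta[1:T].sum(axis=0) \
      - sum(xi[t] @ Delta[1:t + 1].sum(axis=0) for t in range(1, T - 1))
print(np.isclose(lhs, rhs))  # True
```

Both sides count each pair $\langle \xi_s, \Delta_t \rangle$ with $s < t$ exactly once, which is why the identity holds for arbitrary vectors.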
+
+
+
+# A.4 PROOF OF LEMMA 3.5
+
+Proof of Lemma 3.5. Let us denote $r_t = \frac{[\xi_t^a]_1}{2} = -\frac{[\xi_t^b]_1}{2}$, so that $r_t \sim \mathcal{N}(0, \frac{\nu^2}{4})$. We can write
+
+$$
+w _ {t + 1} ^ {\mathrm {a}} - w _ {t + 1} ^ {\mathrm {b}} = \eta r _ {t} \mathbf {e} _ {1} + w _ {t} ^ {\mathrm {a}} - w _ {t} ^ {\mathrm {b}} - \eta (\nabla f (w _ {t} ^ {\mathrm {a}}) - \nabla f (w _ {t} ^ {\mathrm {b}})) - \eta (\Xi_ {t} ^ {\mathrm {a}} - \Xi_ {t} ^ {\mathrm {b}})
+$$
+
+Using the second-order smoothness, we have
+
+$$
+\begin{array}{l} \nabla f \left(w _ {t} ^ {\mathrm {a}}\right) - \nabla f \left(w _ {t} ^ {\mathrm {b}}\right) = \int_ {\tau = 0} ^ {1} \nabla^ {2} f \left(w _ {t} ^ {\mathrm {a}} + \tau \left(w _ {t} ^ {\mathrm {b}} - w _ {t} ^ {\mathrm {a}}\right)\right) \left(w _ {t} ^ {\mathrm {a}} - w _ {t} ^ {\mathrm {b}}\right) d \tau \\ = \nabla^ {2} f (w _ {0}) \cdot \left(w _ {t} ^ {\mathrm {a}} - w _ {t} ^ {\mathrm {b}}\right) + \theta_ {t} \\ \end{array}
+$$
+
+for some vector $\| \theta_t\| \leq \max \{\| w_0^a -w_t^a\| ,\| w_0^b -w_t^b\| \} \cdot \| w_t^a -w_t^b\|$ . Therefore, we have
+
+$$
+w _ {t + 1} ^ {\mathrm {a}} - w _ {t + 1} ^ {\mathrm {b}} = \eta r _ {t} \mathbf {e} _ {1} + \left(\mathbf {I} - \eta \nabla^ {2} f (w _ {0})\right) \left(w _ {t} ^ {\mathrm {a}} - w _ {t} ^ {\mathrm {b}}\right) - \eta \left(\Xi_ {t} ^ {\mathrm {a}} - \Xi_ {t} ^ {\mathrm {b}} + \theta_ {t}\right)
+$$
+
+Now, setting $\psi_0 = \overline{\psi}_0 = 0$, consider the two sequences
+
+- $\psi_{t + 1} = \eta r_t\mathbf{e}_1 + \big(\mathbf{I} - \eta \nabla^2 f(w_0)\big)\psi_t$ and
+- $\overline{\psi}_{t+1} = \eta r_t \mathbf{e}_1 + \left( \mathbf{I} - \eta \nabla^2 f(w_0) \right) \overline{\psi}_t - \eta (\Xi_t^a - \Xi_t^b + \theta_t) = w_{t+1}^a - w_{t+1}^b$
+
+We will inductively prove $\| \psi_t - \overline{\psi}_t \| \leq \frac{1}{2} \| \psi_t \|$ . On one hand, it is easy to see that $\psi_t$ is zero except in the first coordinate, in which it behaves as a Gaussian with zero mean and variance $\sum_{k=0}^{t-1} (1 + \eta \delta)^{2k} \cdot \frac{\eta^2 \nu^2}{4} = \Theta\left(\frac{(1 + \eta \delta)^{2t}}{\eta \delta} \cdot \eta^2 \nu^2\right)$ . By Gaussian tail bounds, we know that
+
+- with probability at least 0.99, $\| \psi_t\| \leq O\left(\frac{\sqrt{\eta C_1}\nu(1 + \eta\delta)^t}{\sqrt{\delta}}\right)$ for every $t$;
+- with probability at least 0.99, $\| \psi_T\| \geq \frac{1}{1000} \left(\frac{\sqrt{\eta}\nu(1 + \eta\delta)^T}{\sqrt{\delta}}\right)$.
+
+In the rest of the proof, we condition on this event. We proceed towards a contradiction by assuming $\| w_{t}^{\mathbf{a}} - w_{0}^{\mathbf{a}} \| \leq R$ and $\| w_{t}^{\mathbf{b}} - w_{0}^{\mathbf{b}} \| \leq R$ for all $t \in [T]$.
+
+We will inductively prove that $\| \psi_t - \overline{\psi}_t\| \leq \frac{1}{2000}\left(\frac{\sqrt{\eta}\nu(1 + \eta\delta)^t}{\sqrt{\delta}}\right)$ . We calculate the difference
+
+$$
+\psi_ {t} - \overline {{\psi}} _ {t} = \eta \sum_ {i = 0} ^ {t - 1} \left(\mathbf {I} - \eta \nabla^ {2} f (w _ {0})\right) ^ {t - 1 - i} \left(\Xi_ {i} ^ {\mathrm {a}} - \Xi_ {i} ^ {\mathrm {b}} + \theta_ {i}\right)
+$$
+
+Let $g = \frac{\psi_t - \overline{\psi}_t}{\|\psi_t - \overline{\psi}_t\|}$; taking the inner product of the above equation with $g$ gives
+
+$$
+\left\| \psi_ {t} - \bar {\psi} _ {t} \right\| = \eta \sum_ {i = 0} ^ {t - 1} \left\langle \Xi_ {i} ^ {\mathrm {a}} - \Xi_ {i} ^ {\mathrm {b}} + \theta_ {i}, \left(\mathbf {I} - \eta \nabla^ {2} f (w _ {0})\right) ^ {t - 1 - i} g \right\rangle
+$$
+
+$$
+\begin{array}{l} \stackrel {{1}} {{\leq}} \eta \sum_ {i = 0} ^ {t - 1} \left(\left\langle \Xi_ {i} ^ {\mathrm {a}} - \Xi_ {i} ^ {\mathrm {b}}, \left(\mathbf {I} - \eta \nabla^ {2} f (w _ {0})\right) ^ {t - 1 - i} g \right\rangle + R \cdot O (\frac {\sqrt {\eta C _ {1}} \nu (1 + \eta \delta) ^ {i}}{\sqrt {\delta}}) \cdot (1 + \eta \delta) ^ {t - 1 - i}\right) \\ \leq \eta \sum_ {i = 0} ^ {t - 1} \left\langle \Xi_ {i} ^ {\mathrm {a}} - \Xi_ {i} ^ {\mathrm {b}}, \left(\mathbf {I} - \eta \nabla^ {2} f (w _ {0})\right) ^ {t - 1 - i} g \right\rangle + O \left(R \eta T \frac {\sqrt {\eta C _ {1}}}{\sqrt {\delta}} \nu (1 + \eta \delta) ^ {t}\right) \\ \end{array}
+$$
+
+where the inequality ① uses $\| \theta_i\| \leq R\cdot \| \overline{\psi}_i\| \leq R\cdot (\| \psi_i\| +\| \psi_i - \overline{\psi}_i\|),\left\| \big(\mathbf{I} - \eta \nabla^2 f(w_0)\big)^{t - 1 - i}g\right\| \leq (1 + \eta \delta)^{t - 1 - i}$ , and the inductive assumption. Let us call $M = \big(\mathbf{I} - \eta \nabla^{2}f(w_{0})\big)$ , and focus on
+
+$$
+\begin{array}{l} \left| \sum_ {i = 0} ^ {t - 1} \left\langle \Xi_ {i} ^ {\mathrm {a}}, \left(\mathbf {I} - \eta \nabla^ {2} f (w _ {0})\right) ^ {t - 1 - i} g \right\rangle \right| \\ = \left| \left\langle \Xi_ {0} ^ {\mathrm {a}} + \dots + \Xi_ {t - 1} ^ {\mathrm {a}}, g \right\rangle + \sum_ {i = 0} ^ {t - 2} \left\langle \Xi_ {0} ^ {\mathrm {a}} + \dots + \Xi_ {i} ^ {\mathrm {a}}, M ^ {t - 1 - i} g - M ^ {t - 2 - i} g \right\rangle \right| \\ \leq \| \Xi_ {0} ^ {\mathrm {a}} + \dots + \Xi_ {t - 1} ^ {\mathrm {a}} \| \cdot \| g \| + \sum_ {i = 0} ^ {t - 2} \| \Xi_ {0} ^ {\mathrm {a}} + \dots + \Xi_ {i} ^ {\mathrm {a}} \| \cdot \| M ^ {t - 1 - i} g - M ^ {t - 2 - i} g \| \\ \leq O \left(\sqrt {T C _ {2}}\right) \cdot \left(\| g \| + \sum_ {i = 0} ^ {t - 2} \| M ^ {t - 1 - i} g - M ^ {t - 2 - i} g \|\right) \quad \text {(using Lemma 3.2)} \\ \leq O \left(\sqrt {T C _ {2}} \cdot \left(1 + \eta \delta\right) ^ {t - 1}\right) \\ \end{array}
+$$
+
+Together, we have
+
+$$
+\left\| \psi_ {t} - \bar {\psi} _ {t} \right\| \leq O \left(\eta \sqrt {T C _ {2}}\right) \cdot (1 + \eta \delta) ^ {t} + O \left(R \eta T \frac {\sqrt {\eta C _ {1}}}{\sqrt {\delta}} \nu (1 + \eta \delta) ^ {t}\right)
+$$
+
+Under our assumption, we have $\| \psi_t - \overline{\psi}_t \| < \frac{1}{2000} \left( \frac{\sqrt{\eta} \nu (1 + \eta \delta)^t}{\sqrt{\delta}} \right)$ and therefore $\| \overline{\psi}_T \| \geq \| \psi_T \| - \| \psi_T - \overline{\psi}_T \| \geq \frac{1}{2000} \left( \frac{\sqrt{\eta} \nu (1 + \eta \delta)^T}{\sqrt{\delta}} \right)$. Thus, within $T$ iterations, we have $\| \overline{\psi}_t \| > R$, which gives a contradiction.
+
+# B FINAL: DOUBLE SAFEGUARD
+
+We now come to our final Algorithm 3, which is our perturbed SGD algorithm with two safeguards. The two-safeguard algorithm naturally divides itself into epochs, each consisting of $T_{1}$ iterations. We will demonstrate that we make good progress within most epochs. Thus, consider some iterate $x_{mT_1}$, for some $m < T / T_{1}$. Our goal will be to argue that we make good function-value progress by iterate $x_{(m + 1)T_1}$, and that we do not settle into any saddle points. To slightly simplify notation, let $w_{0} = x_{mT_{1}}$, and let the sequence of iterates be $w_{0},\ldots ,w_{T_{1} - 1}$, so that $w_{T_1 - 1} = x_{(m + 1)T_1 - 1}$. For completeness' sake we restate the method as Algorithm 3.
+
+# Algorithm 3 Perturbed SGD with double safeguard (for analysis purposes)
+
+Input: $w_0 \in \mathbb{R}^d$, set $\mathrm{good}_0 \supseteq \mathrm{good}$, rate $\eta > 0$, lengths $T_1 \geq T_0 \geq 1$, thresholds $\mathfrak{T}_1 > \mathfrak{T}_0 > 0$;
+
+1: for $t \gets 0$ to $T_{1} - 1$ do
+2: last $\leftarrow$ max{ $t_0\in [t]\colon t_0$ is a multiple of $T_{0}\}$
+3: for each $i\in \mathrm{good}_t$ do
+4: receive $\nabla_{t,i}\in \mathbb{R}^d$ from machine $i$ ;
+5: $A_{i}\gets \sum_{k = 0}^{t}\frac{\nabla_{k,i}}{|\mathsf{good}_{k}|}$ and $B_{i}\gets \sum_{k = last}^{t}\frac{\nabla_{k,i}}{|\mathsf{good}_{k}|};$
+6: $A_{\mathrm{med}} \gets A_i$ where $i \in \mathrm{good}_t$ is any machine s.t. $\left|\{j \in \mathrm{good}_t : \|A_j - A_i\| \leq \mathfrak{T}_1\}\right| > m/2$ .
+7: $B_{\mathrm{med}} \gets B_i$ where $i \in \mathrm{good}_t$ is any machine s.t. $\left|\{j \in \mathrm{good}_t : \|B_j - B_i\| \leq \mathfrak{T}_0\}\right| > m/2$ .
+8: $\mathbf{good}_{t + 1}\gets \left\{i\in \mathbf{good}_t:\| A_i - A_{\mathrm{med}}\| \leq 2\mathfrak{T}_1\wedge \| B_i - B_{\mathrm{med}}\| \leq 2\mathfrak{T}_0\right\} ;$
+9: $w_{t + 1} = w_t - \eta \left(\xi_t + \frac{1}{|\mathbf{good}_t|}\sum_{i\in \mathbf{good}_t}\nabla_{t,i}\right)$
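A minimal server-side sketch of one iteration of Algorithm 3 follows. This is our own illustration, not the paper's implementation: all names are hypothetical, the reported gradients are assumed to arrive as arrays, and the normalization by $|\mathsf{good}_k|$ is simplified to the current $|\mathsf{good}_t|$.

```python
import numpy as np

def double_safeguard_step(w, t, grads, good, T0, thr1, thr0, eta, nu, rng):
    """One server iteration of the double-safeguard SGD update (a sketch).
    grads: list where grads[k] is the (m, d) array of gradients nabla_{k,i}
    reported at iteration k; good: the set good_t of still-trusted machines.
    Returns (w_next, good_next)."""
    m, d = grads[t].shape
    last = (t // T0) * T0
    # Normalized running sums: A_i over the whole history, B_i over the
    # current window (normalization by |good_t| is a simplification).
    A = sum(grads[k] / len(good) for k in range(t + 1))
    B = sum(grads[k] / len(good) for k in range(last, t + 1))

    def med(V, thr):
        # Any trusted machine whose ball of radius thr covers > m/2 machines.
        for i in good:
            if np.sum(np.linalg.norm(V - V[i], axis=1) <= thr) > m / 2:
                return V[i]
        raise RuntimeError("no valid median found")

    A_med, B_med = med(A, thr1), med(B, thr0)
    good_next = {i for i in good
                 if np.linalg.norm(A[i] - A_med) <= 2 * thr1
                 and np.linalg.norm(B[i] - B_med) <= 2 * thr0}
    xi = nu * rng.normal(size=d)                  # Gaussian perturbation xi_t
    avg = grads[t][sorted(good)].mean(axis=0)     # average over good_t
    return w - eta * (xi + avg), good_next
```

On a toy run with three honest machines reporting identical gradients and one Byzantine machine reporting a far-away vector, the outlier is filtered out of `good_next` after a single step, which is exactly the role of the two safeguard tests on lines 6 to 8.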
+
+Our main result is the following theorem.
+
+Theorem B.1. Let $C_3 = \alpha^2 +\frac{1}{m}$ . Suppose we pick parameters $p,\delta \in (0,1)$ , $\eta \leq \widetilde{O} (\frac{\delta^3}{C_3})$ , $\nu^{2} = \widetilde{\Theta} (C_{3})$ , $T_0 = \widetilde{\Theta}\bigl (\frac{1}{\eta}\bigr)$ , $T_{1} = \widetilde{\Theta}\bigl (\frac{1}{\eta\delta}\bigr)\geq T_{0}$ , $\mathfrak{T}_1 = \widetilde{\Theta} (\sqrt{T_1})$ , and $\mathfrak{T}_0 = \widetilde{\Theta} (\sqrt{T_0})$ . Then, starting from $w_{0}$
+
+(a) with probability at least $1 - p$ we have
+
+$$
+f \left(w _ {0}\right) - f \left(w _ {T _ {1}}\right) \geq 0. 7 \eta \sum_ {t = 0} ^ {T _ {1} - 1} \left(\| \nabla f \left(w _ {t}\right) \| ^ {2} - \widetilde {O} \left(\eta C _ {3} d\right)\right).
+$$
+
+(b) If $\| w_t - w_0 \| \geq R$ for some $t \in \{1, 2, \dots, T_1\}$, where $R = \widetilde{\Theta}(\delta) \leq \frac{\delta}{2}$, then with probability at least $1 - p$ we have
+
+$$
+f \left(w _ {0}\right) - f \left(w _ {T _ {1}}\right) \geq 0. 5 \eta \sum_ {t = 0} ^ {T _ {1} - 1} \left(- \widetilde {\mathcal {O}} \left(\eta C _ {3} d\right)\right) + \widetilde {\Omega} \left(\delta^ {3}\right)
+$$
+
+(c) If $\lambda_{\mathrm{min}}(\nabla^2 f(w_0)) \leq -\delta$, then with probability at least 0.45,
+
+$$
+f \left(w _ {0}\right) - f \left(w _ {T _ {1}}\right) \geq 0. 5 \eta \sum_ {t = 0} ^ {T _ {1} - 1} \left(- \widetilde {\mathcal {O}} \left(\eta C _ {3} d\right)\right) + \widetilde {\Omega} \left(\delta^ {3}\right)
+$$
+
+# B.1 WHY THEOREM B.1 IMPLIES THEOREM 2.3
+
+Using the parameter choice $\eta = \widetilde{\Theta} (\frac{\varepsilon^2}{C_3d})$ from Theorem 2.3, we know $\widetilde{O} (\eta C_3d)\leq 0.1\varepsilon^2$ . We claim two things:
+
+- At least $90\%$ of the epochs must satisfy (denoting by $w_{0}$ and $w_{T_1}$ the beginning and ending points of the epoch)
+
+$$
+f \left(w _ {0}\right) - f \left(w _ {T _ {1}}\right) \leq 2 0 \frac {f \left(x _ {0}\right) - \operatorname* {m i n} f (x)}{T / T _ {1}} \leq \varepsilon^ {1. 5}
+$$
+
+The last inequality uses our choice of $T$ and $\delta = \widetilde{\Theta} (\sqrt{\varepsilon})$ .
+
+We prove this by contradiction. Suppose at least $10\%$ of the epochs satisfy $f(w_0) - f(w_{T_1}) > 20\frac{f(x_0) - \min f(x)}{T / T_1}$. The remaining epochs must at least satisfy $f(w_0) - f(w_{T_1}) \geq -0.7\eta T_1 \cdot 0.1\varepsilon^2$. Summing over all the epochs, we would obtain $f(x_0) - f(x_T) > f(x_0) - \min f(x)$, a contradiction.
+
+- At least $40\%$ of the epochs must satisfy the three properties from Theorem B.1.
+
+In particular, at least $30\%$ of the epochs must satisfy both. Since $\varepsilon^{1.5}$ is so small that
+
+$$
+\varepsilon^ {1. 5} \geq f (w _ {0}) - f (w _ {T _ {1}}) \geq 0. 5 \eta \sum_ {t = 0} ^ {T _ {1} - 1} \left(- \tilde {\mathcal {O}} (\eta C _ {3} d)\right) + \tilde {\Omega} (\delta^ {3}) \geq \tilde {\Omega} (\delta^ {3}) - 0. 0 5 \eta T _ {1} \varepsilon^ {2}
+$$
+
+would give a contradiction (for instance, one can choose $\delta$ to be slightly larger than $\sqrt{\varepsilon}$ by some log factors), those $30\%$ of the epochs must satisfy:
+
+- $\varepsilon^{1.5} \geq 0.7\eta \sum_{t=0}^{T_1 - 1} \left( \| \nabla f(w_t) \|^2 - 0.1\varepsilon^2 \right)$ ,
+- $\| w_t - w_0 \| \leq \frac{\delta}{2}$ for every $t = 1, 2, \ldots, T_1$ , and
+- $\nabla^2 f(w_0) \succeq -\delta \mathbf{I}$ .
+
+The latter two properties together imply $\nabla^2 f(w_t) \succeq -\frac{\delta}{2}\mathbf{I}$ for every $t = 1,2,\dots,T_1$ (by the second-order smoothness). The first property implies that at least $90\%$ of the iterations $t$ in this epoch must satisfy $\|\nabla f(w_t)\| \leq \varepsilon$. This finishes the proof of Theorem 2.3.
+
+# B.2 PROOF OF THEOREM B.1
+
+We first have the following lemma.
+
+Lemma B.2 (double safeguard). In Algorithm 3, suppose $\mathfrak{T}_1 = 8\sqrt{T_1\log(16mT_1 / p)}$ and $\mathfrak{T}_0 = 8\sqrt{T_0\log(16mT_1 / p)}$. Then, with probability at least $1 - p / 2$, for every $t = 0,\dots ,T_{1} - 1$
+
+- $\mathrm{good}_t \supseteq \mathrm{good}$ .
+- $\| \sigma_t\| ^2\leq O(\frac{\log(T_1 / p)}{m})$ , $\| \Delta_t\| ^2\leq \alpha^2$ , and $\| \xi_t\| ^2\leq O(\nu^2 d\log (T_1 / p))$
+- $\left\| \sigma_0 + \dots + \sigma_{t - 1} \right\|^2 \leq O\left(\frac{T_1 \log(T_1 / p)}{m}\right)$ , $\left\| \sigma_{last} + \dots + \sigma_{t - 1} \right\|^2 \leq O\left(\frac{T_0 \log(T_1 / p)}{m}\right)$
+- $\| \Delta_0 + \dots + \Delta_{t - 1} \|^2 \leq O(\alpha^2 T_1 \log(mT_1 / p))$ and $\| \Delta_{last} + \dots + \Delta_{t - 1} \|^2 \leq O(\alpha^2 T_0 \log(mT_1 / p))$
+- $\| \xi_0 + \dots + \xi_{t - 1} \|^2 \leq O(\nu^2 dT_1\log (T_1 / p))$ and $\| \xi_{last} + \dots + \xi_{t - 1} \|^2 \leq O(\nu^2 dT_0\log (T_1 / p))$ .
+
+We call this probabilistic event $\text{Event}_{T_1, T_0}^{\text{double}}(w_0)$ and $\Pr[\text{Event}_{T_1, T_0}^{\text{double}}(w_0)] \geq 1 - p/2$ .
+
+The proof is a direct corollary of Lemma 3.2, by combining events $\text{Event}_{T_1}^{\text{single}}(w_0)$ , $\text{Event}_{T_0}^{\text{single}}(w_0)$ , $\text{Event}_{T_0}^{\text{single}}(w_{T_0})$ , $\text{Event}_{T_0}^{\text{single}}(w_{2T_0})$ , and so on. The next lemma is a simple corollary obtained by repeatedly applying Lemma 3.3. It proves Theorem B.1a.
+
+Lemma B.3 (modified from Lemma 3.3). Denote by $C_1 = \log(T_1/p)$ and $C_2 = \alpha^2 \log \frac{mT_1}{p} + \frac{\log(T_1/p)}{m}$ . Suppose $\eta \leq 0.01 \min\{1, \frac{1}{C_2}\}$ , $T_0 = \frac{1}{100\eta(1 + \sqrt{C_2})}$ and $T_1 \geq T_0$ . We start from $w_0$ and apply Algorithm 3. Under event $Event_{T_1,T_0}^{double}(w_0)$ , it satisfies
+
+$$
+f(w_0) - f(w_{T_1}) \geq 0.7\eta \sum_{t=0}^{T_1-1} \left( \left\| \nabla f(w_t) \right\|^2 - \eta \cdot O\left(C_2 + (C_2)^{1.5}\right) - O\left(C_1 \nu^2 \eta \left(d + \sqrt{C_2}\right)\right) \right)
+$$
+
+The next lemma can be easily derived from Lemma 3.5.
+
+Lemma B.4 (modified from Lemma 3.5). Suppose
+
+$$
+R = \Theta\left(\frac{\delta}{\sqrt{C_1} \log\left(\delta^3 / \eta C_2\right)}\right) \quad \text{and} \quad \nu^2 = \Theta\left(C_2 \log \frac{\delta^3}{\eta C_2}\right)
+$$
+
+Suppose $\eta \leq 0.01\min \{1,\frac{\delta^3}{C_2}\}$ , $T_{0} = \frac{1}{100\eta(1 + \sqrt{C_{2}})}$ and $T_{1} = \Theta (\frac{1}{\eta\delta}\log \frac{\delta^{3}}{\eta C_{2}})\geq T_{0}$ . Let $w_{0}\in \mathbb{R}^{d}$ be any point in the space and suppose $\lambda_{\mathrm{min}}(\nabla^{2}f(w_{0}))\leq -\delta$ for some $\delta \geq 0$ . Given two coupled sequences defined as before, under events $\text{Event}_{T_1,T_0}^{\text{double}}(w_0^{\mathrm{a}})$ and $\text{Event}_{T_1,T_0}^{\text{double}}(w_0^{\mathrm{b}})$ , we have with probability at least 0.98
+
+$$
+\begin{array}{l} \max \left\{ f(w_0^{\mathrm{a}}) - f(w_{T_1}^{\mathrm{a}}),\; f(w_0^{\mathrm{b}}) - f(w_{T_1}^{\mathrm{b}}) \right\} \\ \geq 0.5\eta \sum_{t=0}^{T_1-1} \left( -\eta \cdot O\left(C_2 + (C_2)^{1.5}\right) - O\left(C_1 \nu^2 \eta (d + \sqrt{C_2})\right) \right) + \Omega\left(\frac{\delta^3}{C_1 \log^3 \frac{\delta^3}{\eta C_2}}\right) \end{array}
+$$
+
+Lemma B.4 directly proves the second half of Theorem B.1c, because given two coupled sequences with the same marginal distribution, we have
+
+$$
+\Pr\left[ f(w_0^{\mathrm{a}}) - f(w_{T_1}^{\mathrm{a}}) \geq X \right] \geq \frac{1}{2} \Pr\left[ \max \left\{ f(w_0^{\mathrm{a}}) - f(w_{T_1}^{\mathrm{a}}),\; f(w_0^{\mathrm{b}}) - f(w_{T_1}^{\mathrm{b}}) \right\} \geq X \right]
+$$
+
+Proof of Lemma B.4. Our choices of $r$ and $R$ satisfy the requirements of Lemma 3.5. Suppose without loss of generality that the $w_{t}^{\mathrm{a}}$ sequence leaves $w_{0}$ by more than $R$ . Let $T_{1}^{\mathrm{a}}$ be the first iteration $t \leq T_{1}$ in which $\| w_{t}^{\mathrm{a}} - w_{0}^{\mathrm{a}} \| \geq R$ . Then
+
+$$
+\begin{array}{l} \| w_{T_1^{\mathrm{a}}}^{\mathrm{a}} - w_0^{\mathrm{a}} \|^2 = \eta^2 \| \nabla f(w_0^{\mathrm{a}}) + \dots + \nabla f(w_{T_1^{\mathrm{a}}-1}^{\mathrm{a}}) + \Xi_0^{\mathrm{a}} + \dots + \Xi_{T_1^{\mathrm{a}}-1}^{\mathrm{a}} + \xi_0^{\mathrm{a}} + \dots + \xi_{T_1^{\mathrm{a}}-1}^{\mathrm{a}} \|^2 \\ \leq O\left(\eta^2 T_1\right) \sum_{t=0}^{T_1^{\mathrm{a}}-1} \| \nabla f(w_t^{\mathrm{a}}) \|^2 + O\left(C_2 \eta^2 T_1\right) + O\left(C_1 \eta^2 \nu^2 T_1 d\right) \end{array}
+$$
+
+Combining this with Lemma B.3, we have
+
+$$
+\begin{array}{l} f(w_0^{\mathrm{a}}) - f(w_{T_1^{\mathrm{a}}}^{\mathrm{a}}) \geq 0.5\eta \sum_{t=0}^{T_1^{\mathrm{a}}-1} \left( \left\| \nabla f(w_t^{\mathrm{a}}) \right\|^2 - \eta \cdot O\left(C_2 + (C_2)^{1.5}\right) - O\left(C_1 \nu^2 \eta \left(d + \sqrt{C_2}\right)\right) \right) + \frac{\left\| w_{T_1^{\mathrm{a}}}^{\mathrm{a}} - w_0^{\mathrm{a}} \right\|^2}{100 \eta T_1} \\ \geq 0.5\eta \sum_{t=0}^{T_1^{\mathrm{a}}-1} \left( \| \nabla f(w_t^{\mathrm{a}}) \|^2 - \eta \cdot O\left(C_2 + (C_2)^{1.5}\right) - O\left(C_1 \nu^2 \eta (d + \sqrt{C_2})\right) \right) + \frac{R^2}{100 \eta T_1} \end{array}
+$$
+
+Combining this with Lemma B.3 again but for the remainder iterations, we have
+
+$$
+f(w_0^{\mathrm{a}}) - f(w_{T_1}^{\mathrm{a}}) \geq 0.5\eta \sum_{t=0}^{T_1-1} \left( \| \nabla f(w_t^{\mathrm{a}}) \|^2 - \eta \cdot O\left(C_2 + (C_2)^{1.5}\right) - O\left(C_1 \nu^2 \eta (d + \sqrt{C_2})\right) \right) + \frac{R^2}{100 \eta T_1}
+$$
+
+
+
+In fact, the same proof of Lemma B.4 also implies Theorem B.1b. Together, these finish the proof of Theorem B.1.
+
+# C MORE ON EXPERIMENTS
+
+We conduct experiments on training a residual network ResNet-20 He et al. (2016) on the CIFAR-10/100 image classification tasks Krizhevsky et al. (2014).
+
+# C.1 SETTING AND IMPLEMENTED METHODS
+
+In all of our experiments, we use 10 workers and mini-batch size 10 per worker. Given any attacker and any defender algorithm, we run SGD three times for 140 epochs, each time with a different initial learning rate $\eta \in \{0.1, 0.2, 0.4\}$ . We let the learning rate decrease by a factor of 10 on epochs 80 and 110, and present the best testing accuracies in the three runs (each corresponding to a different initial learning rate).
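
The learning-rate schedule described above can be sketched as follows; the helper name and piecewise form are ours, not taken from any released code:

```python
def learning_rate(epoch, initial_lr=0.1):
    """Piecewise-constant schedule for 140 epochs: start at initial_lr
    (one of 0.1, 0.2, 0.4) and decay by a factor of 10 at epochs 80 and 110."""
    lr = initial_lr
    if epoch >= 80:
        lr /= 10.0
    if epoch >= 110:
        lr /= 10.0
    return lr
```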
+
+We use standard data augmentation (random crops, random flips, and channel normalization).
+
+We compare against Geometric Median Chen et al. (2017), Coordinate-wise Median Yin et al. (2018; 2019), Krum Blanchard et al. (2017), and Zeno Xie et al. (2018b) under the attacks described below. We set $\alpha = 0.4$ so there are 4 Byzantine workers. (This exceeds the fault-tolerance of Krum, and so we also tested Krum with only 3 Byzantine workers.) We formally define those prior works as follows.
+
+Definition C.1 (GeoMed Chen et al. (2017)). The geometric median of $\{y_1,\dots,y_m\}$ , denoted by geo_med $\{y_1,\dots,y_m\}$ , is
+
+$$
+\operatorname{geo\_med}\{y_1, \dots, y_m\} := \arg\min_{y \in \mathbb{R}^d} \sum_{i=1}^m \| y - y_i \|
+$$
+
+In our experiments, we choose the geometric median from the set $\{y_1,\dots,y_m\}$ itself (rather than over all of $\mathbb{R}^d$ ).
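
This set-restricted variant (a medoid) can be sketched with NumPy; the function name is ours:

```python
import numpy as np

def geo_med_from_set(ys):
    """Return the element of {y_1, ..., y_m} that minimizes the sum of
    Euclidean distances to all elements of the set (the medoid)."""
    ys = np.asarray(ys, dtype=float)                      # shape (m, d)
    # pairwise distances: dists[i, j] = ||y_i - y_j||
    dists = np.linalg.norm(ys[:, None, :] - ys[None, :, :], axis=2)
    return ys[int(np.argmin(dists.sum(axis=1)))]
```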
+
+Definition C.2 (coordinate-wise median Yin et al. (2018; 2019)). Coordinate-wise median $g = \text{med}\{y_1, \dots, y_m\}$ is defined as a vector with its $k$ -th coordinate being $g[k] = \text{med}\{y_1[k], \dots, y_m[k]\}$ for each $k \in [d]$ , where $\text{med}$ is the usual (one-dimensional) median.
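
A direct NumPy sketch of this aggregation rule:

```python
import numpy as np

def coord_median(ys):
    """Coordinate-wise median: the k-th output coordinate is the
    one-dimensional median of the k-th coordinates of y_1, ..., y_m."""
    return np.median(np.asarray(ys, dtype=float), axis=0)
```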
+
+Definition C.3 (Krum Blanchard et al. (2017)).
+
+$$
+\mathrm{KR}\{y_1, \dots, y_m\} := y_k \quad \text{where} \quad k = \underset{i \in [m]}{\arg\min} \sum_{i \rightarrow j} \| y_i - y_j \|^2
+$$
+
+and $i\to j$ ranges over the indices $j$ of the $m - b - 2$ nearest neighbours of $y_{i}$ in $\{y_1,\dots,y_m\} \setminus \{y_i\}$ by Euclidean distance.
+
+Note that Krum requires $2b + 2 < m$ . So, we have also repeated the experiments for Krum with 3 Byzantine workers (out of 10 workers) for a fairer comparison.
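
A sketch of the Krum selection rule as defined above (assuming the defender knows $b$ ; the implementation details are ours):

```python
import numpy as np

def krum(ys, b):
    """Return the y_k with the smallest sum of squared distances to its
    m - b - 2 nearest neighbours (excluding itself). Requires 2b + 2 < m."""
    ys = np.asarray(ys, dtype=float)
    m = len(ys)
    assert 2 * b + 2 < m, "Krum requires 2b + 2 < m"
    sq_dists = np.linalg.norm(ys[:, None, :] - ys[None, :, :], axis=2) ** 2
    scores = []
    for i in range(m):
        d = np.delete(sq_dists[i], i)          # distances to the other workers
        scores.append(np.sort(d)[: m - b - 2].sum())
    return ys[int(np.argmin(scores))]
```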
+
+Definition C.4 (Zeno Xie et al. (2018b)).
+
+$$
+\mathrm{Zeno}_b\{y_1, \dots, y_m\} = \frac{1}{m - b} \sum_{i=1}^{m-b} \widetilde{y}(i)
+$$
+
+where $\widetilde{y}(1), \dots, \widetilde{y}(m)$ denote the gradient estimators sorted by their "scores" in descending order, so the average keeps the $m - b$ highest-scoring ones. The so-called stochastic descendant score of a gradient estimator $u$ , based on the current parameter $x$ , learning rate $\eta$ , and a constant weight $\rho > 0$ , is defined as:
+
+$$
+\operatorname{Score}_{\eta, \rho}(u, x) = f_r(x) - f_r(x - \eta u) - \rho \| u \|^2
+$$
+
+Here, $f_{r}(x) - f_{r}(x - \eta u)$ is the estimated descendant of the loss function and $\rho \| u\|^2$ penalizes the magnitude of the update.
+
+In our experiments, we let $f_{r}(x)$ be the estimated objective over a mini-batch of size $n_r = 10$ (so the time to perform this estimation is of the same order of magnitude as the gradient evaluation of each individual worker). We also chose $\rho = 0.0005$ (this value does not affect our experimental results by much).
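
A sketch of the score and the resulting aggregation, with a caller-supplied loss estimator `f_r` standing in for the mini-batch objective (all names here are ours):

```python
import numpy as np

def zeno_score(u, x, f_r, eta, rho):
    """Stochastic descendant score: estimated loss decrease after a step
    along u, minus a penalty on the update magnitude."""
    u = np.asarray(u, dtype=float)
    return f_r(x) - f_r(x - eta * u) - rho * float(np.dot(u, u))

def zeno_aggregate(ys, x, f_r, eta, rho, b):
    """Average the m - b gradient estimators with the highest scores."""
    ys = [np.asarray(y, dtype=float) for y in ys]
    scores = [zeno_score(y, x, f_r, eta, rho) for y in ys]
    keep = np.argsort(scores)[::-1][: len(ys) - b]
    return np.mean([ys[i] for i in keep], axis=0)
```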
+
+Safeguard SGD. Our Algorithm 1 is stated in a way to make our theoretical proofs as clean as possible. Here, we discuss how we actually implement it in practice.
+
+First of all, as is common in the literature, we omit the Gaussian noise $\xi_{t}$ that is added for theoretical purposes, and instead rely on the natural noise in the training process to escape saddle points.
+
+Also, we make universal choices for our safeguard window sizes (across all attackers): for our algorithm with a single safeguard we have used a universal window size $T = 3$ epochs, and for our algorithm with double safeguards we have used window sizes $T_0 = 1$ epoch and $T_1 = 6$ epochs.
+
+We also provide an automatic empirical process to select safeguard thresholds and eliminate bad workers. The process to determine $A_{\mathrm{med}}$ (and likewise for $B_{\mathrm{med}}$ ) is as follows. In each iteration, for every worker $i \in [m]$ , we sort $\left\{\|A_i - A_j\|\right\}_{j \in [m]}$ and pick the $\lceil m/2 + 1 \rceil$ -th smallest entry; this number is the "score" for worker $i$ . We select the worker with the smallest "score" as $A_{\mathrm{med}}$ and call its "score" $S$ . Then, we use $1.5 \max \{S, 5\}$ as the safeguard threshold for this iteration. Namely, we declare any worker $j$ satisfying $\| A_{j} - A_{\mathrm{med}}\| \geq 1.5\max \{S,5\}$ as a bad worker.
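
One iteration of this thresholding process can be sketched as follows (a simplification that scores a single snapshot of the accumulation vectors; the function name and return convention are ours):

```python
import numpy as np

def flag_bad_workers(A):
    """A: per-worker accumulation vectors A_1, ..., A_m (one row each).
    Returns (index chosen as A_med, set of workers declared bad)."""
    A = np.asarray(A, dtype=float)
    m = len(A)
    dists = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=2)
    k = int(np.ceil(m / 2 + 1))
    scores = np.sort(dists, axis=1)[:, k - 1]   # ceil(m/2+1)-th smallest entry
    med = int(np.argmin(scores))                # worker playing the role of A_med
    threshold = 1.5 * max(scores[med], 5.0)
    bad = {j for j in range(m) if dists[j, med] >= threshold}
    return med, bad
```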
+
+# C.2 EXPERIMENT RESULTS BY ATTACKS
+
+The ideal accuracies are $91.7\% / 68.0\%$ for CIFAR-10/100, which correspond to applying SGD using only the stochastic gradients from the honest workers. Below we discuss the experimental results one attack at a time.
+
+# C.2.1 VARIANCE ATTACK
+
+We begin by looking at the hardest proposed attack from prior works. The Variance attack follows the strategy prescribed by Baruch et al. (2019), by which Byzantine workers collude in order to shift the mean among all gradients by a factor $\beta$ times the standard deviation of the gradient, while staying within the population variance. More precisely, the attacker uses the properties of the Normal distribution, specifically the cumulative standard normal function, to compute the maximal possible shift to the mean that can be applied without fear of detection, i.e., so that the attackers' values stay within the population variance. (See (Baruch et al., 2019, Algorithm 3) for a precise description. Our $\beta$ is $z_{\mathrm{max}}$ in their notation.) We implement this strategy coordinate-wise, the same way as they did. Their work observes that the shift $\beta$ can be non-trivial in practice, since stochastic gradients tend to have large variance in neural network training (which we also observed in our setup). Critically, the attack cannot be defended against by historyless algorithms, as the attacker's values are statistically indistinguishable from a regular execution within a single iteration.
+
+In our setting, for 10 total nodes and $\alpha = 0.4$ , $\beta$ is upper bounded by 0.3 (following the classic tables for the cumulative normal). We also ran the same attack in the setup from their paper (50 nodes total, of which 24 are Byzantine, which allows $\beta \sim 1.5$ ) and observed a similar outcome. Results for this experiment are given in Figure 3.
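
The collusion step, for a given shift $\beta$ , can be sketched coordinate-wise as follows (estimating $\beta$ from the cumulative normal is omitted; the sign of the shift is the attacker's choice, and the function name is ours):

```python
import numpy as np

def variance_attack(honest_grads, beta=0.3):
    """All Byzantine workers send the same vector: the coordinate-wise mean
    of the honest gradients, shifted by beta standard deviations
    (negative direction chosen here)."""
    G = np.asarray(honest_grads, dtype=float)   # shape (n_honest, d)
    return G.mean(axis=0) - beta * G.std(axis=0)
```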
+
+
+Figure 3: Performance comparison under the variance attack. (a) CIFAR-10; (b) CIFAR-100.
+
+As shown by the results, our algorithm provably circumvents this attack, and recovers full accuracy. This is explained by the fact that the algorithm has memory: in particular, Byzantine nodes following this strategy will progressively diverge from the (honest) "median" $A_{\mathrm{med}}$ (at a "linear" rate, recall Figure 2(a)), and therefore will eventually exceed the threshold and be marked as malicious by the algorithm.
+
+Specifically, both variants of the algorithm successfully catch all the bad nodes after at most 150 iterations. Indeed, at the 100-th iteration, the pair-wise distances $\| A_i - A_j\|$ among good workers $i,j\in$ good are between 5.3 and 6.3, but the pair-wise distance between a good and a bad worker is at least 12.5.
+
+# C.2.2 SIGN-FLIPPING ATTACK
+
+We next move on to the sign-flipping attack. Recall that, in a sign-flipping attack, each Byzantine worker sends the negated gradient to the master. This is still a strong attack: if one does not screen out the bad workers, the test accuracy suffers a significant drop. The results are in Figure 4.
+
+From the plots, one can see that again our single and double safeguard algorithms both outperform prior works. They also successfully catch all the bad workers within 150 iterations. (For instance, at iteration 150 for CIFAR-10 training, the distance $\| A_{\mathrm{med}} - A_j\|$ for a good worker $j\in$ good is at most 6.9, but for a bad worker $j \notin$ good it can be more than 11.)
+
+Figure 4: Performance comparison under the sign-flipping attack. (a) CIFAR-10; (b) CIFAR-100.
+
+In contrast, the prior work Zeno completely fails because, locally at a training step, using merely $n_r = 10$ samples to evaluate the objective, it is statistically impossible to even distinguish whether the sign of the stochastic gradient has been flipped. The prior works Krum and GeoMedian appear to achieve some non-negligible performance, but they are actually no better than simply applying SGD with the naive mean of gradients from all the workers (including those from bad workers). Therefore, we conclude that prior works all fail to be Byzantine fault tolerant under this attack.
+
+# C.2.3 DELAYED-GRADIENT ATTACK
+
+Recall that, in a delayed-gradient attack, each Byzantine worker sends an old gradient to the master. In our experiments, the delay is $D = 1000$ iterations (= 2 epochs). We believe this is not a very strong attack, because delayed gradients are not sufficiently malicious: they are still "correct" to a certain extent, albeit delayed. The results are shown in Figure 5.
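
The attacker can be sketched with a bounded buffer of past gradients (the class name is ours):

```python
from collections import deque

class DelayedGradientAttacker:
    """Byzantine worker that reports the gradient from `delay` iterations
    ago; until the buffer fills, it reports its oldest stored gradient."""
    def __init__(self, delay=1000):
        self.buffer = deque(maxlen=delay + 1)

    def report(self, current_grad):
        self.buffer.append(current_grad)
        return self.buffer[0]   # oldest gradient still in the window
```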
+
+
+Figure 5: Performance comparison under the delayed-gradient attack. (a) CIFAR-10; (b) CIFAR-100.
+
+From the plots, one can see that our single and double safe-guard algorithms again both match the ideal accuracies. All the prior works suffer from a significant performance loss under this attack.
+
+It is worth noting that our single and double safe-guard algorithms do not catch any bad worker under this attack, so they simply use the "naive mean" of gradients from all the workers (including those delayed gradients from bad workers). However, there is no performance loss even if we use those delayed gradients. That is why we believe the delayed-gradient attack is not very strong, as the gradients are not sufficiently malicious.
+
+The prior work Zeno suffers from some performance loss, because it only uses 6 workers out of 10, of which statistically only $6 \times 0.6 \approx 3{\sim}4$ gradients are correct.[11] The other prior works suffer from performance loss because they only pick one single stochastic gradient from the 10 workers, which sometimes even comes from a bad worker.
+
+# C.2.4 LABEL-FLIPPING ATTACK
+
+Recall that, in the label-flipping attack, each Byzantine worker computes its gradient based on the cross-entropy loss with flipped labels: for CIFAR-10, label $\ell \in \{0,\dots,9\}$ is flipped to $9 - \ell$ , and for CIFAR-100, label $\ell$ is flipped to $99 - \ell$ . The results are shown in Figure 6.
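
The flipping rule is simply $\ell \mapsto (K - 1) - \ell$ ; a one-line sketch:

```python
def flip_label(label, num_classes):
    """Label flipping used by Byzantine workers: l -> (K - 1) - l,
    e.g. 0 <-> 9 on CIFAR-10 and 0 <-> 99 on CIFAR-100."""
    return (num_classes - 1) - label
```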
+
+
+Figure 6: Performance comparison under the label-flipping attack. (a) CIFAR-10; (b) CIFAR-100.
+
+From the plots, one can see that our single and double safeguard algorithms even outperform the "ideal accuracies" (92.4% accuracy vs the 91.7% "ideal accuracy" on CIFAR-10; 69.4% vs 68.0% on CIFAR-100). In addition, we found that the safeguard algorithms did not catch any bad worker. This should not be surprising, since label-flipping (a.k.a. label smoothing) is known to be a regularization technique that can actually improve test accuracy, as opposed to hurting performance.
+
+Zeno also performs well under this attack (but it does not outperform the ideal accuracy). We investigated Zeno and found that it cannot distinguish good workers from bad workers under the label-flipping attack; therefore Zeno effectively always runs with 6 random workers as opposed to using the full power of the $m = 10$ workers (recall Zeno picks the 6 workers with the topmost scores, see Definition C.4). This explains its (minor) under-performance compared to safeguard.
+
+Other prior works perform significantly worse, which should be alarming since label-flipping acts as a smoothing technique that improves test accuracy, as opposed to an actual "attack" that hurts performance.
+
+# C.2.5 SAFEGUARD ATTACKS
+
+Finally, in the safeguard attack that we design, Byzantine workers send a negated but re-scaled gradient to the master. We choose the re-scale factor so that it hardly triggers the safeguard conditions at the master. In our experiments, a re-scale factor of 0.6 does not trigger the safeguard conditions in any case, while a re-scale factor of 0.7 enables the algorithm to catch Byzantine workers occasionally. Our results are shown in Figure 7 (for re-scale factor 0.6) and Figure 8 (for re-scale factor 0.7).
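
The Byzantine update under this attack can be sketched as negating and re-scaling the worker's stochastic gradient (the function name is ours):

```python
import numpy as np

def safeguard_attack(grad, rescale=0.6):
    """Byzantine workers send the negated, re-scaled gradient; rescale=0.6
    stays under the safeguard thresholds, while 0.7 is caught occasionally."""
    return -rescale * np.asarray(grad, dtype=float)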
+
+
+Figure 7: Performance comparison under the safeguard attack with re-scale factor 0.6. (a) CIFAR-10; (b) CIFAR-100. (Recall this attack is designed to maximally impact the performance of our algorithm.)
+
+Re-scale factor 0.6. In Figure 7, the performance of our (single and double) safeguard algorithms is indeed hurt a bit; recall that the re-scale factor 0.6 is chosen to maximally impact our algorithm. The test accuracy drops from $91.7\%$ to $89.3\%$ on CIFAR-10, and from $68.0\%$ to $60.0\%$ on CIFAR-100 (for both single and double safeguards). In these cases, we confirm that neither version of the safeguard algorithm caught any bad worker. However, this still significantly outperforms all prior works.
+
+
+Figure 8: Performance comparison under the safeguard attack with re-scale factor 0.7. (a) CIFAR-10; (b) CIFAR-100. (In this case, our algorithm can catch some bad workers, and thus performs nearly optimally.)
+
+Re-scale factor 0.7. In Figure 8, we present the scenario when the re-scale factor is 0.7, so that the safeguard algorithms can occasionally catch some bad workers (depending on the randomness and learning rate). We confirm that in the three runs of single safeguard, it catches 1, 2, 3 bad workers for CIFAR-10, and 1, 0, 0 bad workers for CIFAR-100 respectively; in the three runs of double safeguard, it catches 1, 2, 4 bad workers for CIFAR-10, and 2, 2, 2 bad workers for CIFAR-100 respectively.
+
+Since there is a significant performance gain when our safeguard algorithms catch bad workers, this explains why the safeguard algorithms in Figure 8 outperform their counterparts in Figure 7 with re-scale factor 0.6. At the same time, we notice that the double safeguard algorithm catches bad workers more easily. This is why double safeguard significantly outperforms single safeguard in Figure 8.
+
+In contrast, all other prior algorithms perform extremely badly under this attack. To some extent, the safeguard attack is even stronger than the previously proposed variance attack, since it can drag the 100-class test accuracy on CIFAR-100 down to nearly $1\%$ for all prior defense algorithms, while the variance attack can only drag them down to around $10\%$ .
+
+# C.3 FULL COMPARISON TABLE
+
+We also include the full test accuracy comparison table in Table 1.
+
+| attack | single safeguard | double safeguard | coord median | geo median | Krum | Krum (3 faulty nodes) | Zeno |
+|---|---|---|---|---|---|---|---|
+| variance attack | 92.02 | 91.75 | 21.43 | 22.01 | 21.47 | 22.4 | 33.42 |
+| sign-flipping attack | 91.93 | 92.08 | 22 | 59.65 | 57.03 | 70.87 | 22.4 |
+| label-flipping attack | 92.33 | 92.44 | 31.93 | 83.07 | 83.18 | 83.52 | 91.66 |
+| delayed-gradient attack | 91.58 | 91.42 | 29.43 | 74.36 | 61.81 | 79.29 | 77.27 |
+| safeguard(x0.6) attack | 89.01 | 89.26 | 21.44 | 23.12 | 12.48 | 28.66 | 74.46 |
+| safeguard(x0.7) attack | 91.24 | 92.08 | 21.61 | 19.95 | 15.17 | 24.52 | 33.36 |
+
+| attack | single safeguard | double safeguard | coord median | geo median | Krum | Krum (3 faulty nodes) | Zeno |
+|---|---|---|---|---|---|---|---|
+| variance attack | 68.27 | 67.95 | 6.6 | 5.81 | 5.05 | 6.1 | 10.87 |
+| sign-flipping attack | 68.02 | 68.08 | 2.13 | 10.19 | 10.93 | 28.34 | 2.59 |
+| label-flipping attack | 69.43 | 68.8 | 5.34 | 51.85 | 52.13 | 51.66 | 67.86 |
+| delayed-gradient attack | 67.03 | 66.42 | 4.04 | 36.34 | 31.43 | 44.85 | 36.6 |
+| safeguard(x0.6) attack | 59.87 | 60 | 2.01 | 1.9 | 1.25 | 1.72 | 5.02 |
+| safeguard(x0.7) attack | 59.84 | 64.91 | 2.07 | 1.97 | 1.32 | 1.55 | 3.31 |
+
+Table 1: Test accuracy for CIFAR-10 (above) and CIFAR-100 (below).
+
+# C.4 ATTACK AGAINST THE CONVEX ALGORITHM OF ALISTARH ET AL. (2018)
+
+We now briefly describe an attack against this algorithm. The attack specifically leverages the fact that the algorithm does not use sliding windows.
+
+One can first run vanilla SGD to compute "the maximum deviation per good worker" for the accumulation vector $\sum_{t=0}^{T} \nabla_t$ used by the algorithm. This maximum deviation is therefore a lower bound for the threshold used in their algorithm. Next, we design an attacker who evenly distributes this total allowed deviation over, e.g., 5 consecutive epochs, and behaves honestly for the remaining epochs. Such an attacker cannot be identified by this algorithm, because its total deviation across all the iterations is identical to that of a good worker. However, this attack causes the algorithm to diverge.
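
The attacker's schedule can be sketched as follows (the window length of 5 epochs and the scale $-5$ follow the text; the helper name and window placement parameter are ours):

```python
def attacked_gradient(grad, epoch, attack_start, attack_len=5, scale=-5.0):
    """Report an honest gradient outside the attack window and a
    maliciously scaled one inside it (epochs a, a+1, ..., a+4)."""
    if attack_start <= epoch < attack_start + attack_len:
        return scale * grad
    return grad
```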
+
+Specifically, suppose 4 Byzantine workers all maliciously report their stochastic gradients multiplied by the scalar $-5$ , and the remaining 6 good workers report their true stochastic gradients. One can verify numerically that this attacker can run for 5 consecutive epochs (say, epochs $a, a + 1, a + 2, a + 3, a + 4$ ) without being caught by the algorithm. Now,
+
+- if $a \leq 75$ , within just 1 epoch of attack, the neural net weights diverge (value NaN).
+- if $80 \leq a \leq 115$ , this attack is applied after the first learning rate decay. Within just 1 epoch of the attack, the objective explodes and accuracy becomes $10\%$ (random), and within 3 epochs the algorithm diverges completely.
+- if $120 \leq a \leq 155$ , this attack is after the second learning rate decay. Within just 2 epochs of attack, the accuracy drops to $11\%$ . Later, the accuracy never recovers above $40\%$ .
\ No newline at end of file
diff --git a/byzantineresilientnonconvexstochasticgradientdescent/images.zip b/byzantineresilientnonconvexstochasticgradientdescent/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6798bf36f98a53b054301ee46a8f211494e3f835
--- /dev/null
+++ b/byzantineresilientnonconvexstochasticgradientdescent/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09471922ecd4b06dbc53558b3b6aeecd0df240c744661627fc89ff7dc0e8414f
+size 1105536
diff --git a/byzantineresilientnonconvexstochasticgradientdescent/layout.json b/byzantineresilientnonconvexstochasticgradientdescent/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6c811052f0ee68cb389acda49a11d2fc0842d90b
--- /dev/null
+++ b/byzantineresilientnonconvexstochasticgradientdescent/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:916c36aadf4fec81f27252b2d401aa038c3c740c5337d05dba5438d5b4a8b29a
+size 1314111
diff --git a/calibrationofneuralnetworksusingsplines/98398fcd-90f0-416f-9126-0d39915a1126_content_list.json b/calibrationofneuralnetworksusingsplines/98398fcd-90f0-416f-9126-0d39915a1126_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..cf91a39c6e7ef76984dda4e2630f76552e597067
--- /dev/null
+++ b/calibrationofneuralnetworksusingsplines/98398fcd-90f0-416f-9126-0d39915a1126_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f2808aed798ecf87342204d5808837a874e4de6492deac6a54cc3cfc329c7b5
+size 119916
diff --git a/calibrationofneuralnetworksusingsplines/98398fcd-90f0-416f-9126-0d39915a1126_model.json b/calibrationofneuralnetworksusingsplines/98398fcd-90f0-416f-9126-0d39915a1126_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0780117e8f662f8e669a8fcc7a3c8d7d5feacf16
--- /dev/null
+++ b/calibrationofneuralnetworksusingsplines/98398fcd-90f0-416f-9126-0d39915a1126_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dfe9bef7ce031f610ca1fc2fa65c871bf9eb5913395bd18cf1631525b9b89ff1
+size 135975
diff --git a/calibrationofneuralnetworksusingsplines/98398fcd-90f0-416f-9126-0d39915a1126_origin.pdf b/calibrationofneuralnetworksusingsplines/98398fcd-90f0-416f-9126-0d39915a1126_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8c3a9d5f8af1d69ff4bad117b12252a863ccb0e8
--- /dev/null
+++ b/calibrationofneuralnetworksusingsplines/98398fcd-90f0-416f-9126-0d39915a1126_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ded95583827ba7f28a5e287e8707cdb75c3271b91ec66c515d6d048405af85bb
+size 758193
diff --git a/calibrationofneuralnetworksusingsplines/full.md b/calibrationofneuralnetworksusingsplines/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e4ec906200c166f0178d6877314abb185107794c
--- /dev/null
+++ b/calibrationofneuralnetworksusingsplines/full.md
@@ -0,0 +1,526 @@
+# CALIBRATION OF NEURAL NETWORKS USING SPLINES
+
+Kartik Gupta $^{1,2}$ , Amir Rahimi $^{1}$ , Thalaiyasingam Ajanthan $^{1}$ , Thomas Mensink $^{3}$ , Cristian Sminchisescu $^{3}$ , Richard Hartley $^{1,3}$
+
+1Australian National University, 2Data61, CSIRO, 3Google Research
+{kartik.gupta, amir.rahimi, thalaiyasingam.ajanthan}@anu.edu.au
+{mensink, sminchisescu, richardhartley}@google.com
+
+# ABSTRACT
+
+Calibrating neural networks is of utmost importance when employing them in safety-critical applications where the downstream decision making depends on the predicted probabilities. Measuring calibration error amounts to comparing two empirical distributions. In this work, we introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test in which the main idea is to compare the respective cumulative probability distributions. From this, by approximating the empirical cumulative distribution using a differentiable function via splines, we obtain a recalibration function, which maps the network outputs to actual (calibrated) class assignment probabilities. The spline-fitting is performed using a held-out calibration set and the obtained recalibration function is evaluated on an unseen test set. We tested our method against existing calibration approaches on various image classification datasets and our spline-based recalibration approach consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
+
+# 1 INTRODUCTION
+
+Despite the success of modern neural networks they are shown to be poorly calibrated (Guo et al. (2017)), which has led to a growing interest in the calibration of neural networks over the past few years (Kull et al. (2019); Kumar et al. (2019; 2018); Müller et al. (2019)). Considering classification problems, a classifier is said to be calibrated if the probability values it associates with the class labels match the true probabilities of correct class assignments. For instance, if an image classifier outputs 0.2 probability for the "horse" label for 100 test images, then out of those 100 images approximately 20 images should be classified as horse. It is important to ensure calibration when using classifiers for safety-critical applications such as medical image analysis and autonomous driving where the downstream decision making depends on the predicted probabilities.
+
+One of the important aspects of machine learning research is the measure used to evaluate the performance of a model and in the context of calibration, this amounts to measuring the difference between two empirical probability distributions. To this end, the popular metric, Expected Calibration Error (ECE) (Naeini et al. (2015)), approximates the classwise probability distributions using histograms and takes an expected difference. This histogram approximation has a weakness that the resulting calibration error depends on the binning scheme (number of bins and bin divisions). Even though the drawbacks of ECE have been pointed out and some improvements have been proposed (Kumar et al. (2019); Nixon et al. (2019)), the histogram approximation has not been eliminated. $^{1}$
+
+In this paper, we first introduce a simple, binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test (Kolmogorov (1933); Smirnov (1939)), which also provides an effective visualization of the degree of miscalibration similar to the reliability diagram (Niculescu-Mizil & Caruana (2005)). To this end, the main idea of the KS-test is to compare the respective classwise cumulative (empirical) distributions. Furthermore, by approximating the empirical cumulative distribution using a differentiable function via splines (McKinley & Levine (1998)), we
+
+obtain an analytical recalibration function2 which maps the given network outputs to the actual class assignment probabilities. Such a direct mapping was previously unavailable and the problem has been approached indirectly via learning, for example, by optimizing the (modified) cross-entropy loss (Guo et al. (2017); Mukhoti et al. (2020); Müller et al. (2019)). Similar to the existing methods (Guo et al. (2017); Kull et al. (2019)) the spline-fitting is performed using a held-out calibration set and the obtained recalibration function is evaluated on an unseen test set.
+
+We evaluated our method against existing calibration approaches on various image classification datasets and our spline-based recalibration approach consistently outperforms existing methods on KS error, ECE as well as other commonly used calibration measures. Our approach to calibration does not update the model parameters, which allows it to be applied on any trained network and it retains the original classification accuracy in all the tested cases.
+
+# 2 NOTATION AND PRELIMINARIES
+
+We abstract the network as a function $f_{\theta}:\mathcal{D}\to [0,1]^{K}$ , where $\mathcal{D}\subset \mathbb{R}^d$ , and write $f_{\theta}(\mathbf{x}) = \mathbf{z}$ . Here, $\mathbf{x}$ may be an image, or other input datum, and $\mathbf{z}$ is a vector, sometimes known as the vector of logits. In this paper, the parameters $\theta$ will not be considered, and we write simply $f$ to represent the network function. We often refer to this function as a classifier, and in theory it could be of some type other than a neural network.
+
+In a classification problem, $K$ is the number of classes to be distinguished, and we call the value $z_{k}$ (the $k$ -th component of vector $\mathbf{z}$ ) the score for the class $k$ . If the final layer of a network is a softmax layer, then the values $z_{k}$ satisfy $\sum_{k=1}^{K} z_{k} = 1$ , and $z_{k} \geq 0$ . Hence, the $z_{k}$ are pseudoprobabilities, though they do not necessarily have anything to do with real probabilities of correct class assignments. Typically, the value $y^{*} = \arg \max_{k} z_{k}$ is taken as the (top-1) prediction of the network, and the corresponding score, $\max_{k} z_{k}$ is called the confidence of the prediction. However, the term confidence does not have any mathematical meaning in this context and we deprecate its use.
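
For concreteness, the notation above can be sketched in a few lines (the logit values here are made up for illustration):

```python
import numpy as np

def softmax(v):
    """Map a logit vector to scores z_k that are nonnegative and sum to 1."""
    e = np.exp(v - np.max(v))   # subtract the max for numerical stability
    return e / e.sum()

z = softmax(np.array([2.0, 1.0, 0.1]))   # hypothetical logits for K = 3 classes
y_star = int(np.argmax(z))               # top-1 prediction
top_score = float(np.max(z))             # the so-called "confidence"
```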
+
+We assume we are given a set of training data $(\mathbf{x}_i, y_i)_{i=1}^n$ , where $\mathbf{x}_i \in \mathcal{D}$ is an input data element, which for simplicity we call an image, and $y_i \in \mathcal{K} = \{1, \dots, K\}$ is the so-called ground-truth label. Our method also uses two other sets of data, called calibration data and test data.
+
+It would be desirable if the numbers $z_{k}$ output by a network represented true probabilities. For this to make sense, we posit the existence of joint random variables $(X,Y)$ , where $X$ takes values in a domain $\mathcal{D} \subset \mathbb{R}^{d}$ , and $Y$ takes values in $\mathcal{K}$ . Further, let $Z = f(X)$ , another random variable, and $Z_{k} = f_{k}(X)$ be its $k$ -th component. Note that in this formulation $X$ and $Y$ are joint random variables, and the probability $P(Y \mid X)$ is not assumed to be 1 for single class, and 0 for the others.
+
+A network is said to be calibrated if for every class $k$ ,
+
+$$
+P (Y = k \mid Z = \mathbf {z}) = z _ {k}. \tag {1}
+$$
+
+This can be written briefly as $P(k \mid f(\mathbf{x})) = f_k(\mathbf{x}) = z_k$ . Thus, if the network takes input $\mathbf{x}$ and outputs $\mathbf{z} = f(\mathbf{x})$ , then $z_k$ represents the probability (given $f(\mathbf{x})$ ) that image $\mathbf{x}$ belongs to class $k$ .
+
+The probability $P(k \mid \mathbf{z})$ is difficult to evaluate, even empirically, and most metrics (such as ECE) use or measure a different notion called classwise calibration (Kull et al. (2019); Zadrozny & Elkan (2002)), defined as,
+
+$$
+P (Y = k \mid Z _ {k} = z _ {k}) = z _ {k}. \tag {2}
+$$
+
+This paper uses this definition (2) of calibration in the proposed KS metric.
+
+Calibration and accuracy of a network are different concepts. For instance, one may consider a classifier that simply outputs the class prior probabilities, ignoring the input $\mathbf{x}$ . If $f_{k}(\mathbf{x}) = z_{k} = P(Y = k)$ , this classifier $f$ is calibrated, but its accuracy is no better than that of a random predictor. Therefore, when calibrating a classifier, it is important not to sacrifice classification (for instance, top-1) accuracy.
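
This distinction can be checked numerically. A minimal sketch, assuming labels drawn i.i.d. from a made-up prior:

```python
import numpy as np

rng = np.random.default_rng(0)
prior = np.array([0.7, 0.2, 0.1])          # hypothetical class priors
y = rng.choice(3, size=100_000, p=prior)   # labels drawn from the prior

# A "classifier" that ignores its input and always outputs the priors.
z = np.tile(prior, (len(y), 1))

# Classwise calibration holds: the score for class k is always prior[k],
# and the empirical frequency of class k matches it.
empirical = np.array([np.mean(y == k) for k in range(3)])

# Yet top-1 accuracy is just the prior of the modal class - no better
# than always predicting the most common class.
accuracy = np.mean(y == np.argmax(z, axis=1))
```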
+
+The top- $r$ prediction. The classifier $f$ being calibrated means that $f_{k}(\mathbf{x})$ is calibrated for each class $k$ , not only for the top class. This means that scores $z_{k}$ for all classes $k$ give a meaningful estimate of the probability of the sample belonging to class $k$ . This is particularly important in medical diagnosis where one may wish to have a reliable estimate of the probability of certain unlikely diagnoses.
+
+Frequently, however, one is most interested in the probability of the top scoring class, the top-1 prediction, or in general the top- $r$ prediction. Suppose a classifier $f$ is given with values in $[0,1]^K$ and let $y$ be the ground truth label. Let us use $f^{(-r)}$ to denote the $r$ -th top score (so $f^{(-1)}$ would denote the top score; the notation follows python semantics in which $A[-1]$ represents the last element in array $A$ ). Similarly we define $\max^{(-r)}$ for the $r$ -th largest value. Let $f^{(-r)}: \mathcal{D} \to [0,1]$ be defined as
+
+$$
+f^{(-r)}(\mathbf{x}) = \max_{k}^{(-r)} f_{k}(\mathbf{x}), \quad \text{and} \quad y^{(-r)} = \begin{cases} 1 & \text{if } y = \arg\max_{k}^{(-r)} f_{k}(\mathbf{x}), \\ 0 & \text{otherwise}. \end{cases} \tag{3}
+$$
+
+In words, $y^{(-r)}$ is 1 if the $r$ -th top predicted class is the correct (ground-truth) choice. The network is calibrated for the top- $r$ predictor if for all scores $\sigma$ ,
+
+$$
+P \left(y ^ {(- r)} = 1 \mid f ^ {(- r)} (\mathbf {x}) = \sigma\right) = \sigma . \tag {4}
+$$
+
+In words, the conditional probability that the top- $r$ -th choice of the network is the correct choice, is equal to the $r$ -th top score.
+
+Similarly, one may consider probabilities that a datum belongs to one of the top- $r$ scoring classes. The classifier is calibrated for being within-the-top- $r$ classes if
+
+$$
+P \left(\sum_ {s = 1} ^ {r} y ^ {(- s)} = 1 \mid \sum_ {s = 1} ^ {r} f ^ {(- s)} (\mathbf {x}) = \sigma\right) = \sigma . \tag {5}
+$$
+
+Here, the sum on the left is 1 if the ground-truth label is among the top $r$ choices, 0 otherwise, and the sum on the right is the sum of the top $r$ scores.
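
Definitions (3) and (5) can be sketched in code as follows (the function names are ours, not the paper's):

```python
import numpy as np

def top_r_pairs(z, y, r):
    """Eq. (3): the r-th top score f^(-r)(x_i) and the 0/1 flag y^(-r)
    indicating whether the class attaining that score is the label."""
    order = np.argsort(z, axis=1)              # ascending scores per row
    kth = order[:, -r]                         # index of the r-th top class
    score = z[np.arange(len(z)), kth]
    hit = (kth == y).astype(float)
    return score, hit

def within_top_r(z, y, r):
    """Eq. (5): the sum of the top-r scores and an indicator that the
    ground-truth label is among the top-r classes."""
    top = np.argsort(z, axis=1)[:, -r:]        # indices of the top-r classes
    score = np.take_along_axis(z, top, axis=1).sum(axis=1)
    hit = (top == y[:, None]).any(axis=1).astype(float)
    return score, hit
```

For instance, for `z = [[0.5, 0.3, 0.2]]` and `y = [1]`, the top-2 score is 0.3 with flag 1, and the within-top-2 sum is 0.8 with flag 1.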
+
+# 3 KOLMOGOROV-SMIRNOV CALIBRATION ERROR
+
+We now consider a way to measure whether a classifier is classwise calibrated, including top- $r$ and within-top- $r$ calibration. This test is closely related to the Kolmogorov-Smirnov test (Kolmogorov (1933); Smirnov (1939)) for the equality of two probability distributions, which may be applied when the distributions are represented by samples.
+
+We start with the definition of classwise calibration:
+
+$$
+P (Y = k \mid f _ {k} (X) = z _ {k}) = z _ {k}. \tag {6}
+$$
+
+$$
+P(Y = k,\ f_k(X) = z_k) = z_k\, P\left(f_k(X) = z_k\right), \quad \text{Bayes' rule}.
+$$
+
+This may be written more simply but with a less precise notation as
+
+$$
+P (z _ {k}, k) = z _ {k} P (z _ {k}).
+$$
+
+Motivation of the KS test. One wishes to test the equality of (or the difference between) two distributions defined on the interval [0, 1]. However, instead of having a functional form of these distributions, one has only samples from them. Given samples $(\mathbf{x}_i, y_i)$ , it is not straightforward to estimate $P(z_k)$ or $P(z_k \mid k)$ , because the sample set is finite and a given value $z_k$ is likely to occur at most once. One possibility is to use histograms of these distributions. However, this requires selection of the bin size and the division between bins, and the result depends on these parameters. For this reason, we believe this is an inadequate solution.
+
+The approach suggested by the Kolmogorov-Smirnov test is to compare the cumulative distributions. Thus, with $k$ given, one tests the equality
+
+$$
+\int_ {0} ^ {\sigma} P \left(z _ {k}, k\right) d z _ {k} = \int_ {0} ^ {\sigma} z _ {k} P \left(z _ {k}\right) d z _ {k}. \tag {7}
+$$
+
+Writing $\phi_1(\sigma)$ and $\phi_2(\sigma)$ for the two sides of this equation, the KS-distance between these two distributions is defined as $\mathrm{KS} = \max_{\sigma}|\phi_1(\sigma) - \phi_2(\sigma)|$ . The fact that simply the maximum is used here may suggest a lack of robustness, but this is the maximum difference between two integrals, so it reflects an accumulated difference between the two distributions.
+
+To provide more insight into the KS-distance, consider the case where $z_{k}$ consistently over- or under-estimates $P(k\mid z_k)$ (which is usually the case, at least for top-1 classification (Guo et al. (2017))); then $P(k\mid z_k) - z_k$ has constant sign for all values of $z_{k}$ . It follows that $P(z_{k},k) - z_{k}P(z_{k})$ has constant sign, and so the maximum value in the KS-distance is achieved at $\sigma = 1$ . In this case,
+
+$$
+\mathrm {K S} = \int_ {0} ^ {1} | P (z _ {k}, k) - z _ {k} P (z _ {k}) | d z _ {k} = \int_ {0} ^ {1} | P (k | z _ {k}) - z _ {k} | P (z _ {k}) d z _ {k}, \tag {8}
+$$
+
+which is the expected difference between $z_{k}$ and $P(k\mid z_k)$ . This can be equivalently referred to as the expected calibration error for the class $k$ .
+
+Sampled distributions. Given samples $(\mathbf{x}_i, y_i)_{i=1}^N$ , and a fixed $k$ , one can estimate these cumulative distributions by
+
+$$
+\int_ {0} ^ {\sigma} P \left(z _ {k}, k\right) d z _ {k} \approx \frac {1}{N} \sum_ {i = 1} ^ {N} \mathbf {1} \left(f _ {k} \left(\mathbf {x} _ {i}\right) \leq \sigma\right) \times \mathbf {1} \left(y _ {i} = k\right), \tag {9}
+$$
+
+where $\mathbf{1}:\mathcal{B}\to \{0,1\}$ is the function that returns 1 if the Boolean expression is true and otherwise 0. Thus, the sum is simply a count of the number of samples for which $y_{i} = k$ and $f_{k}(\mathbf{x}_{i})\leq \sigma$ , and so the integral represents the proportion of the data satisfying this condition. Similarly,
+
+$$
+\int_ {0} ^ {\sigma} z _ {k} P \left(z _ {k}\right) d z _ {k} \approx \frac {1}{N} \sum_ {i = 1} ^ {N} \mathbf {1} \left(f _ {k} \left(\mathbf {x} _ {i}\right) \leq \sigma\right) f _ {k} \left(\mathbf {x} _ {i}\right). \tag {10}
+$$
+
+These sums can be computed quickly by sorting the data according to the values $f_{k}(\mathbf{x}_{i})$ , then defining two sequences as follows.
+
+$$
+\tilde{h}_0 = h_0 = 0, \qquad h_i = h_{i-1} + \mathbf{1}\left(y_i = k\right)/N, \qquad \tilde{h}_i = \tilde{h}_{i-1} + f_k(\mathbf{x}_i)/N. \tag{11}
+$$
+
+The two sequences should be the same, and the metric
+
+$$
+\mathrm {K S} \left(f _ {k}\right) = \max _ {i} \left| h _ {i} - \tilde {h} _ {i} \right|, \tag {12}
+$$
+
+gives a numerical estimate of the similarity, and hence a measure of the degree of calibration of $f_{k}$ . This is essentially a version of the Kolmogorov-Smirnov test for equality of two distributions.
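
Equations (11) and (12) reduce to a few lines of code; a sketch (variable names are ours):

```python
import numpy as np

def ks_error(scores, correct):
    """KS calibration error of eqs. (11)-(12): the maximum gap between
    the cumulative fraction of correct predictions (h_i) and the
    cumulative mean score (h~_i), with samples sorted by score."""
    order = np.argsort(scores)
    n = len(scores)
    h = np.cumsum(correct[order]) / n        # h_i
    h_tilde = np.cumsum(scores[order]) / n   # h~_i
    return float(np.max(np.abs(h - h_tilde)))
```

On synthetic data where correctness is Bernoulli in the score, the error is near zero; for overconfident scores (true probability below the score) it grows towards the accumulated gap.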
+
+Remark. All this discussion also holds for the top- $r$ and within-top- $r$ predictions discussed in section 2, where the index $k$ is allowed to be negative. In (11), for instance, $f^{(-1)}(\mathbf{x}_i)$ means the top score, $f^{(-1)}(\mathbf{x}_i) = \max_k f_k(\mathbf{x}_i)$ , and more generally, $f^{(-r)}(\mathbf{x}_i)$ means the $r$ -th top score. Similarly, the condition $y_i = k$ becomes the condition that $y_i$ is the class with the $r$ -th top score. Note that when calibrating the top-1 score, our method is applied after identifying the top-1 score; hence, it does not alter the classification accuracy.
+
+# 4 RECALIBRATION USING SPLINES
+
+The sequence $h_i$ defined in (11) gives the empirical approximation
+
+$$
+h _ {i} \approx P (Y = k, f _ {k} (X) \leq f _ {k} (\mathbf {x} _ {i})) . \tag {13}
+$$
+
+For convenience, the value of $f_{k}$ will be referred to as the score. We now define a continuous function $h(t)$ for $t \in [0,1]$ by
+
+$$
+h (t) = P \left(Y = k, f _ {k} (X) \leq s (t)\right), \tag {14}
+$$
+
+where $s(t)$ is the $t$ -th fractile score, namely the value that a proportion $t$ of the scores $f_{k}(X)$ lie below. For instance $s(0.5)$ is the median score. So, $h_{i}$ is an empirical approximation to $h(t)$ where $t = i / N$ . We now provide the basic observation that allows us to compute probabilities given the scores.
+
+
+Figure 1: Calibration graphs for an uncalibrated DenseNet-40 (Huang et al. (2017)) trained on CIFAR-10, for the top-1 class, with a KS error of $5.5\%$ and top-1 accuracy of $92.4\%$ on the test set. Here $(a)$ shows the plot of cumulative score and probability versus the fractile of the test set; $(b)$ shows the same information with the horizontal axis warped so that the cumulative-score graph is a straight line. This is created as scatter plots of cumulative (score, score): blue, and (score, probability): orange. If the network is perfectly calibrated, the probability line will be a straight line coincident with the (score, score) line. This shows that the network's scores substantially overestimate the probability of correct classification. $(c)$ and $(d)$ show plots of (non-cumulative) score and probability plotted against fractile, or score. How these plots are produced is described in section 4.
+
+Proposition 4.1. If $h(t) = P(Y = k, f_k(X) \leq s(t))$ as in (14) where $s(t)$ is the t-th fractile score, then $h'(t) = P(Y = k \mid f_k(X) = s(t))$ , where $h'(t) = dh / dt$ .
+
+Proof. The proof relies on the equality $P(f_{k}(X) \leq s(t)) = t$ . In words, since $s(t)$ is the value that a fraction $t$ of the scores are less than or equal, the probability that a score is less than or equal to $s(t)$ , is (obviously) equal to $t$ . See the supplementary material for a detailed proof.
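
The key step can be sketched as follows (full details are in the supplementary). Since $P(f_k(X) \leq s(t)) = t$ , the fractile of the score is uniformly distributed on $[0,1]$ , and conditioning on it gives

$$
h(t) = P\left(Y = k,\ f_k(X) \leq s(t)\right) = \int_0^t P\left(Y = k \mid f_k(X) = s(\tau)\right) d\tau ,
$$

so differentiating with respect to $t$ yields $h'(t) = P(Y = k \mid f_k(X) = s(t))$ .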
+
+Notice that $h^\prime (t)$ allows direct conversion from a score to a probability. Our idea, therefore, is to approximate $h_i$ using a differentiable function and take its derivative, which serves as our recalibration function.
+
+# 4.1 SPLINE FITTING
+
+The sequence $h_i$ (shown in fig 1a) is obtained through sampling only. Nevertheless, the sampled graph is smooth and increasing. There are various ways to fit a smooth curve to it so as to take derivatives. We choose to fit the sampled points $h_i$ with a cubic spline and take its derivative.
+
+Given sample points $(u_i, v_i)_{i=1}^N$ in $\mathbb{R} \times \mathbb{R}$ , easily available references show how to fit a smooth spline curve that passes directly through the points $(u_i, v_i)$ . A very clear description is given in McKinley & Levine (1998), for the case where the points $u_i$ are equally spaced. We wish, however, to fit a spline curve with a small number of knot points to do a least-squares fit to the points. For convenience, this is briefly described here.
+
+A cubic spline $v(u)$ is defined by its values at certain knot points $(\hat{u}_k, \hat{v}_k)_{k=1}^K$ . In fact, the value of the curve at any point $u$ can be written as a linear function $v(u) = \sum_{k=1}^{K} a_k(u) \hat{v}_k = \mathbf{a}^\top(u) \hat{\mathbf{v}}$ , where the coefficients $a_k$ depend on $u$ . Therefore, given a set of further points $(u_i, v_i)_{i=1}^N$ , which may be different from the knot points, and typically more in number, least-squares spline fitting of the points $(u_i, v_i)$ can be written as a least-squares problem $\min_{\hat{\mathbf{v}}} \| \mathsf{A}(\mathbf{u}) \hat{\mathbf{v}} - \mathbf{v} \|^2$ , which is solved by standard linear least-squares techniques. Here, the matrix $\mathsf{A}$ has dimension $N \times K$ with $N > K$ . Once $\hat{\mathbf{v}}$ is found, the value of the spline at any further points $u$ is equal to $v(u) = \mathbf{a}(u)^\top \hat{\mathbf{v}}$ , a linear combination of the knot-point values $\hat{v}_k$ .
+
+Since the function is piecewise cubic, with continuous second derivatives, the first derivative of the spline is computed analytically. Furthermore, the derivative $v'(u)$ can also be written as a linear combination $v'(u) = \mathbf{a}'(u)^{\top} \hat{\mathbf{v}}$ , where the coefficients $\mathbf{a}'(u)$ can be written explicitly.
+
+Our goal is to fit a spline to a set of data points $(u_{i},v_{i}) = (i / N,h_{i})$ defined in (11), in other words, the values $h_i$ plotted against fractile score. Then according to Proposition 4.1, the derivative of the spline is equal to $P(k\mid f_k(X) = s(t))$ . This allows a direct computation of the conditional probability that the sample belongs to class $k$ .
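
A least-squares spline fit and its analytic derivative can be sketched with SciPy's `LSQUnivariateSpline`, a stand-in for the explicit basis construction above; the synthetic data and the knot placement here are our assumptions, not the paper's:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Synthetic stand-in for the sampled curve: fractiles t_i = i/N against a
# smooth increasing sequence (here h(t) ~ t^2/2, so h'(t) ~ t).
N = 5000
t = np.arange(1, N + 1) / N
h = np.cumsum(np.sort(np.random.default_rng(0).uniform(size=N))) / N

# Least-squares cubic spline with a small number of interior knots
# (the paper uses 6 knots; this placement is an assumption).
knots = np.linspace(0.15, 0.85, 4)
spline = LSQUnivariateSpline(t, h, knots, k=3)

# The derivative is available analytically; by Proposition 4.1 it
# approximates P(Y = k | f_k(X) = s(t)).
h_prime = spline.derivative()
probs = h_prime(t)
```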
+
+
+Figure 2: The result of the spline calibration method on the example given in fig 1 for top-1 calibration. A recalibration function $\gamma : \mathbb{R} \to \mathbb{R}$ is used to adjust the scores, replacing $f_{k}(\mathbf{x})$ with $\gamma(f_{k}(\mathbf{x}))$ (see section 4.2). As is seen, the network is now almost perfectly calibrated when tested on the "calibration" set (top row) used to calibrate it. In the bottom row, the recalibration function is tested on a further, unseen "test" set. The result is not perfect, but much better than the one in fig 1d. It is also notable that the improvement in calibration is achieved without any loss of accuracy.
+
+Since the derivative of $h$ is a probability, one might constrain the derivative to lie in the range [0, 1] while fitting the spline. This is easily incorporated because the derivative of the spline is a linear expression in $\hat{\mathbf{v}}$ . The spline fitting problem thereby becomes a linearly-constrained quadratic program (QP). However, although we tested this, in all the reported experiments a simple least-squares solver is used without the constraints.
+
+# 4.2 RECALIBRATION
+
+We suppose that the classifier $f = f_{\theta}$ is fixed, through training on the training set. Typically, if the classifier is tested on the training set, it is very close to being calibrated. However, if a classifier $f$ is then tested on a different set of data, it may be substantially mis-calibrated. See fig 1.
+
+Our method of calibration is to find a further mapping $\gamma : [0,1] \to [0,1]$ , such that $\gamma \circ f_k$ is calibrated. This is easily obtained from the direct mapping from score $f_k(\mathbf{x})$ to $P(k \mid f_k(\mathbf{x}))$ (refer to fig 1d). In equations, $\gamma(\sigma) = h'(s^{-1}(\sigma))$ . The function $h'$ is known analytically, from fitting a spline to $h(t)$ and taking its derivative. The function $s^{-1}$ is a mapping from the given score $\sigma$ to its fractile $s^{-1}(\sigma)$ . Note that a held-out calibration set is used to fit the spline, and the obtained recalibration function $\gamma$ is evaluated on an unseen test set.
+
+In practice, given a sample $\mathbf{x}$ from the test set with $f_{k}(\mathbf{x}) = \sigma$ , one can compute $h^{\prime}(s^{-1}(\sigma))$ directly in one step by interpolating its value between $h^{\prime}(f_k(\mathbf{x}_i))$ and $h^\prime (f_k(\mathbf{x}_{i + 1}))$ , where $\mathbf{x}_i$ and $\mathbf{x}_{i + 1}$ are the two samples from the calibration set with the closest scores on either side of $\sigma$ . Assuming the samples in the calibration set are ordered, the samples $\mathbf{x}_i$ and $\mathbf{x}_{i + 1}$ can be located quickly using binary search. Given a reasonable number of samples in the calibration set (usually in the order of thousands), this can be very accurate. In our experiments, improvement in calibration is observed on the test set with no difference to the accuracy of the network (refer to fig 2d). In practice, spline fitting is much faster than one forward pass through the network and it is highly scalable compared to learning-based calibration methods.
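
Putting sections 4.1 and 4.2 together, a sketch of the recalibration map $\gamma$ , using SciPy's least-squares spline and `np.interp` for the interpolation step (the function name and the knot placement are our assumptions):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_recalibration(cal_scores, cal_correct, n_knots=6):
    """Build gamma: score -> calibrated probability from a held-out
    calibration set, following eqs. (11), (13) and Proposition 4.1."""
    order = np.argsort(cal_scores)
    s = cal_scores[order]                     # sorted scores, i.e. s(t_i)
    n = len(s)
    t = np.arange(1, n + 1) / n               # fractiles t_i = i/N
    h = np.cumsum(cal_correct[order]) / n     # h_i of eq. (11)

    # Least-squares cubic spline fit of h(t); interior knots assumed evenly spaced.
    knots = np.linspace(t[1], t[-2], n_knots)[1:-1]
    h_prime = LSQUnivariateSpline(t, h, knots, k=3).derivative()
    gamma_at_cal = np.clip(h_prime(t), 0.0, 1.0)   # clamp to valid probabilities

    def gamma(sigma):
        # Interpolate between the calibration scores bracketing sigma
        # (np.interp plays the role of the binary-search step).
        return np.interp(sigma, s, gamma_at_cal)
    return gamma
```

On synthetic overconfident scores where the true probability of correctness is the score squared, the learned $\gamma(\sigma)$ comes out close to $\sigma^2$.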
+
+# 5 RELATED WORK
+
+Modern calibration methods. In recent years, neural networks have been shown to overfit to the Negative Log-Likelihood (NLL) loss and in turn produce overconfident predictions, which is cited as the main reason for miscalibration (Guo et al. (2017)). Modern calibration methods can be broadly categorized into 1) methods that adapt the training procedure of the classifier, and 2) methods that learn a recalibration function post training. Among the former, the main idea is to increase the entropy of the classifier to avoid overconfident predictions, which is accomplished via modifying the training loss (Kumar et al. (2018); Mukhoti et al. (2020); Seo et al. (2019)), label smoothing (Müller et al. (2019); Pereyra et al. (2017)), and data augmentation techniques (Thulasidasan et al. (2019); Yun et al. (2019); Zhang et al. (2018)).
+
+On the other hand, we are interested in calibrating an already trained classifier, which eliminates the need for training from scratch. In this regard, a popular approach is Platt scaling (Platt et al. (1999)), which transforms the outputs of a binary classifier into probabilities by fitting a scaled logistic function on a held-out calibration set. Similar approaches for binary classifiers include Isotonic Regression (Zadrozny & Elkan (2001)), histogram and Bayesian binning (Naeini et al. (2015); Zadrozny & Elkan (2001)), and Beta calibration (Kull et al. (2017)), which were later extended to the multiclass setting (Guo et al. (2017); Kull et al. (2019); Zadrozny & Elkan (2002)). Among these, the most popular method is temperature scaling (Guo et al. (2017)), which learns a single scalar on a held-out set to calibrate the network predictions. Despite being simple and one of the early works, temperature scaling remains the method to beat for calibrating modern networks. Our approach falls into this category; however, as opposed to minimizing a loss function, we obtain a recalibration function via spline-fitting, which directly maps the classifier outputs to the calibrated probabilities.
+
+Calibration measures. Expected Calibration Error (ECE) (Naeini et al. (2015)) is the most popular measure in the literature; however, it has the weakness that the resulting calibration error depends on the histogram binning scheme, such as the bin endpoints and the number of bins. Even though some improvements have been proposed (Nixon et al. (2019); Vaicenavicius et al. (2019)), the binning scheme has not been eliminated, and it has recently been shown that any binning scheme leads to underestimated calibration errors (Kumar et al. (2019); Widmann et al. (2019)). Note that binning-free metrics exist, such as the Brier score (Brier (1950)), NLL, and kernel-based metrics for the multiclass setting (Kumar et al. (2018); Widmann et al. (2019)). Nevertheless, the Brier score and NLL measure a combination of calibration error and classification error, not just calibration, which is our focus. Kernel-based metrics, besides being computationally expensive, measure the calibration of the predicted probability vector rather than the classwise (or top- $r$ ) calibration error (Kull et al. (2019)), which is typically the quantity of interest. To this end, we introduce a binning-free calibration measure based on the classical KS-test, which has the same benefits as ECE and provides effective visualizations similar to reliability diagrams. Furthermore, the KS error can be shown to be a special case of kernel-based measures (Gretton et al. (2012)).
+
+# 6 EXPERIMENTS
+
+Experimental setup. We evaluate our proposed calibration method on four image-classification datasets, namely CIFAR-10/100 (Krizhevsky et al. (2009)), SVHN (Netzer et al. (2011)) and ImageNet (Deng et al. (2009)), using LeNet (LeCun et al. (1998)), ResNet (He et al. (2016)), ResNet with stochastic depth (Huang et al. (2017)), Wide ResNet (Zagoruyko & Komodakis (2016)) and DenseNet (Huang et al. (2017)) network architectures, against state-of-the-art methods that calibrate post training. We use the pretrained network logits for spline fitting, with the validation set used as the calibration set, following standard practice. Our final calibration results are then reported on the test set of each dataset. Since ImageNet does not have a validation set, its test set is divided into two halves: a calibration set and a test set. We use the natural cubic spline fitting method (that is, cubic splines with linear run-out) with 6 knots for all our experiments. Further experimental details are provided in the supplementary. For the baseline methods, namely Temperature scaling, Vector scaling, Matrix scaling with ODIR (Off-Diagonal and Intercept Regularisation), and Dirichlet calibration, we use the implementation of Kull et al. (2019).
+
+| Dataset | Model | Uncalibrated | Temp. Scaling | Vector Scaling | MS-ODIR | Dir-ODIR | Ours (Spline) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | Resnet-110 | 4.750 | 0.916 | 0.996 | 0.977 | 1.060 | 0.643 |
| | Resnet-110-SD | 4.102 | 0.362 | 0.430 | 0.358 | 0.389 | 0.269 |
| | DenseNet-40 | 5.493 | 0.900 | 0.890 | 0.897 | 1.057 | 0.773 |
| | Wide Resnet-32 | 4.475 | 0.296 | 0.267 | 0.305 | 0.291 | 0.367 |
| | Lenet-5 | 5.038 | 0.799 | 0.839 | 0.646 | 0.854 | 0.348 |
| CIFAR-100 | Resnet-110 | 18.481 | 1.489 | 1.827 | 2.845 | 2.575 | 0.575 |
| | Resnet-110-SD | 15.832 | 0.748 | 1.303 | 3.572 | 1.645 | 1.028 |
| | DenseNet-40 | 21.156 | 0.304 | 0.483 | 2.350 | 0.618 | 0.454 |
| | Wide Resnet-32 | 18.784 | 1.130 | 1.642 | 2.524 | 1.788 | 0.930 |
| | Lenet-5 | 12.117 | 1.215 | 0.768 | 1.047 | 2.125 | 0.391 |
| ImageNet | Densenet-161 | 5.721 | 0.744 | 2.014 | 4.723 | 3.103 | 0.406 |
| | Resnet-152 | 6.544 | 0.791 | 1.985 | 5.805 | 3.528 | 0.441 |
| SVHN | Resnet-152-SD | 0.852 | 0.552 | 0.570 | 0.573 | 0.607 | 0.556 |
+
+Table 1: KS Error (in %) for top-1 prediction (with lowest in bold and second lowest underlined) on various image classification datasets and models with different calibration methods. Note, our method consistently reduces calibration error to $< 1\%$ in almost all experiments, outperforming state-of-the-art methods.
+
+| Dataset | Model | Uncalibrated | Temp. Scaling | Vector Scaling | MS-ODIR | Dir-ODIR | Ours (Spline) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | Resnet-110 | 3.011 | 0.947 | 0.948 | 0.598 | 0.953 | 0.347 |
| | Resnet-110-SD | 2.716 | 0.478 | 0.486 | 0.401 | 0.500 | 0.310 |
| | DenseNet-40 | 3.342 | 0.535 | 0.543 | 0.598 | 0.696 | 0.695 |
| | Wide Resnet-32 | 2.669 | 0.426 | 0.369 | 0.412 | 0.382 | 0.364 |
| | Lenet-5 | 1.708 | 0.367 | 0.279 | 0.409 | 0.426 | 0.837 |
| CIFAR-100 | Resnet-110 | 4.731 | 1.401 | 1.436 | 0.961 | 1.269 | 0.371 |
| | Resnet-110-SD | 3.923 | 0.315 | 0.481 | 0.772 | 0.506 | 0.595 |
| | DenseNet-40 | 5.803 | 0.305 | 0.653 | 0.219 | 0.135 | 0.903 |
| | Wide Resnet-32 | 5.349 | 0.790 | 1.095 | 0.646 | 0.845 | 0.372 |
| | Lenet-5 | 2.615 | 0.571 | 0.439 | 0.324 | 0.799 | 0.587 |
| ImageNet | Densenet-161 | 1.689 | 1.044 | 1.166 | 1.288 | 1.321 | 0.178 |
| | Resnet-152 | 1.793 | 1.151 | 1.264 | 1.660 | 1.430 | 0.580 |
| SVHN | Resnet-152-SD | 0.373 | 0.226 | 0.216 | 0.973 | 0.218 | 0.492 |
+
+Table 2: KS Error (in %) for top-2 prediction (with lowest in bold and second lowest underlined) on various image classification datasets and models with different calibration methods. Again, our method consistently reduces the calibration error to $< 1\%$ (less than $0.7\%$ , except for one case) in all experiments, and is the only one of the methods to achieve this.
+
+Results. We compare our method, using the proposed KS error for the top-most prediction, against state-of-the-art calibration methods, namely temperature scaling (Guo et al. (2017)), vector scaling, MS-ODIR, and Dirichlet Calibration (Dir-ODIR) (Kull et al. (2019)), in Table 1. Our method reduces the calibration error to $< 1\%$ in almost all experiments performed on different datasets, without any loss in accuracy. This clearly reflects the efficacy of our method irrespective of the scale of the dataset and the depth of the network architecture. It consistently performs better than the recently introduced Dirichlet calibration and Matrix scaling with ODIR (Kull et al. (2019)) in all the experiments. Note that this is consistent with the top-1 calibration results reported in Table 15 of Kull et al. (2019). The closest competitor to our method is temperature scaling, against which our method performs better in 9 out of 13 experiments. In the cases where temperature scaling outperforms our method, the gap in KS error between the two methods is marginal $(< 0.3\%)$ and our method is the second best. We provide comparisons using other calibration metrics in the supplementary.
+
+From a practical point of view, it is also important for a network to be calibrated for the second, third, (and so on) top predictions. We thus show comparisons of top-2 prediction KS error in Table 2. An observation similar to that for Table 1 can be made for the top-2 predictions as well: our method achieves $< 1\%$ calibration error in all the experiments. It performs consistently well, especially in the experiments on the large-scale ImageNet dataset, where it sets a new state-of-the-art for calibration. We would like to emphasize that, though in some cases Dirichlet calibration (Kull et al. (2019)) and Vector Scaling perform better than our method in terms of top-2 KS calibration error, overall (considering both top-1 and top-2 predictions) our method performs better.
+
+# 7 CONCLUSION
+
+In this work, we have introduced a binning-free calibration metric based on the Kolmogorov-Smirnov test to measure classwise or (within)-top- $r$ calibration errors. Our KS error eliminates the shortcomings of the popular ECE measure and its variants while accurately measuring the expected calibration error and provides effective visualizations similar to reliability diagrams. Furthermore, we introduced a simple and effective calibration method based on spline-fitting which does not involve any learning and yet consistently yields the lowest calibration error in the majority of our experiments. We believe, the KS metric would be of wide-spread use to measure classwise calibration and our spline method would inspire learning-free approaches to neural network calibration. We intend to focus on calibration beyond classification problems as future work.
+
+# 8 ACKNOWLEDGEMENTS
+
+The work is supported by the Australian Research Council Centre of Excellence for Robotic Vision (project number CE140100016). We would also like to thank Google Research and Data61, CSIRO for their support.
+
+# REFERENCES
+
+Glenn W Brier. Verification of forecasts expressed in terms of probability. Monthly weather review, 78(1):1-3, 1950.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.
+Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Scholkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 2012.
+Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1321-1330. JMLR.org, 2017.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700-4708, 2017.
+A Kolmogorov. Sulla determinazione empirica di una legge di distribuzione. 1933.
+Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009.
+Meelis Kull, Telmo Silva Filho, and Peter Flach. Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers. In Artificial Intelligence and Statistics, pp. 623-631, 2017.
+Meelis Kull, Miquel Perello Nieto, Markus Kangsepp, Telmo Silva Filho, Hao Song, and Peter Flach. Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with dirichlet calibration. In Advances in Neural Information Processing Systems, pp. 12295-12305, 2019.
+Ananya Kumar, Percy S Liang, and Tengyu Ma. Verified uncertainty calibration. In Advances in Neural Information Processing Systems, pp. 3787-3798, 2019.
+Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. Trainable calibration measures for neural networks from kernel mean embeddings. In International Conference on Machine Learning, pp. 2805-2814, 2018.
+
+Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
+Sky McKinley and Megan Levine. Cubic spline interpolation. College of the Redwoods, 1998.
+Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip HS Torr, and Puneet K Dokania. Calibrating deep neural networks using focal loss. arXiv preprint arXiv:2002.09437, 2020.
+Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. When does label smoothing help? In Advances in Neural Information Processing Systems, pp. 4696-4705, 2019.
+Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
+Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.
+Alexandru Niculescu-Mizil and Rich Caruana. Predicting good probabilities with supervised learning. In Proceedings of the 22nd international conference on Machine learning, 2005.
+Jeremy Nixon, Michael W Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. Measuring calibration in deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 38-41, 2019.
+Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017.
+John Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61-74, 1999.
+Seonguk Seo, Paul Hongsuck Seo, and Bohyung Han. Learning for single-shot confidence calibration in deep neural networks through stochastic inferences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9030-9038, 2019.
+Nikolai Smirnov. On the estimation of the discrepancy between empirical curves of distribution for two independent samples. 1939.
+Sunil Thulasidasan, Gopinath Chennupati, Jeff A Bilmes, Tanmoy Bhattacharya, and Sarah Michalak. On mixup training: Improved calibration and predictive uncertainty for deep neural networks. In Advances in Neural Information Processing Systems, pp. 13888-13899, 2019.
+Juozas Vaicenavicius, David Widmann, Carl Andersson, Fredrik Lindsten, Jacob Roll, and Thomas B Schön. Evaluating model calibration in classification. AISTATS, 2019.
+David Widmann, Fredrik Lindsten, and Dave Zachariah. Calibration tests in multi-class classification: A unifying framework. In Advances in Neural Information Processing Systems, pp. 12236-12246, 2019.
+Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6023-6032, 2019.
+Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers. In ICML, volume 1, pp. 609-616, 2001.
+Bianca Zadrozny and Charles Elkan. Transforming classifier scores into accurate multiclass probability estimates. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 694-699, 2002.
+Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
+
+Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In 6th International Conference on Learning Representations, ICLR 2018, 2018.
+
+Jize Zhang, Bhavya Kailkhura, and T Han. Mix-n-match: Ensemble and compositional methods for uncertainty calibration in deep learning. ICML, 2020.
+
+# Appendices
+
+Here, we first provide the proof of our main result, discuss more about top- $r$ calibration and spline-fitting, and then turn to additional experiments.
+
+# A PROOF OF PROPOSITION 4.1
+
+We first restate our proposition below.
+
+Proposition A.2. Let $h(t) = P(Y = k, f_k(X) \leq s(t))$ as in (14) of the main paper, where $s(t)$ is the $t$ -th fractile score. Then $h'(t) = P(Y = k \mid f_k(X) = s(t))$ , where $h'(t) = dh / dt$ .
+
+Proof. The proof uses the fundamental relationship between the probability density function (PDF) and the cumulative distribution function (CDF); it is provided here for completeness. Taking derivatives, we see (writing $P(k)$ instead of $P(Y = k)$ ):
+
+$$
+\begin{aligned} h'(t) &= P\left(k, f_k(X) = s(t)\right) \cdot s'(t) \\ &= P\left(k \mid f_k(X) = s(t)\right) \cdot P\left(f_k(X) = s(t)\right) \cdot s'(t) \\ &= P\left(k \mid f_k(X) = s(t)\right) \cdot \frac{d}{dt}\left(P\left(f_k(X) \leq s(t)\right)\right) \tag{15} \\ &= P\left(k \mid f_k(X) = s(t)\right) \cdot \frac{d}{dt}(t) \\ &= P\left(k \mid f_k(X) = s(t)\right). \end{aligned}
+$$
+
+The proof relies on the equality $P(f_{k}(X) \leq s(t)) = t$ . In words, $s(t)$ is the value that a fraction $t$ of the scores are less than or equal to. The equality then says that the probability of a score being less than or equal to the value that a fraction $t$ of the scores lie below is (by definition) equal to $t$ .
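As a quick numerical sanity check of this identity (our own illustrative simulation, not part of the original derivation), one can sample a perfectly calibrated score, build the empirical $h$, and compare a finite-difference estimate of $h'(t)$ with $s(t)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Hypothetical perfectly calibrated scores: f_k(X) ~ U(0, 1) with
# P(Y = k | f_k(X) = s) = s, so the proposition predicts h'(t) = s(t).
scores = rng.uniform(size=N)
labels = rng.uniform(size=N) < scores          # indicator 1(Y = k)

order = np.argsort(scores)
s = scores[order]                              # s(t) at fractiles t = i/N
h = np.cumsum(labels[order]) / N               # empirical h(t)

# Central finite-difference estimate of h'(t) at t = 0.5.
i, w = N // 2, N // 20
h_prime = (h[i + w] - h[i - w]) / (2 * w / N)
gap = abs(h_prime - s[i])                      # should be small
```

With this construction, `h_prime` concentrates around $s(0.5) \approx 0.5$, matching the proposition.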
+
+# B MORE ON TOP- $r$ AND WITHIN-TOP- $r$ CALIBRATION
+
+In the main paper, definitions of top- $r$ and within-top- $r$ calibration are given in equations (4) and (5). Here, a few more details are given of how to calibrate the classifier $f$ for top- $r$ and within-top- $r$ calibration.
+
+The method of calibration using splines described in this paper consists of fitting a spline to the cumulative accuracy, defined as $h_i$ in equation (11) of the main paper. For top- $r$ classification, the method is much the same as for calibration of a single class $k$ . Equation (11) is replaced by sorting the data according to the $r$ -th top score and then defining
+
+$$
+\tilde {h} _ {0} = h _ {0} = 0,
+$$
+
+$$
+h _ {i} = h _ {i - 1} + \mathbf {1} \left(y ^ {(- r)} = 1\right) / N, \tag {16}
+$$
+
+$$
+\tilde {h} _ {i} = \tilde {h} _ {i - 1} + f ^ {(- r)} \left(\mathbf {x} _ {i}\right) / N,
+$$
+
+where $y^{(-r)}$ and $f^{(-r)}(\mathbf{x}_i)$ are defined in the main paper, equation (3). These sequences may then be used both as a metric for the correct top- $r$ calibration and for calibration using spline-fitting as described.
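A minimal numpy sketch of how the sequences in equation (16) might be computed, assuming an $(N, K)$ matrix of softmax scores and integer labels (the variable names are ours):

```python
import numpy as np

def top_r_cumulative(scores, labels, r):
    """Build the sequences h_i and h~_i of equation (16) for top-r
    calibration, from an (N, K) score matrix and integer labels."""
    N = len(labels)
    top_r_idx = np.argsort(scores, axis=1)[:, -r]   # index of the r-th top score
    f_r = scores[np.arange(N), top_r_idx]           # f^(-r)(x_i)
    y_r = (labels == top_r_idx).astype(float)       # 1(y^(-r) = 1)

    order = np.argsort(f_r)                         # sort by the r-th top score
    h = np.cumsum(y_r[order]) / N                   # cumulative accuracy
    h_tilde = np.cumsum(f_r[order]) / N             # cumulative score
    return h, h_tilde

# The KS calibration error is the maximum gap between the two sequences.
rng = np.random.default_rng(0)
scores = rng.dirichlet(np.ones(10), size=1000)
labels = np.array([rng.choice(10, p=p) for p in scores])  # calibrated by design
h, h_tilde = top_r_cumulative(scores, labels, r=1)
ks_error = np.abs(h - h_tilde).max()
```

Since the synthetic labels are drawn from the predicted distributions, the resulting KS error is small; the within-top-$r$ variant of equation (17) only changes `f_r` and `y_r` to sums over the top $r$ entries.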
+
+For within-top- $r$ calibration, one sorts the data according to the sum of the top $r$ scores, namely $\sum_{s=1}^{r} f^{(-s)}(\mathbf{x}_i)$ , then computes
+
+$$
+\tilde {h} _ {0} = h _ {0} = 0,
+$$
+
+$$
+h _ {i} = h _ {i - 1} + \mathbf {1} \left(\sum_ {s = 1} ^ {r} y ^ {(- s)} = 1\right) / N, \tag {17}
+$$
+
+$$
+\tilde {h} _ {i} = \tilde {h} _ {i - 1} + \sum_ {s = 1} ^ {r} f ^ {(- s)} (\mathbf {x} _ {i}) / N.
+$$
+
+As before, this can be used as a metric, or as the starting point for within-top- $r$ calibration by our method. Examples of this type of calibration are given in fig 8 and fig 10 for within-top-2 and within-top-3 predictions respectively (the corresponding graphs for the uncalibrated networks are given in fig 7 and fig 9).
+
+It is notable that if a classifier is calibrated in the sense of equation (1) in the main paper (also called multi-class-calibrated), then it is also calibrated for top- $r$ and within-top- $r$ classification.
+
+# C LEAST SQUARE SPLINE FITTING
+
+Least-squares fitting using cubic splines is a standard technique; details are given here for the convenience of the reader. Our primary reference is McKinley & Levine (1998), which we adapt to least-squares fitting. We consider the case where the knot points are evenly spaced.
+
+We change notation from that used in the main paper by denoting points by $(x,y)$ instead of $(u,v)$ . Thus, given knot points $(\hat{x}_i,\hat{y}_i)_{i = 1}^K$ , one is required to fit some points $(x_i,y_i)_{i = 1}^N$ . Given a point $x$ , the corresponding spline value is given by $y = \mathbf{a}(x)^{\top}\mathsf{M}\hat{\mathbf{y}}$ , where $\hat{\mathbf{y}}$ is the vector of values $\hat{y}_i$ . The form of the vector $\mathbf{a}(x)$ and the matrix $\mathsf{M}$ are given in the following.
+
+The form of the matrix $\mathsf{M}$ is derived from equation (25) in McKinley & Levine (1998). Define the matrices
+
+$$
+\mathsf {A} = \left[ \begin{array}{c c c c c c c c} 4 & 1 & & & & & \\ 1 & 4 & 1 & & & & \\ & 1 & 4 & 1 & & & \\ & & & \ddots & & & \\ & & & & 1 & 4 & 1 \\ & & & & & 1 & 4 \end{array} \right]; \quad \mathsf {B} = \frac {6}{h ^ {2}} \left[ \begin{array}{c c c c c c c c} 1 & - 2 & 1 & & & & \\ & 1 & - 2 & 1 & & & \\ & & & \ddots & & & \\ & & & & 1 & - 2 & 1 \end{array} \right],
+$$
+
+where $h$ is the distance between adjacent knot points. These matrices are of dimensions $(K - 2) \times (K - 2)$ and $(K - 2) \times K$ respectively. Finally, let $\mathsf{M}$ be the matrix
+
+$$
+\mathsf {M} = \left[ \begin{array}{c} \mathbf {0} _ {K} ^ {\top} \\ \mathsf {A} ^ {- 1} \mathsf {B} \\ \mathbf {0} _ {K} ^ {\top} \\ \mathsf {I} _ {K \times K} \end{array} \right].
+$$
+
+Here, $\mathbf{0}_K$ is a vector of zeros of length $K$ , and $\mathsf{I}_{K\times K}$ is the $K \times K$ identity matrix. The matrix $\mathsf{M}$ has dimension $2K\times K$ .
+
+Next, let the point $x$ lie between the knots $j$ and $j + 1$ and let $u = x - \hat{x}_j$ . Then define the vector $\mathbf{v} = \mathbf{a}(x)$ by values
+
+$$
+v _ {j} = - u ^ {3} / (6 h) + u ^ {2} / 2 - h u / 3,
+$$
+
+$$
+v _ {j + 1} = u ^ {3} / (6 h) - h u / 6,
+$$
+
+$$
+v _ {j + K} = - u / h + 1,
+$$
+
+$$
+v _ {j + 1 + K} = u / h,
+$$
+
+with other entries equal to 0.
+
+| Dataset | Image Size | # Classes | Calibration set | Test set |
+| --- | --- | --- | --- | --- |
+| CIFAR-10 | 32 × 32 | 10 | 5000 | 10000 |
+| CIFAR-100 | 32 × 32 | 100 | 5000 | 10000 |
+| SVHN | 32 × 32 | 10 | 6000 | 26032 |
+| ImageNet | 224 × 224 | 1000 | 25000 | 25000 |
+
+Table 3: Dataset splits used for all the calibration experiments. Note that the "calibration" set is used for spline fitting in our method and for fitting the baseline calibration methods; the different methods are then evaluated on the "test" set.
+
+| Dataset | Model | Uncalibrated | Temp. Scaling | Vector Scaling | MS-ODIR | Dir-ODIR | Ours (Spline) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| CIFAR-10 | Resnet-110 | 1.805 | 0.097 | 0.176 | 0.140 | 0.195 | 0.277 |
+| | Resnet-110-SD | 1.423 | 0.111 | 0.089 | 0.082 | 0.073 | 0.104 |
+| | DenseNet-40 | 2.256 | 0.435 | 0.409 | 0.395 | 0.348 | 0.571 |
+| | Wide Resnet-32 | 1.812 | 0.145 | 0.105 | 0.124 | 0.139 | 0.537 |
+| | Lenet-5 | 3.545 | 0.832 | 0.831 | 0.631 | 0.804 | 0.670 |
+| CIFAR-100 | Resnet-110 | 14.270 | 0.885 | 0.649 | 1.425 | 1.190 | 0.503 |
+| | Resnet-110-SD | 12.404 | 0.762 | 1.311 | 2.120 | 1.588 | 0.684 |
+| | DenseNet-40 | 15.901 | 0.437 | 0.368 | 2.205 | 0.518 | 0.724 |
+| | Wide Resnet-32 | 14.078 | 0.414 | 0.548 | 1.915 | 1.099 | 1.017 |
+| | Lenet-5 | 14.713 | 0.787 | 1.249 | 0.643 | 2.682 | 0.518 |
+| ImageNet | Densenet-161 | 4.266 | 1.051 | 0.868 | 3.372 | 2.536 | 0.408 |
+| | Resnet-152 | 4.851 | 1.167 | 0.776 | 4.093 | 2.839 | 0.247 |
+| SVHN | Resnet-152-SD | 0.485 | 0.388 | 0.410 | 0.407 | 0.388 | 0.158 |
+
+Table 4: Within-top-2 predictions. KS error (in %) for within-top-2 predictions (with lowest in bold and second lowest underlined) on various image classification datasets and models with different calibration methods. Note that for this experiment we use 14 knots for spline fitting.
+
+Then the value of the spline is given by
+
+$$
+y = \mathbf {a} (x) ^ {\top} \mathsf {M} \hat {\mathbf {y}},
+$$
+
+as required. This allows us to fit the spline (varying the values of $\hat{\mathbf{y}}$ ) to points $(x_{i}, y_{i})$ by least-squares fit, as described in the main paper.
+
+The above description is for so-called natural (linear-runout) splines. For quadratic-runout or cubic-runout splines the only difference is that the first and last rows of matrix A are changed - see McKinley & Levine (1998) for details.
+
+As described in the main paper, it is also possible to add linear constraints to this least-squares problem, such as constraints on derivatives of the spline. This results in a linearly-constrained quadratic programming problem.
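The construction above can be sketched in a few lines of numpy (our own illustrative code, assuming evenly spaced knots and the natural, linear-runout boundary rows described here):

```python
import numpy as np

def spline_design(K, h, xs, x0):
    """Rows a(x)^T M for a natural cubic spline with K evenly spaced knots
    (spacing h, first knot at x0), following the construction above."""
    # Banded systems A and B relating knot values to second derivatives.
    A = 4 * np.eye(K - 2) + np.eye(K - 2, k=1) + np.eye(K - 2, k=-1)
    B = np.zeros((K - 2, K))
    for i in range(K - 2):
        B[i, i:i + 3] = [1, -2, 1]
    B *= 6 / h**2
    # M stacks [0; A^{-1}B; 0; I]: zero end second derivatives, knot values.
    M = np.vstack([np.zeros(K), np.linalg.solve(A, B), np.zeros(K), np.eye(K)])

    V = np.zeros((len(xs), 2 * K))
    j = np.clip(((xs - x0) // h).astype(int), 0, K - 2)  # knot interval index
    u = xs - (x0 + j * h)
    rows = np.arange(len(xs))
    V[rows, j] = -u**3 / (6 * h) + u**2 / 2 - h * u / 3
    V[rows, j + 1] = u**3 / (6 * h) - h * u / 6
    V[rows, j + K] = 1 - u / h
    V[rows, j + 1 + K] = u / h
    return V @ M                                         # each row is a(x)^T M

# Least-squares fit of the knot values y_hat to noisy samples of a curve.
K, x0, x1 = 8, 0.0, 1.0
h = (x1 - x0) / (K - 1)
xs = np.linspace(x0, x1, 200)
ys = np.sin(2 * np.pi * xs)
D = spline_design(K, h, xs, x0)
y_hat, *_ = np.linalg.lstsq(D, ys, rcond=None)
residual = np.abs(D @ y_hat - ys).max()
```

Varying `y_hat` by least squares, as in the final lines, is exactly the fitting step used in the main paper; linear constraints on derivatives would turn `lstsq` into a small quadratic program.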
+
+# D ADDITIONAL EXPERIMENTS
+
+We first provide the experimental setup for the different datasets in Table 3. Note that the calibration set is used for spline fitting in our method, and the final evaluation is carried out on an unseen test set.
+
+We also provide comparisons of our method against baseline methods for within-top-2 predictions (equation 5 of the main paper) in Table 4 using the KS error. Our method achieves comparable or better results for within-top-2 predictions. It should be noted that the scores for top-3 $(f^{(-3)}(\mathbf{x}))$ , and even top-4, top-5, etc., are very close to zero for the majority of samples (due to the over-confidence of top-1 predictions). The calibration error for top- $r$ predictions with $r > 2$ is therefore very close to zero, and comparing different methods with respect to it is of little value. Furthermore, for visual illustration, we provide calibration graphs of top-2 predictions in fig 3 and fig 4 for the uncalibrated and calibrated network respectively. Similar graphs for top-3, within-top-2, and within-top-3 predictions are presented in figures 5 - 10.
+
+We also provide classification accuracy comparisons of the different post-hoc calibration methods against our method, when calibration is applied to all top-1, 2, 3, ..., $K$ predictions of a $K$ -class classification problem, in Table 5. We would like to point out that there is negligible change in accuracy between the networks calibrated using our method and the uncalibrated ones.
+
+Figure 3: Top-2 predictions, Uncalibrated. Calibration graphs for an uncalibrated DenseNet-40 (Huang et al. (2017)) trained on CIFAR-10 for top-2 predictions, with a KS error of $3.343\%$ on the test set. Here $(a)$ shows the plot of cumulative score and probability versus the fractile of the test set, and $(b)$ shows the same information with the horizontal axis warped so that the cumulative-score graph is a straight line. These are created as scatter plots of cumulative (score, score) pairs (blue) and cumulative (score, probability) pairs (orange). If the network were perfectly calibrated, the probability line would be a straight line coincident with the (score, score) line; instead, the scores substantially overestimate the probability of correctness. $(c)$ and $(d)$ show plots of (non-cumulative) score and probability plotted against fractile, or score. How these plots are produced is described in Section 4 of the main paper.
+
+Figure 4: Top-2 predictions, Calibrated. The result of the spline calibration method on the example given in fig 3 for top-2 calibration. A recalibration function $\gamma : \mathbb{R} \to \mathbb{R}$ is used to adjust the scores, replacing $f_{k}(\mathbf{x})$ with $\gamma(f_{k}(\mathbf{x}))$ (see Section 4 of the main paper). As is seen, the network is now almost perfectly calibrated when tested on the "calibration" set (top row) used to calibrate it. In the bottom row, the recalibration function is evaluated on the held-out "test" set; the result is not perfect, but much better than the original results in fig 3d.
+
+Figure 5: Top-3 predictions, Uncalibrated. Calibration graphs for an uncalibrated DenseNet-40 trained on CIFAR-10 for top-3 predictions, with a KS error of $1.277\%$ on the test set. Here (a) shows the plot of cumulative score and probability versus the fractile of the test set, and (b) shows the same information with the horizontal axis warped so that the cumulative-score graph is a straight line. (c) and (d) show plots of (non-cumulative) score and probability plotted against fractile, or score.
+
+Figure 6: Top-3 predictions, Calibrated. The result of the spline calibration method on the example given in fig 5 for top-3 calibration. A recalibration function $\gamma : \mathbb{R} \to \mathbb{R}$ is used to adjust the scores, replacing $f_{k}(\mathbf{x})$ with $\gamma(f_{k}(\mathbf{x}))$ . As is seen, the network is now almost perfectly calibrated when tested on the "calibration" set (top row) used to calibrate it. In the bottom row, the recalibration function is evaluated on the held-out "test" set; the result is not perfect, but much better than the original results in fig 5d.
+
+Figure 7: Within-top-2 predictions, Uncalibrated. Calibration graphs for an uncalibrated DenseNet-40 trained on CIFAR-10 for within-top-2 predictions with a KS error of $2.256\%$ on the test set. Here (a) shows the plot of cumulative score and probability versus the fractile of the test set, (b) shows the same information with the horizontal axis warped so that the cumulative-score graph is a straight line. (c) and (d) show plots of (non-cumulative) score and probability plotted against fractile, or score.
+
+Figure 8: Within-top-2 predictions, Calibrated. The result of the spline calibration method on the example given in fig 7 for within-top-2 calibration. A recalibration function $\gamma : \mathbb{R} \to \mathbb{R}$ is used to adjust the scores, replacing $f_{k}(\mathbf{x})$ with $\gamma(f_{k}(\mathbf{x}))$ . As is seen, the network is now almost perfectly calibrated when tested on the "calibration" set (top row) used to calibrate it. In the bottom row, the recalibration function is evaluated on the held-out "test" set; the result is not perfect, but much better than the original results in fig 7d.
+
+Figure 9: Within-top-3 predictions, Uncalibrated. Calibration graphs for an uncalibrated DenseNet-40 trained on CIFAR-10 for within-top-3 predictions with a KS error of $0.983\%$ on the test set. Here (a) shows the plot of cumulative score and probability versus the fractile of the test set, (b) shows the same information with the horizontal axis warped so that the cumulative-score graph is a straight line. (c) and (d) show plots of (non-cumulative) score and probability plotted against fractile, or score.
+
+Figure 10: Within-top-3 predictions, Calibrated. The result of the spline calibration method on the example given in fig 9 for within-top-3 calibration. A recalibration function $\gamma : \mathbb{R} \to \mathbb{R}$ is used to adjust the scores, replacing $f_{k}(\mathbf{x})$ with $\gamma(f_{k}(\mathbf{x}))$ . As is seen, the network is now almost perfectly calibrated when tested on the "calibration" set (top row) used to calibrate it. In the bottom row, the recalibration function is evaluated on the held-out "test" set; the result is not perfect, but much better than the original results in fig 9d.
+
+| Dataset | Model | Uncalibrated | Temp. Scaling | Vector Scaling | MS-ODIR | Dir-ODIR | Ours (Spline) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| CIFAR-10 | Resnet-110 | 93.56 | 93.56 | 93.50 | 93.53 | 93.52 | 93.55 |
+| | Resnet-110-SD | 94.04 | 94.04 | 94.04 | 94.18 | 94.20 | 94.05 |
+| | DenseNet-40 | 92.42 | 92.42 | 92.50 | 92.52 | 92.47 | 92.31 |
+| | Wide Resnet-32 | 93.93 | 93.93 | 94.21 | 94.22 | 94.22 | 93.76 |
+| | Lenet-5 | 72.74 | 72.74 | 74.48 | 74.44 | 74.52 | 72.64 |
+| CIFAR-100 | Resnet-110 | 71.48 | 71.48 | 71.58 | 71.55 | 71.62 | 71.50 |
+| | Resnet-110-SD | 72.83 | 72.83 | 73.60 | 73.53 | 73.14 | 72.81 |
+| | DenseNet-40 | 70.00 | 70.00 | 70.13 | 70.40 | 70.24 | 70.17 |
+| | Wide Resnet-32 | 73.82 | 73.82 | 73.87 | 74.05 | 73.99 | 73.74 |
+| | Lenet-5 | 33.59 | 33.59 | 36.42 | 37.58 | 37.52 | 33.55 |
+| ImageNet | Densenet-161 | 77.05 | 77.05 | 76.72 | 77.15 | 77.19 | 77.05 |
+| | Resnet-152 | 76.20 | 76.20 | 75.87 | 76.12 | 76.24 | 76.07 |
+| SVHN | Resnet-152-SD | 98.15 | 98.15 | 98.13 | 98.12 | 98.19 | 98.17 |
+
+Table 5: Classification (top-1) accuracy (with highest in bold and second highest underlined) after calibration on various image classification datasets and models with different calibration methods. Note that only a negligible change in accuracy is observed for our method compared to the uncalibrated networks.
+
+| Dataset | Model | Uncalibrated | Temp. Scaling | Vector Scaling | MS-ODIR | Dir-ODIR | Ours (Spline) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| CIFAR-10 | Resnet-110 | 4.750 | 1.224 | 1.092 | 1.276 | 1.240 | 1.011 |
+| | Resnet-110-SD | 4.135 | 0.777 | 0.752 | 0.684 | 0.859 | 0.992 |
+| | DenseNet-40 | 5.507 | 1.006 | 1.207 | 1.250 | 1.268 | 1.389 |
+| | Wide Resnet-32 | 4.512 | 0.905 | 0.852 | 0.941 | 0.965 | 1.003 |
+| | Lenet-5 | 5.188 | 1.999 | 1.462 | 1.504 | 1.300 | 1.333 |
+| CIFAR-100 | Resnet-110 | 18.480 | 2.428 | 2.722 | 3.011 | 2.806 | 1.868 |
+| | Resnet-110-SD | 15.861 | 1.335 | 2.067 | 2.277 | 2.046 | 1.766 |
+| | DenseNet-40 | 21.159 | 1.255 | 1.598 | 2.855 | 1.410 | 2.114 |
+| | Wide Resnet-32 | 18.784 | 1.667 | 1.785 | 2.870 | 2.128 | 1.672 |
+| | Lenet-5 | 12.117 | 1.535 | 1.350 | 1.696 | 2.159 | 1.029 |
+| ImageNet | Densenet-161 | 5.720 | 2.059 | 2.637 | 4.337 | 3.989 | 0.798 |
+| | Resnet-152 | 6.545 | 2.166 | 2.641 | 5.377 | 4.556 | 0.913 |
+| SVHN | Resnet-152-SD | 0.877 | 0.675 | 0.630 | 0.646 | 0.651 | 0.832 |
+
+Table 6: ECE (in %) for top-1 predictions using 25 bins (with lowest in bold and second lowest underlined) on various image classification datasets and models with different calibration methods. Note that for this experiment we use 13 knots for spline fitting.
+
+For the sake of completeness, we present calibration results using the existing Expected Calibration Error (ECE) metric (Naeini et al. (2015)) in Table 6. We reiterate that the ECE metric is highly dependent on the chosen number of bins and thus does not faithfully reflect true calibration performance. To demonstrate the efficacy of our proposed calibration method, we also present calibration results using other metrics, namely the recently proposed binning-free KDE-ECE measure (Zhang et al. (2020)), MCE (Maximum Calibration Error) (Guo et al. (2017)), and the Brier score, for top-1 predictions on the ImageNet dataset in Table 7. Since the original multi-class formulation of the Brier score is dominated by accuracy and is approximately the same for all calibration methods, we use the top-1 Brier score, i.e., the mean squared error between the top-1 scores and the ground truth for the top-1 predictions (1 if the prediction is correct and 0 otherwise). It can be clearly observed that our approach consistently outperforms all the baselines on these calibration measures.
+
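The bin sensitivity of ECE can be illustrated with a small self-contained sketch (our own toy example, not the paper's evaluation code): even for a perfectly calibrated synthetic model, the equal-width binned ECE estimate changes noticeably with the number of bins.

```python
import numpy as np

def ece(confidences, correct, n_bins):
    """Equal-width binned ECE: weighted mean absolute gap between
    per-bin mean confidence and per-bin accuracy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(confidences, edges) - 1, 0, n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            total += mask.mean() * abs(confidences[mask].mean() - correct[mask].mean())
    return total

# A synthetic, perfectly calibrated "model": correctness is Bernoulli(conf),
# so the true calibration error is zero, yet the binned estimates are not.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=5000)
correct = (rng.uniform(size=5000) < conf).astype(float)
ece_estimates = {m: ece(conf, correct, m) for m in (10, 25, 100)}
```

With finer binning, the per-bin noise grows, so the reported ECE inflates even though the underlying model is calibrated.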
+| Calibration Metric | Model | Uncalibrated | Temp. Scaling | MS-ODIR | Dir-ODIR | Ours (Spline) |
+| --- | --- | --- | --- | --- | --- | --- |
+| KDE-ECE | Densenet-161 | 0.03786 | 0.01501 | 0.02874 | 0.02979 | 0.00637 |
+| | Resnet-152 | 0.04650 | 0.01864 | 0.03448 | 0.03488 | 0.00847 |
+| MCE | Densenet-161 | 0.13123 | 0.05442 | 0.09077 | 0.09653 | 0.06289 |
+| | Resnet-152 | 0.15930 | 0.09051 | 0.11201 | 0.09868 | 0.04950 |
+| Brier Score | Densenet-161 | 0.12172 | 0.11852 | 0.11982 | 0.11978 | 0.11734 |
+| | Resnet-152 | 0.12626 | 0.12145 | 0.12406 | 0.12308 | 0.12034 |
+
+Table 7: Calibration error measured with other metrics, namely binning-free KDE-ECE (Zhang et al. (2020)), MCE (Maximum Calibration Error) (Guo et al. (2017)), and the Brier score, for top-1 predictions (with lowest in bold and second lowest underlined) on the ImageNet dataset with different calibration methods. Note that for this experiment we use 6 knots for spline fitting.
\ No newline at end of file
diff --git a/calibrationofneuralnetworksusingsplines/images.zip b/calibrationofneuralnetworksusingsplines/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0075dacd8d049d435aa67f6d406c0c328a768292
--- /dev/null
+++ b/calibrationofneuralnetworksusingsplines/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eff38713c20c967819fb5787278d1041ea46739b543addef3d14a1d7c5fee46d
+size 1228742
diff --git a/calibrationofneuralnetworksusingsplines/layout.json b/calibrationofneuralnetworksusingsplines/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7d65b6df41b23877472a64289e1ed37f65f7623e
--- /dev/null
+++ b/calibrationofneuralnetworksusingsplines/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1df442ed584201a71e807f8862aaf6011076a2869f24663f9fb61db75b84a79a
+size 732435
diff --git a/calibrationtestsbeyondclassification/a5e28cff-dd10-4f08-b8a4-dc2ae6b1b36d_content_list.json b/calibrationtestsbeyondclassification/a5e28cff-dd10-4f08-b8a4-dc2ae6b1b36d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..47d99fdb09a9a7b674a15d25304f19ab83b029ad
--- /dev/null
+++ b/calibrationtestsbeyondclassification/a5e28cff-dd10-4f08-b8a4-dc2ae6b1b36d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c74ed5b5322dff779c821ed48470c8a996495e5c0c216558c4a909ea83bd1c8
+size 256623
diff --git a/calibrationtestsbeyondclassification/a5e28cff-dd10-4f08-b8a4-dc2ae6b1b36d_model.json b/calibrationtestsbeyondclassification/a5e28cff-dd10-4f08-b8a4-dc2ae6b1b36d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6a2035f2a307ba8d832d5943cf96c81e2918fad8
--- /dev/null
+++ b/calibrationtestsbeyondclassification/a5e28cff-dd10-4f08-b8a4-dc2ae6b1b36d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc0028edd7c88dbaf0db627f78e38c3bbf5b310bf2a11673ac976ad34dcc148e
+size 295835
diff --git a/calibrationtestsbeyondclassification/a5e28cff-dd10-4f08-b8a4-dc2ae6b1b36d_origin.pdf b/calibrationtestsbeyondclassification/a5e28cff-dd10-4f08-b8a4-dc2ae6b1b36d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..28be29a688ac9addc8742d67833842c601100fbc
--- /dev/null
+++ b/calibrationtestsbeyondclassification/a5e28cff-dd10-4f08-b8a4-dc2ae6b1b36d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2c5dd8c9195c7afbbc03bf943212c6fe54239abeacc919ac3037cbf9a55a7c92
+size 3650516
diff --git a/calibrationtestsbeyondclassification/full.md b/calibrationtestsbeyondclassification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..aaa7ff321ef1630828c072b6aee7825e58e3349c
--- /dev/null
+++ b/calibrationtestsbeyondclassification/full.md
@@ -0,0 +1,1405 @@
+# CALIBRATION TESTS BEYOND CLASSIFICATION
+
+# David Widmann
+
+Department of Information Technology
+
+Uppsala University, Sweden
+
+david.widmann@it.uu.se
+
+# Fredrik Lindsten
+
+Division of Statistics and Machine Learning
+
+Linkoping University, Sweden
+
+fredrik.lindsten@liu.se
+
+# Dave Zachariah
+
+Department of Information Technology
+
+Uppsala University, Sweden
+
+dave.zachariah@it.uu.se
+
+# ABSTRACT
+
+Most supervised machine learning tasks are subject to irreducible prediction errors. Probabilistic predictive models address this limitation by providing probability distributions that represent a belief over plausible targets, rather than point estimates. Such models can be a valuable tool in decision-making under uncertainty, provided that the model output is meaningful and interpretable. Calibrated models guarantee that the probabilistic predictions are neither over- nor under-confident. In the machine learning literature, different measures and statistical tests have been proposed and studied for evaluating the calibration of classification models. For regression problems, however, research has been focused on a weaker condition of calibration based on predicted quantiles for real-valued targets. In this paper, we propose the first framework that unifies calibration evaluation and tests for general probabilistic predictive models. It applies to any such model, including classification and regression models of arbitrary dimension. Furthermore, the framework generalizes existing measures and provides a more intuitive reformulation of a recently proposed framework for calibration in multi-class classification. In particular, we reformulate and generalize the kernel calibration error, its estimators, and hypothesis tests using scalar-valued kernels, and evaluate the calibration of real-valued regression problems.1
+
+# 1 INTRODUCTION
+
+We consider the general problem of modelling the relationship between a feature $X$ and a target $Y$ in a probabilistic setting, i.e., we focus on models that approximate the conditional probability distribution $\mathbb{P}(Y|X)$ of target $Y$ for given feature $X$ . The use of probabilistic models that output a probability distribution instead of a point estimate demands guarantees on the predictions beyond accuracy, enabling meaningful and interpretable predicted uncertainties. One such statistical guarantee is calibration, which has been studied extensively in meteorological and statistical literature (DeGroot & Fienberg, 1983; Murphy & Winkler, 1977).
+
+A calibrated model ensures that almost every prediction matches the conditional distribution of targets given this prediction. Loosely speaking, in a classification setting a predicted distribution of the model is called calibrated (or reliable) if, were the same class probabilities predicted repeatedly, the empirically observed frequencies of the different classes would match the predictions in the long run. A classical example is a weather forecaster who predicts each day if it is going to rain on the next day. If she predicts rain with probability $60\%$ for a long series of days, her forecasting model is calibrated for predictions of $60\%$ if it actually rains on $60\%$ of these days.
+
+If this property holds for almost every probability distribution that the model outputs, then the model is considered to be calibrated. Calibration is an appealing property of a probabilistic model since it
+
+provides safety guarantees on the predicted distributions even in the common case when the model does not predict the true distributions $\mathbb{P}(Y|X)$ . Calibration, however, does not guarantee accuracy (or refinement)—a model that always predicts the marginal probabilities of each class is calibrated but probably inaccurate and of limited use. On the other hand, accuracy does not imply calibration either since the predictions of an accurate model can be too over-confident and hence miscalibrated, as observed, e.g., for deep neural networks (Guo et al., 2017).
+
+In the field of machine learning, calibration has been studied mainly for classification problems (Brocker, 2009; Guo et al., 2017; Kull et al., 2017; 2019; Kumar et al., 2018; Platt, 2000; Vaicenavicius et al., 2019; Widmann et al., 2019; Zadrozny, 2002) and for quantiles and confidence intervals of models for regression problems with real-valued targets (Fasiolo et al., 2020; Ho & Lee, 2005; Kuleshov et al., 2018; Rueda et al., 2006; Taillardat et al., 2016). In our work, however, we do not restrict ourselves to these problem settings but instead consider calibration for arbitrary predictive models. Thus, we generalize the common notion of calibration as:
+
+Definition 1. Consider a model $P_{X} \coloneqq P(Y|X)$ of a conditional probability distribution $\mathbb{P}(Y|X)$ . Then model $P$ is said to be calibrated if and only if
+
+$$
+\mathbb{P}(Y \mid P_X) = P_X \quad \text{almost surely.} \tag{1}
+$$
+
+If $P$ is a classification model, Definition 1 coincides with the notion of (multi-class) calibration by Bröcker (2009); Kull et al. (2019); Vaicenavicius et al. (2019). Alternatively, in classification some authors (Guo et al., 2017; Kumar et al., 2018; Naeini et al., 2015) study the strictly weaker property of confidence calibration (Kull et al., 2019), which only requires
+
+$$
+\mathbb{P}(Y = \arg\max P_X \mid \max P_X) = \max P_X \quad \text{almost surely.} \tag{2}
+$$
+
+This notion of calibration corresponds to calibration according to Definition 1 for a reduced problem with binary targets $\widetilde{Y} \coloneqq \mathbb{1}(Y = \arg \max P_X)$ and Bernoulli distributions $\widetilde{P}_X \coloneqq \operatorname{Ber}(\max P_X)$ as probabilistic models.
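As a concrete illustration of this reduction (our own sketch; the function name is illustrative), mapping an $(N, K)$ matrix of predicted class probabilities to the binary targets $\widetilde{Y}$ and Bernoulli parameters $\max P_X$:

```python
import numpy as np

def confidence_reduction(probs, labels):
    """Reduce multi-class predictions to the binary problem of Eq. (2):
    targets 1(Y = argmax P_X) and Bernoulli models Ber(max P_X)."""
    y_tilde = (labels == probs.argmax(axis=1)).astype(int)
    p_tilde = probs.max(axis=1)
    return p_tilde, y_tilde

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6]])
labels = np.array([0, 1])
p_tilde, y_tilde = confidence_reduction(probs, labels)
# p_tilde = [0.7, 0.6]; y_tilde = [1, 0]
```

Applying any binary calibration analysis to `(p_tilde, y_tilde)` then assesses confidence calibration rather than full multi-class calibration.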
+
+For real-valued targets, Definition 1 coincides with the so-called distribution-level calibration by Song et al. (2019). Distribution-level calibration implies that the predicted quantiles are calibrated, i.e., the outcomes for all real-valued predictions of the, e.g., $75\%$ quantile are actually below the predicted quantile with $75\%$ probability (Song et al., 2019, Theorem 1). Conversely, although quantile-based calibration is a common approach for real-valued regression problems (Fasiolo et al., 2020; Ho & Lee, 2005; Kuleshov et al., 2018; Rueda et al., 2006; Taillardat et al., 2016), it provides weaker guarantees on the predictions. For instance, the linear regression model in Fig. 1 empirically shows quantiles that appear close to being calibrated albeit being uncalibrated according to Definition 1.
+
+
+Figure 1: Illustration of a conditional distribution $\mathbb{P}(Y|X)$ with scalar feature and target. We consider a Gaussian predictive model $P$ , obtained by ordinary least squares regression with 100 training data points (orange dots). Empirically the predicted quantiles on 50 validation data points appear close to being calibrated, although model $P$ is uncalibrated according to Definition 1. Using the framework in this paper, on the same validation data a statistical test allows us to reject the null hypothesis that model $P$ is calibrated at a significance level of $\alpha = 0.05$ ( $p < 0.05$ ). See Appendix A.1 for details.
+
+
+
+Figure 1 also raises the question of how to assess calibration for general target spaces in the sense of Definition 1, without having to rely on visual inspection. In classification, measures of calibration such as the commonly used expected calibration error (ECE) (Guo et al., 2017; Kull et al., 2019; Naeini et al., 2015; Vaicenavicius et al., 2019) and the maximum calibration error (MCE) (Naeini et al., 2015) try to capture the average and maximal discrepancy between the distributions on the left hand side and the right hand side of Eq. (1) or Eq. (2), respectively. These measures can be generalized to other target spaces (see Definition B.1), but unfortunately estimating these calibration errors from observations of features and corresponding targets is problematic. Typically, the predictions are different for (almost) all observations, and hence estimation of the conditional probability $\mathbb{P}(Y|P_X)$ , which is needed in the estimation of ECE and MCE, is challenging even for low-dimensional target spaces and usually leads to biased and inconsistent estimators (Vaicenavicius et al., 2019).
+
+Kernel-based calibration errors such as the maximum mean calibration error (MMCE) (Kumar et al., 2018) and the kernel calibration error (KCE) (Widmann et al., 2019) for confidence and multi-class calibration, respectively, can be estimated without first estimating the conditional probability and hence avoid this issue. They are defined as the expected value of a weighted sum of the differences of the left and right hand side of Eq. (1) for each class, where the weights are given as a function of the predictions (of all classes) and chosen such that the calibration error is maximized. A reformulation with matrix-valued kernels (Widmann et al., 2019) yields unbiased and differentiable estimators without explicit dependence on $\mathbb{P}(Y|P_X)$ , which simplifies estimation and makes it possible to explicitly account for calibration in the training objective (Kumar et al., 2018). Additionally, the kernel-based framework allows the derivation of reliable statistical hypothesis tests for calibration in multi-class classification (Widmann et al., 2019).
+
+However, both the construction as a weighted difference of the class-wise distributions in Eq. (1) and the reformulation with matrix-valued kernels require finite target spaces and hence cannot be applied to regression problems. To be able to deal with general target spaces, we present a new and more general framework of calibration errors without these limitations.
+
+Our framework can be used to reason about and test for calibration of any probabilistic predictive model. As explained above, this is in stark contrast with existing methods that are restricted to simple output distributions, such as classification and scalar-valued regression problems. A key contribution of this paper is a new framework that is applicable to multivariate regression, as well as situations when the output is of a different (e.g., discrete ordinal) or more complex (e.g., graph-structured) type, with clear practical implications.
+
+Within this framework a KCE for general target spaces is obtained. We want to highlight that for multi-class classification problems its formulation is more intuitive and simpler to use than the measure proposed by Widmann et al. (2019) based on matrix-valued kernels. To ease the application of the KCE, we derive several estimators of the KCE with subquadratic sample complexity, together with their asymptotic properties in tests for calibrated models; these improve on existing estimators and tests in the two-sample test literature by exploiting the special structure of the calibration framework. Using the proposed framework, we numerically evaluate the calibration of neural network models and ensembles of such models.
+
+# 2 CALIBRATION ERROR: A GENERAL FRAMEWORK
+
+In classification, the distributions on the left and right hand side of Eq. (1) can be interpreted as vectors in the probability simplex. Hence ultimately the distance measure for ECE and MCE (see Definition B.1) can be chosen as a distance measure of real-valued vectors. The total variation, Euclidean, and squared Euclidean distances are common choices (Guo et al., 2017; Kull et al., 2019; Vaicenavicius et al., 2019). However, in a general setting measuring the discrepancy between $\mathbb{P}(Y|P_X)$ and $P_{X}$ cannot necessarily be reduced to measuring distances between vectors. The conditional distribution $\mathbb{P}(Y|P_X)$ can be arbitrarily complex, even if the predicted distributions are restricted to a simple class of distributions that can be represented as real-valued vectors. Hence in general we have to resort to dedicated distance measures of probability distributions.
+
+Additionally, the estimation of conditional distributions $\mathbb{P}(Y|P_X)$ is challenging, even more so than in the restricted case of classification, since in general these distributions can be arbitrarily complex. To circumvent this problem, we propose to use the following construction: We define a random variable $Z_{X}\sim P_{X}$ obtained from the predictive model and study the discrepancy between the joint distributions of the two pairs of random variables $(P_X,Y)$ and $(P_X,Z_X)$ , respectively, instead of the discrepancy between the conditional distributions $\mathbb{P}(Y|P_X)$ and $P_{X}$ . Since
+
+$$
+(P_X, Y) \stackrel{d}{=} (P_X, Z_X) \quad \text{if and only if} \quad \mathbb{P}(Y \mid P_X) = P_X \quad \text{almost surely},
+$$
+
+model $P$ is calibrated if and only if the distributions of $(P_X,Y)$ and $(P_X,Z_X)$ are equal.
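As a sketch of this construction (with a hypothetical overconfident model, purely for illustration), one can draw the artificial targets $Z_X \sim P_X$ and compare a statistic of $(P_X, Y)$ against the same statistic of $(P_X, Z_X)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Features and targets: Y | X ~ N(X, 1).
x = rng.normal(size=n)
y = x + rng.normal(size=n)

# A hypothetical overconfident model: P_X = N(x, 0.5^2).
mu, sigma = x, 0.5

# Artificial targets drawn from the model's own predictions.
z = mu + sigma * rng.normal(size=n)

# If P were calibrated, (P_X, Y) and (P_X, Z_X) would be equal in
# distribution; here the residual spread reveals the mismatch.
resid_y = float(np.mean((y - mu) ** 2))  # close to 1.0 (true noise)
resid_z = float(np.mean((z - mu) ** 2))  # close to 0.25 (claimed)
```

For a calibrated model the two residual statistics would agree up to sampling noise; the gap between them here is exactly the kind of discrepancy the framework measures.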
+
+The random variable pairs $(P_X,Y)$ and $(P_X,Z_X)$ take values in the product space $\mathcal{P}\times \mathcal{V}$ , where $\mathcal{P}$ is the space of predicted distributions $P_{X}$ and $\mathcal{V}$ is the space of targets $Y$ . For instance, in classification, $\mathcal{P}$ could be the probability simplex and $\mathcal{V}$ the set of all class labels, whereas in the case of Gaussian predictive models for scalar targets $\mathcal{P}$ could be the space of normal distributions and $\mathcal{V}$ be $\mathbb{R}$ .
+
+The study of the joint distributions of $(P_X,Y)$ and $(P_X,Z_X)$ motivates the definition of a generally applicable calibration error as an integral probability metric (Müller, 1997; Sriperumbudur et al., 2009; 2012) between these distributions. In contrast to common $f$ -divergences such as the Kullback-Leibler divergence, integral probability metrics do not require that one distribution is absolutely continuous with respect to the other, which cannot be guaranteed in general.
+
+Definition 2. Let $\mathcal{V}$ denote the space of targets $Y$ , and $\mathcal{P}$ the space of predicted distributions $P_{X}$ . We define the calibration error with respect to a space of functions $\mathcal{F}$ of the form $f\colon \mathcal{P}\times \mathcal{V}\to \mathbb{R}$ as
+
+$$
+\mathrm {C E} _ {\mathcal {F}} := \sup _ {f \in \mathcal {F}} \left| \mathbb {E} _ {P _ {X}, Y} f (P _ {X}, Y) - \mathbb {E} _ {P _ {X}, Z _ {X}} f (P _ {X}, Z _ {X}) \right|. \tag {3}
+$$
+
+By construction, if model $P$ is calibrated, then $\mathrm{CE}_{\mathcal{F}} = 0$ regardless of the choice of $\mathcal{F}$ . However, the converse statement is not true for arbitrary function spaces $\mathcal{F}$ . From the theory of integral probability metrics (see, e.g., Müller, 1997; Sriperumbudur et al., 2009; 2012), we know that for certain choices of $\mathcal{F}$ the calibration error in Eq. (3) is a well-known metric on the product space $\mathcal{P} \times \mathcal{V}$ , which implies that $\mathrm{CE}_{\mathcal{F}} = 0$ if and only if model $P$ is calibrated. Prominent examples include the maximum mean discrepancy (MMD) (Gretton et al., 2007), the total variation distance, the Kantorovich distance, and the Dudley metric (Dudley, 1989, p. 310).
+
+As pointed out above, Definition 2 is a generalization of the definition for multi-class classification proposed by Widmann et al. (2019)—which is based on vector-valued functions and only applicable to finite target spaces—to any probabilistic predictive model. In Appendix E we show this explicitly and discuss the special case of classification problems in more detail. Previous results (Widmann et al., 2019) imply that in classification MMCE and, for common distance measures $d(\cdot, \cdot)$ such as the total variation and squared Euclidean distance, $\mathrm{ECE}_d$ and $\mathrm{MCE}_d$ are special cases of $\mathrm{CE}_{\mathcal{F}}$ . In Appendix G we show that our framework also covers natural extensions of $\mathrm{ECE}_d$ and $\mathrm{MCE}_d$ to countably infinite discrete target spaces, which to our knowledge have not been studied before and occur, e.g., in Poisson regression.
+
+The literature on integral probability metrics suggests that we can resort to estimating $\mathrm{CE}_{\mathcal{F}}$ from i.i.d. samples from the distributions of $(P_X,Y)$ and $(P_X,Z_X)$ . For the MMD, the Kantorovich distance, and the Dudley metric, tractable strongly consistent empirical estimators exist (Sriperumbudur et al., 2012). Here the empirical estimator for the MMD is particularly appealing since, compared with the other estimators, "it is computationally cheaper, the empirical estimate converges at a faster rate to the population value, and the rate of convergence is independent of the dimension $d$ of the space (for $S = \mathbb{R}^d$ )" (Sriperumbudur et al., 2012).
+
+Our specific design of $(P_X,Z_X)$ can be exploited to improve on these estimators. If $\mathbb{E}_{Z_x\sim P_x}f(P_x,Z_x)$ can be evaluated analytically for a fixed prediction $P_{x}$ , then $\mathrm{CE}_{\mathcal{F}}$ can be estimated empirically with reduced variance by marginalizing out $Z_{X}$ . Otherwise $\mathbb{E}_{Z_x\sim P_x}f(P_x,Z_x)$ has to be estimated, but in contrast to the common estimators of the integral probability metrics discussed above the artificial construction of $Z_{X}$ allows us to approximate it by numerical integration methods such as (quasi) Monte Carlo integration or quadrature rules with arbitrarily small error and variance. Monte Carlo integration preserves statistical properties of the estimators such as unbiasedness and consistency.
+
+# 3 KERNEL CALIBRATION ERROR
+
+For the remaining parts of the paper we focus on the MMD formulation of $\mathrm{CE}_{\mathcal{F}}$ due to the appealing properties of the common empirical estimator mentioned above. We derive calibration-specific analogues of results for the MMD that exploit the special structure of the distribution of $(P_X,Z_X)$ to improve on existing estimators and tests in the MMD literature. To the best of our knowledge these variance-reduced estimators and tests have not been discussed in the MMD literature.
+
+Let $k \colon (\mathcal{P} \times \mathcal{V}) \times (\mathcal{P} \times \mathcal{V}) \to \mathbb{R}$ be a measurable kernel with corresponding reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ , and assume that
+
+$$
+\mathbb{E}_{P_X, Y}\, k^{1/2}\big((P_X, Y), (P_X, Y)\big) < \infty \quad \text{and} \quad \mathbb{E}_{P_X, Z_X}\, k^{1/2}\big((P_X, Z_X), (P_X, Z_X)\big) < \infty.
+$$
+
+We discuss how such kernels can be constructed in a generic way in Section 3.1 below.
+
+Definition 3. Let $\mathcal{F}_k$ denote the unit ball in $\mathcal{H}$ , i.e., $\mathcal{F}_k := \{f \in \mathcal{H} : \|f\|_{\mathcal{H}} \leq 1\}$ . Then the kernel calibration error (KCE) with respect to kernel $k$ is defined as
+
+$$
+\mathrm {K C E} _ {k} := \mathrm {C E} _ {\mathcal {F} _ {k}} = \sup _ {f \in \mathcal {F} _ {k}} \left| \mathbb {E} _ {P _ {X}, Y} f (P _ {X}, Y) - \mathbb {E} _ {P _ {X}, Z _ {X}} f (P _ {X}, Z _ {X}) \right|.
+$$
+
+As known from the MMD literature, a more explicit formulation can be given for the squared kernel calibration error $\mathrm{SKCE}_k\coloneqq \mathrm{KCE}_k^2$ (see Lemma B.2). A similar explicit expression for $\mathrm{SKCE}_k$ was obtained by Widmann et al. (2019) for the special case of classification problems. However, their expression relies on $\mathcal{V}$ being finite and is based on matrix-valued kernels over the finite-dimensional probability simplex $\mathcal{P}$ . A key difference to the expression in Lemma B.2 is that we instead propose to use real-valued kernels defined on the product space of predictions and targets. This construction is applicable to arbitrary target spaces and does not require $\mathcal{V}$ to be finite.
+
+# 3.1 CHOICE OF KERNEL
+
+The construction of the product space $\mathcal{P} \times \mathcal{V}$ suggests the use of tensor product kernels $k = k_{\mathcal{P}} \otimes k_{\mathcal{V}}$ where $k_{\mathcal{P}} \colon \mathcal{P} \times \mathcal{P} \to \mathbb{R}$ and $k_{\mathcal{V}} \colon \mathcal{V} \times \mathcal{V} \to \mathbb{R}$ are kernels on the spaces of predicted distributions and targets, respectively.
+
+By definition, so-called characteristic kernels guarantee that $\mathrm{KCE} = 0$ if and only if the distributions of $(P_X,Y)$ and $(P_X,Z_X)$ are equal (Fukumizu et al., 2004; 2008). Many common kernels such as the Gaussian and Laplacian kernel on $\mathbb{R}^d$ are characteristic (Fukumizu et al., 2008). Szabó & Sriperumbudur (2018, Theorem 4) showed that a tensor product kernel $k_{\mathcal{P}}\otimes k_{\mathcal{V}}$ is characteristic if $k_{\mathcal{P}}$ and $k_{\mathcal{V}}$ are characteristic, continuous, bounded, and translation-invariant kernels on $\mathbb{R}^d$ , but the implication does not hold for general characteristic kernels (Szabó & Sriperumbudur, 2018, Example 1). For calibration evaluation, however, it is sufficient to be able to distinguish between the conditional distributions $\mathbb{P}(Y|P_X)$ and $\mathbb{P}(Z_X|P_X) = P_X$ . Therefore, in contrast to the regular MMD setting, it is sufficient that kernel $k_{\mathcal{V}}$ is characteristic and kernel $k_{\mathcal{P}}$ is non-zero almost surely to guarantee that $\mathrm{KCE} = 0$ if and only if model $P$ is calibrated. This suggests constructing kernels on general spaces of predicted distributions as
+
+$$
+k _ {\mathcal {P}} (p, p ^ {\prime}) = \exp \left(- \lambda d _ {\mathcal {P}} ^ {\nu} (p, p ^ {\prime})\right), \tag {4}
+$$
+
+where $d_{\mathcal{P}}(\cdot, \cdot)$ is a metric on $\mathcal{P}$ and $\nu, \lambda > 0$ are kernel hyperparameters. The Wasserstein distance, a widely used metric for distributions from optimal transport theory, can lift a ground metric from the target space to the space of distributions and possesses many important properties (see, e.g., Peyré & Cuturi, 2019, Chapter 2.4). In general, however, it does not lead to valid kernels $k_{\mathcal{P}}$ , apart from the notable exception of elliptically contoured distributions such as normal and Laplace distributions (Peyré & Cuturi, 2019, Chapter 8.3).
+
+In machine learning, common probabilistic predictive models output parameters of distributions such as mean and variance of normal distributions. Naturally these parameterizations give rise to injective mappings $\phi \colon \mathcal{P} \to \mathbb{R}^d$ that can be used to define a Hilbertian metric
+
+$$
+d _ {\mathcal {P}} (p, p ^ {\prime}) = \left\| \phi (p) - \phi \left(p ^ {\prime}\right) \right\| _ {2}.
+$$
+
+For such metrics, $k_{\mathcal{P}}$ in Eq. (4) is a valid kernel for all $\lambda > 0$ and $\nu \in (0,2]$ (Berg et al., 1984, Corollary 3.3.3, Proposition 3.2.7). In Appendix D.3 we show that for many mixture models, and hence model ensembles, Hilbertian metrics between model components can be lifted to Hilbertian metrics between mixture models. This construction is a generalization of the Wasserstein-like distance for Gaussian mixture models proposed by Chen et al. (2019; 2020); Delon & Desolneux (2020).
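For univariate Gaussian predictions, for example, the parameterization $\phi(p) = (\mu, \sigma)$ yields such a Hilbertian metric (for Gaussians it coincides with the 2-Wasserstein distance). A minimal sketch of the resulting kernel from Eq. (4), with illustrative hyperparameters:

```python
import numpy as np

def k_p(p, q, lam=1.0, nu=1.0):
    """Kernel (4) on Gaussian predictions, using the Hilbertian metric
    d(p, p') = ||phi(p) - phi(p')||_2 with phi(p) = (mean, std).
    Valid for any lam > 0 and nu in (0, 2]."""
    d = np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    return float(np.exp(-lam * d ** nu))

same = k_p((0.0, 1.0), (0.0, 1.0))   # identical predictions
far = k_p((0.0, 1.0), (10.0, 1.0))   # very different means
```

Identical predictions attain the maximal kernel value of one, while distant predictions decay towards zero at a rate controlled by $\lambda$ and $\nu$.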
+
+# 3.2 ESTIMATION
+
+Let $(X_{1},Y_{1}),\ldots ,(X_{n},Y_{n})$ be a data set of features and targets which are i.i.d. according to the law of $(X,Y)$ . Moreover, for notational brevity, for $(p,y),(p^{\prime},y^{\prime})\in \mathcal{P}\times \mathcal{V}$ we let
+
+$$
+\begin{aligned} h\big((p, y), (p', y')\big) :={} & k\big((p, y), (p', y')\big) - \mathbb{E}_{Z \sim p}\, k\big((p, Z), (p', y')\big) \\ & - \mathbb{E}_{Z' \sim p'}\, k\big((p, y), (p', Z')\big) + \mathbb{E}_{Z \sim p,\, Z' \sim p'}\, k\big((p, Z), (p', Z')\big). \end{aligned}
+$$
+
+Note that in contrast to the regular MMD we marginalize out $Z$ and $Z'$ . Similar to the MMD, there exist consistent estimators of the SKCE, both biased and unbiased.
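For concreteness, assume univariate Gaussian predictions $p = \mathcal{N}(\mu, \sigma^2)$ and a tensor product kernel with Gaussian target kernel $k_{\mathcal{V}}(y, y') = \exp(-\gamma (y - y')^2)$. The expectations over $Z$ and $Z'$ in $h$ are then available in closed form via the standard Gaussian integral $\mathbb{E}_{Z \sim \mathcal{N}(\mu, \sigma^2)} e^{-\gamma (Z - y)^2} = (1 + 2\gamma\sigma^2)^{-1/2} \exp\bigl(-\gamma(\mu - y)^2 / (1 + 2\gamma\sigma^2)\bigr)$. A sketch (the function names are ours; `k_p` is any kernel on predictions):

```python
import numpy as np

def gauss_expect(mu, var, y, gamma):
    """E_{Z ~ N(mu, var)} exp(-gamma (Z - y)^2) in closed form."""
    c = 1.0 + 2.0 * gamma * var
    return np.exp(-gamma * (mu - y) ** 2 / c) / np.sqrt(c)

def h(p, y, q, y2, k_p, gamma=1.0):
    """h((p, y), (p', y')) for Gaussian predictions p = (mu, var) and
    the tensor product kernel k = k_P * exp(-gamma (y - y')^2), with
    the expectations over Z ~ p and Z' ~ p' marginalized analytically."""
    (mu, var), (mu2, var2) = p, q
    t1 = np.exp(-gamma * (y - y2) ** 2)
    t2 = gauss_expect(mu, var, y2, gamma)
    t3 = gauss_expect(mu2, var2, y, gamma)
    # Z - Z' ~ N(mu - mu2, var + var2), so the double expectation is
    # the same one-dimensional integral evaluated at zero.
    t4 = gauss_expect(mu - mu2, var + var2, 0.0, gamma)
    return k_p(p, q) * (t1 - t2 - t3 + t4)

# h at identical arguments is a squared RKHS norm, hence non-negative.
val = h((0.0, 1.0), 0.3, (0.0, 1.0), 0.3, lambda p, q: 1.0)
```

This analytic marginalization is exactly the variance reduction mentioned in Section 2; when the expectations are intractable they can instead be approximated by (quasi) Monte Carlo integration.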
+
+Lemma 1. The plug-in estimator of $\mathrm{SKCE}_k$ is non-negatively biased. It is given by
+
+$$
+\widehat {\mathrm {S K C E}} _ {k} = \frac {1}{n ^ {2}} \sum_ {i, j = 1} ^ {n} h \big ((P _ {X _ {i}}, Y _ {i}), (P _ {X _ {j}}, Y _ {j}) \big).
+$$
+
+Inspired by the block tests for the regular MMD (Zaremba et al., 2013), we define the following class of unbiased estimators. Note that in contrast to $\widehat{\mathrm{SKCE}}_k$ they do not include terms of the form $h\big((P_{X_i},Y_i),(P_{X_i},Y_i)\big)$ .
+
+Lemma 2. The block estimator of $\mathrm{SKCE}_k$ with block size $B\in \{2,\ldots ,n\}$ given by
+
+$$
+\widehat{\mathrm{SKCE}}_{k,B}:= \left\lfloor \frac{n}{B}\right\rfloor^{-1}\sum_{b = 1}^{\lfloor n / B\rfloor}\binom {B}{2}^{-1}\sum_{(b - 1)B < i < j\leq bB}h\bigl((P_{X_{i}},Y_{i}),(P_{X_{j}},Y_{j})\bigr),
+$$
+
+is an unbiased estimator of $\mathrm{SKCE}_k$ .
+
+The extremal estimator with $B = n$ is a so-called U-statistic of $\mathrm{SKCE}_k$ (Hoeffding, 1948; van der Vaart, 1998), and hence it is the minimum variance unbiased estimator. All presented estimators are consistent, i.e., they converge to $\mathrm{SKCE}_k$ almost surely as the number $n$ of data points goes to infinity. The sample complexity of $\widehat{\mathrm{SKCE}}_k$ and $\widehat{\mathrm{SKCE}}_{k,B}$ is $O(n^{2})$ and $O(Bn)$ , respectively.
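A sketch of the block estimator from Lemma 2 (the names and the toy instance of $h$ are ours): combining a constant kernel on predictions with a linear kernel on targets gives $h\bigl((p, y), (p', y')\bigr) = (y - \mu)(y' - \mu')$, which is not characteristic but detects bias in the predicted means and keeps the example short.

```python
import itertools
import numpy as np

def block_skce(predictions, targets, h, block_size=2):
    """Unbiased block estimator of SKCE_k (Lemma 2): average, over
    consecutive non-overlapping blocks, the within-block U-statistic
    of the pairwise function h."""
    n_blocks = len(targets) // block_size
    total = 0.0
    for b in range(n_blocks):
        idx = range(b * block_size, (b + 1) * block_size)
        pairs = list(itertools.combinations(idx, 2))
        total += sum(
            h(predictions[i], targets[i], predictions[j], targets[j])
            for i, j in pairs
        ) / len(pairs)
    return total / n_blocks

# Toy instance: constant prediction kernel, linear target kernel.
def h_linear(p, y, q, y2):
    return (y - p[0]) * (y2 - q[0])

rng = np.random.default_rng(2)
mu = rng.normal(size=1000)
preds = list(zip(mu, np.ones(1000)))
y_cal = mu + rng.normal(size=1000)        # calibrated: Y | X ~ N(mu, 1)
y_off = mu + 1.0 + rng.normal(size=1000)  # predictions biased by 1
est_cal = block_skce(preds, y_cal, h_linear)
est_off = block_skce(preds, y_off, h_linear)
```

On this synthetic data the estimate fluctuates around zero for the calibrated model and is clearly positive for the biased one; with $B = 2$ the cost is linear in $n$, matching the stated $O(Bn)$ sample complexity.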
+
+# 3.3 CALIBRATION TESTS
+
+A fundamental issue with calibration errors in general, including the ECE, is that their empirical estimates do not answer the question of whether a model is actually calibrated. Even if the measure is guaranteed to be zero if and only if the model is calibrated, the estimates for calibrated models are usually non-zero due to randomness in the data and (possibly) the estimation procedure. In classification, statistical hypothesis tests of the null hypothesis
+
+$$
+H_0 \colon \text{model } P \text{ is calibrated},
+$$
+
+so-called calibration tests, have been proposed as a tool for checking rigorously if $P$ is calibrated (Brocker & Smith, 2007; Vaicenavicius et al., 2019; Widmann et al., 2019). For multi-class classification, Widmann et al. (2019) suggested calibration tests based on the asymptotic distributions of estimators of the previously formulated KCE. Although for finite data sets the asymptotic distributions are only approximations of the actual distributions of these estimators, in their experiments with 10 classes the resulting $p$ -value approximations seemed reliable, whereas $p$ -values obtained by so-called consistency resampling (Brocker & Smith, 2007; Vaicenavicius et al., 2019) underestimated the $p$ -value and hence rejected the null hypothesis too often (Widmann et al., 2019).
+
+For fixed block sizes $B$ we have $\sqrt{\lfloor n / B\rfloor}\bigl(\widehat{\mathrm{SKCE}}_{k,B} - \mathrm{SKCE}_k\bigr)\stackrel{d}{\to}\mathcal{N}\bigl(0,\sigma_B^2\bigr)$ as $n\to\infty$ , and, under $H_0$ , $n\widehat{\mathrm{SKCE}}_{k,n}\stackrel{d}{\to}\sum_{i = 1}^{\infty}\lambda_i(Z_i - 1)$ as $n\to \infty$ , where the $Z_{i}$ are independent $\chi_1^2$ -distributed random variables. See Appendix B for details and definitions of the involved constants. From these results one can derive calibration tests that extend and generalize the existing tests for classification problems, as explained in Remarks B.1 and B.2. Our formulation also illustrates the close connection of these tests to different two-sample tests (Gretton et al., 2007; Zaremba et al., 2013).
+
+# 4 ALTERNATIVE APPROACHES
+
+For two-sample tests, Chwialkowski et al. (2015) suggested the use of the so-called unnormalized mean embedding (UME) to overcome the quadratic sample complexity of the minimum variance unbiased estimator and its intractable asymptotic distribution. As we show in Appendix C, there exists an analogous measure of calibration, termed unnormalized calibration mean embedding (UCME), with a corresponding calibration mean embedding (CME) test.
+
+As an alternative to our construction based on the joint distributions of $(P_X,Y)$ and $(P_X,Z_X)$ , one could try to directly compare the conditional distributions $\mathbb{P}(Y|P_X)$ and $\mathbb{P}(Z_X|P_X) = P_X$ . For instance, Ren et al. (2016) proposed the conditional MMD based on the so-called conditional kernel mean embedding (Song et al., 2009; 2013). However, as noted by Park & Muandet (2020), its common definition as an operator between two RKHSs rests on very restrictive assumptions, which are violated in many situations (see, e.g., Fukumizu et al., 2013, Footnote 4) and typically require regularized estimates. Hence, even theoretically, the conditional MMD is often "not an exact measure of discrepancy between conditional distributions" (Park & Muandet, 2020). In contrast, the maximum conditional mean discrepancy (MCMD) proposed in concurrent work by Park & Muandet (2020) is a random variable derived from much weaker measure-theoretic assumptions. The MCMD provides a local discrepancy conditional on random predictions, whereas the KCE is a global real-valued summary of these local discrepancies.
+
+# 5 EXPERIMENTS
+
+In our experiments we evaluate the computational efficiency and empirical properties of the proposed calibration error estimators and calibration tests on both calibrated and uncalibrated models. By means of a classic regression problem from the statistics literature, we demonstrate that the estimators and tests can be used to evaluate the calibration of neural network models and ensembles of such models. This section contains only a high-level overview of these experiments to conserve space; all experimental details are provided in Appendix A.
+
+# 5.1 EMPIRICAL PROPERTIES AND COMPUTATIONAL EFFICIENCY
+
+We evaluate error, variance, and computation time of calibration error estimators for calibrated and uncalibrated Gaussian predictive models in synthetic regression problems. The results empirically confirm the consistency of the estimators and the computational efficiency of the estimator with block size $B = 2$ , which, however, comes at the cost of increased error and variance.
+
+Additionally, we evaluate empirical test errors of calibration tests at a fixed significance level $\alpha = 0.05$ . The evaluations, visualized in Fig. 2 for models with ten-dimensional targets, demonstrate empirically that the percentage of incorrect rejections of $H_0$ converges to the set significance level as the number of samples increases. Moreover, the results highlight the computational burden of the calibration test that estimates quantiles of the intractable asymptotic distribution of $n\widehat{\mathrm{SKCE}}_{k,n}$ by bootstrapping.
+
+As expected, due to the larger variance of $\widehat{\mathrm{SKCE}}_{k,2}$ , the test with fixed block size $B = 2$ shows decreased test power, although it is computationally much more efficient.
+
+
+Figure 2: Empirical test errors for 500 data sets of $n \in \{4,16,64,256,1024\}$ samples from models with targets of dimension $d = 10$ . The dashed black line indicates the set significance level $\alpha = 0.05$ .
+
+# 5.2 FRIEDMAN 1 REGRESSION PROBLEM
+
+The Friedman 1 regression problem (Friedman, 1979; 1991; Friedman et al., 1983) is a classic non-linear regression problem with ten-dimensional features and real-valued targets with Gaussian noise. We train, ten times with different initial parameters, a Gaussian predictive model whose mean is modelled by a shallow neural network and whose variance is a single scalar parameter (consistent with the data-generating model). Figure 3 shows estimates of the mean squared error (MSE), the average negative log-likelihood (NLL), $\mathrm{SKCE}_k$ , and a $p$ -value approximation for these models and their ensemble on the training and a separate test data set. All estimates consistently indicate that the models are overfit after 1500 training iterations. The estimates of $\mathrm{SKCE}_k$ and the $p$ -values allow us to focus on calibration specifically, whereas MSE indicates accuracy only and NLL, like any proper scoring rule (Bröcker, 2009), provides a summary of calibration and accuracy. The estimation of $\mathrm{SKCE}_k$ in addition to NLL could serve as another source of information for early stopping and model selection.
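For reference, Friedman 1 data can be generated directly from its defining formula $y = 10\sin(\pi x_1 x_2) + 20(x_3 - 0.5)^2 + 10 x_4 + 5 x_5 + \varepsilon$ with uniform features; the unit noise level below is an assumption for illustration, and the sample sizes follow the train/test split described in Figure 3:

```python
import numpy as np

def friedman1(n, noise_std=1.0, seed=0):
    """Friedman 1 data: ten i.i.d. uniform features; only the first
    five enter the regression function."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=(n, 10))
    f = (10.0 * np.sin(np.pi * x[:, 0] * x[:, 1])
         + 20.0 * (x[:, 2] - 0.5) ** 2
         + 10.0 * x[:, 3]
         + 5.0 * x[:, 4])
    y = f + noise_std * rng.normal(size=n)
    return x, y

x_train, y_train = friedman1(100, seed=0)
x_test, y_test = friedman1(50, seed=1)
```

The five uninformative features make the problem a useful stress test for overfitting, which is what the calibration estimates in Figure 3 track over training iterations.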
+
+
+Figure 3: Mean squared error (MSE), average negative log-likelihood (NLL), $\widehat{\mathrm{SKCE}}_k$ (SKCE (biased)), and $p$ -value approximation ( $p$ -value) of ten Gaussian predictive models for the Friedman 1 regression problem versus the number of training iterations. Evaluations on the training data set (100 samples) are displayed in green and orange, and on the test data set (50 samples) in blue and purple. The green and blue line and their surrounding bands represent the mean and the range of the evaluations of the ten models. The orange and purple lines visualize the evaluations of their ensemble.
+
+# 6 CONCLUSION
+
+We presented a framework of calibration estimators and tests for any probabilistic model that captures both classification and regression problems of arbitrary dimension as well as other predictive models. We successfully applied it for measuring calibration of (ensembles of) neural network models.
+
+Our framework highlights connections of calibration to two-sample tests and optimal transport theory which we expect to be fruitful for future research. For instance, the power of calibration tests could be improved by heuristics and theoretical results about suitable kernel choices or hyperparameters (cf. Jitkrittum et al., 2016). It would also be interesting to investigate alternatives to the KCE captured by our framework, e.g., by exploiting recent advances in optimal transport theory (cf. Genevay et al., 2016).
+
+Since the presented estimators of $\mathrm{SKCE}_k$ are differentiable, we imagine that our framework could be helpful for improving calibration of predictive models, during training (cf. Kumar et al., 2018) or post-hoc. Currently, many calibration methods (see, e.g., Guo et al., 2017; Kull et al., 2019; Song et al., 2019) are based on optimizing the log-likelihood since it is a strictly proper scoring rule and thus encourages both accurate and reliable predictions. However, as for any proper scoring rule, "Per se, it is impossible to say how the score will rank unreliable forecast schemes [...]. The lack of reliability of one forecast scheme might be outbalanced by the lack of resolution of the other" (Brocker, 2009). In other words, if one does not use a calibration method such as temperature scaling (Guo et al., 2017) that keeps accuracy invariant, it is unclear whether the resulting model trades off calibration for accuracy when log-likelihood is used for re-calibration. Thus, hypothetically, flexible calibration methods might benefit from using the presented calibration error estimators.
+
+# ACKNOWLEDGMENTS
+
+We thank the reviewers for all the constructive feedback on our paper. This research is financially supported by the Swedish Research Council via the projects Learning of Large-Scale Probabilistic Dynamical Models (contract number: 2016-04278), Counterfactual Prediction Methods for Heterogeneous Populations (contract number: 2018-05040), and Handling Uncertainty in Machine Learning Systems (contract number: 2020-04122), by the Swedish Foundation for Strategic Research via the project Probabilistic Modeling and Inference for Machine Learning (contract number: ICA16-0015), by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and by ELLIIT.
+
+# REFERENCES
+
+M. A. Arcones and E. Giné. On the bootstrap of $U$ and $V$ statistics. The Annals of Statistics, 20(2):655-674, 1992.
+C. Berg, J. P. R. Christensen, and P. Ressel. Harmonic Analysis on Semigroups. Springer New York, 1984.
+J. Brocker and L. A. Smith. Increasing the reliability of reliability diagrams. Weather and Forecasting, 22(3):651-661, June 2007.
+Jochen Brocker. Reliability, sufficiency, and the decomposition of proper scores. Quarterly Journal of the Royal Meteorological Society, 135(643):1512-1519, July 2009.
+Y. Chen, T. T. Georgiou, and A. Tannenbaum. Optimal transport for Gaussian mixture models. IEEE Access, 7:6269-6278, 2019.
+Y. Chen, J. Ye, and J. Li. Aggregated Wasserstein distance and state registration for hidden Markov models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(9):2133-2147, September 2020.
+K. Chwialkowski, A. Ramdas, D. Sejdinovic, and A. Gretton. Fast two-sample testing with analytic representations of probability measures. In Proceedings of the 28th International Conference on Neural Information Processing Systems, pp. 1981-1989, Cambridge, MA, USA, 2015. MIT Press.
+
+M. H. DeGroot and S. E. Fienberg. The comparison and evaluation of forecasters. The Statistician, 32(1/2):12, March 1983.
+C. Deledalle, S. Parameswaran, and T. Q. Nguyen. Image denoising with generalized Gaussian mixture model patch priors. SIAM Journal on Imaging Sciences, 11(4):2568-2609, January 2018.
+J. Delon and A. Desolneux. A Wasserstein-type distance in the space of Gaussian mixture models. SIAM Journal on Imaging Sciences, 13(2):936-970, January 2020.
+R. M. Dudley. Real analysis and probability. Wadsworth & Brooks/Cole Pub. Co, Pacific Grove, Calif, 1989.
+M. Fasiolo, S. N. Wood, M. Zaffran, R. Nedellec, and Y. Goude. Fast calibrated additive quantile regression. Journal of the American Statistical Association, pp. 1-11, March 2020.
+J. H. Friedman. A tree-structured approach to nonparametric multiple regression. In Lecture Notes in Mathematics, pp. 5-22. Springer Berlin Heidelberg, 1979.
+J. H. Friedman. Multivariate adaptive regression splines. The Annals of Statistics, 19(1):1-67, 1991.
+J. H. Friedman, E. Grosse, and W. Stuetzle. Multidimensional additive spline approximation. SIAM Journal on Scientific and Statistical Computing, 4(2):291-301, June 1983.
+K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research, 5(Jan):73-99, 2004.
+K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In Advances in Neural Information Processing Systems 20, pp. 489-496. 2008.
+K. Fukumizu, L. Song, and A. Gretton. Kernel Bayes' rule: Bayesian inference with positive definite kernels. Journal of Machine Learning Research, 14(82):3753-3783, 2013.
+M. Gelbrich. On a formula for the $L^2$ Wasserstein metric between measures on Euclidean and Hilbert spaces. Mathematische Nachrichten, 147(1):185-203, 1990.
+A. Genevay, M. Cuturi, G. Peyre, and F. R. Bach. Stochastic optimization for large-scale optimal transport. In Advances in Neural Information Processing Systems 29, pp. 3440-3448. 2016.
+X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pp. 249-256. PMLR, 5 2010.
+E. Gómez, M. A. Gómez-Villegas, and J. M. Marín. A multivariate generalization of the power exponential family of distributions. Communications in Statistics - Theory and Methods, 27(3):589-600, January 1998.
+E. Gómez-Sánchez-Manzano, M. A. Gómez-Villegas, and J. M. Marín. Multivariate exponential power distributions as mixtures of normal distributions with Bayesian applications. Communications in Statistics - Theory and Methods, 37(6):972-985, February 2008.
+A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample-problem. In Advances in Neural Information Processing Systems 19, pp. 513-520. 2007.
+A. Gretton, K. Fukumizu, Z. Harchaoui, and B. K. Sriperumbudur. A fast, consistent kernel two-sample test. In Advances in Neural Information Processing Systems 22, pp. 673-681. 2009.
+C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1321-1330. PMLR, 8 2017.
+F. K. Gustafsson, M. Danelljan, and T. B. Schön. Evaluating scalable Bayesian deep learning methods for robust computer vision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020.
+
+Y. H. S. Ho and S. M. S. Lee. Calibrated interpolated confidence intervals for population quantiles. Biometrika, 92(1):234-241, March 2005.
+W. Hoeffding. A class of statistics with asymptotically normal distribution. The Annals of Mathematical Statistics, 19(3):293-325, September 1948.
+H. Hotelling. The generalization of student's ratio. The Annals of Mathematical Statistics, 2(3): 360-378, August 1931.
+M. Innes. Flux: Elegant machine learning with Julia. Journal of Open Source Software, 3(25):602, May 2018.
+M. Innes, E. Saba, K. Fischer, D. Gandhi, M. C. Rudilosso, N. M. Joy, T. Karmali, A. Pal, and V. Shah. Fashionable modelling with Flux, 2018.
+W. Jitkrittum, Z. Szabó, K. P. Chwialkowski, and A. Gretton. Interpretable distribution features with maximum testing power. In Advances in Neural Information Processing Systems 29, pp. 181-189. 2016.
+N. L. Johnson, S. Kotz, and N. Balakrishnan. Continuous univariate distributions: Vol. 1. Wiley, New York, 2nd edition, 1994.
+D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR (Poster), 2015.
+V. Kuleshov, N. Fenner, and S. Ermon. Accurate uncertainties for deep learning using calibrated regression. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 2796-2804. PMLR, 7 2018.
+M. Kull, T. Silva Filho, and P. Flach. Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pp. 623-631. PMLR, 4 2017.
+M. Kull, M. Perello Nieto, M. Kangsepp, T. Silva Filho, H. Song, and P. Flach. Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with Dirichlet calibration. In Advances in Neural Information Processing Systems 32, pp. 12316-12326. 2019.
+A. Kumar, S. Sarawagi, and U. Jain. Trainable calibration measures for neural networks from kernel mean embeddings. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 2805-2814. PMLR, 7 2018.
+A. M. Mathai and S. B. Provost. Quadratic forms in random variables: Theory and applications, volume 126. M. Dekker, New York, 1992.
+C. A. Micchelli and M. Pontil. On learning vector-valued functions. Neural Computation, 17(1): 177-204, January 2005.
+A. Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429-443, June 1997.
+A. H. Murphy and R. L. Winkler. Reliability of subjective probability forecasts of precipitation and temperature. Applied Statistics, 26(1):41, 1977.
+M. P. Naeini, G. Cooper, and M. Hauskrecht. Obtaining well calibrated probabilities using Bayesian binning. In AAAI Conference on Artificial Intelligence, 2015.
+J. Park and K. Muandet. A measure-theoretic approach to kernel conditional mean embeddings. In Advances in Neural Information Processing Systems, volume 33, pp. 21247-21259, 2020.
+G. Peyre and M. Cuturi. Computational optimal transport. Foundations and Trends in Machine Learning, 11(5-6):355-607, 2019.
+J. Platt. Probabilities for SV Machines, pp. 61-73. MIT Press, 2000.
+
+Y. Ren, J. Zhu, J. Li, and Y. Luo. Conditional generative moment-matching networks. In Advances in Neural Information Processing Systems 29, pp. 2928-2936. 2016.
+M. Rueda, S. Martínez-Puertas, H. Martínez-Puertas, and A. Arcos. Calibration methods for estimating quantiles. Metrika, 66(3):355-371, December 2006.
+R. J. Serfling. Approximation Theorems of Mathematical Statistics. John Wiley & Sons, Inc., November 1980.
+H. Song, T. Diethe, M. Kull, and P. Flach. Distribution calibration for regression. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 5897-5906. PMLR, 6 2019.
+L. Song, J. Huang, A. J. Smola, and K. Fukumizu. Hilbert space embeddings of conditional distributions with applications to dynamical systems. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pp. 961-968. Association for Computing Machinery, 2009.
+L. Song, K. Fukumizu, and A. Gretton. Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models. IEEE Signal Processing Magazine, 30(4):98-111, July 2013.
+B. K. Sriperumbudur, K. Fukumizu, A. Gretton, B. Schölkopf, and G. R. G. Lanckriet. On integral probability metrics, $\phi$-divergences and binary classification, 2009.
+B. K. Sriperumbudur, K. Fukumizu, and G. R. G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. Journal of Machine Learning Research, 12(70):2389-2410, 2011.
+B. K. Sriperumbudur, K. Fukumizu, A. Gretton, B. Schölkopf, and G. R. G. Lanckriet. On the empirical estimation of integral probability metrics. Electronic Journal of Statistics, 6:1550-1599, 2012.
+Z. Szabó and B. K. Sriperumbudur. Characteristic and universal tensor product kernels. Journal of Machine Learning Research, 18(233):1-29, 2018.
+M. Taillardat, O. Mestre, M. Zamo, and P. Naveau. Calibrated ensemble forecasts using quantile regression forests and ensemble model output statistics. Monthly Weather Review, 144(6):2375-2393, June 2016.
+J. Vaicenavicius, D. Widmann, C. Andersson, F. Lindsten, J. Roll, and T. B. Schön. Evaluating model calibration in classification. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pp. 3459-3467. PMLR, 4 2019.
+A. W. van der Vaart. Asymptotic Statistics. Cambridge University Press, October 1998.
+C. Villani. Optimal Transport. Springer Berlin Heidelberg, 2009.
+D. Widmann, F. Lindsten, and D. Zachariah. Calibration tests in multi-class classification: A unifying framework. In Advances in Neural Information Processing Systems 32, pp. 12236-12246. 2019.
+S. J. Yakowitz and J. D. Spragins. On the identifiability of finite mixtures. The Annals of Mathematical Statistics, 39(1):209-214, February 1968.
+B. Zadrozny. Reducing multiclass to binary by coupling probability estimates. In Advances in Neural Information Processing Systems 14, pp. 1041-1048. MIT Press, 2002.
+W. Zaremba, A. Gretton, and M. Blaschko. B-test: A non-parametric, low variance kernel two-sample test. In Advances in Neural Information Processing Systems 26, pp. 755-763. 2013.
+
+# A EXPERIMENTS
+
+The source code of the experiments and instructions for reproducing the results are available at https://github.com/devmotion/Calibration_ICLR2021. Additional material such as automatically generated HTML output and Jupyter notebooks is available at https://devmotion.github.io/Calibration_ICLR2021/.
+
+# A.1 ORDINARY LEAST SQUARES
+
+We consider a regression problem with scalar feature $X$ and scalar target $Y$ with input-dependent Gaussian noise, inspired by a problem of Gustafsson et al. (2020). Feature $X$ is distributed uniformly at random in $[-1, 1]$, and target $Y$ is distributed according to
+
+$$
+Y \sim \sin (\pi X) + | 1 + X | \epsilon ,
+$$
+
+where $\epsilon \sim \mathcal{N}(0,0.15^2)$ . We train a linear regression model $P$ with homoscedastic variance using ordinary least squares and a data set of 100 i.i.d. pairs of feature $X$ and target $Y$ (see Fig. 4).
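The setup above can be sketched in a few lines. The following is a minimal illustration in Python with NumPy (the paper's actual implementation is in Julia, and all variable names here are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Data-generating process: X ~ U(-1, 1) and
# Y = sin(pi * X) + |1 + X| * eps with eps ~ N(0, 0.15^2).
n = 100
X = rng.uniform(-1.0, 1.0, size=n)
Y = np.sin(np.pi * X) + np.abs(1.0 + X) * rng.normal(0.0, 0.15, size=n)

# Ordinary least squares for the linear model Y = a + b * X + noise.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

# Homoscedastic variance: maximum likelihood estimate from the residuals.
residuals = Y - A @ coef
sigma2 = float(np.mean(residuals**2))
```

The model then predicts the normal distribution $\mathcal{N}(a + b x, \sigma^2)$ for every feature $x$.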
+
+
+Figure 4: Data generating distribution $\mathbb{P}(Y|X)$ and predicted distribution $P(Y|X)$ of the linear regression model. Training data is indicated by orange dots.
+
+A validation data set of $n = 50$ i.i.d. pairs of $X$ and $Y$ is used to evaluate the empirical cumulative probability
+
+$$
+n^{-1} \sum_{i=1}^{n} \mathbb{1}_{[0, \tau]}\big(P(Y \leq Y_i \mid X = X_i)\big)
+$$
+
+of model $P$ for quantile levels $\tau \in [0,1]$ . Model $P$ would be quantile calibrated (Song et al., 2019) if
+
+$$
+\tau = \mathbb{P}_{X', Y'}\big(P(Y \leq Y' \mid X = X') \leq \tau\big)
+$$
+
+for all $\tau \in [0,1]$ , where $(X,Y)$ and $(X',Y')$ are independent identically distributed pairs of random variables (see Fig. 5).
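The empirical cumulative probabilities can be computed directly from the predicted cumulative distribution functions. The following sketch uses a hypothetical homoscedastic Gaussian model, where `mean` and `sigma` stand in for fitted parameters that are not reproduced here:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Illustrative validation data from the setup of Appendix A.1.
n = 50
X = rng.uniform(-1.0, 1.0, size=n)
Y = np.sin(np.pi * X) + np.abs(1.0 + X) * rng.normal(0.0, 0.15, size=n)

mean = lambda x: 0.6 * x  # hypothetical fitted linear predictor
sigma = 0.5               # hypothetical fitted standard deviation

# Predicted cumulative probabilities P(Y <= Y_i | X = X_i).
pits = norm.cdf(Y, loc=mean(X), scale=sigma)

# Empirical cumulative probability n^{-1} * sum_i 1[pits_i <= tau]
# for a grid of quantile levels tau.
taus = np.linspace(0.0, 1.0, 101)
empirical = np.mean(pits[None, :] <= taus[:, None], axis=1)
```

For a quantile-calibrated model the curve `empirical` versus `taus` approaches the diagonal (the green curve in Fig. 5).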
+
+Additionally, we compute a $p$-value estimate for the null hypothesis $H_0$ that model $P$ is calibrated by estimating the quantile of the asymptotic distribution of $n\widehat{\mathrm{SKCE}}_{k,n}$ with 100000 bootstrap samples on the validation data set (see Remark B.2). Kernel $k$ is chosen as the tensor product kernel
+
+$$
+\begin{aligned}
+k\big((p, y), (p', y')\big) &= \exp\big(-W_2(p, p')\big) \exp\big(-(y - y')^2 / 2\big) \\
+&= \exp\Big(-\sqrt{(m_p - m_{p'})^2 + (\sigma_p - \sigma_{p'})^2}\Big) \exp\big(-(y - y')^2 / 2\big),
+\end{aligned}
+$$
+
+where $W_{2}$ is the 2-Wasserstein distance and $m_p, m_{p'}$ and $\sigma_p, \sigma_{p'}$ denote the mean and the standard deviation of the normal distributions $p$ and $p'$ (see Appendix D.1). We obtain $p < 0.05$ in our experiment, and hence the calibration test rejects $H_{0}$ at the significance level $\alpha = 0.05$ .
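For univariate normal predictions the 2-Wasserstein distance has the closed form used above, so the kernel can be evaluated exactly. A small sketch (function names are ours):

```python
import math

def wasserstein2_gauss1d(m_p, s_p, m_q, s_q):
    """2-Wasserstein distance between the univariate normal
    distributions N(m_p, s_p^2) and N(m_q, s_q^2)."""
    return math.sqrt((m_p - m_q) ** 2 + (s_p - s_q) ** 2)

def kernel(p, y, q, yq):
    """Tensor product kernel k((p, y), (p', y')) for univariate
    normal predictions, each given as a (mean, std) pair."""
    mp, sp = p
    mq, sq = q
    return math.exp(-wasserstein2_gauss1d(mp, sp, mq, sq)) * math.exp(-(y - yq) ** 2 / 2)
```

The kernel is symmetric, bounded by 1, and equals 1 exactly when both prediction-target pairs coincide.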
+
+
+Figure 5: Cumulative probability versus quantile level for the linear regression model on the validation data (orange curve). The green curve indicates the theoretical ideal for a quantile-calibrated model.
+
+# A.2 EMPIRICAL PROPERTIES AND COMPUTATIONAL EFFICIENCY
+
+We study two setups with $d$-dimensional targets $Y$ and normal distributions $P_{X}$ of the form $\mathcal{N}(c\mathbf{1}_d,0.1^2\mathbf{I}_d)$ as predictions, where $c\sim \mathrm{U}(0,1)$. Since calibration analysis is based only on the targets and the predicted distributions, we neglect features $X$ in these experiments and specify only the distributions of $Y$ and $P_{X}$.
+
+In the first setup we simulate a calibrated model. We achieve this by sampling targets from the predicted distributions, i.e., by defining the conditional distribution of $Y$ given $P_{X}$ as
+
+$$
+Y \mid P _ {X} = \mathcal {N} (\mu , \Sigma) \sim \mathcal {N} (\mu , \Sigma).
+$$
+
+In the second setup we simulate an uncalibrated model of the form
+
+$$
+Y \mid P _ {X} = \mathcal {N} (\mu , \Sigma) \sim \mathcal {N} ([ 0. 1, \mu_ {2}, \dots , \mu_ {d} ] ^ {\top}, \Sigma).
+$$
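Both setups can be simulated directly, since only the pairs $(P_X, Y)$ are needed. A sketch with illustrative names, representing each prediction by its mean vector (the covariance is fixed to $0.1^2 \mathbf{I}_d$):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_predictions_and_targets(n, d, calibrated=True):
    """Draw n pairs (P_X, Y): predictions P_X = N(c * 1_d, 0.1^2 * I_d)
    with c ~ U(0, 1); targets are sampled from P_X (calibrated) or
    from a distribution whose first mean coordinate is replaced by
    0.1 (uncalibrated)."""
    c = rng.uniform(0.0, 1.0, size=n)
    means = np.tile(c[:, None], (1, d))  # c * 1_d for each sample
    target_means = means.copy()
    if not calibrated:
        target_means[:, 0] = 0.1
    Y = target_means + 0.1 * rng.normal(size=(n, d))
    return means, Y

means, Y = sample_predictions_and_targets(1024, 10, calibrated=False)
```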
+
+We evaluate the convergence and computation time of the biased estimator $\widehat{\mathrm{SKCE}}_k$ and the unbiased estimator $\widehat{\mathrm{SKCE}}_{k,B}$ with blocks of size $B\in \{2,\sqrt{n},n\}$. We use the tensor product kernel
+
+$$
+\begin{aligned}
+k\big((p, y), (p', y')\big) &= \exp\big(-W_2(p, p')\big) \exp\big(-(y - y')^2 / 2\big) \\
+&= \exp\Big(-\sqrt{(m_p - m_{p'})^2 + (\sigma_p - \sigma_{p'})^2}\Big) \exp\big(-(y - y')^2 / 2\big),
+\end{aligned}
+$$
+
+where $W_{2}$ is the 2-Wasserstein distance and $m_p, m_{p'}$ and $\sigma_p, \sigma_{p'}$ denote the mean and the standard deviation of the normal distributions $p$ and $p'$ .
+
+Figures 6 to 9 visualize the mean absolute error and the variance of the resulting estimates for the calibrated and the uncalibrated model with dimensions $d = 1$ and $d = 10$ for 500 independently drawn data sets of $n \in \{4, 16, 64, 256, 1024\}$ samples of $(P_X, Y)$ . Computation time indicates the minimum time in the 500 evaluations on a computer with a 3.6 GHz processor. The ground truth values of the uncalibrated models were estimated by averaging the estimates of $\widehat{\mathrm{SKCE}}_{k,1000}$ for 1000 independently drawn data sets of 1000 samples of $(P_X, Y)$ (independent from the data sets used for the evaluation of the estimates). Figures 6 and 7 illustrate that the computational efficiency of $\widehat{\mathrm{SKCE}}_{k,2}$ in comparison with the other estimators comes at the cost of increased error and variance for the calibrated models for fixed numbers of samples.
+
+We compare calibration tests based on the (tractable) asymptotic distribution of $\sqrt{\lfloor n / B\rfloor}\widehat{\mathrm{SKCE}}_{k,B}$ with fixed block size $B\in \{2,\sqrt{n}\}$ (see Remark B.1), the (intractable) asymptotic distribution of $n\widehat{\mathrm{SKCE}}_{k,n}$, which is approximated with 1000 bootstrap samples (see Remark B.2), and a Hotelling's
+
+
+
+
+
+Figure 6: Mean absolute error and variance of 500 calibration error estimates for data sets of $n \in \{4,16,64,256,1024\}$ samples from the calibrated model of dimension $d = 1$ .
+
+
+
+
+
+
+
+
+Figure 7: Mean absolute error and variance of 500 calibration error estimates for data sets of $n \in \{4,16,64,256,1024\}$ samples from the calibrated model of dimension $d = 10$ .
+
+
+
+
+
+
+
+
+Figure 8: Mean absolute error and variance of 500 calibration error estimates for data sets of $n \in \{4,16,64,256,1024\}$ samples from the uncalibrated model of dimension $d = 1$ .
+
+
+
+
+
+
+
+
+Figure 9: Mean absolute error and variance of 500 calibration error estimates for data sets of $n \in \{4,16,64,256,1024\}$ samples from the uncalibrated model of dimension $d = 10$ .
+
+
+
+
+$T^2$-statistic for $\mathrm{UCME}_{k,10}$ with 10 test locations (see Appendix C). We compute the empirical test errors (percentage of false rejections of the null hypothesis $H_0$ that model $P$ is calibrated if $P$ is calibrated, and percentage of false non-rejections of $H_0$ if $P$ is not calibrated) at a fixed significance level $\alpha = 0.05$ and the minimal computation time for the calibrated and the uncalibrated model with dimensions $d = 1$ and $d = 10$ for 500 independently drawn data sets of $n \in \{4,16,64,256,1024\}$ samples of $(P_X,Y)$. The 10 test predictions of the CME test are of the form $\mathcal{N}(m,0.1^2\mathbf{I}_d)$, where $m$ is distributed uniformly at random in the $d$-dimensional unit hypercube $[0,1]^d$; the corresponding 10 test targets are i.i.d. according to $\mathcal{N}(\mathbf{0},0.1^2\mathbf{I}_d)$.
+
+Figures 10 and 11 show that all tests adhere to the set significance level asymptotically as the number of samples increases. The convergence of the CME test with 10 test locations is much slower than that of all other tests. The tests based on the tractable asymptotic distribution of $\sqrt{\lfloor n / B\rfloor}\widehat{\mathrm{SKCE}}_{k,B}$ for fixed block size $B$ are orders of magnitude faster than the test based on the intractable asymptotic distribution of $n\widehat{\mathrm{SKCE}}_{k,n}$, approximated with 1000 bootstrap samples. We see that the efficiency gain comes at the cost of decreased test power for smaller numbers of samples, explained by the increasing variance of $\widehat{\mathrm{SKCE}}_{k,B}$ for decreasing block sizes $B$. However, in our examples the test based on $\widehat{\mathrm{SKCE}}_{k,\sqrt{n}}$ still achieves good test power for reasonably large numbers of samples ($>30$).
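As an illustration of how a block-based test decision can be carried out, the following sketch computes a one-sided $p$-value from the unbiased SKCE estimates of the individual non-overlapping blocks via a normal approximation with the empirical standard deviation. This is a deliberate simplification for illustration (the function name is ours), not the exact procedure of Remark B.1:

```python
import math
import statistics

def block_test_pvalue(block_values):
    """One-sided z-test sketch for H0: SKCE = 0.

    `block_values` are the unbiased SKCE estimates computed on the
    individual non-overlapping blocks of size B. Under H0 their mean,
    scaled by the square root of the number of blocks and divided by
    the (empirical) standard deviation, is approximately standard
    normal for a large number of blocks."""
    m = len(block_values)
    mean = statistics.fmean(block_values)
    sd = statistics.stdev(block_values)
    z = math.sqrt(m) * mean / sd
    # One-sided p-value via the standard normal survival function.
    return 0.5 * math.erfc(z / math.sqrt(2))
```

Rejection at significance level $\alpha = 0.05$ then corresponds to `block_test_pvalue(...) < 0.05`.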
+
+
+Figure 10: Empirical test errors for 500 data sets of $n \in \{4, 16, 64, 256, 1024\}$ samples from models with targets of dimension $d = 1$ . The dashed black line indicates the set significance level $\alpha = 0.05$ .
+
+
+Figure 11: Empirical test errors for 500 data sets of $n \in \{4, 16, 64, 256, 1024\}$ samples from models with targets of dimension $d = 10$ . The dashed black line indicates the set significance level $\alpha = 0.05$ .
+
+# A.3 FRIEDMAN 1 REGRESSION PROBLEM
+
+We study the so-called Friedman 1 regression problem, which was initially described for 200 inputs in the six-dimensional unit hypercube (Friedman, 1979; Friedman et al., 1983) and later modified to 100 inputs in the 10-dimensional unit hypercube (Friedman, 1991). In this regression problem the real-valued target $Y$ depends on the input $X$ via
+
+$$
+Y = 10 \sin(\pi X_1 X_2) + 20 (X_3 - 0.5)^2 + 10 X_4 + 5 X_5 + \epsilon,
+$$
+
+where noise $\epsilon$ is typically chosen to be independently standard normally distributed. We generate a training data set of 100 inputs distributed uniformly at random in the 10-dimensional unit hypercube and corresponding targets with independently and identically distributed standard normal noise.
+
+We consider models $P^{(\theta, \sigma^2)}$ of normal distributions with fixed variance $\sigma^2$
+
+$$
+P_x^{(\theta, \sigma^2)} = \mathcal{N}\big(f_\theta(x), \sigma^2\big),
+$$
+
+where $f_{\theta}(x)$ , the model of the mean of the distribution $\mathbb{P}(Y|X = x)$ , is given by a fully connected neural network with two hidden layers with 200 and 50 hidden units and ReLU activation functions. The parameters of the neural network are denoted by $\theta$ .
+
+We use a maximum likelihood approach and train the parameters $\theta$ of the model for 5000 iterations by minimizing the mean squared error on the training data set using ADAM (Kingma & Ba, 2015) (default settings in the machine learning framework Flux.jl (Innes, 2018; Innes et al., 2018)). In each iteration, the variance $\sigma^2$ is set to the maximizer of the likelihood of the training data set.
+
+We train 10 models with different initializations of parameters $\theta$ . The initial values of the weight matrices of the neural networks are sampled from the uniform Glorot initialization (Glorot & Bengio, 2010) and the offset vectors are initialized with zeros. In Fig. 12, we visualize estimates of accuracy and calibration measures on the training and test data set with 100 and 50 samples, respectively, for 5000 training iterations. The pinball loss is a common measure and training objective for calibration of quantiles (Song et al., 2019). It is defined as
+
+$$
+\mathbb{E}_{X, Y}\, L_\tau\big(Y, \operatorname{quantile}(P_X, \tau)\big),
+$$
+
+where $L_{\tau}(y,\tilde{y}) = (1 - \tau)(\tilde{y} - y)_{+} + \tau (y - \tilde{y})_{+}$ and $\operatorname{quantile}(P_x,\tau) = \inf\{y : P_x(Y\leq y)\geq \tau\}$ for quantile level $\tau \in [0,1]$. In Fig. 12 we plot the average pinball loss (pinball) for quantile levels $\tau \in \{0.05, 0.1, \dots, 0.95\}$. We evaluate $\widehat{\mathrm{SKCE}}_{k,n}$ (SKCE (unbiased)) and $\widehat{\mathrm{SKCE}}_k$ (SKCE (biased)) for the tensor product kernel
+
+$$
+\begin{aligned}
+k\big((p, y), (p', y')\big) &= \exp\big(-W_2(p, p')\big) \exp\big(-(y - y')^2 / 2\big) \\
+&= \exp\Big(-\sqrt{(m_p - m_{p'})^2 + (\sigma_p - \sigma_{p'})^2}\Big) \exp\big(-(y - y')^2 / 2\big),
+\end{aligned}
+$$
+
+where $W_{2}$ is the 2-Wasserstein distance and $m_p, m_{p'}$ and $\sigma_p, \sigma_{p'}$ denote the mean and the standard deviation of the normal distributions $p$ and $p'$ (see Appendix D.1). The $p$ -value estimate ( $p$ -value) is computed by estimating the quantile of the asymptotic distribution of $n\widehat{\mathrm{SKCE}}_{k,n}$ with 1000 bootstrap samples (see Remark B.2). The estimates of the mean squared error and the average negative log-likelihood are denoted by MSE and NLL. All estimators indicate consistently that the trained models suffer from overfitting after around 1000 training iterations.
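For Gaussian predictions the quantiles have a closed form, so the average pinball loss can be computed as in the following sketch (illustrative names; the actual experiments are implemented in Julia):

```python
import numpy as np
from scipy.stats import norm

def pinball_loss(y, y_quantile, tau):
    """Pinball loss L_tau(y, y~) = (1 - tau)(y~ - y)_+ + tau (y - y~)_+."""
    return (1 - tau) * np.maximum(y_quantile - y, 0) + tau * np.maximum(y - y_quantile, 0)

def average_pinball(y, pred_mean, pred_std, taus):
    """Average pinball loss over samples and quantile levels for
    Gaussian predictions, using the closed form
    quantile(P_x, tau) = mean + std * Phi^{-1}(tau)."""
    losses = [
        np.mean(pinball_loss(y, pred_mean + pred_std * norm.ppf(tau), tau))
        for tau in taus
    ]
    return float(np.mean(losses))
```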
+
+Additionally, we form ensembles of the ten individual models at every training iteration. The evaluations for the ensembles are visualized in Fig. 12 as well. Apart from the unbiased estimates of $\mathrm{SKCE}_k$ , the estimates of the ensembles are consistently better than the average estimates of the ensemble members. For the mean squared error and the negative log-likelihood this behaviour is guaranteed theoretically by the generalized mean inequality.
+
+# B THEORY
+
+# B.1 GENERAL SETTING
+
+Let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space. Define the random variables $X \colon (\Omega, \mathcal{A}) \to (\mathcal{X}, \Sigma_X)$ and $Y \colon (\Omega, \mathcal{A}) \to (\mathcal{Y}, \Sigma_Y)$ such that $\Sigma_X$ contains all singletons, and denote a version of the regular conditional distribution of $Y$ given $X = x$ by $\mathbb{P}(Y|X = x)$ for all $x \in \mathcal{X}$ .
+
+
+Figure 12: Estimates of different accuracy and calibration measures of ten Gaussian predictive models for the Friedman 1 regression problem versus the number of training iterations. Evaluations on the training data set (100 samples) are displayed in green and orange, and on the test data set (50 samples) in blue and purple. The green and blue line and their surrounding bands represent the mean and the range of the evaluations of the ten models. The orange and purple lines visualize the evaluations of their ensemble.
+
+Let $P \colon (\mathcal{X}, \Sigma_X) \to (\mathcal{P}, \mathcal{B}(\mathcal{P}))$ be a measurable function that maps features in $\mathcal{X}$ to probability measures in $\mathcal{P}$ on the target space $\mathcal{Y}$ . We call $P$ a probabilistic model, and denote by $P_x \coloneqq P(x)$ its output for feature $x \in \mathcal{X}$ . This gives rise to the random variable $P_X \colon (\Omega, \mathcal{A}) \to (\mathcal{P}, \mathcal{B}(\mathcal{P}))$ as $P_X \coloneqq P(X)$ . We denote a version of the regular conditional distribution of $Y$ given $P_X = P_x$ by $\mathbb{P}(Y | P_X = P_x)$ for all $P_x \in \mathcal{P}$ .
+
+# B.2 EXPECTED AND MAXIMUM CALIBRATION ERROR
+
+The common definition of the expected and maximum calibration error (Guo et al., 2017; Kull et al., 2019; Naeini et al., 2015; Vaicenavicius et al., 2019) for classification models can be generalized to arbitrary predictive models.
+
+Definition B.1. Let $d(\cdot, \cdot)$ be a distance measure of probability distributions of target $Y$ , and let $\mu$ be the law of $P_X$ . Then we call
+
+$$
+\mathrm{ECE}_d = \mathbb{E}\, d\big(\mathbb{P}(Y | P_X), P_X\big) \quad \text{and} \quad \mathrm{MCE}_d = \mu\text{-}\mathrm{ess\,sup}\, d\big(\mathbb{P}(Y | P_X), P_X\big)
+$$
+
+the expected calibration error (ECE) and the maximum calibration error (MCE) of model $P$ with respect to measure $d$ , respectively.
+
+# B.3 KERNEL CALIBRATION ERROR
+
+Recall the general notation: Let $k \colon (\mathcal{P} \times \mathcal{Y}) \times (\mathcal{P} \times \mathcal{Y}) \to \mathbb{R}$ be a kernel, and denote its corresponding RKHS by $\mathcal{H}$ .
+
+If not stated otherwise, we assume that
+
+(K1) $k(\cdot ,\cdot)$ is Borel-measurable.
+(K2) $k$ is integrable with respect to the distributions of $(P_X,Y)$ and $(P_X,Z_X)$ , i.e.,
+
+$$
+\mathbb {E} _ {P _ {X}, Y} k ^ {1 / 2} \big ((P _ {X}, Y), (P _ {X}, Y) \big) < \infty
+$$
+
+and
+
+$$
+\mathbb {E} _ {P _ {X}, Z _ {X}} k ^ {1 / 2} \big ((P _ {X}, Z _ {X}), (P _ {X}, Z _ {X}) \big) < \infty .
+$$
+
+Lemma B.1. There exist kernel mean embeddings $\mu_{P_X Y}, \mu_{P_X Z_X} \in \mathcal{H}$ such that for all $f \in \mathcal{H}$
+
+$$
+\langle f, \mu_{P_X Y} \rangle_{\mathcal{H}} = \mathbb{E}_{P_X, Y} f(P_X, Y) \quad \text{and} \quad \langle f, \mu_{P_X Z_X} \rangle_{\mathcal{H}} = \mathbb{E}_{P_X, Z_X} f(P_X, Z_X).
+$$
+
+This implies that
+
+$$
+\mu_{P_X Y} = \mathbb{E}_{P_X, Y}\, k(\cdot, (P_X, Y)) \quad \text{and} \quad \mu_{P_X Z_X} = \mathbb{E}_{P_X, Z_X}\, k(\cdot, (P_X, Z_X)).
+$$
+
+Proof. The linear operators $T_{P_X Y} f \coloneqq \mathbb{E}_{P_X, Y} f(P_X, Y)$ and $T_{P_X Z_X} f \coloneqq \mathbb{E}_{P_X, Z_X} f(P_X, Z_X)$ for all $f \in \mathcal{H}$ are bounded since
+
+$$
+\begin{aligned}
+\left| T_{P_X Y} f \right| = \big|\mathbb{E}_{P_X, Y} f(P_X, Y)\big| &\leq \mathbb{E}_{P_X, Y} |f(P_X, Y)| = \mathbb{E}_{P_X, Y} \big|\langle k((P_X, Y), \cdot), f \rangle_{\mathcal{H}}\big| \\
+&\leq \mathbb{E}_{P_X, Y} \big(\|k((P_X, Y), \cdot)\|_{\mathcal{H}} \|f\|_{\mathcal{H}}\big) = \|f\|_{\mathcal{H}}\, \mathbb{E}_{P_X, Y} k^{1/2}\big((P_X, Y), (P_X, Y)\big)
+\end{aligned}
+$$
+
+and similarly
+
+$$
+\left| T _ {P _ {X} Z _ {X}} f \right| \leq \| f \| _ {\mathcal {H}} \mathbb {E} _ {P _ {X}, Z _ {X}} k ^ {1 / 2} \left(\left(P _ {X}, Z _ {X}\right), \left(P _ {X}, Z _ {X}\right)\right).
+$$
+
+Thus the Riesz representation theorem implies that there exist $\mu_{P_X Y}, \mu_{P_X Z_X} \in \mathcal{H}$ such that $T_{P_X Y} f = \langle f, \mu_{P_X Y} \rangle_{\mathcal{H}}$ and $T_{P_X Z_X} f = \langle f, \mu_{P_X Z_X} \rangle_{\mathcal{H}}$. The reproducing property of $\mathcal{H}$ implies
+
+$$
+\mu_ {P _ {X} Y} (p, y) = \langle k ((p, y), \cdot), \mu_ {P _ {X} Y} \rangle_ {\mathcal {H}} = \mathbb {E} _ {P _ {X}, Y} k ((p, y), (P _ {X}, Y))
+$$
+
+for all $(p,y)\in \mathcal{P}\times \mathcal{Y}$, and similarly $\mu_{P_XZ_X}(p,y) = \mathbb{E}_{P_X,Z_X}k((p,y),(P_X,Z_X))$.
+
+
+
+Lemma B.2. The squared kernel calibration error (SKCE) with respect to kernel $k$ , defined as $\mathrm{SKCE}_k \coloneqq \mathrm{KCE}_k^2$ , is given by
+
+$$
+\begin{aligned}
+\mathrm{SKCE}_k = \mathbb{E}_{P_X, Y, P_{X'}, Y'}\, k\big((P_X, Y), (P_{X'}, Y')\big) &- 2\, \mathbb{E}_{P_X, Y, P_{X'}, Z_{X'}}\, k\big((P_X, Y), (P_{X'}, Z_{X'})\big) \\
+&+ \mathbb{E}_{P_X, Z_X, P_{X'}, Z_{X'}}\, k\big((P_X, Z_X), (P_{X'}, Z_{X'})\big),
+\end{aligned}
+$$
+
+where $(P_{X'}, Y', Z_{X'})$ is independently distributed according to the law of $(P_X, Y, Z_X)$.
+
+Proof. From Lemma B.1 we know that there exist kernel mean embeddings $\mu_{P_X Y}, \mu_{P_X Z_X} \in \mathcal{H}$ that satisfy
+
+$$
+\begin{array}{l} \langle f, \mu_ {P _ {X} Y} - \mu_ {P _ {X} Z _ {X}} \rangle_ {\mathcal {H}} = \langle f, \mu_ {P _ {X} Y} \rangle_ {\mathcal {H}} - \langle f, \mu_ {P _ {X} Z _ {X}} \rangle_ {\mathcal {H}} \\ = \mathbb {E} _ {P _ {X}, Y} f (P _ {X}, Y) - \mathbb {E} _ {P _ {X}, Z _ {X}} f (P _ {X}, Z _ {X}) \\ \end{array}
+$$
+
+for all $f\in \mathcal{H}$ . Hence by the definition of the dual norm
+
+$$
+\begin{aligned}
+\mathrm{CE}_{\mathcal{F}_k} &= \sup_{f \in \mathcal{F}_k} \big|\mathbb{E}_{P_X, Y} f(P_X, Y) - \mathbb{E}_{P_X, Z_X} f(P_X, Z_X)\big| \\
+&= \sup_{f \in \mathcal{F}_k} \big|\langle f, \mu_{P_X Y} - \mu_{P_X Z_X} \rangle_{\mathcal{H}}\big| = \|\mu_{P_X Y} - \mu_{P_X Z_X}\|_{\mathcal{H}},
+\end{aligned}
+$$
+
+which implies
+
+$$
+\mathrm {S K C E} _ {k} = \left\langle \mu_ {P _ {X} Y} - \mu_ {P _ {X} Z _ {X}}, \mu_ {P _ {X} Y} - \mu_ {P _ {X} Z _ {X}} \right\rangle_ {\mathcal {H}}.
+$$
+
+From Lemma B.1 we obtain
+
+$$
+\begin{aligned}
+\mathrm{SKCE}_k = \mathbb{E}_{P_X, Y, P_{X'}, Y'}\, k\big((P_X, Y), (P_{X'}, Y')\big) &- 2\, \mathbb{E}_{P_X, Y, P_{X'}, Z_{X'}}\, k\big((P_X, Y), (P_{X'}, Z_{X'})\big) \\
+&+ \mathbb{E}_{P_X, Z_X, P_{X'}, Z_{X'}}\, k\big((P_X, Z_X), (P_{X'}, Z_{X'})\big),
+\end{aligned}
+$$
+
+which yields the desired result.
+
+
+
+Recall that $(P_{X_1},Y_1),\ldots ,(P_{X_n},Y_n)$ is a validation data set that is sampled i.i.d. according to the law of $(P_X,Y)$, and that for all $(p,y),(p',y')\in \mathcal{P}\times \mathcal{Y}$
+
+$$
+\begin{aligned}
+h\big((p, y), (p', y')\big) \coloneqq k\big((p, y), (p', y')\big) &- \mathbb{E}_{Z \sim p}\, k\big((p, Z), (p', y')\big) \\
+&- \mathbb{E}_{Z' \sim p'}\, k\big((p, y), (p', Z')\big) + \mathbb{E}_{Z \sim p, Z' \sim p'}\, k\big((p, Z), (p', Z')\big).
+\end{aligned}
+$$
+
+Lemma B.3. For all $i,j = 1,\dots ,n$
+
+$$
+\left| h \left(\left(P _ {X _ {i}}, Y _ {i}\right), \left(P _ {X _ {j}}, Y _ {j}\right)\right) \right| < \infty
+$$
+
+almost surely.
+
+Proof. Let $i, j \in \{1, \dots, n\}$ . By assumption (K2) we know that
+
+$$
+\left| k \big ((P _ {X _ {i}}, Y _ {i}), (P _ {X _ {j}}, Y _ {j}) \big) \right| \leq k ^ {1 / 2} \big ((P _ {X _ {i}}, Y _ {i}), (P _ {X _ {i}}, Y _ {i}) \big) k ^ {1 / 2} \big ((P _ {X _ {j}}, Y _ {j}), (P _ {X _ {j}}, Y _ {j}) \big) < \infty
+$$
+
+almost surely. Moreover,
+
+$$
+\begin{array}{l} \left| \mathbb {E} _ {Z _ {X _ {i}}} k \big ((P _ {X _ {i}}, Z _ {X _ {i}}), (P _ {X _ {j}}, Y _ {j}) \big) \right| \leq \mathbb {E} _ {Z _ {X _ {i}}} \left| k \big ((P _ {X _ {i}}, Z _ {X _ {i}}), (P _ {X _ {j}}, Y _ {j}) \big) \right| \\ \leq \mathbb {E} _ {Z _ {X _ {i}}} \left(k ^ {1 / 2} \big ((P _ {X _ {i}}, Z _ {X _ {i}}), (P _ {X _ {i}}, Z _ {X _ {i}}) \big) k ^ {1 / 2} \big ((P _ {X _ {j}}, Y _ {j}), (P _ {X _ {j}}, Y _ {j}) \big)\right) < \infty \\ \end{array}
+$$
+
+almost surely, and similarly $\left|\mathbb{E}_{Z_{X_i},Z_{X_j}}k\big((P_{X_i},Z_{X_i}),(P_{X_j},Z_{X_j})\big)\right| < \infty$ almost surely. Thus
+
+$$
+\begin{array}{l} \left| h \big ((P _ {X _ {i}}, Y _ {i}), (P _ {X _ {j}}, Y _ {j}) \big) \right| \leq \left| k \big ((P _ {X _ {i}}, Y _ {i}), (P _ {X _ {j}}, Y _ {j}) \big) \right| + \left| \mathbb {E} _ {Z _ {X _ {i}}} k \big ((P _ {X _ {i}}, Z _ {X _ {i}}), (P _ {X _ {j}}, Y _ {j}) \big) \right| \\ + \left| \mathbb {E} _ {Z _ {X _ {j}}} k \big ((P _ {X _ {i}}, Y _ {i}), (P _ {X _ {j}}, Z _ {X _ {j}}) \big) \right| + \left| \mathbb {E} _ {Z _ {X _ {i}}, Z _ {X _ {j}}} k \big ((P _ {X _ {i}}, Z _ {X _ {i}}), (P _ {X _ {j}}, Z _ {X _ {j}}) \big) \right| < \infty \\ \end{array}
+$$
+
+almost surely.
+
+
+
+Lemma 1. The plug-in estimator of $\mathrm{SKCE}_k$ is non-negatively biased. It is given by
+
+$$
+\widehat {\mathrm {S K C E}} _ {k} = \frac {1}{n ^ {2}} \sum_ {i, j = 1} ^ {n} h \big ((P _ {X _ {i}}, Y _ {i}), (P _ {X _ {j}}, Y _ {j}) \big).
+$$
+
+Proof. From Lemma B.2 we know that $\mathrm{KCE}_k < \infty$ , and Lemma B.3 implies that $\widehat{\mathrm{SKCE}}_k < \infty$ almost surely.
+
+For $i = 1, \dots, n$ , the linear operators $T_{i}f \coloneqq \mathbb{E}_{Z_{X_{i}}} f(P_{X_{i}}, Z_{X_{i}})$ for $f \in \mathcal{H}$ are bounded almost surely since
+
+$$
\begin{array}{l} \left| T _ {i} f \right| = \left| \mathbb {E} _ {Z _ {X _ {i}}} f (P _ {X _ {i}}, Z _ {X _ {i}}) \right| \leq \mathbb {E} _ {Z _ {X _ {i}}} \left| f (P _ {X _ {i}}, Z _ {X _ {i}}) \right| = \mathbb {E} _ {Z _ {X _ {i}}} \left| \langle k \big ((P _ {X _ {i}}, Z _ {X _ {i}}), \cdot \big), f \rangle_ {\mathcal {H}} \right| \\ \leq \mathbb {E} _ {Z _ {X _ {i}}} \left(\left\| k \big ((P _ {X _ {i}}, Z _ {X _ {i}}), \cdot \big) \right\| _ {\mathcal {H}} \| f \| _ {\mathcal {H}}\right) = \| f \| _ {\mathcal {H}} \mathbb {E} _ {Z _ {X _ {i}}} k ^ {1 / 2} \big ((P _ {X _ {i}}, Z _ {X _ {i}}), (P _ {X _ {i}}, Z _ {X _ {i}}) \big). \\ \end{array}
+$$
+
Hence the Riesz representation theorem implies that there exist $\rho_{i}\in \mathcal{H}$ such that $T_{i}f = \langle f,\rho_{i}\rangle_{\mathcal{H}}$ almost surely. From the reproducing property of $\mathcal{H}$ we deduce that $\rho_{i}(p,y) = \langle k\big((p,y),\cdot \big),\rho_{i}\rangle_{\mathcal{H}} = \mathbb{E}_{Z_{X_i}}k\big((p,y),(P_{X_i},Z_{X_i})\big)$ for all $(p,y)\in \mathcal{P}\times \mathcal{Y}$ almost surely.
+
Thus by the definition of the dual norm the plug-in estimator $\widehat{\mathrm{KCE}}_k$ satisfies
+
+$$
\begin{array}{l} \widehat {\mathrm {K C E}} _ {k} = \sup _ {f \in \mathcal {F} _ {k}} \frac {1}{n} \left| \sum_ {i = 1} ^ {n} \left(f \left(P _ {X _ {i}}, Y _ {i}\right) - \mathbb {E} _ {Z _ {X _ {i}}} f \left(P _ {X _ {i}}, Z _ {X _ {i}}\right)\right) \right| \\ = \sup _ {f \in \mathcal {F} _ {k}} \frac {1}{n} \left| \sum_ {i = 1} ^ {n} \left\langle k \big ((P _ {X _ {i}}, Y _ {i}), \cdot \big) - \rho_ {i}, f \right\rangle_ {\mathcal {H}} \right| \\ = \sup _ {f \in \mathcal {F} _ {k}} \frac {1}{n} \left| \left\langle \sum_ {i = 1} ^ {n} \left(k \big ((P _ {X _ {i}}, Y _ {i}), \cdot \big) - \rho_ {i}\right), f \right\rangle_ {\mathcal {H}} \right| \\ = \frac {1}{n} \left\| \sum_ {i = 1} ^ {n} \left(k \big ((P _ {X _ {i}}, Y _ {i}), \cdot \big) - \rho_ {i}\right) \right\| _ {\mathcal {H}} \\ = \frac {1}{n} \left(\left\langle \sum_ {i = 1} ^ {n} \left(k \big ((P _ {X _ {i}}, Y _ {i}), \cdot \big) - \rho_ {i}\right), \sum_ {i = 1} ^ {n} \left(k \big ((P _ {X _ {i}}, Y _ {i}), \cdot \big) - \rho_ {i}\right) \right\rangle_ {\mathcal {H}}\right) ^ {1 / 2} \\ = \frac {1}{n} \left(\sum_ {i, j = 1} ^ {n} h \big ((P _ {X _ {i}}, Y _ {i}), (P _ {X _ {j}}, Y _ {j}) \big)\right) ^ {1 / 2} = \widehat {\mathrm {S K C E}} _ {k} ^ {1 / 2} < \infty \\ \end{array}
+$$
+
+almost surely, and hence indeed $\widehat{\mathrm{SKCE}}_k^{1/2}$ is the plug-in estimator of $\mathrm{KCE}_k$ .
+
+Since $(P_X,Y),(P_{X'},Y'),(P_{X_1},Y_1),\ldots ,(P_{X_n},Y_n)$ are identically distributed and pairwise independent, we obtain
+
+$$
+\begin{array}{l} n^{2}\mathbb{E}\widehat{\mathrm{SKCE}}_{k} = \sum_{\substack{i,j = 1,\\ i\neq j}}^{n}\mathbb{E}_{P_{X_{i}},Y_{i},P_{X_{j}},Y_{j}}h\bigl((P_{X_{i}},Y_{i}),(P_{X_{j}},Y_{j})\bigr) \\ + \sum_ {i = 1} ^ {n} \mathbb {E} _ {P _ {X _ {i}}, Y _ {i}} h \big ((P _ {X _ {i}}, Y _ {i}), (P _ {X _ {i}}, Y _ {i}) \big) \\ = n (n - 1) \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} h \big ((P _ {X}, Y), (P _ {X ^ {\prime}}, Y ^ {\prime}) \big) + n \mathbb {E} _ {P _ {X}, Y} h \big ((P _ {X}, Y), (P _ {X}, Y) \big) \\ = n (n - 1) \mathrm {S K C E} _ {k} + n \mathbb {E} _ {P _ {X}, Y} h \big ((P _ {X}, Y), (P _ {X}, Y) \big). \tag {B.1} \\ \end{array}
+$$
+
With the same reasoning as above, there exist $\rho, \rho' \in \mathcal{H}$ such that, for all $f \in \mathcal{H}$, $\mathbb{E}_{Z_X} f(P_X, Z_X) = \langle f, \rho \rangle_{\mathcal{H}}$ and $\mathbb{E}_{Z_{X'}} f(P_{X'}, Z_{X'}) = \langle f, \rho' \rangle_{\mathcal{H}}$ almost surely. Thus we obtain
+
+$$
+h \big ((P _ {X}, Y), (P _ {X ^ {\prime}}, Y ^ {\prime}) \big) = \langle k \big ((P _ {X}, Y), \cdot \big) - \rho , k \big ((P _ {X ^ {\prime}}, Y ^ {\prime}), \cdot \big) - \rho^ {\prime} \rangle_ {\mathcal {H}}
+$$
+
+almost surely, and therefore by Lemma B.2 and the Cauchy-Schwarz inequality
+
+$$
\begin{array}{l} \operatorname {S K C E} _ {k} = \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} h \big ((P _ {X}, Y), (P _ {X ^ {\prime}}, Y ^ {\prime}) \big) \\ = \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} \left\langle k \big ((P _ {X}, Y), \cdot \big) - \rho , k \big ((P _ {X ^ {\prime}}, Y ^ {\prime}), \cdot \big) - \rho^ {\prime} \right\rangle_ {\mathcal {H}} \\ \leq \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} \left| \left\langle k \big ((P _ {X}, Y), \cdot \big) - \rho , k \big ((P _ {X ^ {\prime}}, Y ^ {\prime}), \cdot \big) - \rho^ {\prime} \right\rangle_ {\mathcal {H}} \right| \\ \leq \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} \left\| k \big ((P _ {X}, Y), \cdot \big) - \rho \right\| _ {\mathcal {H}} \left\| k \big ((P _ {X ^ {\prime}}, Y ^ {\prime}), \cdot \big) - \rho^ {\prime} \right\| _ {\mathcal {H}} \\ \leq \mathbb {E} _ {P _ {X}, Y} ^ {1 / 2} \left\| k \big ((P _ {X}, Y), \cdot \big) - \rho \right\| _ {\mathcal {H}} ^ {2} \, \mathbb {E} _ {P _ {X ^ {\prime}}, Y ^ {\prime}} ^ {1 / 2} \left\| k \big ((P _ {X ^ {\prime}}, Y ^ {\prime}), \cdot \big) - \rho^ {\prime} \right\| _ {\mathcal {H}} ^ {2}. \\ \end{array}
+$$
+
+Since $(P_X,Y)$ and $(P_{X^{\prime}},Y^{\prime})$ are identically distributed, we obtain
+
+$$
+\operatorname {S K C E} _ {k} \leq \mathbb {E} _ {P _ {X}, Y} \| k ((P _ {X}, Y), \cdot) - \rho \| _ {\mathcal {H}} ^ {2} = \mathbb {E} _ {P _ {X}, Y} h ((P _ {X}, Y), (P _ {X}, Y)).
+$$
+
+Thus together with Eq. (B.1) we get
+
+$$
+n ^ {2} \mathbb {E} \widehat {\mathrm {S K C E}} _ {k} \geq n (n - 1) \mathrm {S K C E} _ {k} + n \mathrm {S K C E} _ {k} = n ^ {2} \mathrm {S K C E} _ {k},
+$$
+
+and hence $\widehat{\mathrm{SKCE}}_k$ has a non-negative bias.
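To make the plug-in estimator of Lemma 1 concrete, here is a minimal Python sketch for a hypothetical binary-prediction setting: the model outputs a Bernoulli probability $p$, targets lie in $\{0,1\}$, and we use the illustrative tensor product kernel $k((p,y),(p',y')) = \exp(-|p-p'|)\,\mathbb{1}[y=y']$, for which all expectations in $h$ reduce to finite sums. The kernel choice and data are assumptions for illustration, not taken from the paper.

```python
from math import exp

def h(p, y, q, z):
    """Evaluate h((p, y), (q, z)) for Bernoulli predictions p, q in [0, 1]
    and targets y, z in {0, 1}, with the illustrative tensor product kernel
    k((p, y), (q, z)) = exp(-|p - q|) * 1[y == z].  The expectations over
    Z ~ Bernoulli(p) and Z' ~ Bernoulli(q) are finite sums."""
    base = exp(-abs(p - q))
    k_yz = 1.0 if y == z else 0.0           # kernel term in the targets
    e_z = p if z == 1 else 1.0 - p          # E_{Z ~ p} 1[Z == z]
    e_y = q if y == 1 else 1.0 - q          # E_{Z' ~ q} 1[y == Z']
    e_zz = p * q + (1.0 - p) * (1.0 - q)    # E_{Z ~ p, Z' ~ q} 1[Z == Z']
    return base * (k_yz - e_z - e_y + e_zz)

def skce_plugin(data):
    """Plug-in estimator: average of h over all n^2 pairs (Lemma 1)."""
    n = len(data)
    return sum(h(p, y, q, z) for p, y in data for q, z in data) / n ** 2

data = [(0.9, 1), (0.2, 0), (0.7, 1), (0.4, 1)]   # made-up (prediction, target) pairs
print(skce_plugin(data))
```

Since the proof shows the plug-in estimate equals a squared RKHS norm, it is always non-negative, consistent with the non-negative bias of the estimator.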
+
+Lemma 2. The block estimator of $\mathrm{SKCE}_k$ with block size $B\in \{2,\ldots ,n\}$ , given by
+
+$$
+\widehat{\mathrm{SKCE}}_{k,B}:= \left\lfloor \frac{n}{B}\right\rfloor^{-1}\sum_{b = 1}^{\lfloor n / B\rfloor}\binom {B}{2}^{-1}\sum_{(b - 1)B < i < j\leq bB}h\bigl((P_{X_{i}},Y_{i}),(P_{X_{j}},Y_{j})\bigr),
+$$
+
+is an unbiased estimator of $\mathrm{SKCE}_k$ .
+
+Proof. From Lemma B.2 we know that $\mathrm{SKCE}_k < \infty$ , and Lemma B.3 implies that $\widehat{\mathrm{SKCE}}_{k,B} < \infty$ almost surely.
+
+For $b\in \{1,\ldots ,\lfloor n / B\rfloor \}$ , let
+
+$$
+\widehat {\eta} _ {b} := \binom {B} {2} ^ {- 1} \sum_ {(b - 1) B < i < j \leq b B} h \left(\left(P _ {X _ {i}}, Y _ {i}\right), \left(P _ {X _ {j}}, Y _ {j}\right)\right) \tag {B.2}
+$$
+
be the estimator of the $b$th block. From Lemma B.3 it follows that $\widehat{\eta}_b < \infty$ almost surely for all $b$. Moreover, for all $b$, $\widehat{\eta}_b$ is a so-called U-statistic of $\mathrm{SKCE}_k$ and hence satisfies $\mathbb{E}\widehat{\eta}_b = \mathrm{SKCE}_k$ (see, e.g., van der Vaart, 1998). Since $(P_{X_1},Y_1),\ldots ,(P_{X_n},Y_n)$ are pairwise independent, this implies that $\widehat{\mathrm{SKCE}}_{k,B}$ is an unbiased estimator of $\mathrm{SKCE}_k$ .
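As a concrete sketch of the block estimator of Lemma 2, the following Python snippet uses a hypothetical Bernoulli setting with the illustrative tensor product kernel $k((p,y),(q,z)) = \exp(-|p-q|)\,\mathbb{1}[y=z]$ (an assumption for illustration only): it averages the within-block U-statistics over consecutive non-overlapping blocks.

```python
from math import exp

def h(p, y, q, z):
    # h for the illustrative kernel k((p, y), (q, z)) = exp(-|p - q|) * 1[y == z];
    # the expectations over Z ~ Bernoulli(p) and Z' ~ Bernoulli(q) are finite sums.
    base = exp(-abs(p - q))
    return base * ((1.0 if y == z else 0.0)
                   - (p if z == 1 else 1.0 - p)
                   - (q if y == 1 else 1.0 - q)
                   + p * q + (1.0 - p) * (1.0 - q))

def skce_block(data, B):
    """Unbiased block estimator of SKCE_k (Lemma 2): average the U-statistic
    over consecutive blocks of size B, dropping any remainder samples."""
    n_blocks = len(data) // B
    total = 0.0
    for b in range(n_blocks):
        block = data[b * B:(b + 1) * B]
        pairs = [(i, j) for i in range(B) for j in range(i + 1, B)]
        total += sum(h(*block[i], *block[j]) for i, j in pairs) / len(pairs)
    return total / n_blocks

data = [(0.9, 1), (0.2, 0), (0.7, 1), (0.4, 1)]   # made-up (prediction, target) pairs
print(skce_block(data, 2))   # the linear estimator (B = 2)
```

With $B = n$ this reduces to the single U-statistic over all pairs, the minimum variance unbiased estimator mentioned in Remark B.2.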
+
+# B.4 CALIBRATION TESTS
+
+Lemma B.4. Let $B \in \{2, \ldots, n\}$ . If $\mathbb{V}_{P_X, Y, P_{X'}, Y'} h((P_X, Y), (P_{X'}, Y')) < \infty$ , then for all $b \in \{1, \ldots, \lfloor n / B \rfloor\}$
+
+$$
\mathbb {V} \widehat {\eta} _ {b} = \sigma_ {B} ^ {2} := \binom {B} {2} ^ {- 1} \Big (2 (B - 2) \zeta_ {1} + \mathbb {V} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} h \big ((P _ {X}, Y), (P _ {X ^ {\prime}}, Y ^ {\prime}) \big) \Big),
+$$
+
+where $\widehat{\eta}_b$ is defined according to Eq. (B.2) and
+
+$$
+\zeta_ {1} := \mathbb {E} _ {P _ {X}, Y} \mathbb {E} _ {P _ {X ^ {\prime}}, Y ^ {\prime}} ^ {2} h \left(\left(P _ {X}, Y\right), \left(P _ {X ^ {\prime}}, Y ^ {\prime}\right)\right) - \mathrm {S K C E} _ {k} ^ {2}. \tag {B.3}
+$$
+
+If model $P$ is calibrated, it simplifies to
+
+$$
\sigma_ {B} ^ {2} = \binom {B} {2} ^ {- 1} \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} h ^ {2} \big ((P _ {X}, Y), (P _ {X ^ {\prime}}, Y ^ {\prime}) \big).
+$$
+
+Proof. Let $b \in \{1, \dots, \lfloor n / B \rfloor\}$ . Since $\mathbb{V}_{P_X,Y,P_{X'},Y'} h((P_X,Y), (P_{X'},Y')) < \infty$ , the Cauchy-Schwarz inequality implies $\mathbb{V}\widehat{\eta}_b < \infty$ as well.
+
+As mentioned in the proof of Lemma 2 above, $\widehat{\eta}_b$ is a U-statistic of $\mathrm{SKCE}_k$ . From the general formula of the variance of a U-statistic (see, e.g., Hoeffding, 1948, p. 298-299) we obtain
+
+$$
\begin{array}{l} \mathbb {V} \widehat {\eta} _ {b} = \binom {B} {2} ^ {- 1} \left(\binom {2} {1} \binom {B - 2} {2 - 1} \zeta_ {1} + \binom {2} {2} \binom {B - 2} {2 - 2} \mathbb {V} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} h \big ((P _ {X}, Y), (P _ {X ^ {\prime}}, Y ^ {\prime}) \big)\right) \\ = \binom {B} {2} ^ {- 1} \Big (2 (B - 2) \zeta_ {1} + \mathbb {V} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} h \big ((P _ {X}, Y), (P _ {X ^ {\prime}}, Y ^ {\prime}) \big) \Big), \\ \end{array}
+$$
+
+where
+
+$$
+\zeta_ {1} = \mathbb {E} _ {P _ {X}, Y} \mathbb {E} _ {P _ {X ^ {\prime}}, Y ^ {\prime}} ^ {2} h \big ((P _ {X}, Y), (P _ {X ^ {\prime}}, Y ^ {\prime}) \big) - \mathrm {S K C E} _ {k} ^ {2}.
+$$
+
+If model $P$ is calibrated, then $(P_X,Y)\stackrel {d}{=}(P_X,Z)$ , and hence for all $(p,y)\in \mathcal{P}\times \mathcal{V}$
+
+$$
\begin{array}{l} \mathbb {E} _ {P _ {X}, Y} h \big ((p, y), (P _ {X}, Y) \big) = \mathbb {E} _ {P _ {X}, Y} k \big ((p, y), (P _ {X}, Y) \big) - \mathbb {E} _ {Z ^ {\prime} \sim p} \mathbb {E} _ {P _ {X}, Y} k \big ((p, Z ^ {\prime}), (P _ {X}, Y) \big) \\ - \mathbb {E} _ {P _ {X}, Z} k \big ((p, y), (P _ {X}, Z) \big) + \mathbb {E} _ {Z ^ {\prime} \sim p} \mathbb {E} _ {P _ {X}, Z} k \big ((p, Z ^ {\prime}), (P _ {X}, Z) \big) \\ = 0. \\ \end{array}
+$$
+
This implies $\mathbb{E}_{P_{X},Y}\mathbb{E}_{P_{X^{\prime}},Y^{\prime}}^{2}h\big((P_{X},Y),(P_{X^{\prime}},Y^{\prime})\big) = 0$, and $\mathrm{SKCE}_k = 0$ due to Lemma B.2, so $\zeta_1 = 0$. Thus
+
+$$
\sigma_ {B} ^ {2} = \binom {B} {2} ^ {- 1} \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} h ^ {2} \big ((P _ {X}, Y), (P _ {X ^ {\prime}}, Y ^ {\prime}) \big),
+$$
+
+as stated above.
+
+Corollary B.1. Let $B \in \{2, \ldots, n\}$ . If $\mathbb{V}_{P_X, Y, P_{X'}, Y'} h((P_X, Y), (P_{X'}, Y')) < \infty$ , then
+
+$$
\mathbb {V} \widehat {\mathrm {S K C E}} _ {k, B} = \left\lfloor n / B \right\rfloor^ {- 1} \sigma_ {B} ^ {2},
+$$
+
+where $\sigma_B^2$ is defined according to Lemma B.4.
+
Proof. Since the block estimators $\widehat{\eta}_1, \ldots, \widehat{\eta}_{\lfloor n / B \rfloor}$ are pairwise independent, this is an immediate consequence of Lemma B.4.
+
+Corollary B.2. Let $B \in \{2, \dots, n\}$ . If $\mathbb{V}_{P_X, Y, P_{X'}, Y'} h((P_X, Y), (P_{X'}, Y')) < \infty$ , then
+
+$$
\sqrt {\lfloor n / B \rfloor} \big (\widehat {\mathrm {S K C E}} _ {k, B} - \mathrm {S K C E} _ {k} \big) \stackrel {d} {\to} \mathcal {N} \big (0, \sigma_ {B} ^ {2} \big) \qquad \text {as } n \to \infty ,
+$$
+
+where block size $B$ is fixed and $\sigma_B^2$ is defined according to Lemma B.4.
+
+Proof. The result follows from Lemma 2, Lemma B.4, and the central limit theorem (see, e.g., Serfling, 1980, Theorem A in Section 1.9). $\square$
+
+Remark B.1. Corollary B.2 shows that $\widehat{\mathrm{SKCE}}_{k,B}$ is a consistent estimator of $\mathrm{SKCE}_k$ in the large sample limit as $n\to \infty$ with fixed number $B$ of samples per block. In particular, for the linear estimator with $B = 2$ we obtain
+
+$$
\sqrt {\lfloor n / 2 \rfloor} \left(\widehat {\mathrm {S K C E}} _ {k, 2} - \mathrm {S K C E} _ {k}\right) \stackrel {d} {\to} \mathcal {N} \left(0, \sigma_ {2} ^ {2}\right) \qquad \text {as } n \to \infty .
+$$
+
+Moreover, Lemma B.4 and Corollary B.2 show that the $p$ -value of the null hypothesis that model $P$ is calibrated can be estimated by
+
+$$
+\Phi \left(- \frac {\sqrt {\lfloor n / B \rfloor} \widehat {\mathrm {S K C E}} _ {k , B}}{\hat {\sigma} _ {B}}\right),
+$$
+
where $\Phi$ is the cumulative distribution function of the standard normal distribution and $\widehat{\sigma}_B$ is the empirical standard deviation of the block estimates $\widehat{\eta}_1, \ldots, \widehat{\eta}_{\lfloor n / B \rfloor}$, or alternatively by
+
+$$
+\Phi \bigg (- \frac {\sqrt {\lfloor n / B \rfloor B (B - 1)} \widehat {\mathrm {S K C E}} _ {k , B}}{\sqrt {2} \hat {\sigma}} \bigg),
+$$
+
where $\widehat{\sigma}^2$ is an estimate of $\mathbb{E}_{P_X,Y,P_{X'},Y'}h^2\big((P_X,Y),(P_{X'},Y')\big)$ . Similar $p$ -value approximations for the two-sample test with blocks of fixed size were used by Chwialkowski et al. (2015).
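The first approximation in the remark can be sketched in a few lines of Python: given the block estimates $\widehat{\eta}_b$, standardize their mean by the empirical standard deviation and evaluate $\Phi$ (expressed here via `math.erf`). The block estimates below are made-up numbers for illustration.

```python
from math import erf, sqrt
from statistics import mean, stdev

def normal_cdf(x):
    # Standard normal CDF expressed through the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def skce_pvalue(block_estimates):
    """Approximate p-value of the calibration null hypothesis (Remark B.1):
    Phi(-sqrt(m) * mean(eta) / sd(eta)) with m block estimates eta_b."""
    m = len(block_estimates)
    est = mean(block_estimates)         # block estimator of SKCE_k
    sigma = stdev(block_estimates)      # empirical standard deviation
    return normal_cdf(-sqrt(m) * est / sigma)

# Hypothetical block estimates eta_1, ..., eta_m:
print(skce_pvalue([0.02, -0.01, 0.03, 0.00, 0.01]))
```

A large positive block estimate relative to its empirical standard deviation yields a small $p$-value, i.e., evidence against calibration.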
+
+Corollary B.3. Assume $\mathbb{V}_{P_X,Y,P_{X'},Y'}h\big((P_X,Y),(P_{X'},Y')\big) < \infty$ . Let $s \in \{1,\dots,\lfloor n / 2\rfloor\}$ . Then for all $b \in \{1,\dots,s\}$
+
+$$
\sqrt {B} \left(\widehat {\eta} _ {b} - \mathrm {S K C E} _ {k}\right) \stackrel {d} {\to} \mathcal {N} \left(0, 4 \zeta_ {1}\right) \quad \text {as } B \rightarrow \infty , \tag {B.4}
+$$
+
+where $\widehat{\eta}_b$ is defined according to Eq. (B.2) with $n = Bs$ , the number $s$ of equally-sized blocks is fixed, and $\zeta_1$ is defined according to Eq. (B.3).
+
+If model $P$ is calibrated, then $\sqrt{B}\left(\widehat{\eta}_b - \mathrm{SKCE}_k\right) = \sqrt{B}\widehat{\eta}_b$ is asymptotically tight since $\zeta_1 = 0$ , and
+
+$$
B \widehat {\eta} _ {b} \stackrel {d} {\rightarrow} \sum_ {i = 1} ^ {\infty} \lambda_ {i} \left(Z _ {i} - 1\right) \quad \text {as } B \rightarrow \infty , \tag {B.5}
+$$
+
+where $Z_{i}$ are independent $\chi_1^2$ distributed random variables and $\lambda_{i} \in \mathbb{R}$ are eigenvalues of the Hilbert-Schmidt integral operator
+
+$$
+K f (p, y) := \mathbb {E} _ {P _ {X}, Y} \left(h ((p, y), (P _ {X}, Y)) f (P _ {X}, Y)\right)
+$$
+
for Borel-measurable functions $f\colon \mathcal{P}\times \mathcal{Y}\to \mathbb{R}$ with $\mathbb{E}_{P_X,Y}f^2 (P_X,Y) < \infty$ .
+
+Proof. Let $s \in \{1, \dots, \lfloor n/2 \rfloor\}$ and $b \in \{1, \dots, s\}$ . As mentioned above in the proof of Lemma 2, the estimator $\widehat{\eta}_b$ , defined according to Eq. (B.2), is a so-called U-statistic of $\mathrm{SKCE}_k$ (see, e.g., van der Vaart, 1998). Thus Eq. (B.4) follows from the asymptotic behaviour of U-statistics (see, e.g., van der Vaart, 1998, Theorem 12.3).
+
If $P$ is calibrated, then we know from the proof of Lemma B.4 that $\zeta_1 = 0$ , and hence $\widehat{\eta}_b$ is a so-called degenerate U-statistic (see, e.g., van der Vaart, 1998, Section 12.3). From the theory of degenerate U-statistics it follows that the sequence $B\widehat{\eta}_b$ converges in distribution to the limit distribution in Eq. (B.5), which is known as Gaussian chaos.
+
Corollary B.4. Assume $\mathbb{V}_{P_X,Y,P_{X'},Y'}h\big((P_X,Y),(P_{X'},Y')\big) < \infty$ . Let $s\in \{1,\dots ,\lfloor n / 2\rfloor \}$ . Then
+
+$$
\sqrt {B} \left(\widehat {\mathrm {S K C E}} _ {k, B} - \mathrm {S K C E} _ {k}\right) \xrightarrow {d} \mathcal {N} \left(0, 4 s ^ {- 1} \zeta_ {1}\right) \quad \text {as } B \to \infty ,
+$$
+
+where the number $s$ of equally-sized blocks is fixed, $n = Bs$ , and $\zeta_{1}$ is defined according to Eq. (B.3).
+
If model $P$ is calibrated, then $\sqrt{B} \left( \widehat{\mathrm{SKCE}}_{k,B} - \mathrm{SKCE}_k \right) = \sqrt{B}\, \widehat{\mathrm{SKCE}}_{k,B}$ is asymptotically tight since $\zeta_1 = 0$ , and
+
+$$
B \widehat {\mathrm {S K C E}} _ {k, B} \xrightarrow {d} s ^ {- 1} \sum_ {i = 1} ^ {\infty} \lambda_ {i} (Z _ {i} - s) \quad \text {as } B \to \infty ,
+$$
+
+where $Z_{i}$ are independent $\chi_s^2$ distributed random variables and $\lambda_i \in \mathbb{R}$ are eigenvalues of the Hilbert-Schmidt integral operator
+
+$$
+K f (p, y) := \mathbb {E} _ {P _ {X}, Y} \left(h ((p, y), (P _ {X}, Y)) f (P _ {X}, Y)\right)
+$$
+
for Borel-measurable functions $f\colon \mathcal{P}\times \mathcal{Y}\to \mathbb{R}$ with $\mathbb{E}_{P_X,Y}f^2 (P_X,Y) < \infty$ .
+
Proof. Since the block estimators $\widehat{\eta}_1, \ldots, \widehat{\eta}_s$ are pairwise independent, this is an immediate consequence of Corollary B.3.
+
+Remark B.2. Corollary B.4 shows that $\widehat{\mathrm{SKCE}}_{k,B}$ is a consistent estimator of $\mathrm{SKCE}_k$ in the large sample limit as $B\to \infty$ with fixed number $\lfloor n / B\rfloor$ of blocks. Moreover, for the minimum variance unbiased estimator with $B = n$ , Corollary B.4 shows that under the null hypothesis that model $P$ is calibrated
+
+$$
n \widehat {\mathrm {S K C E}} _ {k, n} \xrightarrow {d} \sum_ {i = 1} ^ {\infty} \lambda_ {i} (Z _ {i} - 1) \quad \text {as } n \to \infty ,
+$$
+
where $Z_{i}$ are independent $\chi_1^2$ distributed random variables. Unfortunately, quantiles of the limit distribution $\sum_{i=1}^{\infty} \lambda_{i}(Z_{i} - 1)$ (and hence the $p$ -value of the null hypothesis that model $P$ is calibrated) cannot be computed analytically but have to be estimated by, e.g., bootstrapping (Arcones & Giné, 1992), using a Gram matrix spectrum (Gretton et al., 2009), fitting Pearson curves (Gretton et al., 2007), or using a Gamma approximation (Johnson et al., 1994, p. 343, p. 359).
+
+Corollary B.5. Assume $\mathbb{V}_{P_X,Y,P_{X'},Y'}h\big((P_X,Y),(P_{X'},Y')\big) < \infty$ . Then
+
+$$
\sqrt {\lfloor n / B \rfloor B} \left(\widehat {\mathrm {S K C E}} _ {k, B} - \mathrm {S K C E} _ {k}\right) \xrightarrow {d} \mathcal {N} (0, 4 \zeta_ {1}) \quad \text {as } B \to \infty \text { and } \lfloor n / B \rfloor \to \infty , \tag {B.6}
+$$
+
+where $B$ is the block size and $s$ is the number of equally-sized blocks, $n = Bs$ , and $\zeta_1$ is defined according to Eq. (B.3).
+
+If model $P$ is calibrated, then $\sqrt{\lfloor n / B\rfloor B}\big(\widehat{\mathrm{SKCE}}_{k,B} - \mathrm{SKCE}_k\big) = \sqrt{\lfloor n / B\rfloor B}\widehat{\mathrm{SKCE}}_{k,B}$ is asymptotically tight since $\zeta_1 = 0$ , and
+
+$$
\sqrt {\lfloor n / B \rfloor} \, B \widehat {\mathrm {S K C E}} _ {k, B} \xrightarrow {d} \mathcal {N} \left(0, \sum_ {i = 1} ^ {\infty} \lambda_ {i} ^ {2}\right) \quad \text {as } B \to \infty \text { and } \lfloor n / B \rfloor \to \infty ,
+$$
+
+where $\lambda_{i}\in \mathbb{R}$ are eigenvalues of the Hilbert-Schmidt integral operator
+
+$$
+K f (p, y) := \mathbb {E} _ {P _ {X}, Y} \left(h ((p, y), (P _ {X}, Y)) f (P _ {X}, Y)\right)
+$$
+
for Borel-measurable functions $f\colon \mathcal{P}\times \mathcal{Y}\to \mathbb{R}$ with $\mathbb{E}_{P_X,Y}f^2 (P_X,Y) < \infty$ .
+
+Proof. The result follows from Corollary B.3 and the central limit theorem (see, e.g., Serfling, 1980, Theorem A in Section 1.9). $\square$
+
+Remark B.3. Corollary B.5 shows that $\widehat{\mathrm{SKCE}}_{k,B}$ is a consistent estimator of $\mathrm{SKCE}_k$ in the large sample limit as $B\to \infty$ and $\lfloor n / B\rfloor \rightarrow \infty$ , i.e., as both the number of samples per block and the number of blocks go to infinity. Moreover, Corollaries B.3 and B.5 show that the $p$ -value of the null hypothesis that $P$ is calibrated can be estimated by
+
+$$
+\Phi \Bigg (- \frac {\sqrt {\lfloor n / B \rfloor} \widehat {\mathrm {S K C E}} _ {k , B}}{\widehat {\sigma} _ {B}} \Bigg),
+$$
+
where $\widehat{\sigma}_B$ is the empirical standard deviation of the block estimates $\widehat{\eta}_{1},\ldots ,\widehat{\eta}_{\lfloor n / B\rfloor}$ . Similar $p$ -value approximations for the two-sample problem with blocks of increasing size were proposed and applied by Zaremba et al. (2013).
+
+# C CALIBRATION MEAN EMBEDDING
+
+# C.1 DEFINITION
+
+Similar to the unnormalized mean embedding (UME) proposed by Chwialkowski et al. (2015) in the standard MMD setting, instead of the calibration error $\mathrm{CE}_{\mathcal{F}_k} = \| \mu_{P_XY} - \mu_{P_XZ_X}\|_{\mathcal{H}}$ we can consider the unnormalized calibration mean embedding (UCME).
+
+Definition C.1. Let $J \in \mathbb{N}$ . The unnormalized calibration mean embedding (UCME) for kernel $k$ with $J$ test locations is defined as the random variable
+
+$$
+\begin{array}{l} \mathrm {U C M E} _ {k, J} ^ {2} = J ^ {- 1} \sum_ {j = 1} ^ {J} \left(\mu_ {P _ {X} Y} (T _ {j}) - \mu_ {P _ {X} Z _ {X}} (T _ {j})\right) ^ {2} \\ = J ^ {- 1} \sum_ {j = 1} ^ {J} \left(\mathbb {E} _ {P _ {X}, Y} k (T _ {j}, (P _ {X}, Y)) - \mathbb {E} _ {P _ {X}, Z _ {X}} k (T _ {j}, (P _ {X}, Z _ {X}))\right) ^ {2}, \\ \end{array}
+$$
+
where $T_{1},\ldots ,T_{J}$ are i.i.d. random variables (so-called test locations) whose distribution is absolutely continuous with respect to the Lebesgue measure on $\mathcal{P}\times \mathcal{Y}$ .
+
As mentioned above, in many machine learning applications we actually have $\mathcal{P} \times \mathcal{Y} \subset \mathbb{R}^d$ (up to some isomorphism). In such a case, if $k$ is an analytic, integrable, characteristic kernel, then for each $J \in \mathbb{N}$ , $\mathrm{UCME}_{k,J}$ is a random metric between the distributions of $(P_X,Y)$ and $(P_X,Z_X)$ , as shown by Chwialkowski et al. (2015, Theorem 2). In particular, this implies that $\mathrm{UCME}_{k,J} = 0$ almost surely if and only if the two distributions are equal.
+
+# C.2 ESTIMATION
+
+Again we assume $(P_{X_1},Y_1),\ldots ,(P_{X_n},Y_n)$ is a validation data set of predictions and targets, which are i.i.d. according to the law of $(P_X,Y)$ . The consistent, but biased, plug-in estimator of $\mathrm{UCME}_{k,J}^2$ is given by
+
+$$
+\widehat {\mathrm {U C M E}} _ {k, J} ^ {2} = J ^ {- 1} \sum_ {j = 1} ^ {J} \left(n ^ {- 1} \sum_ {i = 1} ^ {n} \left(k \big (T _ {j}, (P _ {X _ {i}}, Y _ {i}) \big) - \mathbb {E} _ {Z _ {X _ {i}}} k \big (T _ {j}, (P _ {X _ {i}}, Z _ {X _ {i}}) \big)\right)\right) ^ {2}.
+$$
+
+# C.3 CALIBRATION MEAN EMBEDDING TEST
+
As Chwialkowski et al. (2015) note, if model $P$ is calibrated, for every fixed sequence of unique test locations $n\widehat{\mathrm{UCME}}_{k,J}^{2}$ converges in distribution to a sum of correlated $\chi^2$ random variables, as $n \to \infty$ . The estimation of this asymptotic distribution, and its quantiles required for hypothesis testing, requires a bootstrap or permutation procedure, which is computationally expensive. Hence Chwialkowski et al. (2015) proposed the following test based on Hotelling's $T^2$ -statistic (Hotelling, 1931).
+
+For $i = 1,\dots ,n$ , let
+
+$$
+Z _ {i} := \left( \begin{array}{c} k \big (T _ {1}, (P _ {X _ {i}}, Y _ {i}) \big) - \mathbb {E} _ {Z _ {X _ {i}}} k \big (T _ {1}, (P _ {X _ {i}}, Z _ {X _ {i}}) \big) \\ \vdots \\ k \big (T _ {J}, (P _ {X _ {i}}, Y _ {i}) \big) - \mathbb {E} _ {Z _ {X _ {i}}} k \big (T _ {J}, (P _ {X _ {i}}, Z _ {X _ {i}}) \big) \end{array} \right) \in \mathbb {R} ^ {J},
+$$
+
+and denote the empirical mean and covariance matrix of $Z_{1},\ldots ,Z_{n}$ by $\overline{Z}$ and $S$ , respectively. If $\mathrm{UCME}_{k,J}$ is a random metric between the distributions of $(P_X,Y)$ and $(P_X,Z_X)$ , then the test statistic
+
+$$
+Q _ {n} := n \bar {Z} ^ {T} S ^ {- 1} \bar {Z}
+$$
+
is almost surely asymptotically $\chi^2$ distributed with $J$ degrees of freedom if model $P$ is calibrated, as $n\to \infty$ with $J$ fixed; moreover, if model $P$ is uncalibrated, then for any fixed $r\in \mathbb{R}$ almost surely $\mathbb{P}(Q_n > r)\rightarrow 1$ as $n\to \infty$ (Chwialkowski et al., 2015, Proposition 2). We call the resulting calibration test the calibration mean embedding (CME) test.
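The test statistic $Q_n$ can be sketched in plain Python for $J = 2$ test locations, using the illustrative Bernoulli kernel $k\big((t_p,t_y),(p,y)\big) = \exp(-|t_p-p|)\,\mathbb{1}[t_y=y]$ and inverting the $2\times 2$ sample covariance by hand; the data, kernel, and locations are assumptions for illustration.

```python
from math import exp

def z_vector(p, y, locations):
    # Z_i in R^J for the illustrative kernel
    # k((t_p, t_y), (p, y)) = exp(-|t_p - p|) * 1[t_y == y].
    out = []
    for t_p, t_y in locations:
        base = exp(-abs(t_p - p))
        out.append(base * ((1.0 if t_y == y else 0.0)
                           - (p if t_y == 1 else 1.0 - p)))
    return out

def cme_statistic(data, locations):
    """Hotelling-type statistic Q_n = n * Zbar^T S^{-1} Zbar for J = 2
    test locations, with the 2x2 sample covariance inverted explicitly."""
    assert len(locations) == 2
    n = len(data)
    zs = [z_vector(p, y, locations) for p, y in data]
    zbar = [sum(z[j] for z in zs) / n for j in range(2)]
    # sample covariance matrix S (denominator n - 1)
    s = [[sum((z[a] - zbar[a]) * (z[b] - zbar[b]) for z in zs) / (n - 1)
          for b in range(2)] for a in range(2)]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    quad = sum(zbar[a] * inv[a][b] * zbar[b] for a in range(2) for b in range(2))
    return n * quad

data = [(0.9, 1), (0.2, 0), (0.7, 1), (0.4, 0), (0.6, 1)]   # made-up data
q = cme_statistic(data, [(0.8, 1), (0.2, 1)])               # hypothetical locations
print(q)
```

In practice the statistic would be compared against the quantiles of a $\chi^2$ distribution with $J$ degrees of freedom.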
+
# D KERNEL CHOICE
+
A natural choice for the kernel $k\colon (\mathcal{P}\times \mathcal{Y})\times (\mathcal{P}\times \mathcal{Y})\to \mathbb{R}$ on the product space of predicted distributions $\mathcal{P}$ and targets $\mathcal{Y}$ is a tensor product kernel of the form $k = k_{\mathcal{P}}\otimes k_{\mathcal{Y}}$ , i.e., a kernel of the form
+
+$$
+k \bigl ((p, y), (p ^ {\prime}, y ^ {\prime}) \bigr) = k _ {\mathcal {P}} (p, p ^ {\prime}) k _ {\mathcal {Y}} (y, y ^ {\prime}),
+$$
+
+where $k_{\mathcal{P}} \colon \mathcal{P} \times \mathcal{P} \to \mathbb{R}$ and $k_{\mathcal{Y}} \colon \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ are kernels on the spaces of predicted distributions and targets, respectively.
+
As discussed in Section 3.1, if kernel $k$ is characteristic, then the kernel calibration error $\mathrm{KCE}_k$ of model $P$ is zero if and only if $P$ is calibrated. Unfortunately, as shown by Szabó & Sriperumbudur (2018, Example 1), even if $k_{\mathcal{P}}$ and $k_{\mathcal{Y}}$ are characteristic, the tensor product kernel $k = k_{\mathcal{P}} \otimes k_{\mathcal{Y}}$ might not be characteristic. However, when analyzing calibration, it is sufficient to be able to distinguish distributions for which the conditional distributions $\mathbb{P}(Y|P_X)$ and $\mathbb{P}(Z_X|P_X) = P_X$ are not equal almost surely. Thus it is sufficient if $k_{\mathcal{Y}}$ is characteristic and $k_{\mathcal{P}}$ is non-zero almost surely.
+
Many common kernels such as the Gaussian and Laplacian kernel on $\mathbb{R}^d$ are characteristic and can therefore be chosen as kernel $k_{\mathcal{Y}}$ for real-valued target spaces. The choice of $k_{\mathcal{P}}$ might be less obvious since $\mathcal{P}$ is a space of probability distributions. Intuitively one might want to use kernels of the form
+
+$$
+k _ {\mathcal {P}} \left(p, p ^ {\prime}\right) = \exp \left(- \lambda d _ {\mathcal {P}} ^ {\nu} \left(p, p ^ {\prime}\right)\right), \tag {D.1}
+$$
+
+where $d_{\mathcal{P}} \colon \mathcal{P} \times \mathcal{P} \to \mathbb{R}$ is a metric on $\mathcal{P}$ and $\nu, \lambda > 0$ are kernel hyperparameters. Kernels of this form would be a generalization of the Gaussian and Laplacian kernel, and would clearly be non-zero almost surely.
+
Unfortunately, this construction does not necessarily yield valid kernels. Most prominently, the Wasserstein distance does not lead to valid kernels $k_{\mathcal{P}}$ in general (Peyré & Cuturi, 2019, Chapter 8.3). However, if $d_{\mathcal{P}}(\cdot ,\cdot)$ is a Hilbertian metric, i.e., a metric of the form
+
+$$
+d _ {\mathcal {P}} (p, p ^ {\prime}) = \left\| \phi (p) - \phi (p ^ {\prime}) \right\| _ {H}
+$$
+
+for some Hilbert space $H$ and mapping $\phi \colon \mathcal{P} \to H$ , then $k_{\mathcal{P}}$ in Eq. (D.1) is a valid kernel for all $\lambda > 0$ and $\nu \in (0,2]$ (Berg et al., 1984, Corollary 3.3.3, Proposition 3.2.7).
+
+# D.1 NORMAL DISTRIBUTIONS
+
Assume that $\mathcal{Y} = \mathbb{R}^d$ and $\mathcal{P} = \{\mathcal{N}(\mu, \Sigma) \colon \mu \in \mathbb{R}^d, \Sigma \in \mathbb{R}^{d \times d} \text{ psd}\}$ , i.e., the model outputs normal distributions $P_X = \mathcal{N}(\mu_X, \Sigma_X)$ . The distribution of these outputs is defined by the distribution of their mean $\mu_X$ and covariance matrix $\Sigma_X$ .
+
Let $P_{x} = \mathcal{N}(\mu_{x},\Sigma_{x})\in \mathcal{P}$ , $y\in \mathcal{Y} = \mathbb{R}^d$ , and $\gamma >0$ . We obtain
+
+$$
+\begin{array}{l} \mathbb {E} _ {Z _ {x} \sim P _ {x}} \exp \left(- \gamma \| Z _ {x} - y \| _ {2} ^ {2}\right) \\ = \left| \mathbf {I} _ {d} + 2 \gamma \Sigma_ {x} \right| ^ {- 1 / 2} \exp \left(- \gamma (\mu_ {x} - y) ^ {\top} \left(\mathbf {I} _ {d} + 2 \gamma \Sigma_ {x}\right) ^ {- 1} (\mu_ {x} - y)\right) \\ \end{array}
+$$
+
+from Mathai & Provost (1992, Theorem 3.2.a.3). In particular, if $\Sigma_x = \mathrm{diag}(\Sigma_{x,1},\dots ,\Sigma_{x,d})$ , then
+
+$$
+\begin{array}{l} \mathbb {E} _ {Z _ {x} \sim P _ {x}} \exp \left(- \gamma \| Z _ {x} - y \| _ {2} ^ {2}\right) \\ = \prod_ {i = 1} ^ {d} \left[ \left(1 + 2 \gamma \Sigma_ {x, i}\right) ^ {- 1 / 2} \exp \left(- \gamma \left(1 + 2 \gamma \Sigma_ {x, i}\right) ^ {- 1} \left(\mu_ {x, i} - y _ {i}\right) ^ {2}\right) \right]. \\ \end{array}
+$$
+
+Let $P_{x^{\prime}} = \mathcal{N}(\mu_{x^{\prime}},\Sigma_{x^{\prime}})$ be another normal distribution. Then we have
+
+$$
+\begin{array}{l} \mathbb {E} _ {Z _ {x} \sim P _ {x}, Z _ {x ^ {\prime}} \sim P _ {x ^ {\prime}}} \exp \left(- \gamma \| Z _ {x} - Z _ {x ^ {\prime}} \| _ {2} ^ {2}\right) \\ = \left| \mathbf {I} _ {d} + 2 \gamma \Sigma_ {x} \right| ^ {- 1 / 2} \mathbb {E} _ {Z _ {x ^ {\prime}} \sim P _ {x ^ {\prime}}} \exp \left(- \gamma \left(\mu_ {x} - Z _ {x ^ {\prime}}\right) ^ {\top} \left(\mathbf {I} _ {d} + 2 \gamma \Sigma_ {x}\right) ^ {- 1} \left(\mu_ {x} - Z _ {x ^ {\prime}}\right)\right) \\ = \left| \mathbf {I} _ {d} + 2 \gamma \left(\Sigma_ {x} + \Sigma_ {x ^ {\prime}}\right) \right| ^ {- 1 / 2} \exp \Big (- \gamma \left(\mu_ {x} - \mu_ {x ^ {\prime}}\right) ^ {\top} \left(\mathbf {I} _ {d} + 2 \gamma \left(\Sigma_ {x} + \Sigma_ {x ^ {\prime}}\right)\right) ^ {- 1} \left(\mu_ {x} - \mu_ {x ^ {\prime}}\right) \Big). \\ \end{array}
+$$
+
+Thus if $\Sigma_{x} = \mathrm{diag}(\Sigma_{x,1},\dots ,\Sigma_{x,d})$ and $\Sigma_{x^{\prime}} = \mathrm{diag}\bigl (\Sigma_{x^{\prime},1},\ldots ,\Sigma_{x^{\prime},d}\bigr)$ , then
+
+$$
+\begin{array}{l} \mathbb {E} _ {Z _ {x} \sim P _ {x}, Z _ {x ^ {\prime}} \sim P _ {x ^ {\prime}}} \exp \left(- \gamma \| Z _ {x} - Z _ {x ^ {\prime}} \| _ {2} ^ {2}\right) \\ = \prod_ {i = 1} ^ {d} \left[ \left(1 + 2 \gamma \left(\Sigma_ {x, i} + \Sigma_ {x ^ {\prime}, i}\right)\right) ^ {- 1 / 2} \exp \left(- \gamma \left(1 + 2 \gamma \left(\Sigma_ {x, i} + \Sigma_ {x ^ {\prime}, i}\right)\right) ^ {- 1} \left(\mu_ {x, i} - \mu_ {x ^ {\prime}, i}\right) ^ {2}\right) \right]. \\ \end{array}
+$$
+
+Hence we see that a Gaussian kernel
+
+$$
+k _ {\mathcal {Y}} (y, y ^ {\prime}) = \exp \left(- \gamma \| y - y ^ {\prime} \| _ {2} ^ {2}\right)
+$$
+
+with inverse length scale $\gamma > 0$ on the space of targets $\mathcal{Y} = \mathbb{R}^d$ allows us to compute $\mathbb{E}_{Z_x \sim P_x} k_{\mathcal{Y}}(Z_x, y)$ and $\mathbb{E}_{Z_x \sim P_x, Z_{x'} \sim P_{x'}} k_{\mathcal{Y}}(Z_x, Z_{x'})$ analytically. Moreover, the Gaussian kernel is characteristic on $\mathbb{R}^d$ (Fukumizu et al., 2008). Hence, as discussed above, by choosing a kernel $k_{\mathcal{P}}$ that is non-zero almost surely we can guarantee that $\mathrm{KCE}_k = 0$ if and only if model $P$ is calibrated.
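As a sanity check of the first expectation formula, here is a short Python sketch comparing the one-dimensional analytic value of $\mathbb{E}_{Z\sim\mathcal{N}(\mu,\sigma^2)}\exp(-\gamma(Z-y)^2)$ against a seeded Monte Carlo estimate; all parameter values are made up for illustration.

```python
import random
from math import exp

def expected_gauss_kernel(mu, var, y, gamma):
    """Analytic value of E_{Z ~ N(mu, var)} exp(-gamma * (Z - y)^2),
    the one-dimensional case of the formula above."""
    c = 1.0 + 2.0 * gamma * var
    return c ** -0.5 * exp(-gamma * (mu - y) ** 2 / c)

def monte_carlo(mu, var, y, gamma, n=200_000, seed=0):
    # Seeded Monte Carlo estimate of the same expectation, for comparison.
    rng = random.Random(seed)
    sd = var ** 0.5
    return sum(exp(-gamma * (rng.gauss(mu, sd) - y) ** 2) for _ in range(n)) / n

mu, var, y, gamma = 0.3, 0.5, -0.2, 0.8
print(expected_gauss_kernel(mu, var, y, gamma), monte_carlo(mu, var, y, gamma))
```

The two values agree up to Monte Carlo error, illustrating why the analytic form is preferable: it is exact and costs a single kernel evaluation.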
+
+On the space of normal distributions, the 2-Wasserstein distance with respect to the Euclidean distance between $P_{x} = \mathcal{N}(\mu_{x},\Sigma_{x})$ and $P_{x^{\prime}} = \mathcal{N}(\mu_{x^{\prime}},\Sigma_{x^{\prime}})$ is given by
+
+$$
+W _ {2} ^ {2} \big (P _ {x}, P _ {x ^ {\prime}} \big) = \| \mu_ {x} - \mu_ {x ^ {\prime}} \| _ {2} ^ {2} + \mathrm {T r} \Bigg (\Sigma_ {x} + \Sigma_ {x ^ {\prime}} - 2 \Big (\Sigma_ {x ^ {\prime}} ^ {1 / 2} \Sigma_ {x} \Sigma_ {x ^ {\prime}} ^ {1 / 2} \Big) ^ {1 / 2} \Bigg),
+$$
+
+which can be simplified to
+
+$$
+W _ {2} ^ {2} \left(P _ {x}, P _ {x ^ {\prime}}\right) = \left\| \mu_ {x} - \mu_ {x ^ {\prime}} \right\| _ {2} ^ {2} + \left\| \Sigma_ {x} ^ {1 / 2} - \Sigma_ {x ^ {\prime}} ^ {1 / 2} \right\| _ {\mathrm {F r o b}} ^ {2},
+$$
+
if $\Sigma_x\Sigma_{x'} = \Sigma_{x'}\Sigma_x$ . This shows that the 2-Wasserstein distance is a Hilbertian metric on the space of normal distributions with commuting covariance matrices. Hence, as discussed above, the choice
+
+$$
+k _ {\mathcal {P}} \left(P _ {x}, P _ {x ^ {\prime}}\right) = \exp \left(- \lambda W _ {2} ^ {\nu} \left(P _ {x}, P _ {x ^ {\prime}}\right)\right)
+$$
+
+yields a valid kernel for all $\lambda > 0$ and $\nu \in (0,2]$ .
+
+Thus for all $\lambda, \gamma > 0$ and $\nu \in (0,2]$
+
+$$
+k \left((p, y), \left(p ^ {\prime}, y ^ {\prime}\right)\right) = \exp \left(- \lambda W _ {2} ^ {\nu} (p, p ^ {\prime})\right) \exp \left(- \gamma \| y - y ^ {\prime} \| _ {2} ^ {2}\right)
+$$
+
+is a valid kernel on the product space $\mathcal{P} \times \mathcal{Y}$ of normal distributions on $\mathbb{R}^d$ and targets in $\mathbb{R}^d$ that allows us to evaluate $h\big((p,y),(p',y')\big)$ analytically and guarantees that $\mathrm{KCE}_k = 0$ if and only if model $P$ is calibrated.
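As an illustration, this product kernel can be evaluated numerically from the closed-form expressions above; the following is a minimal sketch (function names are ours, not from the paper):

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(mu1, cov1, mu2, cov2):
    """Squared 2-Wasserstein distance between N(mu1, cov1) and N(mu2, cov2)."""
    s2 = sqrtm(cov2)
    cross = np.real(sqrtm(s2 @ cov1 @ s2))
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * cross))

def product_kernel(mu1, cov1, y1, mu2, cov2, y2, lam=1.0, gamma=1.0, nu=1.0):
    """k((p, y), (p', y')) = exp(-lam * W_2^nu(p, p')) * exp(-gamma * ||y - y'||_2^2)."""
    # clip tiny negative values caused by floating-point error in sqrtm
    w2 = np.sqrt(max(w2_gaussian(mu1, cov1, mu2, cov2), 0.0))
    return np.exp(-lam * w2 ** nu) * np.exp(-gamma * np.sum((y1 - y2) ** 2))
```

For commuting covariance matrices, `w2_gaussian` agrees with the simplified Frobenius-norm expression above.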
+
+# D.2 LAPLACE DISTRIBUTIONS
+
+Assume that $\mathcal{Y} = \mathbb{R}$ and $\mathcal{P} = \{\mathcal{L}(\mu, \beta) \colon \mu \in \mathbb{R}, \beta > 0\}$, i.e., the model outputs Laplace distributions $P_X = \mathcal{L}(\mu_X, \beta_X)$ with probability density function
+
+$$
+p _ {X} (y) = \frac {1}{2 \beta_ {X}} \exp \left(- \beta_ {X} ^ {- 1} | y - \mu_ {X} |\right)
+$$
+
+for $y \in \mathcal{Y} = \mathbb{R}$. The distribution of these outputs is defined by the distribution of their mean $\mu_{X}$ and scale parameter $\beta_{X}$.
+
+Let $P_{x} = \mathcal{L}(\mu_{x},\beta_{x})\in \mathcal{P}$, $y\in \mathcal{Y} = \mathbb{R}$, and $\gamma >0$. If $\beta_{x}\neq \gamma^{-1}$, we have
+
+$$
+\begin{array}{l} \mathbb {E} _ {Z _ {x} \sim P _ {x}} \exp \left(- \gamma | Z _ {x} - y |\right) \\ = \left(\beta_ {x} ^ {2} \gamma^ {2} - 1\right) ^ {- 1} \Big (\beta_ {x} \gamma \exp \big (- \beta_ {x} ^ {- 1} | \mu_ {x} - y | \big) - \exp \big (- \gamma | \mu_ {x} - y | \big) \Big). \\ \end{array}
+$$
+
+Additionally, if $\beta_{x} = \gamma^{-1}$ , the dominated convergence theorem implies
+
+$$
+\begin{array}{l} \mathbb {E} _ {Z _ {x} \sim P _ {x}} \exp (- \gamma | Z _ {x} - y |) \\ = \lim _ {\gamma \rightarrow \beta_ {x} ^ {- 1}} \left(\beta_ {x} ^ {2} \gamma^ {2} - 1\right) ^ {- 1} \left(\beta_ {x} \gamma \exp \left(- \beta_ {x} ^ {- 1} | \mu_ {x} - y |\right) - \exp \left(- \gamma | \mu_ {x} - y |\right)\right) \\ = \frac {1}{2} \big (1 + \gamma | \mu_ {x} - y | \big) \exp \big (- \gamma | \mu_ {x} - y | \big). \\ \end{array}
+$$
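Both branches of this expectation can be implemented directly; a minimal sketch (the function name is ours), which can be checked against numerical integration of the Laplace density:

```python
import numpy as np

def laplace_kernel_mean(mu, beta, y, gamma):
    """E_{Z ~ Laplace(mu, beta)} exp(-gamma |Z - y|), including the
    limiting case beta = 1/gamma obtained via dominated convergence."""
    u = abs(mu - y)
    if np.isclose(beta * gamma, 1.0):
        # limit gamma -> 1/beta of the generic expression
        return 0.5 * (1.0 + gamma * u) * np.exp(-gamma * u)
    c = beta ** 2 * gamma ** 2 - 1.0
    return (beta * gamma * np.exp(-u / beta) - np.exp(-gamma * u)) / c
```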
+
+Let $P_{x^{\prime}} = \mathcal{L}(\mu_{x^{\prime}},\beta_{x^{\prime}})$ be another Laplace distribution. If $\beta_{x}\neq \gamma^{-1}$, $\beta_{x^{\prime}}\neq \gamma^{-1}$, and $\beta_{x}\neq \beta_{x^{\prime}}$, we obtain
+
+$$
+\begin{array}{l} \mathbb {E} _ {Z _ {x} \sim P _ {x}, Z _ {x ^ {\prime}} \sim P _ {x ^ {\prime}}} \exp \big (- \gamma | Z _ {x} - Z _ {x ^ {\prime}} | \big) = \frac {\gamma \beta_ {x} ^ {3}}{(\beta_ {x} ^ {2} \gamma^ {2} - 1) (\beta_ {x} ^ {2} - \beta_ {x ^ {\prime}} ^ {2})} \exp \big (- \beta_ {x} ^ {- 1} | \mu_ {x} - \mu_ {x ^ {\prime}} | \big) \\ + \frac {\gamma \beta_ {x ^ {\prime}} ^ {3}}{\left(\beta_ {x ^ {\prime}} ^ {2} \gamma^ {2} - 1\right) \left(\beta_ {x ^ {\prime}} ^ {2} - \beta_ {x} ^ {2}\right)} \exp \left(- \beta_ {x ^ {\prime}} ^ {- 1} | \mu_ {x} - \mu_ {x ^ {\prime}} |\right) \\ + \frac {1}{(\beta_ {x} ^ {2} \gamma^ {2} - 1) (\beta_ {x ^ {\prime}} ^ {2} \gamma^ {2} - 1)} \exp \big (- \gamma | \mu_ {x} - \mu_ {x ^ {\prime}} | \big). \\ \end{array}
+$$
+
+As above, all other possible cases can be deduced by applying the dominated convergence theorem. More concretely,
+
+- if $\beta_{x} = \beta_{x^{\prime}} = \gamma^{-1}$ , then
+
+$$
+\begin{array}{l} \mathbb {E} _ {Z _ {x} \sim P _ {x}, Z _ {x ^ {\prime}} \sim P _ {x ^ {\prime}}} \exp \left(- \gamma | Z _ {x} - Z _ {x ^ {\prime}} |\right) \\ = \frac {1}{8} \Big (3 + 3 \gamma | \mu_ {x} - \mu_ {x ^ {\prime}} | + \gamma^ {2} | \mu_ {x} - \mu_ {x ^ {\prime}} | ^ {2} \Big) \exp \big (- \gamma | \mu_ {x} - \mu_ {x ^ {\prime}} | \big), \\ \end{array}
+$$
+
+- if $\beta_{x} = \beta_{x^{\prime}}$ and $\beta_{x}\neq \gamma^{-1}$ , then
+
+$$
+\begin{array}{l} \mathbb {E} _ {Z _ {x} \sim P _ {x}, Z _ {x ^ {\prime}} \sim P _ {x ^ {\prime}}} \exp \left(- \gamma | Z _ {x} - Z _ {x ^ {\prime}} |\right) = \frac {1}{\left(\beta_ {x} ^ {2} \gamma^ {2} - 1\right) ^ {2}} \exp \left(- \gamma | \mu_ {x} - \mu_ {x ^ {\prime}} |\right) \\ + \left(\frac {\gamma (\beta_ {x} + | \mu_ {x} - \mu_ {x ^ {\prime}} |)}{2 (\beta_ {x} ^ {2} \gamma^ {2} - 1)} - \frac {\beta_ {x} \gamma}{(\beta_ {x} ^ {2} \gamma^ {2} - 1) ^ {2}}\right) \exp \Big (- \beta_ {x} ^ {- 1} | \mu_ {x} - \mu_ {x ^ {\prime}} | \Big), \\ \end{array}
+$$
+
+- if $\beta_{x} \neq \beta_{x'}$ and $\beta_{x} = \gamma^{-1}$ , then
+
+$$
+\begin{array}{l} \mathbb {E} _ {Z _ {x} \sim P _ {x}, Z _ {x ^ {\prime}} \sim P _ {x ^ {\prime}}} \exp \left(- \gamma | Z _ {x} - Z _ {x ^ {\prime}} |\right) = \frac {\beta_ {x ^ {\prime}} ^ {3} \gamma^ {3}}{\left(\beta_ {x ^ {\prime}} ^ {2} \gamma^ {2} - 1\right) ^ {2}} \exp \left(- \beta_ {x ^ {\prime}} ^ {- 1} | \mu_ {x} - \mu_ {x ^ {\prime}} |\right) \\ - \left(\frac {1 + \gamma | \mu_ {x} - \mu_ {x ^ {\prime}} |}{2 (\beta_ {x ^ {\prime}} ^ {2} \gamma^ {2} - 1)} + \frac {\beta_ {x ^ {\prime}} ^ {2} \gamma^ {2}}{(\beta_ {x ^ {\prime}} ^ {2} \gamma^ {2} - 1) ^ {2}}\right) \exp \big (- \gamma | \mu_ {x} - \mu_ {x ^ {\prime}} | \big), \\ \end{array}
+$$
+
+and if $\beta_{x}\neq \beta_{x^{\prime}}$ and $\beta_{x^{\prime}} = \gamma^{-1}$ , then
+
+$$
+\begin{array}{l} \mathbb {E} _ {Z _ {x} \sim P _ {x}, Z _ {x ^ {\prime}} \sim P _ {x ^ {\prime}}} \exp \left(- \gamma | Z _ {x} - Z _ {x ^ {\prime}} |\right) = \frac {\beta_ {x} ^ {3} \gamma^ {3}}{\left(\beta_ {x} ^ {2} \gamma^ {2} - 1\right) ^ {2}} \exp \left(- \beta_ {x} ^ {- 1} | \mu_ {x} - \mu_ {x ^ {\prime}} |\right) \\ - \left(\frac {1 + \gamma | \mu_ {x} - \mu_ {x ^ {\prime}} |}{2 (\beta_ {x} ^ {2} \gamma^ {2} - 1)} + \frac {\beta_ {x} ^ {2} \gamma^ {2}}{(\beta_ {x} ^ {2} \gamma^ {2} - 1) ^ {2}}\right) \exp \big (- \gamma | \mu_ {x} - \mu_ {x ^ {\prime}} | \big). \\ \end{array}
+$$
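The four cases can be collected into a single routine; the following sketch (function name ours) mirrors the case distinction above:

```python
import numpy as np

def laplace_pair_mean(mu1, b1, mu2, b2, gamma):
    """E exp(-gamma |Z - Z'|) for independent Z ~ Laplace(mu1, b1) and
    Z' ~ Laplace(mu2, b2), following the case distinction above."""
    u, g = abs(mu1 - mu2), gamma
    e1, e2, eg = np.exp(-u / b1), np.exp(-u / b2), np.exp(-g * u)
    c1, c2 = b1 ** 2 * g ** 2 - 1.0, b2 ** 2 * g ** 2 - 1.0
    if np.isclose(b1, b2):
        if np.isclose(b1 * g, 1.0):  # beta_x = beta_x' = 1/gamma
            return (3.0 + 3.0 * g * u + g ** 2 * u ** 2) * eg / 8.0
        # beta_x = beta_x' != 1/gamma
        return eg / c1 ** 2 + (g * (b1 + u) / (2.0 * c1) - b1 * g / c1 ** 2) * e1
    if np.isclose(b1 * g, 1.0):  # beta_x = 1/gamma
        return (b2 ** 3 * g ** 3 / c2 ** 2 * e2
                - ((1.0 + g * u) / (2.0 * c2) + b2 ** 2 * g ** 2 / c2 ** 2) * eg)
    if np.isclose(b2 * g, 1.0):  # beta_x' = 1/gamma
        return (b1 ** 3 * g ** 3 / c1 ** 2 * e1
                - ((1.0 + g * u) / (2.0 * c1) + b1 ** 2 * g ** 2 / c1 ** 2) * eg)
    # general case
    return (g * b1 ** 3 / (c1 * (b1 ** 2 - b2 ** 2)) * e1
            + g * b2 ** 3 / (c2 * (b2 ** 2 - b1 ** 2)) * e2
            + eg / (c1 * c2))
```

For instance, for $\mu_x = \mu_{x'} = 0$, $\beta_x = \beta_{x'} = 1$, and $\gamma = 3$ the expectation evaluates to $5/32$.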
+
+The calculations above show that by choosing a Laplacian kernel
+
+$$
+k _ {\mathcal {Y}} \left(y, y ^ {\prime}\right) = \exp \left(- \gamma | y - y ^ {\prime} |\right)
+$$
+
+with inverse length scale $\gamma > 0$ on the space of targets $\mathcal{Y} = \mathbb{R}$ , we can compute $\mathbb{E}_{Z_x \sim P_x} k_{\mathcal{Y}}(Z_x, y)$ and $\mathbb{E}_{Z_x \sim P_x, Z_{x'} \sim P_{x'}} k_{\mathcal{Y}}(Z_x, Z_{x'})$ analytically. Additionally, the Laplacian kernel is characteristic on $\mathbb{R}$ (Fukumizu et al., 2008).
+
+Since the Laplace distribution is an elliptically contoured distribution, we know from Gelbrich (1990, Corollary 2) that the 2-Wasserstein distance with respect to the Euclidean distance between $P_{x} = \mathcal{L}(\mu_{x},\beta_{x})$ and $P_{x^{\prime}} = \mathcal{L}(\mu_{x^{\prime}},\beta_{x^{\prime}})$ can be computed in closed form and is given by
+
+$$
+W _ {2} ^ {2} \left(P _ {x}, P _ {x ^ {\prime}}\right) = \left(\mu_ {x} - \mu_ {x ^ {\prime}}\right) ^ {2} + 2 \left(\beta_ {x} - \beta_ {x ^ {\prime}}\right) ^ {2}.
+$$
+
+Thus we see that the 2-Wasserstein distance is also a Hilbertian metric on the space of Laplace distributions, and hence
+
+$$
+k _ {\mathcal {P}} \left(P _ {x}, P _ {x ^ {\prime}}\right) = \exp \left(- \lambda W _ {2} ^ {\nu} \left(P _ {x}, P _ {x ^ {\prime}}\right)\right)
+$$
+
+is a valid kernel for all $\lambda > 0$ and $\nu \in (0, 2]$.
+
+Therefore, as discussed above, for all $\lambda, \gamma > 0$ and $\nu \in (0,2]$
+
+$$
+k \big ((p, y), (p ^ {\prime}, y ^ {\prime}) \big) = \exp \big (- \lambda W _ {2} ^ {\nu} (p, p ^ {\prime}) \big) \exp \big (- \gamma | y - y ^ {\prime} | \big)
+$$
+
+is a valid kernel on the product space $\mathcal{P} \times \mathcal{Y}$ of Laplace distributions and $\mathbb{R}$ that allows us to evaluate $h((p, y), (p', y'))$ analytically and guarantees that $\mathrm{KCE}_k = 0$ if and only if model $P$ is calibrated.
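For completeness, this product kernel for Laplace predictions is cheap to evaluate; a sketch with our own helper names:

```python
import numpy as np

def w2_laplace_sq(mu1, b1, mu2, b2):
    """Closed-form squared 2-Wasserstein distance between Laplace distributions."""
    return (mu1 - mu2) ** 2 + 2.0 * (b1 - b2) ** 2

def laplace_product_kernel(p, y, p2, y2, lam=1.0, gamma=1.0, nu=1.0):
    """k((p, y), (p', y')) for Laplace predictions p = (mu, beta), p' = (mu', beta')."""
    w2 = np.sqrt(w2_laplace_sq(p[0], p[1], p2[0], p2[1]))
    return np.exp(-lam * w2 ** nu) * np.exp(-gamma * abs(y - y2))
```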
+
+# D.3 PREDICTING MIXTURES OF DISTRIBUTIONS
+
+Assume that the model predicts mixture distributions, possibly with different numbers of components. A special case of this setting are ensembles of models, in which each ensemble member predicts a component of the mixture model.
+
+Let $p, p' \in \mathcal{P}$ with $p = \sum_{i} \pi_{i} p_{i}$ and $p' = \sum_{j} \pi_{j}' p_{j}'$ , where $\pi, \pi'$ are histograms and $p_{i}, p_{j}'$ are the mixture components. For kernel $k_{\mathcal{Y}}$ and $y \in \mathcal{Y}$ we obtain
+
+$$
+\mathbb {E} _ {Z \sim p} k _ {\mathcal {Y}} (Z, y) = \sum_ {i} \pi_ {i} \mathbb {E} _ {Z \sim p _ {i}} k _ {\mathcal {Y}} (Z, y)
+$$
+
+and
+
+$$
+\mathbb {E} _ {Z \sim p, Z ^ {\prime} \sim p ^ {\prime}} k _ {\mathcal {Y}} (Z, Z ^ {\prime}) = \sum_ {i, j} \pi_ {i} \pi_ {j} ^ {\prime} \mathbb {E} _ {Z \sim p _ {i}, Z ^ {\prime} \sim p _ {j} ^ {\prime}} k _ {\mathcal {Y}} (Z, Z ^ {\prime}).
+$$
+
+Of course, for these derivations to be meaningful, we require that they do not depend on the choice of histograms $\pi, \pi'$ and mixture components $p_i, p_j'$ .
+
+Definition D.1 (see Yakowitz & Spragins (1968)). A family $\mathcal{P}$ of finite mixture models is called identifiable if two mixtures $p = \sum_{i=1}^{K} \pi_i p_i \in \mathcal{P}$ and $p' = \sum_{j=1}^{K'} \pi_j' p_j' \in \mathcal{P}$ , written such that all $p_i$ and all $p_j'$ are pairwise distinct, are equal if and only if $K = K'$ and the indices can be reordered such that for all $k \in \{1, \ldots, K\}$ there exists some $k' \in \{1, \ldots, K\}$ with $\pi_k = \pi_{k'}'$ and $p_k = p_{k'}'$ .
+
+Clearly, if $\mathcal{P}$ is identifiable, then the derivations above do not depend on the choice of histograms and mixture components. Prominent examples of identifiable mixture models are Gaussian mixture models and mixture models of families of products of exponential distributions (Yakowitz & Spragins, 1968).
+
+Moreover, similar to optimal transport for Gaussian mixture models by Chen et al. (2019; 2020); Delon & Desolneux (2020), we can consider metrics of the form
+
+$$
+\inf _ {w \in \Pi (\pi , \pi^ {\prime})} \left(\sum_ {i, j} w _ {i, j} c ^ {s} (p _ {i}, p _ {j} ^ {\prime})\right) ^ {1 / s},
+$$
+
+where
+
+$$
+\Pi (\pi , \pi^ {\prime}) = \left\{w \colon \sum_ {i} w _ {i, j} = \pi_ {j} ^ {\prime} \wedge \sum_ {j} w _ {i, j} = \pi_ {i} \wedge \forall i, j \colon w _ {i, j} \geq 0 \right\}
+$$
+
+are the couplings of $\pi$ and $\pi'$ , and $c(\cdot, \cdot)$ is a cost function between the components of the mixture model.
+
+Theorem D.1. Let $\mathcal{P}$ be a family of finite mixture models that is identifiable in the sense of Definition D.1, and let $s\in [1,\infty)$.
+
+If $d(\cdot, \cdot)$ is a (Hilbertian) metric on the space of mixture components, then the Mixture Wasserstein distance of order $s$ defined by
+
+$$
+\mathrm {M W} _ {s} \left(p, p ^ {\prime}\right) := \inf _ {w \in \Pi \left(\pi , \pi^ {\prime}\right)} \left(\sum_ {i, j} w _ {i, j} d ^ {s} \left(p _ {i}, p _ {j} ^ {\prime}\right)\right) ^ {1 / s}, \tag {D.2}
+$$
+
+is a (Hilbertian) metric on $\mathcal{P}$ .
+
+Proof. First of all, note that for all $p,p' \in \mathcal{P}$ an optimal coupling $\hat{w}$ exists (Villani, 2009, Theorem 4.1). Since $\sum_{i,j} \hat{w}_{i,j} d^{s}(p_{i},p_{j}') \geq 0$, the infimum $\mathrm{MW}_s(p,p')$ exists. Furthermore, since $\mathcal{P}$ is identifiable, $\mathrm{MW}_s(p,p')$ does not depend on the choice of histograms and mixture components. Thus $\mathrm{MW}_s$ is well-defined.
+
+Clearly, for all $p,p^{\prime}\in \mathcal{P}$ we have $\mathrm{MW}_s(p,p')\geq 0$ and $\mathrm{MW}_s(p,p') = \mathrm{MW}_s(p',p)$ . Moreover,
+
+$$
+\begin{array}{l} \mathrm {M W} _ {s} ^ {s} (p, p) = \min _ {w \in \Pi (\pi , \pi)} \sum_ {i, j} w _ {i, j} d ^ {s} \left(p _ {i}, p _ {j}\right) \leq \sum_ {i, j} \pi_ {i} \delta_ {i, j} d ^ {s} \left(p _ {i}, p _ {j}\right) \\ = \sum_ {i} \pi_ {i} d ^ {s} (p _ {i}, p _ {i}) = \sum_ {i} \pi_ {i} \cdot 0 = 0, \\ \end{array}
+$$
+
+and hence $\mathrm{MW}_s(p,p) = 0$. On the other hand, let $p,p^{\prime}\in \mathcal{P}$ with optimal coupling $\hat{w}$ with respect to $\pi$ and $\pi^\prime$, and assume that $\mathrm{MW}_s(p,p^{\prime}) = 0$. We have
+
+$$
+p = \sum_ {i} \pi_ {i} p _ {i} = \sum_ {i, j} \hat {w} _ {i, j} p _ {i} = \sum_ {i, j: \hat {w} _ {i, j} > 0} \hat {w} _ {i, j} p _ {i}.
+$$
+
+Since $\mathrm{MW}_s(p,p') = 0$ , we have $\hat{w}_{i,j}d^{s}(p_{i},p_{j}') = 0$ for all $i,j$ , and hence $d^{s}(p_{i},p_{j}') = 0$ if $\hat{w}_{i,j} > 0$ . Since $d$ is a metric, this implies $p_i = p_j'$ if $\hat{w}_{i,j} > 0$ . Thus we get
+
+$$
+p = \sum_ {i, j: \hat {w} _ {i, j} > 0} \hat {w} _ {i, j} p _ {i} = \sum_ {i, j: \hat {w} _ {i, j} > 0} \hat {w} _ {i, j} p _ {j} ^ {\prime} = \sum_ {i, j} \hat {w} _ {i, j} p _ {j} ^ {\prime} = \sum_ {j} \pi_ {j} ^ {\prime} p _ {j} ^ {\prime} = p ^ {\prime}.
+$$
+
+Function $\mathrm{MW}_s$ also satisfies the triangle inequality, following a similar argument as Chen et al. (2019). Let $p^{(1)}, p^{(2)}, p^{(3)} \in \mathcal{P}$ and denote the optimal coupling with respect to $\pi^{(1)}$ and $\pi^{(2)}$ by $\hat{w}^{(12)}$ , and the optimal coupling with respect to $\pi^{(2)}$ and $\pi^{(3)}$ by $\hat{w}^{(23)}$ . Define $w^{(13)}$ by
+
+$$
+w _ {i, k} ^ {(1 3)} := \sum_ {j: \pi_ {j} ^ {(2)} \neq 0} \frac {\hat {w} _ {i , j} ^ {(1 2)} \hat {w} _ {j , k} ^ {(2 3)}}{\pi_ {j} ^ {(2)}}.
+$$
+
+Clearly $w_{i,k}^{(13)}\geq 0$ for all $i,k$, and we see that
+
+$$
+\begin{array}{l} \sum_ {i} w _ {i, k} ^ {(1 3)} = \sum_ {i} \sum_ {j: \pi_ {j} ^ {(2)} \neq 0} \frac {\hat {w} _ {i , j} ^ {(1 2)} \hat {w} _ {j , k} ^ {(2 3)}}{\pi_ {j} ^ {(2)}} = \sum_ {j: \pi_ {j} ^ {(2)} \neq 0} \sum_ {i} \frac {\hat {w} _ {i , j} ^ {(1 2)} \hat {w} _ {j , k} ^ {(2 3)}}{\pi_ {j} ^ {(2)}} \\ = \sum_ {j: \pi_ {j} ^ {(2)} \neq 0} \frac {\pi_ {j} ^ {(2)} \hat {w} _ {j , k} ^ {(2 3)}}{\pi_ {j} ^ {(2)}} = \sum_ {j: \pi_ {j} ^ {(2)} \neq 0} \hat {w} _ {j, k} ^ {(2 3)} = \pi_ {k} ^ {(3)} - \sum_ {j: \pi_ {j} ^ {(2)} = 0} \hat {w} _ {j, k} ^ {(2 3)} \\ \end{array}
+$$
+
+for all $k$ . Since for all $j, k$ , $\pi_j^{(2)} \geq \hat{w}_{j,k}^{(23)}$ , we know that $\pi_j^{(2)} = 0$ implies $\hat{w}_{j,k}^{(23)} = 0$ for all $k$ . Thus for all $k$
+
+$$
+\sum_ {i} w _ {i, k} ^ {(1 3)} = \pi_ {k} ^ {(3)}.
+$$
+
+Similarly we obtain for all $i$
+
+$$
+\sum_ {k} w _ {i, k} ^ {(1 3)} = \pi_ {i} ^ {(1)}.
+$$
+
+Thus $w^{(13)} \in \Pi(\pi^{(1)}, \pi^{(3)})$ , and therefore by exploiting the triangle inequality for metric $d$ and the Minkowski inequality we get
+
+$$
+\begin{array}{l} \mathrm{MW}_{s}\big(p^{(1)},p^{(3)}\big)\leq \Bigg(\sum_{i,k}w_{i,k}^{(13)}d^{s}\big(p_{i}^{(1)},p_{k}^{(3)}\big)\Bigg)^{1 / s}\\ = \Bigg(\sum_{i,k}\sum_{j: \pi_{j}^{(2)}\neq 0}\frac{\hat{w}_{i,j}^{(12)}\hat{w}_{j,k}^{(23)}}{\pi_{j}^{(2)}} d^{s}\big(p_{i}^{(1)},p_{k}^{(3)}\big)\Bigg)^{1 / s} \\ \leq \left(\sum_ {i, k} \sum_ {j: \pi_ {j} ^ {(2)} \neq 0} \frac {\hat {w} _ {i , j} ^ {(1 2)} \hat {w} _ {j , k} ^ {(2 3)}}{\pi_ {j} ^ {(2)}} \left(d \left(p _ {i} ^ {(1)}, p _ {j} ^ {(2)}\right) + d \left(p _ {j} ^ {(2)}, p _ {k} ^ {(3)}\right)\right) ^ {s}\right) ^ {1 / s} \\ \leq \left(\sum_ {i, k} \sum_ {j: \pi_ {j} ^ {(2)} \neq 0} \frac {\hat {w} _ {i , j} ^ {(1 2)} \hat {w} _ {j , k} ^ {(2 3)}}{\pi_ {j} ^ {(2)}} d ^ {s} \left(p _ {i} ^ {(1)}, p _ {j} ^ {(2)}\right)\right) ^ {1 / s} \\ + \left(\sum_ {i, k} \sum_ {j: \pi_ {j} ^ {(2)} \neq 0} \frac {\hat {w} _ {i , j} ^ {(1 2)} \hat {w} _ {j , k} ^ {(2 3)}}{\pi_ {j} ^ {(2)}} d ^ {s} \left(p _ {j} ^ {(2)}, p _ {k} ^ {(3)}\right)\right) ^ {1 / s} \\ = \left(\sum_ {i} \sum_ {j: \pi_ {j} ^ {(2)} \neq 0} \hat {w} _ {i, j} ^ {(1 2)} d ^ {s} \left(p _ {i} ^ {(1)}, p _ {j} ^ {(2)}\right)\right) ^ {1 / s} \\ + \left(\sum_ {k} \sum_ {j: \pi_ {j} ^ {(2)} \neq 0} \hat {w} _ {j, k} ^ {(2 3)} d ^ {s} \left(p _ {j} ^ {(2)}, p _ {k} ^ {(3)}\right)\right) ^ {1 / s} \\ \leq \left(\sum_ {i, j} \hat {w} _ {i, j} ^ {(1 2)} d ^ {s} \left(p _ {i} ^ {(1)}, p _ {j} ^ {(2)}\right)\right) ^ {1 / s} + \left(\sum_ {j, k} \hat {w} _ {j, k} ^ {(2 3)} d ^ {s} \left(p _ {j} ^ {(2)}, p _ {k} ^ {(3)}\right)\right) ^ {1 / s} \\ = \mathrm {M W} _ {s} \left(p ^ {(1)}, p ^ {(2)}\right) + \mathrm {M W} _ {s} \left(p ^ {(2)}, p ^ {(3)}\right). \\ \end{array}
+$$
+
+Thus $\mathrm{MW}_s$ is a metric, and it only remains to show that it is Hilbertian if $d$ is Hilbertian. Since $d$ is a Hilbertian metric, there exists a Hilbert space $\mathcal{H}$ and a mapping $\phi$ such that
+
+$$
+d (x, y) = \| \phi (x) - \phi (y) \| _ {\mathcal {H}}.
+$$
+
+Let $r_1, \ldots, r_n \in \mathbb{R}$ with $\sum_{i} r_i = 0$ and $p^{(1)}, \ldots, p^{(n)} \in \mathcal{P}$ . Denote the optimal coupling with respect to $\pi^{(i)}$ and $\pi^{(j)}$ by $\hat{w}^{(i,j)}$ . Then we have
+
+$$
+\begin{array}{l} \sum_ {i, j} r _ {i} r _ {j} \sum_ {k, l} \hat {w} _ {k, l} ^ {(i, j)} \| \phi (p _ {k} ^ {(i)}) \| _ {\mathcal {H}} ^ {2} = \sum_ {i, k} r _ {i} \| \phi (p _ {k} ^ {(i)}) \| _ {\mathcal {H}} ^ {2} \sum_ {j} r _ {j} \sum_ {l} \hat {w} _ {k, l} ^ {(i, j)} \\ = \sum_ {i, k} r _ {i} \| \phi \left(p _ {k} ^ {(i)}\right) \| _ {\mathcal {H}} ^ {2} \sum_ {j} r _ {j} \pi_ {k} ^ {(i)} \tag {D.3} \\ = \sum_ {i, k} r _ {i} \pi_ {k} ^ {(i)} \| \phi \left(p _ {k} ^ {(i)}\right) \| _ {\mathcal {H}} ^ {2} \sum_ {j} r _ {j} = 0, \\ \end{array}
+$$
+
+and similarly
+
+$$
+\sum_ {i, j} r _ {i} r _ {j} \sum_ {k, l} \hat {w} _ {k, l} ^ {(i, j)} \| \phi \left(p _ {l} ^ {(j)}\right) \| _ {\mathcal {H}} ^ {2} = 0. \tag {D.4}
+$$
+
+Moreover, for all $k,l$ we get
+
+$$
+\begin{array}{l} \sum_ {i, j} r _ {i} r _ {j} \hat {w} _ {k, l} ^ {(i, j)} \left\langle \phi \Big (p _ {k} ^ {(i)} \Big), \phi \Big (p _ {l} ^ {(j)} \Big) \right\rangle_ {\mathcal {H}} = \left\langle \sum_ {i} r _ {i} \sqrt {\hat {w} _ {k , l} ^ {(i , j)}} \phi \Big (p _ {k} ^ {(i)} \Big), \sum_ {j} r _ {j} \sqrt {\hat {w} _ {k , l} ^ {(i , j)}} \phi \Big (p _ {l} ^ {(j)} \Big) \right\rangle_ {\mathcal {H}} \\ = \left\| \sum_ {i} r _ {i} \sqrt {\hat {w} _ {k , l} ^ {(i , j)}} \phi \left(p _ {k} ^ {(i)}\right) \right\| _ {\mathcal {H}} ^ {2} \geq 0, \\ \end{array}
+$$
+
+and hence
+
+$$
+\sum_ {i, j} r _ {i} r _ {j} \sum_ {k, l} \hat {w} _ {k, l} ^ {(i, j)} \left\langle \phi \left(p _ {k} ^ {(i)}\right), \phi \left(p _ {l} ^ {(j)}\right) \right\rangle_ {\mathcal {H}} \geq 0, \tag {D.5}
+$$
+
+and similarly
+
+$$
+\sum_ {i, j} r _ {i} r _ {j} \sum_ {k, l} \hat {w} _ {k, l} ^ {(i, j)} \left\langle \phi \left(p _ {l} ^ {(j)}\right), \phi \left(p _ {k} ^ {(i)}\right) \right\rangle_ {\mathcal {H}} \geq 0. \tag {D.6}
+$$
+
+Hence from Eqs. (D.3) to (D.6) we get
+
+$$
+\begin{array}{l} \sum_ {i, j} r _ {i} r _ {j} \mathrm {M W} _ {s} ^ {s} (p ^ {(i)}, p ^ {(j)}) = \sum_ {i, j} r _ {i} r _ {j} \sum_ {k, l} \hat {w} _ {k, l} ^ {(i, j)} d ^ {s} \left(p _ {k} ^ {(i)}, p _ {l} ^ {(j)}\right) \\ = \sum_ {i, j} r _ {i} r _ {j} \sum_ {k, l} \hat {w} _ {k, l} ^ {(i, j)} \left\| \phi \left(p _ {k} ^ {(i)}\right) - \phi \left(p _ {l} ^ {(j)}\right) \right\| _ {\mathcal {H}} ^ {2} \\ = \sum_ {i, j} r _ {i} r _ {j} \sum_ {k, l} \hat {w} _ {k, l} ^ {(i, j)} \left\| \phi \left(p _ {k} ^ {(i)}\right) \right\| _ {\mathcal {H}} ^ {2} \\ - \sum_ {i, j} r _ {i} r _ {j} \sum_ {k, l} \hat {w} _ {k, l} ^ {(i, j)} \left\langle \phi \left(p _ {k} ^ {(i)}\right), \phi \left(p _ {l} ^ {(j)}\right) \right\rangle_ {\mathcal {H}} \\ - \sum_ {i, j} r _ {i} r _ {j} \sum_ {k, l} \hat {w} _ {k, l} ^ {(i, j)} \left\langle \phi \left(p _ {l} ^ {(j)}\right), \phi \left(p _ {k} ^ {(i)}\right) \right\rangle_ {\mathcal {H}} \\ + \sum_ {i, j} r _ {i} r _ {j} \sum_ {k, l} \hat {w} _ {k, l} ^ {(i, j)} \left\| \phi \left(p _ {l} ^ {(j)}\right) \right\| _ {\mathcal {H}} ^ {2} \\ \leq 0, \\ \end{array}
+$$
+
+which shows that $\mathrm{MW}_s^s$ is a negative definite kernel (Berg et al., 1984, Definition 3.1.1). Since $0 < 1 / s < \infty$ , $\mathrm{MW}_s$ is a negative definite kernel as well (Berg et al., 1984, Corollary 3.2.10), which implies that metric $\mathrm{MW}_s$ is Hilbertian (Berg et al., 1984, Proposition 3.3.2).
+
+Hence we can lift a Hilbertian metric for the mixture components to a Hilbertian metric for the mixture models. For instance, if the mixture components are normal distributions, then the 2-Wasserstein distance with respect to the Euclidean distance is a Hilbertian metric for the mixture components. When we lift it to the space $\mathcal{P}$ of Gaussian mixture models we obtain the $\mathrm{MW}_2$ metric proposed by Chen et al. (2019; 2020); Delon & Desolneux (2020). As shown by Delon & Desolneux (2020), the discrete formulation of $\mathrm{MW}_2$ obtained by our construction is equivalent to the definition
+
+$$
+\mathrm {M W} _ {2} ^ {2} (p, p ^ {\prime}) := \inf _ {\gamma \in \Pi (p, p ^ {\prime}) \cap \mathrm {G M M} _ {2 n} (\infty)} \int_ {\mathbb {R} ^ {n} \times \mathbb {R} ^ {n}} d ^ {2} (y, y ^ {\prime}) \mathrm {d} \gamma (y, y ^ {\prime}) \tag {D.7}
+$$
+
+for two Gaussian mixtures $p, p'$ on $\mathbb{R}^n$, where $\Pi(p, p')$ are the couplings of $p$ and $p'$ (not of the histograms!) and $\mathrm{GMM}_{2n}(\infty) = \cup_{k \geq 0} \mathrm{GMM}_{2n}(k)$ is the set of all finite Gaussian mixture distributions on $\mathbb{R}^{2n}$. The construction of the discrete formulation as a solution to a constrained optimization problem similar to Eq. (D.7) can be generalized to mixtures of $t$-distributions. However, it is not possible for arbitrary mixture models such as mixtures of generalized Gaussian distributions, even though they are elliptically contoured distributions (Deledalle et al., 2018; Delon & Desolneux, 2020).
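For discrete mixtures the coupling problem in Eq. (D.2) is a small linear program. The following sketch for one-dimensional Gaussian mixtures (helper names are ours) uses the commuting-covariance form $d^2(\mathcal{N}(m, s^2), \mathcal{N}(m', s'^2)) = (m - m')^2 + (s - s')^2$ from Appendix D.1:

```python
import numpy as np
from scipy.optimize import linprog

def mw2_gaussian_mixtures(pi1, mus1, sigs1, pi2, mus2, sigs2):
    """MW_2 between one-dimensional Gaussian mixtures: solve the coupling LP
    of Eq. (D.2) with squared 2-Wasserstein costs between the components."""
    n, m = len(pi1), len(pi2)
    cost = np.array([[(mus1[i] - mus2[j]) ** 2 + (sigs1[i] - sigs2[j]) ** 2
                      for j in range(m)] for i in range(n)])
    # equality constraints: row sums of w equal pi1, column sums equal pi2
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([pi1, pi2])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return float(np.sqrt(max(res.fun, 0.0)))
```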
+
+The optimal coupling of the discrete histograms can be computed efficiently using techniques from linear programming and optimal transport theory such as the network simplex algorithm and the Sinkhorn algorithm. As discussed above, if metric $d_{\mathcal{P}}$ is of the form in Eq. (D.2), functions of the form
+
+$$
+k _ {\mathcal {P}} (p, p ^ {\prime}) = \exp \left(- \lambda d _ {\mathcal {P}} ^ {\nu} (p, p ^ {\prime})\right)
+$$
+
+are valid kernels on $\mathcal{P}$ for all $\lambda > 0$ and $\nu \in (0,2]$.
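A minimal Sinkhorn iteration for the entropy-regularized version of the coupling problem might look as follows (function name ours; for small regularization the resulting plan approximates an optimal coupling):

```python
import numpy as np

def sinkhorn_plan(pi1, pi2, cost, eps=0.05, n_iter=1000):
    """Entropy-regularized transport plan between histograms pi1 and pi2 for
    the given cost matrix, via Sinkhorn fixed-point iterations."""
    K = np.exp(-cost / eps)  # Gibbs kernel
    u = np.ones_like(pi1)
    for _ in range(n_iter):
        v = pi2 / (K.T @ u)  # match column marginals
        u = pi1 / (K @ v)    # match row marginals
    return u[:, None] * K * v[None, :]
```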
+
+Thus, taken together, if $k_{\mathcal{Y}}$ is a characteristic kernel on the target space $\mathcal{Y}$ and $d(\cdot, \cdot)$ is a Hilbertian metric on the space of mixture components, then for all $s \in [1, \infty)$, $\lambda > 0$, and $\nu \in (0, 2]$
+
+$$
+k \big ((p, y), (p ^ {\prime}, y ^ {\prime}) \big) = \exp \big (- \lambda \mathrm {M W} _ {s} ^ {\nu} (p, p ^ {\prime}) \big) k _ {\mathcal {Y}} (y, y ^ {\prime})
+$$
+
+is a valid kernel on the product space $\mathcal{P} \times \mathcal{Y}$ of mixture distributions and targets that allows us to evaluate $h\big((p,y),(p',y')\big)$ analytically and guarantees that $\mathrm{KCE}_k = 0$ if and only if model $P$ is calibrated.
+
+# E CLASSIFICATION AS A SPECIAL CASE
+
+We show that the calibration error introduced in Definition 2 is a generalization of the calibration error for classification proposed by Widmann et al. (2019). Their formulation of the calibration error is based on a weighted sum of class-wise discrepancies between the left-hand side and right-hand side of Definition 1, where the weights are output by a vector-valued function of the predictions. Hence their framework can only be applied to finite target spaces, i.e., if $|\mathcal{Y}| < \infty$.
+
+Without loss of generality, we assume that $\mathcal{Y} = \{1,\dots ,d\}$ for some $d\in \mathbb{N}\setminus \{1\}$. In our notation, the previously defined calibration error, denoted by CCE (classification calibration error), with respect to a function space $\mathcal{G}\subset \{f\colon \mathcal{P}\to \mathbb{R}^d\}$ is given by
+
+$$
+\mathrm {C C E} _ {\mathcal {G}} := \sup _ {g \in \mathcal {G}} \left| \mathbb {E} _ {P _ {X}} \left(\sum_ {y \in \mathcal {Y}} \big (\mathbb {P} (Y = y | P _ {X}) - P _ {X} (\{y \}) \big) g _ {y} (P _ {X})\right) \right|.
+$$
+
+For the function class
+
+$$
+\mathcal {F} := \left\{f \colon \mathcal {P} \times \mathcal {Y} \to \mathbb {R}, (p, y) \mapsto g _ {y} (p) | g \in \mathcal {G} \right\}
+$$
+
+we get
+
+$$
+\mathrm {C C E} _ {\mathcal {G}} = \sup _ {f \in \mathcal {F}} \left| \mathbb {E} _ {P _ {X}, Y} f (P _ {X}, Y) - \mathbb {E} _ {P _ {X}, Z _ {X}} f (P _ {X}, Z _ {X}) \right| = \mathrm {C E} _ {\mathcal {F}}.
+$$
+
+Similarly, for every function class $\mathcal{F} \subset \{f \colon \mathcal{P} \times \mathcal{Y} \to \mathbb{R}\}$ , we can define the space
+
+$$
+\mathcal {G} := \left\{g \colon \mathcal {P} \to \mathbb {R} ^ {d}, p \mapsto \left(f (p, 1), \dots , f (p, d)\right) ^ {\top} \Big | f \in \mathcal {F} \right\},
+$$
+
+for which
+
+$$
+\mathrm {C E} _ {\mathcal {F}} = \sup _ {g \in \mathcal {G}} \left| \mathbb {E} _ {P _ {X}} \left(\sum_ {y \in \mathcal {Y}} \big (\mathbb {P} (Y = y | P _ {X}) - P _ {X} (\{y \}) \big) g _ {y} (P _ {X})\right) \right| = \mathrm {C C E} _ {\mathcal {G}}.
+$$
+
+Thus both definitions are equivalent for classification models, but the structure of the employed function classes differs. The definition of CCE is based on vector-valued functions on the probability simplex, whereas the formulation presented in this paper uses real-valued functions on the product space of the probability simplex and the targets.
+
+An interesting theoretical aspect of this difference is that in the case of KCE we consider real-valued kernels on $\mathcal{P} \times \mathcal{V}$ instead of matrix-valued kernels on $\mathcal{P}$ , as shown by the following comparison. By $e_i \in \mathbb{R}^d$ we denote the $i$ th unit vector, and for a prediction $p \in \mathcal{P}$ its representation $v_p \in \mathbb{R}^d$ in the probability simplex is defined as
+
+$$
+(v _ {p}) _ {y} = p (\{y \})
+$$
+
+for all targets $y\in \mathcal{Y}$.
+
+Let $k \colon (\mathcal{P} \times \mathcal{Y}) \times (\mathcal{P} \times \mathcal{Y}) \to \mathbb{R}$ . We define the matrix-valued function $K \colon \mathcal{P} \times \mathcal{P} \to \mathbb{R}^{d \times d}$ by
+
+$$
+\left[ K \left(p, p ^ {\prime}\right) \right] _ {y, y ^ {\prime}} = k \left(\left(p, y\right), \left(p ^ {\prime}, y ^ {\prime}\right)\right)
+$$
+
+for all $y, y' \in \mathcal{Y}$ and $p, p' \in \mathcal{P}$. From the positive definiteness of kernel $k$ it follows that $K$ is a matrix-valued kernel (Micchelli & Pontil, 2005, Definition 2). We obtain
+
+$$
+\begin{array}{l} \mathrm {S K C E} _ {k} = \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} \left[ K \left(P _ {X}, P _ {X ^ {\prime}}\right) \right] _ {Y, Y ^ {\prime}} - 2 \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Z _ {X ^ {\prime}}} \left[ K \left(P _ {X}, P _ {X ^ {\prime}}\right) \right] _ {Y, Z _ {X ^ {\prime}}} \\ + \mathbb {E} _ {P _ {X}, Z _ {X}, P _ {X ^ {\prime}}, Z _ {X ^ {\prime}}} \left[ K (P _ {X}, P _ {X ^ {\prime}}) \right] _ {Z _ {X}, Z _ {X ^ {\prime}}} \\ = \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} e _ {Y} ^ {\top} K (P _ {X}, P _ {X ^ {\prime}}) e _ {Y ^ {\prime}} - 2 \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} e _ {Y} ^ {\top} K (P _ {X}, P _ {X ^ {\prime}}) v _ {P _ {X ^ {\prime}}} \\ + \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} v _ {P _ {X}} ^ {\top} K (P _ {X}, P _ {X ^ {\prime}}) v _ {P _ {X ^ {\prime}}} \\ = \mathbb {E} _ {P _ {X}, Y, P _ {X ^ {\prime}}, Y ^ {\prime}} (e _ {Y} - v _ {P _ {X}}) ^ {\top} K (P _ {X}, P _ {X ^ {\prime}}) (e _ {Y ^ {\prime}} - v _ {P _ {X ^ {\prime}}}), \\ \end{array}
+$$
+
+which is exactly the result by Widmann et al. (2019) for matrix-valued kernels.
+
+As a concrete example, Widmann et al. (2019) used a matrix-valued kernel of the form $(p,p^{\prime})\mapsto$ $\exp (-\gamma \| p - p^{\prime}\|)\mathbf{I}_{d}$ in their experiments. In our formulation this corresponds to the real-valued tensor product kernel $\bigl ((p,y),(p',y')\bigr)\mapsto \exp (-\gamma \| p - p'\|)\delta_{y,y'}$.
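The equivalence of the two views can be checked numerically for this kernel. In the sketch below (helper names are ours), `h_term` evaluates $h\big((p,y),(p',y')\big)$ by writing its expectations over the predicted distributions as finite sums over the classes, and `h_matrix` uses the matrix-valued form $(e_Y - v_p)^\top K(p,p')(e_{Y'} - v_{p'})$:

```python
import numpy as np

def k_tensor(p, y, q, y2, gamma=1.0):
    """Tensor product kernel ((p, y), (p', y')) -> exp(-gamma ||p - p'||) delta_{y, y'}."""
    return np.exp(-gamma * np.linalg.norm(p - q)) * float(y == y2)

def h_term(p, y, q, y2, gamma=1.0):
    """h((p, y), (p', y')) with the expectations over Z ~ p and Z' ~ q
    written out as finite sums over the classes."""
    d = len(p)
    val = k_tensor(p, y, q, y2, gamma)
    val -= sum(q[j] * k_tensor(p, y, q, j, gamma) for j in range(d))
    val -= sum(p[i] * k_tensor(p, i, q, y2, gamma) for i in range(d))
    val += sum(p[i] * q[j] * k_tensor(p, i, q, j, gamma)
               for i in range(d) for j in range(d))
    return val

def h_matrix(p, y, q, y2, gamma=1.0):
    """Same quantity via the matrix-valued kernel exp(-gamma ||p - p'||) I_d."""
    e_y, e_y2 = np.eye(len(p))[y], np.eye(len(q))[y2]
    return np.exp(-gamma * np.linalg.norm(p - q)) * float((e_y - p) @ (e_y2 - q))
```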
+
+# F TEMPERATURE SCALING
+
+Since many modern neural network models for classification have been demonstrated to be uncalibrated (Guo et al., 2017), it is of high practical interest to be able to improve the calibration of predictive models. Generally, one distinguishes between calibration techniques that are applied during training and post-hoc calibration methods that try to calibrate an existing model after training.
+
+Temperature scaling (Guo et al., 2017) is a simple calibration method for classification models with only one scalar parameter. Due to its simplicity it cannot trade off calibration of different classes (Kull et al., 2019), but conveniently it does not change the most-confident prediction and hence does not affect the accuracy of classification models with respect to the 0-1 loss.
+
+In regression, common post-hoc calibration methods are based on quantile binning and hence insufficient for our framework. Song et al. (2019) proposed a calibration method for regression models with real-valued targets, based on a special case of Definition 1. This calibration method was shown to perform well empirically but is computationally expensive and requires users to choose hyperparameters for a Gaussian process model and its variational inference. As a simpler alternative, we generalize temperature scaling to arbitrary predictive models in the following way.
+
+Definition F.1. Let $P_x$ be the output of a probabilistic predictive model $P$ for feature $x$ . If $P_x$ has probability density function $p_x$ with respect to a reference measure $\mu$ , then temperature scaling with respect to $\mu$ with temperature $T > 0$ yields a new output $Q_x$ whose probability density function $q_x$ with respect to $\mu$ satisfies
+
+$$
+q _ {x} \propto p _ {x} ^ {1 / T}.
+$$
+
+The notion for classification models given by Guo et al. (2017) can be recovered by choosing the counting measure on the classes as reference measure.
+
+For some exponential families on $\mathbb{R}^d$ we obtain particularly simple transformations with respect to the Lebesgue measure $\lambda^d$ that keep the type of predicted distribution and its mean invariant. Hence in contrast to other calibration methods, for these models temperature scaling yields analytically tractable distributions and does not negatively impact the accuracy of the models with respect to the mean squared error and the mean absolute error.
+
+For instance, temperature scaling of multivariate power exponential distributions (Gómez et al., 1998) in $\mathbb{R}^d$ , of which multivariate normal distributions are a special case, with respect to $\lambda^d$ corresponds to multiplication of their scale parameter with $T^{1/\beta}$ , where $\beta$ is the so-called kurtosis parameter (Gómez-Sánchez-Manzano et al., 2008). For normal distributions, this corresponds to multiplication of the covariance matrix with $T$ .
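This invariance is easy to verify numerically for normal distributions: discretizing $p_x$ on a grid, raising it to the power $1/T$, and renormalizing leaves the mean unchanged and multiplies the variance by $T$. A sketch:

```python
import numpy as np

# Temperature scaling of N(mu, sigma^2) w.r.t. the Lebesgue measure,
# checked on a discretized density q proportional to p^(1/T).
grid = np.linspace(-30.0, 30.0, 60001)
dx = grid[1] - grid[0]
mu, sigma, T = 1.5, 2.0, 3.0

p = np.exp(-0.5 * ((grid - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
q = p ** (1.0 / T)
q /= q.sum() * dx  # renormalize the tempered density

mean = (grid * q).sum() * dx
var = ((grid - mean) ** 2 * q).sum() * dx
# mean stays (approximately) at mu; var is (approximately) T * sigma^2
```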
+
+Similarly, temperature scaling of Beta and Dirichlet distributions with respect to the reference measures
+
+$$
+\mu (\mathrm {d} x) := x ^ {- 1} (1 - x) ^ {- 1} \mathbb {1} _ {(0, 1)} (x) \lambda^ {1} (\mathrm {d} x)
+$$
+
+and
+
+$$
+\mu (\mathrm {d} x) := \left(\prod_ {i = 1} ^ {d} x _ {i} ^ {- 1}\right) \mathbb {1} _ {(0, 1) ^ {d}} (x) \lambda^ {d} (\mathrm {d} x),
+$$
+
+respectively, corresponds to division of the canonical parameters of these distributions by $T$ without affecting the predicted mean value.
+
+All in all, we see that temperature scaling for general predictive models preserves some of the nice properties it has for classification models. For some exponential families such as normal distributions, the reference measure $\mu$ can be chosen such that temperature scaling is a simple transformation of the parameters of the predicted distributions (and hence leaves the considered model class invariant) that does not affect the accuracy of these models with respect to the mean squared error and the mean absolute error.
+
+# G EXPECTED CALIBRATION ERROR FOR COUNTABLY INFINITE DISCRETE TARGET SPACES
+
+In the literature, $\mathrm{ECE}_d$ and $\mathrm{MCE}_d$ are defined for binary and multi-class classification problems (Guo et al., 2017; Naeini et al., 2015; Vaicenavicius et al., 2019). For common distance measures on the
+
+probability simplex such as the total variation distance and the squared Euclidean distance, $\mathrm{ECE}_d$ and $\mathrm{MCE}_d$ can be formulated as a calibration error in the framework of Widmann et al. (2019), which is a special case of the framework proposed in this paper for binary and multi-class classification problems.
+
+In contrast to previous approaches, our framework handles countably infinite discrete target spaces as well. For every problem with countably infinitely many targets, such as Poisson regression, there exists an equivalent regression problem on the set of natural numbers. Hence without loss of generality we assume $\mathcal{Y} = \mathbb{N}$. Denote the space of probability distributions on $\mathbb{N}$, the infinite-dimensional probability simplex, by $\Delta^{\infty}$. Clearly, $\Delta^{\infty}$ can be viewed as the subset of the sequence space $\ell^{1}$ that consists of all sequences $x = (x_{n})_{n\in \mathbb{N}}$ with $x_{n}\geq 0$ for all $n\in \mathbb{N}$ and $\| x\| _1 = 1$.
+
+Theorem G.1. Let $1 < p < \infty$ with Hölder conjugate $q$ . If
+
+$$
+\mathcal{F} := \left\{ f \colon \Delta^{\infty} \times \mathbb{N} \to \mathbb{R} \;\middle|\; \mathbb{E}_{P_X} \big\| (f(P_X, n))_{n \in \mathbb{N}} \big\|_p^p \leq 1 \right\},
+$$
+
+then
+
+$$
+\mathrm{CE}_{\mathcal{F}}^q = \mathbb{E}_{P_X} \big\| \mathbb{P}(Y \,|\, P_X) - P_X \big\|_q^q.
+$$
+
+Let $\mu$ be the law of $P_{X}$ . If $\mathcal{F} \coloneqq \{f \colon \Delta^{\infty} \times \mathbb{N} \to \mathbb{R} \mid \mathbb{E}_{P_{X}} \| (f(P_{X}, n))_{n \in \mathbb{N}} \|_{1} \leq 1\}$ , then
+
+$$
+\operatorname{CE}_{\mathcal{F}} = \mu \text{-ess}\sup_{\xi \in \Delta^{\infty}}\sup_{y\in \mathbb{N}}|\mathbb{P}(Y = y|P_{X} = \xi) - \xi (\{y\})|.
+$$
+
+Moreover, if $\mathcal{F} = \{f \colon \Delta^{\infty} \times \mathbb{N} \to \mathbb{R} \mid \mu\text{-}\operatorname{ess\,sup}_{\xi \in \Delta^{\infty}} \sup_{y \in \mathbb{N}} |f(\xi, y)| \leq 1\}$, then
+
+$$
+\mathrm{CE}_{\mathcal{F}} = \mathbb{E}_{P_X} \big\| \mathbb{P}(Y \,|\, P_X) - P_X \big\|_1.
+$$
+
+Proof. Let $1 \leq p \leq \infty$ , and let $\mu$ be the law of $P_{X}$ and $\nu$ be the counting measure on $\mathbb{N}$ . Since both $\mu$ and $\nu$ are $\sigma$ -finite measures, the product measure $\mu \otimes \nu$ is uniquely determined and $\sigma$ -finite as well. Using these definitions, we can reformulate $\mathcal{F}$ as
+
+$$
+\mathcal{F} = \left\{ f \in L^p\left(\Delta^{\infty} \times \mathbb{N}; \mu \otimes \nu\right) \mid \|f\|_{p; \mu \otimes \nu} \leq 1 \right\}.
+$$
+
+Define the function $\delta \colon \Delta^{\infty} \times \mathbb{N} \to \mathbb{R}$ ( $\mu \otimes \nu$ )-almost surely by
+
+$$
+\delta(\xi, y) := \mathbb{P}(Y = y \mid P_X = \xi) - \xi(\{y\}).
+$$
+
+Note that $\delta$ is well-defined since we assume that all singletons on $\Delta^{\infty}$ are $\mu$ -measurable. Moreover, $\delta \in L^{q}(\Delta^{\infty} \times \mathbb{N}; \mu \otimes \nu)$ , which follows from $(\xi, y) \mapsto \mathbb{P}(Y = y \mid P_{X} = \xi)$ and $(\xi, y) \mapsto \xi(\{y\})$ being functions in $L^{q}(\Delta^{\infty} \times \mathbb{N}; \mu \otimes \nu)$ .
+
+Since $\mu \otimes \nu$ is a $\sigma$ -finite measure, the extremal equality of Hölder's inequality implies that
+
+$$
+\begin{aligned} \mathrm{CE}_{\mathcal{F}} &= \sup_{f \in \mathcal{F}} \mathbb{E}_{P_X, Y} f(P_X, Y) - \mathbb{E}_{P_X, Z_X} f(P_X, Z_X) \\ &= \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{P_X, Y} f(P_X, Y) - \mathbb{E}_{P_X, Z_X} f(P_X, Z_X) \right| \\ &= \sup_{f \in \mathcal{F}} \left| \int_{\Delta^{\infty} \times \mathbb{N}} f(\xi, y)\, \delta(\xi, y)\, (\mu \otimes \nu)(\mathrm{d}(\xi, y)) \right| \\ &= \|\delta\|_{q; \mu \otimes \nu}. \end{aligned}
+$$
+
+Note that the second equality follows from the symmetry of the function spaces $\mathcal{F}$ : for every $f\in \mathcal{F}$ also $-f\in \mathcal{F}$ .
+
+Hence for $1 < p \leq \infty$ , we obtain
+
+$$
+\begin{aligned} \mathrm{CE}_{\mathcal{F}}^q &= \int_{\Delta^{\infty} \times \mathbb{N}} |\delta(\xi, y)|^q\, (\mu \otimes \nu)(\mathrm{d}(\xi, y)) \\ &= \mathbb{E}_{P_X} \big\| (\delta(P_X, y))_{y \in \mathbb{N}} \big\|_q^q = \mathbb{E}_{P_X} \big\| \mathbb{P}(Y \,|\, P_X) - P_X \big\|_q^q. \end{aligned}
+$$
+
+For $p = 1$ , we get
+
+$$
+\mathrm{CE}_{\mathcal{F}} = \mu\text{-}\operatorname*{ess\,sup}_{\xi \in \Delta^{\infty}} \sup_{y \in \mathbb{N}} \left| \delta(\xi, y) \right| = \mu\text{-}\operatorname*{ess\,sup}_{\xi \in \Delta^{\infty}} \sup_{y \in \mathbb{N}} \left| \mathbb{P}(Y = y \,|\, P_X = \xi) - \xi(\{y\}) \right|,
+$$
+
+which concludes the proof.
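As a numerical sanity check of the extremal equality of Hölder's inequality used above, one can verify on a finite truncation of $\mathbb{N}$ that the maximizer $f^* = \operatorname{sign}(\delta)\,|\delta|^{q-1}/\|\delta\|_q^{q/p}$ has unit $p$-norm and attains $\langle f^*, \delta\rangle = \|\delta\|_q$. The sketch below uses arbitrary test data and is illustrative only, not part of the proof:

```python
import numpy as np

# Finite-dimensional check of the Hölder extremal equality:
# sup_{||f||_p <= 1} <f, delta> = ||delta||_q, with 1/p + 1/q = 1.
rng = np.random.default_rng(0)
delta = rng.normal(size=50)                      # stand-in miscalibration values
p, q = 1.5, 3.0                                  # Hölder conjugates: 1/1.5 + 1/3 = 1

norm_q = np.sum(np.abs(delta) ** q) ** (1 / q)   # ||delta||_q
# Extremal function saturating Hölder's inequality:
f_star = np.sign(delta) * np.abs(delta) ** (q - 1) / norm_q ** (q / p)

norm_p = np.sum(np.abs(f_star) ** p) ** (1 / p)
assert np.isclose(norm_p, 1.0)                   # f* lies on the unit sphere of l^p
assert np.isclose(f_star @ delta, norm_q)        # <f*, delta> equals ||delta||_q
```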
+
+
+
+We see that our framework deals with countably infinite discrete target spaces seamlessly whereas the previously proposed framework by Widmann et al. (2019) is not applicable to such spaces. It is mathematically pleasing to see that for countably infinite discrete targets the calibration errors obtained in Theorem G.1 within our framework coincide with the natural generalization of $\mathrm{ECE}_d$ and $\mathrm{MCE}_d$ given in Appendix B.2.
\ No newline at end of file
diff --git a/calibrationtestsbeyondclassification/images.zip b/calibrationtestsbeyondclassification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e852c99d222c394dfa331c29cceb611865585311
--- /dev/null
+++ b/calibrationtestsbeyondclassification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee26b0ad7ae830dedefedb85669084c130458efa85eb55922269564c0057351a
+size 1871314
diff --git a/calibrationtestsbeyondclassification/layout.json b/calibrationtestsbeyondclassification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b2d56b95086f47c755ee369c66a7e389e9150a28
--- /dev/null
+++ b/calibrationtestsbeyondclassification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:939b3d73cb2f5b8b348e1bb011e74014e342faaea4b893cfcdf8db6dc8877f9d
+size 1694831
diff --git a/canafruitflylearnwordembeddings/d3d7ce30-8b27-408d-81ac-fc9bd526b14a_content_list.json b/canafruitflylearnwordembeddings/d3d7ce30-8b27-408d-81ac-fc9bd526b14a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..68d48b0f24704b5fc308d93a4b3704c2eed3ff96
--- /dev/null
+++ b/canafruitflylearnwordembeddings/d3d7ce30-8b27-408d-81ac-fc9bd526b14a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e7039addc649fcff55e5114f368d6f1a900e08c198f67fc9d1593f5fd978b84
+size 112568
diff --git a/canafruitflylearnwordembeddings/d3d7ce30-8b27-408d-81ac-fc9bd526b14a_model.json b/canafruitflylearnwordembeddings/d3d7ce30-8b27-408d-81ac-fc9bd526b14a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c08e77b14eefa46addee6800f3d6a1f11bcd5210
--- /dev/null
+++ b/canafruitflylearnwordembeddings/d3d7ce30-8b27-408d-81ac-fc9bd526b14a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:75ed1e9ff050dafb37cbc7c61a674d4d80df897c833bb16b89bc8cd6b4252f10
+size 131225
diff --git a/canafruitflylearnwordembeddings/d3d7ce30-8b27-408d-81ac-fc9bd526b14a_origin.pdf b/canafruitflylearnwordembeddings/d3d7ce30-8b27-408d-81ac-fc9bd526b14a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4cbfe263c4d96d74fbe6e664ebc2a553570a8ff1
--- /dev/null
+++ b/canafruitflylearnwordembeddings/d3d7ce30-8b27-408d-81ac-fc9bd526b14a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:28a60f12fcdd388202513fa68ece60f25af934da4032a94f2157f7e8c486ca15
+size 2603060
diff --git a/canafruitflylearnwordembeddings/full.md b/canafruitflylearnwordembeddings/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..928022755a707ad46d7e7b8a3975eccb4669ee43
--- /dev/null
+++ b/canafruitflylearnwordembeddings/full.md
@@ -0,0 +1,369 @@
+# CAN A FRUIT FLY LEARN WORD EMBEDDINGS?
+
+Yuchen Liang\* (RPI; MIT-IBM Watson AI Lab), liangy7@rpi.edu
+
+Chaitanya K. Ryali (Department of CS, UC San Diego), rckrishn@eng.ucsd.edu
+
+Benjamin Hoover (MIT-IBM Watson AI Lab; IBM Research), benjamin.hoover@ibm.com
+
+Leopold Grinberg (IBM Research), lgrinbe@ibm.com
+
+Saket Navlakha (Cold Spring Harbor Laboratory), navlakha@cshl.edu
+
+Mohammed J. Zaki (Department of CS, RPI), zaki@cs.rpi.edu
+
+Dmitry Krotov (MIT-IBM Watson AI Lab; IBM Research), krotov@ibm.com
+
+# ABSTRACT
+
+The mushroom body of the fruit fly brain is one of the best studied systems in neuroscience. At its core it consists of a population of Kenyon cells, which receive inputs from multiple sensory modalities. These cells are inhibited by the anterior paired lateral neuron, thus creating a sparse high dimensional representation of the inputs. In this work we study a mathematical formalization of this network motif and apply it to learning the correlational structure between words and their context in a corpus of unstructured text, a common natural language processing (NLP) task. We show that this network can learn semantic representations of words and can generate both static and context-dependent word embeddings. Unlike conventional methods (e.g., BERT, GloVe) that use dense representations for word embedding, our algorithm encodes semantic meaning of words and their context in the form of sparse binary hash codes. The quality of the learned representations is evaluated on word similarity analysis, word-sense disambiguation, and document classification. It is shown that not only can the fruit fly network motif achieve performance comparable to existing methods in NLP, but, additionally, it uses only a fraction of the computational resources (shorter training time and smaller memory footprint).
+
+# 1 INTRODUCTION
+
+Deep learning has made tremendous advances in computer vision, natural language processing and many other areas. While taking high-level inspiration from biology, the current generation of deep learning methods is not necessarily biologically realistic. This raises the question of whether biological systems can further inform the development of new network architectures and learning algorithms that can lead to competitive performance on machine learning tasks or offer additional insights into intelligent behavior. Our work is inspired by this motivation. We study a well-established neurobiological network motif from the fruit fly brain and investigate the possibility of reusing it for solving common machine learning tasks in NLP. We consider this exercise as a toy model example illustrating the possibility of "reprogramming" of naturally occurring algorithms and behaviors (clustering combinations of input stimuli from olfaction, vision, and thermo-hydro sensory system) into a target algorithm of interest (learning word embeddings from raw text) that the original biological organism does not naturally engage in.
+
+The mushroom body (MB) is a major area of the brain responsible for processing sensory information in fruit flies. It receives inputs from a set of projection neurons (PN) conveying information from several sensory modalities. The major modality is olfaction [2], but there are also inputs from the PN responsible for sensing temperature and humidity [29], as well as visual inputs [45; 6]. These sensory inputs are forwarded to a population of approximately 2000 Kenyon cells (KCs) through a set of synaptic weights [26]. KCs are reciprocally connected through an anterior paired lateral (APL) neuron, which sends a strong inhibitory signal back to KCs. This recurrent network effectively implements winner-takes-all competition between KCs, and silences all but a small fraction of top activated neurons [8]. This is the network motif that we study in this paper; its schematic is shown in Fig. 1. KCs also send their outputs to mushroom body output neurons (MBONs), but this part of the MB network is not included in our mathematical model.
+
+
+Figure 1: Network architecture. Several groups of PNs corresponding to different modalities send their activities to the layer of KCs, which are inhibited through the reciprocal connections to the APL neuron.
+
+Behaviorally, it is important for a fruit fly to distinguish sensory stimuli, e.g., different odors. If a fruit fly senses a smell associated with danger, it's best to avoid it; if it smells food, the fruit fly might want to approach it. The network motif shown in Fig. 1 is believed to be responsible for clustering sensory stimuli so that similar stimuli elicit similar patterns of neural responses at the level of KCs to allow generalization, while distinct stimuli result in different neural responses, to allow discrimination. Importantly, this biological network has evolved to accomplish this task in a very efficient way.
+
+In computational linguistics there is a long tradition [19] of using distributional properties of linguistic units for quantifying semantic similarities between them, as summarized in the famous quote by JR Firth: "a word is characterized by the company it keeps" [14]. This idea has led to powerful tools such as Latent Semantic Analysis [9], topic modelling [3], and language models like word2vec [30], GloVe [34], and, more recently, BERT [10], which relies on the Transformer model [44]. Specifically, word2vec models are trained to maximize the likelihood of a word given its context, GloVe models utilize global word-word co-occurrence statistics, and BERT uses a deep neural network with attention to predict masked words (and the next sentence). As such, all these methods utilize the correlations between individual words and their context in order to learn useful word embeddings.
+
+In our work we ask the following question: can the correlations between words and their contexts be extracted from raw text by the biological network of KCs, shown in Fig. 1? Further, how do the word representations learned by KCs differ from those obtained by existing NLP methods? Although this network has evolved to process sensory stimuli from olfaction and other modalities and not to "understand" language, it uses a general-purpose algorithm to embed inputs (from different modalities) into a high-dimensional space with several desirable properties, which we discuss below.
+
+Our approach relies on a recent proposal that the recurrent network of mutually inhibited KCs can be used as a "biological" model for generating sparse binary hash codes for the input data presented at the projection neuron layer [8]. It was argued that a matrix of random weights projecting from PN layer into the KCs layer leads to the highly desirable property of making the generated hash codes locality sensitive, i.e., placing similar inputs close to each other in the embedding space and pushing distinct stimuli far apart. A subsequent study [39] has demonstrated that the locality sensitivity of the hash codes can be significantly increased, compared to the random case, if the matrix of weights from PN to KCs is learned from data. The idea of using the network of KCs with random projections for NLP tasks has also been previously explored in [37], see discussion in section 6.
+
+Biologically, there is an ongoing debate in the neuroscience community regarding whether these projections are random. For instance, [5] argues for the random model, while [47] presents evidence of the non-random structure of this network, which is related to the frequency of presented odors. Since the goal of our work is to build a useful AI system and not mimic every detail of the biological system, we adopt the data-driven synaptic weight strategy even if fruit flies may use random projections. As is clearly demonstrated in [39], learned synapses lead to better performance.
+
+Our main contributions in this work are the following:
+
+1. Inspired by the fruit fly network, we propose an algorithm that makes it possible to generate binary (as opposed to continuous) word embeddings for words and their context. We systematically evaluate the performance of this algorithm on the word similarity task, word-sense disambiguation, and document classification.
+2. We demonstrate that our binary embeddings result in tighter and better separated clusters of concepts compared to continuous GloVe embeddings, and are on par with the clustering properties of binarized versions of GloVe.
+3. We show that training the fruit fly network requires an order of magnitude less compute time than training classical NLP architectures such as BERT, at the expense of a relatively small decrease in classification accuracy.
+
+# 2 LEARNING ALGORITHM
+
+Consider a training corpus. Each sentence can be decomposed into a collection of $w$ -grams of consecutive words. If the word tokens come from a predefined vocabulary of size $N_{\mathrm{voc}}$ , the input to the algorithm is a vector of size $2N_{\mathrm{voc}}$ . This vector consists of two blocks: the context (the first $N_{\mathrm{voc}}$ elements), and the target (the remaining $N_{\mathrm{voc}}$ elements); see Fig. 2. In this work $w$ is assumed to be an odd integer, and the target word is assumed to be the center of the $w$ -gram. The target word
+
+Apple stock rises on optimism for the new iPhone.
+
+
+Figure 2: The encoding method. The input vector consists of two blocks separated by the (thick) blue line. Assuming $w = 3$ , a center word "stock" is the target word and the two flanking words form a context. The $w$ -gram is highlighted in light blue.
+
+is one-hot encoded in the target block, and the context words are binary encoded as a bag of words in the context block (no positional information is used). The window $w$ slides along the text corpus, and for each position generates a training vector $\mathbf{v}^{\mathbf{A}} = \{v_{i}^{A}\}_{i=1}^{2N_{\mathrm{voc}}}$, where the index $A$ enumerates different $w$ -grams, and index $i$ enumerates positions in the context-target vector. These training vectors are passed to the learning algorithm. The goal of the algorithm is to learn correlations between the context and the target blocks.
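To make the encoding concrete, here is a minimal sketch in Python, assuming a hypothetical toy vocabulary (the actual model uses the $N_{\mathrm{voc}}$ most frequent corpus words):

```python
import numpy as np

# Sketch of the context-target encoding with a hypothetical toy vocabulary.
# The input vector has 2 * N_voc entries: a bag-of-words context block
# followed by a one-hot target block.
vocab = ["apple", "stock", "rises", "on", "optimism"]
word_to_idx = {w: i for i, w in enumerate(vocab)}
N_voc = len(vocab)

def encode(w_gram):
    """Encode a w-gram (w odd) with the center word as the target."""
    v = np.zeros(2 * N_voc, dtype=np.int8)
    center = len(w_gram) // 2
    for pos, word in enumerate(w_gram):
        if pos == center:
            v[N_voc + word_to_idx[word]] = 1   # one-hot target block
        else:
            v[word_to_idx[word]] = 1           # bag-of-words context block
    return v

v = encode(["apple", "stock", "rises"])        # w = 3, target word "stock"
assert v[:N_voc].sum() == 2 and v[N_voc:].sum() == 1
```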
+
+# 2.1 MATHEMATICAL FORMULATION
+
+Mathematically, the objective of the training algorithm is to distribute a set of context-target pairs among $K$ buckets, so that similar pairs end up in similar buckets. In order to achieve this, the learning algorithm takes two inputs: a set of training vectors $\mathbf{v}^{\mathbf{A}}\in \{0,1\}^{2N_{\mathrm{voc}}}$, and a vector of occurrence probabilities $\mathbf{p} = \{p_i = f_{(i \bmod N_{\mathrm{voc}})}\}_{i = 1}^{2N_{\mathrm{voc}}}\in \mathbb{R}^{2N_{\mathrm{voc}}}$, where $f_{j}$ is the probability of observing word $j$ in the training corpus$^1$. The learning can be formalized as a minimization of the energy function (see [39] for additional details) defined by
+
+$$
+E = - \sum_{A \in \mathrm{data}} \frac{\left\langle \mathbf{W}_{\hat{\mu}}, \mathbf{v}^{\mathbf{A}} / \mathbf{p} \right\rangle}{\left\langle \mathbf{W}_{\hat{\mu}}, \mathbf{W}_{\hat{\mu}} \right\rangle^{1/2}}, \quad \text{where} \quad \hat{\mu} = \underset{\mu}{\arg\max} \left\langle \mathbf{W}_{\mu}, \mathbf{v}^{\mathbf{A}} \right\rangle \tag{1}
+$$
+
+In this equation $\mathbf{W} \in \mathbb{R}^{K \times 2N_{\mathrm{voc}}}$ is a matrix of synaptic connections, given as $\mathbf{W} = \{\mathbf{W}_{\mu}\} = \{W_{\mu i}\}$, projecting from the PN layer (individual neurons in the layer are denoted by the index $i$) to the KC layer (individual neurons in the KC layer are denoted by the index $\mu$). There are $2N_{\mathrm{voc}}$ neurons in the PN layer and $K$ neurons in the KC layer. The inner product $\langle \mathbf{X}, \mathbf{Y} \rangle = \sum_{i=1}^{2N_{\mathrm{voc}}} X_i Y_i$ is defined as a contraction over the index $i$ of PN cells. In the numerator of the energy function the binary encoded $w$ -gram is divided element-wise by the probabilities of occurrences of individual words, so that the numerator can be written as
+
+$$
+\left\langle \mathbf{W}_{\hat{\mu}}, \mathbf{v}^{\mathbf{A}} / \mathbf{p} \right\rangle = \sum_{i=1}^{2N_{\mathrm{voc}}} W_{\hat{\mu} i} \frac{v_i^A}{p_i}
+$$
+
+Probabilities $\mathbf{p}$ are calculated based on the frequencies of words in the training corpus. The vocabulary contains the $N_{\mathrm{voc}}$ most frequent words in the corpus, thus all the elements $p_i$ are non-zero and the element-wise division is well defined.
+
+Intuitively, the goal of the training algorithm is to adjust the weights of the neural network so that they are aligned with $w$ -grams that are frequently present in the training corpus. We rely on the assumption that semantically related $w$ -grams share several "core" words, while a few individual words might be substituted by synonyms/antonyms. The minimization of the energy function (1) is accomplished by the iterative update of the weights satisfying the following learning rule [25; 39; 17]
+
+$$
+\Delta W_{\mu i} = \varepsilon\, g\!\left[\sum_j W_{\mu j} v_j^A\right] \left[\frac{v_i^A}{p_i} - \left(\sum_j W_{\mu j} \frac{v_j^A}{p_j}\right) W_{\mu i}\right] \tag{2}
+$$
+
+In this equation the activation function is equal to one for a maximally driven hidden unit (Kenyon cell), and is equal to zero otherwise
+
+$$
+g\left[x_{\mu}\right] = \delta_{\mu, \hat{\mu}}, \quad \text{where} \quad \hat{\mu} = \underset{\mu}{\arg\max} \left[x_{\mu}\right] \tag{3}
+$$
+
+The learning rate is denoted by $\varepsilon$ , and $\delta_{\mu, \hat{\mu}}$ is a Kronecker delta symbol.
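A minimal sketch of one application of the update rule (2), under the assumption of a single training vector per step; the names (`update_step`, `W`, `v`, `p`) are ours, not from the authors' code:

```python
import numpy as np

# One step of update rule (2): only the maximally driven Kenyon cell mu_hat
# is updated, per the activation function (3). W has shape (K, 2*N_voc),
# p holds the occurrence probabilities p_i, eps is the learning rate.
def update_step(W, v, p, eps=0.01):
    mu = int(np.argmax(W @ v))          # mu_hat = argmax_mu <W_mu, v^A>
    vp = v / p                          # element-wise division v^A / p
    W[mu] += eps * (vp - (W[mu] @ vp) * W[mu])
    return mu

rng = np.random.default_rng(0)
K, n = 8, 20                            # K Kenyon cells, n = 2 * N_voc inputs
W = rng.random((K, n))
p = np.full(n, 0.05)                    # toy uniform word probabilities
v = np.zeros(n)
v[[0, 3, 7]] = 1.0                      # a binary context-target vector
W_before = W.copy()
mu_hat = update_step(W, v, p)
changed = np.where(np.any(W != W_before, axis=1))[0]
assert list(changed) == [mu_hat]        # only the winning KC row moved
```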
+
+# 2.2 BIO-HASHING
+
+After learning is complete, the hash codes for the inputs can be generated in the following way. Given the binary encoded $w$ -gram $\mathbf{v}^{\mathbf{A}}$,
+
+$$
+H_{\mu} = \begin{cases} 1, & \text{if } \langle \mathbf{W}_{\mu}, \mathbf{v}^{\mathbf{A}} \rangle \text{ is in the top } k \text{ of all KC activations} \\ 0, & \text{otherwise} \end{cases} \tag{4}
+$$
+
+This is a crude mathematical approximation of the biological computation performed by the PN-KC-APL neural network [8; 39]. An input $\mathbf{v}^{\mathbf{A}}$ generates an input current $\langle \mathbf{W}_{\mu}, \mathbf{v}^{\mathbf{A}} \rangle$ into the KC neurons using feedforward weights $W_{\mu i}$ . The recurrent network of KCs and the APL neuron silences all but a small fraction of KCs. Those cells that remain active are assigned state 1, while the rest of the KCs are assigned the inactive state 0.
+
+Notice that equation (4) makes it possible to generate the hash codes for both individual words (static word embeddings like word2vec and GloVe) and phrases (similar to Transformer models). In the static case, the input $\mathbf{v}^{\mathbf{A}}$ has all zeros in the context block and a one-hot encoded word in the target block. In the context-dependent case, both blocks have binary encoded input words. Importantly, both context-dependent and static embeddings are mapped into the same space of sparse binary hash codes (a vector of $K$ elements, with $k$ ones in it). We show below that these hash codes capture the semantic meaning of the target word and the context in which it is used. For the rest of the paper we refer to the parameter $k$ in equation (4) as the hash length.
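The hashing rule (4) can be sketched as a top-$k$ winner-take-all over KC activations (hypothetical layer sizes, not the authors' implementation):

```python
import numpy as np

# Hashing rule (4): assign 1 to the k most strongly driven Kenyon cells.
def hash_code(W, v, k):
    activations = W @ v                      # input currents <W_mu, v^A>
    h = np.zeros(W.shape[0], dtype=np.int8)
    h[np.argsort(activations)[-k:]] = 1      # top-k winner-take-all
    return h

rng = np.random.default_rng(1)
W = rng.random((400, 100))                   # K = 400 KCs, 2*N_voc = 100 (toy sizes)
v_static = np.zeros(100)
v_static[60] = 1                             # one-hot target word, empty context block
h = hash_code(W, v_static, k=51)
assert int(h.sum()) == 51                    # sparse binary code with exactly k ones
```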
+
+In order to provide an intuition behind the learning algorithm defined by the energy function (1) and weight update rule (2), and to connect it to some of the existing methods in machine learning, consider the limit when all the words have equal probabilities in the training corpus, $p_i = \frac{1}{N_{\mathrm{voc}}}$. In this limit the energy function (1) reduces to the familiar spherical $K$ -means clustering algorithm [11], and the weights of each KC correspond to the centroids of the clusters of context-target vectors. The hashing rule (4) assigns the active state 1 to the $k$ closest centroids (and the inactive state 0 to the remaining ones), defined with respect to cosine similarity. In this simple limit the learning algorithm that we use can be viewed as a biologically plausible implementation of this classical algorithm. For real datasets the probabilities of words differ, so this correspondence does not hold. Notice that division by the probability appears only in the expression for the energy, but not in the definition of $\hat{\mu}$ in equation (1). Equivalently, division by $p_i$ appears in the second bracket of equation (2), but not in the argument of the activation function $g[x_{\mu}]$. Thus, in the general case (for different word probabilities $p_i$) our algorithm is not equivalent to spherical $K$ -means on context-target vectors rescaled by the probabilities. Rather, in the general case, the closest centroid is found for a given context-target vector (via the definition of $\hat{\mu}$ in equation (1), with no $p_i$ involved), but the updates of the position of that centroid are computed by enhancing the contributions of rare words (small $p_i$) and suppressing the contributions of frequent words (large $p_i$). Empirically, we have found that division by the probabilities improves the performance of our method compared to spherical $K$ -means (i.e., when the factor $1 / \mathbf{p}$ is removed from the algorithm).
+
+# 3 EMPIRICAL EVALUATION
+
+The KC network shown in Fig. 1 was trained on the OpenWebText Corpus [15], which is a 32GB corpus of unstructured text containing approximately 6B tokens. The details of the training protocols and the hyperparameters are reported in section 7 in the supplement.
+
+# 3.1 STATIC WORD EMBEDDINGS EVALUATION
+
+Our aim here is to demonstrate that the sparse embeddings obtained by the fruit fly network motif are competitive with existing state-of-the-art word embeddings such as GloVe [34] and word2vec [30], as well as with commonly used binarization tools for these continuous embeddings. We show this by evaluating the semantic similarity of static word embeddings. Several common benchmark datasets are used: WS353 [13], MEN [4], RW [28], SimLex [21], RG-65 [38], Mturk [18]. These datasets contain pairs of words with human-annotated similarity scores between them. Following previous work [43; 42], the model similarity score for binary representations is evaluated as $\text{sim}(v_1, v_2) = (n_{11} + n_{00}) / n$, where $n_{11}$ ( $n_{00}$ ) is the number of bits in $v_1$ and $v_2$ that are both 1 (0), and $n$ is the length of $v_{1,2}$. Cosine similarity is used for real-valued representations. Spearman's correlation coefficient is calculated between this similarity and the human-annotated score. The results are reported in Table 1.
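The binary similarity score above is simply the fraction of matching bits (the simple matching coefficient), which a short sketch makes explicit:

```python
import numpy as np

# sim(v1, v2) = (n11 + n00) / n: the fraction of positions where the two
# binary codes agree (both 1 or both 0).
def binary_sim(v1, v2):
    v1, v2 = np.asarray(v1), np.asarray(v2)
    return float(np.mean(v1 == v2))

# Bits agree at positions 0 and 2, disagree at positions 1 and 3 -> 0.5.
assert binary_sim([1, 0, 1, 1], [1, 1, 1, 0]) == 0.5
assert binary_sim([0, 0, 1], [0, 0, 1]) == 1.0
```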
+
+| Dataset | Ours | GloVe | word2vec | SOTA |
+| --- | --- | --- | --- | --- |
+| MEN | 56.6 | 69.5 | 75.5 | 81.3 [12] |
+| WS353 | 63.7 | 64.0 | 66.5 | 81.0 [18] |
+| SIMLEX | 21.0 | 31.5 | 41.7 | 56.0 [40] |
+| RW | 39.4 | 46.8 | 61.3 | 61.7 [36] |
+| RG | 69.0 | 74.2 | 75.4 | 83.3 [20] |
+| Mturk | 56.1 | 57.5 | 69.8 | 72.7 [18] |
+
+Table 1: Evaluation on word similarity datasets via Spearman's rank correlation coefficient. Both GloVe and word2vec use 300d pretrained embeddings. Hyperparameter settings for our model: $K = 400$ , $w = 11$ . Results for our algorithm are reported only for a fixed hash length, $k = 51$ . See Table 7 for results as a function of hash length.
+
+We observe that our word embeddings demonstrate competitive performance compared to GloVe, but worse performance than word2vec. At the same time, our embeddings are binary, as opposed to GloVe and word2vec, which are represented by continuous vectors. Thus, it is more appropriate to compare them with commonly used binarized versions of the continuous embeddings. Specifically, we compare the performance of fruit fly embeddings with a number of state-of-the-art binarization methods: LSH/SimHash [7] (random contractive projections followed by binarization based on sign), RandExp [8] (random expansive projections followed by $k$ -winner-take-all binarization), ITQ [16] (iterative quantization), SH [46] (spectral hashing), and PCAH [16] (PCA followed by binarization based on sign). The complete evaluation of all these methods for varying hash length is presented in Section 8; please see Tables 7, 8, 9 for binarization of pretrained GloVe, pretrained word2vec, and GloVe trained on OpenWebText. In Table 7 we also include evaluation of NLB, "Near-Lossless Binarization" [43] (autoencoder-based binarization), for the hash lengths where those results are available. Here we only present a short summary of those results for a specific (small) hash length $k = 4$ in Table 2.
+
+| Dataset | Ours | LSH | RandExp | ITQ | SH | PCAH |
+| --- | --- | --- | --- | --- | --- | --- |
+| MEN | 34.0 | 16.9/35.5/23.6 | 27.5/24.2/28.4 | 0.1/9.2/26.9 | 9.4/7.2/23.8 | 12.5/5.3/26.0 |
+| WS353 | 43.2 | 8.2/26.0/20.2 | 20.9/23.5/30.5 | -6.6/16.0/25.9 | 15.4/3.3/18.1 | 6.4/17.3/21.2 |
+| SIMLEX | 13.4 | 6.8/17.0/8.0 | 10.4/17.6/10.1 | 7.0/3.3/7.3 | 9.3/-3.6/12.1 | 4.4/-2.9/11.5 |
+| RW | 11.0 | 10.8/21.8/16.2 | 19.9/24.7/22.0 | 13.7/17.4/24.5 | 22.6/14.6/19.7 | 12.4/15.0/19.7 |
+| RG | 24.0 | 21.2/44.6/25.5 | 36.6/30.4/28.7 | -17.5/32.8/21.4 | 4.5/18.0/39.8 | 1.9/20.8/45.0 |
+| Mturk | 44.0 | 16.0/33.1/18.3 | 29.3/22.7/28.3 | 9.9/22.5/26.3 | 18.9/21.9/20.3 | 15.5/23.6/24.9 |
+
+Table 2: Comparison to common binarization methods. This table is a simplified version (for hash length $k = 4$ ) of the complete evaluation for a range of hash lengths reported in Tables 7, 8, 9. Each binarization technique was evaluated on three continuous embeddings: pretrained GloVe, pretrained word2vec, GloVe trained on OpenWebText (the same dataset that was used for training our fruit fly embeddings), format: pretrained GloVe/ pretrained word2vec/ GloVe on OWT. Hyperparameter settings for our model: $K = 400$ , $w = 11$ . Best result in bold; second best underlined.
+
+It is clear from Table 2 that fruit fly embeddings outperform existing methods for word embedding discretization on WS353 and Mturk, and demonstrate the second-best result (after LSH binarization of word2vec) on MEN. In general (see Tables 7, 8, 9), we find that fruit fly embeddings are particularly powerful compared to existing methods at small hash lengths (see $k = 4, 8$ in the aforementioned tables). These results indicate that the fruit fly network can learn meaningful binary semantic representations directly from raw text. We also note that an added advantage of binary embeddings is that they require only a fraction (approx. $3\%$ ) of the memory footprint required for continuous word embeddings (assuming they have the same length), since a real value requires 32 bits per vector element, whereas a boolean value requires only 1 bit.
+
+# 3.2 WORD CLUSTERING
+
+A nice aspect of binary embeddings is that they result in tighter and better separated clusters than continuous embeddings. To evaluate this property for our method, we started with hash codes for individual words and performed agglomerative clustering via complete linkage, using the cosine distance as the metric. The clustering algorithm was terminated at 200 clusters (we experimented with possible choices of this parameter, such as 200, 500, 1000, 2000, 3000, 5000, and arrived at similar conclusions). We repeated the same analysis for continuous GloVe, a binarization of GloVe embeddings via an autoencoder-like method [43], and a simple discretization of GloVe in which the largest $k$ elements of each word vector are declared to be 1 and the remaining elements are assigned 0 (for $k = 50, 75, 120, 200$ ). The results for the inter-cluster similarity vs. intra-cluster similarity are shown in Fig. 3 (panel A). It is clear from this scatter plot that the average distance between the points within a cluster is smaller (higher similarity) for all considered binary embeddings compared to GloVe embeddings. At the same time, the distance between the closest clusters is larger or equal (smaller similarity) for the fruit fly embeddings and naive discretizations with $k \ll 120$. We also observe that the clusters lose detail (i.e., both intra- and inter-cluster similarity increase) as the binarization threshold gets higher (shown for GloVe). However, our embeddings maintain a balance between intra- and inter-cluster similarity, and thus still capture fine-grained cluster information. For instance, inspecting the semantic structure of the clusters obtained this way, an example of the hierarchical clustering diagram (lower part of the tree containing 42 leaves) is shown in Fig. 3 (panel B). We clearly observe semantically coherent clusters resulting from the fruit fly word embeddings.
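The clustering procedure can be sketched with SciPy's hierarchical clustering, here on illustrative random sparse codes rather than the trained embeddings:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Complete-link agglomerative clustering of binary hash codes under cosine
# distance, cut at a fixed number of clusters (toy data: 100 sparse codes
# of dimension K = 400, ~13% active bits each).
rng = np.random.default_rng(0)
codes = (rng.random((100, 400)) < 0.13).astype(float)

Z = linkage(codes, method="complete", metric="cosine")
labels = fcluster(Z, t=10, criterion="maxclust")   # terminate at 10 clusters
assert labels.shape == (100,) and len(set(labels.tolist())) <= 10
```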
+
+# 3.3 CONTEXT-DEPENDENT WORD EMBEDDINGS
+
+Here, we evaluate the effectiveness of our fruit fly inspired approach for contextual word embeddings, as opposed to static (or context-independent) embeddings from above. We use the WiC [35]
+
+
+Figure 3: Panel A: average cosine similarity between the points within a cluster vs. maximum cosine similarity (minimal distance) to a point from the closest cluster. Solid lines correspond to mean $\pm$ std for the individual clusters. Numbers next to GloVe in the legend correspond to the number of largest elements in the word vector that are mapped to 1 under the naive discretization procedure. Panel B: an example of a cluster generated by the agglomerative clustering for our method; the integer number associated with each node corresponds to the number of daughter leaves in that cluster. The root node corresponds to "interchange (42)".
+
+
+Figure 4: For every word (highlighted in green) in context (left), 10 nearest neighbor words in the binary hashing space are shown (right). Context allows the algorithm to disambiguate the target word's meaning.
+
+and SCWS [22] benchmarks for the evaluation of context-sensitive word embeddings for word sense disambiguation. Both datasets comprise pairs of sentences containing a target word, and the task is to determine whether the two target words share a similar semantic meaning in their corresponding contexts. The WiC dataset is modeled as a binary prediction task, with 1 denoting that the target words have the same sense, and 0 indicating that they have different senses. The SCWS dataset is modeled as a rank prediction task, since for each pair of sentences and target words it reports an average human similarity score (from 10 Amazon Mechanical Turk workers per pair).
+
+| word in context | 10 nearest neighbor words in the hash code space |
+| design of apple latest iphone | design, web, features, graphics, radeon, android, apps, ios, apple, nvidia |
+| filling sweet apple pie recipe | chocolate, sweet, crispy, noodles, syrup, coconut, cheese, sauce, butter, cinnamon |
+| money in bank checking account | money, credit, loans, account, cash, funds, savings, paying, banks, pension |
+| boat on the bank of the river | river, lake, near, island, creek, canyon, valley, mountains, area, shore |
+
+Before presenting quantitative results, we qualitatively examine how the fruit fly network performs on context sentence pairs for target words "apple" and "bank" in Fig. 4. We show the top $q = 10$ nearest neighbor words for the context dependent target word. These examples clearly indicate that the "correct" sense of the word has been found ("apple" the device manufacturer has different nearest neighbors from the fruit, and "bank" the financial institution from the natural feature).
+
+For the quantitative comparison, we contrast our method against contextual embeddings from BERT [10], GloVe [34], word2vec [30] and Word2Sense [33]. For BERT we use the 768-dimensional embeddings from the uncased-large model, for GloVe and word2vec we use the 300-dimensional embeddings, and for Word2Sense we use the sparse 2250-dimensional pretrained embeddings. Since BERT outputs contextual embeddings for each word in a sentence, we simply compute the cosine similarity between the embedding vectors of the target words for each pair of instances. For GloVe and word2vec, we take a context window of size $w$ centered at each target word, compute the average embedding over each window, and compute the cosine similarity between the two window vectors. A similar approach is used for Word2Sense, except that the similarity between two embeddings is based on the Jensen-Shannon divergence [33]. For the fruit fly network, given the effectiveness of the top-$q$ nearest neighbor words (as seen in Fig. 4), we devise a two-component scoring function. The first component, denoted $J_{\text{dot}}$, is the dot product between the context-dependent hash codes for the two target words together with their $w$-length context blocks. The second, denoted $J_{\text{nn}}$, is the number of common contextual nearest neighbors of the two target words among the top-$q$ neighbors of each, scaled to lie between 0 and 1. The final score is $J = \alpha \cdot J_{\text{dot}} + (1 - \alpha) \cdot J_{\text{nn}}$, where $\alpha \in [0,1]$ is a hyperparameter. For all methods, we predict a WiC pair to be positive if the score exceeds a threshold value $\theta$. For SCWS, the ranking is proportional to the scores above $\theta$, with the rest scored as zero. The threshold $\theta$ is tuned for each method independently. Finally, for a fair comparison, all methods use the same 20k-word vocabulary.
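A minimal sketch of this two-component score follows; the normalization of $J_{\text{dot}}$ by the number of active bits and the representation of neighbors as id lists are our own illustrative choices, not specified in the text:

```python
import numpy as np

def score_pair(h1, h2, nn1, nn2, q=10, alpha=0.5):
    """Combined score J = alpha * J_dot + (1 - alpha) * J_nn.

    h1, h2   : binary (0/1) context-dependent hash codes
    nn1, nn2 : ids of the top-q nearest-neighbor words of each target
    """
    # J_dot: overlap of the two hash codes, normalized here by the larger
    # number of active bits so that it lies in [0, 1] like J_nn
    j_dot = int(h1 @ h2) / max(int(h1.sum()), int(h2.sum()), 1)
    # J_nn: fraction of shared words among the top-q neighbors
    j_nn = len(set(nn1[:q]) & set(nn2[:q])) / q
    return alpha * j_dot + (1 - alpha) * j_nn

h1 = np.array([1, 0, 1, 1, 0])
h2 = np.array([1, 0, 1, 0, 0])
s = score_pair(h1, h2, nn1=[3, 5, 7, 9], nn2=[5, 9, 11, 2], q=4, alpha=0.5)
assert abs(s - (0.5 * 2 / 3 + 0.5 * 0.5)) < 1e-9
```

A WiC pair would then be predicted positive when this score exceeds the tuned threshold $\theta$.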
+
+We report the performance of our context-dependent word embeddings for both SCWS and WiC in Table 3 and Table 4, respectively. For both benchmarks we report the results from a 5-fold cross-validation study, where each fold (in turn) is used as a development set, and the remaining four folds as the test set. We select the optimal hyperparameters (including $\theta, \alpha, q, k, w$ ) for all the methods using only the first fold; no training is done since we evaluate only the pretrained embeddings. The tables report the Spearman rank correlation on SCWS, and the accuracy on WiC.
+
+| Method | mean | std |
+| BERT | 56.8 | 0.54 |
+| word2vec (w=0) | 56.7 | 0.005 |
+| GloVe (w=3) | 40.9 | 1.3 |
+| GloVe (w=0) | 54.4 | 0.10 |
+| Word2Sense (w=3) | 41.4 | 0.01 |
+| Word2Sense (w=0) | 54.2 | 0.008 |
+| Ours (w=0) | 49.1 | 0.36 |
+
+Table 3: SCWS dataset: mean and std for Spearman rank correlation. The best window value is also shown.
+
+| Method | mean | std |
+| BERT | 61.2 | 0.22 |
+| word2vec (w=5) | 51.3 | 0.004 |
+| word2vec (w=0) | 50.0 | 0.003 |
+| GloVe (w=7) | 54.9 | 0.26 |
+| GloVe (w=0) | 50.1 | 0.25 |
+| Word2Sense (w=7) | 56.5 | 0.004 |
+| Word2Sense (w=0) | 50.0 | 0.003 |
+| Ours (w=21) | 57.7 | 0.27 |
+
+On SCWS (Table 3), we see that the context-independent embeddings (using $w = 0$) are better for GloVe, Word2Sense and our method, with word2vec yielding the best results. The reason is that about $86.5\%$ of the word pairs in SCWS are pairs of different words, which can be distinguished without looking at the context. Unlike SCWS, the WiC benchmark uses the same target word (with only minor variations in some cases) in both contexts, so a context-independent approach is not expected to perform well. Indeed, on WiC (Table 4), we clearly observe that context-independent vectors $(w = 0)$ are not very good, and our method, which uses the joint scoring function $J$ combining both the hash code and nearest neighbor scores, is better than context-dependent GloVe $(w = 7)$, word2vec $(w = 5)$ and Word2Sense (also $w = 7$).
+
+Table 4: WiC dataset: mean and std for accuracy. The best window value is also shown.
+
+| Dataset | Ours | GloVe | NLB (256 bits) | NLB (512 bits) | word2vec | BERT |
+| 20Newsgroup | 78.2 | 77.9 | 61.6 | 64.1 | 77.3 | 78.6 |
+| SST-2 | 77.1 | 78.3 | 76.3 | 78.6 | 80.7 | 90.8 |
+| WOS-11967 | 83.8 | 84.2 | 70.6 | 72.8 | 84.8 | 86.7 |
+| TREC-6 | 90.4 | 89.0 | 85.2 | 88.8 | 90.9 | 94.0 |
+
+Table 5: Accuracy for document classification task. We use 300d pretrained models for GloVe and word2vec, and pretrained bert-large-uncased model for BERT. For NLB, 300d GloVe embeddings were binarized into 256 and 512 bits. For our model, hash length 30 is used. For fair comparison, all models use the same vocabulary of 20k words.
+
+# 3.4 DOCUMENT CLASSIFICATION
+
+We also compare our binary embeddings with GloVe [34], word2vec [31], BERT [10] and Near-Lossless Binarization [43] on document classification tasks. The benchmarks we use are the 20 Newsgroups [1], Stanford Sentiment Treebank [41], WOS-11967 [24] and TREC-6 [27] datasets. The 20 Newsgroups dataset contains around 18,000 documents, partitioned evenly into 20 different groups; the Stanford Sentiment Treebank dataset contains movie reviews labeled with positive or negative sentiment; the WOS-11967 dataset contains 11,967 documents in 35 categories grouped under 7 parent categories; and the TREC-6 dataset consists of open-domain, fact-based questions divided into broad semantic categories. We use the TextCNN [23] classifier on top of each of the embeddings mentioned above. For a fair comparison, we use the same model parameters (e.g., kernel size, filter dimension) while testing different embeddings. The results in Table 5 show that our sparse binary embeddings are competitive with the other methods.
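For concreteness, here is a self-contained NumPy sketch of a TextCNN-style forward pass (embedding lookup, convolutions of several widths over the sequence, ReLU, max-over-time pooling, linear classifier); the toy dimensions are illustrative, not the hyperparameters used in our experiments:

```python
import numpy as np

def textcnn_forward(token_ids, emb, filters, W_out, b_out):
    """Minimal TextCNN [23] forward pass: embed tokens, apply 1-D
    convolutions of several widths over the sequence, take
    max-over-time, classify the concatenated pooled features."""
    x = emb[token_ids]                      # (seq_len, emb_dim)
    pooled = []
    for F in filters:                       # F: (width, emb_dim, n_maps)
        width = F.shape[0]
        n_pos = x.shape[0] - width + 1
        # convolution over time: one dot product per window position
        conv = np.stack([np.einsum('we,wem->m', x[i:i + width], F)
                         for i in range(n_pos)])          # (n_pos, n_maps)
        pooled.append(np.maximum(conv, 0.0).max(axis=0))  # ReLU + max-pool
    feats = np.concatenate(pooled)
    logits = feats @ W_out + b_out
    return logits

rng = np.random.default_rng(1)
vocab, emb_dim, n_classes = 100, 16, 2
emb = rng.normal(size=(vocab, emb_dim))     # any pretrained embedding table
filters = [rng.normal(size=(w, emb_dim, 4)) * 0.1 for w in (3, 4, 5)]
W_out = rng.normal(size=(12, n_classes)) * 0.1
b_out = np.zeros(n_classes)
logits = textcnn_forward(rng.integers(0, vocab, size=20), emb,
                         filters, W_out, b_out)
assert logits.shape == (n_classes,)
```

In our experiments the embedding table `emb` is the only part that changes between methods; binary codes simply enter as 0/1 rows.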
+
+| device | K | batch-size | GPU mem | time |
+| V100 × 3 | 400 | 2000 × 3 | 122MB | 17m |
+| V100 × 3 | 400 | 10000 × 3 | 150MB | 8m |
+| V100 × 3 | 600 | 2000 × 3 | 232MB | 24m |
+| V100 × 3 | 600 | 10000 × 3 | 267MB | 11.5m |
+| CPU (44 cores) | 400 | 2000 | - | 76m |
+| CPU (44 cores) | 400 | 10000 | - | 25m |
+
+Table 6: Training time (per epoch) and memory footprint of our method on GPUs and CPUs. For the GPU implementation, three V100 GPUs interconnected with 100GB/s (bidirectional) NVLink were used. For the CPU implementation, the computation was done on two 22-core CPUs. CPU memory is 137GB. The results are reported for window $w = 11$ .
+
+
+Figure 5: Spearman's correlation on word similarity datasets (see Section 3.1) vs. training time. Each point is one epoch.
+
+# 4 COMPUTATIONAL COMPLEXITY
+
+The computational complexity of our method can be evaluated by analyzing equations (2,3) for the weight updates. In these equations $\mathbf{v}^{\mathbf{A}}$ is a sparse vector, which has only $w$ non-zero elements in it. Thus, for a minibatch of size $|BS|$, the computational complexity of evaluating the dot product with the weights is $K \cdot w \cdot |BS|$. Additionally, the argmax operation requires $K \cdot |BS|$ operations. We assume that the largest parameters in our model are the size of the corpus $|A| \approx 10^{10}$ and the size of the vocabulary $N_{\mathrm{voc}} \approx 10^{4} - 10^{5}$, and that we use large minibatches $|BS| \approx N_{\mathrm{voc}}$. Calculation of the second term in (2) requires $O(K \cdot N_{\mathrm{voc}})$ operations per minibatch, in addition to the $K \cdot w \cdot |BS|$ operations for calculating the dot products. Since the algorithm has to go over the entire corpus, this computation is repeated $|A| / |BS|$ times per epoch. Thus, the overall computational complexity of our method is $O\left(K \cdot |A| \left(w + N_{\mathrm{voc}} / |BS|\right)\right) \approx K \cdot |A| \cdot w$ per epoch. In the leading order, this does not grow with the size of the vocabulary, which is a desirable computational feature.
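The estimate above can be checked numerically; the helper below tallies the three operation counts (constant factors ignored, so these are relative rather than wall-clock costs), confirming that the $K \cdot w \cdot |A|$ term dominates for the stated parameter sizes:

```python
def ops_per_epoch(K, w, corpus_size, vocab_size, batch_size):
    """Leading-order operation count per epoch: K*w dot-product work
    and K argmax work per w-gram, plus a K*N_voc update term paid
    once per minibatch."""
    n_batches = corpus_size // batch_size
    dot = K * w * corpus_size            # K·w·|BS| per batch, |A|/|BS| batches
    argmax = K * corpus_size
    update = K * vocab_size * n_batches  # second term of (2), per minibatch
    return dot + argmax + update

# with |A| = 1e10 and N_voc = |BS| = 2e4, the K·w·|A| term dominates
total = ops_per_epoch(K=400, w=11, corpus_size=10**10,
                      vocab_size=20_000, batch_size=20_000)
assert total / (400 * 11 * 10**10) < 1.2   # subleading terms are < 20% here
```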
+
+From the practical perspective, typical wall-clock training time and memory requirements per epoch are shown in Table 6. As shown in Fig. 5, accurate solutions are obtained after about $2 - 3$ epochs; improvements beyond that are relatively small. Thus, our algorithm is capable of producing competitive models in a couple of hours. Contrast this with approximately 24 hours of training time for GloVe [34]; 4 days of training on 16 TPUs for $\mathrm{BERT}_{\mathrm{BASE}}$; and 4 days on 64 TPUs for $\mathrm{BERT}_{\mathrm{LARGE}}$ [10] (the last two numbers assume a training corpus of 250B tokens, vs. the 6B tokens considered in this paper). The record-breaking training time of 47 minutes for BERT requires 1472 NVIDIA V100 GPUs, each with 32GB of memory, and a specialized DGX server architecture [32]. In our own experiments, we trained GloVe embeddings on the OWT corpus using the same vocabulary of 20k words that we used for the fruit fly embeddings; the wall-clock training time was approximately 10 hours on 16 threads, see details in Section 11. These are substantially larger computational resources than those required for training the fruit fly network.
+
+# 5 DISCUSSION AND CONCLUSIONS
+
+In this work we asked the intriguing question of whether the core computational algorithm of one of the best studied networks in neuroscience – the network of KCs in the fruit fly brain – can be repurposed for solving a well defined machine learning task, namely learning word embeddings from text. We have shown that, surprisingly, this network can indeed learn the correlations between words and their context, and produce high quality word embeddings. On the semantic similarity task the fruit fly word embeddings outperform common methods for binarizing continuous SOTA word embeddings (applied to GloVe, word2vec, and GloVe trained on OWT) at small hash lengths. On the word-in-context task the fruit fly network outperforms GloVe by almost $3\%$ and word2vec by more than $6\%$, but loses to BERT by $3.5\%$, see Table 4. The small gap in classification accuracy compared with BERT, however, is outweighed by the significantly smaller computational resources required to obtain the fruit fly embeddings, as we have explained in Section 4, see Table 6. We view this result as an instance of a general statement: biologically inspired algorithms can be more compute-efficient than their classical (non-biological) counterparts, even if they slightly lose in terms of accuracy.
+
+# REFERENCES
+
+[1] 20NewsGroups. 20 newsgroups dataset, 1995. URL http://people.csail.mit.edu/jrennie/20Newsgroups/.
+[2] Alexander Shakeel Bates, Philipp Schlegel, Ruairí JV Roberts, Nikolas Drummond, Imaan FM Tamimi, Robert Gillies Turnbull, Xincheng Zhao, Elizabeth C Marin, Patricia Demetria Popovici, Serene Dhawan, et al. Complete connectomic reconstruction of olfactory projection neurons in the fly brain. bioRxiv, 2020.
+[3] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993-1022, 2003.
+[4] Elia Bruni, Nam Khanh Tran, and Marco Baroni. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49(1):1-47, January 2014. ISSN 1076-9757.
+[5] Sophie JC Caron, Vanessa Ruta, LF Abbott, and Richard Axel. Random convergence of olfactory inputs in the drosophila mushroom body. Nature, 497(7447):113-117, 2013.
+[6] Sophie Jeanne Cecile Caron, Jinzhi Li, Brennan Dale Mahoney, and Miles Solomon Jacob. Two parallel pathways convey distinct visual information to the drosophila mushroom body. bioRxiv, 2020.
+[7] Moses S. Charikar. Similarity Estimation Techniques from Rounding Algorithms. In Annual ACM Symposium on Theory of Computing, pp. 380-388, 2002. ISBN 978-1-58113-495-7. doi: 10.1145/509907.509965.
+[8] Sanjoy Dasgupta, Charles F. Stevens, and Saket Navlakha. A neural algorithm for a fundamental computing problem. Science, 358(6364):793-796, 2017.
+[9] Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American society for information science, 41(6):391-407, 1990.
+[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
+[11] Inderjit S Dhillon and Dharmendra S Modha. Concept decompositions for large sparse text data using clustering. Machine learning, 42(1-2):143-175, 2001.
+[12] András Dobó. A comprehensive analysis of the parameters in the creation and comparison of feature vectors in distributional semantic models for multiple languages. PhD thesis, szte, 2019.
+[13] Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. Placing search in context: The concept revisited. ACM Trans. Inf. Syst., 20 (1), 2002. ISSN 1046-8188.
+[14] John R Firth. A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis, 1957.
+[15] Aaron Gokaslan and Vanya Cohen. OpenWebText Corpus. http://Skylion007.github.io/OpenWebTextCorpus, 2019.
+[16] Yunchao Gong and Svetlana Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In CVPR 2011, pp. 817-824. IEEE, June 2011. ISBN 978-1-4577-0394-2. doi: 10.1109/CVPR.2011.5995432.
+[17] Leopold Grinberg, John Hopfield, and Dmitry Krotov. Local unsupervised learning for image analysis. arXiv preprint arXiv:1908.08993, 2019.
+[18] Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. Large-scale learning of word relatedness with constraints. KDD, pp. 1406-1414, 2012.
+[19] Zellig S Harris. Distributional structure. Word, 10(2-3):146-162, 1954.
+
+[20] Samer Hassan Hassan and Rada Mihalcea. Semantic relatedness using salient semantic analysis. In Twenty-Fifth AAAI Conference on Artificial Intelligence. Citeseer, 2011.
+[21] Felix Hill, Roi Reichart, and Anna Korhonen. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695, December 2015.
+[22] Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. Improving Word Representations via Global Context and Multiple Word Prototypes. In Annual Meeting of the Association for Computational Linguistics (ACL), 2012.
+[23] Yoon Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014.
+[24] Kamran Kowsari, Donald E Brown, Mojtaba Heidarysafa, Kiana Jafari Meimandi, Matthew S Gerber, and Laura E Barnes. Hdltex: Hierarchical deep learning for text classification. In Machine Learning and Applications (ICMLA), 2017 16th IEEE International Conference on. IEEE, 2017.
+[25] Dmitry Krotov and John J. Hopfield. Unsupervised learning by competing hidden units. Proceedings of the National Academy of Sciences, 116(16):7723-7731, 2019.
+[26] Feng Li, Jack Lindsey, Elizabeth C Marin, Nils Otto, Marisa Dreher, Georgia Dempsey, Ildiko Stark, Alexander Shakeel Bates, Markus William Pleijzier, Philipp Schlegel, et al. The connectome of the adult drosophila mushroom body: implications for function. bioRxiv, 2020.
+[27] Xin Li and Dan Roth. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics, 2002.
+[28] Thang Luong, Richard Socher, and Christopher Manning. Better word representations with recursive neural networks for morphology. In Conference on Computational Natural Language Learning, pp. 104-113, 2013.
+[29] Elizabeth C Marin, Ruairí JV Roberts, Laurin Buld, Maria Theiss, Markus W Pleijzier, Tatevik Sarkissian, Willem J Laursen, Robert Gillies Turnbull, Philipp Schlegel, Alexander Shakeel Bates, et al. Connectomics analysis reveals first, second, and third order thermosensory and hygrosensory neurons in the adult drosophila brain. bioRxiv, 2020.
+[30] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
+[31] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems, volume 26, pp. 3111-3119. Curran Associates, Inc., 2013. URL https://proceedings.neurips.cc/paper/2013/file/9aa42b31882ec039965f3c4923ce901b-Paper.pdf.
+[32] Shar Narasimhan. Nvidia clocks world's fastest bert training time and largest transformer based model, paving path for advanced conversational ai. https://devblogs.nvidia.com/training-bert-with-gpus/, 2019.
+[33] Abhishek Panigrahi, Harsha Vardhan Simhadri, and Chiranjib Bhattacharyya. Word2sense: sparse interpretable word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 5692-5705, 2019.
+[34] Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In Empirical methods in natural language processing (EMNLP), pp. 1532-1543, 2014.
+[35] Mohammad Taher Pilehvar and Jose Camacho-Collados. Wic: the word-in-context dataset for evaluating context-sensitive meaning representations. arXiv preprint arXiv:1808.09121, 2018.
+
+[36] Mohammad Taher Pilehvar, Dimitri Kartsaklis, Victor Prokhorov, and Nigel Collier. Card-660: Cambridge rare word dataset-a reliable benchmark for infrequent word representation models. arXiv preprint arXiv:1808.09308, 2018.
+[37] Simon Preissner and Aurélie Herbelot. To be fair: a case for cognitively-inspired models of meaning. In CLiC-it, 2019.
+[38] Herbert Rubenstein and John B. Goodenough. Contextual correlates of synonymy. Communications of the ACM, 8(10):627-633, 1965.
+[39] Chaitanya K. Ryali, John J. Hopfield, Leopold Grinberg, and Dmitry Krotov. Bio-Inspired Hashing for Unsupervised Similarity Search. arXiv preprint arXiv:2001.04907, 2020.
+[40] Roy Schwartz, Roi Reichart, and Ari Rappoport. Symmetric pattern based word embeddings for improved word similarity prediction. In Proceedings of the nineteenth conference on computational natural language learning, pp. 258-267, 2015.
+[41] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pp. 1631-1642, 2013.
+[42] RR Sokal. A statistical method for evaluating systematic relationships. Univ. Kansas, Sci. Bull., 38:1409-1438, 1958.
+[43] Julien Tissier, Christophe Gravier, and Amaury Habrard. Near-lossless binarization of word embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 7104-7111, 2019.
+[44] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
+[45] Katrin Vogt, Yoshinori Aso, Toshihide Hige, Stephan Knapek, Toshiharu Ichinose, Anja B Friedrich, Glenn C Turner, Gerald M Rubin, and Hiromu Tanimoto. Direct neural pathways convey distinct visual information to drosophila mushroom bodies. Elife, 5:e14009, 2016.
+[46] Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral hashing. Advances in neural information processing systems, 21:1753-1760, 2008.
+[47] Zhihao Zheng, Feng Li, Corey Fisher, Iqbal J Ali, Nadiya Sharifi, Steven Calle-Schuler, Joseph Hsu, Najla Masoodpanah, Lucia Kmecova, Tom Kazimiers, et al. Structured sampling of olfactory input by the fly mushroom body. bioRxiv, 2020.
+
+# 6 APPENDIX A. RELATED WORK.
+
+Our work builds on several ideas previously discussed in the literature. The first idea is that the fruit fly olfactory network can generate high quality hash codes for input data, in both the random [8] and data-driven [39] cases. There are two algorithmic differences in our approach compared to these previous studies. First, our network uses representational contraction, rather than expansion, when going from the PN layer to the KC layer. Second, [8; 39] construct hash codes for data coming from a single modality (e.g., images, or word vectors), while the goal of the present paper is to learn correlations between two different "modalities": a target word and its context. The second idea pertains to the training algorithm for learning the PN $\rightarrow$ KC synapses. We use the biologically plausible algorithm of [25], with modifications that take into account the wide range of frequencies of different words in the training corpus (we discuss these differences in Section 2.1). Also, similarly to [8; 39], the algorithm of [25] learns representations of the data, and not correlations between two types of data (context and target) as we do in this paper.
+
+Another closely related work [37] uses the network of KCs with random weights for generating binary hash codes for individual words. There are several differences compared to our approach. First, in our system the synaptic weights from PNs to KCs are learned and not random. We have found that learning these weights improves the performance compared to the random case. Second, unlike [37] (and unlike fruit flies), in our system the number of KCs is smaller than the number of PNs, so there is no representational expansion as we move into the "mushroom body". This expansion is essential for the system of [37], which uses random weights. Finally, our algorithm uses a different encoding scheme at the level of PNs, see Fig. 2.
+
+# 7 APPENDIX B. TRAINING PROTOCOLS AND HYPERPARAMETER CHOICES.
+
+The fruit fly network was trained on the OpenWebText Corpus [15], a 32GB corpus of unstructured text containing approximately 6B tokens. Individual documents were concatenated and split into sentences. A collection of $w$-grams was extracted from each sentence by sliding a window of size $w$ along the sentence from beginning to end; sentences shorter than $w$ were removed. The vocabulary was composed of the $N_{\mathrm{voc}} = 20000$ most frequent tokens in the corpus.
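The preprocessing step amounts to a sliding window over tokenized sentences; in this sketch the handling of out-of-vocabulary tokens (mapping them to None) is an illustrative choice, not specified above:

```python
def extract_wgrams(sentences, vocab, w):
    """Slide a window of size w along each tokenized sentence,
    mapping tokens to vocabulary ids; out-of-vocabulary tokens
    map to None here (an illustrative choice)."""
    wgrams = []
    for sent in sentences:
        if len(sent) < w:
            continue                 # sentences shorter than w are removed
        for i in range(len(sent) - w + 1):
            wgrams.append([vocab.get(tok) for tok in sent[i:i + w]])
    return wgrams

vocab = {t: i for i, t in enumerate("the fly learns word embeddings".split())}
sents = [["the", "fly", "learns", "word", "embeddings"], ["too", "short"]]
grams = extract_wgrams(sents, vocab, w=3)
assert grams == [[0, 1, 2], [1, 2, 3], [2, 3, 4]]   # 5 - 3 + 1 windows
```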
+
+Training was done for $N_{\mathrm{epoch}}$ epochs. At each epoch all the $w$-grams were shuffled, organized into minibatches, and presented to the learning algorithm. The learning rate was linearly annealed, starting from the maximal value $\varepsilon_0$ at the first epoch down to nearly zero at the last epoch.
+
+The training algorithm has the following hyperparameters: the size of the KC layer $K$, the window size $w$, the number of training epochs $N_{\mathrm{epoch}}$, the initial learning rate $\varepsilon_0$, the minibatch size, and the hash length $k$. All models presented in this paper were trained for $N_{\mathrm{epoch}} = 15$. The optimal ranges of the hyperparameters are: learning rate $\varepsilon_0 \approx 10^{-4} - 5 \cdot 10^{-4}$; $K \approx 200 - 600$; $w \approx 9 - 15$; minibatch size $\approx 2000 - 15000$; the hash length $k$ is reported for each individual experiment.
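The linear annealing schedule can be sketched as follows; the concrete $\varepsilon_0$ value below is just one point from the reported range:

```python
def learning_rate(epoch, n_epochs=15, eps0=3e-4):
    """Linear annealing: eps0 at the first epoch, (nearly) zero at the last."""
    return eps0 * (1.0 - epoch / n_epochs)

rates = [learning_rate(e) for e in range(15)]
assert abs(rates[0] - 3e-4) < 1e-15                  # starts at eps0
assert 0 < rates[-1] < rates[0] / 10                 # ends near zero
assert all(a > b for a, b in zip(rates, rates[1:]))  # monotone decrease
```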
+
+# 8 APPENDIX C. COMPARISON WITH BINARIZED GLOVE AND WORD2VEC.
+
+Our aim here is to demonstrate that the fruit fly word embeddings are competitive with existing state-of-the-art binarization methods applied to GloVe and word2vec embeddings. We show this by evaluating the semantic similarity of static word embeddings on several common benchmark datasets: WS353 [13], MEN [4], RW [28], SimLex [21], RG-65 [38], and Mturk [18]. These datasets contain pairs of words with human-annotated similarity scores between them. Specifically, we compare with GloVe [34] embeddings trained on Wiki2014 and Gigaword 5, GloVe embeddings trained on the OpenWebText Corpus [15], and word2vec embeddings.
+
+Since our representations are binary (in contrast to GloVe and word2vec), we binarize GloVe and word2vec embeddings and report their performance using a number of common hashing methods: LSH/SimHash [7] (random contractive projections followed by binarization based on sign), RandExp [8] (random expansive projections followed by $k$-winner-take-all binarization), ITQ [16] (iterative quantization), SH [46] (spectral hashing), and PCAH [16] (PCA followed by binarization based on sign). Where available, we include evaluation from NLB, "Near-Lossless Binarization" [43] (autoencoder-based binarization).
+
+| Method | Hash Length (k) | Hash Length (k) |
+| 4 | 8 | 16 | 32 | 64 | 128 | 4 | 8 | 16 | 32 | 64 | 128 |
+| MEN (69.5/68.1) | WS353 (64.0/47.7) |
+| Ours | 34.0 | 49.9 | 55.9 | 56.7 | 55.3 | 51.3 | 43.2 | 52.1 | 55.3 | 57.4 | 60.3 | 51.7 |
+| LSH | 16.9 | 23.7 | 35.6 | 42.6 | 53.6 | 63.4 | 8.2 | 20.7 | 30.0 | 34.7 | 43.9 | 50.3 |
+| RandExp | 27.5 | 37.7 | 46.6 | 57.6 | 67.3 | 71.6 | 20.9 | 32.9 | 41.9 | 48.4 | 57.6 | 61.7 |
+| ITQ | 0.1 | 7.7 | 10.5 | 16.5 | 30.4 | 50.5 | -6.6 | -6.1 | -2.4 | -4.4 | 6.1 | 24.8 |
+| SH | 9.4 | 17.0 | 22.9 | 37.6 | 52.9 | 65.4 | 15.4 | 14.1 | 19.5 | 32.3 | 43.1 | 58.4 |
+| PCAH | 12.5 | 21.8 | 27.6 | 39.6 | 53.4 | 68.1 | 6.4 | 6.3 | 20.6 | 33.9 | 49.8 | 62.6 |
+| NLB | - | - | - | - | 46.1 | 63.3 | - | - | - | - | 30.1 | 44.9 |
+| SIMLEX (31.5/29.8) | RW (46.8/31.4) |
+| Ours | 13.4 | 16.5 | 22.8 | 22.1 | 21.1 | 17.0 | 11.0 | 22.6 | 25.8 | 36.9 | 38.6 | 35.2 |
+| LSH | 6.8 | 11.9 | 17.0 | 21.2 | 26.8 | 30.9 | 10.8 | 16.3 | 21.8 | 27.8 | 36.3 | 45.0 |
+| RandExp | 10.4 | 17.2 | 22.8 | 28.5 | 32.4 | 35.2 | 19.9 | 21.3 | 30.9 | 40.5 | 47.6 | 53.3 |
+| ITQ | 7.0 | 1.6 | 4.3 | 5.5 | 11.8 | 18.2 | 13.7 | 5.3 | 6.6 | 6.9 | 12.5 | 26.5 |
+| SH | 9.3 | 15.6 | 15.9 | 17.0 | 23.1 | 31.2 | 22.6 | 21.5 | 24.3 | 28.8 | 36.1 | 45.8 |
+| PCAH | 4.4 | 10.3 | 11.0 | 17.3 | 24.1 | 31.6 | 12.4 | 16.7 | 21.5 | 30.3 | 36.9 | 44.4 |
+| NLB | - | - | - | - | 20.5 | 31.4 | - | - | - | - | 25.1 | 34.3 |
+| RG (74.2/67.6) | Mturk (57.5/61.9) |
+| Ours | 24.0 | 40.4 | 51.3 | 62.3 | 63.2 | 55.8 | 44.0 | 49.0 | 52.2 | 60.1 | 57.7 | 55.2 |
+| LSH | 21.2 | 35.4 | 44.6 | 55.1 | 63.1 | 70.1 | 16.0 | 23.1 | 33.2 | 35.6 | 42.7 | 55.5 |
+| RandExp | 36.6 | 49.0 | 49.5 | 66.1 | 69.6 | 70.9 | 29.3 | 35.8 | 41.4 | 50.4 | 59.0 | 61.6 |
+| ITQ | -17.5 | -8.9 | 26.3 | 41.7 | 50.5 | 66.2 | 9.9 | 7.8 | 10.1 | 17.7 | 32.8 | 47.3 |
+| SH | 4.5 | 5.8 | 20.3 | 42.9 | 61.3 | 72.6 | 18.9 | 17.6 | 27.5 | 35.45 | 48.1 | 57.9 |
+| PCAH | 1.9 | 9.6 | 19.8 | 40.9 | 53.3 | 68.2 | 15.5 | 15.1 | 27.1 | 41.7 | 46.5 | 56.2 |
+
+Table 7: Evaluation on word similarity datasets. For each dataset and hash length, the best (second best) score is in **bold (underlined)**. The performance for GloVe embeddings is reported next to the name of each dataset in the format 300d/100d. Spearman's rank correlation coefficient is reported for common baselines that binarize GloVe (300d) embeddings together with our results. Hyperparameter settings for our algorithm: $K = 400$, $w = 11$.
+
+Following previous work [43; 42], the model similarity score for binary representations is computed as $sim(v_1, v_2) = (n_{11} + n_{00}) / n$, where $n_{11}$ ($n_{00}$) is the number of positions at which $v_1$ and $v_2$ are both 1 (both 0), and $n$ is the length of the vectors. Cosine similarity is used for real-valued representations. The results are reported in Tables 7, 8 and 9. For each dataset, we report performance across a range of hash lengths $\{4, 8, 16, 32, 64, 128\}$. For methods that incorporate randomness (LSH, RandExp, ITQ), we report the average across 5 runs. ITQ, SH and PCAH in Tables 7 and 8 were trained using the top 400k most frequent words. Table 9 compares our method to GloVe trained on OpenWebText (the same dataset our method is trained on), using the same vocabulary as our method.
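This similarity measure (the fraction of matching bits, i.e., the simple matching coefficient of [42]) is straightforward to compute:

```python
import numpy as np

def binary_sim(v1, v2):
    """sim(v1, v2) = (n11 + n00) / n for 0/1 vectors, where n11 (n00)
    counts positions at which both bits are 1 (0)."""
    v1, v2 = np.asarray(v1), np.asarray(v2)
    n11 = int(np.sum((v1 == 1) & (v2 == 1)))
    n00 = int(np.sum((v1 == 0) & (v2 == 0)))
    return (n11 + n00) / v1.size

assert binary_sim([1, 0, 1, 1], [1, 0, 1, 1]) == 1.0    # identical codes
assert binary_sim([1, 0, 1, 0], [0, 1, 0, 1]) == 0.0    # complementary codes
assert binary_sim([1, 1, 0, 0], [1, 0, 0, 0]) == 0.75   # 3 of 4 bits agree
```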
+
+Our binary word embeddings demonstrate performance competitive with published methods for GloVe and word2vec binarization, and our algorithm learns meaningful binary semantic representations directly from raw text. Importantly, it does not require training GloVe or word2vec embeddings first and binarizing them afterwards.
+
+# 9 APPENDIX D. DETAILS OF TECHNICAL IMPLEMENTATION.
+
+From the practical perspective, an efficient implementation of the learning algorithm for the fruit fly network requires the use of sparse algebra, atomic updates, and block-sparse data access. Our algorithm is implemented with a CUDA back-end, with Python serving as the interface to the main functions.
+
+The typical memory footprint of our approach is very small. About $100 - 270$ MB of GPU memory is allocated for the operators $W_{\mu i}$, $\mathbf{v}^{\mathbf{A}}$ and temporary fields, while approximately 140GB of CPU memory is needed to store the input data, the array of random numbers for shuffle operations, and the shuffled indices. For the GPU implementation, the model data is stored in the GPU's memory, while the input data
+
+| Method | Hash Length (k) | Hash Length (k) |
+| 4 | 8 | 16 | 32 | 64 | 128 | 4 | 8 | 16 | 32 | 64 | 128 |
+| MEN (75.5) | WS353 (66.5) |
+| Ours | 34.0 | 49.9 | 55.9 | 56.7 | 55.3 | 51.3 | 43.2 | 52.1 | 55.3 | 57.4 | 60.3 | 51.7 |
+| LSH | 35.5 | 42.5 | 53.6 | 63.4 | 68.4 | 72.2 | 26.0 | 34.7 | 43.9 | 50.3 | 56.0 | 58.6 |
+| RandExp | 24.2 | 34.6 | 45.8 | 57.5 | 66.1 | 71.7 | 23.5 | 34.3 | 37.3 | 48.0 | 57.6 | 63.7 |
+| ITQ | 9.2 | 13.3 | 25.1 | 41.5 | 57.6 | 68.5 | 16.0 | 18.1 | 22.5 | 30.2 | 43.9 | 54.8 |
+| SH | 7.2 | 15.8 | 31.3 | 46.9 | 62.3 | 69.4 | 3.3 | 9.6 | 22.7 | 34.1 | 50.0 | 54.7 |
+| PCAH | 5.3 | 18.6 | 37.7 | 52.0 | 63.9 | 71.6 | 17.3 | 24.9 | 38.5 | 42.0 | 52.1 | 59.3 |
+| SIMLEX (41.7) | RW (61.3) |
+| Ours | 13.4 | 16.5 | 22.8 | 22.1 | 21.1 | 17.0 | 11.0 | 22.6 | 25.8 | 36.9 | 38.6 | 35.2 |
+| LSH | 17.0 | 21.2 | 26.8 | 30.9 | 34.4 | 35.1 | 21.8 | 27.8 | 36.3 | 45.0 | 49.6 | 52.1 |
+| RandExp | 17.6 | 24.4 | 29.2 | 32.6 | 38.0 | 39.8 | 24.7 | 27.7 | 39.8 | 46.8 | 52.3 | 55.6 |
+| ITQ | 3.25 | 5.7 | 6.2 | 14.9 | 23.1 | 31.5 | 17.4 | 15.7 | 19.1 | 33.5 | 45.6 | 53.4 |
+| SH | -3.6 | 3.6 | 10.4 | 17.0 | 23.7 | 32.4 | 14.6 | 22.8 | 28.7 | 37.9 | 43.5 | 52.4 |
+| PCAH | -2.9 | 2.5 | 11.8 | 17.0 | 24.0 | 36.0 | 15.0 | 21.5 | 28.8 | 35.4 | 46.4 | 50.6 |
+| RG (75.4) | Mturk (69.8) |
+| Ours | 24.0 | 40.4 | 51.3 | 62.3 | 63.2 | 55.8 | 44.0 | 49.0 | 52.2 | 60.1 | 57.7 | 55.2 |
+| LSH | 44.6 | 55.1 | 63.1 | 70.1 | 76.4 | 75.8 | 33.1 | 35.6 | 42.7 | 55.5 | 58.6 | 62.4 |
+| RandExp | 30.4 | 42.0 | 48.6 | 59.1 | 70.2 | 74.6 | 22.7 | 34.8 | 42.0 | 45.9 | 57.9 | 61.2 |
+| ITQ | 32.8 | 49.7 | 31.5 | 55.9 | 62.2 | 71.6 | 22.5 | 21.3 | 42.3 | 46.9 | 59.3 | 60.7 |
+| SH | 18.0 | 30.6 | 36.0 | 48.8 | 56.9 | 75.8 | 21.9 | 27.4 | 41.8 | 51.2 | 58.8 | 58.0 |
+| PCAH | 20.8 | 22.9 | 40.6 | 36.5 | 59.0 | 71.2 | 23.6 | 34.4 | 45.5 | 55.7 | 64.2 | 60.5 |
+
+Table 8: Evaluation on word similarity datasets, analogous to Table 7, for 300d word2vec embeddings.
+
+| Method | Hash Length (k) | Hash Length (k) |
+| 4 | 8 | 16 | 32 | 64 | 128 | 4 | 8 | 16 | 32 | 64 | 128 |
+| MEN (76.4) | WS353 (72.2) |
+| Ours | 34.0 | 49.9 | 55.9 | 56.7 | 55.3 | 51.3 | 43.2 | 52.1 | 55.3 | 57.4 | 60.3 | 51.7 |
+| LSH | 23.6 | 29.1 | 37.4 | 49.6 | 60.6 | 67.0 | 20.2 | 29.0 | 35.5 | 47.5 | 53.3 | 61.4 |
+| RandExp | 28.4 | 40.3 | 52.3 | 62.5 | 67.7 | 71.0 | 30.5 | 40.0 | 48.1 | 57.9 | 63.3 | 67.5 |
+| ITQ | 26.9 | 33.9 | 46.3 | 56.1 | 64.1 | 70.3 | 25.9 | 33.7 | 44.5 | 56.1 | 63.9 | 67.6 |
+| SH | 23.8 | 28.7 | 44.1 | 54.7 | 62.1 | 69.7 | 18.1 | 25.7 | 40.1 | 51.8 | 60.9 | 62.9 |
+| PCAH | 26.0 | 30.1 | 46.3 | 57.9 | 67.5 | 72.4 | 21.2 | 30.5 | 43.8 | 50.7 | 61.1 | 59.9 |
+| SIMLEX (34.0) | RW (54.5) |
+| Ours | 13.4 | 16.5 | 22.8 | 22.1 | 21.1 | 17.0 | 11.0 | 22.6 | 25.8 | 36.9 | 38.6 | 35.2 |
+| LSH | 8.0 | 16.8 | 19.0 | 24.8 | 26.7 | 32.9 | 16.2 | 21.0 | 26.1 | 33.6 | 40.8 | 47.0 |
+| RandExp | 10.1 | 17.3 | 23.4 | 26.6 | 29.7 | 31.3 | 22.0 | 28.8 | 34.1 | 43.9 | 46.3 | 51.5 |
+| ITQ | 7.3 | 13.8 | 14.4 | 20.9 | 25.3 | 30.3 | 24.5 | 26.8 | 34.8 | 43.2 | 49.1 | 51.5 |
+| SH | 12.1 | 14.2 | 17.5 | 20.0 | 26.4 | 36.0 | 19.7 | 24.8 | 32.9 | 38.7 | 45.4 | 46.7 |
+| PCAH | 11.5 | 13.8 | 16.4 | 22.6 | 31.1 | 38.6 | 19.7 | 24.8 | 32.9 | 38.7 | 45.4 | 46.7 |
+| RG (78.7) | Mturk (71.1) |
+| Ours | 24.0 | 40.4 | 51.3 | 62.3 | 63.2 | 55.8 | 44.0 | 49.0 | 52.2 | 60.1 | 57.7 | 55.2 |
+| LSH | 25.5 | 24.9 | 34.6 | 62.1 | 61.8 | 73.5 | 18.3 | 31.3 | 31.4 | 42.9 | 56.5 | 60.7 |
+| RandExp | 28.7 | 45.6 | 47.3 | 63.7 | 67.8 | 70.8 | 28.3 | 41.3 | 50.1 | 56.5 | 65.4 | 67.1 |
+| ITQ | 21.4 | 32.7 | 50.4 | 57.7 | 67.6 | 70.3 | 26.3 | 41.4 | 53.2 | 61.2 | 67.1 | 68.9 |
+| SH | 39.8 | 45.6 | 50.0 | 50.2 | 62.3 | 68.6 | 20.3 | 35.9 | 51.9 | 61.9 | 59.1 | 61.3 |
+| PCAH | 45.0 | 50.0 | 49.2 | 46.8 | 66.6 | 69.8 | 24.9 | 40.7 | 55.7 | 64.3 | 64.4 | 60.5 |
+
+Table 9: Evaluation on word similarity datasets, analogous to Table 7, for 300d GloVe embeddings trained from scratch on the same OpenWebText dataset as our algorithm.
+
+is stored in the CPU memory. The parallelization strategy in our implementation is based on two aspects. First, each minibatch of data is divided into smaller sub-minibatches which are processed on different GPUs. Second, all the operations (dense-sparse matrix multiplications, arg max operation, and weight updates) are executed in parallel using multiple threads.
+
+# 10 APPENDIX E. QUALITATIVE EVALUATION OF CONTEXTUAL EMBEDDINGS.
+
+To evaluate the quality of the contextualized embeddings, we created an online tool, which we plan to release with the paper, that allows users to explore the representations learned by our model for various inputs (context-target pairs). For a given query, the tool returns word cloud visualizations for each of the four most activated Kenyon cells. We show some examples of the outputs produced by this tool in Fig. 6. Each query is used to generate a bag-of-words input vector $\mathbf{v}^{\mathbf{A}}$ . This vector is then used to compute the activations of KCs via $\left\langle \mathbf{W}_{\mu}, \mathbf{v}^{\mathbf{A}} \right\rangle$ , and the four KCs with the highest activations are selected. The corresponding four weight vectors are passed through a softmax function to generate, for each selected KC, a probability distribution over individual words; for the vector with index $\mu$ , the probability distribution is computed as $\mathrm{prob}_i = \mathrm{SM}(W_{\mu i})$ . These probability distributions for the top four activated KCs are visualized as word clouds. In computing the softmax, only the target block of the weight vector was used (we checked that using only the context block gives qualitatively similar word clouds).
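The selection of the top activated KCs and the conversion of their weights into word probabilities can be sketched as follows (a minimal NumPy sketch; the shapes are illustrative, and for simplicity the softmax here runs over the full weight row rather than only the target block):

```python
import numpy as np

def top_kc_word_distributions(W, v, k=4):
    """W: (num_KCs, vocab) weight matrix; v: bag-of-words input vector.
    Returns the indices of the k most activated KCs and one softmax
    word distribution per selected KC."""
    activations = W @ v                       # <W_mu, v^A> for every KC
    top = np.argsort(activations)[::-1][:k]   # k highest activations
    probs = []
    for mu in top:
        w = W[mu]
        e = np.exp(w - w.max())               # numerically stable softmax
        probs.append(e / e.sum())             # prob_i = SM(W_mu_i)
    return top, probs

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 6))                  # 10 hypothetical KCs, vocab of 6
v = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0]) # bag-of-words query vector
top, probs = top_kc_word_distributions(W, v)
```

The largest entries of each returned distribution are the words that would dominate that KC's word cloud.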
+
+# Query: Entertainment industry shares rise following the premiere of the mass destruction weapon documentary
+
+
+# Query: European Court of Human Rights most compelling cases
+
+
+# Query: Senate majority leader discussed the issue with the members of the committee
+
+
+Figure 6: Examples of three queries and corresponding word cloud visualization for top four activated KCs (by each query).
+
+The results indicate that the fruit fly network has indeed learned meaningful representations. Consider, for example, the first query. The sentence "Entertainment industry shares rise following the premiere of the mass destruction weapon documentary" results in the four top activated KCs shown in Fig. 6. The top activated KC has the largest weights for the words "weapon", "mass", etc. The second activated KC is sensitive to the words "market", "stock", etc. This illustrates how the fruit fly network processes queries. In this example the query refers to several distinct combinations of concepts: "weapon of mass destruction", "stock market", "movie industry". Each of those concepts has a dedicated KC responsible for it. As one can see, the responses are not perfect. For example, in this case one would expect the fourth most activated KC, which is responsible for the "movie industry" concept, to have a higher activation than the third, which is responsible for the types of "weapons of mass destruction". But overall, all the concepts picked out by the KCs are meaningful and related to the query.
+
+# 11 APPENDIX F. DETAILS OF GLOVE RETRAINING
+
+To directly compare our method to GloVe, we trained a GloVe model from scratch on the same OpenWebText corpus using the code provided by the original GloVe authors [34] $^4$ . This model was configured to have the same vocabulary size as our model (the 20k most frequent tokens), an embedding size of 300, and a window size of 15. The model was trained for 180 iterations at about 3 minutes 20 seconds per iteration on 16 threads, resulting in a total training time of approximately 10 hours.
+
+# 12 ACKNOWLEDGEMENTS
+
+We are thankful to L. Amini, S. Chang, D. Cox, J. Hopfield, Y. Kim, and H. Strobelt for helpful discussions. This work was supported by the Rensselaer-IBM AI Research Collaboration (http://airc.rpi.edu), part of the IBM AI Horizons Network (http://ibm.biz/AIHorizons).
\ No newline at end of file
diff --git a/canafruitflylearnwordembeddings/images.zip b/canafruitflylearnwordembeddings/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..864049020e86498c9c26879f26bc766b4d585400
--- /dev/null
+++ b/canafruitflylearnwordembeddings/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e178af25c7ac887379f869b805fa6290f08e48b22e318eda6c70fbfb72f8d30
+size 940582
diff --git a/canafruitflylearnwordembeddings/layout.json b/canafruitflylearnwordembeddings/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8806432dc5e4394a79cad13d788845838f9a6d90
--- /dev/null
+++ b/canafruitflylearnwordembeddings/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb29d982e12c29854f9b9cdcd1f2a2276dfa5cf80165049ef8cbdd592248b4b1
+size 536168
diff --git a/capclearningconfidentialandprivatecollaborativelearning/5dfc6bfa-b02c-4c1e-8f87-343a829d6b7d_content_list.json b/capclearningconfidentialandprivatecollaborativelearning/5dfc6bfa-b02c-4c1e-8f87-343a829d6b7d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..177bbd43050964cf5431a1ee2c0736e880260886
--- /dev/null
+++ b/capclearningconfidentialandprivatecollaborativelearning/5dfc6bfa-b02c-4c1e-8f87-343a829d6b7d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6aac90de259a2f61f7fd861470522c0ba925a717e4b81156be4a98b92d526493
+size 130746
diff --git a/capclearningconfidentialandprivatecollaborativelearning/5dfc6bfa-b02c-4c1e-8f87-343a829d6b7d_model.json b/capclearningconfidentialandprivatecollaborativelearning/5dfc6bfa-b02c-4c1e-8f87-343a829d6b7d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ceb1a63547fcdb66478b3aa2849e87454d1f79ce
--- /dev/null
+++ b/capclearningconfidentialandprivatecollaborativelearning/5dfc6bfa-b02c-4c1e-8f87-343a829d6b7d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f1591e8d6a0caa64143cd2408466b5af43a948f83e70225b9ce1c535486504a1
+size 158342
diff --git a/capclearningconfidentialandprivatecollaborativelearning/5dfc6bfa-b02c-4c1e-8f87-343a829d6b7d_origin.pdf b/capclearningconfidentialandprivatecollaborativelearning/5dfc6bfa-b02c-4c1e-8f87-343a829d6b7d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..de264ee724f310997a072e23e5b81158ceb59afd
--- /dev/null
+++ b/capclearningconfidentialandprivatecollaborativelearning/5dfc6bfa-b02c-4c1e-8f87-343a829d6b7d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:921830379141874300da8ceeb8e833d0b742bc18895e03f257dd82a6bc3c45a6
+size 2339603
diff --git a/capclearningconfidentialandprivatecollaborativelearning/full.md b/capclearningconfidentialandprivatecollaborativelearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c734a50e004da4ed9e49f8b06a1705da7a288db3
--- /dev/null
+++ b/capclearningconfidentialandprivatecollaborativelearning/full.md
@@ -0,0 +1,513 @@
+# CAPC LEARNING: CONFIDENTIAL AND PRIVATE COLLABORATIVE LEARNING
+
+Christopher A. Choquette-Choo*, Natalie Dullerud* Adam Dziedzic*
+
+University of Toronto and Vector Institute
+
+{christopher.choquette.choo,natalie.dullerud}@mail.utoronto.ca
+ady@vectorinstitute.ai
+
+Yunxiang Zhang*†
+
+The Chinese University of Hong Kong
+
+yunxiang.zhang@ie.cuhk.edu.hk
+
+Somesh Jha†
+
+University of Wisconsin-Madison and XaiPient
+
+jha@cs.wisc.edu
+
+Nicolas Papernot
+
+University of Toronto and Vector Institute
+
+nicolas.papernot@utoronto.ca
+
+Xiao Wang†
+
+Northwestern University
+
+wangxiao@cs.northwestern.edu
+
+# ABSTRACT
+
+Machine learning benefits from large training datasets, which may not always be possible to collect by any single entity, especially when using privacy-sensitive data. In many contexts, such as healthcare and finance, separate parties may wish to collaborate and learn from each other's data but are prevented from doing so due to privacy regulations. Some regulations prevent explicit sharing of data between parties by joining datasets in a central location (confidentiality). Others also limit implicit sharing of data, e.g., through model predictions (privacy). There is currently no method that enables machine learning in such a setting, where both confidentiality and privacy need to be preserved, to prevent both explicit and implicit sharing of data. Federated learning only provides confidentiality, not privacy, since gradients shared still contain private information. Differentially private learning assumes unreasonably large datasets. Furthermore, both of these learning paradigms produce a central model whose architecture was previously agreed upon by all parties rather than enabling collaborative learning where each party learns and improves their own local model. We introduce Confidential and Private Collaborative (CaPC) learning, the first method provably achieving both confidentiality and privacy in a collaborative setting. We leverage secure multi-party computation (MPC), homomorphic encryption (HE), and other techniques in combination with privately aggregated teacher models. We demonstrate how CaPC allows participants to collaborate without having to explicitly join their training sets or train a central model. Each party is able to improve the accuracy and fairness of their model, even in settings where each party has a model that performs well on their own dataset or when datasets are not IID and model architectures are heterogeneous across parties.1
+
+# 1 INTRODUCTION
+
+The predictions of machine learning (ML) systems often reveal private information contained in their training data (Shokri et al., 2017; Carlini et al., 2019) or test inputs. Because of these limitations, legislation increasingly regulates the use of personal data (Mantelero, 2013). The relevant ethical
+
+
+Figure 1: Confidential and Private Collaborative (CaPC) Learning Protocol: 1a Querying party $\mathcal{P}_{i*}$ sends encrypted query $q$ to each answering party $\mathcal{P}_i$ , $i\neq i_*$ . Each $\mathcal{P}_i$ engages in a secure 2-party computation protocol to evaluate Enc(q) on $\mathcal{M}_i$ and outputs encrypted logits Enc( $r_i$ ). 1b Each answering party, $\mathcal{P}_i$ , generates a random vector $\hat{r}_i$ , and sends Enc( $r_i - \hat{r}_i$ ) to the querying party, $\mathcal{P}_{i*}$ , who decrypts to get $r_i - \hat{r}_i$ . 1c Each answering party $\mathcal{P}_i$ runs Yao's garbled circuit protocol ( $Y_i$ ) with querying party $\mathcal{P}_{i*}$ to get $s_i$ for $\mathcal{P}_{i*}$ and $\hat{s}_i$ for $\mathcal{P}_i$ s.t. $s_i + \hat{s}_i$ is the one-hot encoding of argmax of logits. 2 Each answering party sends $\hat{s}_i$ to the privacy guardian (PG). The PG sums $\hat{s}_i$ from each $\mathcal{P}_i$ and adds Laplacian or Gaussian noise for DP. The querying party sums $s_i$ from each $Y_i$ computation. 3 The PG and the querying party run Yao's garbled circuit $Y_s$ to obtain argmax of querying party and PG's noisy share. The label is output to the querying party.
+
+concerns prompted researchers to invent ML algorithms that protect the privacy of training data and confidentiality of test inputs (Abadi et al., 2016; Konečný et al., 2016; Juvekar et al., 2018).
+
+Yet, these algorithms require a large dataset stored either in a single location or distributed amongst billions of participants. This is the case for example with federated learning (McMahan et al., 2017). Prior algorithms also assume that all parties are collectively training a single model with a fixed architecture. These requirements are often too restrictive in practice. For instance, a hospital may want to improve a medical diagnosis for a patient using data and models from other hospitals. In this case, the data is stored in multiple locations, and there are only a few parties collaborating. Further, each party may also want to train models with different architectures that best serve their own priorities.
+
+We propose a new strategy that lets fewer heterogeneous parties learn from each other collaboratively, enabling each party to improve their own local models while protecting the confidentiality and privacy of their data. We call this Confidential and Private Collaborative (CaPC) learning.
+
+Our strategy improves on confidential inference (Boemer, 2020) and PATE, the private aggregation of teacher ensembles (Papernot et al., 2017). Through structured applications of these two techniques, we design a strategy for inference that enables participants to operate an ensemble of heterogeneous models, i.e., the teachers, without having to explicitly join each party's data or teacher model at a single location. This also gives each party control at inference, because inference requires the agreement and participation of each party. In addition, our strategy provides measurable confidentiality and privacy guarantees, which we formally prove. We use the running example of a network of hospitals to illustrate our approach. The hospitals participating in the CaPC protocol need guarantees on both confidentiality (i.e., data from a hospital can only be read by said hospital) and privacy (i.e., no hospital can infer private information about other hospitals' data by observing their predictions).
+
+First, one hospital queries all the other parties over homomorphic encryption (HE), asking them to label an encrypted input using their own teacher models. This can prevent the other hospitals from reading the input (Boemer et al., 2019), an improvement over PATE, and allows the answering hospitals to provide a prediction to the querying hospital without sharing their teacher models.
+
+The answering hospitals use multi-party computation (MPC) to compute an aggregated label, and add noise during the aggregation to obtain differential privacy guarantees (Dwork et al., 2014). This is achieved by a privacy guardian (PG), which then relays the aggregated label to the querying hospital. The PG only needs to be semi-trusted: we operate under the honest-but-curious assumption. The use of MPC ensures that the PG cannot decipher each teacher model's individual prediction, and the noise added via noisy argmax mechanism gives differential privacy even when there are few participants.
+
+This is a significant advantage over prior decentralized approaches like federated learning, which require billions of participants to achieve differential privacy, because the sensitivity of the histogram used in our aggregation is lower than that of the gradients aggregated in federated learning. Unlike our approach, prior efforts involving few participants thus had to prioritize model utility over privacy and only guarantee confidentiality (Sheller et al., 2020).
+
+Finally, the querying hospital can learn from this confidential and private label to improve their local model. Since the shared information is a label rather than a gradient, as used by federated learning, CaPC participants do not need to share a common model architecture; in fact, their architectures can vary throughout the participation in the protocol. This favors model development to a degree which is not possible in prior efforts such as federated learning.
+
+We show how participants can instantiate various forms of active and online learning with the labels returned by our protocol: each party participating in the CaPC protocol may (a) identify deficiencies of its model throughout its deployment and (b) finetune the model with labels obtained by interacting with other parties. Intuitively, we achieve the analog of a doctor querying colleagues for a second opinion on a difficult diagnostic, without having to reveal the patient's medical condition. This protocol leads to improvements in both the accuracy and fairness (when there is a skew in the data distribution of each participating hospital) of model predictions for each of the CaPC participants.
+
+To summarize, our contributions are the following:
+
+- We introduce CaPC learning: a confidential and private collaborative learning platform that provides both confidentiality and privacy while remaining agnostic to ML techniques.
+- Through a structured application of homomorphic encryption, secure MPC, and private aggregation, we design a protocol for CaPC. We use two-party deep learning inference and design an implementation of the noisy argmax mechanism with garbled circuits.
+- Our experiments on SVHN and CIFAR10 demonstrate that CaPC enables participants to collaborate and improve the utility of their models, even in the heterogeneous setting where the architectures of their local models differ, and when there are only a few participants.
+- Further, when the distribution of data drifts across participating parties, we show that CaPC significantly improves fairness metrics because querying parties benefit from knowledge learned by other parties on different data distributions, which is distilled in their predictions.
+- We release the source code for reproducing all our experiments.
+
+# 2 BACKGROUND
+
+Before introducing CaPC, we first go over elements of cryptography and differential privacy that are required to understand it. Detailed treatment of these topics can be found in Appendices A and B.
+
+# 2.1 CRYPTOGRAPHIC PRELIMINARIES FOR CONFIDENTIALITY
+
+The main cryptographic tool used in CaPC is secure multi-party computation (MPC) (Yao, 1986). MPC allows a set of distrusting parties to jointly evaluate a function on their inputs without revealing anything beyond the output. In general, most practical MPC protocols can be classified into two categories: 1) generic MPC protocols that can compute any function with the above security goal (Malkhi et al., 2004); and 2) specialized MPC protocols that can be used to compute only selected functions (e.g., private set intersection (Pinkas et al., 2020), secure machine learning (Mohassel & Zhang, 2017)). Although specialized MPC protocols are less general, they are often more efficient in execution time. Protocols in both categories use similar cryptographic building blocks, including (fully) homomorphic encryption (Gentry, 2009), secret sharing (Shamir, 1979), oblivious transfer (Rabin, 2005), and garbled circuits (Yao, 1986). To understand our protocol, it is not necessary to know all the details of these cryptographic building blocks, so we describe them in Appendix A.1. Our work uses these cryptographic preliminaries for secure computation at prediction time, unlike recent approaches that explore new methods for achieving confidentiality at training time (Huang et al., 2020a;b).
+
+The cryptographic protocol designed in this paper uses a specialized MPC protocol for securely evaluating a private ML model on private data, and a generic two-party computation protocol to compute an argmax in different forms. For the generic two-party computation, we use a classical Yao's garbled-circuit protocol that can compute any function represented as a Boolean circuit. For secure classification of neural networks, our protocol design is flexible enough to work with most existing protocols (Boemer et al., 2020; 2019; Gilad-Bachrach et al., 2016; Mishra et al., 2020). Existing protocols differ mainly in how they handle linear layers (e.g., convolution) and non-linear layers (e.g., ReLU). For instance, one can perform all computations using a fully homomorphic encryption scheme, resulting in low communication but very high computation, or using classical MPC techniques with more communication but less computation. Other works (Juvekar et al., 2018) use a hybrid of both and thus enjoy further performance improvements (Mishra et al., 2020). We discuss this in more detail in Appendix A.2.
+
+# 2.2 DIFFERENTIAL PRIVACY
+
+Differential privacy is the established framework for measuring the privacy leakage of a randomized algorithm (Dwork et al., 2006). In the context of machine learning, it requires the training algorithm to produce statistically indistinguishable outputs on any pair of datasets that only differ by one data point. This implies that an adversary observing the outputs of the training algorithm (e.g., the model's parameters, or its predictions) can improve its guess at most by a bounded probability when inferring properties of the training data points. Formally, we have the following definition.
+
+Definition 1 (Differential Privacy). A randomized mechanism $\mathcal{M}$ with domain $\mathcal{D}$ and range $\mathcal{R}$ satisfies $(\varepsilon, \delta)$ -differential privacy if for any subset $\mathcal{S} \subseteq \mathcal{R}$ and any adjacent datasets $d, d' \in \mathcal{D}$ , i.e. $\|d - d'\|_1 \leq 1$ , the following inequality holds:
+
+$$
+\Pr[\mathcal{M}(d) \in \mathcal{S}] \leq e^{\varepsilon} \Pr[\mathcal{M}(d') \in \mathcal{S}] + \delta \tag{1}
+$$
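A counting query with the Laplace mechanism gives a concrete instance of Definition 1: adding Laplace noise of scale $1/\varepsilon$ to a sensitivity-1 count bounds the output density ratio between adjacent datasets by $e^{\varepsilon}$. A small numerical check (all values illustrative):

```python
import math

def laplace_pdf(x, mu, b):
    """Density of the Laplace distribution centered at mu with scale b."""
    return math.exp(-abs(x - mu) / b) / (2 * b)

eps = 0.5
b = 1.0 / eps                    # noise scale for a sensitivity-1 query
count_d, count_d_prime = 42, 43  # counts on adjacent datasets (differ by 1)

# For every output x, the density ratio stays within e^eps, as in Eq. (1)
# with delta = 0.
for x in [40.0, 42.0, 45.0, 100.0]:
    ratio = laplace_pdf(x, count_d, b) / laplace_pdf(x, count_d_prime, b)
    assert ratio <= math.exp(eps) + 1e-12
```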
+
+In our work, we obtain differential privacy by post-processing the outputs of an ensemble of models with the noisy argmax mechanism of Dwork et al. (2014) (for more details on differential privacy, please refer to Appendix B), à la PATE (Papernot et al., 2017). We apply the improved analysis of PATE (Papernot et al., 2018) to compute the privacy guarantees obtained (i.e., a bound on $\varepsilon$ ). Our technique differs from PATE in that each of the teacher models is trained by different parties whereas PATE assumes a centralized learning setting where all of the training and inference is performed by a single party. Note that our technique is used at inference time, which differs from recent works in differential privacy that compare neuron pruning during training with mechanisms satisfying differential privacy (Huang et al., 2020c). We use cryptography to securely decentralize computations.
+
+# 3 THE CAPC PROTOCOL
+
+We now introduce our protocol for achieving both confidentiality and privacy in collaborative (CaPC) learning. To do so, we formalize and generalize our example of collaborating hospitals from Section 1.
+
+# 3.1 PROBLEM DESCRIPTION
+
+A small number of parties $\{\mathcal{P}_i\}_{i\in [1,K]}$ , each holding a private dataset $\mathcal{D}_i = \{(x_j, y_j \text{ or } \varnothing)\}_{j\in [1,N_i]}$ and capable of fitting a predictive model $\mathcal{M}_i$ to it, wish to improve the utility of their individual models via collaboration. Due to the private nature of the datasets in question, they cannot directly share data or by-products of data (e.g., model weights) with each other. Instead, they will collaborate by querying each other for labels of the inputs about which they are uncertain. In the active learning paradigm, one party $\mathcal{P}_{i*}$ poses queries in the form of data samples $x$ and all the other parties $\{\mathcal{P}_i\}_{i\neq i_*}$ together provide answers in the form of predicted labels $\hat{y}$ . Each model $\{\mathcal{M}_i\}_{i\in [1,K]}$ can be exploited in both the querying phase and the answering phase, with the querying party alternating between different participants $\{\mathcal{P}_i\}_{i\in [1,K]}$ in the protocol.
+
+Threat Model. To obtain the strong confidentiality and privacy guarantees that we described, we require a semi-trusted third party called the privacy guardian (PG). We assume that the PG does not collude with any party and that the adversary can corrupt any subset of $C$ parties $\{\mathcal{P}_i\}_{i \in [1,C]}$ . When more than one party gets corrupted, this has no impact on the confidentiality guarantee, but the privacy budget obtained $\epsilon$ will degrade by a factor proportional to $C$ because the sensitivity of the aggregation mechanism increases (see Section 3.3). We work in the honest-but-curious setting, a commonly adopted assumption in cryptography which requires the adversary to follow the protocol description correctly but will try to infer information from the protocol transcript.
+
+# 3.2 CAPC PROTOCOL DESCRIPTION
+
+Our protocol introduces a novel formulation of the private aggregation of teachers, which implements two-party confidential inference and secret sharing to improve upon the work of Papernot et al. (2017) and guarantee confidentiality. Recall that the querying party $P_{i_*}$ initiates the protocol by sending an encrypted input $x$ to all answering parties $\mathcal{P}_i$ , $i \neq i_*$ . We use $sk$ and $pk$ to denote the secret and public keys owned by party $\mathcal{P}_{i_*}$ . The proposed protocol consists of the following steps:
+
+1. For each $i \neq i_{*}$ , $\mathcal{P}_i$ (with model parameters $\mathcal{M}_i$ as its input) and $\mathcal{P}_{i*}$ (with $x, sk, pk$ as its input) run a secure two-party protocol. As the outcome, $\mathcal{P}_i$ obtains $\hat{s}_i$ and $\mathcal{P}_{i*}$ obtains $\boldsymbol{s}_i$ such that $\boldsymbol{s}_i + \hat{\boldsymbol{s}}_i = \mathrm{OneHot}(\arg \max (\boldsymbol{r}_i))$ where $\boldsymbol{r}_i$ are the predicted logits.
+
+This step could be achieved by the following:
+
+a) $\mathcal{P}_{i_*}$ and $\mathcal{P}_i$ run a secure two-party ML classification protocol such that $\mathcal{P}_{i_*}$ learns nothing while $\mathcal{P}_i$ learns $\mathsf{Enc}_{pk}(\boldsymbol {r}_i)$ , where $\boldsymbol {r}_i$ are the predicted logits.
+b) $\mathcal{P}_i$ generates a random vector $\hat{\boldsymbol{r}}_i$ , performs the following computation on the encrypted data $\mathsf{Enc}_{pk}(\boldsymbol{r}_i) - \mathsf{Enc}_{pk}(\hat{\boldsymbol{r}}_i) = \mathsf{Enc}_{pk}(\boldsymbol{r}_i - \hat{\boldsymbol{r}}_i)$ , and sends the encrypted difference to $\mathcal{P}_{i*}$ , who decrypts and obtains $(\boldsymbol{r}_i - \hat{\boldsymbol{r}}_i)$ .
+c) $\mathcal{P}_i$ (with $\hat{\boldsymbol{r}}_i$ as input) and $\mathcal{P}_{i_*}$ (with $\boldsymbol {r}_i - \hat{\boldsymbol{r}}_i$ as input) engage in Yao's two-party garbled-circuit protocol to obtain vector $s_i$ for $\mathcal{P}_{i_*}$ and vector $\hat{s}_i$ for $\mathcal{P}_i$ , such that $s_i + \hat{s}_i = \mathrm{OneHot}(\arg \max (r_i))$ .
+
+2. $\mathcal{P}_i$ sends $\hat{s}_i$ to the PG. The PG computes $\hat{s} = \sum_{i\neq i_*}\hat{s}_i + \mathrm{DPNoise}(\epsilon)$ , where DPNoise() is element-wise Laplacian or Gaussian noise whose variance is calibrated to obtain a desired differential privacy guarantee $\varepsilon$ ; whereas $\mathcal{P}_{i*}$ computes $s = \sum_{i\neq i_*}s_i$ .
+3. The PG and $\mathcal{P}_{i_*}$ engage in Yao's two-party garbled-circuit protocol for computing the argmax: $\mathcal{P}_{i_*}$ gets $\arg \max (\hat{s} + s)$ and the PG gets nothing.
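Steps 1-3 above can be sketched with plain integer arithmetic standing in for the cryptographic machinery (all names are illustrative; in the real protocol the shares are produced inside HE and garbled circuits and the values below are never visible in the clear):

```python
import numpy as np

rng = np.random.default_rng(1)
num_classes, num_answering = 10, 5

def one_hot(idx, n):
    v = np.zeros(n, dtype=np.int64)
    v[idx] = 1
    return v

# Logits r_i that each answering party's model would produce on the query.
logits = [rng.normal(size=num_classes) for _ in range(num_answering)]

# Step 1: for each answering party, the querier ends up with s_i and the
# party keeps s_hat_i such that s_i + s_hat_i = OneHot(argmax(r_i)).
s, s_hat = [], []
for r in logits:
    mask = rng.integers(0, 2**16, size=num_classes)  # random additive mask
    s.append(one_hot(int(np.argmax(r)), num_classes) - mask)
    s_hat.append(mask)

# Step 2: the PG sums the answering parties' shares and adds DP noise;
# the querier sums its own shares.
pg_share = sum(s_hat) + rng.normal(0.0, 1.0, size=num_classes)
querier_share = sum(s)

# Step 3: argmax over the recombined noisy histogram (done in a garbled
# circuit in the actual protocol, so only the label is revealed).
label = int(np.argmax(pg_share + querier_share))
```

Note that each individual share looks like random noise; only the sum of the two aggregated shares reconstructs the (noisy) vote histogram.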
+
+Next, we elaborate on the confidentiality and privacy guarantees achieved by CaPC.
+
+# 3.3 CONFIDENTIALITY AND DIFFERENTIAL PRIVACY GUARANTEES
+
+Confidentiality Analysis. We prove in Appendix E that the above protocol reveals nothing to $\mathcal{P}_i$ or the PG and only reveals the final noisy results to $P_{i*}$ . The protocol is secure against a semi-honest adversary corrupting any subset of parties. Intuitively, the proof can be easily derived based on the security of the underlying components, including two-party classification protocol, secret sharing, and Yao's garbled circuit protocol. As discussed in Section 4.1 and Appendix A.1, for secret sharing of unbounded integers, we need to make sure the random padding is picked from a domain much larger than the maximum possible value being shared. Given the above, a corrupted $\mathcal{P}_{i*}$ cannot learn anything about $\mathcal{M}_i$ of the honest party due to the confidentiality guarantee of the secure classification protocol; similarly, the confidentiality of $x$ against corrupted $\mathcal{P}_i$ is also protected. Intermediate values are all secretly shared (and only recovered within garbled circuits) so they are not visible to any party.
+
+Differential Privacy Analysis. Here, any potential privacy leakage in terms of differential privacy is incurred by the answering parties $\{\mathcal{P}_i\}_{i\neq i_*}$ for their datasets $\{\mathcal{D}_i\}_{i\neq i_*}$ , because these parties share the predictions of their models. Before sharing these predictions with $\mathcal{P}_{i_*}$ , we follow the PATE protocol: we compute the histogram of label counts $\hat{y}$ , add Laplacian or Gaussian noise (the aggregation has sensitivity 1), and finally return the argmax of the noisy histogram $\hat{y}_{\sigma}$ to $\mathcal{P}_{i_*}$ . Since $\mathcal{P}_{i_*}$ only sees this noisily aggregated label, both the data-dependent and data-independent differential privacy analyses of PATE apply to $\mathcal{P}_{i_*}$ (Papernot et al., 2017; 2018). Thus, when there are enough parties with high consensus, we can obtain a tighter bound on the privacy budget $\epsilon$ , as the true plurality is more likely to be returned (refer to Appendix B for more details on how this is achieved in PATE). This analysis assumes that only one answering party can be corrupted. If instead $C$ parties are corrupted, the sensitivity of the noisy aggregation mechanism is scaled by $C$ and the privacy guarantee deteriorates accordingly. There is no privacy leakage to the PG; it does not receive any part of the predictions from $\{\mathcal{P}_i\}_{i\neq i_*}$ .
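The noisy aggregation described above can be sketched as follows (a simplified stand-in for the PATE-style noisy argmax; the vote values are illustrative):

```python
import numpy as np

def noisy_argmax(votes, num_classes, sigma, rng):
    """PATE-style aggregation: build the histogram of teacher votes
    (sensitivity 1 per teacher), add Gaussian noise, return the argmax."""
    hist = np.bincount(votes, minlength=num_classes).astype(float)
    hist += rng.normal(0.0, sigma, size=num_classes)
    return int(np.argmax(hist))

rng = np.random.default_rng(0)
votes = np.array([3, 3, 3, 3, 7, 1, 3, 3])  # labels predicted by 8 parties
label = noisy_argmax(votes, num_classes=10, sigma=1.0, rng=rng)
```

With high consensus (here, six of eight votes agree), the noise rarely changes the outcome, which is exactly the regime where PATE's data-dependent analysis yields a small privacy cost.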
+
+# 4 EXPERIMENTS
+
+CaPC aims to improve the model utility of collaborating parties by providing them with new labelled data for training their respective local models. Since we designed the CaPC protocol with techniques for confidentiality (i.e., confidential inference and secret sharing) and differential privacy (i.e., private aggregation), our experiments consider the following three major dimensions:
+
+1. How well does collaboration improve the model utility of all participating parties?
+2. What requirements are there to achieve privacy and how can these be relaxed under different circumstances? What is the trade-off between the privacy and utility provided by CaPC?
+3. What is the resulting computational cost for ensuring confidentiality?
+
+# 4.1 IMPLEMENTATION
+
+We use the HE-transformer library with MPC (MP2ML) by Boemer (2020) in step 1a of our protocol for confidential two-party deep learning inference. To make our protocol flexible to any private inference library, not just those that return the label predicted by the model (HE-transformer only returns logits), we implement steps 1b and 1c of the protocol outside of the private inference library. We use the EMP toolkit (Wang et al., 2016) for generic two-party computation to compute operations such as argmax and sum via garbled circuits. To secret share the encrypted values, we first convert them into integers over a prime field according to the CKKS parameters, and then perform secret sharing on that domain to obtain perfectly secure secret sharing. To calibrate the necessary noise, we use the single largest logit value produced by each $\mathcal{M}_i$ on its training set $\mathcal{D}_i$ , computed in plain text.
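The final conversion step amounts to additive secret sharing over a prime field; a minimal sketch, where the modulus is an illustrative choice (in our implementation it is derived from the CKKS parameters):

```python
import secrets

P = 2**61 - 1  # illustrative prime; the real modulus comes from the CKKS parameters

def share(x, p=P):
    """2-out-of-2 additive secret sharing over Z_p: each share alone
    is uniformly distributed, but their sum mod p recovers x."""
    r = secrets.randbelow(p)
    return r, (x - r) % p

def reconstruct(s0, s1, p=P):
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % p

s0, s1 = share(123456789)
assert reconstruct(s0, s1) == 123456789
```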
+
+# 4.2 EVALUATION SETUP
+
+Collaboration. We use the following setup for experiments unless otherwise noted. We uniformly sample from the training set in use, without replacement, to create disjoint partitions, $\mathcal{D}_i$ , of equal size and identical data distribution for each party. We select $K = 50$ and $K = 250$ as the number of parties for CIFAR10 and SVHN, respectively (the number is larger for SVHN because we have more data). We select $Q = 3$ querying parties, $\mathcal{P}_{i_*}$ , and similarly divide part of the test set into $Q$ separate private pools from which each $\mathcal{P}_{i_*}$ selects queries until its privacy budget $\epsilon$ is reached (using Gaussian noise with $\sigma = 40$ on SVHN and 7 on CIFAR10). We are left with 1,000 and 16,032 evaluation data points from the test sets of CIFAR10 and SVHN, respectively. We fix $\epsilon = 2$ and 20 for SVHN and CIFAR10, respectively (which leads to $\approx 550$ queries per party), and report accuracy on the evaluation set. Querying models are retrained on their $\mathcal{D}_i$ plus the newly labelled data; the difference in accuracies is their accuracy improvement.
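A minimal sketch of this uniform partitioning; the function name and the dropped-remainder behaviour are our own simplification:

```python
import numpy as np

def partition_uniform(num_examples, k, seed=0):
    """Split indices 0..num_examples-1 into k disjoint partitions of
    equal size and identical distribution, one per party; any
    remainder examples are dropped."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_examples)
    size = num_examples // k
    return [idx[i * size:(i + 1) * size] for i in range(k)]

# e.g. the CIFAR10 training set split across K = 50 parties:
parts = partition_uniform(50_000, k=50)
```

Because the permutation is uniform, every partition is an i.i.d. sample from the same distribution, matching the uniform setting described above.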
+
+We use shallower variants of VGG, namely VGG-5 and VGG-7 for CIFAR10 and SVHN, respectively, to accommodate the small size of each party's private dataset. We instantiate VGG-7 with 6 convolutional layers and one final fully-connected layer, thus there are 7 functional layers overall. Similarly, VGG-5 has 4 convolutional layers followed by a fully connected layer. The ResNet-10 architecture starts with a single convolutional layer, followed by 4 basic blocks with 2 convolutional layers in each block, and ends with a fully-connected layer, giving 10 functional layers in total. The ResNet-8 architecture that we use excludes the last basic block and increases the number of neurons in the last (fully-connected) layer. We present more details on architectures in Appendix F.2.
+
+We first train local models for all parties using their non-overlapping private datasets. Next, we run the CaPC protocol to generate query-answer pairs for each querying party. Finally, we retrain the local model of each querying party using the combination of their original private dataset and the newly obtained query-answer pairs. We report the mean accuracy and class-specific accuracy averaged over 5 runs for all retrained models, where each uses a different random seed.
+
+Heterogeneity and Data Skew. Where noted, our heterogeneous experiments (recall that this is a newly applicable setting that CaPC enables) use the VGG-7, ResNet-8, and ResNet-10 architectures for $\frac{K}{3}$ parties each. One model of each architecture is used for each of the $Q = 3$ querying parties. Our data skew experiments use $80\%$ fewer data samples for the classes 'horse', 'ship', and 'truck' on CIFAR10 and $90\%$ fewer for the classes 1 and 2 on SVHN. In turn, models trained on this skewed data perform worse on these specific classes, leading to worse balanced accuracy (see Appendix D). We adopt balanced accuracy instead of other fairness metrics because the datasets we use have no sensitive attributes, making those metrics inapplicable. We employ margin, entropy, and greedy k-center active learning strategies (described in Appendix C) to encourage ML algorithms to sample more queries from regimes that have been underrepresented and to improve their fairness performance.
+
+# 4.3 COLLABORATION ANALYSIS
+
+We first investigate the benefits of collaboration for improving each party's model performance in several different settings, namely: homogeneous and heterogeneous model architectures across querying and answering parties, and uniform and non-uniform data sampling for training data. From these experiments, we observe increased accuracy across all model architectures in both homogeneous and heterogeneous settings (Section 4.3.1), and improved balanced accuracy when there is data skew between parties, i.e., non-uniform private data (Section 4.3.2).
+
+# 4.3.1 UNIFORMLY SAMPLED PRIVATE DATA
+
+The first setting we consider is a uniform distribution of data amongst the parties, i.e., there is no data drift among parties. Our setup for the uniform data distribution experiments is detailed in Section 4.2. We evaluate the per-class and overall accuracy before and after CaPC in both homogeneous and heterogeneous settings on the CIFAR10 and SVHN datasets.
+
+In Figure 2, we see a consistent increase in accuracy, both per class and overall, in terms of mean accuracy across all parties on the test sets. We observe these improvements in both the homogeneous and heterogeneous settings on both datasets tested. As demonstrated in Figure 2, the increase in mean accuracy is greater in the heterogeneous setting than in the homogeneous setting on SVHN. Figures 5, 6, and 7 provide a breakdown of the benefits obtained by each querying party. We can see from these figures that all querying parties observe an increase in overall accuracy in heterogeneous and homogeneous settings with both datasets; additionally, the gain in accuracy is largely constant across different model architectures. Class-specific accuracy degraded in only $6.67\%$ of all cases, and even in these cases the models still showed a net increase in overall accuracy.
+
+
+[Figure 2 panels: CIFAR10, Homogeneous; SVHN, Homogeneous; SVHN, Heterogeneous]
+
+Figure 2: Using CaPC to improve model performance. Dashed lines represent mean accuracy. With homogeneous models, we observe a mean increase of 4.09 and of 1.92 percentage points on CIFAR10 and SVHN, respectively, and an increase of 2.64 with heterogeneous models; each party still sees improvements despite differing model architectures (see Figure 7 in Appendix F).
+
+# 4.3.2 NON-UNIFORMLY SAMPLED PRIVATE DATA
+
+In this section, we focus our analysis on two types of data skew between parties: varying size of data per class and total size of data provided; the setup is described in Section 4.2. To analyze data skew, we explore the balanced accuracy (which measures mean recall on a per-class basis, see Appendix D). We use balanced accuracy in order to investigate aggregate fairness gains offered by CaPC. Random sampling from non-uniform distributions leads to certain pitfalls: e.g., underrepresented classes are not specifically targeted in sampling. Thus, we additionally utilize active learning techniques, namely entropy, margin, and greedy-k-center (see Definitions 6-8 in Appendix C), and analyze balanced accuracy with each strategy.
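Balanced accuracy, as used here, is simply the unweighted mean of the per-class recalls; a minimal sketch:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, num_classes):
    """Mean per-class recall: every class contributes equally to the
    score, however many test examples it has."""
    recalls = []
    for c in range(num_classes):
        mask = (y_true == c)
        if mask.any():  # skip classes absent from the test set
            recalls.append(float((y_pred[mask] == c).mean()))
    return float(np.mean(recalls))

y_true = np.array([0, 0, 0, 1])
y_pred = np.array([0, 0, 0, 0])
# plain accuracy is 0.75, but balanced accuracy is (1.0 + 0.0) / 2 = 0.5
assert balanced_accuracy(y_true, y_pred, 2) == 0.5
```

This is why balanced accuracy exposes the harm from data skew: a model that ignores an underrepresented class can still score high on plain accuracy but not on balanced accuracy.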
+
+In Figure 3, we see that CaPC has a significant impact on the balanced accuracy when there is data skew between the private data of participating parties. Even random sampling can drastically improve balanced accuracy. Leveraging active learning techniques, we can achieve additional benefits in balanced accuracy. In particular, we observe that entropy and margin sampling achieve the greatest improvement over random sampling in per-class accuracy for the less represented classes 'horse', 'ship', and 'truck' on CIFAR10 and classes 1 and 2 on SVHN. These enhancements can be explained by the underlying mechanisms of margin and entropy sampling, because the less-represented classes have a higher margin/entropy; the queries per class for each method are shown in Figure 9. Through these experiments, we show that in data skew settings, the CaPC protocol can significantly improve the fair performance of models (as measured by balanced accuracy), especially when combined with active learning techniques. Note that we see similar trends with (normal) accuracy as well.
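The entropy and margin criteria referenced above can be sketched as follows; this is a simplified stand-in for the definitions in Appendix C (greedy-k-center is omitted here, since it needs pairwise feature distances):

```python
import numpy as np

def entropy_scores(probs):
    """Shannon entropy of each predicted distribution; higher means
    the querying party's local model is less certain."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def margin_scores(probs):
    """Negated gap between the top two class probabilities; a small
    gap (high score) means the model is torn between two classes."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return -(top2[:, 1] - top2[:, 0])

def select_queries(probs, budget, scorer=entropy_scores):
    """Return the indices of the `budget` most uncertain candidates."""
    return np.argsort(scorer(probs))[-budget:]

probs = np.array([[0.9, 0.05, 0.05],
                  [0.4, 0.35, 0.25]])
assert select_queries(probs, 1).tolist() == [1]  # the uncertain example wins
```

Because examples from underrepresented classes tend to receive flatter predicted distributions, both scores concentrate the query budget on exactly those classes.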
+
+
+Figure 3: Using CaPC with active learning to improve balanced accuracy under non-uniform data distribution. Dashed lines are balanced accuracy (BA). We observe that all sampling strategies significantly improve BA and the best active learning scheme can improve BA by a total of 9.94 percentage points (an additional 0.8 percentage points over random sampling) on CIFAR10 (left) and a total of 5.67 percentage points (an additional 0.38) on SVHN (right).
+
+# 4.4 PRIVACY VERSUS UTILITY
+
+We now study the trade-off between the privacy and utility of our obtained models. Recall that we add Gaussian (or Laplacian) noise to the aggregate of the predicted labels of all parties. Under the uniform setting, we choose the standard deviation $\sigma$ by performing a (random) grid search and selecting the largest noise level before a significant loss in accuracy is observed. In doing so, each query uses minimal $\varepsilon$ while maximizing utility. Figure 11 in Appendix F shows a sample plot for $K = 250$ models. For more details on how $\varepsilon$ is calculated, please refer to Appendix B.
+
+As we increase the number of parties, we can issue more queries for a given privacy budget $(\varepsilon)$ which leads to a higher accuracy gain. In Figure 4, we report the accuracy gain achieved using CaPC with various numbers of parties, $K$ . With a fixed total dataset size, increasing the number of parties decreases their training data size, leading to worse performing models. These models see the largest benefit from CaPC but, importantly, we always see a net improvement across all values of $K$ .
+
+
+Figure 4: Accuracy gain for balanced SVHN using CaPC versus number of parties and privacy budget, $\varepsilon$ . With more parties, we can achieve a higher accuracy gain at a smaller bound on $\varepsilon$ .
+
+| Number of parties | 150 | 200 | 250 | 300 | 400 |
+| --- | --- | --- | --- | --- | --- |
+| Accuracy gain (%) | 0.62 | 1.45 | 2.39 | 3.07 | 3.87 |
+| Best $\varepsilon$ | 3.50 | 3.32 | 2.60 | 2.40 | 1.91 |
+
+# 4.5 COMPUTATIONAL COSTS OF CONFIDENTIALITY
+
+The incorporation of confidentiality in CaPC increases computational costs. We segment the analysis of the computational overhead of CaPC into three parts corresponding to sequential steps in the protocol: (1) inference, (2) secret sharing between each querying and answering party, and (3) secret sharing between the querying party and the PG. Each of these steps is analyzed in terms of wall-clock time (in seconds). We use the default encryption settings in HE-transformer and vary the modulus range $N$ , which bounds the maximum value of a plain text number, in order to increase the maximum attainable security level. HE-transformer only supports inference on CPUs and is used in step (1).
+
+Step (1), neural network inference using MPC, incurs the highest CPU and network costs (see Table 1 and Figure 13 in Appendix F). Even the base security level increases computational cost by $100\mathrm{X}$ , and high security levels see increases of up to $1000\mathrm{X}$ , in comparison to non-encrypted inference on CPU. Compared to step (1), the rest of the CaPC protocol incurs a negligible overhead for secret sharing. Overall, CaPC adds only a low cost on top of the underlying MP2ML framework, as shown in Figure 13, so its applicability and scalability will improve as these tools progress.
+
+# 5 DISCUSSION AND CONCLUSIONS
+
+CaPC is a secure and private protocol that protects both the confidentiality of test data and the privacy of training data, properties desired in applications like healthcare and finance. Our framework facilitates collaborative learning using heterogeneous model architectures and separate private datasets, even if the number of parties involved is small. It offers notable advantages over recent methods for learning with multiple participants, such as federated learning, which assumes training of a single fixed model architecture. CaPC does not assume a homogeneous model architecture and allows parties to separately and collaboratively train different models optimized for their own purposes. Federated learning also requires a large number of parties, while CaPC provides gains in accuracy with significantly fewer participants, even in contexts where each party already has a model with high accuracy. Notably, CaPC incurs low overhead on top of the underlying tools used for secure neural network inference.
+
+Through our experiments, we also demonstrate that CaPC facilitates collaborative learning even when there exists non-i.i.d. (highly skewed) private data among parties. Our experiments show that CaPC improves the fair performance of participating querying models, as indicated by improvements in balanced accuracy, a common fairness metric. Further, we observe a significant increase in per-class accuracy on less-represented classes on all datasets tested. Notably, CaPC is easily configured to leverage active learning techniques to achieve additional fairness gains, or to learn from other heterogeneous models trained with fairness techniques, e.g., with synthetic minority oversampling (Chawla et al., 2002). In future work, we plan to analyze the fairness implications of CaPC in contexts where there is discrimination over a private dataset's sensitive attributes, not just class labels. In these cases, other fairness metrics like equalized odds and equal opportunity (see Appendix D) can be explored.
+
+We note some limitations of the proposed protocol. HE-transformer does not prevent leaking certain aspects of the model architecture, such as the type of non-linear activation functions and the presence of MaxPooling layers. CaPC improves upon existing methods in terms of the necessary number of parties; however, it would be favorable to see this number reduced below 50 for better flexibility and applicability in practice.
+
+In the face of this last limitation, when there are few physical parties, we can generate a larger number of virtual parties for CaPC, where each physical party subdivides their private dataset into disjoint partitions and trains multiple local models. This would allow CaPC to tolerate more noise injected during aggregation and provide better privacy guarantees. Note that each physical party could select queries using a dedicated strong model instead of the weak models used for answering queries in CaPC. This setting is desirable in cases where separate models are required within a single physical party, for example, in a multi-national organization with per-country models.
+
+# ACKNOWLEDGMENTS
+
+We would like to acknowledge our sponsors, who support our research with financial and in-kind contributions: Microsoft, Intel, CIFAR through the Canada CIFAR AI Chair and AI catalyst programs, NFRF through an Exploration grant, and NSERC COHESA Strategic Alliance. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www.vectorinstitute.ai/partners. Finally, we would like to thank members of CleverHans Lab for their feedback, especially: Tejumade Afonja, Varun Chandrasekaran, Stephan Rabanser, and Jonas Guan.
+
+# REFERENCES
+
+Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308-318, 2016.
+Fabian Boemer. he-transformer. https://github.com/IntelAI/he-transformer, 2020. [Online; accessed 19-September-2020].
+Fabian Boemer, Yixing Lao, Rosario Cammarota, and Casimir Wierzynski. Ngraph-he: A graph compiler for deep learning on homomorphically encrypted data. In Proceedings of the 16th ACM International Conference on Computing Frontiers, CF '19, pp. 3-13, New York, NY, USA, 2019. Association for Computing Machinery.
+Fabian Boemer, Rosario Cammarota, Daniel Demmler, Thomas Schneider, and Hossein Yalame. MP2ML: a mixed-protocol machine learning framework for private inference. In Melanie Volkamer and Christian Wressnegger (eds.), ARES 2020: The 15th International Conference on Availability, Reliability and Security, Virtual Event, Ireland, August 25-28, 2020, pp. 14:1-14:10. ACM, 2020.
+Zvika Brakerski, Craig Gentry, and Vinod Vaikuntanathan. (Leveled) fully homomorphic encryption without bootstrapping. ACM Transactions on Computation Theory (TOCT), 6(3):1-36, 2014.
+Nicholas Carlini, Chang Liu, Ülfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th {USENIX} Security Symposium ({USENIX} Security 19), pp. 267-284, 2019.
+Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research, 16:321-357, 2002.
+Jung Hee Cheon, Andrey Kim, Miran Kim, and Yongsoo Song. Homomorphic encryption for arithmetic of approximate numbers. In International Conference on the Theory and Application of Cryptology and Information Security, pp. 409-437. Springer, 2017.
+Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of cryptography conference, pp. 265-284. Springer, 2006.
+Cynthia Dwork, Guy N Rothblum, and Salil Vadhan. Boosting and differential privacy. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pp. 51-60. IEEE, 2010.
+Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014.
+David Evans, Yan Huang, Jonathan Katz, and Lior Malka. Efficient privacy-preserving biometric identification. In Proceedings of the 17th conference Network and Distributed System Security Symposium, NDSS, volume 68, 2011.
+Reza Zanjirani Farahani and Masoud Hekmatfar. Facility location: concepts, models, algorithms and case studies. Springer, 2009.
+Craig Gentry. A fully homomorphic encryption scheme, volume 20. Stanford university Stanford, 2009.
+
+Ran Gilad-Bachrach, Nathan Dowlin, Kim Laine, Kristin Lauter, Michael Naehrig, and John Wernsing. Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy. In International Conference on Machine Learning, pp. 201-210, 2016.
+Sebastien Godard. sar (sysstat). http://sebastien.godard.pagesperso-orange.fr/, 2020. [Online; accessed 10-September-2020].
+Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances in neural information processing systems, pp. 3315-3323, 2016.
+Yangsibo Huang, Zhao Song, Danqi Chen, Kai Li, and Sanjeev Arora. Texthide: Tackling data privacy in language understanding tasks. arXiv preprint arXiv:2010.06053, 2020a.
+Yangsibo Huang, Zhao Song, Kai Li, and Sanjeev Arora. Instahide: Instance-hiding schemes for private distributed learning. arXiv preprint arXiv:2010.02772, 2020b.
+Yangsibo Huang, Yushan Su, Sachin Ravi, Zhao Song, Sanjeev Arora, and Kai Li. Privacy-preserving learning via deep net pruning. arXiv preprint arXiv:2003.01876, 2020c.
+Yuval Ishai, Joe Kilian, Kobbi Nissim, and Erez Petrank. Extending oblivious transfers efficiently. In Annual International Cryptology Conference, pp. 145-161. Springer, 2003.
+Chiraag Juvekar, Vinod Vaikuntanathan, and Anantha Chandrakasan. Gazelle: A low latency framework for secure neural network inference. In 27th USENIX Security Symposium (USENIX Security 18), pp. 1651-1669, 2018.
+Jakub Konečný, H Brendan McMahan, Felix X Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016.
+David D. Lewis and William A. Gale. A sequential algorithm for training text classifiers. In Proceedings of the 17th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Dublin, Ireland, 3-6 July 1994 (Special Issue of the SIGIR Forum), pp. 3-12, 1994.
+Dahlia Malkhi, Noam Nisan, Benny Pinkas, and Yaron Sella. Fairplay—a secure two-party computation system. In Proceedings of the 13th Conference on USENIX Security Symposium - Volume 13, SSYM'04, pp. 20, USA, 2004. USENIX Association.
+Alessandro Mantelero. The eu proposal for a general data protection regulation and the roots of the 'right to be forgotten'. Computer Law & Security Review, 29(3):229-235, 2013.
+H Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private recurrent language models. arXiv preprint arXiv:1710.06963, 2017.
+Ilya Mironov. Rényi differential privacy. In 2017 IEEE 30th Computer Security Foundations Symposium (CSF), pp. 263-275. IEEE, 2017.
+Pratyush Mishra, Ryan Lehmkuhl, Akshayaram Srinivasan, Wenting Zheng, and Raluca Ada Popa. Delphi: A cryptographic inference service for neural networks. In 29th USENIX Security Symposium (USENIX Security 20), pp. 2505-2522. USENIX Association, August 2020. ISBN 978-1-939133-17-5.
+Payman Mohassel and Yupeng Zhang. Secureml: A system for scalable privacy-preserving machine learning. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 19-38. IEEE, 2017.
+Nicolas Papernot, Martin Abadi, Ülfar Erlingsson, Ian J. Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
+Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Ülfar Erlingsson. Scalable private learning with PATE. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.
+
+Benny Pinkas, Mike Rosulek, Ni Trieu, and Avishay Yanai. Psi from paxos: Fast, malicious private set intersection. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 739-767. Springer, 2020.
+Michael O. Rabin. How to exchange secrets with oblivious transfer. Cryptology ePrint Archive, Report 2005/187, 2005. https://eprint.iacr.org/2005/187.
+Tobias Scheffer, Christian Decomain, and Stefan Wrobel. Active hidden markov models for information extraction. In International Symposium on Intelligent Data Analysis, pp. 309-318. Springer, 2001.
+Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. arXiv preprint arXiv:1708.00489, 2017.
+Burr Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2009.
+Adi Shamir. How to share a secret. Communications of the ACM, 22(11):612-613, 1979.
+Claude E Shannon. A mathematical theory of communication. Bell system technical journal, 27(3): 379-423, 1948.
+Micah J. Sheller, Brandon Edwards, G. Anthony Reina, Jason Martin, Sarthak Pati, Aikaterini Kotrotsou, Mikhail Milchenko, Weilin Xu, Daniel Marcus, Rivka R. Colen, and Spyridon Bakas. Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. *Scientific Reports*, 10(1):12598, 2020. doi: 10.1038/s41598-020-69250-1. URL https://doi.org/10.1038/s41598-020-69250-1.
+Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 3-18. IEEE, 2017.
+Xiao Wang, Alex J. Malozemoff, and Jonathan Katz. EMP-toolkit: Efficient MultiParty computation toolkit. https://github.com/emp-toolkit, 2016.
+Andrew Chi-Chih Yao. How to generate and exchange secrets (extended abstract). In 27th Annual Symposium on Foundations of Computer Science, pp. 162-167, Toronto, Ontario, Canada, October 27-29, 1986. IEEE Computer Society Press.
+
+# A MORE BACKGROUND ON CRYPTOGRAPHY
+
+# A.1 CRYPTOGRAPHIC BUILDING BLOCKS
+
+Homomorphic encryption. Homomorphic encryption defines an encryption scheme such that the encryption and decryption functions are homomorphic between the plaintext and ciphertext spaces. Although it is known that fully homomorphic encryption can be constructed from lattice-based assumptions, most applications only require a weaker version that supports a bounded number of multiplications on each ciphertext. Schemes with this constraint are much more practical, including, for example, BGV (Brakerski et al., 2014) and CKKS (Cheon et al., 2017).
+
+Secret sharing. Secret sharing denotes a scheme in which a datum, the secret, is shared amongst a group of parties by dividing it into parts such that each party holds only one part, or 'share', of the secret. The secret can be recovered only if a sufficient number of parties combine their shares. It is easy to construct secret sharing modulo a positive integer. If the application does not allow modular operations, one can still achieve statistically secure secret sharing by using random shares that are much larger than the secret being shared (Evans et al., 2011).
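A sketch of the latter, statistically secure variant; the 32-bit value bound and 40-bit security margin are illustrative choices:

```python
import secrets

def share_statistical(x, value_bits=32, stat_sec_bits=40):
    """Additive sharing over the plain integers: the random pad is
    drawn from a domain 2**stat_sec_bits times larger than the
    secret's range, so the share x - r is statistically close to a
    uniform sample and leaks essentially nothing about x."""
    r = secrets.randbelow(1 << (value_bits + stat_sec_bits))
    return r, x - r

r, s = share_statistical(1234)
assert r + s == 1234  # reconstruction is plain integer addition
```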
+
+Oblivious transfer. Oblivious transfer involves two parties: the sending party and the receiving party. The sending party has two pieces of information, $s_0$ and $s_1$ , and the receiver wants to receive $s_b$ , where $b \in \{0, 1\}$ , such that the sending party cannot learn $b$ and the receiving party cannot learn $s_{\neg b}$ . In general, oblivious transfer requires public-key operations, however, it is possible to execute a large number of oblivious transfers with only a very small number of public-key operations based on oblivious transfer extension (Ishai et al., 2003).
+
+Garbled circuits. In Yao's garbled circuit protocol for two-party computation, each of the two parties assumes a role, that of garbler or that of evaluator. The function $f$ on which to compute each of the two parties' inputs is described as a Boolean circuit. The garbler randomly generates aliases (termed labels) representing 0 and 1 in the Boolean circuit describing $f$ and replaces the binary values with the generated labels for each wire in the circuit. At each gate in the circuit, which can be viewed as a truth table, the garbler uses the labels of each possible combination of inputs to encrypt the corresponding outputs, and permutes the rows of the truth table. The garbler then uses the generated labels for 0 and 1 to encode their own input data and sends these labels along with the garbled Boolean circuit to the evaluator. The evaluator then converts their binary input data to the corresponding labels through a 1-out-of-2 oblivious transfer protocol with the garbler. After receiving the labels for their input, the evaluator evaluates the garbled circuit by trying to decrypt each row of the permuted truth table at each gate using the input labels; only one row will be decryptable at each gate, yielding the output label for the gate's outgoing wire. The evaluator eventually finishes evaluating the garbled circuit and obtains the label for the output of the function $f$ computed on the garbler's and the evaluator's inputs. The garbler then provides the true value for the output label so that both parties learn the output.
+
+# A.2 PROTECTING CONFIDENTIALITY USING MPC
+
+Neural networks present a challenge to secure multi-party computation protocols due to their unique structure, which combines linear computations with non-linear activation functions. Cryptographic inference with neural networks can be considered as a two-party computation in which one party has a confidential input for which they wish to obtain the output of a model and the other party stores the model; in many cases, the party storing the model also wishes to keep the model confidential.
+
+Confidential learning and inference with neural networks typically uses homomorphic encryption (HE) or secure multi-party computation (MPC) methods. Many libraries support pure HE or MPC protocols for secure inference of neural networks; a comprehensive list can be found in (Boemer et al., 2020). Notably, libraries such as nGraph-HE (Boemer et al., 2019) and CryptoNets (Gilad-Bachrach et al., 2016) provide pure homomorphic encryption solutions to secure neural network inference. nGraph-HE, an extension of the graph compiler nGraph, allows secure inference of DNNs through linear computations at each layer using the CKKS homomorphic encryption scheme (Cheon et al., 2017; Boemer et al., 2019). CryptoNets similarly permits confidential neural network inference using another leveled homomorphic encryption scheme, YASHE' (Gilad-Bachrach et al., 2016). On the other hand, several libraries employing primarily MPC methods in secure NN inference frameworks rely on ABY, a tool providing support for common non-polynomial activation functions in NNs through the use of both Yao's GC and GMW.
+
+In DL contexts, while pure homomorphic encryption methods maintain model security, their failure to support common non-polynomial activation functions leads to leaking of pre-activation values (feature maps at hidden layers). Tools that use solely MPC protocols avoid leaking pre-activation values as they can guarantee data confidentiality on non-polynomial activation functions but may compromise the security of the model architecture by leaking activation functions or model structure.
+
+Recent works on secure NN inference propose hybrid protocols that combine homomorphic encryption schemes, and MPC methods to build frameworks that try to reduce leakages common in pure HE and MPC protocols. Among recent works that use hybrid protocols and do not rely on trusted third parties are Gazelle (Juvekar et al., 2018), Delphi (Mishra et al., 2020), and MP2ML (Boemer et al., 2020).
+
+Gazelle, Delphi, and MP2ML largely support the non-polynomial activation functions encountered in convolutional neural networks, such as maximum pooling and rectified linear unit (ReLU) operations. Gazelle introduced several improvements over previous methods for secure NN inference, primarily relating to latency and confidentiality. In particular, the Gazelle framework provides homomorphic encryption libraries with low-latency implementations of algorithms for single instruction multiple data (SIMD) operations, ciphertext permutation, and homomorphic matrix and convolutional operations pertinent to convolutional neural networks. Gazelle utilizes kernel methods to evaluate homomorphic operations for the linear components of networks, garbled circuits to compute non-linear activation functions confidentially, and additive secret sharing to quickly switch between these cryptographic protocols. Delphi builds on Gazelle, optimizing both the linear and non-linear computations in CNNs by secret sharing model weights in the pre-processing stage to speed up later linear computations, and by approximating certain activation functions such as ReLU with polynomials. MP2ML employs nGraph-HE for homomorphic encryption and the ABY framework for the evaluation of non-linear functions using garbled circuits.
+
+# B MORE BACKGROUND ON DIFFERENTIAL PRIVACY
+
+One of the compelling properties of differential privacy is that it permits the analysis and control of cumulative privacy cost over multiple consecutive computations. For instance, the strong composition theorem (Dwork et al., 2010) gives a tight estimate of the privacy cost associated with a sequence of adaptive mechanisms $\{\mathcal{M}_i\}_{i\in I}$ .
+
+Theorem 1 (Strong Composition). For $\varepsilon, \delta, \delta' \geq 0$ , the class of $(\varepsilon, \delta)$ -differentially private mechanisms satisfies $(\varepsilon', k\delta + \delta')$ -differential privacy under $k$ -fold adaptive composition for:
+
+$$
+\varepsilon' = \varepsilon \sqrt{2k \log(1/\delta')} + k \varepsilon \left(e^{\varepsilon} - 1\right) \tag{2}
+$$
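
For concreteness, the bound in Eq. (2) can be evaluated numerically; the sketch below is purely illustrative and is not part of the CaPC implementation:

```python
import math

def strong_composition_epsilon(eps, k, delta_prime):
    """Cumulative privacy cost eps' of k-fold adaptive composition of
    (eps, delta)-DP mechanisms, per the strong composition theorem (Eq. 2)."""
    return eps * math.sqrt(2 * k * math.log(1 / delta_prime)) \
        + k * eps * (math.exp(eps) - 1)
```

For example, composing $k = 100$ mechanisms with $\varepsilon = 0.1$ and $\delta' = 10^{-5}$ yields $\varepsilon' \approx 5.85$, illustrating how quickly the privacy budget accumulates under composition.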
+
+To facilitate the evaluation of the privacy leakage resulting from a randomized mechanism $\mathcal{M}$ , it is helpful to explicitly define its corresponding privacy loss $c_{\mathcal{M}}$ and privacy loss random variable $C_{\mathcal{M}}$ . In particular, the fact that $\mathcal{M}$ is $(\varepsilon, \delta)$ -differentially private is equivalent to a certain tail bound on $C_{\mathcal{M}}$ .
+
+Definition 2 (Privacy Loss). Given a pair of adjacent datasets $d$ , $d' \in \mathcal{D}$ and an auxiliary input aux, the privacy loss $c_{\mathcal{M}}$ of a randomized mechanism $\mathcal{M}$ evaluated at an outcome $o \in \mathcal{R}$ is defined as:
+
+$$
+c_{\mathcal{M}}\left(o \mid aux, d, d'\right) \triangleq \log \frac{\Pr\left[\mathcal{M}(aux, d) = o\right]}{\Pr\left[\mathcal{M}(aux, d') = o\right]} \tag{3}
+$$
+
+For an outcome $o \in \mathcal{R}$ sampled from $\mathcal{M}(d)$ , $C_{\mathcal{M}}(aux, d, d')$ takes the value $c_{\mathcal{M}}(o \mid aux, d, d')$ .
+
+Based on the definition of privacy loss, Abadi et al. (2016) introduced the moments accountant to track higher-order moments of the privacy loss random variable, achieving even tighter privacy bounds for $k$ -fold adaptive mechanisms.
+
+Definition 3 (Moments Accountant). Given any adjacent datasets $d$ , $d' \in \mathcal{D}$ and any auxiliary input aux, the moments accountant of a randomized mechanism $\mathcal{M}$ is defined as:
+
+$$
+\alpha_{\mathcal{M}}(\lambda) \triangleq \max_{aux, d, d'} \alpha_{\mathcal{M}}\left(\lambda \mid aux, d, d'\right) \tag{4}
+$$
+
+where $\alpha_{\mathcal{M}}(\lambda | aux, d, d') \triangleq \log \mathbb{E}[\exp(\lambda C_{\mathcal{M}}(aux, d, d'))]$ is the logarithm of the moment-generating function of the privacy loss random variable, evaluated at $\lambda$ .
+
+As a natural relaxation to the conventional $(\varepsilon, \delta)$ -differential privacy, Rényi differential privacy (RDP) (Mironov, 2017) provides a more convenient and accurate approach to estimating privacy loss under heterogeneous composition.
+
+Definition 4 (Rényi Divergence). For two probability distributions $P$ and $Q$ defined over $\mathcal{R}$ , the Rényi divergence of order $\lambda > 1$ between them is defined as:
+
+$$
+D_{\lambda}(P \,\|\, Q) \triangleq \frac{1}{\lambda - 1} \log \mathbb{E}_{x \sim Q}\left[(P(x)/Q(x))^{\lambda}\right] = \frac{1}{\lambda - 1} \log \mathbb{E}_{x \sim P}\left[(P(x)/Q(x))^{\lambda - 1}\right] \tag{5}
+$$
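
The two expectations in Eq. (5) can be checked numerically. The sketch below (illustrative only) compares a Riemann-sum evaluation of Eq. (5) against the known closed form $D_\lambda = \lambda(\mu_P - \mu_Q)^2 / (2\sigma^2)$ for two Gaussians with equal variance:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def renyi_numeric(lam, mu_p, mu_q, sigma, lo=-30.0, hi=30.0, n=60000):
    """Midpoint Riemann-sum evaluation of Eq. (5):
    D_lam(P || Q) = 1/(lam-1) * log E_{x~Q}[(P(x)/Q(x))^lam]."""
    dx = (hi - lo) / n
    acc = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        p, q = normal_pdf(x, mu_p, sigma), normal_pdf(x, mu_q, sigma)
        acc += q * (p / q) ** lam * dx
    return math.log(acc) / (lam - 1)

def renyi_gaussians(lam, mu_p, mu_q, sigma):
    """Closed form for equal-variance Gaussians: lam * (mu_p - mu_q)^2 / (2 sigma^2)."""
    return lam * (mu_p - mu_q) ** 2 / (2 * sigma ** 2)
```

For instance, with $\lambda = 2$, $\mu_P = 0$, $\mu_Q = 1$, $\sigma = 2$, both evaluate to $0.25$.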
+
+Definition 5 (Rényi Differential Privacy). A randomized mechanism $\mathcal{M}$ is said to satisfy $\varepsilon$ -Rényi differential privacy of order $\lambda$ , or $(\lambda, \varepsilon)$ -RDP for short, if for any adjacent datasets $d, d' \in \mathcal{D}$ :
+
+$$
+D_{\lambda}\left(\mathcal{M}(d) \,\|\, \mathcal{M}(d')\right) = \frac{1}{\lambda - 1} \log \mathbb{E}_{x \sim \mathcal{M}(d)}\left[\left(\frac{\Pr[\mathcal{M}(d) = x]}{\Pr[\mathcal{M}(d') = x]}\right)^{\lambda - 1}\right] \leq \varepsilon \tag{6}
+$$
+
+Theorem 2 (From RDP to DP). If a randomized mechanism $\mathcal{M}$ guarantees $(\lambda, \varepsilon)$ -RDP, then it also satisfies $(\varepsilon + \frac{\log(1 / \delta)}{\lambda - 1}, \delta)$ -differential privacy for any $\delta \in (0,1)$ .
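
Theorem 2 is what makes RDP accounting practical: one tracks RDP guarantees at several orders and converts to $(\varepsilon, \delta)$-DP at the end, minimizing over orders. A minimal sketch, using the standard sensitivity-1 Gaussian-mechanism RDP curve $\varepsilon(\lambda) = \lambda/(2\sigma^2)$ from Mironov (2017):

```python
import math

def rdp_to_dp(lam, eps_rdp, delta):
    """Theorem 2: (lam, eps)-RDP implies
    (eps + log(1/delta)/(lam - 1), delta)-DP."""
    return eps_rdp + math.log(1 / delta) / (lam - 1)

def gaussian_mechanism_dp_epsilon(sigma, delta, orders=range(2, 512)):
    # A sensitivity-1 Gaussian mechanism satisfies (lam, lam/(2*sigma^2))-RDP
    # for every order lam (Mironov, 2017); pick the order giving the
    # smallest implied DP epsilon.
    return min(rdp_to_dp(lam, lam / (2 * sigma ** 2), delta) for lam in orders)
```

For example, a single sensitivity-1 Gaussian query with $\sigma = 40$ and $\delta = 10^{-5}$ converts to $\varepsilon \approx 0.12$.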
+
+Building upon the moments accountant and RDP techniques, Private Aggregation of Teacher Ensembles (PATE) (Papernot et al., 2017) provides a flexible approach to training machine learning models with strong privacy guarantees. Precisely, rather than directly learning from labeled private data, the model that gets released instead learns from unlabeled public data by querying a teacher ensemble for predicted labels. Models in the ensemble are themselves trained on disjoint partitions of the private dataset, while privacy guarantees are enabled by applying the Laplace mechanism to the ensemble's aggregated label counts. Coupled with data-dependent privacy analysis, PATE achieves a tighter estimate of the privacy loss associated with label queries, especially when the consensus among teacher models is strong. Given this motivation, the follow-up work on PATE (Papernot et al., 2018) further improves the privacy bound both by leveraging a more concentrated noise distribution to strengthen consensus and by rejecting queries that lack consensus.
+
+# C MORE BACKGROUND ON ACTIVE LEARNING
+
+Active learning, sometimes referred to as query learning, exploits the intuition that machine learning algorithms can learn more efficiently if they are allowed to actively select the data from which they learn. For certain supervised learning tasks, this insight has particularly important implications, as labeled data rarely exists in abundance and data labeling can be very demanding (Settles, 2009).
+
+In order to pick queries that will most likely contribute to model learning, various pool-based sampling methods have been proposed to estimate the informativeness of unlabeled samples. Uncertainty-based approaches (Lewis & Gale, 1994), such as margin sampling and entropy sampling, typically achieve a satisfactory trade-off between sample utility and computational efficiency. We also explore a core-set approach to active learning using greedy-$k$-center sampling (Sener & Savarese, 2017).
+
+Definition 6 (Margin Sampling (Scheffer et al., 2001)). Given an unlabeled dataset $d$ and a classification model with conditional label distribution $P_{\theta}(y|x)$ , margin sampling outputs the most informative sample:
+
+$$
+x^{*} = \underset{x \in d}{\arg\min}\, P_{\theta}\left(\hat{y}_{1} \mid x\right) - P_{\theta}\left(\hat{y}_{2} \mid x\right) \tag{7}
+$$
+
+where $\hat{y}_1$ and $\hat{y}_2$ stand for the most and second most probable labels for $x$ , according to the model.
+
+Definition 7 (Entropy Sampling). Using the setting and notations in Definition 6, margin sampling can be generalized by using entropy (Shannon, 1948) as an uncertainty measure as follows:
+
+$$
+x^{*} = \underset{x \in d}{\arg\max}\, -\sum_{i} P_{\theta}\left(y_{i} \mid x\right) \log P_{\theta}\left(y_{i} \mid x\right) \tag{8}
+$$
+
+where $y_{i}$ ranges over all possible labels.
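
Both uncertainty measures are cheap to compute given access to the model's predictive distribution. A minimal sketch, where `predict_proba` is a stand-in for any classifier returning class probabilities:

```python
import math

def margin_sample(pool, predict_proba):
    """Eq. (7): select the sample whose top-two predicted class
    probabilities are closest (smallest margin = most uncertain)."""
    def margin(x):
        top2 = sorted(predict_proba(x), reverse=True)[:2]
        return top2[0] - top2[1]
    return min(pool, key=margin)

def entropy_sample(pool, predict_proba):
    """Eq. (8): select the sample with maximal predictive entropy."""
    def entropy(x):
        return -sum(p * math.log(p) for p in predict_proba(x) if p > 0)
    return max(pool, key=entropy)
```

On a two-way tie-free pool, both pick the sample the model is least sure about; they differ when the probability mass beyond the top-two classes matters.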
+
+Definition 8 (Greedy-K-center Sampling). We aim to solve the $k$ -center problem defined by Farahani & Hekmatfar (2009), which is, intuitively, the problem of picking $k$ center points that minimize the largest distance between a data point and its nearest center. Formally, this goal is defined as
+
+$$
+\min_{\mathcal{S}: |\mathcal{S} \cup \mathcal{D}| \leq k} \; \max_{i} \; \min_{j \in \mathcal{S} \cup \mathcal{D}} \Delta\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) \tag{9}
+$$
+
+where $\mathcal{D}$ is the current training set and $\mathcal{S}$ is the set of newly chosen center points. This problem can be solved greedily as shown in (Sener & Savarese, 2017).
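
The greedy procedure admits a short sketch (illustrative; `dist` is any metric, e.g. Euclidean distance between feature embeddings):

```python
def greedy_k_center(pool, train_set, k, dist):
    """Greedy heuristic for the k-center objective in Eq. (9)
    (Sener & Savarese, 2017): repeatedly add the pool point that is
    farthest from its nearest already-chosen center."""
    centers = list(train_set)
    chosen = []
    for _ in range(k):
        x = max(pool, key=lambda p: min(dist(p, c) for c in centers))
        chosen.append(x)
        centers.append(x)
    return chosen
```

Each iteration costs one pass over the pool, so selecting $k$ points is $O(k \cdot |pool| \cdot |centers|)$ distance evaluations in this naive form.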
+
+# D MORE BACKGROUND ON FAIRNESS
+
+Due to imbalances in sample quantity and learning complexity, machine learning models may have disparate predictive performance over different classes or demographic groups, resulting in unfair treatment of certain populations. To better capture this phenomenon and introduce tractable countermeasures, various fairness-related criteria have been proposed, including balanced accuracy, demographic parity, and equalized odds (Hardt et al., 2016).
+
+Definition 9 (Balanced Accuracy). Balanced accuracy captures model utility in terms of both accuracy and fairness. It is defined as the average of recall scores obtained on all classes.
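
As a sketch, balanced accuracy can be computed directly from per-class recalls:

```python
def balanced_accuracy(y_true, y_pred):
    """Definition 9: the unweighted mean of per-class recalls."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)
```

A classifier that always predicts the majority class of a skewed binary dataset can score high plain accuracy yet only 0.5 balanced accuracy, which is why balanced accuracy is the more informative metric under class imbalance.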
+
+Among the criteria that aim to alleviate discrimination with respect to certain protected attributes, equalized odds and equal opportunity (Hardt et al., 2016) are of particular research interest.
+
+Definition 10 (Equalized Odds). A machine learning model is said to guarantee equalized odds with respect to protected attribute $A$ and ground truth label $Y$ if its prediction $\hat{Y}$ and $A$ are conditionally independent given $Y$ . In the case of binary random variables $A, Y, \hat{Y}$ , this is equivalent to:
+
+$$
+\Pr\left[\hat{Y} = 1 \mid A = 0, Y = y\right] = \Pr\left[\hat{Y} = 1 \mid A = 1, Y = y\right], \quad y \in \{0, 1\} \tag{10}
+$$
+
+To put it another way, equalized odds requires the model to have equal true positive rates and equal false positive rates across the two demographic groups $A = 0$ and $A = 1$ .
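
Equalized odds can thus be audited by comparing per-group true- and false-positive rates; a minimal sketch with hypothetical helper names:

```python
def group_rates(y_true, y_pred, a, group):
    """True- and false-positive rates within one demographic group."""
    pos = [i for i, (y, g) in enumerate(zip(y_true, a)) if g == group and y == 1]
    neg = [i for i, (y, g) in enumerate(zip(y_true, a)) if g == group and y == 0]
    tpr = sum(y_pred[i] == 1 for i in pos) / len(pos)
    fpr = sum(y_pred[i] == 1 for i in neg) / len(neg)
    return tpr, fpr

def equalized_odds_gap(y_true, y_pred, a):
    """Largest violation of Eq. (10) across the conditions Y=1 and Y=0;
    zero iff equalized odds holds exactly on this sample."""
    tpr0, fpr0 = group_rates(y_true, y_pred, a, 0)
    tpr1, fpr1 = group_rates(y_true, y_pred, a, 1)
    return max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))
```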
+
+Definition 11 (Equal Opportunity). Equal opportunity is a relaxation of equalized odds that requires non-discrimination only within a specific outcome group, often referred to as the advantaged group. Using previous notations, the binary case with advantaged group $Y = 1$ is equivalent to:
+
+$$
+\Pr\left[\hat{Y} = 1 \mid A = 0, Y = 1\right] = \Pr\left[\hat{Y} = 1 \mid A = 1, Y = 1\right] \tag{11}
+$$
+
+# E PROOF OF CONFIDENTIALITY
+
+Here we prove that our protocol described in the main body reveals nothing except the final noised result to $\mathcal{P}_{i_*}$ . It can be proven in the standard real-world/ideal-world paradigm, where the ideal functionality takes inputs from all parties and sends the final result to $\mathcal{P}_{i_*}$ . We use $\mathcal{A}$ to denote the set of corrupted parties. Below, we describe the simulator (namely $\mathcal{S}$ ). The simulator strategy depends on whether $i_*$ is corrupted.
+
+If $i_{*} \in \mathcal{A}$ , our simulator works as follows:
+
+1.a) The simulator simulates what honest parties would do.
+1.b) For each $i \notin \mathcal{A}$ , $\mathcal{S}$ sends a fresh encryption of a random $\boldsymbol{r}_i$ to $\mathcal{P}_{i_*}$ .
+1.c) For each $i \notin \mathcal{A}$ , $\mathcal{S}$ sends a random $s_i$ to $\mathcal{P}_{i_*}$ on behalf of the 2PC functionality between $\mathcal{P}_i$ and $\mathcal{P}_{i_*}$ .
+2-3) $\mathcal{S}$ sends the output of the whole computation to $\mathcal{P}_{i_*}$ on behalf of the 2PC functionality between the PG and $\mathcal{P}_{i_*}$ .
+
+If $i_* \notin \mathcal{A}$ , our simulator works as follows:
+
+1.a) For each $i \in \mathcal{A}$ , $\mathcal{S}$ computes a fresh encryption of zero and sends it to $\mathcal{P}_i$ on behalf of $\mathcal{P}_{i_*}$ .
+1.b) The simulator simulates what honest parties would do.
+1.c) For each $i \in \mathcal{A}$ , $\mathcal{S}$ sends random $\hat{s}_i$ to $\mathcal{P}_i$ on behalf of the 2PC functionality between $\mathcal{P}_i$ and $\mathcal{P}_{i_*}$ .
+2-3 The simulator simulates what honest parties would do.
+
+Assuming that the underlying encryption scheme is CPA-secure and that the 2PC protocols used in steps 1, 2, and 3 are secure with respect to standard definitions (i.e., they reveal nothing beyond their outputs), our simulation is perfect.
+
+# F DETAILS ON EXPERIMENTAL SETUP
+
+# F.1 MNIST AND FASHION-MNIST
+
+We use the same setup as for the CIFAR10 and SVHN datasets, with the following adjustments. We select $K = 250$ as the default number of parties. For the imbalanced classes, we select classes 1 and 2 for MNIST and the classes Trouser and Pullover for Fashion-MNIST. We use Gaussian noise with $\sigma = 40$ (as for SVHN). We are left with 1,000 evaluation data points from the test set (as for CIFAR10). We fix the default value of $\epsilon = 2.35$ for MNIST and $\epsilon = 3.89$ for Fashion-MNIST. We use a variant of the LeNet architecture.
+
+# F.2 DETAILS ON ARCHITECTURES
+
+To train the private models on subsets of datasets, we downsize the standard architectures, such as VGG-16 or ResNet-18. Below is the detailed list of layers in each of the architectures used (generated using torchsummary). The diagram for ResNet-10 also includes skip connections and convolutional layers for adjusting the sizes of feature maps.
+
+VGG-7 for SVHN:
+
+| Layer type | Output Shape | Param # |
| --- | --- | --- |
| Conv2d-1 | [-1, 64, 32, 32] | 1,728 |
| BatchNorm2d-2 | [-1, 64, 32, 32] | 128 |
| ReLU-3 | [-1, 64, 32, 32] | 0 |
| MaxPool2d-4 | [-1, 64, 16, 16] | 0 |
| Conv2d-5 | [-1, 128, 16, 16] | 73,728 |
| BatchNorm2d-6 | [-1, 128, 16, 16] | 256 |
| ReLU-7 | [-1, 128, 16, 16] | 0 |
| MaxPool2d-8 | [-1, 128, 8, 8] | 0 |
| Conv2d-9 | [-1, 256, 8, 8] | 294,912 |
| BatchNorm2d-10 | [-1, 256, 8, 8] | 512 |
| ReLU-11 | [-1, 256, 8, 8] | 0 |
| Conv2d-12 | [-1, 256, 8, 8] | 589,824 |
| BatchNorm2d-13 | [-1, 256, 8, 8] | 512 |
| ReLU-14 | [-1, 256, 8, 8] | 0 |
| MaxPool2d-15 | [-1, 256, 4, 4] | 0 |
| Conv2d-16 | [-1, 512, 4, 4] | 1,179,648 |
| BatchNorm2d-17 | [-1, 512, 4, 4] | 1,024 |
| ReLU-18 | [-1, 512, 4, 4] | 0 |
| Conv2d-19 | [-1, 512, 4, 4] | 2,359,296 |
| BatchNorm2d-20 | [-1, 512, 4, 4] | 1,024 |
| ReLU-21 | [-1, 512, 4, 4] | 0 |
| Linear-22 | [-1, 10] | 5,130 |
+
+Total params: 4,507,722
+Params size MB: 17.20
+
+ResNet-10:
+
+| Layer type | Output Shape | Param # |
| --- | --- | --- |
| Conv2d-1 | [-1, 64, 32, 32] | 1,728 |
| BatchNorm2d-2 | [-1, 64, 32, 32] | 128 |
| Conv2d-3 | [-1, 64, 32, 32] | 36,864 |
| BatchNorm2d-4 | [-1, 64, 32, 32] | 128 |
| Conv2d-5 | [-1, 64, 32, 32] | 36,864 |
| BatchNorm2d-6 | [-1, 64, 32, 32] | 128 |
| BasicBlock-7 | [-1, 64, 32, 32] | 0 |
| Conv2d-8 | [-1, 128, 16, 16] | 73,728 |
| BatchNorm2d-9 | [-1, 128, 16, 16] | 256 |
| Conv2d-10 | [-1, 128, 16, 16] | 147,456 |
| BatchNorm2d-11 | [-1, 128, 16, 16] | 256 |
| Conv2d-12 | [-1, 128, 16, 16] | 8,192 |
| BatchNorm2d-13 | [-1, 128, 16, 16] | 256 |
| BasicBlock-14 | [-1, 128, 16, 16] | 0 |
| Conv2d-15 | [-1, 256, 8, 8] | 294,912 |
| BatchNorm2d-16 | [-1, 256, 8, 8] | 512 |
| Conv2d-17 | [-1, 256, 8, 8] | 589,824 |
| BatchNorm2d-18 | [-1, 256, 8, 8] | 512 |
| Conv2d-19 | [-1, 256, 8, 8] | 32,768 |
| BatchNorm2d-20 | [-1, 256, 8, 8] | 512 |
| BasicBlock-21 | [-1, 256, 8, 8] | 0 |
| Conv2d-22 | [-1, 512, 4, 4] | 1,179,648 |
| BatchNorm2d-23 | [-1, 512, 4, 4] | 1,024 |
+
+| Conv2d-24 | [-1, 512, 4, 4] | 2,359,296 |
| BatchNorm2d-25 | [-1, 512, 4, 4] | 1,024 |
| Conv2d-26 | [-1, 512, 4, 4] | 131,072 |
| BatchNorm2d-27 | [-1, 512, 4, 4] | 1,024 |
| BasicBlock-28 | [-1, 512, 4, 4] | 0 |
| Linear-29 | [-1, 10] | 5,130 |

Total params: 4,903,242
Params size MB: 18.70
+
+LeNet style architecture for MNIST:
+
| Layer type | Output Shape | Param # |
| --- | --- | --- |
| Conv2d-1 | [-1, 20, 24, 24] | 520 |
| MaxPool2d-2 | [-1, 20, 12, 12] | 0 |
| Conv2d-3 | [-1, 50, 8, 8] | 25,050 |
| MaxPool2d-4 | [-1, 50, 4, 4] | 0 |
| Linear-5 | [-1, 500] | 400,500 |
| ReLU-6 | [-1, 500] | 0 |
| Linear-7 | [-1, 10] | 5,010 |

Total params: 431,080
Trainable params: 431,080
Non-trainable params: 0
Input size MB: 0.00
Forward/backward pass size MB: 0.12
Params size MB: 1.64
Estimated Total Size MB: 1.76
+
+# G ADDITIONAL EXPERIMENTS AND FIGURES
+
+
+Figure 5: Using CaPC to improve each party's model performance on the CIFAR10 dataset. We observe that each separate querying party (QP) sees a per-class and overall accuracy bonus using CaPC.
+
+
+
+
+
+
+Figure 6: Using CaPC to improve each party's model performance on the SVHN dataset. We observe that all querying parties (QPs) see a net increase overall, with nearly every class seeing improved performance.
+
+
+
+
+
+
+SVHN, Model: VGG-7
+
+
+SVHN, Model: ResNet-8
+
+
+SVHN, Model: ResNet-10
+
+
+Figure 7: Using CaPC to improve each party's heterogeneous model performance on the SVHN dataset. Each querying party adopts a different model architecture (1 of 3) and $\frac{1}{3}$ of all answering parties adopt each model architecture. All model architectures see benefits from using CaPC.
+MNIST, Homogeneous
+
+
+Fashion-MNIST, Homogeneous Classes
+Figure 8: Using CaPC to improve model performance on balanced MNIST and Fashion-MNIST. Dashed lines represent mean accuracy. We observe a mean increase of $4.5\%$ for MNIST $(\epsilon = 2.35)$ and $2.9\%$ for Fashion-MNIST $(\epsilon = 3.89)$ .
+
+| Method | Forward Pass (Step 1a) |
| --- | --- |
| CPU, P = 8192 | 14.22 ± 0.11 |
| CPU, P = 16384 | 29.46 ± 2.34 |
| CPU, P = 32768 | 57.26 ± 0.39 |
| GPU, no encryption | 3.15 ± 0.22 |
| CPU, no encryption | 0.152 ± 0.0082 |
+
+| QP-AP (Steps 1b and 1c) | QP-PG (Steps 2 and 3) |
| --- | --- |
| 0.12 ± 0.0058 | 0.030 ± 0.0045 |
+
+Table 1: Wall-clock time (sec) of various encryption methods with a batch size of 1. We vary the modulus range, $P$ , which denotes the maximum value of a given plaintext number. Note that the GPU is slower than the CPU because of the mini-batch with a single data item and the data-transfer overhead to and from the GPU. We use the CryptoNet-ReLU model provided by HE-transformer (Boemer, 2020).
+
+
+CIFAR10, Homogeneous
+
+
+SVHN, Homogeneous
+Figure 9: Using active learning to improve CaPC fairness. We observe that underrepresented classes are sampled more frequently than in a random strategy.
+
+
+Figure 10: Using CaPC with active learning to improve balanced accuracy under non-uniform data distribution. Dashed lines are balanced accuracy (BA). We observe that all sampling strategies significantly improve BA and that the best active learning scheme improves BA by a total of 10.10 percentage points (an additional 0.45 percentage points over random sampling) on MNIST (left) and a total of 10.94 percentage points (an additional 2.48) on Fashion-MNIST (right).
+
+
+
+
+
+
+
+
+Figure 11: Tuning the amount of noise $(\sigma)$ in CaPC. We tune the amount of Gaussian noise injected in the noisy argmax mechanism by varying its standard deviation. We choose the highest noise level that does not significantly impact model accuracy: $\sigma = 7$ for CIFAR10 and $\sigma = 40$ for SVHN, MNIST, and Fashion-MNIST, allowing minimal privacy budget expenditure while maximizing utility. We train 50 models for CIFAR10 and 250 models for SVHN, MNIST, and Fashion-MNIST.
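
For reference, the noisy-argmax aggregation that this $\sigma$ parameterizes can be sketched as follows; this is a simplified illustration of noising plain vote counts, not the full confidential CaPC protocol:

```python
import random

def noisy_argmax(vote_counts, sigma, rng=random):
    """Return the index of the largest vote count after adding
    independent Gaussian noise N(0, sigma^2) to each count."""
    noised = [v + rng.gauss(0.0, sigma) for v in vote_counts]
    return max(range(len(noised)), key=noised.__getitem__)
```

When consensus among the answering parties is strong (one count far exceeds the rest), even large $\sigma$ rarely flips the result, which is what allows a high noise level at little accuracy cost.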
+
+
+
+
+
+| Number of parties | 150 | 200 | 250 | 300 | 400 |
| --- | --- | --- | --- | --- | --- |
| Accuracy gain (%) | 4.11 | 3.33 | 4.50 | 4.69 | 8.39 |
| Best $\varepsilon$ | 4.50 | 2.50 | 2.35 | 2.00 | 1.63 |
+
+Figure 12: Accuracy gain for balanced MNIST using CaPC versus number of parties and privacy budget, $\varepsilon$ . With more parties, we can achieve a higher accuracy gain at a smaller bound on $\varepsilon$ .
+
+
+MP2ML (HE-transformer for nGraph).
+
+
+CaPC (built on top of the MP2ML framework).
+Figure 13: Measuring the CPU, network (NET), and memory (MEM) usage over time for CaPC. We use the CryptoNet-ReLU model provided by HE-transformer (Boemer, 2020) and sar (Godard, 2020) (System Activity Report) to perform this micro-analysis. We label the steps according to the CaPC protocol shown in Figure 1. The network usage reaches its peak during the execution of ReLU and then MaxPool, where the intermediate feature maps have to be exchanged between the querying and answering parties for computation via garbled circuits.
\ No newline at end of file
diff --git a/capclearningconfidentialandprivatecollaborativelearning/images.zip b/capclearningconfidentialandprivatecollaborativelearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d7b815677d17474ec24f4380452f3f4dc52f8db3
--- /dev/null
+++ b/capclearningconfidentialandprivatecollaborativelearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:104060143d1d3da131dc1aca6a534604c0435f85a5d142721c0bac38438e68e8
+size 1260781
diff --git a/capclearningconfidentialandprivatecollaborativelearning/layout.json b/capclearningconfidentialandprivatecollaborativelearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a935f16ec48359b735f9c726dee349d3bfa468d7
--- /dev/null
+++ b/capclearningconfidentialandprivatecollaborativelearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0b133d38f5c6741a7be919ec1e111246e9e1e5ae1e9a61daff328ce970344ba
+size 759531
diff --git a/capturinglabelcharacteristicsinvaes/e6a34173-a3b7-4e70-8036-e062ce8ee2c0_content_list.json b/capturinglabelcharacteristicsinvaes/e6a34173-a3b7-4e70-8036-e062ce8ee2c0_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ad140a4f3d2bf10afd1eb7237f7b1b24fe626292
--- /dev/null
+++ b/capturinglabelcharacteristicsinvaes/e6a34173-a3b7-4e70-8036-e062ce8ee2c0_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:134892fe9c7df5e7b782746393ec9e78ed715254a106f16673b3819815f22067
+size 144359
diff --git a/capturinglabelcharacteristicsinvaes/e6a34173-a3b7-4e70-8036-e062ce8ee2c0_model.json b/capturinglabelcharacteristicsinvaes/e6a34173-a3b7-4e70-8036-e062ce8ee2c0_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..9d9749a242d3b3d25db92830a634f58f800b9983
--- /dev/null
+++ b/capturinglabelcharacteristicsinvaes/e6a34173-a3b7-4e70-8036-e062ce8ee2c0_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e158dcc818f4f701ac0e810f3b7a5e66117730d2bd05d73ee95193f32988db3
+size 168553
diff --git a/capturinglabelcharacteristicsinvaes/e6a34173-a3b7-4e70-8036-e062ce8ee2c0_origin.pdf b/capturinglabelcharacteristicsinvaes/e6a34173-a3b7-4e70-8036-e062ce8ee2c0_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..754d1027fff4a59bd92db4a8890942101fe74324
--- /dev/null
+++ b/capturinglabelcharacteristicsinvaes/e6a34173-a3b7-4e70-8036-e062ce8ee2c0_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99394166cb5ac018d38c5ef41d150057b3c5326534fc40f519e54b2118a6e2fd
+size 26215570
diff --git a/capturinglabelcharacteristicsinvaes/full.md b/capturinglabelcharacteristicsinvaes/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..189903db71f638b975d28e5e9b9d40b2670e1ff9
--- /dev/null
+++ b/capturinglabelcharacteristicsinvaes/full.md
@@ -0,0 +1,545 @@
+# CAPTURING LABEL CHARACTERISTICS IN VAEs
+
+Tom Joy1, Sebastian M. Schmon*1,2, Philip H. S. Torr1, N. Siddharth†1,3 & Tom Rainforth†1
+
+1University of Oxford
+2Improbable
+3University of Edinburgh & The Alan Turing Institute
+
+tomjoy@robots.ox.ac.uk
+
+# ABSTRACT
+
+We present a principled approach to incorporating labels in variational autoencoders (VAEs) that captures the rich characteristic information associated with those labels. While prior work has typically conflated these by learning latent variables that directly correspond to label values, we argue this is contrary to the intended effect of supervision in VAEs—capturing rich label characteristics with the latents. For example, we may want to capture the characteristics of a face that make it look young, rather than just the age of the person. To this end, we develop the characteristic capturing VAE (CCVAE), a novel VAE model and concomitant variational objective which captures label characteristics explicitly in the latent space, eschewing direct correspondences between label values and latents. Through judicious structuring of mappings between such characteristic latents and labels, we show that the CCVAE can effectively learn meaningful representations of the characteristics of interest across a variety of supervision schemes. In particular, we show that the CCVAE allows for more effective and more general interventions to be performed, such as smooth traversals within the characteristics for a given label, diverse conditional generation, and transferring characteristics across datapoints.
+
+# 1 INTRODUCTION
+
+Learning the characteristic factors of perceptual observations has long been desired for effective machine intelligence (Brooks, 1991; Bengio et al., 2013; Hinton & Salakhutdinov, 2006; Tenenbaum, 1998). In particular, the ability to learn meaningful factors—capturing human-understandable characteristics from data—has been of interest from the perspective of human-like learning (Tenenbaum & Freeman, 2000; Lake et al., 2015) and improving decision making and generalization across tasks (Bengio et al., 2013; Tenenbaum & Freeman, 2000).
+
+At its heart, learning meaningful representations of data allows one to not only make predictions, but critically also to manipulate factors of a datapoint. For example, we might want to manipulate the age of a person in an image. Such manipulations allow for the expression of causal effects between the meaning of factors and their corresponding realizations in the data. They can be categorized into conditional generation—the ability to construct whole exemplar data instances with characteristics dictated by constraining relevant factors—and intervention—the ability to manipulate just particular factors for a given data point, and subsequently affect only the associated characteristics.
+
+A particularly flexible framework within which to explore the learning of meaningful representations is that of variational autoencoders (VAEs), a class of deep generative models where representations of data are captured in the underlying latent variables. A variety of methods have been proposed for inducing meaningful factors in this framework (Kim & Mnih, 2018; Mathieu et al., 2019; Mao et al., 2019; Kingma et al., 2014; Siddharth et al., 2017; Vedantam et al., 2018), and it has been argued that the most effective generally exploit available labels to (partially) supervise the training process (Locatello et al., 2019). Such approaches aim to associate certain factors of the representation (or equivalently factors of the generative model) with the labels, such that the former encapsulate the latter—providing a mechanism for manipulation via targeted adjustments of relevant factors.
+
+Prior approaches have looked to achieve this by directly associating certain latent variables with labels (Kingma et al., 2014; Siddharth et al., 2017; Maaloe et al., 2016). Originally motivated by the desiderata of semi-supervised classification, each label is given a corresponding latent variable of the same type (e.g. categorical), whose value is fixed to that of the label when the label is observed and imputed by the encoder when it is not.
+
+Though natural, we argue that this assumption is not just unnecessary but actively harmful from a representation-learning perspective, particularly in the context of performing manipulations. To allow manipulations, we want to learn latent factors that capture the characteristic information associated with a label, which is typically much richer than just the label value itself. For example, there are
+
+various visual characteristics of people's faces associated with the label "young," but simply knowing the label is insufficient to reconstruct these characteristics for any particular instance. Learning a meaningful representation that captures these characteristics, and isolates them from others, requires encoding more than just the label value itself, as illustrated in Figure 1.
+
+The key idea of our work is to use labels to help capture and isolate this related characteristic information in a VAE's representation. We do this by exploiting the interplay between the labels and inputs to capture more information than the labels alone convey; information that will be lost (or at least entangled) if we directly encode the label itself. Specifically, we introduce the characteristic capturing VAE (CCVAE) framework, which employs a novel VAE formulation which captures label characteristics explicitly in the latent space. For each label, we introduce a set of characteristic latents that are induced into
+
+
+Figure 1: Manipulating label characteristics for "hair color" and "smile".
+
+capturing the characteristic information associated with that label. By coupling this with a principled variational objective and carefully structuring the characteristic-latent and label variables, we show that CCVAEs successfully capture meaningful representations, enabling better performance on manipulation tasks, while matching previous approaches for prediction tasks. In particular, they permit certain manipulation tasks that cannot be performed with conventional approaches, such as manipulating characteristics without changing the labels themselves and producing multiple distinct samples consistent with the desired intervention. We summarize our contributions as follows:
+
+i) showing how labels can be used to capture and isolate rich characteristic information;
+ii) formulating CCVAEs, a novel model class and objective for supervised and semi-supervised learning in VAEs that allows this information to be captured effectively;
+iii) demonstrating CCVAEs' ability to successfully learn meaningful representations in practice.
+
+# 2 BACKGROUND
+
+VAEs (Kingma & Welling, 2013; Rezende et al., 2014) are a powerful and flexible class of model that combine the unsupervised representation-learning capabilities of deep autoencoders (Hinton & Zemel, 1994) with generative latent-variable models—a popular tool to capture factored low-dimensional representations of higher-dimensional observations. In contrast to deep autoencoders, generative models capture representations of data not as distinct values corresponding to observations, but rather as distributions of values. A generative model defines a joint distribution over observed data $\mathbf{x}$ and latent variables $\mathbf{z}$ as $p_{\theta}(\mathbf{x}, \mathbf{z}) = p(\mathbf{z})p_{\theta}(\mathbf{x} \mid \mathbf{z})$ . Given a model, learning representations of data can be viewed as performing inference—learning the posterior distribution $p_{\theta}(\mathbf{z} \mid \mathbf{x})$ that constructs the distribution of latent values for a given observation.
+
+VAEs employ amortized variational inference (VI) (Wainwright & Jordan, 2008; Kingma & Welling, 2013) using the encoder and decoder of an autoencoder to transform this setup by i) taking the model likelihood $p_{\theta}(\boldsymbol{x} \mid \boldsymbol{z})$ to be parameterized by a neural network using the decoder, and ii) constructing an amortized variational approximation $q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})$ to the (intractable) posterior $p_{\theta}(\boldsymbol{z} \mid \boldsymbol{x})$ using the encoder. The variational approximation of the posterior enables effective estimation of the objective—maximizing the marginal likelihood—through importance sampling. The objective is obtained through invoking Jensen's inequality to derive the evidence lower bound (ELBO) of the
+
+model which is given as:
+
+$$
+\log p_{\theta}(\boldsymbol{x}) = \log \mathbb{E}_{q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})}\left[\frac{p_{\theta}(\boldsymbol{z}, \boldsymbol{x})}{q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})}\right] \geq \mathbb{E}_{q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})}\left[\log \frac{p_{\theta}(\boldsymbol{z}, \boldsymbol{x})}{q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})}\right] \equiv \mathcal{L}(\boldsymbol{x}; \phi, \theta). \tag{1}
+$$
+
+Given observations $\mathcal{D} = \{\pmb{x}_1, \dots, \pmb{x}_N\}$ , taken to be realizations of random variables generated from an unknown distribution $p_{\mathcal{D}}(\pmb{x})$ , the overall objective is $\frac{1}{N} \sum_{n} \mathcal{L}(\pmb{x}_n; \theta, \phi)$ . Hierarchical VAEs (Sønderby et al., 2016) impose a hierarchy of latent variables, improving the flexibility of the approximate posterior; however, we do not consider such models in this work.
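
As a concrete illustration of the bound in Eq. (1), consider a toy conjugate model with $p(z) = \mathcal{N}(0, 1)$ and $p(x \mid z) = \mathcal{N}(z, 1)$ (chosen here purely for illustration, not a model used in this paper); a Monte Carlo estimator of the ELBO is then a few lines:

```python
import math, random

def log_normal(x, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def elbo(x, q_mu, q_var, n_samples=10, rng=random.Random(0)):
    """Monte Carlo estimate of Eq. (1), E_q[log p(z, x) - log q(z|x)],
    for the toy model p(z) = N(0,1), p(x|z) = N(z,1) with a Gaussian
    approximate posterior q(z|x) = N(q_mu, q_var)."""
    acc = 0.0
    for _ in range(n_samples):
        z = rng.gauss(q_mu, math.sqrt(q_var))
        acc += log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0) - log_normal(z, q_mu, q_var)
    return acc / n_samples
```

When $q$ is set to the exact posterior $\mathcal{N}(x/2, 1/2)$, the bound is tight: every sample evaluates to exactly $\log p(x) = \log \mathcal{N}(x; 0, 2)$, since the importance ratio inside the expectation is then constant.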
+
Semi-supervised VAEs (SSVAEs) (Kingma et al., 2014; Maaløe et al., 2016; Siddharth et al., 2017) consider the setting where a subset of data $S \subset \mathcal{D}$ is assumed to also have corresponding labels $\pmb{y}$ . Denoting the (unlabeled) data as $\mathcal{U} = \mathcal{D} \backslash S$ , the log-marginal likelihood is decomposed as
+
+$$
+\log p \left(\mathcal {D}\right) = \sum_ {\left(\boldsymbol {x}, \boldsymbol {y}\right) \in \mathcal {S}} \log p _ {\theta} (\boldsymbol {x}, \boldsymbol {y}) + \sum_ {\boldsymbol {x} \in \mathcal {U}} \log p _ {\theta} (\boldsymbol {x}),
+$$
+
where the individual log-likelihoods are lower bounded by their ELBOs. Standard practice is then to treat $y$ as a latent variable to marginalize over whenever the label is not provided. More specifically, most approaches consider splitting the latent space into $z = \{z_y, z_{\backslash y}\}$ and then directly fix $z_y = y$ whenever the label is provided, such that each dimension of $z_y$ explicitly represents a predicted value of a label, with this value known exactly only for the labeled datapoints. Much of the original motivation for this (Kingma et al., 2014) was based around performing semi-supervised classification of the labels, with the encoder being used to impute the values of $z_y$ for the unlabeled datapoints. However, the framework is also regularly used as a basis for learning meaningful representations and performing manipulations, exploiting the presence of the decoder to generate new datapoints after intervening on the labels via changes to $z_y$ . Our focus lies on the latter, for which we show this standard formulation leads to serious pathologies. Our primary goal is not to improve the fidelity of generations, but instead to demonstrate how label information can be used to structure the latent space such that it encapsulates and disentangles the characteristics associated with the labels.
+
+# 3 RETHINKING SUPERVISION
+
+As we explained in the last section, the de facto assumption for most approaches to supervision in VAEs is that the labels correspond to a partially observed augmentation of the latent space, $z_{y}$ . However, this can cause a number of issues if we want the latent space to encapsulate not just the labels themselves, but also the characteristics associated with these labels. For example, encapsulating the youthful characteristics of a face, not just the fact that it is a "young" face. At an abstract level, such an approach fails to capture the relationship between the inputs and labels: it fails to isolate characteristic information associated with each label from the other information required to reconstruct data. More specifically, it fails to deal with the following issues.
+
Firstly, the information in a datapoint associated with a label is richer than that stored by the (typically categorical) label itself. That is not to say such information is absent when we impose $z_{y} = y$ , but here it is entangled with the other latent variables $z_{\backslash y}$ , which simultaneously contain the associated information for all the labels. Moreover, when $y$ is categorical, it can be difficult to ensure that the VAE actually uses $z_{y}$ , rather than just capturing information relevant to reconstruction in the higher-capacity, continuous $z_{\backslash y}$ . Overcoming this is challenging and generally requires additional heuristics and hyperparameters.
+
Secondly, we may wish to manipulate characteristics without fully changing the categorical label itself. For example, making a CelebA image depict more or less 'smiling' without fully changing its "smile" label. Here we do not know how to manipulate the latents to achieve this desired effect: we can only do the binary operation of changing the relevant variable in $z_{y}$ . Also, we often wish to keep a level of diversity when carrying out conditional generation and, in particular, interventions. For example, if we want to add a smile, there is no single correct answer for how the smile would look, but taking $z_{y} =$ "smile" only allows for a single point estimate for the change.
+
+Finally, taking the labels to be explicit latent variables can cause a mismatch between the VAE prior $p(z)$ and the pushforward distribution of the data to the latent space $q(z) = \mathbb{E}_{p_{\mathcal{D}}(x)}[q_{\phi}(z \mid x)]$ . During training, latents are effectively generated according to $q(z)$ , but once learned, $p(z)$ is used to make generations; variations between the two effectively corresponds to a train-test mismatch. As there is a ground truth data distribution over the labels (which are typically not independent), taking the latents as the labels themselves implies that there will be a ground truth $q(\boldsymbol{z}_y)$ . However, as this is not generally known a priori, we will inevitably end up with a mismatch.
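This mismatch can be made concrete with a small numerical example; the two-point dataset and the encoder $q(z \mid x) = \mathcal{N}(x, 0.25)$ are purely illustrative assumptions. The pushforward $q(z)$ is then a bimodal mixture while the prior is $\mathcal{N}(0, 1)$, and grid integration shows the KL divergence between them is far from zero.

```python
import math

def normal_pdf(v, mean, var):
    return math.exp(-(v - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def q(z):
    # Aggregate posterior: equal-weight mixture over a two-point dataset
    # x in {-2, +2}, with an assumed encoder q(z|x) = N(x, 0.25).
    return 0.5 * normal_pdf(z, -2.0, 0.25) + 0.5 * normal_pdf(z, 2.0, 0.25)

def p(z):
    # The prior used at generation time.
    return normal_pdf(z, 0.0, 1.0)

# KL(q(z) || p(z)) by grid integration over [-10, 10).
dz = 0.001
kl = sum(q(z0) * math.log(q(z0) / p(z0)) * dz
         for z0 in (i * dz - 10.0 for i in range(20_000)))
assert kl > 1.0  # samples from p(z) fall between the modes of q(z)
```

Generations drawn from $p(z)$ in this toy would mostly land near $z = 0$, a region the decoder never saw during training: exactly the train-test mismatch described above.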
+
+What do we want from supervision? Given these issues, it is natural to ask whether having latents directly correspond to labels is actually necessary. To answer this, we need to think about exactly what it is we are hoping to achieve through the supervision itself. Along with uses of VAEs more generally, the three most prevalent tasks are: a) Classification, predicting the labels of inputs where these are not known a priori; b) Conditional Generation, generating new examples conditioned on those examples conforming to certain desired labels; and c) Intervention, manipulating certain desired characteristics of a data point before reconstructing it.
+
Inspecting these tasks, we see that for classification we need a classifier from $z$ to $y$ , for conditional generation we need a mechanism for sampling $z$ given $y$ , and for interventions we need to know how to manipulate $z$ to bring about a desired change. None of these require us to have the labels directly correspond to latent variables. Moreover, as we previously explained, this assumption can be actively harmful, such as restricting the range of interventions that can be performed.
+
+# 4 CHARACTERISTIC CAPTURING VARIATIONAL AUTOENCODERS
+
+To correct the issues discussed in the last section, we suggest eschewing the treatment of labels as direct components of the latent space and instead employ them to condition latent variables which are designed to capture the characteristics. To this end, we similarly split the latent space into two components, $z = \{z_{c}, z_{\backslash c}\}$ , but where $z_{c}$ , the characteristic latent, is now designed to capture the characteristics associated with labels, rather than directly encode the labels themselves. In this breakdown, $z_{\backslash c}$ is intended only to capture information not directly associated with any of the labels, unlike $z_{\backslash y}$ which was still tasked with capturing the characteristic information.
+
For the purposes of exposition, and purely to demonstrate how one might apply this schema, we first consider a standard VAE with a latent space $z = \{z_{c}, z_{\backslash c}\}$ . The latent representation of the VAE will implicitly contain the characteristic information required to perform classification; however, the structure of the latent space will be arranged to optimize for reconstruction, and characteristic information may be entangled between $z_{c}$ and $z_{\backslash c}$ . If we now jointly learn a classifier from $z_{c}$ to $y$ along with the VAE, we obtain the following objective:
+
+$$
+\mathcal {J} = \sum_ {\boldsymbol {x} \in \mathcal {U}} \mathcal {L} _ {\mathrm {V A E}} (\boldsymbol {x}) + \sum_ {(\boldsymbol {x}, \boldsymbol {y}) \in \mathcal {S}} \left(\mathcal {L} _ {\mathrm {V A E}} (\boldsymbol {x}) + \alpha \mathbb {E} _ {q _ {\phi} (\boldsymbol {z} | \boldsymbol {x})} \left[ \log q _ {\varphi} (\boldsymbol {y} \mid \boldsymbol {z} _ {c}) \right]\right), \tag {2}
+$$
+
where $\alpha$ is a hyperparameter, there will be pressure on the encoder to place characteristic information in $z_{c}$ , which can be interpreted as a stochastic layer containing the information needed for classification and reconstruction. The classifier thus acts as a tool allowing $\mathbf{y}$ to influence the structure of $\mathbf{z}$ ; it is this high-level concept, i.e. using $\mathbf{y}$ to structure $\mathbf{z}$ , that we utilize in this work.
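The shape of objective (2) can be sketched in a few lines. Everything below is a hypothetical stand-in (a crude "encoder" that centres $q(z \mid x)$ on the data, a reconstruction-only placeholder for $\mathcal{L}_{\mathrm{VAE}}$, and a logistic classifier on the first latent dimension playing the role of $z_c$); the point is only the structure: one ELBO term per unlabeled point, and an ELBO plus an $\alpha$-weighted classifier log-likelihood per labeled point.

```python
import math
import random

rng = random.Random(0)

def sample_q_z_given_x(x):
    # Crude stand-in for an encoder: q(z|x) centred on the data itself.
    return [rng.gauss(xi, 0.1) for xi in x]

def elbo_vae(x, z):
    # Reconstruction-only stand-in for the full ELBO, for illustration.
    return -sum((xi - zi) ** 2 for xi, zi in zip(x, z))

def log_q_y_given_zc(y, z_c):
    # Hypothetical logistic classifier q(y=1 | z_c) on the scalar latent z_c.
    p1 = 1.0 / (1.0 + math.exp(-4.0 * z_c))
    return math.log(p1 if y == 1 else 1.0 - p1)

def objective(unlabeled, labeled, alpha=10.0):
    total = 0.0
    for x in unlabeled:                 # first sum in (2)
        total += elbo_vae(x, sample_q_z_given_x(x))
    for x, y in labeled:                # second sum in (2)
        z = sample_q_z_given_x(x)       # z ~ q(z|x); z[0] plays the role of z_c
        total += elbo_vae(x, z) + alpha * log_q_y_given_zc(y, z[0])
    return total
```

Because the classifier only ever sees `z[0]`, maximizing this objective pushes the encoder to route label-relevant information into that coordinate, which is the pressure described in the text.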
+
+However, in general, the characteristics of different labels will be entangled within $z_{c}$ . Though it will contain the required information, the latents will typically be uninterpretable, and it is unclear how we could perform conditional generation or interventions. To disentangle the characteristics of different labels, we further partition the latent space, such that the classification of particular labels $y^{i}$ only has access to particular latents $z_{c}^{i}$ and thus $\log q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c}) = \sum_{i} \log q_{\varphi^{i}}(y^{i} \mid z_{c}^{i})$ . This has the critical effect of forcing the characteristic information needed to classify $y^{i}$ to be stored only in the corresponding $z_{c}^{i}$ , providing a means to encapsulate such information for each label separately. We further see that it addresses many of the prior issues: there are no measure-theoretic issues as $z_{c}^{i}$ is not discrete, diversity in interventions is achieved by sampling different $z_{c}^{i}$ for a given label, $z_{c}^{i}$ can be manipulated while remaining within class decision boundaries, and a mismatch between $p(z_{c})$ and $q(z_{c})$ does not manifest as there is no ground truth for $q(z_{c})$ .
+
It is not immediately obvious how to conditionally generate or intervene when training with (2). However, the classifier implicitly contains the requisite information to do so via inference in an implied Bayesian model. For example, conditional generation needs samples from $p(\mathbf{z}_c)$ that classify to the desired labels, which can be obtained, e.g., through rejection sampling. See Appendix A for further details.
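For instance, rejection sampling for conditional generation might look as follows; the prior $p(z_c) = \mathcal{N}(0, 1)$ and the classifier $q(y = 1 \mid z_c) = \sigma(4 z_c)$ are illustrative assumptions. A label is sampled from the classifier at each candidate $z_c$ and the sample is kept only when the label matches the target, so accepted samples follow $p(z_c \mid y)$ in the implied Bayesian model.

```python
import math
import random

rng = random.Random(0)

def classifier_prob(z_c):
    # Assumed classifier q(y=1 | z_c) = sigmoid(4 * z_c).
    return 1.0 / (1.0 + math.exp(-4.0 * z_c))

def rejection_sample_zc(target_y, n=1000):
    # Draw z_c ~ p(z_c) = N(0, 1); sample a label from the classifier and
    # keep z_c only when the label matches the target.
    accepted = []
    while len(accepted) < n:
        z_c = rng.gauss(0.0, 1.0)
        y = 1 if rng.random() < classifier_prob(z_c) else 0
        if y == target_y:
            accepted.append(z_c)
    return accepted

samples = rejection_sample_zc(target_y=1)
mean = sum(samples) / len(samples)
assert mean > 0.5  # accepted z_c concentrate where the classifier says y=1
```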
+
+# 4.1 THE CHARACTERISTIC CAPTURING VAE
+
+One way to address the need for inference is to introduce a conditional generative model $p_{\psi}(\pmb{z}_c \mid \pmb{y})$ , simultaneously learned alongside the classifier introduced in (2), along with a prior $p(\pmb{y})$ . This
+
+approach, which we term the CCVAE, allows the required sampling for conditional generations and interventions directly. Further, by persisting with the latent partitioning above, we can introduce a factorized set of generative models $p(\boldsymbol{z}_c \mid \boldsymbol{y}) = \prod_i p(\boldsymbol{z}_c^i \mid y^i)$ , enabling easy generation and manipulation of $z_c^i$ individually. CCVAE ensures that labels remain a part of the model for unlabeled datapoints, which transpires to be important for effective learning in practice.
+
To address the issue of learning, we perform variational inference, treating $\mathbf{y}$ as a partially observed auxiliary variable. The final graphical model is illustrated in Figure 2. The CCVAE can be seen as a way of combining top-down and bottom-up information to obtain a structured latent representation. However, it is important to highlight that CCVAE does not contain a hierarchy of latent variables. Unlike a hierarchical VAE, reconstruction is performed only from $z \sim q_{\phi}(z \mid x)$ without going through the "deeper" $\mathbf{y}$, as doing so would lead to a loss of information due to the bottleneck of $\mathbf{y}$. By enforcing each label variable to link to different characteristic-latent dimensions, we are able to isolate the generative factors corresponding to different label characteristics.
+
+
+Figure 2: CCVAE graphical model.
+
+# 4.2 MODEL OBJECTIVE
+
+We now construct an objective function that encapsulates the model described above, by deriving a lower bound on the full model log-likelihood which factors over the supervised and unsupervised subsets as discussed in $\S 2$ . The supervised objective can be defined as
+
+$$
+\log p _ {\theta , \psi} (\boldsymbol {x}, \boldsymbol {y}) \geq \mathbb {E} _ {q _ {\varphi , \phi} (\boldsymbol {z} | \boldsymbol {x}, \boldsymbol {y})} \left[ \log \frac {p _ {\theta} (\boldsymbol {x} \mid \boldsymbol {z}) p _ {\psi} (\boldsymbol {z} \mid \boldsymbol {y}) p (\boldsymbol {y})}{q _ {\varphi , \phi} (\boldsymbol {z} \mid \boldsymbol {x} , \boldsymbol {y})} \right] \equiv \mathcal {L} _ {\mathrm {C C V A E}} (\boldsymbol {x}, \boldsymbol {y}), \tag {3}
+$$
+
with $p_{\psi}(\boldsymbol{z} \mid \boldsymbol{y}) = p(\boldsymbol{z}_{\setminus c})p_{\psi}(\boldsymbol{z}_{c} \mid \boldsymbol{y})$ . Here, we avoid directly modeling $q_{\varphi,\phi}(\boldsymbol{z} \mid \boldsymbol{x}, \boldsymbol{y})$ ; instead, we leverage the conditional independence $\boldsymbol{x} \perp \boldsymbol{y} \mid \boldsymbol{z}$ , along with Bayes' rule, to give
+
+$$
q_{\varphi, \phi}(\boldsymbol{z} \mid \boldsymbol{x}, \boldsymbol{y}) = \frac{q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c})\, q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})}{q_{\varphi, \phi}(\boldsymbol{y} \mid \boldsymbol{x})}, \quad \text{where} \quad q_{\varphi, \phi}(\boldsymbol{y} \mid \boldsymbol{x}) = \int q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c})\, q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})\, d\boldsymbol{z}.
+$$
+
+Using this equivalence in (3) yields (see Appendix B.1 for a derivation and numerical details)
+
+$$
+\mathcal {L} _ {\mathrm {C C V A E}} (\boldsymbol {x}, \boldsymbol {y}) = \mathbb {E} _ {q _ {\phi} (\boldsymbol {z} | \boldsymbol {x})} \left[ \frac {q _ {\varphi} (\boldsymbol {y} \mid \boldsymbol {z} _ {c})}{q _ {\varphi , \phi} (\boldsymbol {y} \mid \boldsymbol {x})} \log \frac {p _ {\theta} (\boldsymbol {x} \mid \boldsymbol {z}) p _ {\psi} (\boldsymbol {z} \mid \boldsymbol {y})}{q _ {\varphi} (\boldsymbol {y} \mid \boldsymbol {z} _ {c}) q _ {\phi} (\boldsymbol {z} \mid \boldsymbol {x})} \right] + \log q _ {\varphi , \phi} (\boldsymbol {y} \mid \boldsymbol {x}) + \log p (\boldsymbol {y}). \tag {4}
+$$
+
Note that a classifier term $\log q_{\varphi, \phi}(\boldsymbol{y} \mid \boldsymbol{x})$ falls out naturally from the derivation, unlike in previous models (e.g. Kingma et al. (2014); Siddharth et al. (2017)). Not placing the labels directly in the latent space is crucial for this feature. When latents are defined to directly correspond to labels, observing both $\boldsymbol{x}$ and $\boldsymbol{y}$ detaches the mapping $q_{\varphi, \phi}(\boldsymbol{y} \mid \boldsymbol{x})$ between them, resulting in the parameters $(\varphi, \phi)$ not being learned, motivating the addition of an explicit (weighted) classifier. Here, however, observing both $\boldsymbol{x}$ and $\boldsymbol{y}$ does not detach any mapping, since they are always connected via the unobserved random variable $z_c$, and hence no additional terms are needed. From an implementation perspective, the strength of this classifier term can be increased; we experimented with this, but found that adjusting the strength had little effect on the overall classification accuracies. We consider this insensitivity to be a significant strength of the approach, as the model is able to apply enough pressure to the latent space to obtain high classification accuracies without hand-tuning parameter values. The gradient norm of the classifier parameters suffers from high variance during training; we find that not reparameterizing through $z_c$ in $q_{\varphi}(\boldsymbol{y} \mid z_c)$ reduces this effect and aids training, see Appendix C.3.1 for details.
+
For the datapoints without labels, we can again perform variational inference, treating the labels as random variables. Specifically, the unsupervised objective, $\mathcal{L}_{\mathrm{CCVAE}}(\pmb{x})$ , is the standard (unsupervised) ELBO. However, it requires marginalising over labels, as $p(z) = p(z_{c})p(z_{\backslash c}) = p(z_{\backslash c})\sum_{\pmb{y}}p(z_{c}|\pmb{y})p(\pmb{y})$ . This can be computed exactly, but doing so can be prohibitively expensive if the number of possible label combinations is large. In such cases, we apply Jensen's inequality a second time to the expectation over $\pmb{y}$ (see Appendix B.2) to produce a looser, but cheaper to calculate, ELBO given as
+
+$$
\mathcal {L} _ {\mathrm {C C V A E}} (\boldsymbol {x}) = \mathbb {E} _ {q _ {\phi} (\boldsymbol {z} | \boldsymbol {x}) q _ {\varphi} (\boldsymbol {y} | \boldsymbol {z} _ {c})} \left[ \log \frac {p _ {\theta} (\boldsymbol {x} \mid \boldsymbol {z}) p _ {\psi} (\boldsymbol {z} \mid \boldsymbol {y}) p (\boldsymbol {y})}{q _ {\varphi} (\boldsymbol {y} \mid \boldsymbol {z} _ {c}) q _ {\phi} (\boldsymbol {z} \mid \boldsymbol {x})} \right]. \tag {5}
+$$
+
+Combining (4) and (5), we get the following lower bound on the log probability of the data
+
+$$
+\log p (\mathcal {D}) \geq \sum_ {(\boldsymbol {x}, \boldsymbol {y}) \in \mathcal {S}} \mathcal {L} _ {\mathrm {C C V A E}} (\boldsymbol {x}, \boldsymbol {y}) + \sum_ {\boldsymbol {x} \in \mathcal {U}} \mathcal {L} _ {\mathrm {C C V A E}} (\boldsymbol {x}), \tag {6}
+$$
+
which, unlike prior approaches, faithfully captures the variational free energy of the model. As shown in § 6, this enables a range of new capabilities and behaviors for encapsulating label characteristics.
+
+# 5 RELATED WORK
+
The seminal work of Kingma et al. (2014) was the first to consider supervision in the VAE setting, introducing the M2 model for semi-supervised classification, which was also the first approach to place labels directly in the latent space. The related approach of Maaløe et al. (2016) augments the encoding distribution with an additional, unobserved latent variable, enabling better semi-supervised classification accuracies. Siddharth et al. (2017) extended the above work to automatically derive the regularised objective for models with arbitrary (pre-defined) latent dependency structures. The approach of placing labels directly in the latent space was also adopted in Li et al. (2019). Regarding the disparity between continuous and discrete latent variables in typical semi-supervised VAEs, Dupont (2018) provides an approach to enable effective unsupervised learning in this setting.
+
+From a purely modeling perspective, there also exists prior work on VAEs involving hierarchies of latent variables, exploring richer higher-order inference and issues with redundancy among latent variables both in unsupervised (Ranganath et al., 2016; Zhao et al., 2017) and semi-supervised (Maaløe et al., 2017; 2019) settings. In the unsupervised case, these hierarchical variables do not have a direct interpretation, but exist merely to improve the flexibility of the encoder. The semi-supervised approaches extend the basic M2 model to hierarchical VAEs by incorporating the labels as an additional latent (see Appendix F in Maaløe et al., 2019, for example), and hence must incorporate additional regularisers in the form of classifiers as in the case of M2. Moreover, by virtue of the typical dependencies assumed between labels and latents, it is difficult to disentangle the characteristics just associated with the label from the characteristics associated with the rest of the data—something we capture using our simpler split latents $(z_{c}, z_{\backslash c})$ .
+
+From a more conceptual standpoint, Mueller et al. (2017) introduces interventions (called revisions) on VAEs for text data, regressing to auxiliary sentiment scores as a means of influencing the latent variables. This formulation is similar to (2) in spirit, although in practice they employ a range of additional factoring and regularizations particular to their domain of interest, in addition to training models in stages, involving different objective terms. Nonetheless, they share our desire to enforce meaningfulness in the latent representations through auxiliary supervision.
+
+Another related approach involves explicitly treating labels as another data modality (Vedantam et al., 2018; Suzuki et al., 2017; Wu & Goodman, 2018; Shi et al., 2019). This work is motivated by the need to learn latent representations that jointly encode data from different modalities. Looking back to (3), by refactoring $p(z \mid y)p(y)$ as $p(y \mid z)p(z)$ , and taking $q(z \mid x,y) = \mathcal{G}(q(z \mid x),q(z \mid y))$ , one derives multi-modal VAEs, where $\mathcal{G}$ can construct a product (Wu & Goodman, 2018) or mixture (Shi et al., 2019) of experts. Of these, the MVAE (Wu & Goodman, 2018) is more closely related to our setup here, as it explicitly targets cases where alternate data modalities are labels. However, they differ in that the latent representations are not structured explicitly to map to distinct classifiers, and do not explore the question of explicitly capturing the label characteristics. The JLVM model of Adel et al. (2018) is similar to the MVAE, but is motivated from an interpretability perspective—with labels providing 'side-channel' information to constrain latents. They adopt a flexible normalising-flow posterior from data $x$ , along with a multi-component objective that is additionally regularised with the information bottleneck between data $x$ , latent $z$ , and label $y$ .
+
+DIVA (Ilse et al., 2019) introduces a similar graphical model to ours, but is motivated to learn a generalized classifier for different domains. The objective is formed of a classifier which is regularized by a variational term, requiring additional hyper-parameters and preventing the ability to disentangle the representations. In Appendix C.4 we propose some modifications to DIVA that allow it to be applied in our problem domain.
+
In terms of interpretability, the work of Ainsworth et al. (2018) is closely related to ours, but focuses primarily on group data and does not introduce labels. The authors employ sparsity in the multiple linear transforms of each decoder (one per group) to encourage certain latent dimensions to encapsulate certain factors in the sample, thus introducing interpretability into the model. Tangentially to VAEs, similar objectives of structuring the latent space also exist for GANs (Xiao et al., 2017; 2018), although these focus purely on interventions and cannot perform conditional generation or classification, or estimate likelihoods.
+
+# 6 EXPERIMENTS
+
+Following our reasoning in § 3 we now showcase the efficacy of CCVAE for the three broad aims of (a) intervention, (b) conditional generation and (c) classification for a variety of supervision rates, denoted by $f$ . Specifically, we demonstrate that CCVAE is able to: encapsulate characteristics for each label in an isolated manner; introduce diversity in the conditional generations; permit a finer control on interventions; and match traditional metrics of baseline models. Furthermore, we demonstrate that no existing method is able to perform all of the above,[2] highlighting its sophistication over existing methods. We compare against: M2 (Kingma et al., 2014); MVAE (Wu & Goodman, 2018); and our modified version of DIVA (Ilse et al., 2019). See Appendix C.4 for details.
+
+To demonstrate the capture of label characteristics, we consider the multi-label setting and utilise the Chexpert (Irvin et al., 2019) and CelebA (Liu et al., 2015) datasets. For CelebA, we restrict ourselves to the 18 labels which are distinguishable in reconstructions; see Appendix C.1 for details. We use the architectures from Higgins et al. (2016) for the encoder and decoder. The label-predictive distribution $q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_c)$ is defined as $\mathrm{Ber}(\boldsymbol{y} \mid \pi_{\varphi}(\boldsymbol{z}_c))$ with a diagonal transformation $\pi_{\varphi}(\cdot)$ enforcing $q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_c) = \prod_i q_{\varphi^i}(y_i \mid \boldsymbol{z}_c^i)$ . The conditional prior $p_{\psi}(\boldsymbol{z}_c \mid \boldsymbol{y})$ is then defined as $\mathcal{N}(z_c|\mu_\psi(\boldsymbol{y}), \mathrm{diag}(\sigma_\psi^2(\boldsymbol{y})))$ with appropriate factorization, and has its parameters also derived through MLPs. See Appendix C.3 for further details.
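A minimal sketch of these factorized heads, with made-up scalar parameters standing in for the MLPs: each binary label $y_i$ is scored by its own scalar latent $z_c^i$ through a per-dimension logistic classifier, and each conditional prior $p(z_c^i \mid y_i)$ is a univariate Gaussian whose mean and variance are looked up from the label value.

```python
import math

K = 3                                  # number of binary labels (assumed)
w = [1.5, 2.0, 0.8]                    # per-dimension classifier weights (made up)
mu = [(-1.0, 1.0)] * K                 # conditional-prior means for y_i = 0 and 1
sigma2 = [(0.5, 0.5)] * K              # conditional-prior variances (made up)

def log_q_y_given_zc(y, z_c):
    # q(y|z_c) = prod_i q(y_i | z_c^i): each label sees only its own latent.
    total = 0.0
    for i in range(K):
        p1 = 1.0 / (1.0 + math.exp(-w[i] * z_c[i]))
        total += math.log(p1 if y[i] == 1 else 1.0 - p1)
    return total

def log_p_zc_given_y(z_c, y):
    # p(z_c|y) = prod_i N(z_c^i | mu_i(y_i), sigma2_i(y_i)).
    total = 0.0
    for i in range(K):
        m, v = mu[i][y[i]], sigma2[i][y[i]]
        total += -0.5 * (math.log(2 * math.pi * v) + (z_c[i] - m) ** 2 / v)
    return total
```

The per-label structure is what lets a single $z_c^i$ be traversed or resampled in isolation in the experiments that follow.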
+
+# 6.1 INTERVENTIONS
+
+If CCVAE encapsulates characteristics of a label in a single latent (or small set of latents), then it should be able to smoothly manipulate these characteristics without severely affecting others. This allows for finer control during interventions, which is not possible when the latent variables directly correspond to labels. To demonstrate this, we traverse two dimensions of the latent space and display the reconstructions in Figure 3. These examples indicate that CCVAE is indeed able to smoothly manipulate characteristics. For example, in b) we are able to induce varying skin tones rather than have this be a binary intervention on pale skin, unlike DIVA in a). In c), the $z_{c}^{i}$ associated with the necktie label has also managed to encapsulate information about whether someone is wearing a shirt or is bare-necked. No such traversals are possible for M2 and it is not clear how one would do them for MVAE; additional results, including traversals for DIVA, are given in Appendix D.2.
+
+
+Figure 3: Continuous interventions through traversal of $z_{c}$ . From left to right, a) DIVA pale skin and young; b) CCVAE pale skin and young; c) CCVAE smiling and necktie; d) CCVAE Pleural Effusion and Cardiomegaly.
+
+# 6.2 DIVERSITY OF GENERATIONS
+
+Label characteristics naturally encapsulate diversity (e.g. there are many ways to smile) which should be present in the learned representations. By virtue of the structured mappings between labels and characteristic latents, and since $z_{c}$ is parameterized by continuous distributions, CCVAE is able to capture diversity in representations, allowing exploration for an attribute (e.g. smile) while
+
+
+Figure 4: Diverse conditional generations for CCVAE, $\pmb{y}$ is held constant along each row and each column represents a different sample for $z_{c} \sim p(z_{c} | \pmb{y})$ . $z_{\backslash c}$ is held constant over the entire figure.
+
+
Figure 5: Variance in reconstructions when intervening on a single label. [Top two] CelebA, from left to right: reconstruction, bangs, eyeglasses, pale skin, smiling, necktie. [Bottom] Chexpert: reconstruction, cardiomegaly, edema, consolidation, atelectasis, pleural effusion.
+
+preserving other characteristics. This is not possible with labels directly defined as latents, as only discrete choices can be made—diversity can only be introduced here by sampling from the unlabeled latent space—which necessarily affects all other characteristics. To demonstrate this, we reconstruct multiple times with $z = \{z_{c} \sim p_{\psi}(z_{c} \mid y), z_{\backslash c}\}$ for a fixed $z_{\backslash c}$ . We provide qualitative results in Figure 4.
+
If several samples are taken from $\mathbf{z}_c \sim p_{\psi}(\mathbf{z}_c \mid \mathbf{y})$ when intervening on only a single characteristic, the resulting variations in pixel values should be focused around the locations relevant to that characteristic, e.g. pixel variations should be focused around the neck when intervening on necktie. To demonstrate this, we perform single interventions on each class, and take multiple samples of $\mathbf{z}_c \sim p_{\psi}(\mathbf{z}_c \mid \mathbf{y})$ . We then display the variance of each pixel in the reconstruction in green in Figure 5, where it can be seen that generally there is only variance in the spatial locations expected. Interestingly, for the class smile (2nd from right), there is variance in the jaw line, suggesting that the model is able to capture more subtle components of variation than just the mouth.
+
+# 6.3 CLASSIFICATION
+
+To demonstrate that reparameterizing the labels in the latent space does not hinder classification accuracy, we inspect the predictive ability of CCVAE across a range of supervision rates, given in Table 1. It can be observed that CCVAE generally obtains prediction accuracies slightly superior to other models. We emphasize here that CCVAE's primary purpose is not to achieve better classification accuracies; we are simply checking that it does not harm them, which it most clearly does not.
+
+Table 1: Classification accuracies.
+
| Model | CelebA, f=0.004 | CelebA, f=0.06 | CelebA, f=0.2 | CelebA, f=1.0 | Chexpert, f=0.004 | Chexpert, f=0.06 | Chexpert, f=0.2 | Chexpert, f=1.0 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CCVAE | 0.832 | 0.862 | 0.878 | 0.900 | 0.809 | 0.792 | 0.794 | 0.826 |
| M2 | 0.794 | 0.862 | 0.877 | 0.893 | 0.799 | 0.779 | 0.777 | 0.774 |
| DIVA | 0.807 | 0.860 | 0.867 | 0.877 | 0.747 | 0.786 | 0.781 | 0.775 |
| MVAE | 0.793 | 0.828 | 0.847 | 0.864 | 0.759 | 0.787 | 0.767 | 0.715 |
+
+# 6.4 DISENTANGLEMENT OF LABELED AND UNLABELLED LATENTS
+
+If a model can correctly disentangle the label characteristics from other generative factors, then manipulating $z_{\backslash c}$ should not change the label characteristics of the reconstruction. To demonstrate this, we perform "characteristic swaps," where we first obtain $z = \{z_c, z_{\backslash c}\}$ for a given image, then swap in the characteristics $z_c$ to another image before reconstructing. This should apply the exact characteristics, not just the label, to the scene/background of the other image (cf. Figure 6).
+
+
+Figure 6: Characteristic swap, where the characteristics of the first image (blond hair, smiling, heavy makeup, female, no necktie, no glasses etc.) are transferred to the unlabeled characteristics of the second (red background etc.).
+
+Comparing CCVAE to our baselines in Figure 7, we see that CCVAE is able to transfer the exact characteristics to a greater extent than other models. Particular attention is drawn to the preservation of labeled characteristics in each row, where CCVAE is able to preserve characteristics, like the precise skin tone and hair color of the pictures on the left. We see that M2 is only able to preserve the label and not the exact characteristic, while MVAE performs very poorly, effectively ignoring the attributes entirely. Our modified DIVA variant performs reasonably well, but less reliably and at the cost of reconstruction fidelity compared to CCVAE.
+
+
+Figure 7: Characteristic swaps. Characteristics (smiling, brown hair, skin tone, etc) of the left image should be preserved along the row while background information should be preserved along the column.
+
+An ideal characteristic swap should not change the probability assigned by a pre-trained classifier between the original image and a swapped one. We employ this as a quantitative measure, reporting the average difference in log probabilities for multiple swaps in Table 2. CCVAE is able to preserve the characteristics to a greater extent than other models. DIVA's performance is largely due to its heavier weighting on the classifier, which adversely affects reconstructions, as seen earlier.
+
Table 2: Difference in log-probabilities of a pre-trained classifier under characteristic swaps; lower is better.
+
| Model | CelebA, f=0.004 | CelebA, f=0.06 | CelebA, f=0.2 | CelebA, f=1.0 | Chexpert, f=0.004 | Chexpert, f=0.06 | Chexpert, f=0.2 | Chexpert, f=1.0 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CCVAE | 1.177 | 0.890 | 0.790 | 0.758 | 1.142 | 1.221 | 1.078 | 1.084 |
| M2 | 2.118 | 1.194 | 1.179 | 1.143 | 1.624 | 1.430 | 1.410 | 1.415 |
| DIVA | 1.489 | 0.976 | 0.996 | 0.941 | 1.360 | 1.250 | 1.199 | 1.259 |
| MVAE | 2.114 | 2.113 | 2.088 | 2.121 | 1.618 | 1.624 | 1.618 | 1.601 |
+
+# 7 DISCUSSION
+
+We have presented a novel mechanism for faithfully capturing label characteristics in VAEs, the characteristic capturing VAE (CCVAE), which captures label characteristics explicitly in the latent space while eschewing direct correspondences between label values and latents. This has allowed us to encapsulate and disentangle the characteristics associated with labels, rather than just the label values. We are able to do so without affecting the ability to perform the tasks one typically does in the (semi-)supervised setting—namely classification, conditional generation, and intervention. In particular, we have shown that, not only does this lead to more effective conventional label-switch interventions, it also allows for more fine-grained interventions to be performed, such as producing diverse sets of samples consistent with an intervened label value, or performing characteristic swaps between datapoints that retain relevant features.
+
+# 8 ACKNOWLEDGMENTS
+
+TJ, PHST, and NS were supported by the ERC grant ERC-2012-AdG 321162-HELIOS, EPSRC grant Seebibyte EP/M013774/1 and EPSRC/MURI grant EP/N019474/1. Toshiba Research Europe also supports TJ. TJ would also like to thank Dr. M. Stoddart. PHST would also like to acknowledge the Royal Academy of Engineering and FiveAI.
+SMS was partially supported by the Engineering and Physical Sciences Research Council (EPSRC) grant EP/K503113/1.
+TR's research leading to these results has received funding from a Christ Church Oxford Junior Research Fellowship and from Tencent AI Labs.
+
+# REFERENCES
+
+Tameem Adel, Zoubin Ghahramani, and Adrian Weller. Discovering interpretable representations for both deep generative and discriminative models. In International Conference on Machine Learning, pp. 50-59, 2018.
+Samuel K. Ainsworth, Nicholas J. Foti, Adrian K.C. Lee, and Emily B. Fox. Interpretable VAEs for nonlinear group factor analysis. ICML, 2018.
+Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1798-1828, August 2013. ISSN 0162-8828.
+Rodney A Brooks. Intelligence without representation. Artificial intelligence, 47(1-3):139-159, 1991.
+Emilien Dupont. Learning disentangled joint continuous and discrete representations. In Advances in Neural Information Processing Systems, pp. 710-720, 2018.
+Yarin Gal. Uncertainty in deep learning. PhD thesis, University of Cambridge, 2016.
+Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in neural information processing systems, pp. 6626-6637, 2017.
+Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In Proceedings of the International Conference on Learning Representations, 2016.
+Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
+Geoffrey E Hinton and Richard S Zemel. Autoencoders, minimum description length and Helmholtz free energy. In Advances in neural information processing systems, pp. 3-10, 1994.
+Maximilian Ilse, Jakub M Tomczak, Christos Louizos, and Max Welling. DIVA: Domain invariant variational autoencoders. arXiv preprint arXiv:1905.10427, 2019.
+Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 590-597, 2019.
+Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In International Conference on Machine Learning, pp. 2649-2658, 2018.
+Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
+
+Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in neural information processing systems, pp. 3581-3589, 2014.
+Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
+Yang Li, Quan Pan, Suhang Wang, Haiyun Peng, Tao Yang, and Erik Cambria. Disentangled variational auto-encoder for semi-supervised learning. Information Sciences, 482:73-85, 2019.
+Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
+Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In International Conference on Machine Learning, pp. 4114-4124, 2019.
+Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
+Lars Maaløe, Marco Fraccaro, and Ole Winther. Semi-supervised generation with cluster-aware generative models. arXiv preprint arXiv:1704.00637, 2017.
+Lars Maaløe, Marco Fraccaro, Valentin Liévin, and Ole Winther. BIVA: A very deep hierarchy of latent variables for generative modeling. In Advances in Neural Information Processing Systems, volume 32, pp. 6551-6562. Curran Associates, Inc., 2019.
+Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B Tenenbaum, and Jiajun Wu. The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. arXiv preprint arXiv:1904.12584, 2019.
+Emile Mathieu, Tom Rainforth, N Siddharth, and Yee Whye Teh. Disentangling disentanglement in variational autoencoders. In International Conference on Machine Learning, pp. 4402-4412, 2019.
+Jonas Mueller, David Gifford, and Tommi Jaakkola. Sequence to better sequence: continuous revision of combinatorial structures. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2536-2544. JMLR.org, 2017.
+Rajesh Ranganath, Dustin Tran, and David Blei. Hierarchical variational models. In International Conference on Machine Learning, pp. 324-333, 2016.
+Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pp. 1278-1286, 2014.
+Yuge Shi, N. Siddharth, Brooks Paige, and Philip H. S. Torr. Variational mixture-of-experts autoencoders for multi-modal deep generative models. In Advances in Neural Information Processing Systems (NeurIPS), pp. 15692-15703, December 2019.
+N. Siddharth, Brooks Paige, Jan-Willem van de Meent, Alban Desmaison, Noah Goodman, Pushmeet Kohli, Frank Wood, and Philip Torr. Learning disentangled representations with semi-supervised deep generative models. In Advances in Neural Information Processing Systems, pp. 5925-5935, 2017.
+Lewis Smith and Yarin Gal. Understanding measures of uncertainty for adversarial example detection. arXiv preprint arXiv:1803.08533, 2018.
+Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In Advances in neural information processing systems, pp. 3738-3746, 2016.
+Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Joint multimodal learning with deep generative models. In International Conference on Learning Representations Workshop, 2017.
+
+Joshua B Tenenbaum. Mapping a manifold of perceptual observations. In Advances in neural information processing systems, pp. 682-688, 1998.
+Joshua B Tenenbaum and William T Freeman. Separating style and content with bilinear models. Neural computation, 12(6):1247-1283, 2000.
+Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, and Kevin Murphy. Generative models of visually grounded imagination. In Proceedings of the International Conference on Learning Representations, 2018.
+Martin J Wainwright and Michael I Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends® in Machine Learning, 1(1-2):1-305, 2008.
+Mike Wu and Noah Goodman. Multimodal generative models for scalable weakly-supervised learning. In Advances in Neural Information Processing Systems, pp. 5580-5590, 2018.
+Taihong Xiao, Jiapeng Hong, and Jinwen Ma. DNA-GAN: Learning disentangled representations from multi-attribute images. arXiv preprint arXiv:1711.05415, 2017.
+Taihong Xiao, Jiapeng Hong, and Jinwen Ma. ELEGANT: Exchanging latent encodings with GAN for transferring multiple face attributes. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 168-184, 2018.
+Shengjia Zhao, Jiaming Song, and Stefano Ermon. Learning hierarchical features from deep generative models. In International Conference on Machine Learning, pp. 4091-4099, 2017.
+
+# A CONDITIONAL GENERATION AND INTERVENTION FOR EQUATION (2)
+
+For the model trained using (2) as the objective to be usable, we must consider whether it can carry out the classification, conditional generation, and intervention tasks outlined previously. Of these, classification is straightforward, but it is less apparent how the others could be performed. The key here is to realize that the classifier itself implicitly contains the information required to perform these tasks.
+
+Consider first conditional generation and note that we still have access to the prior $p(z)$ as per a standard VAE. One simple way of performing conditional generation would be to use rejection sampling: draw samples $\hat{z} \sim p(z)$ and accept them if and only if they lead to the classifier predicting the desired labels up to a desired level of confidence, i.e. $q_{\varphi}(\boldsymbol{y} \mid \hat{\boldsymbol{z}}_c) > \lambda$, where $0 < \lambda < 1$ is some chosen confidence threshold. Though such an approach is likely to be highly inefficient for any general $p(z)$ due to the curse of dimensionality, in the standard setting where each dimension of $z$ is independent, this rejection sampling can be performed separately for each $z_c^i$, making it relatively efficient. More generally, conditional generation becomes an inference problem where we wish to draw samples from
+
+$$
+p\left(\boldsymbol{z} \mid \left\{q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c}) > \lambda \right\}\right) \propto p(\boldsymbol{z}) \, \mathbb{I}\left(q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c}) > \lambda\right).
+$$
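As an illustration, the per-dimension rejection sampler described above can be sketched as follows; the 1-D logistic classifier standing in for $q_{\varphi}(y_i \mid z_c^i)$ and all constants here are hypothetical stand-ins, not the trained model:

```python
import math
import random

def classifier_prob(z_i, w=3.0, b=0.0):
    # Stand-in for q_varphi(y_i = 1 | z_c^i): a 1-D logistic classifier.
    return 1.0 / (1.0 + math.exp(-(w * z_i + b)))

def sample_zc_given_label(y_i, lam=0.9, max_tries=10_000):
    # Rejection sampling: draw z_c^i ~ p(z_c^i) = N(0, 1) and accept iff the
    # classifier assigns the desired label with confidence above lam.
    for _ in range(max_tries):
        z_i = random.gauss(0.0, 1.0)
        p = classifier_prob(z_i)
        if (p if y_i == 1 else 1.0 - p) > lam:
            return z_i
    raise RuntimeError("no sample accepted; lower lam")

# Because each dimension is sampled independently, the acceptance test runs
# per-label rather than jointly over all of z, avoiding the curse of
# dimensionality mentioned above.
z_pos = sample_zc_given_label(1)
z_neg = sample_zc_given_label(0)
```

Sampling the full $\boldsymbol{z}_c$ then amounts to running this loop once per labelled dimension.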
+
+Interventions can also be performed in an analogous manner. Namely, for a conventional intervention where we change one or more labels, we can simply resample the $z_{c}^{i}$ associated with those labels, thereby sampling new characteristics to match the new labels. Further, unlike prior approaches, we can perform alternative interventions too. For example, we might attempt to find the closest $z_{c}^{i}$ to the original that leads to the class label changing; this can be done in a manner akin to how adversarial attacks are performed. Alternatively, we might look to manipulate the $z_{c}^{i}$ without actually changing the class itself, to see what other characteristics are consistent with the labels.
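A conventional label-switch intervention under this scheme amounts to re-drawing only the affected latent dimension; a minimal sketch, where the per-label conditional-prior parameters are hypothetical placeholders:

```python
import random

# Hypothetical conditional prior p_psi(z_c^i | y_i) = N(mu[y_i], sigma[y_i]).
COND_PRIOR = {0: (-1.0, 0.5), 1: (1.0, 0.5)}

def intervene(z_c, i, new_label):
    # Resample only dimension i from the conditional prior of the new label,
    # leaving every other characteristic latent untouched.
    mu, sigma = COND_PRIOR[new_label]
    z_new = list(z_c)
    z_new[i] = random.gauss(mu, sigma)
    return z_new

z_c = [0.2, -0.3, 1.1]
z_after = intervene(z_c, 1, 1)
```

Only the intervened dimension changes, which is precisely why the other characteristics of the image are preserved.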
+
+To summarize, (2) yields an objective which provides a way of learning a semi-supervised VAE that avoids the pitfalls of directly fixing the latents to correspond to labels. It still allows us to perform all the tasks usually associated with semi-supervised VAEs and in fact allows a more general form of interventions to be performed. However, this comes at the cost of requiring inference to perform conditional generation or interventions. Further, as the label variables $\mathbf{y}$ are absent when the labels are unobserved, there may be empirical complications with forcing all the denotational information to be encoded into the appropriate characteristic latent $z_{c}^{i}$. In particular, we still have a hyperparameter $\alpha$ that must be carefully tuned to ensure the appropriate balance between classification and reconstruction.
+
+# B MODEL FORMULATION
+
+# B.1 VARIATIONAL LOWER BOUND
+
+In this section we provide the mathematical details of our objective function. We show how to derive it as a lower bound on the marginal model likelihood and how we estimate the model components.
+
+The variational lower bound for the generative model in Figure 2 is given as
+
+$$
+\mathcal {L} _ {\mathrm {C C V A E}} = \sum_ {\boldsymbol {x} \in \mathcal {U}} \mathcal {L} _ {\mathrm {C C V A E}} (\boldsymbol {x}) + \sum_ {(\boldsymbol {x}, \boldsymbol {y}) \in \mathcal {S}} \mathcal {L} _ {\mathrm {C C V A E}} (\boldsymbol {x}, \boldsymbol {y})
+$$
+
+$$
+\mathcal {L} _ {\mathrm {C C V A E}} (\boldsymbol {x}, \boldsymbol {y}) = E _ {q _ {\phi} (\boldsymbol {z} | \boldsymbol {x})} \left[ \frac {q _ {\varphi} (\boldsymbol {y} \mid \boldsymbol {z} _ {c})}{q _ {\varphi , \phi} (\boldsymbol {y} \mid \boldsymbol {x})} \log \left(\frac {p _ {\theta} (\boldsymbol {x} \mid \boldsymbol {z}) p _ {\psi} (\boldsymbol {z} \mid \boldsymbol {y})}{q _ {\varphi} (\boldsymbol {y} \mid \boldsymbol {z} _ {c}) q _ {\phi} (\boldsymbol {z} \mid \boldsymbol {x})}\right) \right] + \log q _ {\varphi , \phi} (\boldsymbol {y} \mid \boldsymbol {x}) + \log p (\boldsymbol {y}),
+$$
+
+$$
+\mathcal{L}_{\mathrm{CCVAE}}(\boldsymbol{x}) = E_{q_{\phi}(\boldsymbol{z} | \boldsymbol{x}) q_{\varphi}(\boldsymbol{y} | \boldsymbol{z}_{c})} \left[ \log \left(\frac{p_{\theta}(\boldsymbol{x} \mid \boldsymbol{z}) p_{\psi}(\boldsymbol{z} \mid \boldsymbol{y}) p(\boldsymbol{y})}{q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c}) q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})}\right) \right].
+$$
+
+The overall likelihood in the semi-supervised case is given as
+
+$$
+p_{\theta}(\mathcal{D}) = \prod_{(\boldsymbol{x}, \boldsymbol{y}) \in \mathcal{S}} p_{\theta}(\boldsymbol{x}, \boldsymbol{y}) \prod_{\boldsymbol{x} \in \mathcal{U}} p_{\theta}(\boldsymbol{x}).
+$$
+
+To derive a lower bound for the overall objective, we need to obtain lower bounds on $\log p_{\theta}(\pmb{x})$ and $\log p_{\theta}(\pmb{x},\pmb{y})$ . When the labels are unobserved the latent state will consist of $\pmb{z}$ and $\pmb{y}$ . Using the
+
+factorization according to the graph in Figure 2 yields
+
+$$
+\log p _ {\theta} (\boldsymbol {x}) \geq E _ {q _ {\phi} (\boldsymbol {z} | \boldsymbol {x}) q _ {\varphi} (\boldsymbol {y} | \boldsymbol {z} _ {c})} \left[ \log \left(\frac {p _ {\theta} (\boldsymbol {x} \mid \boldsymbol {z}) p _ {\psi} (\boldsymbol {z} \mid \boldsymbol {y}) p (\boldsymbol {y})}{q _ {\varphi} (\boldsymbol {y} \mid \boldsymbol {z} _ {c}) q _ {\phi} (\boldsymbol {z} \mid \boldsymbol {x})}\right) \right],
+$$
+
+where $p_{\psi}(\boldsymbol{z} \mid \boldsymbol{y}) = p(\boldsymbol{z}_{\backslash c})p_{\psi}(\boldsymbol{z}_c \mid \boldsymbol{y})$ . For supervised data points we consider a lower bound on the likelihood $p_{\theta}(\boldsymbol{x}, \boldsymbol{y})$ ,
+
+$$
+\log p _ {\theta} (\boldsymbol {x}, \boldsymbol {y}) \geq \int \log \frac {p _ {\theta} (\boldsymbol {x} \mid \boldsymbol {z}) p _ {\psi} (\boldsymbol {z} \mid \boldsymbol {y}) p (\boldsymbol {y})}{q _ {\varphi , \phi} (\boldsymbol {z} \mid \boldsymbol {x} , \boldsymbol {y})} q _ {\varphi , \phi} (\boldsymbol {z} \mid \boldsymbol {x}, \boldsymbol {y}) d \boldsymbol {z},
+$$
+
+To make sense of the term $q_{\varphi, \phi}(\boldsymbol{z} \mid \boldsymbol{x}, \boldsymbol{y})$, which is in general different from $q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})$, we consider the inference model
+
+$$
+q_{\varphi, \phi}(\boldsymbol{z} \mid \boldsymbol{x}, \boldsymbol{y}) = \frac{q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c}) q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})}{q_{\varphi, \phi}(\boldsymbol{y} \mid \boldsymbol{x})}, \quad \text{where} \quad q_{\varphi, \phi}(\boldsymbol{y} \mid \boldsymbol{x}) = \int q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c}) q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x}) d\boldsymbol{z}.
+$$
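The marginal $q_{\varphi,\phi}(\boldsymbol{y} \mid \boldsymbol{x})$ has no closed form, but it is an expectation under the encoder and so can be estimated by simple Monte Carlo; a sketch with a hypothetical 1-D Gaussian encoder and logistic classifier standing in for the real networks:

```python
import math
import random

def encoder(x):
    # Stand-in for q_phi(z_c | x): returns (mu, sigma) of a 1-D Gaussian.
    return x, 1.0

def classifier(z_c):
    # Stand-in for q_varphi(y = 1 | z_c).
    return 1.0 / (1.0 + math.exp(-z_c))

def q_y_given_x(x, y, n_samples=5000):
    # q(y | x) = E_{q(z|x)}[ q(y | z_c) ], estimated by averaging over samples.
    mu, sigma = encoder(x)
    total = 0.0
    for _ in range(n_samples):
        z_c = random.gauss(mu, sigma)
        p = classifier(z_c)
        total += p if y == 1 else 1.0 - p
    return total / n_samples

p1 = q_y_given_x(2.0, 1)
```

The same estimate is what makes the $\log q_{\varphi,\phi}(\boldsymbol{y} \mid \boldsymbol{x})$ term of the supervised bound tractable.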
+
+Returning to the lower bound on $\log p_{\theta}(\pmb{x},\pmb{y})$ we obtain
+
+$$
+\begin{array}{l} \log p_{\theta}(\boldsymbol{x}, \boldsymbol{y}) \geq \int \log \frac{p_{\theta}(\boldsymbol{x} \mid \boldsymbol{z}) p_{\psi}(\boldsymbol{z} \mid \boldsymbol{y}) p(\boldsymbol{y})}{q_{\varphi, \phi}(\boldsymbol{z} \mid \boldsymbol{x}, \boldsymbol{y})} q_{\varphi, \phi}(\boldsymbol{z} \mid \boldsymbol{x}, \boldsymbol{y}) d\boldsymbol{z} \\ = \int \log \left(\frac{p_{\theta}(\boldsymbol{x} \mid \boldsymbol{z}) p_{\psi}(\boldsymbol{z} \mid \boldsymbol{y}) p(\boldsymbol{y}) q_{\varphi, \phi}(\boldsymbol{y} \mid \boldsymbol{x})}{q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c}) q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})}\right) \frac{q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c}) q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})}{q_{\varphi, \phi}(\boldsymbol{y} \mid \boldsymbol{x})} d\boldsymbol{z} \\ = E_{q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \left[ \frac{q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c})}{q_{\varphi, \phi}(\boldsymbol{y} \mid \boldsymbol{x})} \log \left(\frac{p_{\theta}(\boldsymbol{x} \mid \boldsymbol{z}) p_{\psi}(\boldsymbol{z} \mid \boldsymbol{y})}{q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c}) q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})}\right) \right] + \log q_{\varphi, \phi}(\boldsymbol{y} \mid \boldsymbol{x}) + \log p(\boldsymbol{y}), \end{array}
+$$
+
+where $q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_c) / q_{\varphi, \phi}(\boldsymbol{y} \mid \boldsymbol{x})$ denotes the Radon-Nikodym derivative of $q_{\varphi, \phi}(\boldsymbol{z} \mid \boldsymbol{x}, \boldsymbol{y})$ with respect to $q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})$ .
+
+# B.2 ALTERNATIVE DERIVATION OF UNSUPERVISED BOUND
+
+The bound for the unsupervised case can alternatively be derived by applying Jensen's inequality twice. First, use the standard (unsupervised) ELBO
+
+$$
+\log p _ {\theta} (\boldsymbol {x}) \geq \mathbb {E} _ {q _ {\phi} (\boldsymbol {z} | \boldsymbol {x})} \left[ \log \frac {p _ {\theta} (\boldsymbol {x} \mid \boldsymbol {z}) p (\boldsymbol {z})}{q _ {\phi} (\boldsymbol {z} \mid \boldsymbol {x})} \right].
+$$
+
+Now, since calculating $p(\pmb{z}) = p(\pmb{z}_c)p(\pmb{z}_{\backslash c}) = p(\pmb{z}_{\backslash c})\sum_{\pmb{y}}p(\pmb{z}_c|\pmb{y})p(\pmb{y})$ can be expensive, we can apply Jensen's inequality a second time to the expectation over $\pmb{z}_c$ to obtain
+
+$$
+\log p\left(\boldsymbol{z}_{c}\right) \geq \mathbb{E}_{q_{\varphi}(\boldsymbol{y} | \boldsymbol{z}_{c})} \left[ \log \frac{p_{\psi}\left(\boldsymbol{z}_{c} \mid \boldsymbol{y}\right) p(\boldsymbol{y})}{q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c})} \right].
+$$
+
+Substituting this bound into the unsupervised ELBO again yields our bound
+
+$$
+\log p(\boldsymbol{x}) \geq \mathbb{E}_{q_{\phi}(\boldsymbol{z} | \boldsymbol{x}) q_{\varphi}(\boldsymbol{y} | \boldsymbol{z}_{c})} \left[ \log \frac{p_{\theta}(\boldsymbol{x} \mid \boldsymbol{z}) p_{\psi}(\boldsymbol{z} \mid \boldsymbol{y}) p(\boldsymbol{y})}{q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x}) q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c})} \right] \tag{7}
+$$
+
+# C IMPLEMENTATION
+
+# C.1 CELEBA
+
+We chose to use only a subset of the labels present in CelebA, since not all attributes are visually distinguishable in the reconstructions (e.g. earrings). As such we limited ourselves to the following labels: arched eyebrows, bags under eyes, bangs, black hair, blond hair, brown hair, bushy eyebrows, chubby, eyeglasses, heavy makeup, male, no beard, pale skin, receding hairline, smiling, wavy hair, wearing necktie, young. No images were omitted or cropped; the only modifications were keeping the aforementioned labels and resizing the images to $64 \times 64$.
+
+# C.2 CHEXPERT
+
+The Chexpert dataset comprises chest X-rays taken from a variety of patients. We down-sampled each image to $64 \times 64$ and used the same networks as in the CelebA experiments. The five main attributes for Chexpert are: cardiomegaly, edema, consolidation, atelectasis, pleural effusion. For non-medical experts, these can be interpreted as: enlargement of the heart; fluid in the alveoli; fluid in the lungs; collapsed lung; fluid in the corners of the lungs.
+
+# C.3 IMPLEMENTATION DETAILS
+
+For our experiments we define the generative and inference networks as follows. The approximate posterior is represented as $q_{\phi}(z \mid x) = \mathcal{N}(z_c, z_{\backslash c} \mid \mu_{\phi}(x), \mathrm{diag}(\sigma_{\phi}^2(x)))$ with $\mu_{\phi}(x)$ and $\mathrm{diag}(\sigma_{\phi}^2(x))$ given by the architecture from Higgins et al. (2016). The generative model $p_{\theta}(x \mid z)$ is represented by a Laplace distribution, again parametrized using the architecture from Higgins et al. (2016). The label predictive distribution $q_{\varphi}(y \mid z_c)$ is represented as $\mathrm{Ber}(y \mid \pi_{\varphi}(z_c))$ with $\pi_{\varphi}(z_c)$ being a diagonal transformation enforcing the factorisation $q_{\varphi}(y \mid z_c) = \prod_i q_{\varphi^i}(y_i \mid z_c^i)$. The conditional prior is given as $p_{\psi}(z_c \mid y) = \mathcal{N}(z_c \mid \mu_{\psi}(y), \mathrm{diag}(\sigma_{\psi}^2(y)))$, with the appropriate factorisation, where the parameters are represented by an MLP. Finally, the prior placed on the portion of the latent space reserved for unlabelled latent variables is $p(z_{\backslash c}) = \mathcal{N}(z_{\backslash c} \mid 0, \mathbf{I})$. For the latent space, $z_c \in \mathbb{R}^{m_c}$ and $z_{\backslash c} \in \mathbb{R}^{m_{\backslash c}}$, where $m = m_c + m_{\backslash c}$ with $m_c = 18$ and $m_{\backslash c} = 27$ for CelebA. The architectures are given in Table 3.
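The "diagonal transformation" above can be realized as an elementwise affine map, so that each label $y_i$ depends only on its own latent $z_c^i$; a minimal sketch with hypothetical weights (not our trained parameters):

```python
import math

class DiagonalClassifier:
    # q_varphi(y | z_c) = prod_i Bernoulli(y_i | sigmoid(w_i * z_c^i + b_i)):
    # an elementwise (diagonal) layer rather than a dense one, which enforces
    # the per-label factorisation by construction.
    def __init__(self, w, b):
        self.w, self.b = w, b

    def probs(self, z_c):
        return [1.0 / (1.0 + math.exp(-(wi * zi + bi)))
                for wi, zi, bi in zip(self.w, z_c, self.b)]

clf = DiagonalClassifier(w=[1.0, 2.0, -1.0], b=[0.0, 0.0, 0.0])
p = clf.probs([0.0, 1.0, 1.0])
```

Because the map is diagonal, perturbing one latent dimension cannot change any other label's probability.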
+
+| Encoder | Decoder |
+| Input 64 × 64 × 3 channel image | Input ∈ Rm |
+| 32 × 3 × 4 × 4 Conv2d stride 2 & ReLU | m × 256 Linear layer |
+| 32 × 32 × 4 × 4 Conv2d stride 2 & ReLU | 128 × 256 × 4 × 4 ConvTranspose2d stride 1 & ReLU |
+| 64 × 32 × 4 × 4 Conv2d stride 2 & ReLU | 64 × 128 × 4 × 4 ConvTranspose2d stride 2 & ReLU |
+| 128 × 64 × 4 × 4 Conv2d stride 2 & ReLU | 32 × 64 × 4 × 4 ConvTranspose2d stride 2 & ReLU |
+| 256 × 128 × 4 × 4 Conv2d stride 1 & ReLU | 32 × 32 × 4 × 4 ConvTranspose2d stride 2 & ReLU |
+| 256 × (2×m) Linear layer | 3 × 32 × 4 × 4 ConvTranspose2d stride 2 & Sigmoid |
+
+| Classifier | Conditional Prior |
| Input ∈ Rm_c | Input ∈ Rm_c |
| mc × mc Diagonal layer | mc × mc Diagonal layer |
+
+Table 3: Architectures for CelebA and Chexpert.
+
+Optimization We trained the models on a GeForce GTX Titan GPU. Training consumed $\sim 2\mathrm{GB}$ of memory for both CelebA and Chexpert, taking around 2 hours to complete 100 epochs. Models were optimized using Adam with a learning rate of $2\times 10^{-4}$.
+
+# C.3.1 HIGH VARIANCE OF CLASSIFIER GRADIENTS
+
+The gradients of the classifier parameters $\varphi$ suffer from a high variance during training. We find that not reparameterizing $z_{c}$ for $q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c})$ reduces this issue:
+
+$$
+\mathcal {L} _ {\mathrm {C C V A E}} (\boldsymbol {x}, \boldsymbol {y}) = \mathbb {E} _ {q _ {\phi} (\boldsymbol {z} | \boldsymbol {x})} \left[ \frac {q _ {\varphi} (\boldsymbol {y} \mid \bar {\boldsymbol {z}} _ {\boldsymbol {c}})}{q _ {\varphi , \phi} (\boldsymbol {y} \mid \boldsymbol {x})} \log \frac {p _ {\theta} (\boldsymbol {x} \mid \boldsymbol {z}) p _ {\psi} (\boldsymbol {z} \mid \boldsymbol {y})}{q _ {\varphi} (\boldsymbol {y} \mid \bar {\boldsymbol {z}} _ {\boldsymbol {c}}) q _ {\phi} (\boldsymbol {z} \mid \boldsymbol {x})} \right] + \log q _ {\varphi , \phi} (\boldsymbol {y} \mid \boldsymbol {x}) + \log p (\boldsymbol {y}). \tag {8}
+$$
+
+where $\bar{z}_c$ indicates that we do not reparameterize the sample. This significantly reduces the variance of the gradient norm $\|\nabla_{\varphi}\|$, allowing the classifier to learn appropriate weights and structure the latent space. This can be seen in Figure 8, where we plot the gradient norm of $\varphi$ when we do reparameterize $z_c$ (blue) and when we do not (orange). Clearly, not reparameterizing leads to lower variance in the gradient norm of the classifier, which aids learning. To a certain extent these gradients can be viewed as redundant, as there are already gradients updating the predictive distribution via the $\log q_{\varphi,\phi}(\mathbf{y}|\mathbf{x})$ term anyway.
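Mechanically, "not reparameterizing" amounts to a stop-gradient on the sample when evaluating the classifier term; a toy PyTorch sketch, with arbitrary scalar values rather than our actual networks:

```python
import torch
import torch.nn.functional as F

# Toy illustration of the stop-gradient in Eq. (8): z_c is drawn via the
# reparameterization trick, but the classifier term is evaluated on a
# detached copy \bar{z}_c of the sample.
mu = torch.tensor(0.5, requires_grad=True)   # encoder mean (a phi parameter)
w = torch.tensor(1.0, requires_grad=True)    # classifier weight (a varphi parameter)
eps = torch.tensor(0.3)                      # fixed noise for illustration
z_c = mu + 1.0 * eps                         # reparameterized sample

z_bar_c = z_c.detach()                       # no pathwise gradient through the sample
log_q = F.logsigmoid(w * z_bar_c)            # log q_varphi(y = 1 | \bar{z}_c)
log_q.backward()

# The classifier parameter w still receives a gradient, but the encoder mean
# mu receives none from this term: the high-variance pathwise signal is cut.
```

The encoder is still trained through the remaining terms of the objective; only the classifier weighting term is detached.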
+
+
+Figure 8: Gradient norms of classifier.
+
+
+Figure 9: Left: Generative model for DIVA, Right: Inference model where dashed line indicates auxiliary classifier.
+
+
+
+# C.4 MODIFIED DIVA
+
+The primary goal of DIVA is domain invariant classification, not obtaining representations of individual characteristics as we do here. The objective is essentially a classifier which is regularized by a variational objective. However, to achieve domain generalization, the authors aim to disentangle the domain, class and other generative factors. This motivation leads to a graphical model that is similar in spirit to ours (Figure 9), in that the latent variables are used to predict labels, and in the introduction of the inductive bias to partition the latent space. As such, DIVA can be modified to suit our problem of encapsulating characteristics. The first modification we need to consider is the removal of $\boldsymbol{z}_d$, as we are not considering multi-domain problems. Secondly, we introduce the factorization present in CCVAE, namely $q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_c) = \prod_i q_{\varphi^i}(y_i \mid \boldsymbol{z}_c^i)$. With these two modifications an alternative objective can now be constructed, with the supervised objective given as
+
+$$
+\begin{array}{l} \mathcal{L}_{\mathrm{SDIVA}}(\boldsymbol{x}, \boldsymbol{y}) = \mathbb{E}_{q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \log p_{\theta}(\boldsymbol{x} \mid \boldsymbol{z}) - \beta \mathrm{KL}(q_{\phi}(\boldsymbol{z}_{\backslash c} \mid \boldsymbol{x}) \,\|\, p(\boldsymbol{z}_{\backslash c})) \\ \quad - \beta \mathrm{KL}(q_{\phi}(\boldsymbol{z}_{c} \mid \boldsymbol{x}) \,\|\, p_{\psi}(\boldsymbol{z}_{c} \mid \boldsymbol{y})) \end{array}
+$$
+
+and the unsupervised as
+
+$$
+\begin{array}{l} \mathcal{L}_{\mathrm{UDIVA}}(\boldsymbol{x}) = \mathbb{E}_{q_{\phi}(\boldsymbol{z} | \boldsymbol{x})} \log p_{\theta}(\boldsymbol{x} \mid \boldsymbol{z}) - \beta \mathrm{KL}(q_{\phi}(\boldsymbol{z}_{\backslash c} \mid \boldsymbol{x}) \,\|\, p(\boldsymbol{z}_{\backslash c})) \\ \quad + \beta \mathbb{E}_{q_{\phi}(\boldsymbol{z}_{c} | \boldsymbol{x}) q_{\varphi}(\boldsymbol{y} | \boldsymbol{z}_{c})} [\log p_{\psi}(\boldsymbol{z}_{c} \mid \boldsymbol{y}) - \log q_{\phi}(\boldsymbol{z}_{c} \mid \boldsymbol{x})] \\ \quad + \beta \mathbb{E}_{q_{\phi}(\boldsymbol{z}_{c} | \boldsymbol{x}) q_{\varphi}(\boldsymbol{y} | \boldsymbol{z}_{c})} [\log p(\boldsymbol{y}) - \log q_{\varphi}(\boldsymbol{y} \mid \boldsymbol{z}_{c})], \end{array}
+$$
+
+where $y$ has to be imputed. The final objective for DIVA is then given as
+
+$$
+\log p _ {\theta} \left(\mathcal {D}\right) \geq \sum_ {\left(\boldsymbol {x}, \boldsymbol {y}\right) \in \mathcal {S}} \mathcal {L} _ {S D I V A} (\boldsymbol {x}, \boldsymbol {y}) + \sum_ {\boldsymbol {x} \in \mathcal {U}} \left[ \mathcal {L} _ {U D I V A} (\boldsymbol {x}) + \alpha \mathbb {E} _ {q \left(\boldsymbol {z} _ {c} \mid \boldsymbol {x}\right)} \log q _ {\varphi} (\boldsymbol {y} \mid \boldsymbol {z} _ {c}) \right].
+$$
+
+It is interesting to note the differences from the objective of CCVAE: no natural classifier emerges in the supervised case, and $\pmb{y}$ has to be imputed in the unsupervised case instead of being handled through variational inference as in CCVAE. Clearly such differences have a significant impact on performance, as demonstrated by the main results of this paper.
+
+# D ADDITIONAL RESULTS
+
+# D.1 SINGLE INTERVENTIONS
+
+Here we demonstrate single interventions, where we change the binary value of the desired attributes. To quantitatively evaluate the single interventions, we intervene on a single label and report the changes in log-probabilities assigned by a pre-trained classifier. If the single intervention only affects the characteristics of the chosen label, then there should be no change in the other classes and a change only in the chosen label. Intervening on all possible labels yields a confusion matrix, with the optimal result being a diagonal matrix with zero off-diagonal elements. We also report the condition number of each confusion matrix, given in the figure titles.
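The confusion-matrix evaluation can be sketched as follows; the entries here are hypothetical illustrations of an ideal and an entangled model, not our measured values:

```python
import numpy as np

def intervention_confusion(delta_logprobs):
    # delta_logprobs[i][j]: absolute change in the pre-trained classifier's
    # log-probability for class j after intervening on label i. An ideal
    # model changes only the intervened class, giving a diagonal matrix.
    m = np.asarray(delta_logprobs, dtype=float)
    return m, np.linalg.cond(m)

# Ideal case: each intervention affects only its own label.
ideal, cond_ideal = intervention_confusion(np.eye(3) * 2.0)
# Entangled case: interventions leak into the other labels too.
leaky, cond_leaky = intervention_confusion(np.eye(3) * 2.0 + 0.8)
```

A perfectly diagonal matrix has condition number 1, and leakage into other labels pushes the condition number up, which is why we report it alongside the matrices.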
+
+It is interesting to note that the interventions for CCVAE are subtle; this is because the latent $z_{c}^{i} \sim p(z_{c}^{i}|y_{i})$ will be centered around the mean. More striking interventions can be achieved by traversing along $z_{c}^{i}$.
+
+
+
+
+
+
+Figure 10: Confusion matrices for CCVAE for (from top left clockwise) $f = 0.004, 0.06, 0.2, 1.0$
+
+
+
+
+Figure 11: CCVAE. From left to right: original, reconstruction, then interventions from switching on the following labels: arched eyebrows, bags under eyes, bangs, black hair, blond hair, brown hair, bushy eyebrows, chubby, eyeglasses, heavy makeup, male, no beard, pale skin, receding hairline, smiling, wavy hair, wearing necktie, young.
+
+
+
+
+
+
+Figure 12: Confusion matrices for M2 for (from top left clockwise) $f = 0.004, 0.06, 0.2, 1.0$
+
+
+
+
+Figure 13: M2. From left to right: original, reconstruction, then interventions from switching on the following labels: arched eyebrows, bags under eyes, bangs, black hair, blond hair, brown hair, bushy eyebrows, chubby, eyeglasses, heavy makeup, male, no beard, pale skin, receding hairline, smiling, wavy hair, wearing necktie, young.
+
+
+
+
+
+
+Figure 14: Confusion matrices for DIVA for (from top left clockwise) $f = 0.004, 0.06, 0.2, 1.0$
+
+
+
+
+Figure 15: DIVA. From left to right: original, reconstruction, then interventions from switching on the following labels: arched eyebrows, bags under eyes, bangs, black hair, blond hair, brown hair, bushy eyebrows, chubby, eyeglasses, heavy makeup, male, no beard, pale skin, receding hairline, smiling, wavy hair, wearing necktie, young.
+
+
+
+
+
+
+Figure 16: Confusion matrices for MVAE for (from top left clockwise) $f = 0.004, 0.06, 0.2, 1.0$
+
+
+
+
+Figure 17: MVAE. From left to right: original, reconstruction, then interventions from switching on the following labels: arched eyebrows, bags under eyes, bangs, black hair, blond hair, brown hair, bushy eyebrows, chubby, eyeglasses, heavy makeup, male, no beard, pale skin, receding hairline, smiling, wavy hair, wearing necktie, young.
+
+# D.2 LATENT TRAVERSALS
+
+Here we provide more latent traversals for CCVAE in Figure 18 and for DIVA in Figure 19. CCVAE is able to smoothly alter characteristics, indicating that it can encapsulate a characteristic in a single dimension. DIVA, by contrast, is unable to alter the characteristics effectively, suggesting that it cannot encapsulate them.
+
+# D.3 GENERATION
+
+We provide results for the fidelity of image generation on CelebA, using the FID metric (Heusel et al., 2017); we omit results for Chexpert as the Inception model used in FID has not been trained on the typical features associated with X-rays. The results are given in Table 4. Interestingly, for low supervision rates MVAE obtains the best performance, but for higher supervision rates M2 outperforms MVAE. We posit that this is because MVAE imposes little structure on the latent space, so the product of experts can structure the representation purely for reconstruction without considering the labels, something which is not possible as the supervision rate is increased. CCVAE obtains competitive results with respect to M2. It is important to note that generative fidelity is not the focus of this work, as we focus purely on how to structure the latent space using labels. It is then no surprise that generative quality suffers, as structuring the latent space will potentially be at odds with the reconstruction term in the loss.
+
+Table 4: CelebA FID scores.
+
+| Model | f = 0.004 | f = 0.06 | f = 0.2 | f = 1.0 |
+| --- | --- | --- | --- | --- |
+| CCVAE | 127.956 | 121.84 | 121.751 | 120.457 |
+| M2 | 127.719 | 122.521 | 120.406 | 119.228 |
+| DIVA | 192.448 | 230.522 | 218.774 | 201.484 |
+| MVAE | 118.308 | 115.947 | 128.867 | 137.461 |
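The FID scores above are Fréchet distances between Gaussian fits to Inception features of real and generated images. As an illustrative sketch only (not the evaluation code of this work), the distance can be computed as follows under the simplifying assumption of diagonal covariances; the full metric instead uses a matrix square root of the covariance product:

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances.

    FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2});
    for diagonal covariances the trace term reduces to an elementwise sum.
    """
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return mean_term + cov_term
```

Identical feature statistics give a distance of zero; any shift in means or mismatch in variances increases the score.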
+
+# D.4 CONDITIONAL GENERATION
+
+To assess conditional generation, we first train an independent classifier on each dataset. We then conditionally generate samples given labels and evaluate them using this pre-trained classifier. Results are provided in Table 5. CCVAE and M2 are comparable in generative ability, but DIVA and MVAE perform poorly, with accuracies at the level of random guessing.
+
+Table 5: Generations accuracies.
+
+| Model | CelebA f=0.004 | CelebA f=0.06 | CelebA f=0.2 | CelebA f=1.0 | Chexpert f=0.004 | Chexpert f=0.06 | Chexpert f=0.2 | Chexpert f=1.0 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| CCVAE | 0.513 | 0.605 | 0.612 | 0.596 | 0.516 | 0.563 | 0.549 | 0.542 |
+| M2 | 0.499 | 0.61 | 0.612 | 0.611 | 0.503 | 0.547 | 0.547 | 0.558 |
+| DIVA | 0.501 | 0.501 | 0.501 | 0.501 | 0.499 | 0.503 | 0.503 | 0.503 |
+| MVAE | 0.501 | 0.501 | 0.501 | 0.501 | 0.499 | 0.499 | 0.499 | 0.499 |
+
+# D.5 DIVERSITY OF CONDITIONAL GENERATIONS
+
+We also report more examples for diversity, as in Figure 5, in Figure 20.
+
+# D.6 MULTI-CLASS SETTING
+
+Here we provide results for the multi-class setting of MNIST and FashionMNIST. The multi-class setting is somewhat tangential to our work, but we include it for completeness. For CCVAE, we have some flexibility over the size of the latent space. Trying to encapsulate a representation for each label is not well suited to this setting, as it is not clear how one could alter the representation of an image being a 6 whilst preserving the representation of it being an 8. In fact, there is really only one label in this setting, but it takes multiple values. With this in mind, we can make an explicit choice about how the latent space is structured: we can set $z_{c} \in \mathbb{R}$ or $z_{c} \in \mathbb{R}^{N}$ , or conversely store all of the representation in $z_{c}$ , i.e. $z_{\backslash c} = \emptyset$ . Furthermore, we no longer need to enforce the factorization $q_{\varphi}(y \mid z_c) = \prod_i q(y_i|z_c^i)$ , and can instead parameterize the classifier by a single function $\mathcal{F}: \mathbb{R}^N \to \mathbb{R}^M$ where $M$ is the number of possible classes.
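To make the last point concrete, here is a minimal sketch of such a classifier head $\mathcal{F}: \mathbb{R}^N \to \mathbb{R}^M$: a single softmax over $M$ classes replacing the per-label factorization. The linear map, its dimensions, and the random latents are hypothetical illustrations, not the architecture used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical linear map F: R^N -> R^M playing the role of q(y | z_c)
N, M = 8, 10            # latent size and number of classes (illustrative)
W = rng.normal(size=(N, M))
b = np.zeros(M)

def q_y_given_zc(z_c):
    """Single softmax over M classes, replacing the per-label
    factorization prod_i q(y_i | z_c^i) of the multi-label setting."""
    return softmax(z_c @ W + b)

probs = q_y_given_zc(rng.normal(size=(4, N)))  # batch of 4 latents
```

Each row of `probs` is a proper distribution over the $M$ mutually exclusive classes, which is exactly what the multi-class setting requires.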
+
+Classification We provide the classification results in Table 6.
+
+
+Figure 18: Various latent traversals for CCVAE.
+
+
+Figure 19: Various latent traversals for DIVA.
+
+
+Figure 20: CCVAE, variance in reconstructions when intervening on a single label. From left to right: reconstruction, then interventions from switching on the following labels: arched eyebrows, bags under eyes, bangs, black hair, blond hair, brown hair, bushy eyebrows, chubby, eyeglasses, heavy makeup, male, no beard, pale skin, receding hairline, smiling, wavy hair, wearing necktie, young.
+
+Table 6: Additional classification accuracies.
+
+| Model | MNIST f=0.004 | MNIST f=0.06 | MNIST f=0.2 | MNIST f=1.0 | FashionMNIST f=0.004 | FashionMNIST f=0.06 | FashionMNIST f=0.2 | FashionMNIST f=1.0 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| CCVAE | 0.927 | 0.974 | 0.979 | 0.988 | 0.741 | 0.865 | 0.879 | 0.901 |
+| M2 | 0.918 | 0.962 | 0.968 | 0.981 | 0.756 | 0.848 | 0.860 | 0.892 |
+
+Conditional Generation We provide classification accuracies for a pre-trained classifier, using conditionally generated samples as input and the conditioning label as the target. We also report the mutual information to give an indication of how far out-of-distribution the samples are. In order to estimate the uncertainty, we transform a fixed pre-trained classifier into a Bayesian predictive classifier that integrates over the posterior distribution of parameters $\omega$ as $p(\boldsymbol{y} \mid \boldsymbol{x}, \mathcal{D}) = \int p(\boldsymbol{y} \mid \boldsymbol{x}, \omega) p(\omega \mid \mathcal{D}) \mathrm{d}\omega$ . The utility of classifier uncertainties for out-of-distribution detection has previously been explored by Smith & Gal (2018), where dropout is used at test time to estimate the mutual information (MI) between the predicted label $\boldsymbol{y}$ and the parameters $\omega$ (Gal, 2016; Smith & Gal, 2018) as
+
+$$
+I(\boldsymbol{y}, \omega \mid \boldsymbol{x}, \mathcal{D}) = H\left[ p(\boldsymbol{y} \mid \boldsymbol{x}, \mathcal{D}) \right] - \mathbb{E}_{p(\omega \mid \mathcal{D})}\left[ H\left[ p(\boldsymbol{y} \mid \boldsymbol{x}, \omega) \right] \right].
+$$
+
+However, the Monte Carlo (MC) dropout approach has the disadvantage of requiring ensembling over multiple instances of the classifier for a robust estimate, as well as repeated forward passes through the classifier to estimate the MI. To mitigate this, we instead employ a sparse variational GP (with 200 inducing points) as a replacement for the last linear layer of the classifier, fitting just the GP to the data and labels while holding the rest of the classifier fixed. In our experience, this provides a more robust and cheaper alternative to MC-dropout for estimating the MI. Results are provided in Table 7.
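Whichever posterior approximation supplies the samples (MC-dropout or the sparse variational GP), the MI estimate itself is a simple difference of entropies. A minimal sketch, assuming an array of sampled predictive distributions:

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy in nats, with a small constant for stability."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def mutual_information(samples):
    """MI(y; omega | x, D) per input point.

    samples: array of shape (n_draws, n_points, n_classes) holding the
    predictive distributions p(y | x, omega) for n_draws posterior
    samples of the parameters omega.
    """
    mean_p = samples.mean(axis=0)                # p(y | x, D)
    predictive_H = entropy(mean_p)               # H[p(y | x, D)]
    expected_H = entropy(samples).mean(axis=0)   # E_omega[H[p(y | x, omega)]]
    return predictive_H - expected_H
```

When all posterior draws agree the MI is zero; maximal disagreement between confident draws yields MI close to $\log$ of the number of classes, flagging the input as out-of-distribution.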
+
+Latent Traversals We can also perform latent traversals for the multi-class setting. Here, we perform linear interpolation on the polytope where the corners are obtained from the network $\pmb{\mu}_{\psi}(\pmb{y})$ for four different classes. We provide the reconstructions in Figure 21.
+
+Table 7: Pre-trained classifier accuracies and MI for MNIST (top) and FashionMNIST (bottom).
+
+| Dataset | Model | f=0.004 Acc | f=0.004 MI | f=0.06 Acc | f=0.06 MI | f=0.2 Acc | f=0.2 MI | f=1.0 Acc | f=1.0 MI |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MNIST | CCVAE | 0.910 | 0.020 | 0.954 | 0.014 | 0.961 | 0.013 | 0.973 | 0.010 |
+| MNIST | M2 | 0.883 | 0.035 | 0.929 | 0.026 | 0.934 | 0.024 | 0.948 | 0.020 |
+| FashionMNIST | CCVAE | 0.734 | 0.025 | 0.806 | 0.024 | 0.801 | 0.028 | 0.798 | 0.029 |
+| FashionMNIST | M2 | 0.750 | 0.032 | 0.792 | 0.032 | 0.787 | 0.032 | 0.789 | 0.031 |
+
+
+Figure 21: CCVAE latent traversals for MNIST and FashionMNIST. It is interesting to see how one class transforms into another, e.g. for MNIST we see the end of the 5 curling around to form an 8 and a steady elongation of the torso when traversing from t-shirt to dress.
+
+
+
+Diversity in Conditional Generations Here we show how we can introduce diversity in the conditional generations whilst keeping attributes such as pen stroke and orientation constant. Comparing CCVAE (Figure 22) with M2 (Figure 23), where M2 has to sample from $z$ to introduce diversity, indicates that M2 is unable to introduce diversity without affecting other attributes.
+
+Interventions We can also perform interventions on individual classes, as shown in Figure 24.
+
+
+Figure 22: CCVAE conditional generations with $z_{\backslash c}$ fixed. Here we can see that CCVAE is able to introduce diversity whilst preserving the "style" of the digit, e.g. pen width and tilt.
+
+
+Figure 23: M2 conditional generations. Here we can see that M2 is unable to introduce diversity without altering the "style" of the digit, e.g. pen width and tilt.
+
+
+Figure 24: Left: CCVAE, right: M2. As with other approaches, we can also perform wholesale interventions on each class whilst preserving the style.
\ No newline at end of file
diff --git a/capturinglabelcharacteristicsinvaes/images.zip b/capturinglabelcharacteristicsinvaes/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e68f272808eca5d55b58a30f731fb61c085b2d70
--- /dev/null
+++ b/capturinglabelcharacteristicsinvaes/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c93efe8e0d8499357e1aee17894ce303e82a6cfa7c085d19c5e8035435f26a36
+size 4212957
diff --git a/capturinglabelcharacteristicsinvaes/layout.json b/capturinglabelcharacteristicsinvaes/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b3ea2169d963bd78eb808a259537042570b45f5a
--- /dev/null
+++ b/capturinglabelcharacteristicsinvaes/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3780d173022a487a1109f1c0d7a29775293bb147be46b3e801441bf2a2d00c47
+size 718167
diff --git a/categoricalnormalizingflowsviacontinuoustransformations/74ab2e62-e5e6-4b7f-a617-c2de792f4968_content_list.json b/categoricalnormalizingflowsviacontinuoustransformations/74ab2e62-e5e6-4b7f-a617-c2de792f4968_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5c3eac1a47d75cd216d71dd225b52967ab6776c0
--- /dev/null
+++ b/categoricalnormalizingflowsviacontinuoustransformations/74ab2e62-e5e6-4b7f-a617-c2de792f4968_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31b662d0a462dade157ad3e29223966b879512daff521a125986e18036221bd0
+size 164506
diff --git a/categoricalnormalizingflowsviacontinuoustransformations/74ab2e62-e5e6-4b7f-a617-c2de792f4968_model.json b/categoricalnormalizingflowsviacontinuoustransformations/74ab2e62-e5e6-4b7f-a617-c2de792f4968_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..15423d5a9d6edbeffb120f199cbd5e902ef25447
--- /dev/null
+++ b/categoricalnormalizingflowsviacontinuoustransformations/74ab2e62-e5e6-4b7f-a617-c2de792f4968_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b395b24d9597ee74ebaa55ed81bbccfdaf262f21afa55d56c39557082e99b17
+size 192582
diff --git a/categoricalnormalizingflowsviacontinuoustransformations/74ab2e62-e5e6-4b7f-a617-c2de792f4968_origin.pdf b/categoricalnormalizingflowsviacontinuoustransformations/74ab2e62-e5e6-4b7f-a617-c2de792f4968_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..186d3fc160a515da067c32b76739c07062f5a2c9
--- /dev/null
+++ b/categoricalnormalizingflowsviacontinuoustransformations/74ab2e62-e5e6-4b7f-a617-c2de792f4968_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:394b45bbd9756226a81a975436f99f0b7a77056a69839abaa6e040993ed74ac1
+size 1701031
diff --git a/categoricalnormalizingflowsviacontinuoustransformations/full.md b/categoricalnormalizingflowsviacontinuoustransformations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..39c9df566631dbdb091ea947fd2330faf6c943be
--- /dev/null
+++ b/categoricalnormalizingflowsviacontinuoustransformations/full.md
@@ -0,0 +1,612 @@
+# CATEGORICAL NORMALIZING FLOWS VIA CONTINUOUS TRANSFORMATIONS
+
+Phillip Lippe
+
+University of Amsterdam, QUVA lab
+
+p.lippe@uva.nl
+
+Efstratios Gavves
+
+University of Amsterdam
+
+egavves@uva.nl
+
+# ABSTRACT
+
+Despite their popularity, to date, the application of normalizing flows on categorical data remains limited. The current practice of using dequantization to map discrete data to a continuous space is inapplicable as categorical data has no intrinsic order. Instead, categorical data have complex and latent relations that must be inferred, like the synonymy between words. In this paper, we investigate Categorical Normalizing Flows, that is, normalizing flows for categorical data. By casting the encoding of categorical data in continuous space as a variational inference problem, we jointly optimize the continuous representation and the model likelihood. Using a factorized decoder, we introduce an inductive bias to model any interactions in the normalizing flow. As a consequence, we not only simplify the optimization compared to having a joint decoder, but also make it possible to scale up to a large number of categories, which is currently impossible with discrete normalizing flows. Based on Categorical Normalizing Flows, we propose GraphCNF, a permutation-invariant generative model on graphs. GraphCNF implements a three-step approach, modeling the nodes, edges, and adjacency matrix stepwise for increased efficiency. On molecule generation, GraphCNF outperforms both the one-shot and autoregressive flow-based state-of-the-art.
+
+# 1 INTRODUCTION
+
+Normalizing Flows have been popular for tasks with continuous data like image modeling (Dinh et al., 2017; Kingma and Dhariwal, 2018; Ho et al., 2019) and speech generation (Kim et al., 2019; Prenger et al., 2019) by providing efficient parallel sampling and exact density evaluation. The concept that normalizing flows rely on is the rule of change of variables, a continuous transformation designed for continuous data. However, there exist many data types typically encoded as discrete, categorical variables, like language and graphs, where normalizing flows are not straightforward to apply.
+
+To address this, it has recently been proposed to discretize the transformations inside normalizing flows to act directly on discrete data. Unfortunately, these discrete transformations have been shown to be limited in terms of vocabulary size and layer depth due to gradient approximations (Hoogeboom et al., 2019; Tran et al., 2019). For the specific case of discrete but ordinal data, like images where integers represent quantized values, a popular strategy is to add a small amount of noise to each value (Dinh et al., 2017; Ho et al., 2019). It is unnatural, however, to apply such dequantization techniques to the general case of categorical data, where values represent categories with no intrinsic order. Treating these categories as integers for dequantization biases the data towards a non-existing order, and makes the modeling task significantly harder. Besides, relations between categories are often multi-dimensional, for example word meanings, and cannot be represented with dequantization.
+
+In this paper, we investigate normalizing flows for the general case of categorical data. To account for discontinuity, we propose continuous encodings in which different categories correspond to unique, non-overlapping and thus close-to-deterministic volumes in a continuous latent space. Instead of pre-specifying the non-overlapping volumes per category, we resort to variational inference to jointly learn those and model the likelihood by a normalizing flow at the same time. This work is not the first to propose variational inference with normalizing flows, mostly considered for improving the flexibility of the approximate posterior (Kingma et al., 2016; Rezende and Mohamed, 2015; Van Den Berg et al., 2018). Different from previous works, we use variational inference to learn
+
+a continuous representation $z$ of the discrete categorical data $x$ to a normalizing flow. A similar idea has been investigated in (Ziegler and Rush, 2019), who use a variational autoencoder structure with the normalizing flow being the prior. As both their decoder and normalizing flow model (complex) dependencies between categorical variables, (Ziegler and Rush, 2019) rely on intricate yet sensitive learning schedules for balancing the likelihood terms. Instead, we propose to separate the representation and relation modeling by factorizing the decoder both over the categorical variable $x$ and the conditioning latent $z$ . This forces the encoder and decoder to focus only on the mapping from categorical data to continuous encodings, and not model any interactions. By inserting this inductive bias, we move all complexity into the flow. We call this approach Categorical Normalizing Flows (CNF).
+
+Categorical Normalizing Flows can be applied to any task involving categorical variables, but we primarily focus on modeling graphs. Current state-of-the-art approaches often rely on autoregressive models (Li et al., 2018; Shi et al., 2020; You et al., 2018) that view graphs as sequences, although there exists no intrinsic order of the nodes. In contrast, normalizing flows can perform generation in parallel, making a definition of an order unnecessary. By treating both nodes and edges as categorical variables, we employ our variational inference encoding and propose GraphCNF. GraphCNF is a novel permutation-invariant normalizing flow for graph generation which assigns equal likelihood to any ordering of the nodes. Meanwhile, GraphCNF efficiently encodes the node attributes, edge attributes, and graph structure in three consecutive steps. As shown in the experiments, the improved encoding and flow architecture allows GraphCNF to significantly outperform both the autoregressive and parallel flow-based state-of-the-art. Further, we show that Categorical Normalizing Flows can be used in problems with regular categorical variables, like modeling natural language or sets.
+
+Our contributions are summarized as follows. Firstly, we propose Categorical Normalizing Flows, which use variational inference with a factorized decoder to move all complexity into the prior and scale up to a large number of categories. Secondly, building on Categorical Normalizing Flows, we propose GraphCNF, a permutation-invariant normalizing flow for graph generation. On molecule generation, GraphCNF sets a new state-of-the-art for flow-based methods, outperforming one-shot and autoregressive baselines. Finally, we show that simple mixture models for the encoding distributions are accurate, efficient, and generalize across a multitude of setups, including sets, language, and graphs.
+
+# 2 CATEGORICAL NORMALIZING FLOWS
+
+# 2.1 NORMALIZING FLOWS ON CONTINUOUS DATA
+
+A normalizing flow (Rezende and Mohamed, 2015; Tabak and Vanden Eijnden, 2010) is a generative model that models a probability distribution $p(\boldsymbol{z}^{(0)})$ by applying a sequence of invertible, smooth mappings $f_{1},\ldots ,f_{K}:\mathbb{R}^{d}\to \mathbb{R}^{d}$ . Using the rule of change of variables, the likelihood of the input $\boldsymbol{z}^{(0)}$ is determined as follows:
+
+$$
+p \left(\boldsymbol {z} ^ {(0)}\right) = p \left(\boldsymbol {z} ^ {(K)}\right) \cdot \prod_ {k = 1} ^ {K} \left| \det \frac {\partial f _ {k} \left(\boldsymbol {z} ^ {(k - 1)}\right)}{\partial \boldsymbol {z} ^ {(k - 1)}} \right| \tag {1}
+$$
+
+where $\boldsymbol{z}^{(k)} = f_k(\boldsymbol{z}^{(k-1)})$ , and $p(\boldsymbol{z}^{(K)})$ represents a prior distribution. This calculation requires computing the Jacobian of the mappings $f_1, \ldots, f_K$ , which is expensive for arbitrary functions. Thus, the mappings are often designed to allow efficient computation of their determinants. One such design is the coupling layer proposed by Dinh et al. (2017), which has been shown to work well with neural networks. For a detailed introduction to normalizing flows, we refer the reader to Kobyzev et al. (2019).
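A minimal numeric sketch of the change-of-variables rule in Equation 1, for a single invertible affine mapping with a standard-normal prior (an illustrative toy flow, not a coupling layer):

```python
import numpy as np

def standard_normal_logpdf(z):
    """Log-density of the prior p(z^(K)) = N(0, 1)."""
    return -0.5 * (z ** 2 + np.log(2 * np.pi))

def affine_flow_logprob(z0, a, b):
    """log p(z0) for the invertible map f(z) = a*z + b (a != 0):
    log p(z0) = log p(f(z0)) + log |det df/dz0|, with df/dz0 = a."""
    zK = a * z0 + b
    return standard_normal_logpdf(zK) + np.log(abs(a))
```

As a sanity check, this matches the analytic density of $z_0 \sim \mathcal{N}(-b/a, 1/a^2)$, which is the exact pushforward of the prior through the inverse map.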
+
+# 2.2 NORMALIZING FLOWS ON CATEGORICAL DATA
+
+We define $\pmb{x} = \{x_{1},\dots,x_{S}\}$ to be a multivariate, categorical random variable, where each element $x_{i}$ is itself a categorical variable of $K$ categories with no intrinsic order. For instance, $\pmb{x}$ could be a sentence with $x_{i}$ being the words. Our goal is to learn the joint probability mass function, $P_{\mathrm{model}}(\pmb{x})$ , via a normalizing flow. Specifically, as normalizing flows constitute a class of continuous transformations, we aim to learn a continuous latent space in which each categorical choice of a variable $x_{i}$ maps to a stochastic continuous variable $\pmb{z}_{i} \in \mathbb{R}^{d}$ whose distribution we learn.
+
+Compared to variational autoencoders (Kingma and Welling, 2014) and latent normalizing flows (Ziegler and Rush, 2019), we want to ensure that all modeling complexity lies solely in the prior, and to keep a lossless reconstruction from the latent space. To implement this, we simplify the decoder by factorizing it over the latent variables: $p(\pmb{x}|\pmb{z}) = \prod_{i} p(x_{i}|\pmb{z}_{i})$ . Factorizing the conditional likelihood means that we enforce independence between the categorical variables $x_{i}$ given their learned continuous encodings $z_{i}$ . Therefore, any interaction between the categorical variables $\pmb{x}$ must be learned inside the normalizing flow. If, in this setup, the encoding distributions of multiple categories were to overlap, the prior would be limited in the dependencies over $x_{1},\ldots,x_{S}$ it can model, as it could not clearly distinguish between all categories. Therefore, the encoder $q(\pmb{z}|\pmb{x})$ is optimized to provide suitable representations of the categorical variables to the flow while separating the different categories in latent space. Meanwhile, the decoder is incentivized to be deterministic, i.e. to precisely reconstruct $x$ from $z$ , in order to minimize the overlap of categories. Overall, our objective becomes:
+
+$$
+\mathbb {E} _ {\boldsymbol {x} \sim P _ {\mathrm {d a t a}}} \left[ \log P _ {\mathrm {m o d e l}} (\boldsymbol {x}) \right] \geq \mathbb {E} _ {\boldsymbol {x} \sim P _ {\mathrm {d a t a}}} \mathbb {E} _ {\boldsymbol {z} \sim q (\cdot | \boldsymbol {x})} \left[ \log \frac {p _ {\mathrm {m o d e l}} (\boldsymbol {z}) \prod_ {i} p \left(x _ {i} \mid \boldsymbol {z} _ {i}\right)}{q (\boldsymbol {z} | \boldsymbol {x})} \right] \tag {2}
+$$
+
+We refer to this framework as Categorical Normalizing Flows. In contrast to dequantization, the continuous encoding $z$ is not bounded by the domain of the encoding distribution. Instead, the partitioning is jointly learned with the model likelihood. Furthermore, we can freely choose the dimensionality of the continuous variables, $z_{i}$ , to fit the number of categories and their relations.
+
+Modeling the encoder The encoder $q(\boldsymbol{z}|\boldsymbol{x})$ and decoder $p(x_i|\boldsymbol{z}_i)$ can be implemented in several ways. The first and main setup we consider is to encode each category by a logistic distribution with a learned mean and scale. Our encoding distribution $q(\boldsymbol{z}_i)$ is therefore a mixture of $K$ logistics, one per category. With $g$ denoting the logistic, the encoder becomes $q(\boldsymbol{z}|\boldsymbol{x}) = \prod_{i=1}^{S} g(\boldsymbol{z}_i|\mu(x_i),\sigma(x_i))$ . In this setup, the decoder likelihood can actually be obtained from the encoder by applying Bayes' rule: $p(x_i|\boldsymbol{z}_i) = \frac{\tilde{p}(x_i)q(\boldsymbol{z}_i|x_i)}{\sum_{\hat{x}}\tilde{p}(\hat{x})q(\boldsymbol{z}_i|\hat{x})}$ with $\tilde{p}(x_i)$ being a prior over categories. Hence, we do not need to learn a separate decoder but can calculate the likelihood from the encoder's parameters. The objective in Equation 2 simplifies to the following:
+
+$$
+\mathbb {E} _ {\boldsymbol {x} \sim P _ {\mathrm {d a t a}}} \left[ \log P _ {\mathrm {m o d e l}} (\boldsymbol {x}) \right] \geq \mathbb {E} _ {\boldsymbol {x} \sim P _ {\mathrm {d a t a}}} \mathbb {E} _ {\boldsymbol {z} \sim q (\cdot | \boldsymbol {x})} \left[ \log \left(p _ {\mathrm {m o d e l}} (\boldsymbol {z}) \prod_ {i = 1} ^ {S} \frac {\tilde {p} (x _ {i})}{\sum_ {\hat {x}} \tilde {p} (\hat {x}) q (\boldsymbol {z} _ {i} | \hat {x})}\right) \right] \tag {3}
+$$
+
+Note that the term $q(\mathbf{z}_i | x_i)$ in the numerator of $p(x_i | \mathbf{z}_i)$ cancels out with the denominator in Equation 2. Given that the encoder and decoder share their parameters, we remove any possible mismatch between $p(x_i | \mathbf{z}_i)$ and $q(x_i | \mathbf{z}_i)$ . This allows changes in the encoding distribution to be directly propagated to the decoder, and further moves the focus of the training to the prior. Besides, the mixture encoding introduces a dependency of the true posterior $p(\mathbf{z} | \mathbf{x})$ on the approximate posterior $q(\mathbf{z} | \mathbf{x})$ , which potentially tightens the variational gap compared to a separately learned decoder. During testing, we can use importance sampling (Burda et al., 2016) to further reduce the gap. Details on the posterior dependency in the variational gap, and on the training and test steps, can be found in Appendix A.1.
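To make the shared-parameter decoder concrete, here is a minimal sketch of recovering $p(x_i \mid z_i)$ from a mixture-of-logistics encoder via Bayes' rule, for a one-dimensional latent; the uniform prior $\tilde{p}$ over categories is an illustrative assumption:

```python
import numpy as np

def logistic_logpdf(z, mu, s):
    """Log-density of a logistic distribution with mean mu and scale s."""
    t = (z - mu) / s
    return -t - np.log(s) - 2 * np.log1p(np.exp(-t))

def decoder_probs(z, mus, scales, prior=None):
    """p(x = k | z) obtained from the encoder mixture via Bayes' rule:
    p(k | z) proportional to p~(k) q(z | k), so no separate decoder
    network is learned."""
    K = len(mus)
    log_prior = np.log(np.full(K, 1.0 / K) if prior is None else np.asarray(prior))
    log_joint = log_prior + logistic_logpdf(z, np.asarray(mus), np.asarray(scales))
    log_joint -= log_joint.max()          # stabilize before exponentiating
    p = np.exp(log_joint)
    return p / p.sum()
```

A latent close to the mean of one mixture component is decoded near-deterministically to that category, which is exactly the behaviour the factorized objective incentivizes.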
+
+The mixture model is simple and efficient, but might be limited in the distributions it can express. To test whether greater encoding flexibility is needed, we experiment with adding flows conditioned on the categories which transform each logistic into a more complex distribution. We refer to this approach as linear flows. Taking a step further, we can also represent the encoder $q(z|x)$ with a flow across categorical variables, similar to variational dequantization (Ho et al., 2019). Experiments presented in Section 5 show however that a simple mixture of logistics usually suffices.
+
+# 3 GRAPH GENERATION WITH CATEGORICAL NORMALIZING FLOWS
+
+A graph $\mathcal{G} = (V, E)$ is defined by a set of nodes $V$ and a set of edges $E$ representing connections between nodes. When modeling a graph, we must take into account the node and edge attributes, often represented by categorical data, as well as the overall graph structure. Moreover, nodes and edges are better viewed as sets and not sequences, since any permutation represents the same graph and should be assigned the same likelihood.
+
+We propose GraphCNF, a normalizing flow for graph generation, which is invariant to the order of nodes by generating all nodes and edges at once. Given a graph $\mathcal{G}$ , we model each node and edge as a
+
+
+Figure 1: Visualization of GraphCNF for an example graph of five nodes. We add the node and edge attributes, as well as the virtual edges stepwise to the latent space while leveraging the graph structure in the coupling layers. The last step considers a fully connected graph with features per edge.
+
+separate categorical variable where the categories correspond to their discrete attributes. To represent the graph structure, i.e. which pairs of nodes are connected by an edge and which are not, we add an extra category to the edges representing missing or virtual edges. Hence, to model an arbitrary graph, we consider an edge variable for every possible tuple of nodes.
+
+To apply normalizing flows on the node and edge categorical variables, we map them into continuous latent space using Categorical Normalizing Flows. Subsequent coupling layers map those representations to a continuous prior distribution. Thereby, GraphCNF uses two crucial design choices for graph modeling: (1) we perform the generation stepwise by first encoding the nodes, edges and then the adjacency matrix for improved efficiency, and (2) we introduce an inductive bias that the model assigns equal likelihood to any ordering of the nodes.
+
+# 3.1 THREE-STEP GENERATION
+
+Modeling all edges, including the virtual ones, requires a significant number of latent variables and is computationally expensive. However, normalizing flows have been shown to benefit from splitting off latent variables at earlier layers, which also increases efficiency (Dinh et al., 2017; Kingma and Dhariwal, 2018). Thus, we propose to add the node types, edge attributes and graph structure stepwise to the latent space, as visualized in Figure 1. In the first step, we encode the nodes into continuous latent space, $\mathbf{z}_0^{(V)}$ , using Categorical Normalizing Flows. On those, we apply a group of coupling layers, $f_1$ , which additionally use the adjacency matrix and the edge attributes, denoted by $E_{attr}$ , as input. Thus, we can summarize the first step as:
+
+$$
+z_{1}^{(V)} = f_{1}\left(z_{0}^{(V)}; E, E_{\text{attr}}\right) \tag{4}
+$$
+
+The second step incorporates the edge attributes, $E_{attr}$ , into latent space. Hence, all edges of the graph except the virtual edges are encoded into latent variables, $z_0^{(E_{attr})}$ , representing their attribute. The following coupling layers, denoted by $f_2$ , transform both the node and edge attribute variables:
+
+$$
+z_{2}^{(V)}, z_{1}^{(E_{\text{attr}})} = f_{2}\left(z_{1}^{(V)}, z_{0}^{(E_{\text{attr}})}; E\right) \tag{5}
+$$
+
+Finally, we add the virtual edges to the latent variable model as $z_0^{(E^*)}$ . Thereby, we need to slightly adjust our encoding from Categorical Normalizing Flows as we consider the virtual edges as an additional category of the edges. While the other categories are already encoded by $z_1^{(E_{attr})}$ , we add a separate encoding distribution for the virtual edges, for which we use a simple logistic. Meanwhile, the decoder needs to be applied on all edges, as we need to distinguish the continuous representation between virtual and non-virtual edges. Overall, the mapping can be summarized as:
+
+$$
+z_{3}^{(V)}, z_{1}^{(E)} = f_{3}\left(z_{2}^{(V)}, z_{0}^{(E)}\right) \quad \text{where} \quad z_{0}^{(E)} = \left[ z_{1}^{(E_{\text{attr}})}, z_{0}^{(E^{*})} \right] \tag{6}
+$$
+
+where the latent variables $z_3^{(V)}$ and $z_1^{(E)}$ are trained to follow a prior distribution. During sampling, we first inverse $f_3$ and determine the general graph structure. Next, we inverse $f_2$ and reconstruct the edge attributes. Finally, we apply the inverse of $f_1$ and determine the node types.
+
+# 3.2 PERMUTATION-INVARIANT GRAPH MODELING
+
+To achieve permutation invariance for the likelihood estimate, the transformations of the coupling layers need to be independent of the node order. This concerns both the split of variables to be transformed and the network that predicts the transformation parameters. We ensure the first aspect by applying a channel masking strategy (Dinh et al., 2017), where the split is performed over the latent dimensions of each node and edge separately, making it independent of the node order. For the second aspect, we leverage the graph structure in the coupling networks and apply graph neural networks. In the first step of GraphCNF, $f_{1}$ , we use a relational GCN (Schlichtkrull et al., 2018) which incorporates the categorical edge attributes into the layer. For the second and third steps, we need a graph network that supports the modeling of both node and edge features. We implement this by alternating between updates of the edge and the node features. Specifically, given node features $v^{t}$ and edge features $e^{t}$ at layer $t$ , we update them as follows:
+
+$$
+\boldsymbol{v}^{t+1} = f_{\text{node}}\left(\boldsymbol{v}^{t}; \boldsymbol{e}^{t}\right), \quad \boldsymbol{e}^{t+1} = f_{\text{edge}}\left(\boldsymbol{e}^{t}; \boldsymbol{v}^{t+1}\right) \tag{7}
+$$
+
+We call this network Edge-GNN, and compare different implementations of $f_{node}$ and $f_{edge}$ in Appendix B. Using both design choices, GraphCNF models a permutation invariant distribution.
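A minimal sketch of the alternating update in Equation 7. The mean aggregation, the concatenations, and the weight shapes are illustrative assumptions, not the exact Edge-GNN parameterization (different choices are what Appendix B compares); the point is that such an update is equivariant to node permutations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: n nodes with features v, dense edge features e[i, j]
n, d_v, d_e = 5, 8, 8
v = rng.normal(size=(n, d_v))
e = rng.normal(size=(n, n, d_e))

# Hypothetical weights for the node and edge update functions
W_v = rng.normal(size=(d_v + d_e, d_v)) * 0.1
W_e = rng.normal(size=(d_e + 2 * d_v, d_e)) * 0.1

def edge_gnn_layer(v, e):
    """One alternating update: nodes first (using an incident-edge
    summary), then edges (using the updated endpoint nodes)."""
    agg = e.mean(axis=1)                                   # per-node edge summary
    v_new = np.tanh(np.concatenate([v, agg], axis=-1) @ W_v)
    vi = np.broadcast_to(v_new[:, None, :], (n, n, d_v))   # sender features
    vj = np.broadcast_to(v_new[None, :, :], (n, n, d_v))   # receiver features
    e_new = np.tanh(np.concatenate([e, vi, vj], axis=-1) @ W_e)
    return v_new, e_new

v1, e1 = edge_gnn_layer(v, e)
```

Because the aggregation is over all incident edges and the weights are shared across nodes, relabeling the nodes simply relabels the outputs, which is the property the coupling networks need for a permutation-invariant likelihood.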
+
+# 4 RELATED WORK
+
+Dequantization Applying continuous normalizing flows on discrete data leads to undesired density models where arbitrarily high likelihoods are placed on particular values (Theis et al., 2016; Uria et al., 2013). A common solution to this problem is to dequantize the data $\pmb{x}$ by adding noise $\pmb{u} \in [0,1)^D$ . Theis et al. (2016) have shown that modeling $p_{\mathrm{model}}(\pmb{x} + \pmb{u})$ lower-bounds the discrete distribution $P_{\mathrm{model}}(\pmb{x})$ . The noise distribution $q(\pmb{u}|\pmb{x})$ is usually uniform or learned by a second normalizing flow. The latter is referred to as variational dequantization and has been proven to be crucial for state-of-the-art image modeling (Ho et al., 2019; Hoogeboom et al., 2020). Categories, however, are not quantized values, so that ordering them as integers introduces bias to the representation.
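For reference, a minimal sketch of the uniform dequantization described above, i.e. the technique that is well suited to ordinal data but, as argued, ill-suited to unordered categories:

```python
import numpy as np

rng = np.random.default_rng(0)

def dequantize(x, rng):
    """Uniform dequantization for ordinal data: y = x + u with
    u ~ U[0, 1)^D, so that E_u[log p(x + u)] lower-bounds log P(x)."""
    return x + rng.random(x.shape)

pixels = rng.integers(0, 256, size=(4, 3))      # quantized ordinal values
y = dequantize(pixels.astype(np.float64), rng)  # continuous, recoverable by floor
```

The original integers are always recoverable by flooring, which is exactly the lossless property that breaks down once category indices carry no order.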
+
+Discrete NF Recent works have investigated normalizing flows with discretized transformations. Hoogeboom et al. (2019) proposed to use additive coupling layers with rounding operators to ensure discrete output. Tran et al. (2019) discretize the output with a Gumbel-Softmax approximating an argmax operator, so that the coupling layers resemble a reversible shift operator. While both approaches achieved competitive results, the gradient approximations required by the discrete operations have been shown to introduce new challenges, such as limiting the number of layers or the vocabulary size.
+
+Latent NF Several works have investigated the application of normalizing flows in variational autoencoders (VAEs) (Kingma and Welling, 2014) for increasing the flexibility of the approximate posterior (Kingma et al., 2016; Van Den Berg et al., 2018; Tomczak and Welling, 2017). However, VAEs model a lower bound of the true likelihood. To minimize this gap, Ziegler and Rush (2019) proposed Latent Normalizing Flows, which move the main model complexity into the prior by using normalizing flows. In contrast to CNFs, Latent NFs have a joint decoder over the latent space, $p(x|z)$ , which allows interactions between variables to be modeled in the decoder. Thus, instead of an inductive bias pushing all complexity into the normalizing flow, Latent NFs rely on a loss schedule that weights the decoder loss much higher. This pushes the decoder towards being deterministic, but can lead to unstable optimization due to neglecting the flow's likelihood. Further, experiments on sequence tasks show Latent NFs to be competitive, but, as we observed in our experiments, they are still outperformed by an LSTM baseline.
+
+Graph modeling The first generative models for graphs were autoregressive (Liao et al., 2019; You et al., 2018), generating nodes and edges in sequential order. While memory-efficient, they are slow to sample from and assume an order over the set of nodes. The first application of normalizing flows to graph generation was introduced by Liu et al. (2019), where a flow modeled the node representations of a pretrained autoencoder. The recent GraphNVP (Madhawa et al., 2019) and GraphAF (Shi et al., 2020) proposed normalizing flows for molecule generation. GraphNVP consists of two separate flows, one modeling the adjacency matrix and a second modeling the node types. Although it allows parallel generation, the model is sensitive to the node order due to the masking strategy and feature networks in its coupling layers. GraphAF is an autoregressive normalizing flow that samples nodes and edges sequentially but allows parallel training. However, both flows use standard uniform dequantization to represent the node and edge categories. VAEs have also been proposed for latent-based graph generation (Simonovsky and Komodakis, 2018; Ma et al., 2018; Liu et al., 2018; Jin et al., 2018). Although these models can be permutation-invariant, they model a lower bound and do not provide lossless reconstruction from the latent space.
+
+# 5 EXPERIMENTAL RESULTS
+
+We start our experiments by evaluating GraphCNF on two benchmarks for graph generation, namely molecule generation and graph coloring. Further, to test generality we evaluate CNFs on other categorical problems, specifically language and set modeling. For the normalizing flows, we use a sequence of logistic mixture coupling layers (Ho et al., 2019) mapping a mixture of logistic distributions back into a single mode. Before each coupling layer, we include an activation normalization layer and invertible $1 \times 1$ convolution (Kingma and Dhariwal, 2018). For reproducibility, we provide all hyperparameter details in Appendix D, and make our code publicly available. $^{1}$
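To illustrate one of the layer types in this stack, the following NumPy sketch shows an invertible $1 \times 1$ channel mixing in the style of Kingma and Dhariwal (2018); it is an illustration under our own naming, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def invertible_1x1(z, W):
    """Mix the channel dimension with an invertible matrix W.
    Returns the transformed tensor and the log-det contribution per
    example; the log-det scales with the number of positions."""
    out = z @ W                                   # (batch, n, c) @ (c, c)
    _, logabsdet = np.linalg.slogdet(W)
    n = z.shape[1]                                # sequence/node positions
    return out, n * logabsdet

c = 4
W = np.linalg.qr(rng.normal(size=(c, c)))[0]      # orthogonal init: |det| = 1
z = rng.normal(size=(2, 5, c))
out, logdet = invertible_1x1(z, W)

# The transform is exactly invertible:
assert np.allclose(out @ np.linalg.inv(W), z)
assert abs(logdet) < 1e-8                         # orthogonal W: log|det| = 0
```

In practice `W` is a learned parameter; the orthogonal initialization simply makes the initial log-det contribution zero.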
+
+# 5.1 MOLECULE GENERATION
+
+Modeling and generating graphs is crucial in biology and chemistry for applications such as drug discovery, where molecule generation has emerged as a common benchmark (Jin et al., 2018; Shi et al., 2020). In a molecule graph, the nodes are atoms and the edges represent bonds between atoms, both represented by categorical features. Using a dataset of existing molecules, the goal is to learn a distribution of valid molecules as not all possible combinations of atoms and bonds are valid. We perform experiments on the Zinc250k (Irwin et al., 2012) dataset which consists of 250,000 drug-like molecules. The molecules contain up to 38 atoms of 9 different types, with three different bond types possible between the atoms. For comparability, we follow the preprocessing of Shi et al. (2020).
+
+We compare GraphCNF to baselines that treat molecules as graphs rather than as text representations. As VAE-based approaches, we consider R-VAE (Ma et al., 2018) and the Junction-Tree VAE (JT-VAE) (Jin et al., 2018). R-VAE is a one-shot generation model using regularization to ensure semantic validity. JT-VAE represents a molecule as a junction tree of sub-graphs obtained from the training dataset. We also compare our model to GraphNVP (Madhawa et al., 2019) and GraphAF (Shi et al., 2020). The models are evaluated by sampling 10,000 examples and measuring the proportion of valid molecules. We also report the proportion of unique molecules and of novel samples that are not in the training dataset; these metrics prevent models from memorizing a small subset of graphs. Finally, the reconstruction rate describes whether graphs can be accurately decoded from latent space. Normalizing flows naturally score $100\%$ due to their invertible mapping, and we achieve the same with our encoding despite having no such guarantee.
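These metrics reduce to set operations over canonical string representations of the sampled graphs. The sketch below assumes a user-supplied validity predicate (e.g., a chemistry toolkit's sanitization check) rather than implementing one; the function name and toy inputs are ours:

```python
def generation_metrics(samples, train_set, is_valid):
    """Validity / uniqueness / novelty as used in molecule benchmarks.
    `samples` and `train_set` hold canonical string representations
    (e.g. canonical SMILES); `is_valid` is a domain-specific predicate."""
    valid = [s for s in samples if is_valid(s)]
    validity = len(valid) / len(samples)
    unique = set(valid)
    uniqueness = len(unique) / max(len(valid), 1)      # unique among valid
    novelty = len(unique - set(train_set)) / max(len(unique), 1)
    return validity, uniqueness, novelty

# Toy example with a dummy validity predicate:
train = ["CCO", "CCN"]
samples = ["CCO", "CCO", "CCC", "??"]
v, u, n = generation_metrics(samples, train, is_valid=lambda s: "?" not in s)
assert (v, u, n) == (0.75, 2 / 3, 0.5)
```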
+
+Table 1 shows that GraphCNF generates almost twice as many valid molecules as other one-shot approaches, while its uniqueness and novelty stay at almost $100\%$ . Even the autoregressive normalizing flow, GraphAF, is outperformed by GraphCNF by $15\%$ . However, the rules for generating valid molecules can be enforced in autoregressive models by masking out invalid outputs. This has been the case for JT-VAE, which has been trained with those manual rules and thus achieves a
+
+
+(a) Molecule of Zinc250k
+
+
+(b) Generated molecule
+Figure 2: Visualization of molecules from (a) the Zinc250k dataset and (b,c) generated by GraphCNF. Nodes with black connections and no description represent carbon atoms. Sub-figure (c) shows the failure case of two valid sub-graphs. More example generations can be found in Appendix C.
+
+
+(c) Sub-molecule generations
+
+Table 1: Performance on molecule generation trained on Zinc250k (Irwin et al., 2012), calculated on 10k samples and averaged over 4 runs. The standard deviation of those runs can be found in Appendix C. Scores of the baselines are taken from their respective papers.
+
+| Method | Validity | Uniqueness | Novelty | Reconstruction | Parallel | Manual Rules |
+|---|---|---|---|---|---|---|
+| JT-VAE (Jin et al., 2018) | 100% | 100% | 100% | 71% | ✗ | ✓ |
+| GraphAF (Shi et al., 2020) | 68% | 99.10% | 100% | 100% | ✗ | ✗ |
+| R-VAE (Ma et al., 2018) | 34.9% | 100% | – | 54.7% | ✓ | ✗ |
+| GraphNVP (Madhawa et al., 2019) | 42.60% | 94.80% | 100% | 100% | ✓ | ✗ |
+| GraphCNF | 83.41% | 99.99% | 100% | 100% | ✓ | ✗ |
+| + Sub-graphs | 96.35% | 99.98% | 99.98% | 100% | ✓ | ✗ |
+
+validity of $100\%$ . Nevertheless, we are mainly interested in the model's capability of learning the rules by itself, without being tailored to any specific application. While GraphNVP and GraphAF sample with a lower standard deviation from the prior to increase validity, we explicitly sample from the original prior to underline that our model covers the whole latent space well. Surprisingly, we found that most invalid graphs consist of two or more sub-graphs that are valid in isolation, as shown in Figure 2c. This can happen because one-shot models have no guidance for generating a single connected graph. By taking the largest sub-graph of these predictions, we obtain a validity ratio of $96.35\%$ , meaning our model generates almost exclusively valid molecules without any manually encoded rules. We also evaluated our model on the Moses (Polykovskiy et al., 2018) dataset and achieved similar scores, as shown in Appendix C.
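The sub-graph post-processing described above amounts to keeping the largest connected component of a prediction; a minimal sketch over adjacency lists (the helper name is ours):

```python
from collections import deque

def largest_component(adj):
    """Return the sorted node indices of the largest connected component.
    `adj` is an adjacency list {node: set(neighbors)}; this mirrors
    keeping only the largest generated sub-graph."""
    seen, best = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:                      # breadth-first traversal
            v = queue.popleft()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        if len(comp) > len(best):
            best = comp
    return sorted(best)

# Two disconnected fragments: {0, 1, 2} and {3, 4}
adj = {0: {1}, 1: {0, 2}, 2: {1}, 3: {4}, 4: {3}}
assert largest_component(adj) == [0, 1, 2]
```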
+
+# 5.2 GRAPH COLORING
+
+Graph coloring is a well-known combinatorial problem (Bondy et al., 1976) in which each node of a graph $\mathcal{G}$ is assigned one of $K$ colors such that no two adjacent nodes share the same color (see Figure 3). Modeling the distribution of valid color assignments to arbitrary graphs is NP-complete. To train models on such a distribution, we generate a dataset of valid graph colorings for randomly sampled graphs. To further investigate the effect of complexity, we create two dataset versions, one with graphs of size $10 \leq |V| \leq 20$ and another with $25 \leq |V| \leq 50$ , as larger graphs are commonly harder to solve.
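The validity constraint itself is easy to check; the following one-line helper (our own, for illustration) verifies that no edge connects two equally colored nodes:

```python
def is_valid_coloring(edges, colors):
    """A coloring is valid iff no edge connects two nodes of the
    same color. `edges` is a list of (u, v) pairs, `colors` a dict."""
    return all(colors[u] != colors[v] for u, v in edges)

# A triangle needs 3 colors; 2 are not enough.
triangle = [(0, 1), (1, 2), (0, 2)]
assert is_valid_coloring(triangle, {0: "r", 1: "g", 2: "b"})
assert not is_valid_coloring(triangle, {0: "r", 1: "g", 2: "r"})
```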
+
+For graph coloring, we rely on GraphCNF and compare it to a variational autoencoder and an autoregressive model generating one node at a time. As no edges are modeled here, we only use the first step of GraphCNF's three-step generation. For all models, we apply the same Graph Attention network (Veličković et al., 2018). As autoregressive models require a manually prescribed node order, we compare the following: a random ordering per graph; largest_first, inspired by heuristics of automated theorem provers that start from the nodes with the most connections; and smallest_first, which reverses that heuristic. We evaluate the models by measuring the likelihood of color assignments to unseen test graphs in bits per node. Secondly, we sample one color assignment per model for each test graph and report the proportion of valid colorings.
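The two degree-based orderings can be sketched as a simple sort (function and strategy names are our illustrative choices):

```python
def node_order(adj, strategy="largest_first"):
    """Order nodes by degree for autoregressive generation.
    `adj` maps node -> set of neighbors."""
    nodes = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    if strategy == "smallest_first":
        nodes = nodes[::-1]
    return nodes

# Star-like toy graph: node 0 has degree 3, node 1 has degree 1.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0, 3}, 3: {0, 2}}
assert node_order(adj, "largest_first")[0] == 0
assert node_order(adj, "smallest_first")[0] == 1
```

A random ordering per graph would simply shuffle the node list instead of sorting it.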
+
+The results in Table 2 show that the node ordering indeed has a significant effect on the autoregressive model's performance. While the smallest_first ordering leads to only $32\%$ valid solutions on the large dataset, reversing the order simplifies the task such that the model generates more than twice as many valid color assignments. In contrast, GraphCNF is invariant to the node order. Despite generating all nodes in parallel, it outperforms all node orderings on the small dataset while being close to the best ordering on the larger dataset. This invariance is especially beneficial in tasks where an optimal node order is not known, such as molecule generation. Although GraphCNF has more parameters, its sampling is also considerably faster than that of the autoregressive models. The sampling time can be improved further by replacing the logistic mixture coupling layers with
+
+
+Figure 3: Example graph with $|V| = 26$ and generated valid coloring by GraphCNF.
+
+Table 2: Results on the graph coloring problem. Runtimes are measured on a NVIDIA TitanRTX GPU with a batch size of 128. The standard deviations of the results can be found in Appendix D.2.
+
+| Method | Validity (10 ≤ \|V\| ≤ 20) | Bits per node | Time | Validity (25 ≤ \|V\| ≤ 50) | Bits per node | Time |
+|---|---|---|---|---|---|---|
+| VAE | 44.95% | 0.84 | 0.05s | 7.75% | 0.64 | 0.10s |
+| RNN + smallest_first | 76.86% | 0.73 | 0.69s | 32.27% | 0.50 | 2.88s |
+| RNN + random | 88.62% | 0.70 | 0.69s | 49.28% | 0.46 | 2.88s |
+| RNN + largest_first | 93.41% | 0.68 | 0.69s | 71.32% | 0.43 | 2.88s |
+| GraphCNF | 94.56% | 0.67 | 0.28s | 66.80% | 0.45 | 0.54s |
+| – Affine coupling | 93.90% | 0.69 | 0.12s | 65.78% | 0.47 | 0.35s |
+
+
+Figure 4: Results on language modeling. The reconstruction error is shown in a lighter color corresponding to the model. The exact results including standard deviations can be found in Table 11.
+
+affine ones. Due to the lower complexity, we see a slight drop in validity and slightly worse bits per node, but this verifies that logistic mixture couplings are not crucial for CNFs.
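For reference, a minimal affine coupling step and its inverse might look as follows; this is a sketch in which the toy conditioning function stands in for the graph attention network used in the experiments:

```python
import numpy as np

def affine_coupling(z, shift_scale_fn):
    """Affine coupling: the first half conditions an elementwise affine
    transform of the second half; log-det is the sum of log-scales."""
    z1, z2 = np.split(z, 2, axis=-1)
    shift, log_scale = shift_scale_fn(z1)
    out = np.concatenate([z1, z2 * np.exp(log_scale) + shift], axis=-1)
    return out, log_scale.sum(axis=-1)

def affine_coupling_inverse(out, shift_scale_fn):
    z1, y2 = np.split(out, 2, axis=-1)
    shift, log_scale = shift_scale_fn(z1)
    return np.concatenate([z1, (y2 - shift) * np.exp(-log_scale)], axis=-1)

# Toy conditioning network (a real model would use a GNN here):
fn = lambda z1: (np.tanh(z1), 0.1 * z1)
z = np.random.default_rng(0).normal(size=(3, 4))
out, logdet = affine_coupling(z, fn)
assert np.allclose(affine_coupling_inverse(out, fn), z)   # exactly invertible
```

Logistic mixture couplings replace the elementwise affine map with an invertible CDF of a logistic mixture, at higher cost per layer.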
+
+# 5.3 LANGUAGE MODELING
+
+To compare CNFs with Latent NF, we test both models on language modeling. We experiment with two popular character-level datasets, Penn Treebank (Marcus et al., 1994) and text8 (Mahoney, 2011), with vocabulary sizes of $K = 51$ and $K = 27$ respectively. We also test a word-level dataset, Wikitext103 (Merity et al., 2017), with $K = 10,000$ categories, which Discrete NF cannot handle due to its gradient approximations (Tran et al., 2019). We follow the setup of Ziegler and Rush (2019) for Penn Treebank and train on sequences of 256 tokens for the other two datasets. Both Latent NF and CNF apply a single mixture coupling layer that is autoregressive across time and latent dimensions; the two differ only in the encoding/decoding strategy. The LSTM applied in the coupling layers is shown as an additional RNN baseline in Figure 4. Categorical Normalizing Flows perform on par with their autoregressive baseline, only slightly underperforming on Wikitext103 due to using a single flow layer. Latent NF, however, performs considerably worse on text8 and Wikitext103 due to non-deterministic decoding and the stronger loss fluctuations we experienced during training (see Appendix A.4.2 for visualizations). This shows the importance of a factorized likelihood and underlines two benefits of CNFs. Firstly, CNFs are more stable and simpler to train, as no loss scheduling is required and the likelihood is modeled mainly in the flow prior. Secondly, the encoding of CNFs is much more efficient than that of Latent NF. We conclude that CNFs are more widely applicable, while Latent NF might be the better choice when prior knowledge can explicitly help the decoder.
+
+# 5.4 SET MODELING
+
+Finally, we present experiments on sets of categorical variables, for which we create two toy datasets with known data likelihood: set shuffling and set summation. The goal is to assign high likelihood only to those sets that can occur in the dataset, which tests how accurately our flows can model an arbitrary discrete distribution. In set shuffling, we model a set of $N$ categorical variables, each taking one of $N$ categories. Each category has to appear exactly once, which leads to $N!$ possible
+
+Table 3: Results on set modeling. Metric used is bits per categorical variable (dimension).
+
+| Model | Set shuffling | Set summation |
+|---|---|---|
+| Discrete NF (Tran et al., 2019) | 3.87 ± 0.04 | 2.51 ± 0.00 |
+| Variational Dequant. (Ho et al., 2019) | 3.01 ± 0.02 | 2.29 ± 0.01 |
+| Latent NF (Ziegler and Rush, 2019) | 2.78 ± 0.00 | 2.26 ± 0.01 |
+| CNF + Mixture model | 2.78 ± 0.00 | 2.24 ± 0.00 |
+| CNF + Linear flows | 2.78 ± 0.00 | 2.25 ± 0.00 |
+| CNF + Variational Encoding | 2.79 ± 0.01 | 2.25 ± 0.01 |
+| Optimal | 2.77 | 2.24 |
+
+assignments that need to be modeled. In set summation, we again consider a set of size $N$ with $N$ categories, but those categories represent the actual integers $1, 2, \ldots, N$ and have to sum to an arbitrary number, $L$ . In contrast to set shuffling, the data is ordinal, which we initially expected to help dequantization methods. For both experiments we set $N = 16$ and $L = 42$ .
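Since both toy datasets have a known likelihood, the optimal scores in Table 3 can be reproduced by counting: $\log_2 N! / N$ bits per variable for shuffling, and for summation a dynamic program over partial sums (a sketch, not the paper's code):

```python
import math

N = 16                     # set size = number of categories
L = 42                     # required sum for set summation

# Set shuffling: every permutation of the N categories is valid.
shuffle_bits = math.log2(math.factorial(N)) / N

# Set summation: count ordered assignments x_i in {1..N} summing to L
# via dynamic programming over partial sums.
ways = {0: 1}
for _ in range(N):
    nxt = {}
    for s, c in ways.items():
        for v in range(1, N + 1):
            if s + v <= L:             # larger partial sums can never reach L
                nxt[s + v] = nxt.get(s + v, 0) + c
    ways = nxt
sum_bits = math.log2(ways[L]) / N

assert round(shuffle_bits, 2) == 2.77
assert round(sum_bits, 2) == 2.24
```

Both values match the "Optimal" row of Table 3.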
+
+In Table 3, we compare CNFs to variational dequantization (Ho et al., 2019), Latent Normalizing Flows (Ziegler and Rush, 2019), and Discrete Normalizing Flows (Tran et al., 2019). The results show that CNFs achieve nearly optimal performance: although we model a lower bound in continuous space, our flows can indeed model discrete distributions precisely. Interestingly, representing the categories by a simple mixture model is sufficient for achieving these results. We observe the same trend in domains with more complex relations between categories, such as graphs and language modeling, presumably because the coupling layers and the prior distribution also rest upon logistic distributions. Variational dequantization performs worse on the shuffling dataset, while on set summation with its ordinal data, the gap to the optimum is smaller. The same holds for Discrete NF, although it is worth noting that, unlike CNFs, optimizing Discrete NF suffered from issues caused by its gradient approximations. Latent Normalizing Flows with a joint decoder achieve performance similar to CNFs, which can be attributed to close-to-deterministic decoding. Looking at the encoding space (see Appendix A.4.1 for a visualization), we see that Latent NF has indeed learned a mixture model as well; hence, the added complexity is not needed on this simple dataset.
+
+# 6 CONCLUSION
+
+We present Categorical Normalizing Flows, which learn a categorical, discrete distribution by jointly optimizing the representation of categorical data in continuous latent space and the model likelihood of a normalizing flow. Thereby, we apply variational inference with a factorized posterior to maintain an almost unique decoding while allowing flexible encoding distributions. We find that a plain mixture model is sufficient for modeling discrete distributions accurately while providing an efficient way of encoding and decoding categorical data. Compared to a joint posterior, CNFs are more stable, more efficient, and have an inductive bias to move all modeling complexity into the prior. Furthermore, GraphCNF, a normalizing flow for graph modeling based on CNFs, outperforms autoregressive and one-shot approaches on molecule generation and graph coloring while being invariant to the node order. This emphasizes the potential of normalizing flows for categorical tasks, especially those with non-sequential data.
+
+# ACKNOWLEDGEMENTS
+
+We thank SURFsara for the support in using the Lisa Compute Cluster.
+
+# REFERENCES
+
+John Adrian Bondy, Uppaluri Siva Ramachandra Murty, and others. 1976. Graph theory with applications, volume 290. Macmillan London.
+Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. 2016. Importance weighted autoencoders. 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, pages 1-14.
+
+Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. 2017. Density estimation using Real NVP. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France.
+Dheeru Dua and Casey Graff. 2019. UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
+Dan Hendrycks and Kevin Gimpel. 2016. Gaussian Error Linear Units (GELUs). arXiv preprint arXiv:1606.08415v3.
+Jonathan Ho, Xi Chen, Aravind Srinivas, Yan Duan, and Pieter Abbeel. 2019. Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pages 2722-2730, Long Beach, California, USA. PMLR.
+Emiel Hoogeboom, Taco S. Cohen, and Jakub M. Tomczak. 2020. Learning Discrete Distributions by Dequantization. arXiv preprint arXiv:2001.11235v1.
+Emiel Hoogeboom, Jorn W. T. Peters, Rianne van den Berg, and Max Welling. 2019. Integer Discrete Flows and Lossless Compression. In Advances in Neural Information Processing Systems 32, pages 12134-12144, Vancouver, BC, Canada.
+John J Irwin, Teague Sterling, Michael M Mysinger, Erin S Bolstad, and Ryan G Coleman. 2012. ZINC: A Free Tool to Discover Chemistry for Biology. Journal of Chemical Information and Modeling, 52(7):1757-1768.
+Wengong Jin, Regina Barzilay, and Tommi Jaakkola. 2018. Junction tree variational autoencoder for molecular graph generation. 35th International Conference on Machine Learning, ICML 2018, 5:3632-3648.
+Sungwon Kim, Sang-Gil Lee, Jongyoon Song, Jaehyeon Kim, and Sungroh Yoon. 2019. FloWaveNet: A Generative Flow for Raw Audio. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 3370-3378, Long Beach, California, USA. PMLR.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR) 2015, San Diego, CA, USA.
+Diederik P. Kingma and Prafulla Dhariwal. 2018. Glow: Generative Flow with Invertible 1x1 Convolutions. In Advances in Neural Information Processing Systems, volume 31, pages 10215-10224. Curran Associates, Inc.
+Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. Improved variational inference with inverse autoregressive flow. Advances in Neural Information Processing Systems, 29:4743-4751.
+Diederik P Kingma and Max Welling. 2014. Auto-Encoding Variational Bayes. In 2nd International Conference on Learning Representations, ICLR, Banff, AB, Canada.
+Ivan Kobyzev, Simon Prince, and Marcus A. Brubaker. 2019. Normalizing Flows: Introduction and Ideas. arXiv preprint arXiv:1908.09257v1.
+Henrique Lemos, Marcelo Prates, Pedro Avelar, and Luis Lamb. 2019. Graph colouring meets deep learning: Effective graph neural network models for combinatorial problems. In Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI, pages 879-885. IEEE Computer Society.
+Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, and Peter Battaglia. 2018. Learning deep generative models of graphs. arXiv preprint arXiv:1803.03324.
+Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, Will Hamilton, David K Duvenaud, Raquel Urtasun, and Richard Zemel. 2019. Efficient Graph Generation with Graph Recurrent Attention Networks. In H Wallach, H Larochelle, A Beygelzimer, F D'Alché-Buc, E Fox, and R Garnett, editors, Advances in Neural Information Processing Systems 32, pages 4255-4265. Curran Associates, Inc.
+Jenny Liu, Aviral Kumar, Jimmy Ba, Jamie Kiros, and Kevin Swersky. 2019. Graph Normalizing Flows. In Advances in Neural Information Processing Systems, pages 13556-13566.
+Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2020. On the Variance of the Adaptive Learning Rate and Beyond. In International Conference on Learning Representations.
+
+Qi Liu, Miltiadis Allamanis, Marc Brockschmidt, and Alexander Gaunt. 2018. Constrained Graph Variational Autoencoders for Molecule Design. In S Bengio, H Wallach, H Larochelle, K Grauman, N Cesa-Bianchi, and R Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7795-7804. Curran Associates, Inc.
+Tengfei Ma, Jie Chen, and Cao Xiao. 2018. Constrained Generation of Semantically Valid Graphs via Regularizing Variational Autoencoders. In S Bengio, H Wallach, H Larochelle, K Grauman, N Cesa-Bianchi, and R Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7113-7124. Curran Associates, Inc.
+Kaushalya Madhawa, Katushiko Ishiguro, Kosuke Nakago, and Motoki Abe. 2019. GraphNVP: An Invertible Flow Model for Generating Molecular Graphs. arXiv preprint arXiv:1905.11600v1.
+Matt Mahoney. 2011. Large text compression benchmark. Benchmark published at http://mattmahoney.net/dc/text.html.
+Mitch Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. 1994. The Penn Treebank: Annotating Predicate Argument Structure. In Proceedings of the Workshop on Human Language Technology, pages 114-119, Plainsboro, NJ. Association for Computational Linguistics.
+Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer Sentinel Mixture Models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Tomáš Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and Jan Cernocky. 2012. Subword language modeling with neural networks. preprint (http://www.fit.vutbr.cz/imikolov/rnllm/char.pdf), 8.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In H Wallach, H Larochelle, A Beygelzimer, F d' Alché-Buc, E Fox, and R Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.
+Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
+Daniil Polykovskiy, Alexander Zhebrak, Benjamin Sanchez-Lengeling, Sergey Golovanov, Oktai Tatanov, Stanislav Belyaev, Rauf Kurbanov, Aleksey Artamonov, Vladimir Aladinskiy, Mark Veselov, Artur Kadurin, Simon Johansson, Hongming Chen, Sergey Nikolenko, Alan Aspuru-Guzik, and Alex Zhavoronkov. 2018. Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models. arXiv preprint arXiv:1811.12823v3, pages 1-17.
+Ryan Prenger, Rafael Valle, and Bryan Catanzaro. 2019. Waveglow: A flow-based generative network for speech synthesis. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3617-3621. IEEE.
+Afshin Rahimi, Trevor Cohn, and Timothy Baldwin. 2018. Semi-supervised User Geolocation via Graph Convolutional Networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2009-2019, Melbourne, Australia. Association for Computational Linguistics.
+Danilo Jimenez Rezende and Shakir Mohamed. 2015. Variational Inference with Normalizing Flows. In Proceedings of the 32nd International Conference on Machine Learning, volume 37, Lille, France.
+Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling Relational Data with Graph Convolutional Networks. In *The Semantic Web*, pages 593-607, Cham. Springer International Publishing.
+Chence Shi, Minkai Xu, Zhaocheng Zhu, Weinan Zhang, Ming Zhang, and Jian Tang. 2020. GraphAF: a Flow-based Autoregressive Model for Molecular Graph Generation. In International Conference on Learning Representations.
+Martin Simonovsky and Nikos Komodakis. 2018. GraphVAE: Towards Generation of Small Graphs Using Variational Autoencoders. In International Conference on Artificial Neural Networks, volume abs/1802.0, pages 412-422.
+
+Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15:1929-1958.
+Esteban Tabak and Eric Vanden Eijnden. 2010. Density estimation by dual ascent of the log-likelihood. Communications in Mathematical Sciences, 8(1):217-233.
+Lucas Theis, Aaron Van Den Oord, and Matthias Bethge. 2016. A note on the evaluation of generative models. 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings.
+Jakub M. Tomczak and Max Welling. 2017. Improving Variational Auto-Encoders using Householder Flow. arXiv preprint arXiv:1611.09630v4.
+Dustin Tran, Keyon Vafa, Kumar Krishna Agrawal, Laurent Dinh, and Ben Poole. 2019. Discrete Flows: Invertible Generative Models of Discrete Data. In Advances in Neural Information Processing Systems, pages 14692-14701.
+Benigno Uria, Iain Murray, and Hugo Larochelle. 2013. RNADE: The real-valued neural autoregressive density-estimator. In Advances in Neural Information Processing Systems, volume 26, pages 2175-2183.
+Rianne Van Den Berg, Leonard Hasenclever, Jakub M. Tomczak, and Max Welling. 2018. Sylvester normalizing flows for variational inference. 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, 1:393-402.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In I Guyon, U V Luxburg, S Bengio, H Wallach, R Fergus, S Vishwanathan, and R Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.
+Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph Attention Networks. International Conference on Learning Representations.
+Jiaxuan You, Rex Ying, Xiang Ren, William Hamilton, and Jure Leskovec. 2018. GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 5708-5717, Stockholm, Sweden. PMLR.
+Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2018. Graph Neural Networks: A Review of Methods and Applications. arXiv preprint arXiv:1812.08434.
+Zachary M. Ziegler and Alexander M. Rush. 2019. Latent Normalizing Flows for Discrete Sequences. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pages 7673-7682, Long Beach, California, USA. PMLR.
+
+# A VISUALIZATIONS AND DETAILS ON ENCODING DISTRIBUTIONS
+
+In the following, we visualize the different encoding distributions we tested in Categorical Normalizing Flows, and outline implementation details for full reproducibility.
+
+# A.1 MIXTURE OF LOGISTICS
+
+The mixture model represents each category by an independent logistic distribution in continuous latent space, as visualized in Figure 5. Specifically, the encoder distribution $q(z|x)$ , with $x$ being the categorical input and $z$ the continuous latent representation, can be written as:
+
+$$
+q\left(\boldsymbol{z} \mid \boldsymbol{x}\right) = \prod_{i=1}^{N} g\left(\boldsymbol{z}_{i} \mid \mu\left(x_{i}\right), \sigma\left(x_{i}\right)\right) \tag{8}
+$$
+
+$$
+g(\boldsymbol{v} \mid \mu, \sigma) = \prod_{j=1}^{d} \frac{\exp\left(-\epsilon_{j}\right)}{\left(1 + \exp\left(-\epsilon_{j}\right)\right)^{2}} \quad \text{where} \quad \epsilon_{j} = \frac{v_{j} - \mu_{j}}{\sigma_{j}} \tag{9}
+$$
+
+$g$ represents the logistic distribution, and $d$ the dimensionality of the continuous latent space per category. Both parameters $\mu$ and $\sigma$ are learnable and can be implemented via a simple table lookup. For decoding the discrete categorical data from continuous space, the true posterior is calculated by applying Bayes' rule:
+
+$$
+p\left(x_{i} \mid \boldsymbol{z}_{i}\right) = \frac{\tilde{p}\left(x_{i}\right) q\left(\boldsymbol{z}_{i} \mid x_{i}\right)}{\sum_{\hat{x}} \tilde{p}(\hat{x}) q\left(\boldsymbol{z}_{i} \mid \hat{x}\right)} \tag{10}
+$$
+
+where the prior over categories, $\tilde{p}(x_i)$ , is calculated from the category frequencies in the training dataset. Although the posterior models a distribution over categories, it is strongly peaked for most continuous points in the latent space, as the probability decreases steeply the further a point lies from a specific mode. Furthermore, the distribution is trained to minimize the posterior entropy, which pushes the posterior to be deterministic for commonly sampled continuous points. Hence, the posterior partitions the latent space into fragments in which all continuous points are assigned to one discrete category. The borders between the fragments, where the posterior is not close to deterministic, are small and very rarely sampled by the encoder distribution. We visualize the partitioning for an example with three categories in Figure 5.
+
+
+(a) Encoding distribution $q(\mathbf{z}_i | x_i)$
+
+
+(b) Decoder partitioning $p(x_{i}|\pmb{z}_{i})$
+Figure 5: Visualization of the mixture model encoding and decoding for 3 categories. Best viewed in color. (a) Each category is represented by a logistic distribution with independent mean and scale, which are learned during training. (b) The posterior partitions the latent space, visualized here by the background color. The borders show where decoding a point to the corresponding mixture becomes almost unique ( $>0.95$ decoding probability). Note that these borders do not directly correspond to Euclidean distance, as we use logistic distributions instead of Gaussians.
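A minimal NumPy sketch of Equations (8)-(10) may make the encode/decode cycle concrete. The fixed means, scales, and uniform prior below are illustrative stand-ins for the learned table-lookup parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 3, 2                                # categories, latent dimensions

mu = np.array([[-5., -5.], [0., 5.], [5., -5.]])   # illustrative means
log_sigma = np.zeros((K, d))                       # log-scales
prior = np.full(K, 1.0 / K)                        # category prior p~(x)

def log_logistic(z, mu, log_sigma):
    """Factorized logistic log-density per Eq. (9)."""
    eps = (z - mu) / np.exp(log_sigma)
    return np.sum(-eps - 2 * np.log1p(np.exp(-eps)) - log_sigma, axis=-1)

def encode(x):
    """Sample z ~ q(z|x) via the logistic inverse CDF, Eq. (8)."""
    u = rng.uniform(1e-6, 1 - 1e-6, size=d)
    return mu[x] + np.exp(log_sigma[x]) * (np.log(u) - np.log1p(-u))

def decode(z):
    """Bayes posterior p(x|z), Eq. (10)."""
    logits = np.log(prior) + np.array(
        [log_logistic(z, mu[k], log_sigma[k]) for k in range(K)])
    logits -= logits.max()                 # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# At each mixture mean, decoding is (near-)deterministic:
for k in range(K):
    assert decode(mu[k]).argmax() == k
```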
+
+Notably, the posterior can also be learned by a second, small linear network. While this possibly introduces a difference between encoder and decoder, we found it to vanish quickly over the training iterations and did not observe any significant difference compared to using the Bayes posterior, besides slower progress in the very early stages of training. Additionally, we were able to achieve very low reconstruction errors in two dimensions for most discrete distributions with $\leq 50$ categories. Nevertheless, a higher-dimensional latent space is not only crucial for large numbers of categories, as in word-level vocabularies, but can also be beneficial for more complex problems. At the same time, even considerably higher dimensionality rarely caused problems or significantly decreased performance; presumably, the flow learns to ignore latent dimensions that are not needed for modeling the discrete distribution. To summarize, the dimensionality of the latent space is an important but robust hyperparameter that can be tuned in an early stage of a hyperparameter search.
+
+In the very first training iterations, it can happen that the mixtures of multiple categories lie at the exact same spot and have trouble separating clearly. This is easily resolved by either weighing the reconstruction loss slightly higher for the first $\sim 500$ iterations or initializing the means of the mixtures with a higher variance. Once the mixtures are separated, the model has no incentive to group them together again, as it has started to learn the underlying discrete distribution, which yields a considerably higher likelihood than a plain uniform distribution.
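The first mitigation can be sketched as a simple warmup schedule; the function name, boost factor, and linear decay are our illustrative choices, as the text only specifies a higher weight for roughly the first 500 iterations:

```python
def recon_loss_weight(step, warmup_steps=500, boost=10.0):
    """Linearly anneal an extra weight on the reconstruction loss
    from `boost` down to 1 over the first `warmup_steps` iterations."""
    if step >= warmup_steps:
        return 1.0
    return boost + (1.0 - boost) * step / warmup_steps

assert recon_loss_weight(0) == 10.0
assert recon_loss_weight(500) == 1.0
assert 1.0 < recon_loss_weight(250) < 10.0
```

The total loss at each step would then be `flow_nll + recon_loss_weight(step) * recon_loss`, reverting to the plain objective after warmup.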
+
+# A.1.1 KL DIVERGENCE
+
+The objective for learning categorical normalizing flows with a mixture encoding, shown in Equation 3, constitutes a lower bound. The variational gap between the objective and the evidence $\log P_{\mathrm{model}}$ is given by the KL divergence between the approximate posterior $q(z|x)$ and the true posterior $p(z|x)$: $D_{KL}(q(z|x)||p(z|x))$. The advantage of the mixture encoding is that by replacing the decoder with a Bayes formulation of the encoder, as in Equation 10, we introduce a dependency between $q(z|x)$ and $p(z|x)$. Specifically, we can rewrite the true posterior as:
+
+$$
+\begin{aligned}
+p(\boldsymbol{z}|\boldsymbol{x}) &= \frac{p(\boldsymbol{x}|\boldsymbol{z})\, p(\boldsymbol{z})}{p(\boldsymbol{x})} && (11)\\
+&= \frac{\prod_{i} p\left(x_{i} \mid \boldsymbol{z}_{i}\right) p(\boldsymbol{z})}{p(\boldsymbol{x})} && (12)\\
+&= \left(\prod_{i} \frac{q\left(\boldsymbol{z}_{i} \mid x_{i}\right) p\left(x_{i}\right)}{\sum_{\hat{x}} q\left(\boldsymbol{z}_{i} \mid \hat{x}\right) p(\hat{x})}\right) \frac{p(\boldsymbol{z})}{p(\boldsymbol{x})} && (13)\\
+&= \prod_{i} q\left(\boldsymbol{z}_{i} \mid x_{i}\right) \cdot \frac{\prod_{i} p\left(x_{i}\right)}{p(\boldsymbol{x})} \cdot \frac{p(\boldsymbol{z})}{\prod_{i} q\left(\boldsymbol{z}_{i}\right)} && (14)\\
+&= q(\boldsymbol{z}|\boldsymbol{x}) \cdot \frac{p(\boldsymbol{z})}{p(\boldsymbol{x})} \cdot \prod_{i} \frac{p\left(x_{i}\right)}{q\left(\boldsymbol{z}_{i}\right)} && (15)
+\end{aligned}
+$$
+
+Intuitively, this makes it easier for the model to tighten the gap because a change in $q(z|x)$ entails a change in $p(z|x)$ . In experiments, we observe this difference as considerably faster optimization during the first iterations. However, once the decoder is close to deterministic, both approaches, the Bayes formulation and the separate network, reach a similar variational gap, as $p(x|z)$ drops out of Equation 11 by being 1.
+
+Note that this variational gap is very similar to that of variational dequantization for integers, which is used in most state-of-the-art image modeling architectures. The difference to dequantization is that there the decoder $p(x_{i} | z_{i})$ has been manually fixed, yet it still represents the Bayes formulation of the encoder $q(z|x)$ (the latent variable $z$ represents $x + u$ here, i.e. the original integers plus a random variable $u$ between 0 and 1).
+
+# A.1.2 TRAINING AND TESTING
+
+Below, we lay out the specifics of training and testing a categorical normalizing flow with the logistic mixture encoding. Training and testing are almost identical to normalizing flows trained on image modeling, except for the loss calculation and encoding. Algorithm 1 shows the training procedure. First, we determine the prior $p(x_{i})$ over categories, which can be done by counting the occurrences of each category and dividing by the total count. The difference between the prior probabilities on the training and test sets is usually negligible, as the training set is commonly larger and hence provides a better overall data statistic. After this, training can start by iterating over batches in the training set $\mathcal{D}$ . For encoding, we can make use of the reparameterization trick and simply shift and scale samples of a standard logistic distribution. The loss is the lower bound of Equation 3.
+
+Algorithm 1 Training procedure for the logistic mixture encoding in CNFs
+1: Calculate prior probabilities $p(x_{i})$ on the training dataset;
+2: for $\pmb {x}\in \mathcal{D}$ do
+3: Sample a random logistic variable $z^{\prime}\colon z^{\prime}\sim \mathrm{LogisticDist}(0,1)$
+4: Reparameterization trick for encoding: $z_{i} = z_{i}^{\prime}\cdot \sigma (x_{i}) + \mu (x_{i})$
+5: Negative log-likelihood calculation $\mathcal{L} = -\log \left(p_{\mathrm{model}}(\boldsymbol {z})\prod_{i = 1}^{S}\frac{\tilde{p}(x_i)}{\sum_{\hat{x}}\tilde{p}(\hat{x})q(\boldsymbol{z}_i|\hat{x})}\right)$ (Eq. 3)
+6: Minimize loss $\mathcal{L}$ by updating parameters in $p_{\mathrm{model}}(z)$ and $q(z_{i}|x_{i})$
+7: end for
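As a minimal illustration of Algorithm 1, the encoding and loss computation can be sketched in plain Python. The flow prior $p_{\mathrm{model}}(z)$ is replaced here by a fixed standard logistic, and the mixture parameters and prior probabilities below are hypothetical; in the actual model, $\mu$, $\sigma$ and the flow are learned.

```python
import math
import random

def logistic_logpdf(z, mu, scale):
    # log-density of a logistic distribution with mean mu and the given scale
    a = -(z - mu) / scale
    return a - math.log(scale) - 2.0 * math.log1p(math.exp(a))

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# hypothetical, well-separated mixture parameters for 3 categories (1-d latent)
mu, scale = [-4.0, 0.0, 4.0], [0.5, 0.5, 0.5]
prior = [0.6, 0.3, 0.1]  # p~(x), estimated by counting on the training set

def encode(x, rng=random):
    # reparameterization trick: shift/scale a standard logistic sample
    u = rng.random()
    z_std = math.log(u) - math.log1p(-u)
    return z_std * scale[x] + mu[x]

def posterior(z):
    # Bayes decoder p(x | z) induced by the encoder and the prior (Eq. 10)
    logs = [math.log(prior[k]) + logistic_logpdf(z, mu[k], scale[k])
            for k in range(3)]
    d = logsumexp(logs)
    return [math.exp(l - d) for l in logs]

def nll(xs, zs):
    # negative log-likelihood of Algorithm 1, line 5, with a standard
    # logistic as a stand-in for the flow prior p_model(z)
    log_p_model = sum(logistic_logpdf(z, 0.0, 1.0) for z in zs)
    log_dec = 0.0
    for x, z in zip(xs, zs):
        denom = logsumexp([math.log(prior[k]) + logistic_logpdf(z, mu[k], scale[k])
                           for k in range(3)])
        log_dec += math.log(prior[x]) - denom
    return -(log_p_model + log_dec)

xs = [0, 1, 2, 1]
zs = [encode(x) for x in xs]
loss = nll(xs, zs)
```

With well-separated mixtures, decoding becomes almost deterministic; for instance, `posterior(0.0)` assigns more than 95% probability to the second category.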
+
+During testing, we use importance sampling to tighten the gap, given that $\log \mathbb{E}_x[p(x)] \geq \mathbb{E}_x\left[\log \frac{1}{N}\sum_{n=1}^{N}p(x_n)\right] \geq \mathbb{E}_x\left[\log p(x)\right]$ (Burda et al., 2016). This is again a standard technique for evaluating normalizing flows on images and can slightly improve the bits per dimension score. In our experiments, however, we did not observe a significant difference between $N = 1$ and $N = 1000$ . The bits per dimension score is calculated by taking the base-2 logarithm of the likelihood and dividing it by the number of dimensions/elements in categorical space (denoted by $S$ ).
+
+Algorithm 2 Test procedure for the logistic mixture encoding in CNFs. We use $N = 1000$ in our experiments, although the difference between $N = 1$ and $N = 1000$ was marginal for most cases.
+1: for $x \in \mathcal{D}$ do
+2: for $n = 1, \dots, N$ do
+3: Encode $x$ as continuous variable $z: z \sim q(z|x)$ ;
+4: Determine likelihood $\mathcal{L}_n = p_{\mathrm{model}}(z) \prod_i \frac{\tilde{p}(x_i)}{\sum_{\hat{x}} \tilde{p}(\hat{x}) q(z_i|\hat{x})}$ ;
+5: end for
+6: Determine bits per dimension score: $-\log_2\left[\frac{1}{N} \sum_{n=1}^{N} \mathcal{L}_n\right] / S$ ;
+7: end for
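The bits-per-dimension computation in Algorithm 2, line 6, amounts to a log-space average of the importance samples. A small sketch, where the likelihoods $\mathcal{L}_n$ are passed in log-space for numerical stability:

```python
import math

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def bits_per_dim(log_likelihoods, S):
    # -log2[(1/N) * sum_n L_n] / S, with the mean over the N importance
    # samples computed in log-space from log L_n
    log_mean = logsumexp(log_likelihoods) - math.log(len(log_likelihoods))
    return -log_mean / (S * math.log(2))
```

With $N$ identical samples the estimate reduces to the single-sample bound, and adding a higher-likelihood sample can only lower (tighten) the score.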
+
+# A.1.3 EXAMPLE ENCODING ON MOLECULE GENERATION
+
+An example of a trained model's encoding is shown in Figure 6. Here, we visualize the encoding of the edge attributes in GraphCNF trained on molecule generation. In this setup, we have 3 categories, representing the single, double and triple bond. While the single-bond category is clearly dominant due to its higher prior probability, we did not observe any consistent trend in the position or scale of the distributions across trained models.
+
+Visualizations on graph coloring show behavior similar to Figure 5 because all three categories have the same prior probability. Other encoding distributions, such as those of the node types (atoms), cannot be visualized as easily because their dimensionality is higher than 2. We tried applying dimensionality reduction techniques to those but found that they do not capture the shape of the distribution well.
+
+# A.2 LINEAR FLOWS
+
+The flexibility of the mixture model can be increased by applying normalizing flows to each mixture, conditioned on the discrete category. We refer to this approach as linear flows, as the flows are applied to each categorical input variable independently. We visualize possible encoding distributions
+
+
+(a) Encoding distribution $q(z_{i}|x_{i})$
+
+
+(b) Decoder partitioning $p(x_{i} | \mathbf{z}_{i})$
+Figure 6: Visualization of the mixture model encoding of the edge attributes for a trained model on molecule generation. The majority of the space is assigned to category 1, the single bond, as it is by far the most common edge type. Across multiple models however, we did not see a consistent trend of position/standard deviation of each category's encoding.
+
+with linear flows in Figure 7. Formally, we can write the distribution as:
+
+$$
+q (\boldsymbol {z} | \boldsymbol {x}) = \prod_ {i = 1} ^ {N} q \left(\boldsymbol {z} _ {i} \mid x _ {i}\right) \tag {16}
+$$
+
+$$
+q \left(\boldsymbol {z} ^ {(K)} \mid x _ {i}\right) = g \left(\boldsymbol {z} ^ {(0)}\right) \cdot \prod_ {k = 1} ^ {K} \left| \det \frac {\partial f _ {k} \left(\boldsymbol {z} ^ {(k - 1)} ; x _ {i}\right)}{\partial \boldsymbol {z} ^ {(k - 1)}} \right| \quad \text{where } \boldsymbol {z} _ {i} = \boldsymbol {z} ^ {(K)} \tag {17}
+$$
+
+where $f_{1}, \ldots, f_{K}$ are invertible, smooth mappings. In particular, we again use a sequence of coupling layers with activation normalization and invertible 1x1 convolutions (Kingma and Dhariwal, 2018). Both the activation normalization and the coupling layers use the category $x_{i}$ as additional external input to determine their transformation parameters via a neural network. The class-conditional transformations could also be implemented by storing $K$ parameter sets for the coupling layer networks, which is however inefficient for a larger number of categories. Furthermore, in the coupling layers, we apply a channel mask that splits $z_{i}$ over the latent dimensionality $d$ into two equally sized parts, of which one is transformed using the other as input.
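Such a linear flow step can be sketched as a class-conditional affine coupling layer with a channel mask. The per-category `toy_net` below is a hypothetical stand-in for the neural network that receives the category $x_i$ as external input:

```python
import math

def toy_net(z_a, x):
    # hypothetical per-category parameters; in the actual model, a neural
    # network conditioned on x produces the (log-)scales s and shifts t
    w = {0: (0.1, -0.5), 1: (-0.2, 0.3), 2: (0.05, 1.0)}[x]
    s = [math.tanh(w[0] * v) for v in z_a]  # bounded log-scales for stability
    t = [w[1] * v for v in z_a]
    return s, t

def coupling_forward(z, x):
    d = len(z) // 2
    z_a, z_b = z[:d], z[d:]  # channel mask: split over the latent dimensions
    s, t = toy_net(z_a, x)
    z_b = [b * math.exp(si) + ti for b, si, ti in zip(z_b, s, t)]
    logdet = sum(s)          # log |det| of the affine transformation
    return z_a + z_b, logdet

def coupling_inverse(z, x):
    d = len(z) // 2
    z_a, z_b = z[:d], z[d:]
    s, t = toy_net(z_a, x)   # z_a is untouched, so s, t are recoverable
    z_b = [(b - ti) * math.exp(-si) for b, si, ti in zip(z_b, s, t)]
    return z_a + z_b

z = [0.3, -1.2, 0.7, 2.0]
y, logdet = coupling_forward(z, x=1)
```

The inverse pass reconstructs the input exactly, which is what makes the Bayes posterior over categories computable by inverting the flows of all other categories.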
+
+
+(a) Encoding distribution $q(z_{i}|x_{i})$
+
+(b) Decoder partitioning $p(x_{i} | \mathbf{z}_{i})$
+
+Figure 7: Visualization of the linear flow encoding and decoding for 3 categories. Best viewed in color. (a) The distribution per category is not restricted to a simple logistic and can be multi-modal, rotated or transformed even further. (b) The posterior partitions the latent space, which we visualize by the background color. The borders show from where on we have an almost unique decoding of the corresponding category distribution ( $>0.95$ decoding probability).
+
+Similarly to the mixture model, we can calculate the true posterior $p(x_{i} | \mathbf{z}_{i})$ using Bayes' rule. To do so, we sample from the flow for $x_{i}$ and need to invert the flows of all other categories. Note that, as the inverse of the flow also needs to be differentiable in this setting, we apply affine coupling layers instead of logistic mixture layers. However, this becomes computationally expensive for more than 20 categories, and thus we used a single-layer linear network as posterior in these situations. The partitions of the latent space that can be learned by the encoding distribution are much more flexible, as illustrated in Figure 7.
+
+We experimented with increasing sizes of linear flows but noticed that the encoding distribution usually fell back to rotated logistic distributions. The fact that the added complexity and flexibility of the flows is not being exploited further supports our observation that mixture models are indeed sufficient for representing categorical data well in normalizing flows.
+
+# A.3 VARIATIONAL ENCODING
+
+The third encoding distribution we experimented with is inspired by variational dequantization (Ho et al., 2019) and models $q(z|x)$ by one flow across all categorical variables. Still, the posterior $p(x_{i}|z_{i})$ is applied per categorical variable independently to maintain a unique decoding and partitioning of the latent space. The normalizing flow again consists of a sequence of logistic mixture coupling layers with activation normalization and invertible 1x1 convolutions. The inner feature network of the coupling layers depends on the task the normalizing flow is applied to. Hence, for sets we used a transformer architecture, while for the graph experiments we used a GNN. For the language modeling task, we used a Bi-LSTM model to generate the transformation parameters. All these networks use the discrete, categorical data $x$ as additional input.
+
+As the true posterior cannot be found for this distribution, we apply a two-layer linear network to determine $p(x_{i}|\mathbf{z}_{i})$ . While the reconstruction error was again very low, we again found that the model mainly relied on a logistic mixture model, even when initialized differently beforehand. Variational dequantization is presumably important for images because every pixel value has its own independent Gaussian noise signal. This noise can be modeled well by flexible dequantization distributions, which need to be complex enough to capture the true mean and variance of the Gaussian noise. Categorical distributions, however, contain no such noise signal and therefore seem not to benefit from variational encodings.
+
+# A.4 LATENT NORMALIZING FLOW
+
+# A.4.1 ENCODING VISUALIZATION
+
+
+Figure 8: Visualization of the encoding distribution $q(z|x)$ of latent NF on set summation.
+
+In this section, we show visualizations of the encoding distribution learned by Latent NF (Ziegler and Rush, 2019) on the task of set summation. As shown in Figure 8, the encoder learned a clear mixture distribution despite not being restricted to one. This shows that the possible complexity of the decoder is not being used. The figure was generated by sampling 5,000 examples and merging them into one plot. Each latent variable $\mathbf{z}_i$ is two-dimensional, hence the two-dimensional plot. The colors represent different categories. For readability, each color is normalized independently (i.e. highest value 1, lowest 0), as otherwise the colors of the stretched mixtures would be too faint. For visualizations of the latent space in language modeling, we refer the reader to Ziegler and Rush (2019).
+
+# A.4.2 LOSS COMPARISON
+
+In the following, we compare LatentNF and CNF with respect to their training behavior and loss fluctuation. Figure 9 visualizes the loss curve for both models on the task of language modeling, trained on the text8 dataset (Mahoney, 2011). Note that both models use the same hyperparameters and architecture, except that in LatentNF the reconstruction loss is weighted 10 times higher during the first 5k iterations and decays exponentially over the subsequent iterations. This is why the initial loss of LatentNF is higher, although the reconstruction loss drops close to zero after 3k iterations in the example of Figure 9. Interestingly, we observe high fluctuations of the LatentNF loss during the first iterations. Although Figure 9 shows only a single run, similar fluctuations occurred in all trained LatentNF models, but at different training iterations; hence, we decided to plot a single loss curve instead of an average over multiple runs. The fluctuations can most likely be explained by the need for a strong loss scheduling. At the beginning of training, the decoder loss is weighted significantly higher than the prior component. Thus, the backpropagated gradients mostly focus on the decoder, which can peak for rare categories/occurrences in the dataset. These peaks cause the encoding distribution to change abruptly, and the prior has to adapt to these changes within the next iterations. However, this can again lead to a high loss when the transformations in the prior do not fit the new encoding and thus map it to points of low likelihood. Hence, we see a loss peak across a couple of iterations until the model balances itself out again.
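The LatentNF loss scheduling can be sketched as a simple weight function. Only the 10x weight for the first 5k iterations and the subsequent exponential decay are stated above; the exact decay rate below is our assumption for illustration:

```python
import math

def recon_weight(step, warmup=5000, start=10.0, decay=2e-4):
    # reconstruction-loss weight: constant 10x during warmup, then an
    # exponential decay towards 1 (decay rate is a hypothetical choice)
    if step < warmup:
        return start
    return 1.0 + (start - 1.0) * math.exp(-decay * (step - warmup))
```

The total loss at iteration $t$ would then be the prior term plus `recon_weight(t)` times the reconstruction term.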
+
+
+(a) Loss plot on text8
+
+
+(b) Zoomed loss plot on text8
+Figure 9: Plotting the loss over batch iterations on the language modeling dataset text8 for LatentNF and CNF. The batch size is 128, and we average the loss over the last 10 iterations. Sub-figure (b) shows a zoomed version of sub-figure (a) to show the larger fluctuations even on a smaller scale, while CNF provides a smooth optimization.
+
+Similarly, when training on Wikitext103 (Merity et al., 2017), we observe even more frequent peaks in the loss, as Wikitext103 with its 10k categories contains even more rare occurrences (see Figure 10). Reducing the learning rate did not improve stability while considerably increasing the training time. When reducing the weight of the decoder, the model was not able to optimize as well as before and usually reached bits per dimension scores of only 10-20.
+
+# B IMPLEMENTATION DETAILS OF GRAPHCNF
+
+In this section, we describe further implementation details of GraphCNF. We detail the implementation of the Edge-GNN model used in the coupling layers of GraphCNF, and discuss how we encode graphs of different sizes.
+
+# B.1 EDGE GRAPH NEURAL NETWORK
+
+GraphCNF implements a three-step generation approach, in which the second and third steps also model latent variables for edges. Hence, in the coupling layers, we need a graph neural network
+
+
+(a) Loss plot on Wikitext103
+
+
+(b) Zoomed loss plot on Wikitext103
+Figure 10: Plotting the loss over batch iterations on the language modeling dataset Wikitext103 for LatentNF and CNF. The batch size is 128, and we average the loss over the last 10 iterations. Sub-figure (b) shows a zoomed version of sub-figure (a) to show the details of the peaks in LatentNF's loss curve.
+
+which supports both node and edge features. We implement this by alternating between updates of the edge and the node features. Specifically, given node features $\pmb{v}^t$ and edge features $e^t$ at layer $t$ , we update those as follows:
+
+$$
+\boldsymbol {v} ^ {t + 1} = f _ {\text {n o d e}} \left(\boldsymbol {v} ^ {t}; \boldsymbol {e} ^ {t}\right) \tag {18}
+$$
+
+$$
+\boldsymbol {e} ^ {t + 1} = f _ {\text {e d g e}} \left(\boldsymbol {e} ^ {t}; \boldsymbol {v} ^ {t + 1}\right) \tag {19}
+$$
+
+The update functions, $f_{node}$ and $f_{edge}$ , are both common GNN layers with slight adjustments to allow communication between nodes and edges. Before detailing the update layers, it should be noted that we use Highway GNNs (Rahimi et al., 2018), which apply a gating mechanism. Specifically, the updates for the nodes are determined by:
+
+$$
+\boldsymbol {v} ^ {t + 1} = \boldsymbol {v} ^ {t} \cdot T (\tilde {\boldsymbol {v}} ^ {t + 1}) + H (\tilde {\boldsymbol {v}} ^ {t + 1}) \cdot (1 - T (\tilde {\boldsymbol {v}} ^ {t + 1})) \tag {20}
+$$
+
+where $\tilde{\boldsymbol{v}}^{t + 1}$ is the output of the GNN layer. $H$ and $T$ represent single-linear-layer networks, where $T$ has a subsequent sigmoid activation to limit the outputs to between 0 and 1. The edge updates are applied in a similar manner. We found that such gated update functions help the gradient flow through the layers back to the input. This is important for normalizing flows, as coupling layers, and transformations in general, strongly depend on previous transformations. Hence, we apply the same gating mechanism in the first step of GraphCNF, $f_{1}$ .
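The gating of Equation 20 can be sketched for scalar features, with the learned linear layers $H$ and $T$ reduced to hypothetical scalar weights:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def highway_update(v, v_tilde, w_t=1.0, b_t=0.0, w_h=1.0, b_h=0.0):
    # hypothetical 1-d versions of T (linear + sigmoid) and H (linear);
    # in GraphCNF these are learned layers on the GNN output v_tilde
    t = sigmoid(w_t * v_tilde + b_t)  # gate in (0, 1)
    h = w_h * v_tilde + b_h           # candidate update
    return v * t + h * (1.0 - t)      # Eq. 20: convex mix of old v and H

v_new = highway_update(v=2.0, v_tilde=0.5)
```

Since the output is a convex combination of the previous feature and the candidate, a saturated gate lets the input pass through unchanged, which is what eases the gradient flow.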
+
+Next, we detail the GNN layers to obtain $\tilde{e}^{t + 1}$ and $\tilde{\boldsymbol{v}}^{t + 1}$ . The edge update layer $f_{edge}$ resembles a graph convolutional layer (Zhou et al., 2018), and can be specified as follows:
+
+$$
+\tilde {e} _ {i j} ^ {t + 1} = g \left(W _ {e} ^ {t} e _ {i j} ^ {t} + W _ {v} ^ {t} v _ {i} ^ {t} + W _ {v} ^ {t} v _ {j} ^ {t}\right) \tag {21}
+$$
+
+where $e_{ij}$ represents the features of the edge between nodes $i$ and $j$ , and $g$ stands for a GELU (Hendrycks and Gimpel, 2016) activation. Using more complex transformations did not significantly improve the performance of GraphCNF.
+
+To update the node representations, we take inspiration from the Transformer architecture (Vaswani et al., 2017) and use a modified multi-head attention layer. In particular, a linear transformation maps each node to a key, query and value vector:
+
+$$
+K _ {v _ {i}}, Q _ {v _ {i}}, V _ {v _ {i}} = W _ {K} v _ {i} ^ {t}, W _ {Q} v _ {i} ^ {t}, W _ {V} v _ {i} ^ {t} \tag {22}
+$$
+
+The attention value is usually computed based on the dot product between two nodes. However, as we explicitly have features for the edge between the two nodes, we use those to control the attention mechanism. Hence, we use an additional weight matrix $u$ to map the edge features to an attention bias:
+
+$$
+\hat {a} _ {i j} = Q _ {v _ {i}} K _ {v _ {j}} ^ {T} / \sqrt {d} + e _ {i j} ^ {t + 1} u ^ {T} \tag {23}
+$$
+
+where $d$ represents the hidden dimensionality of the features. Finally, we also add an edge-based value vector to allow full communication from edges to nodes. Overall, the updated node features are calculated by:
+
+$$
+a _ {i j} = \frac {\exp (\hat {a} _ {i j})}{\sum_ {m} \exp (\hat {a} _ {i m})}, \tag {24}
+$$
+
+$$
+\tilde {v} _ {i} ^ {t + 1} = \sum_ {j} a _ {i j} \cdot \left[ V _ {v _ {j}} + W _ {e} e _ {i j} ^ {t + 1} \right] \tag {25}
+$$
+
+Alternatively to transformers, we also experimented with Graph Attention Networks (Veličković et al., 2018). However, those showed slightly worse results, which is why we used the transformer-based layer.
+
+In step 2, the (binary) adjacency matrix is given, such that each node has a limited number of neighbors. A full transformer-based architecture as above is then no longer necessary, as every atom usually has between 1 and 3 neighbors, and the node-to-node dot product in particular is expensive to perform. Hence, for step 2 we experimented with a node update layer whose attention is based purely on the edge features. We found both to work equally well, while the second is computationally more efficient.
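The edge-conditioned attention of Equations 22-25 can be sketched for scalar features ($d = 1$), with the weight matrices $W_K$, $W_Q$, $W_V$, $u$ and $W_e$ reduced to hypothetical scalars:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

# hypothetical scalar weights standing in for W_K, W_Q, W_V, u, W_e
wk, wq, wv, w_u, w_e = 0.7, -0.3, 1.1, 0.5, 0.2
d = 1

def node_update(v, e):
    # v: scalar node features; e[i][j]: scalar edge features
    K = [wk * vi for vi in v]  # Eq. 22: key, query, value per node
    Q = [wq * vi for vi in v]
    V = [wv * vi for vi in v]
    new_v = []
    for i in range(len(v)):
        # attention logits biased by the edge features (Eq. 23)
        logits = [Q[i] * K[j] / math.sqrt(d) + e[i][j] * w_u
                  for j in range(len(v))]
        a = softmax(logits)  # Eq. 24
        # edge-based value term lets edges communicate to nodes (Eq. 25)
        new_v.append(sum(a[j] * (V[j] + w_e * e[i][j])
                         for j in range(len(v))))
    return new_v

v = [0.5, -1.0, 2.0]
e = [[0.0, 1.0, -1.0], [1.0, 0.0, 0.5], [-1.0, 0.5, 0.0]]
out = node_update(v, e)
```

The edge bias replaces part of the purely dot-product-based attention, which is the mechanism simplified further in step 2 where the attention depends on the edge features alone.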
+
+# B.2 ENCODING GRAPH SIZE
+
+The number of nodes $N$ varies across graphs in the dataset, and hence a generative model needs to be flexible regarding $N$ . To encode the number of nodes, we use a similar approach as Ziegler and Rush (2019) for sequences and add a simple prior over $N$ . The prior is parameterized based on the graph size frequencies in the training set. Alternatively, to integrate the number of nodes in the latent space, we could add virtual nodes to the model, similar to virtual edges. Every graph in the training dataset would be filled up to the maximum number of nodes (38 for Zinc250k (Irwin et al., 2012)) by adding such virtual nodes. During sampling, we would then remove any virtual nodes the model generates. GraphNVP (Madhawa et al., 2019) uses such an encoding, as its coupling layers do not support flexible graph sizes. However, in experiments we obtained similar performance with both size encodings, while the external prior is computationally more efficient and is therefore used in this paper.
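The external size prior can be sketched directly from the graph-size frequencies of the training set (the sizes below are hypothetical):

```python
import random
from collections import Counter

def fit_size_prior(train_sizes):
    # parameterize p(N) by the graph-size frequencies in the training set
    counts = Counter(train_sizes)
    total = sum(counts.values())
    return {n: c / total for n, c in counts.items()}

def sample_size(prior, rng=random):
    # draw the number of nodes N before sampling the latent variables
    sizes = sorted(prior)
    return rng.choices(sizes, weights=[prior[n] for n in sizes], k=1)[0]

prior = fit_size_prior([10, 12, 12, 15, 15, 15])
```

At generation time, one first samples $N$ from this prior and then samples the $N$ node (and corresponding edge) latents from the flow.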
+
+# C ADDITIONAL RESULTS ON MOLECULE GENERATION
+
+In this section, we present additional results on the molecule generation task. Table 4 shows the results of our model on the Zinc250k (Irwin et al., 2012) dataset, including the likelihood on the test set in bits per node. We calculate this metric by summing the log-likelihood of all latent variables, both nodes and edges, and dividing by the number of nodes. Although the number of edges scales with $\mathcal{O}(N^2)$ , a high proportion of them are virtual and did not contribute significantly to the likelihood. Thus, bits per node constitutes a good metric for comparing the likelihood of molecules of varying size. Additionally, we also report the standard deviation of all metrics over 4 independent runs. For this, we initialized the random number generator with the seeds 42, 43, 44, and 45 before creating the model. The specific validity values we obtained are $80.74\%$ , $81.16\%$ , $85.3\%$ and $86.44\%$ (in no particular order). Note that the standard deviation among these models is considerably high. This is because the models in molecule generation are trained on maximizing the likelihood of the training dataset and not explicitly on generating valid molecules. We found that, across seeds, models that perform better in terms of likelihood do not necessarily perform better in terms of validity.
+
+Table 4: Performance on molecule generation trained on Zinc250k (Irwin et al., 2012); the standard deviation is calculated over 4 independent runs. See Table 1 for baselines.
+
+| Method | Validity | Uniqueness | Novelty | Reconstruction | Bits per node |
+| --- | --- | --- | --- | --- | --- |
+| GraphCNF | 83.41% (±2.88) | 99.99% (±0.01) | 100% (±0.00) | 100% (±0.00) | 5.17bpd (±0.05) |
+| + Sub-graphs | 96.35% (±2.21) | 99.98% (±0.01) | 99.98% (±0.02) | 100% (±0.00) | - |
+
+We also evaluated GraphCNF on the Moses (Polykovskiy et al., 2018) molecule dataset. Moses contains 1.9 million molecules with up to 30 heavy atoms of 7 different types. Again, we follow the preprocessing of Shi et al. (2020) and represent molecules in kekulized form, with hydrogen removed. The results can be found in Table 5 and show that we achieve scores very similar to those of the experiments on Zinc250k. Compared to the normalizing flow baseline GraphAF, GraphCNF generates considerably more valid molecules while generating in parallel, in contrast to the autoregressive GraphAF. JT-VAE uses manually encoded rules that permit generating only valid molecules, which is why its validity rate is $100\%$ . Overall, the experiment on Moses confirms that GraphCNF is not specialized to a single dataset but improves on current flow-based graph models across datasets.
+
+Table 5: Performance on molecule generation on Moses (Polykovskiy et al., 2018), calculated on 10k samples and averaged over 4 runs. Scores for GraphAF taken from Shi et al. (2020), and for JT-VAE from Polykovskiy et al. (2018).
+
+| Method | Validity | Uniqueness | Novelty | Bits per node |
+| --- | --- | --- | --- | --- |
+| JT-VAE (Jin et al., 2018) | 100% | 99.92% | 91.53% | - |
+| GraphAF (Shi et al., 2020) | 71% | 99.99% | 100% | - |
+| GraphCNF | 82.56% (±2.34) | 100.0% (±0.00) | 100% (±0.00) | 4.94bpd (±0.04) |
+| + Sub-graphs | 95.66% (±2.58) | 99.98% (±0.01) | 100% (±0.00) | - |
+
+Finally, we show 12 randomly sampled molecules from our model in Figure 11. In general, GraphCNF is able to generate a diverse set of molecules with a variety of atom types. This qualitative analysis corroborates the previous quantitative result of close to $100\%$ uniqueness on 10k samples.
+
+
+Figure 11: Visualization of molecules generated by GraphCNF which has been trained on the Zinc250k (Irwin et al., 2012) dataset. Nodes with black connections and no description represent carbon atoms. All of the presented molecules are valid. Best viewed in color and electronically for large molecules.
+
+Besides generating multiple sub-graphs, the most common failure case we have found is single invalid edges in large molecules, as shown in the four examples in Figure 12.
+
+
+Figure 12: Failure cases in molecule generation besides sub-graph generation. The invalid edges are indicated by green arrows. Changing these edges to single bonds would turn the examples above into valid molecules.
+
+
+
+
+
+# D EXPERIMENTAL SETTINGS
+
+In this section, we detail the hyperparameter settings and datasets for all experiments. All experiments have been implemented using the deep learning framework PyTorch (Paszke et al., 2019). The experiments for graph coloring and molecule generation have been executed on a single NVIDIA TitanRTX GPU. The average training time was between 1 and 2 days. The set and language experiments have been executed on a single NVIDIA GTX1080Ti in 4 to 16 hours. All experiments have been repeated with at least 3 different random seeds.
+
+# D.1 SET MODELING
+
+Dataset details We use two toy datasets, set shuffling and set summation, to simulate a discrete distribution over sets in our experiments. Note that we do not use the classical split into train/val/test datasets, but instead train and test the models on samples from the same discrete distribution, because we want to verify whether a categorical normalizing flow and the baselines can model an arbitrary discrete distribution. The special property of sets is that permuting the elements of a set still yields the same set; however, a generative model has to learn all possible permutations. While an autoregressive model considers those permutations as different data points, a permutation-invariant model such as a Categorical Normalizing Flow carries an inductive bias to assign the same likelihood to any permutation.
+
+In set shuffling, we only have one set to model which is the following (with categories $C_1$ to $C_{16}$ ):
+
+$$
+\left\{C _ {1}, C _ {2}, C _ {3}, C _ {4}, C _ {5}, C _ {6}, C _ {7}, C _ {8}, C _ {9}, C _ {1 0}, C _ {1 1}, C _ {1 2}, C _ {1 3}, C _ {1 4}, C _ {1 5}, C _ {1 6} \right\}
+$$
+
+This set has 16! possible permutations and is therefore challenging to model. The optimal likelihood in bits per element is $\log_2(16!) / 16 \approx 2.77$ .
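The optimal likelihood can be verified directly: with $16!$ equally likely permutations, the best achievable cost is spread over the 16 elements.

```python
import math

# 16! equally likely permutations, divided over 16 set elements
optimal_bits_per_element = math.log2(math.factorial(16)) / 16
print(round(optimal_bits_per_element, 2))  # -> 2.77
```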
+
+The set summation dataset consists of 2200 valid sets for $N = 16$ and $L = 42$ . An example of a valid set is:
+
+$$
+\{1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 6, 8 \}
+$$
+
+For readability, the set is sorted by ascending values, although any permutation of the elements represents the exact same set. Taking into account all possible permutations of the sets in the dataset, we obtain an optimal likelihood of $\log_2\left(6.3 \cdot 10^{10}\right) / 16 \approx 2.24$ . The values for the sequence length $N$ and sum $L$ were chosen such that the task is challenging enough to show the differences between Categorical Normalizing Flows and the baselines, but not so challenging as to require unnecessarily long training times and model complexities.
+
+Hyperparameter details Table 6 shows an overview of the hyperparameters per model applied to set modeling. We use the notation " $\{\mathrm{val1, val2, \ldots}\}$ " to show the different values we tried during the hyperparameter search. The underlined value denotes the hyperparameter value with the best performance, which was used to generate the results in Table 3.
+
+The number of encoding coupling layers in Categorical Normalizing Flows is listed per encoding distribution. The mixture model uses no additional coupling layers, while for the linear flows, we apply 4 affine coupling layers using an external input for the discrete category. For the variational encoding distribution $q(z|x)$ , we use 4 mixture coupling layers across all latent variables $z$ with external input for $x$ . A larger dimensionality of the latent space per element proved beneficial for all encoding distributions. Note that, due to the dimensionality being larger than 1 per element, we can apply a channel mask instead of a chess mask and still maintain permutation invariance compared to the baselines.
+
+For variational dequantization and the Discrete NF, we sort the categories randomly for set shuffling (the distribution is invariant to the category order/assignment) and in ascending order for set summation. For the Discrete NF, we followed the published code of Tran et al. (2019) for their coupling layers and implemented it in PyTorch (Paszke et al., 2019). We use a discrete prior over the set elements which is jointly optimized with the flow. However, we experienced significant optimization issues due to the straight-through gradient estimator in the Gumbel Softmax.
+
+Across this paper, we experiment with two optimizers, Adam (Kingma and Ba, 2015) and RAdam (Liu et al., 2020), and found RAdam to work slightly better. The learning rate decay is applied at every update and leads to an exponential decay. However, we did not observe the choice of this hyperparameter to be crucial.
+
+Table 6: Hyperparameter overview for the set modeling experiments presented in Table 3
+
+| Hyperparameters | Categorical NF | Var. dequant. | Discrete NF |
+| --- | --- | --- | --- |
+| Latent dimension | {2, 4, 6} | 1 | 16 |
+| #Encoding couplings | - / 4 / 4 | 4 | - |
+| #Coupling layers | 8 | 8 | {4, 8} |
+| Coupling network | Transformer | Transformer | Transformer |
+| - Number of layers | 2 | 2 | 2 |
+| - Hidden size | 256 | 256 | 256 |
+| Mask | Channel mask | Chess mask | Chess mask |
+| #mixtures | 8 | 8 | - |
+| Batch size | 1024 | 1024 | 1024 |
+| Training iterations | 100k | 100k | 100k |
+| Optimizer | {Adam, RAdam} | RAdam | {SGD, Adam, RAdam} |
+| Learning rate | 7.5e-4 | 7.5e-4 | {1e-3, 1e-4, 1e-5} |
+| Learning rate decay | 0.999975 | 0.999975 | 0.999975 |
+| Temperature (GS) | - | - | {0.1, 0.2, 0.5} |
+
+# D.2 GRAPH COLORING
+
+Dataset details In our experiments, we focus on the 3-color problem, meaning that a graph has to be colored using $K = 3$ colors. We generate the datasets by randomly sampling a graph and using a SAT solver $^2$ to find one valid coloring assignment. If no solution can be found, we discard the graph and sample a new one. We further ensure that no graph can be colored with fewer than 3 colors, to exclude overly simple graphs. For creating the graphs, we take inspiration from Lemos et al. (2019) and first uniformly sample the number of nodes between $10 \leq |V| \leq 20$ for the small dataset, and $25 \leq |V| \leq 50$ for the large dataset. Next, we sample a value $p$ between 0.1 and 0.3 which represents the probability of an edge between a random pair of nodes. Thus, $p$ controls how dense a graph is, and we aim to have both dense and sparse graphs in our dataset. Then, for each pair of nodes, we sample from a Bernoulli distribution with probability $p$ whether to add an edge between the two nodes. Finally, we check that each node has at least one connection and that all nodes can be reached from any other node, ensuring that we have one connected graph rather than multiple sub-graphs. Overall, we create train/val/test splits of size $192k / 24k / 24k$ for the small dataset, and $450k / 20k / 30k$ for the large graphs. We visualize examples from the datasets in Figure 13.
+
+During training, we randomly permute the colors of a graph (e.g. red becomes blue, blue becomes green, green becomes red), as any permutation is a valid color assignment. When we sample a color assignment from our models, we explicitly use a temperature value of 1.0. For the autoregressive model and the VAE, this means that we sample from the softmax output. A common alternative is to take the argmax, which corresponds to a temperature value of 0.0. However, we stick to the original distribution because we want to test whether the models capture the full discrete distribution of valid color assignments and not only the most likely solution. For the normalizing flow, a temperature of 1.0 corresponds to sampling from the prior distribution as it was used during training.
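Temperature sampling from a categorical output can be sketched as below; the function name and signature are illustrative, not from the paper's code:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a category from softmax(logits / temperature).

    temperature = 1.0 samples from the model's original distribution
    (as used in the text); temperature = 0.0 recovers the argmax.
    """
    rng = rng or np.random.default_rng()
    if temperature == 0.0:
        return int(np.argmax(logits))
    z = np.asarray(logits, dtype=np.float64) / temperature
    z -= z.max()                              # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(probs), p=probs))
```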
+
+
+(a) $|V| = 10$  (b) $|V| = 19$  (c) $|V| = 25$  (d) $|V| = 30$
+
+Figure 13: Examples of valid graph color assignments from the dataset (best viewed in color). Due to the graph sizes and dense adjacency matrices, edges can be occluded or cluttered in (c) and (d).
+
+Hyperparameter details Table 7 shows an overview of the used hyperparameters. Where a slash ("/") separates two values in the table, the first value is the hyperparameter used on the small dataset and the second the one used on the larger dataset. The activation function used within the graph neural networks is GELU (Hendrycks and Gimpel, 2016). Interestingly, we found that a larger latent space dimensionality is crucial for larger graphs, despite the number of categories being the same as for the small dataset. This shows that an encoding that is flexible in its number of dimensions can be particularly important for datasets where complex relations between categorical variables need to be modeled. Increasing the number of dimensions on the small dataset did not lead to any significant difference in performance. A large number of mixtures in the mixture coupling layers is in general beneficial, but it also increases the sampling time. When sampling time is crucial, the number of mixtures can be decreased at the cost of slightly worse performance.
+
+The input to the autoregressive model is the graph with the color assignment at time step $T$, where each category, including unassigned nodes, is represented by an embedding vector. We experimented with an increasing number of hidden layers. While more layers are especially important for sub-optimal node orderings, performance does not significantly improve beyond 5 layers. As the sampling time also increases linearly with the number of layers, we use 5 hidden layers for the models.
+
+For the variational autoencoder, we encode each node by a latent vector of size 4. As VAEs have been shown to benefit from slowly adding the KL divergence between prior and posterior to the loss, we use a scheduler whose slope is based on a sigmoid and stretched over 10k iterations. We apply a 5-layer graph attention network for both the encoder and decoder. Increasing the number of layers did not show a significant gain while making the loss scheduling more difficult, which is why we stuck with 5 layers.
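A sigmoid-shaped KL-annealing schedule of this kind could be sketched as below; the exact parametrization (in particular the `slope` steepness) is our assumption, as the text only specifies a sigmoid stretched over 10k iterations:

```python
import math

def kl_weight(step, max_weight=1.0, warmup_steps=10_000, slope=10.0):
    """Sigmoid KL-annealing weight: rises along a sigmoid over
    `warmup_steps` training iterations, then stays at `max_weight`.
    """
    if step >= warmup_steps:
        return max_weight
    # Map the step to [-slope/2, slope/2] so the sigmoid's steep
    # region is covered during warmup.
    x = slope * (step / warmup_steps - 0.5)
    return max_weight / (1.0 + math.exp(-x))
```

During training, the total loss would then be `reconstruction + kl_weight(step) * kl_divergence`.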
+
+Detailed results Table 8 shows the standard deviation of the results reported in Table 2. Each model was run with 3 different seeds.
+
+Table 7: Hyperparameter overview for graph coloring experiments presented in Table 2
+
+| Hyperparameters | GraphCNF | Variational AE | Autoregressive |
| Latent dimension | {2,4}/{2,4,6,8} | 4 | - |
| #Coupling layers | {6,8} | - | - |
| (Coupling) network | GAT | GAT | GAT |
| - Number of layers | {3,4,5} | 5 | {3,4,5,6,7} |
| - Hidden size | 384 | 384 | 384 |
| - Number of heads | 4 | 4 | 4 |
| Mask | Channel mask | - | - |
| #mixtures | {4,8,16}/{4,8,16} | - | - |
| Batch size | 384/128 | 384/128 | 384/128 |
| Training iterations | 200k | 200k | 100k |
| Optimizer | RAdam | RAdam | RAdam |
| Learning rate | 7.5e-4 | 7.5e-4 | 7.5e-4 |
| KL scheduler | - | {1.0,0.1→0.5,0.1→1.0} | - |
+
+Table 8: Results on the graph coloring problem, including standard deviation over 3 seeds. The column time is excluded since the execution time per batch is constant over seeds.
+
+| Method | Validity ($10 \leq |V| \leq 20$) | Bits per node | Validity ($25 \leq |V| \leq 50$) | Bits per node |
| VAE | 44.95% ±5.32% | 0.84 ±0.04 | 7.75% ±1.59% | 0.64 ±0.02 |
| RNN+Smallest_first | 76.86% ±0.84% | 0.73 ±0.02 | 32.27% ±1.41% | 0.50 ±0.01 |
| RNN+Random | 88.62% ±0.65% | 0.70 ±0.01 | 49.28% ±1.53% | 0.46 ±0.01 |
| RNN+Largest_first | 93.41% ±0.42% | 0.68 ±0.01 | 71.32% ±0.77% | 0.43 ±0.01 |
| GraphCNF | 94.56% ±0.55% | 0.67 ±0.00 | 66.80% ±1.14% | 0.45 ±0.01 |
| - Affine | 93.90% ±0.68% | 0.69 ±0.02 | 65.78% ±1.03% | 0.47 ±0.01 |
+
+# D.3 MOLECULE GENERATION
+
+Dataset details The Zinc250k (Irwin et al., 2012) dataset we use contains 239k molecules, of which we use 214k molecules for training, 8k for validation, and 17k for testing. We follow the preprocessing of Shi et al. (2020) and represent molecules in kekulized form, in which hydrogen is removed. This leaves molecules with up to 38 heavy atoms, with a mean and median size of about 23; the smallest graph consists of 8 nodes. Zinc250k contains molecules with 8 different atom types, and their distribution is significantly imbalanced. The most common atom is carbon, making up $73\%$ of all nodes in the dataset. Besides oxygen $(10\%)$ and nitrogen $(12\%)$, the remaining atom types each occur in less than $2\%$ of all nodes, with the rarest being bromine $(0.002\%)$. Between those atoms, the dataset contains 3 different bond or edge types, namely single, double and triple covalent bonds, describing how many electrons are shared among the atoms. For over $90\%$ of all node pairs there exists no bond; in $7\%$ of the cases, the atoms are connected with a single bond, in $2.4\%$ with a double, and in $0.02\%$ with a triple bond. A similar imbalance is present in the Moses dataset, as it stems from the properties of molecules themselves. Nevertheless, we found that GraphCNF was able to generate a similar distribution, and adding the third stage (adding virtual edges later) considerably helped to stabilize the edge imbalance.
+
+Hyperparameter details We summarize our hyperparameters in Table 9. Generally, a higher latent dimensionality is beneficial for representing nodes/atoms, similar to the graph coloring task. However, we found that a lower dimensionality for edges is slightly better, presumably because the flow already has a significant number of latent variables for edges, and many edges, especially the virtual ones, do not contain much information. Besides, a deeper flow yielded better results by offering more complex transformations. In contrast to the graph coloring model, however, GraphCNF on molecule generation requires a considerable amount of memory, as we have to model a feature vector per edge. Nevertheless, we did not experience any issues from the limited batch size of 96, and during testing, we could easily scale the batch size up to more than 128 on an NVIDIA GTX 1080Ti for both datasets.
+
+Table 9: Hyperparameter overview for molecule generation experiments presented in Table 1 and 5
+
+| Hyperparameters | GraphCNF |
| Latent dimension (V/E) | {4, 6, 8} / {2, 3, 4} |
| #Coupling layers (f1/f2/f3) | 4 / {4, 6} / {4, 6} |
| Coupling network (f1/f2,3) | Relational GCN / Edge-GNN |
| - Number of layers (f1/f2/f3) | {3/3/3, 3/4/4, 4/4/4} |
| - Hidden size (V/E) | {256, 384} / {128, 192} |
| Mask | Channel mask |
| #mixtures (V/E) | {8, 16} / {4, 8, 16} |
| Batch size (Zinc250k/Moses) | 64 / 96 |
| Training iterations | 150k |
| Optimizer | RAdam (Liu et al., 2020) |
| Learning rate | {2e-4, 5e-4, 7.5e-4, 1e-3} |
+
+# D.4 LANGUAGE MODELING
+
+Dataset details The three datasets we use for language modeling are the Penn Treebank (Marcus et al., 1994), text8, and Wikitext103 (Merity et al., 2017). The Penn Treebank, with the preprocessing of Mikolov et al. (2012), consists of approximately 5M characters and has a vocabulary size of $K = 51$. We follow the setup of Ziegler and Rush (2019) and split the dataset into sentences of a maximum length of 288. Furthermore, instead of an end-of-sentence token, the length is passed to the model and encoded by an external discrete prior, which is created based on the sentence lengths in the training dataset.
+
+Text8 contains about 100M characters and has a vocabulary size of $K = 27$ . We again follow the preprocessing of Mikolov et al. (2012) and split the dataset into 90M characters for training, and 5M characters each for validation and testing. We train and test the models on a sequence length of 256.
+
+In contrast to the previous two datasets, we use Wikitext103 as a word-level language dataset. First, we create a vocabulary limited to the 10,000 most frequent words in the training corpus. We use pre-trained GloVe (Pennington et al., 2014) embeddings to represent the words in the baseline LSTM networks and to determine the logistic mixture parameters in the encoding distribution of Categorical Normalizing Flows. Because the mixture parameters are computed this way, we use a small linear network as a decoder. It consists of three linear layers of hidden size 512 with GELU (Hendrycks and Gimpel, 2016) activations and an output size of 10,000 (the vocabulary size). As for text8, we train and test the models on an input sequence length of 256.
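The decoder described above, three linear layers of hidden size 512 with GELU activations mapping to 10,000-way logits, could be sketched as follows; the shapes follow the text, while the class name and random initialization are our own illustration:

```python
import numpy as np

def gelu(x):
    """Tanh approximation of GELU (Hendrycks and Gimpel, 2016)."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

class LinearDecoder:
    """Three linear layers (latent -> 512 -> 512 -> vocab) with GELU
    between layers and no activation on the output logits.
    """
    def __init__(self, latent_dim=10, hidden=512, vocab=10_000, seed=0):
        rng = np.random.default_rng(seed)
        dims = [latent_dim, hidden, hidden, vocab]
        self.weights = [rng.normal(0.0, 0.02, (i, o)) for i, o in zip(dims, dims[1:])]
        self.biases = [np.zeros(o) for o in dims[1:]]

    def __call__(self, z):
        h = z
        for k, (w, b) in enumerate(zip(self.weights, self.biases)):
            h = h @ w + b
            if k < len(self.weights) - 1:   # keep the final layer linear
                h = gelu(h)
        return h                            # logits of shape (..., vocab)
```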
+
+Hyperparameter details The hyperparameters for the language modeling experiments are summarized in Table 10. We apply the same hyperparameters to the flow and the baseline where applicable. The best latent dimensionality for the character-level datasets is 3, although larger dimensionalities achieved similar performance. For the word-level dataset, it is beneficial to increase the latent dimensionality to 10. Note, however, that 10 is still significantly smaller than the GloVe vector size of 300. As Penn Treebank has a limited training set on which LSTM networks easily overfit, we use a dropout (Srivastava et al., 2014) of 0.3 throughout the models and drop out input tokens with a chance of 0.1. The other datasets benefited slightly from a small input dropout to prevent overfitting at later stages of training.
+
+Detailed results Table 11 shows the detailed numbers and standard deviations of the results reported in Figure 4. Each model was run with 3 different seeds, over which the standard deviation was calculated.
+
+Table 10: Hyperparameter overview for the language modeling experiments presented in Table 11
+
+| Hyperparameters | Penn Treebank | text8 | Wikitext103 |
| (Max) Sequence length | 288 | 256 | 256 |
| Latent dimension | {2, 3, 4} | {2, 3, 4} | {8, 10, 12} |
| #Coupling layers | 1 | 1 | 1 |
| Coupling network | LSTM | LSTM | LSTM |
| - Number of layers | 1 | 2 | 2 |
| - Hidden size | 1024 | 1024 | 1024 |
| - Dropout | {0.0, 0.3} | 0.0 | 0.0 |
| - Input dropout | {0.0, 0.05, 0.1, 0.2} | {0.0, 0.05, 0.1} | {0.0, 0.05, 0.1} |
| #mixtures | 51 | 27 | 64 |
| Batch size | 128 | 128 | 128 |
| Training iterations | 100k | 150k | 150k |
| Optimizer | RAdam | RAdam | RAdam |
| Learning rate | 7.5e-4 | 7.5e-4 | 7.5e-4 |
+
+Table 11: Results on language modeling. The reconstruction error is shown in brackets.
+
+| Model | Penn Treebank | text8 | Wikitext103 |
| LSTM baseline | 1.28 ±0.01 | 1.44 ±0.01 | 4.81 ±0.05 |
| Latent NF - 1 layer | 1.30±0.01 (0.01) | 1.61±0.02 (0.03) | 6.39±0.19 (1.78) |
| Categorical NF - 1 layer | 1.27 ±0.01 (0.00) | 1.45 ±0.01 (0.00) | 5.43 ±0.09 (0.32) |
+
+# D.5 REAL-WORLD EXPERIMENTS
+
+To verify that Categorical Normalizing Flows can also be applied to real-world data, we experimented on a credit-card risk record (Dua and Graff, 2019). The data contains different categorical attributes regarding potential credit risk, of which we used the following 9: "checking_status", "credit_history", "savings_status", "employment", "housing", "job", "own_telephone", "foreign-worker", and "class". The task is to model the joint density function of those 9 attributes over a dataset of 1000 entries. We used the first 750 entries for training, 100 for validation, and 150 for testing. The results, averaged over three seeds, are shown in Table 12. The Latent NF and Categorical NF perform equally well, given the small size of the dataset and the small number of categories. However, we observed a relatively high standard deviation due to the small size of the training and test sets.
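The likelihoods in Table 12 are reported in bits per dimension (bpd). As a sanity check on the scale of these numbers, the conversion from a summed log-likelihood in nats can be sketched as below; the function name and signature are ours, for illustration:

```python
import math

def bits_per_dimension(total_log_likelihood_nats, num_examples, num_dims):
    """bpd = -log-likelihood / (N * D * ln 2), for N examples of D
    categorical variables each. Lower is better; a uniform model over
    K categories gives exactly log2(K) bpd.
    """
    return -total_log_likelihood_nats / (num_examples * num_dims * math.log(2))
```

For example, a model assigning uniform probability over 4 categories to each of the 9 attributes on the 150 test entries would score exactly 2.0 bpd.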
+
+Table 12: Results on credit card risk dataset (Dua and Graff, 2019). The reconstruction error is shown in brackets where applicable.
+
+| Model | Likelihood in bpd |
| Variational Dequantization | 1.95±0.04 |
| Latent NF | 1.36±0.03 (0.01) |
| Categorical NF | 1.37 ±0.03 (0.00) |
\ No newline at end of file
diff --git a/categoricalnormalizingflowsviacontinuoustransformations/images.zip b/categoricalnormalizingflowsviacontinuoustransformations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b6392d66f01c5aec2cab021b49ebd41412c6daab
--- /dev/null
+++ b/categoricalnormalizingflowsviacontinuoustransformations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3cd2d46ba1254809df4af3a176f575af29999fed99233e87e9728a6beafbf59c
+size 1070813
diff --git a/categoricalnormalizingflowsviacontinuoustransformations/layout.json b/categoricalnormalizingflowsviacontinuoustransformations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0ef5eb3a6584c24a3fb5b224635bddc01c267db9
--- /dev/null
+++ b/categoricalnormalizingflowsviacontinuoustransformations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8649084dbaafec7c411161e283dbf57c4622887d46fb6ad285269d198804bfb1
+size 790023
diff --git a/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/66b00b53-6a73-4dfe-878b-9c0a2b86dfe0_content_list.json b/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/66b00b53-6a73-4dfe-878b-9c0a2b86dfe0_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4762104687cb64d8c3bc2af28cbcb3e9eba50f6c
--- /dev/null
+++ b/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/66b00b53-6a73-4dfe-878b-9c0a2b86dfe0_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84cbbde285c8d3c5da890202f1abdb0bad8298083ac1d63a0ee6c9ca4722d2af
+size 92606
diff --git a/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/66b00b53-6a73-4dfe-878b-9c0a2b86dfe0_model.json b/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/66b00b53-6a73-4dfe-878b-9c0a2b86dfe0_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1d9dc166efc4b5b7e6a17887f483badd4fd43774
--- /dev/null
+++ b/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/66b00b53-6a73-4dfe-878b-9c0a2b86dfe0_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68232d3890c7fc61989493d183d74cbc03027035f068c1bbaec4ef04110e44cd
+size 119720
diff --git a/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/66b00b53-6a73-4dfe-878b-9c0a2b86dfe0_origin.pdf b/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/66b00b53-6a73-4dfe-878b-9c0a2b86dfe0_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..150a403be3a54fd573c1365faaa40348c4eb8791
--- /dev/null
+++ b/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/66b00b53-6a73-4dfe-878b-9c0a2b86dfe0_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:52ca5dc53539606355a0fec6ed0fe0afb8a0b490b3adaed79bfba81b1ecbae3e
+size 2581277
diff --git a/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/full.md b/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..68f21710e31655485bf4d848e39bad53163966b6
--- /dev/null
+++ b/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/full.md
@@ -0,0 +1,321 @@
+# CAUSALWORLD: A ROBOTIC MANIPULATION BENCHMARK FOR CAUSAL STRUCTURE AND TRANSFER LEARNING
+
+Ossama Ahmed, $^{1}$ Frederik Träuble, $^{2}$ Anirudh Goyal, $^{3}$ Alexander Neitz, $^{2}$ Yoshua Bengio, $^{3}$ Bernhard Schölkopf, $^{2}$ Stefan Bauer, $^{†2}$ Manuel Wüthrich $^{†2}$
+
+$^{1}$ ETH Zurich, $^{2}$ Max Planck Institute for Intelligent Systems, $^{3}$ Mila, University of Montreal
+
+# ABSTRACT
+
+Despite recent successes of reinforcement learning (RL), it remains a challenge for agents to transfer learned skills to related environments. To facilitate research addressing this problem, we propose CausalWorld, a benchmark for causal structure and transfer learning in a robotic manipulation environment. The environment is a simulation of an open-source robotic platform, hence offering the possibility of sim-to-real transfer. Tasks consist of constructing 3D shapes from a set of blocks - inspired by how children learn to build complex structures. The key strength of CausalWorld is that it provides a combinatorial family of such tasks with common causal structure and underlying factors (including, e.g., robot and object masses, colors, sizes). The user (or the agent) may intervene on all causal variables, which allows for fine-grained control over how similar different tasks (or task distributions) are. One can thus easily define training and evaluation distributions of a desired difficulty level, targeting a specific form of generalization (e.g., only changes in appearance or object mass). Further, this common parametrization facilitates defining curricula by interpolating between an initial and a target task. While users may define their own task distributions, we present eight meaningful distributions as concrete benchmarks, ranging from simple to very challenging, all of which require long-horizon planning as well as precise low-level motor control. Finally, we provide baseline results for a subset of these tasks on distinct training curricula and corresponding evaluation protocols, verifying the feasibility of the tasks in this benchmark.1
+
+# 1 INTRODUCTION
+
+Benchmarks have played a crucial role in advancing entire research fields, for instance computer vision with the introduction of CIFAR-10 and ImageNet (Krizhevsky et al., 2009; 2012). When it comes to the field of reinforcement learning (RL), similar breakthroughs have been achieved in domains such as game playing (Mnih et al., 2013; Silver et al., 2017), learning motor control for high-dimensional simulated robots (Akkaya et al., 2019), multi-agent settings (Baker et al., 2019; Berner et al., 2019) and for studying transfer in the context of meta-learning (Yu et al., 2019). Nevertheless, trained agents often fail to transfer the knowledge about the learned skills from a training environment to a different but related environment sharing part of the underlying structure. This can be attributed to the fact that it is quite common to evaluate an agent on the training environments themselves, which leads to overfitting on these narrowly defined environments (Whiteson et al., 2011), or that algorithms are compared using highly engineered and biased reward functions which may result in learning suboptimal policies with respect to the desired behaviour; this is particularly evident in robotics.
+
+Figure 1: Example of do-interventions on exposed variables in CausalWorld, e.g. do(floor_color='white', block_size=0.065, ...etc).
+
+Figure 2: Example tasks from the task generators provided in the benchmark: (a) Pushing, (b) Picking, (c) Pick and Place, (d) Stacking2, (e) Stacked Blocks, (f) General, (g) CreativeStackedBlocks, (h) Towers. The goal shape is visualized in opaque red and the blocks in blue.
+
+In existing benchmarks (Yu et al., 2019; Goyal et al., 2019a; Cobbe et al., 2018; Bellemare et al., 2013; James et al., 2020) the amount of shared causal structure between the different environments is mostly unknown. For instance, in the Atari Arcade Learning environments, it is unclear how to quantify the underlying similarities between different Atari games and we generally do not know to which degree an agent can be expected to generalize.
+
+To overcome these limitations, we introduce a novel benchmark in a robotic manipulation environment that we call CausalWorld. It features a diverse set of environments that, in contrast to previous designs, share a large set of parameters and parts of the causal structure. Being able to intervene on these parameters (individually or collectively) permits the experimenter to evaluate agents' generalization abilities with respect to different types and magnitudes of changes in the environment. These parameters can be varied gradually, which yields a continuum of similar environments. This allows for fine-grained control of training and test distributions and the design of learning curricula.
+
+A remarkable skill that humans learn to master early on in their life is building complex structures using spatial-reasoning and dexterous manipulation abilities (Casey et al., 2008; Caldera et al., 1999; Kamii et al., 2004). Playing with toy blocks constitutes a natural environment for children to develop important visual-spatial skills, helping them 'generalize' in building complex composition designs from presented or imagined goal structures (Verdine et al., 2017; Nath & Szücs, 2014; Dewar, 2018; Richardson et al., 2014). Inspired by this, CausalWorld is designed to aid in learning and investigating these skills in a simulated robotic manipulation environment corresponding to the open-source TriFinger robot platform from Wüthrich et al. (2020), which can be built in the real world. Tasks are formulated as building 3D goal shapes using a set of available blocks by manipulating them - as seen in Fig. 1. This yields a diverse family of tasks, ranging from relatively simple (e.g. pushing a single object) to extremely hard (e.g. building a complex structure from a large number of objects).
+
+CausalWorld improves upon previous benchmarks by exposing a large set of parameters in the causal generative model of the environments, such as weight, shape and appearance of the building blocks and the robot itself. The possibility of intervening on any of these properties at any point in time allows one to set up training curricula or to evaluate an agent's generalization capability with respect to different parameters. Furthermore, in contrast to previous benchmarks (Chevalier-Boisvert et al., 2018; Cobbe et al., 2018), researchers may build their own real-world platform of this simulator at low cost, as detailed in Wüthrich et al. (2020), and transfer their trained policies to the real world.
+
+| Benchmark | do-interventions interface | procedurally generated environments | online distribution of tasks | setup custom curricula | disentangle generalization ability | real-world similarity | open-source robot | low-level motor control | long-term planning | unified success metric |
| RLBench | X | X | X | X | X | ✓ | X | ✓ | X | X |
| MetaWorld | X | X | X | X | X | ✓ | X | ✓ | X | X |
| IKEA | X | X | X | X | X | ✓ | X | ✓ | ✓ | ✓ |
| MuJoBan | X | X | X | ✓ | X | ✓ | X | ✓ | ✓ | ✓ |
| BabyAI | X | ✓ | X | X | X | X | X | X | ✓ | ✓ |
| CoinRun | X | ✓ | X | X | X | X | X | X | X | ✓ |
| AtariArcade | X | X | X | X | X | X | X | X | ✓/X | ✓ |
| CausalWorld | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+
+Table 1: Comparison of CausalWorld with RLBench (James et al., 2020), MetaWorld (Yu et al., 2019), IKEA (Lee et al., 2019), MuJoBan (Mirza et al., 2020), BabyAI (Chevalier-Boisvert et al., 2018), CoinRun (Cobbe et al., 2018), and AtariArcade (Bellemare et al., 2013).
+
+Finally, by releasing this benchmark we hope to facilitate research in causal structure learning, i.e. learning the causal graph (or certain aspects of it) as we operate in a complex real-world environment whose dynamics follow the laws of physics, which induce causal relations between the variables. Changes to the variables we expose can be considered do-interventions on the underlying structural causal model (SCM). Consequently, we believe that this benchmark offers an exciting opportunity to investigate causality and its connection to RL and robotics.
+
+Our main contributions can be summarized as follows:
+
+- We propose CausalWorld, a new benchmark comprising a parametrized family of robotic manipulation environments for advancing out-of-distribution generalization and causal structure learning in RL.
+- We provide a systematic way of defining curricula and disentangling generalization abilities of RL agents with respect to different changes in the environment, since we allow for do-interventions to be performed on different environment variables (parameters and states) individually.
+- We establish baseline results for some of the available tasks under different learning algorithms, thus verifying the feasibility of the tasks.
+- We show how different learning curricula affect generalization across different axes by reporting some of the in-distribution and out-of-distribution generalization capabilities of the trained agents.
+
+# 2 CAUSALWORLD BENCHMARK
+
+Here we make the desiderata outlined in the introduction more precise:
+
+1. The set of environments should be sufficiently diverse to allow for the design of challenging transfer tasks.
+2. We need to be able to intervene on different properties (e.g. masses, colors) individually, such that we can investigate different types of generalization.
+3. It should be possible to convert any environment to any other environment by gradually changing its properties through interventions; this requirement is important for evaluating different levels of transfer and for defining curricula.
+4. The environments should share some causal structure to allow algorithms to transfer the learned causal knowledge from one environment to another.
+5. There should be a unified measure of success, such that an objective comparison can be made between different learning algorithms.
+6. The benchmark should make it easy for users to define meaningful distributions of environments for training and evaluation. In particular, it should facilitate evaluation of in-distribution and out-of-distribution performance.
+7. The simulated benchmark should have a real-world counterpart to allow for sim2real.
+
+In light of these desiderata, we propose a setup in which a robot must build goal shapes using a set of available objects. It is worth noting that similar setups were proposed previously in less realistic settings, e.g. in (Janner et al., 2018; Bapat et al., 2019; McCarthy et al.; Akkaya et al., 2019; Fahlman, 1974; Winston, 1970; Winograd, 1972). Specifically, a task is formulated as follows: given a set of available objects, the agent needs to build a specific goal structure; see Fig. 1 for an example. The vast number of possible target shapes and environment properties (e.g. mass, shape and appearance of objects and the robot itself) makes this a diverse and challenging setting for evaluating different generalization aspects. CausalWorld is a simulated version (using the Bullet physics engine (Coumans et al., 2013)) of the open-source TriFinger robot platform from Wüthrich et al. (2020). Each environment is defined by a set of variables, such as gravity, floor friction, stage color, floor color, joint positions, various block parameters (e.g. size, color, mass, position, orientation), link colors, link masses and the goal shape. See Table 3 in the Appendix for a more extensive list of these variables.
+
+Desideratum 1 is satisfied since different environment properties and goal shapes give rise to very different tasks, ranging from relatively easy (e.g. re-positioning a single cube) to extremely hard (e.g. building a complex structure). Desideratum 2 is satisfied because we allow for arbitrary interventions on these properties, hence users or agents may change parameters individually or jointly. Desideratum 3 is satisfied because the parameters can be changed gradually. Desideratum 4 is satisfied because all the environments share the causal structure of the robot, and one may also use subsets of environments which share even more causal structure. We satisfy desideratum 5 by defining the measure of success for all environments as the volumetric overlap of the goal shape with the available objects. Further, by splitting the set of parameters into a set A, intended for training and in-distribution evaluation, and a set B, intended for out-of-distribution evaluation, we satisfy desideratum 6. Finally, since the TriFinger robot (Wüthrich et al., 2020) can be built in the real world, we satisfy desideratum 7. Desiderata 7 and 2 are in partial conflict, since sim2real is only possible for tasks which are constrained to the variables upon which the robot can physically act.
+
+Task generators: To generate meaningful families of similar goal shapes, CausalWorld allows for defining task generators which can generate a variety of different goal shapes in an environment. For instance, one task generator may generate pushing tasks, while another one may generate tower-building tasks (see Fig. 2). Each task generator is initialized with a default goal shape from its corresponding family and comes with a sampler to sample new goal shapes from the same family. Additionally, upon construction, one can specify the environments' initial state and initial goal shape structure when deviating from the default. The maximum episode time to build a given shape is number_of_blocks $\times 10$ seconds. CausalWorld comes with eight pre-defined task generators (see Fig. 2).
+
+- Three generators create goal shapes with a single block: Pushing with the goal shape on the floor, Picking having the goal shape defined above the floor and Pick and Place where a fixed obstacle is placed between the initial block and goal pose.
+- Stacking2 involves a goal shape of two stacked blocks, which can also be considered one instance of the Towers generator.
+- The remaining generators use a variable number of blocks to generate much more complex and challenging target shapes, as detailed in the appendix: Towers, Stacked Blocks, Creative Stacked Blocks and General.
+
+Given that building new environments using current physics simulators is often tedious, we provide a simple API for users who wish to create task generators for new challenging shape families, which may be added to CausalWorld's task generators repository.
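The task-generator abstraction above, a default goal shape plus a sampler over the same family and a time limit of number_of_blocks × 10 seconds, can be sketched as follows; this is a hypothetical interface for illustration, and CausalWorld's actual API differs:

```python
import random
from abc import ABC, abstractmethod

class TaskGenerator(ABC):
    """Hypothetical sketch of a task generator: a default goal shape from
    one family plus a sampler for new goals of the same family.
    """
    def __init__(self, num_blocks):
        self.num_blocks = num_blocks

    @property
    def max_episode_seconds(self):
        # Episode time limit from the text: number_of_blocks * 10 seconds.
        return self.num_blocks * 10

    @abstractmethod
    def default_goal(self):
        ...

    @abstractmethod
    def sample_goal(self):
        ...

class Stacking2(TaskGenerator):
    """Toy instance: a goal shape of two stacked blocks (block centers as
    (x, y, z) tuples; the coordinate values are made up).
    """
    def __init__(self):
        super().__init__(num_blocks=2)

    def default_goal(self):
        return [(0.0, 0.0, 0.0325), (0.0, 0.0, 0.0975)]

    def sample_goal(self):
        x, y = random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)
        return [(x, y, 0.0325), (x, y, 0.0975)]
```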
+
+Action and Observation Spaces: The robot can be chosen to operate in either joint position control mode, joint torque control mode, end-effector position control mode, or the delta of each. In any of these cases, the action is 9-dimensional (one per joint). We provide two observation modes: structured and pixel. In the structured mode, the observation vector is constructed using a rule for the ordering of the relevant variables, such as joint positions, joint velocities, block positions, etc. Thus, the size of the observation space depends on the number of blocks, which could potentially change with every new goal sampled, e.g. in Towers, (Creative) Stacked Blocks and General. In contrast, in the pixel mode, the agent receives six RGB images (hence the dimension of the observation is $6 \times 3 \times 128 \times 128$): the first three images are rendered from the three cameras mounted around the TriFinger robot, and the last three images specify the goal image of the target shape rendered from the same cameras. Additionally, CausalWorld allows users to set up a fully customized observation space.
+
+Rewards: The reward function $r$ is defined as the fractional volumetric overlap of the blocks with the goal shape, which ranges between 0 (no overlap) and 1 (complete overlap). Since this reward function is shared across all tasks, an agent that has learned $r$ from some training tasks could in principle use it to solve unseen goal structures. The reward function can also be modified to 1) sparsify the reward further by returning a binary signal instead, or 2) add a dense reward in order to introduce inductive biases via domain knowledge and solution guidance. We hope that the considerable complexity and diversity of goal shapes motivate and accelerate the development of algorithms that are not dependent on hand-tuned reward functions.
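For intuition, the fractional volumetric overlap can be sketched for the simplified case of axis-aligned boxes; this ignores block rotations and possible double counting of mutually overlapping blocks, both of which the actual computation would need to handle.

```python
# Simplified sketch of r = fractional volumetric overlap in [0, 1] for
# axis-aligned boxes, each given as (min_corner, max_corner) 3-tuples.

def overlap_volume(box_a, box_b):
    """Volume of the intersection of two axis-aligned boxes."""
    vol = 1.0
    for lo_a, hi_a, lo_b, hi_b in zip(*box_a, *box_b):
        side = min(hi_a, hi_b) - max(lo_a, lo_b)
        if side <= 0:
            return 0.0
        vol *= side
    return vol

def volume(box):
    lo, hi = box
    v = 1.0
    for l, h in zip(lo, hi):
        v *= h - l
    return v

def fractional_overlap_reward(blocks, goal_boxes):
    """Fraction of the goal volume covered by the blocks (no double counting
    of overlapping blocks is handled in this sketch)."""
    goal_vol = sum(volume(g) for g in goal_boxes)
    covered = sum(overlap_volume(b, g) for g in goal_boxes for b in blocks)
    return covered / goal_vol
```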
+
+
+Figure 3: Key components for generic training and evaluation of RL agents. Left: a learning curriculum composed of various intervention actors that decide which variables to intervene on (for a valid intervention, values need to be in the allowed training space (ATS)). Right: evaluation protocols, which may intervene on variables at episode resets or within episodes (for a valid intervention, values need to be in the evaluation space (ES)). Middle: the ATS and ES, where each intervention results in one point in the spaces. As shown, the ATS and ES may intersect, e.g. if the protocols are meant to evaluate in-distribution generalization.
+
+Training and evaluation spaces: In this benchmark, a learning setting consists of an allowed training space (ATS) and an evaluation space (ES), both of which are subspaces of the full parameter space. During training, in the simplest setting, parameters are sampled iid from the ATS. However, unlike existing benchmarks, CausalWorld additionally allows for curricula within the ATS as well as settings where the agent itself intervenes on the parameters within an episode (see Fig. 3). Similarly, during evaluation, parameters may be sampled iid from the evaluation space at each episode reset, or there can be interventions within an episode. Moreover, in order to recover the setting considered in most RL benchmarks, we could set the ATS and the ES to be identical and intervene only on object and robot states (keeping other environment properties constant) at each episode reset. However, to evaluate out-of-distribution generalization, one should set the two spaces (ATS and ES) to be different, possibly even disjoint. Additionally, to evaluate robustness with respect to a specific parameter (e.g. object mass), one may define training and evaluation spaces that differ only in that parameter. In order to facilitate the definition of appropriate training and evaluation settings, we pre-define two disjoint sets, $\mathbf{A}_i$ and $\mathbf{B}_i$ , for each parameter $i$ . Through this, one can for instance define the training space to be $\mathbf{A}_1 \times \mathbf{A}_2 \times \ldots$ and the evaluation space to be $\mathbf{B}_1 \times \mathbf{B}_2 \times \ldots$ to assess generalization with respect to all parameters simultaneously. Alternatively, the evaluation space could be defined as $\mathbf{A}_1 \times \mathbf{A}_2 \times \ldots \times \mathbf{B}_i \times \mathbf{A}_{i+1} \times \ldots$ to assess generalization with respect to parameter $i$ only. Lastly, users may also define their own spaces, which could then be integrated into the benchmark to give rise to new learning settings.
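The composition of training and evaluation spaces from the per-parameter sets $\mathbf{A}_i$ and $\mathbf{B}_i$ can be sketched as follows. The ranges are illustrative (loosely based on Table 2 in the appendix), and the helper functions are our own, not CausalWorld's API.

```python
# Sketch: build an ATS as the product of the A_i sets, and an ES that
# substitutes B_i for one chosen parameter to probe generalization along
# that axis only. Ranges and helper names are illustrative assumptions.
import random

A = {"block_mass": (0.015, 0.045), "floor_friction": (0.3, 0.6)}
B = {"block_mass": (0.045, 0.100), "floor_friction": (0.6, 0.8)}

def make_space(vary_with_B=()):
    """A_1 x A_2 x ..., with B_i substituted for the named parameters."""
    return {k: (B[k] if k in vary_with_B else A[k]) for k in A}

def sample(space, rng=random):
    # iid sampling at episode reset
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}

training_space = make_space()                        # ATS: all A_i
eval_space = make_space(vary_with_B=["block_mass"])  # generalize w.r.t. mass only
```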
+
+**Intervention actors:** To provide a convenient way of specifying learning curricula, we introduce intervention actors. At each time step, such an actor takes all the exposed variables of the environment as inputs and may intervene on them. To encourage modularity, one may combine multiple actors in a learning curriculum. Each actor is parameterized by the episode at which to start intervening, the episode at which to stop, the timestep within the episode at which to intervene, and the episode periodicity of interventions. We provide a set of predefined intervention actors, including an actor that samples parameters randomly at each episode reset, which corresponds to domain randomization. It is also easy to define custom intervention actors; we hope that this facilitates investigation into optimal learning curricula (see Fig. 3).
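A minimal intervention actor with the scheduling parameters described above (start/stop episode, timestep within the episode, and episode periodicity) could be sketched as follows; the class and method names are illustrative, not CausalWorld's actual classes.

```python
# Sketch of a domain-randomization intervention actor with an episode/
# timestep schedule. All identifiers are assumptions for this sketch.
import random

class RandomInterventionActor:
    """Resamples the given exposed variables uniformly when scheduled."""

    def __init__(self, spaces, start_episode=0, stop_episode=10**9,
                 timestep=0, episode_period=1, rng=random):
        self.spaces = spaces            # {variable: (low, high)}
        self.start_episode = start_episode
        self.stop_episode = stop_episode
        self.timestep = timestep        # 0 => intervene at episode reset
        self.episode_period = episode_period
        self.rng = rng

    def act(self, episode, timestep, variables):
        """Return a dict of interventions (possibly empty) for this step."""
        due = (self.start_episode <= episode < self.stop_episode
               and (episode - self.start_episode) % self.episode_period == 0
               and timestep == self.timestep)
        if not due:
            return {}
        return {k: self.rng.uniform(lo, hi) for k, (lo, hi) in self.spaces.items()}
```

Combining several such actors, each restricted to a different variable subset or episode range, yields a modular curriculum.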
+
+Probing the Causal Structure in RL: The problem setting in RL is usually formulated using the language of Markov Decision Processes (MDPs) or Partially Observable Markov Decision Processes (POMDPs) (Sutton & Barto, 1999), but it can also be represented by Structural Causal Models (SCMs), as shown by Buesing et al. (2018); see Section E in the Appendix for a detailed explanation. This is achieved by formulating all conditional probability distributions as deterministic functions that take independent noise variables as inputs. These independent noise variables can specify different scenarios, while the deterministic functions reflect the causal mechanisms of the system (Schölkopf et al., 2021). Changes in the environment can stem from two different sources:
+
+1. The agent may alter the state of the environment (e.g. the position of a block) indirectly, through its actions (e.g. pushing the block by applying appropriate torques at the motors).
+2. During the execution of a learning curriculum or an evaluation protocol, we may directly intervene on any variable of the SCM, including all the latent variables of the causal model that are not accessible to the RL agent (such as gravity, object mass or color).
+
+(1) is the default type of admissible intervention in RL benchmarks, whereas CausalWorld additionally allows for interventions of type (2). The idea is that interventions on these latent variables, e.g. during a learning curriculum, will allow the agent to distinguish between spurious correlations that are only present in a particular setting and true causal relations that hold across all settings (i.e. interventions). If the agent is able to learn such a representation of the underlying SCM structure, we would expect it to perform well even in out-of-distribution scenarios (Schölkopf et al., 2021; Dittadi et al., 2021) because the causal structure remains the same, even when the functional form of certain relations may vary (e.g. when transferring to the real robot). Moreover, we hope that access to a broad range of interventions in CausalWorld will aid the inference of the underlying SCM structure through different causal discovery methods (see Figure 4 for a subset of the expected SCM to be learned), which in turn addresses the lack of causal discovery benchmarks grounded in real-world challenges.
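The two types of change above can be mimicked in a toy SCM: each variable is a deterministic function of its parents plus an independent noise input, and a do() intervention overrides a variable directly. The variables and structural equations below are illustrative, not the benchmark's actual SCM.

```python
# Toy SCM in the spirit of Fig. 4. Exogenous noise picks the "scenario";
# structural equations are the causal mechanisms; do() overrides a variable.
# All equations here are illustrative assumptions.
import random

def sample_scm(do=None, rng=random):
    do = do or {}
    v = {}
    # Independent exogenous noise variables.
    n_mass, n_push = rng.gauss(0, 0.01), rng.gauss(0, 0.05)
    # Structural equations (parents -> child), overridable by interventions.
    v["mass"] = do.get("mass", 0.03 + n_mass)
    v["action"] = do.get("action", 1.0)
    # Heavier blocks move less for the same applied action.
    v["next_pos"] = do.get("next_pos", v["action"] / v["mass"] + n_push)
    return v

# Type-1 change: act through the environment (affects next_pos only via the
# mechanism). Type-2 change: intervene directly on a latent such as mass.
```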
+
+
+Figure 4: A subset of an SCM represented as a DAG for an environment in CausalWorld with one block on the floor. Here, we only show a subset of the causal variables affecting the block position at time $t + 1$ .
+
+# 3 RELATED WORK
+
+Previous benchmarks for RL mostly focused on the single-task learning setting, such as OpenAI Gym and the DM Control Suite (Tassa et al., 2018; Brockman et al., 2016). In contrast, a recent line of work, e.g. Meta-World and RLBench (Yu et al., 2019; James et al., 2020), aims at studying multi-task learning as well as meta-learning. Such benchmarks mostly provide non-parametric, hand-designed task variations; it is hence unclear how much structure is shared between them. For instance, it is not clear how different "opening a door" is from "opening a drawer". To address this ambiguity in the shared structure between tasks, CausalWorld was designed to allow interventions on many environment variables, giving rise to a large space of tasks with well-defined relations between them, which we believe is a missing key component for addressing generalization in RL. A detailed comparison between CausalWorld and similar benchmarks is shown in Table 1.
+
+Similar parametric formulations of different environments were used in the generalization-for-RL literature, which has played an important role in advancing the field (Packer et al., 2018; Rajeswaran et al., 2017; Pinto et al., 2017; Yu et al., 2017; Henderson et al., 2017a; Dulac-Arnold et al., 2020; Chevalier-Boisvert et al., 2018). In these previous works, variables were mostly changed randomly, as opposed to the full control over the variables provided by CausalWorld.
+
+Another important open problem for the RL community is the standardization of reported learning curves and results. RL methods have been shown to be sensitive to a range of different factors (Henderson et al., 2017b). Thus it is crucial to devise a set of metrics that measure the reliability of RL algorithms and ensure their reproducibility. Chan et al. (2019) distinguish between several evaluation modes, such as "evaluation during training" and "evaluation after learning". Osband et al. (2019) recently proposed a benchmarking suite that disentangles the ability of an algorithm to deal with different types of challenges. Its main components are enforcing a specific evaluation methodology beyond the environment definition and isolating core capabilities with targeted 'unit tests' rather than measuring general learning ability in an integrated fashion.
+
+Moreover, causality has historically been studied from the perspective of probabilistic and causal reasoning (Pearl, 2009), cognitive psychology (Griffiths & Tenenbaum, 2005), and more recently in the context of machine learning (Goyal et al., 2019b; Schölkopf et al., 2021; Baradel et al., 2019; Bakhtin et al., 2019). In contrast, we believe its link to robotics has not yet been drawn systematically. To bridge this gap, one of the main motivations of CausalWorld was to facilitate research in causal learning for robotics, such as observational discovery of causal effects in physical reality, counterfactual reasoning, and causal structure learning.
+
+# 4 EXPERIMENTS
+
+To illustrate the usage of this benchmark and to verify the feasibility of some basic tasks, we evaluate current state-of-the-art model-free (MF-RL) algorithms on a subset of the goal shape families described in Section 2 and depicted in Fig. 2: (a) Pushing, (b) Picking, (c) Pick and Place, and (d) Stacking2. These goal shapes reflect basic skills that are required to solve more complex construction tasks.
+
+Setup: The idea here is to investigate how well an agent will perform on different evaluation distributions, depending on the curriculum it has been trained with. We train each method under the following curricula:
+
+- Curriculum 0: no environment changes; each episode is initialized from the default task lying in space A - note that here the initial state never changes (i.e. no interventions).
+- Curriculum 1: goal shape randomization; at the beginning of each episode a new goal shape is sampled from space A (i.e. interventions on goal position and orientation).
+- Curriculum 2: full randomization w.r.t. the task variables; at every episode, a simultaneous intervention on all variables is sampled from space A (this can be seen as equivalent to extreme domain randomization within one space).
+
+The curriculum will, as expected, affect the generalization capabilities of the trained agents. With CausalWorld's formulation, these generalization capabilities can easily be disentangled and benchmarked quantitatively, as explained in Section 2. For each of the goal shape families (a, b, c, d from Fig. 2), we train agents under the three described curricula using the following MF-RL algorithms: The original Proximal Policy Optimization (PPO) from Schulman et al. (2017), Soft Actor-Critic (SAC) from Haarnoja et al. (2018) and the Twin Delayed DDPG (TD3) from Fujimoto et al. (2018). We provided these methods with a hand-designed dense reward function as we did not observe any success with the sparse reward only. Each of the mentioned setups is trained for five different random seeds, resulting in 180 trained agents.
+
+
+Figure 5: Fractional success curves averaged over five random seeds for the tasks and learning algorithms specified above, under three different training curricula: (0) no curriculum, (1) goal position and orientation randomization in space A every episode and (2) a curriculum where we intervene on all variables in space A simultaneously every episode.
+
+Training model-free RL methods: We report the training curves averaged over the random seeds in Fig. 5. As these fractional success training curves show, MF-RL methods are capable of solving the single-block goal shapes (pushing, picking, pick and place) seen during training, given enough experience. However, none of the methods studied here managed to solve stacking two blocks; a score below 0.5 indicates that the agent only learns to push the lower cube into the goal shape. This shows that multi-object target shapes can quickly become nontrivial and that there is a need for methods that exploit the modular structure of object-based environments. Unsurprisingly, the training curriculum has a major effect on learning. For example, methods rarely manage to pick up any significant success signal under extreme domain randomization as in curriculum 2, even after 100 million timesteps. Note that these curves represent scores under the conditions of the training environments. Next, we discuss shared evaluation protocols that allow benchmarking and comparing agents trained under different conditions.
+
+Benchmarking generalization capabilities along various axes: For each of the four goal shape families, we define a set of 12 evaluation protocols that we consider meaningful and representative for benchmarking the different algorithms. In the protocols presented here, we sample the values of a protocol-specific set of variables at the start of each episode while keeping all other variables fixed to their default values. We evaluate each agent on 200 episodes by computing the fractional success score at the last time step of each episode and reporting the mean. These evaluation protocols allow us to disentangle generalization abilities, as they probe robustness with respect to different types of interventions (see Fig. 6). The following are some of the observations we made for pushing:
+
+- Agents that were trained on the default pushing task environment (curriculum 0) do well (as expected) on the default task (P0). Interestingly, they likewise generalize to initial poses from variable space A (P4), which can be explained by substantial exploration of block positions via manipulation during training. In contrast, the agents exhibit weaknesses regarding goal poses (P5); they seem to overfit to their training settings in this case.
+
+
+Figure 6: Evaluation scores for pushing baselines. Each protocol was evaluated for 200 episodes and each bar is averaged over five models with different random seeds. The variables listed under each protocol are sampled from the specified space at the start of every episode while all other variables remain fixed [bp block pose, bm block mass, bs block size, gp goal pose, ff floor friction].
+
+- For agents trained with goal pose randomization (curriculum 1) we see similar results as with curriculum 0, with the difference that agents under this curriculum generalize robustly to different goal poses (P5), as one would expect.
+- Finally, agents that experience extreme domain randomization (curriculum 2) at training time fail to learn any relevant skill, as shown by the flat training curve in Fig. 5. One explanation for this behavior is that the agent might need more data and optimization steps to handle this much more challenging setting. Another possibility is that it may simply not be possible to find a strategy that works simultaneously for all parameters (note that the agent does not observe the randomized parameters and hence must be robust to them). This poses an interesting question for future work.
+
+As expected, we observe that an agent's generalization capabilities are related to the experience gathered under its training curriculum. CausalWorld allows us to explore this relationship in a differentiated manner, assessing which curricula lead to which generalization abilities. This will not only help uncover an agent's shortcomings but may also aid in investigating novel learning curricula and approaches for robustness in RL. Lastly, we note that this benchmark comprises extremely challenging tasks that appear to be out of reach of current model free methods without any additional inductive bias.
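The protocol evaluation described above (protocol-specific variables resampled at each episode reset, 200 episodes, mean fractional success at the final step) can be sketched generically; the environment interface below is a stand-in, not CausalWorld's actual API.

```python
# Sketch of a shared evaluation protocol. run_episode is a stand-in for
# resetting the environment with the sampled parameters, rolling out the
# policy, and returning the final-step fractional success.
import random

def evaluate_protocol(run_episode, protocol_space, episodes=200, rng=random):
    """run_episode(params) -> fractional success at the last time step."""
    scores = []
    for _ in range(episodes):
        # Protocol-specific variables are sampled; all others stay default.
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in protocol_space.items()}
        scores.append(run_episode(params))
    return sum(scores) / len(scores)
```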
+
+# 5 CONCLUSION
+
+We have introduced a new benchmark - CausalWorld - to facilitate research in causal structure and transfer learning using a simulated environment of an open-source robot, where learned skills could potentially be transferred to the real world. We showed how allowing for interventions on the environment's properties yields a diverse family of tasks with a natural way of defining learning curricula and evaluation protocols that can disentangle different generalization capabilities. A natural extension of our work is to develop new RL algorithms that focus on out-of-distribution generalization (whether it is a different subspace of the state space or a completely different task). We hope that the flexibility and modularity of CausalWorld will allow researchers to easily define appropriate benchmarks of increasing difficulty as the field progresses, thereby coordinating research efforts towards ever new goals.
+
+# 6 ACKNOWLEDGMENTS
+
+The authors would like to thank Felix Widmaier, Vaibhav Agrawal and Shruti Joshi for the useful discussions and for the development of the TriFinger robot's simulator (Joshi et al., 2020), which served as a starting point for the work presented in this paper. AG is also grateful to Alex Lamb and Rosemary Nan Ke for useful discussions. We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting FT.
+
+# REFERENCES
+
+Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, et al. Solving rubik's cube with a robot hand. arXiv preprint arXiv:1910.07113, 2019.
+Safa Alver and Doina Precup. A brief look at generalization in visual meta-reinforcement learning. arXiv preprint arXiv:2006.07262, 2020.
+Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor Mordatch. Emergent tool use from multi-agent autocurricula. arXiv preprint arXiv:1909.07528, 2019.
+Anton Bakhtin, Laurens van der Maaten, Justin Johnson, Laura Gustafson, and Ross Girshick. Phyre: A new benchmark for physical reasoning. In Advances in Neural Information Processing Systems, pp. 5082-5093, 2019.
+Victor Bapst, Alvaro Sanchez-Gonzalez, Carl Doersch, Kimberly L Stachenfeld, Pushmeet Kohli, Peter W Battaglia, and Jessica B Hamrick. Structured agents for physical construction. arXiv preprint arXiv:1904.03177, 2019.
+Fabien Baradel, Natalia Neverova, Julien Mille, Greg Mori, and Christian Wolf. Cophy: Counterfactual learning of physical dynamics. arXiv preprint arXiv:1909.12000, 2019.
+Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47: 253-279, 2013.
+Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysaw Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.
+Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
+Lars Buesing, Theophane Weber, Yori Zwols, Sebastien Racaniere, Arthur Guez, Jean-Baptiste Lespiau, and Nicolas Heess. Woulda, coulda, shoulda: Counterfactually-guided policy search. arXiv preprint arXiv:1811.06272, 2018.
+Yvonne M Caldera, Anne McDonald Culp, Marion O'Brien, Rosemarie T Truglio, Mildred Alvarez, and Aletha C Huston. Children's play preferences, construction play with blocks, and visual-spatial skills: Are they related? International Journal of Behavioral Development, 23(4):855-872, 1999.
+Beth M Casey, Nicole Andrews, Holly Schindler, Joanne E Kersh, Alexandra Samper, and Juanita Copley. The development of spatial skills through interventions involving block building activities. Cognition and Instruction, 26(3):269-309, 2008.
+Stephanie CY Chan, Sam Fishman, John Canny, Anoop Korattikara, and Sergio Guadarrama. Measuring the reliability of reinforcement learning algorithms. arXiv preprint arXiv:1912.05663, 2019.
+Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. Babyai: A platform to study the sample efficiency of grounded language learning. In International Conference on Learning Representations, 2018.
+Karl Cobbe, Oleg Klimov, Chris Hesse, Taehoon Kim, and John Schulman. Quantifying generalization in reinforcement learning. arXiv preprint arXiv:1812.02341, 2018.
+Erwin Coumans et al. Bullet real-time physics simulation. URL http://bulletphysics.org, 2013.
+Gwen Dewar. The benefits of toy blocks: The science of construction play. Parenting Science, 2018.
+
+Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, and Bernhard Schölkopf. On the transfer of disentangled representations in realistic settings. International Conference on Learning Representations (ICLR), 2021.
+Gabriel Dulac-Arnold, Nir Levine, Daniel J Mankowitz, Jerry Li, Cosmin Paduraru, Sven Gowal, and Todd Hester. An empirical investigation of the challenges of real-world reinforcement learning. arXiv preprint arXiv:2003.11881, 2020.
+Scott Elliott Fahlman. A planning system for robot construction tasks. Artificial intelligence, 5(1): 1-49, 1974.
+Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods, 2018.
+Anirudh Goyal, Riashat Islam, Daniel Strouse, Zafarali Ahmed, Matthew Botvinick, Hugo Larochelle, Yoshua Bengio, and Sergey Levine. Infobot: Transfer and exploration via the information bottleneck. arXiv preprint arXiv:1901.10902, 2019a.
+Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, and Bernhard Schölkopf. Recurrent independent mechanisms. arXiv preprint arXiv:1909.10893, 2019b.
+Thomas L Griffiths and Joshua B Tenenbaum. Structure and strength in causal induction. Cognitive psychology, 51(4):334-384, 2005.
+Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor, 2018.
+Peter Henderson, Wei-Di Chang, Florian Shkurti, Johanna Hansen, David Meger, and Gregory Dudek. Benchmark environments for multitask learning in continuous domains. arXiv preprint arXiv:1708.04352, 2017a.
+Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560, 2017b.
+Stephen James, Zicong Ma, David Rovick Arrojo, and Andrew J Davison. Rlbench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019-3026, 2020.
+Michael Janner, Sergey Levine, William T Freeman, Joshua B Tenenbaum, Chelsea Finn, and Jiajun Wu. Reasoning about physical interactions with object-oriented prediction and planning. arXiv preprint arXiv:1812.10972, 2018.
+Shruti Joshi, Felix Widmaier, Vaibhav Agrawal, and Manuel Wüthrich. https://github.com/open-dynamic-robot-initiative/trifinger_simulation, 2020.
+Constance Kamii, Yoko Miyakawa, and Yasuhiko Kato. The development of logico-mathematical knowledge in a block-building activity at ages 1-4. Journal of Research in Childhood Education, 19(1):44-57, 2004.
+Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.
+Youngwoon Lee, Edward S. Hu, Zhengyu Yang, Alex Yin, and Joseph J. Lim. Ikea furniture assembly environment for long-horizon complex manipulation tasks, 2019.
+Will McCarthy, David Kirsh, and Judith Fan. Learning to build physical structures better over time.
+
+Mehdi Mirza, Andrew Jaegle, Jonathan J. Hunt, Arthur Guez, Saran Tunyasuvunakool, Alistair Muldal, Théophane Weber, Peter Karkus, Sébastien Racanière, Lars Buesing, Timothy Lillicrap, and Nicolas Heess. Physically embedded planning problems: New challenges for reinforcement learning, 2020.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
+Swiya Nath and Dénes Szücs. Construction play and cognitive skills associated with the development of mathematical abilities in 7-year-old children. Learning and Instruction, 32:73-80, 2014.
+Ian Osband, Yotam Doron, Matteo Hessel, John Aslanides, Eren Sezener, Andre Saraiva, Katrina McKinney, Tor Lattimore, Csaba Szepesvari, Satinder Singh, et al. Behaviour suite for reinforcement learning. arXiv preprint arXiv:1908.03568, 2019.
+Charles Packer, Katelyn Gao, Jernej Kos, Philipp Krähenbuhl, Vladlen Koltun, and Dawn Song. Assessing generalization in deep reinforcement learning. arXiv preprint arXiv:1810.12282, 2018.
+Judea Pearl. Causality. Cambridge university press, 2009.
+Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarial reinforcement learning. arXiv preprint arXiv:1703.02702, 2017.
+Aravind Rajeswaran, Kendall Lowrey, Emanuel V Todorov, and Sham M Kakade. Towards generalization and simplicity in continuous control. In Advances in Neural Information Processing Systems, pp. 6550-6561, 2017.
+Miles Richardson, Thomas E Hunt, and Cassandra Richardson. Children's construction task performance and spatial ability: Controlling task complexity and predicting mathematics performance. *Perceptual and motor skills*, 119(3):741-757, 2014.
+Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. Towards causal representation learning. arXiv preprint arXiv:2102.11107, 2021.
+John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms, 2017.
+David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. nature, 550(7676):354-359, 2017.
+Richard S Sutton and Andrew G Barto. Reinforcement learning. Journal of Cognitive Neuroscience, 11(1):126-134, 1999.
+Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. Deepmind control suite. arXiv preprint arXiv:1801.00690, 2018.
+Brian N Verdine, Roberta Michnick Golinkoff, Kathy Hirsh-Pasek, and Nora Newcombe. Links between spatial and mathematical skills across the preschool years. Wiley, 2017.
+Shimon Whiteson, Brian Tanner, Matthew E Taylor, and Peter Stone. Protecting against evaluation overfitting in empirical reinforcement learning. In 2011 IEEE symposium on adaptive dynamic programming and reinforcement learning (ADPRL), pp. 120-127. IEEE, 2011.
+Terry Winograd. Understanding natural language. Cognitive psychology, 3(1):1-191, 1972.
+Patrick H Winston. Learning structural descriptions from examples. 1970.
+Manuel Wüthrich, Felix Widmaier, Felix Grimminger, Joel Akpo, Shruti Joshi, Vaibhav Agrawal, Bilal Hammoud, Majid Khadiv, Miroslav Bogdanovic, Vincent Berenz, et al. Trifinger: An opensource robot for learning dexterity. arXiv preprint arXiv:2008.03596, 2020.
+
+Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. arXiv preprint arXiv:1910.10897, 2019.
+Wenhao Yu, Jie Tan, C Karen Liu, and Greg Turk. Preparing for the unknown: Learning a universal policy with online system identification. arXiv preprint arXiv:1702.02453, 2017.
+
+# 7 APPENDIX
+
+# A OBSERVATIONS
+
+Observations in CausalWorld come in two modes, "structured" and "pixel". In "pixel" mode, six images are returned: the current images rendered from three different views on top of the TriFinger platform, showing the current state of the environment, as well as the three equivalent goal images rendered from the same points of view, showing the goal shape that the robot has to build by the end of the episode.
+
+
+(a) Current View60
+
+
+(b) Goal View60
+
+
+(c) Current View120
+
+
+(d) Goal View120
+
+
+(e) Current View300
+
+
+(f) Goal View300
+Figure 7: Example "pixel" mode observations returned at each step of the environment.
+
+
+Figure 8: Structured observation description. For the scene features, all block feature vectors are concatenated first. Following that, the partial-goal feature vectors are concatenated in the same order. Lastly, if there are any obstacles/fixed blocks, their feature vectors are concatenated at the end, following the same description as the partial-goal features.
+
+# B TRIFINGER PLATFORM
+
+The robot from Wüthrich et al. (2020), shown in Figure 9, is open-sourced and can be reproduced and built in any research lab. Since it is inexpensive (about $5000), it can speed up sim2real research.
+
+
+Figure 9: The TriFinger platform.
+
+# C TASK GENERATORS
+
+1. Pushing: task where the goal is to push one block towards a goal position with a specific orientation; restricted to goals on the floor level.
+2. Picking: task where the goal is to lift one block to a goal height above the center of the arena; restricted to goals above the floor level.
+
+3. Pick And Place: task where the arena is divided by a fixed long block and the goal is to pick one block from one side of the arena and place it at a goal position with a variable orientation on the other side of the fixed block.
+4. Stacking2: task where the goal is to stack two blocks above each other in a specific goal position and orientation.
+5. Towers: task where the goal is to stack $n$ blocks exactly above each other at a specific goal position and orientation, creating a tower of blocks.
+6. Stacked Blocks: task where the goal is to stack $n$ blocks above each other in an arbitrary way to create a stable structure. The blocks don't have to be exactly above each other, making this more challenging than the ordinary Towers task, since it is harder to come up with a stable structure that covers the goal shape volume.
+7. Creative Stacked Blocks: exactly the same as the Stacked Blocks task, except that only the first and last levels of the goal shape are shown or "imposed"; the rest of the structure is not explicitly specified, leaving it to the imagination of the agent itself. This is considered the most challenging generator, since it requires the agent to understand how to build stable structures and to imagine what can fill the middle to connect the two levels in a stable way.
+8. General: the goal shape is an arbitrary shape created by initially dropping an arbitrary number of blocks from above the ground and waiting until all blocks come to rest; this resting configuration becomes the goal shape that the agent subsequently needs to fill.
+
+| Variable | Sub Variable | Space A | Space B |
+| gravity [z] | - | $[-10, -7]$ | $[-7, -4]$ |
+| floor friction | - | $[0.3, 0.6]$ | $[0.6, 0.8]$ |
+| stage friction | - | $[0.3, 0.6]$ | $[0.6, 0.8]$ |
+| stage color [rgb] | - | $[0.0, 0.5]^3$ | $[0.5, 1]^3$ |
+| floor color [rgb] | - | $[0.0, 0.5]^3$ | $[0.5, 1]^3$ |
+| joint positions | - | $[[-1.57, -1.2, -3.0]^3, [-0.69, 0, 0]^3]$ | $[[-0.69, 0, 0]^3, [1.0, 1.57, 3.0]^3]$ |
+| block | size | $[0.055, 0.075]^3$ | $[0.075, 0.095]^3$ |
+| block | color | $[0.0, 0.5]^3$ | $[0.5, 1]^3$ |
+| block | mass | $[0.015, 0.045]$ | $[0.045, 0.1]$ |
+| block | position (cylindrical) | $[[0, -\pi, h/2], [0.11, \pi, 0.15]]$ | $[[0.11, -\pi, h/2], [0.15, \pi, 0.3]]$ |
+| goal cuboid | size | $[0.055, 0.075]^3$ | $[0.075, 0.095]^3$ |
+| goal cuboid | color | $[0.0, 0.5]^3$ | $[0.5, 1]^3$ |
+| link | color | $[0.0, 0.5]^3$ | $[0.5, 1]^3$ |
+| link | mass | $[0.015, 0.045]$ | $[0.045, 0.1]$ |
+
+Table 2: Description of a subset of the high-level variables exposed in CausalWorld and their corresponding spaces; $h$ refers to the height of the block.
+
+| Task Generator | Variable | Space A | Space B |
+| --- | --- | --- | --- |
+| Picking | goal height | [0.08, 0.20] | [0.20, 0.25] |
+| Towers | tower dims | [[0.08, 0.08, 0.08], [0.12, 0.12, 0.12]] | [[0.12, 0.12, 0.12], [0.20, 0.20, 0.20]] |
+
+Table 3: Examples of task-generator-specific high-level variables exposed in CausalWorld and their corresponding spaces. For a full list of each task generator's variables and their corresponding spaces, please refer to the documentation at https://sites.google.com/view/causal-world/home.
+
+| Task generators | Dense reward |
+| --- | --- |
+| Pushing | $-750\,\Delta^t(o_1,e) - 250\,\Delta^t(o_1,g_1)$ |
+| Picking | $-750\,\Delta^t(o_1,e) - 250\,\Delta^t(o_{1,z},g_{1,z}) - 125\,\Delta^t(o_{1,xy},g_{1,xy}) - 0.005\,\|v^t - v^{t-1}\|$ |
+| Pick and Place | $-750\,\Delta^t(o_1,e) - 50\,\Delta^t(o_{1,xy},g_{1,xy}) - 250\,(|o^t_{1,z} - t| - |o^{t-1}_{1,z} - t|) - 0.005\,\|v^t - v^{t-1}\|$ |
+| Stacking | $\mathbb{1}_{d^t(o_1,e)>0.02}\,(-750\,\Delta^t(o_1,e) - 250\,\Delta^t(o_1,g_1)) + \mathbb{1}_{d^t(o_1,e)<0.02}\,(-750\,\Delta^t(o_2,e) - 250\,(|o^t_{2,z}-g_{2,z}| - |o^{t-1}_{2,z}-g_{2,z}|) - \mathbb{1}_{o_{2,z}-g_{2,z}>0}\,125\,\Delta^t(o_{2,xy},g_{2,xy})) - 0.005\,\|v^t - v^{t-1}\|$ |
+
+Table 4: Description of the dense rewards applied in our experiments. The following notation is used: $v^{t} \in \mathbf{R}^{3}$ joint velocities, $e_{i}^{t} \in \mathbf{R}^{3}$ $i$-th end-effector position, $o_{i}^{t} \in \mathbf{R}^{3}$ $i$-th block position, $g_{i}^{t} \in \mathbf{R}^{3}$ $i$-th goal block position, $d^{t}(o,e) = \sum_{i}||e_{i}^{t} - o^{t}||$ the distance between the end-effectors and the block, and $\Delta^{t}(o,e) = d^{t}(o,e) - d^{t-1}(o,e)$ the distance difference w.r.t. the previous timestep. The target height parameter $t$ for pick and place is 0.15 if the block and the goal are of different heights; otherwise, $t$ is half the goal height.
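To make the reward structure concrete, the pushing reward from Table 4 can be sketched as follows. This is an illustrative re-implementation, not CausalWorld's actual code; the function and argument names are ours.

```python
import math

def dist(a, b):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pushing_reward(ee_prev, ee_curr, block_prev, block_curr, goal):
    """Sketch of the pushing dense reward: -750*Δ^t(o1,e) - 250*Δ^t(o1,g1),
    where Δ^t is the change in distance w.r.t. the previous timestep
    (a negative change, i.e. getting closer, yields positive reward)."""
    # d^t(o, e): summed distance between the end-effectors and the block
    d_prev = sum(dist(e, block_prev) for e in ee_prev)
    d_curr = sum(dist(e, block_curr) for e in ee_curr)
    delta_oe = d_curr - d_prev
    # Δ^t(o1, g1): change in block-to-goal distance
    delta_og = dist(block_curr, goal) - dist(block_prev, goal)
    return -750 * delta_oe - 250 * delta_og
```

For example, a step in which the fingers keep contact with the block while the block moves 5 cm closer to the goal yields a positive reward of 250 × 0.05 = 12.5.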
+
+# D TRAINING DETAILS
+
+The experiments were carried out using the Stable Baselines implementations of PPO, SAC and TD3. We used a 2-layer MLP policy [256, 256] for all policies. PPO was trained with 20 parallel workers for up to 100 million timesteps, while SAC and TD3 were trained serially for 10 million timesteps.
+
+| PPO | | SAC | | TD3 | |
+| --- | --- | --- | --- | --- | --- |
+| discount | 0.99 | discount | 0.95 | discount | 0.96 |
+| batch size | 120000 | entropy coeff | 1e-3 | batch size | 128 |
+| learning rate | 2.5e-4 | batch size | 256 | learning rate | 1e-4 |
+| entropy coef. | 0.01 | learning rate | 1e-4 | buffer size | 500000 |
+| value function coef. | 0.5 | target entropy | auto | tau | 0.02 |
+| gradient clipping (max) | 0.5 | buffer size | 1000000 | | |
+| n minibatches per update | 40 | tau | 0.001 | | |
+| n training epochs | 4 | | | | |
+
+Table 5: Hyper-parameters of the learning algorithms used in the baseline experiments.
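A minimal sketch of how the PPO settings above fit together; the field names are ours, not the Stable Baselines keyword arguments. It also makes the implied arithmetic explicit: each update's batch of 120,000 transitions is split into 40 minibatches, and 20 workers contribute 6,000 steps each per batch.

```python
from dataclasses import dataclass

@dataclass
class PPOConfig:
    """PPO hyper-parameters from Table 5 (illustrative names)."""
    discount: float = 0.99
    batch_size: int = 120_000
    learning_rate: float = 2.5e-4
    entropy_coef: float = 0.01
    value_function_coef: float = 0.5
    max_grad_norm: float = 0.5       # gradient clipping (max)
    n_minibatches: int = 40
    n_epochs: int = 4
    n_workers: int = 20

    @property
    def minibatch_size(self) -> int:
        # 120,000 transitions split into 40 minibatches per update
        return self.batch_size // self.n_minibatches

    @property
    def steps_per_worker(self) -> int:
        # with 20 parallel workers, each contributes this many steps per batch
        return self.batch_size // self.n_workers
```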
+
+
+Figure 10: An example of model selection in CausalWorld by evaluating generalization across the various axes using the previously mentioned protocols. Here we compare two agents trained on different curricula using PPO.
+
+Figure 11: Evaluation scores, for pushing, picking, pick and place and stacking2 baselines, from top to bottom respectively. Each protocol was evaluated for 200 episodes and each bar is averaged over five models with different random seeds [bp block pose, bm block mass, bs block size, gp goal pose, ff floor friction].
+
+# E CAUSALITY IN REINFORCEMENT LEARNING
+
+Definition 1: A partially observable Markov decision process (POMDP) is defined by the tuple $(S, A, T, R, \Omega, O, \gamma, \rho_0, H)$ with states $s \in S$ , actions $a \in A$ and observations $o \in \Omega$ determined by the state and action of the environment via $O(o|s, a)$ . $T(s_{t+1}|s_t, a_t)$ is the transition probability distribution function, $R(s_t, a_t)$ is the reward function, $\gamma$ is the discount factor, $\rho_0(s)$ is the initial state distribution at the beginning of each episode, and $H$ is the time horizon per episode. The objective of RL algorithms is to learn a policy $\pi(a_t|h_t)$ , with history $h_t = (o_1, a_1, \dots, a_{t-1}, o_t)$ , that maximizes the discounted expected reward $J(\pi) = \mathbb{E}^\pi\left[\sum_{t=0}^H \gamma^t r_t\right]$ .
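The objective $J(\pi)$ is simply the discounted sum of rewards along an episode; a one-line sketch:

```python
def discounted_return(rewards, gamma):
    """J = sum_{t=0}^{H} gamma^t * r_t for one episode (Definition 1).
    `rewards` is the sequence r_0, ..., r_H collected under the policy."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))
```

For instance, rewards (1, 1, 1) with γ = 0.5 give 1 + 0.5 + 0.25 = 1.75.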
+
+Definition 2: A structural causal model (SCM) $M$ over $X = (X_{1},\dots,X_{N})$ is given by a DAG $G$ over nodes $X$ , independent noise RVs $U = (U_{1},\dots,U_{N})$ with distributions $P_{U_i}$ and functions $f_{1},\ldots ,f_{N}$ such that $X_{i} = f_{i}(pa_{i},U_{i})$ , where $pa_{i}\subset X$ are the parents of $X_{i}$ in $G$ . An SCM entails a distribution $P$ with density $p$ over $(X,U)$ .
+
+Definition 3: An intervention $I$ in an SCM $M$ consists of replacing the RHS of one or more structural assignments, $f_{i}(pa_{i},U_{i})$ , by $f_{i}^{I}(pa_{i}^{I},U_{i})$ for $i \in I \subseteq \{1,\dots,N\}$ , where $pa_i^I$ are the parents of $X_i$ in a new DAG $G^{I}$ . The resulting SCM is denoted $M^{do(I)}$ , with distribution $P^{do(I)}$ and density $p^{do(I)}$ .
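Definitions 2 and 3 can be sketched with a toy two-node SCM sampled in topological order; the interface (`f`, `noise`, `order`) is illustrative, not from any causal-inference library.

```python
import random

def sample_scm(f, noise, order):
    """Ancestral sampling from an SCM: X_i = f_i(pa_i, U_i) (Definition 2).
    `f` maps node -> function(values_so_far, u); `noise` maps node -> noise
    sampler; `order` is a topological order of the DAG G."""
    x = {}
    for node in order:
        x[node] = f[node](x, noise[node]())
    return x

# Toy SCM: X1 = U1,  X2 = 2*X1 + U2
f = {"X1": lambda x, u: u,
     "X2": lambda x, u: 2 * x["X1"] + u}
noise = {"X1": lambda: random.gauss(0, 1),
         "X2": lambda: random.gauss(0, 0.1)}

# Intervention do(X1 := 1): replace f_1 by a constant function (Definition 3),
# giving the modified model M^do(I)
f_do = dict(f, X1=lambda x, u: 1.0)
```

Sampling with `f_do` instead of `f` draws from $P^{do(I)}$ rather than $P$; the mechanism for $X_2$ is untouched.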
+
+Representing POMDPs as SCMs: A POMDP can be represented as an SCM $M$ by unrolling the POMDP through time and expressing all conditional distributions, e.g. the transition kernel $P_{S_t + 1|S_t,A_t}$ , as deterministic functions (which reflect the causal mechanisms of the system) with independent noise variables $U$ , such as $S_{t + 1} = f_{st}(S_t,A_t,U_{st})$ . This is always possible using an autoregressive formulation, as shown in (Buesing et al., 2018). The SCM determines a distribution $P^{\pi}$ over trajectories $T$ ; consequently, running a different policy $\mu$ (instead of $\pi$ ) in the environment is itself an intervention $I(\pi \rightarrow \mu)$ , resulting in the model distribution $P^{do(I(\pi \rightarrow \mu))}$ . Given the SCM, one can reason about the alternative outcomes of different actions or even different environment properties (such as friction), known as counterfactual inference, which is not possible if we only model the system as a conditional distribution $P_{O|A,U_c}$ .
+
+Generalizing agents: An agent generalizes well to changes in the environment if it achieves equal performance after an intervention $I$ is performed on the SCM $M$ , resulting in $M^{do(I)}$ . This can be accomplished, for instance, by reusing the learned causal mechanisms, as explained in (Scholkopf et al., 2021). For example, an intervention on the color of one of the available blocks in the environment does not change the underlying causal mechanisms; thus, such an intervention on a non-causal variable should not affect the resulting actions of the agent. Nevertheless, this is often not the case for RL agents trained on visual observations: (Alver & Precup, 2020) recently showed that algorithms designed specifically for Meta-RL can display strong overfitting when evaluated on challenging visual tasks. On the other hand, if an intervention is performed on a variable that is causal with respect to a policy $\pi$ , such as the mass of an object, the resulting action sequence is expected to change for robust control. Additionally, if an intervention is performed on some of the goal variables, the reward function might also change, and a robust controller should be able to react accordingly.
\ No newline at end of file
diff --git a/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/images.zip b/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..2556d6d063755a857b7fd6abbc1ef960a7a0bbe1
--- /dev/null
+++ b/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c8144cdce5e053f088c9a06c9f13c59124f9fc978db0abfac762156d3ef5ff8
+size 885899
diff --git a/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/layout.json b/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..4a694d7727e042156b763772c1125680f903eec4
--- /dev/null
+++ b/causalworldaroboticmanipulationbenchmarkforcausalstructureandtransferlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:58753252374d04ff708acf65383e6f0e4aaf194687229f1fcb5f9b0232ba8b29
+size 481390
diff --git a/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/3c93ecac-0974-46a1-875c-d69d86e28ea3_content_list.json b/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/3c93ecac-0974-46a1-875c-d69d86e28ea3_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ba6c754a2a2da51da3b25a6d3bcdf885a1b0455e
--- /dev/null
+++ b/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/3c93ecac-0974-46a1-875c-d69d86e28ea3_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e30d2aa96f9096707255325e5b154365fa2c7b5507561efd4fa360b0946228cc
+size 193059
diff --git a/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/3c93ecac-0974-46a1-875c-d69d86e28ea3_model.json b/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/3c93ecac-0974-46a1-875c-d69d86e28ea3_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..47442f76cf3d99ef1701c8afaae8e8ebb9831cfd
--- /dev/null
+++ b/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/3c93ecac-0974-46a1-875c-d69d86e28ea3_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2adb029537fd31d9df6db6918de10341f4896adef600b619b0fb0daa1dd2b18b
+size 223798
diff --git a/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/3c93ecac-0974-46a1-875c-d69d86e28ea3_origin.pdf b/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/3c93ecac-0974-46a1-875c-d69d86e28ea3_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b4c0876c52f8d0962d938660fe8c0c6a60ba209f
--- /dev/null
+++ b/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/3c93ecac-0974-46a1-875c-d69d86e28ea3_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:adf8f788f4cc704ac231098aae7f0ba7dad4dd02d326b14bda08b2dd0760bd62
+size 3722892
diff --git a/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/full.md b/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fc5387c857e86adee967fa1337d59a007767bace
--- /dev/null
+++ b/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/full.md
@@ -0,0 +1,869 @@
+# CCGAN: CONTINUOUS CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS FOR IMAGE GENERATION
+
+Xin Ding*, Yongwei Wang*, Zuheng Xu, William J. Welch, Z. Jane Wang
+
+The University of British Columbia
+
+{xin.ding@stat, yongweiw@ece, zuheng.xu@stat, will@stat, zjanew@ece}.ubc.ca
+
+# ABSTRACT
+
+This work proposes the continuous conditional generative adversarial network (CcGAN), the first generative model for image generation conditional on continuous, scalar conditions (termed regression labels). Existing conditional GANs (cGANs) are mainly designed for categorical conditions (e.g., class labels); conditioning on regression labels is mathematically distinct and raises two fundamental problems: (P1) Since there may be very few (even zero) real images for some regression labels, minimizing existing empirical versions of cGAN losses (a.k.a. empirical cGAN losses) often fails in practice; (P2) Since regression labels are scalar and infinitely many, conventional label input methods (e.g., combining a hidden map of the generator/discriminator with a one-hot encoded label) are not applicable. The proposed CcGAN solves the above problems, respectively, by (S1) reformulating existing empirical cGAN losses to be appropriate for the continuous scenario; and (S2) proposing a novel method to incorporate regression labels into the generator and the discriminator. The reformulation in (S1) leads to two novel empirical discriminator losses, termed the hard vicinal discriminator loss (HVDL) and the soft vicinal discriminator loss (SVDL) respectively, and a novel empirical generator loss. The error bounds of a discriminator trained with HVDL and SVDL are derived under mild assumptions in this work. A new benchmark dataset, RC-49, is also proposed for generative image modeling conditional on regression labels. Our experiments on the Circular 2-D Gaussians, RC-49, and UTKFace datasets show that CcGAN is able to generate diverse, high-quality samples from the image distribution conditional on a given regression label. Moreover, in these experiments, CcGAN substantially outperforms cGAN both visually and quantitatively.
+
+# 1 INTRODUCTION
+
+Conditional generative adversarial networks (cGANs), first proposed in (Mirza & Osindero, 2014), aim to estimate the distribution of images conditioning on some auxiliary information, especially class labels. Subsequent studies (Odena et al., 2017; Miyato & Koyama, 2018; Brock et al., 2019; Zhang et al., 2019) confirm the feasibility of generating diverse, high-quality (even photo-realistic), and class-label consistent fake images from class-conditional GANs. Unfortunately, these cGANs do not work well for image generation with continuous, scalar conditions, termed regression labels, due to two problems:
+
+(P1) cGANs are often trained to minimize the empirical versions of their losses (a.k.a. the empirical cGAN losses) on some training data, a principle also known as the empirical risk minimization (ERM) (Vapnik, 2000). The success of ERM relies on a large sample size for each distinct condition. Unfortunately, we usually have only a few real images for some regression labels. Moreover, since regression labels are continuous, some values may not even appear in the training set. Consequently, a cGAN cannot accurately estimate the image distribution conditional on such missing labels.
+(P2) In class-conditional image generation, class labels are often encoded by one-hot vectors or label embedding and then fed into the generator and discriminator by hidden concatenation (Mirza & Osindero, 2014), an auxiliary classifier (Odena et al., 2017) or label projection (Miyato & Koyama, 2018). A precondition for such label encoding is that the number of distinct labels (e.g., the number of classes) is finite and known. Unfortunately, in the continuous scenario, we may have infinitely many distinct regression labels.
+
+A naive approach to solve (P1)-(P2) is to "bin" the regression labels into a series of disjoint intervals and still train a cGAN in the class-conditional manner (these intervals are treated as independent classes) (Olmschenk, 2019). However, this approach has four shortcomings: (1) our experiments in Section 4 show that this approach often makes cGANs collapse; (2) we can only estimate the image distribution conditional on membership in an interval and not on the target label; (3) a large interval width leads to high label inconsistency; (4) inter-class correlation is not considered (images in successive intervals have similar distributions).
+
+In machine learning, vicinal risk minimization (VRM) (Vapnik, 2000; Chapelle et al., 2001) is an alternative rule to ERM. VRM assumes that a sample point shares the same label with other samples in its vicinity. Motivated by VRM, in generative modeling conditional on regression labels, where we estimate a conditional distribution $p(\pmb{x}|\pmb{y})$ ( $\pmb{x}$ is an image and $y$ is a regression label), it is natural to assume that a small perturbation to $y$ results in a negligible change to $p(\pmb{x}|\pmb{y})$ . This assumption is consistent with our perception of the world. For example, the image distribution of facial features for a population of 15-year-old teenagers should be close to that of 16-year-olds.
+
+We therefore introduce the continuous conditional GAN (CcGAN) to tackle (P1) and (P2). To the best of our knowledge, this is the first generative model for image generation conditional on regression labels. It is noted that Rezagholizadeh et al. (2018) and Rezagholiradeh & Haidar (2018) train GANs in an unsupervised manner and synthesize unlabeled fake images for a subsequent image regression task. Olmschenk et al. (2019) proposes a semi-supervised GAN for dense crowd counting. CcGAN is fundamentally different from these works since they do not estimate the conditional image distribution. Our contributions can be summarized as follows:
+
+- We propose in Section 2 the CcGAN to address (P1) and (P2), which consists of two novel empirical discriminator losses, termed the hard vicinal discriminator loss (HVDL) and the soft vicinal discriminator loss (SVDL), a novel empirical generator loss, and a novel label input method. We take the vanilla cGAN loss as an example to show how to derive HVDL, SVDL, and the novel empirical generator loss by reformulating existing empirical cGAN losses.
+- We derive in Section 3 the error bounds of a discriminator trained with HVDL and SVDL.
+- In Section 4, we propose a new benchmark dataset, RC-49, for generative image modeling conditional on regression labels, since very few benchmark datasets are suitable for the studied continuous scenario. We conduct experiments on several datasets, and our experiments show that CcGAN not only generates diverse, high-quality, and label-consistent images, but also substantially outperforms cGAN both visually and quantitatively.
+
+# 2 FROM CGAN TO CCGAN
+
+In this section, we provide the solutions (S1)-(S2) to (P1)-(P2) in a one-to-one manner by introducing the continuous conditional GAN (CcGAN). Please note that theoretically cGAN losses (e.g., the vanilla cGAN loss (Mirza & Osindero, 2014), the Wasserstein loss (Arjovsky et al., 2017), and the hinge loss (Miyato et al., 2018)) are suitable for both class labels and regression labels; however, their empirical versions fail in the continuous scenario (i.e., (P1)). Our first solution (S1) focuses on reformulating these empirical cGAN losses to fit into the continuous scenario. Without loss of generality, we only take the vanilla cGAN loss as an example to show such reformulation (the empirical versions of the Wasserstein loss and the hinge loss can be reformulated similarly).
+
+The vanilla discriminator loss and generator loss (Mirza & Osindero, 2014) are defined as:
+
+$$
+\begin{array}{l} \mathcal {L} (D) = - \mathbb {E} _ {y \sim p _ {r} (y)} \left[ \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ \log (D (\boldsymbol {x}, y)) ] \right] - \mathbb {E} _ {y \sim p _ {g} (y)} \left[ \mathbb {E} _ {\boldsymbol {x} \sim p _ {g} (\boldsymbol {x} | y)} [ \log (1 - D (\boldsymbol {x}, y)) ] \right] \\ = - \int \log (D (\boldsymbol {x}, y)) p _ {r} (\boldsymbol {x}, y) d \boldsymbol {x} d y - \int \log (1 - D (\boldsymbol {x}, y)) p _ {g} (\boldsymbol {x}, y) d \boldsymbol {x} d y, \tag {1} \\ \end{array}
+$$
+
+$$
+\mathcal {L} (G) = - \mathbb {E} _ {y \sim p _ {g} (y)} \left[ \mathbb {E} _ {\boldsymbol {z} \sim q (\boldsymbol {z})} [ \log (D (G (\boldsymbol {z}, y), y)) ] \right] = - \int \log (D (G (\boldsymbol {z}, y), y)) q (\boldsymbol {z}) p _ {g} (y) d \boldsymbol {z} d y, \tag {2}
+$$
+
+where $\pmb{x} \in \mathcal{X}$ is an image of size $d \times d$ , $y \in \mathcal{Y}$ is a label, $p_r(y)$ and $p_g(y)$ are respectively the true and fake label marginal distributions, $p_r(\pmb{x}|y)$ and $p_g(\pmb{x}|y)$ are respectively the true and fake image distributions conditional on $y$ , $p_r(\pmb{x}, y)$ and $p_g(\pmb{x}, y)$ are respectively the true and fake joint distributions of $\pmb{x}$ and $y$ , and $q(z)$ is the probability density function of $\mathcal{N}(\mathbf{0}, \pmb{I})$ .
+
+Since the distributions in the losses of Eqs. (1) and (2) are unknown, for class-conditional image generation, Mirza & Osindero (2014) follows ERM and minimizes the empirical losses:
+
+$$
+\widehat {\mathcal {L}} ^ {\delta} (D) = - \frac {1}{N ^ {r}} \sum_ {c = 1} ^ {C} \sum_ {j = 1} ^ {N _ {c} ^ {r}} \log \left(D \left(\boldsymbol {x} _ {c, j} ^ {r}, c\right)\right) - \frac {1}{N ^ {g}} \sum_ {c = 1} ^ {C} \sum_ {j = 1} ^ {N _ {c} ^ {g}} \log \left(1 - D \left(\boldsymbol {x} _ {c, j} ^ {g}, c\right)\right), \tag {3}
+$$
+
+$$
+\hat {\mathcal {L}} ^ {\delta} (G) = - \frac {1}{N ^ {g}} \sum_ {c = 1} ^ {C} \sum_ {j = 1} ^ {N _ {c} ^ {g}} \log \left(D \left(G \left(\boldsymbol {z} _ {c, j}, c\right), c\right)\right), \tag {4}
+$$
+
+where $C$ is the number of classes, $N^r$ and $N^g$ are respectively the number of real and fake images, $N_c^r$ and $N_c^g$ are respectively the number of real and fake images with label $c$ , $\pmb{x}_{c,j}^r$ and $\pmb{x}_{c,j}^g$ are respectively the $j$ -th real image and the $j$ -th fake image with label $c$ , and the $\pmb{z}_{c,j}$ are independently and identically sampled from $q(z)$ . Eq. (3) implies we estimate $p_r(x,y)$ and $p_g(x,y)$ by their empirical probability density functions as follows:
+
+$$
+\hat {p} _ {r} ^ {\delta} (\boldsymbol {x}, y) = \frac {1}{N ^ {r}} \sum_ {c = 1} ^ {C} \sum_ {j = 1} ^ {N _ {c} ^ {r}} \delta \left(\boldsymbol {x} - \boldsymbol {x} _ {c, j} ^ {r}\right) \delta (y - c), \quad \hat {p} _ {g} ^ {\delta} (\boldsymbol {x}, y) = \frac {1}{N ^ {g}} \sum_ {c = 1} ^ {C} \sum_ {j = 1} ^ {N _ {c} ^ {g}} \delta \left(\boldsymbol {x} - \boldsymbol {x} _ {c, j} ^ {g}\right) \delta (y - c), \tag {5}
+$$
+
+where $\delta (\cdot)$ is a Dirac delta mass centered at 0. However, $\hat{p}_r^\delta (\pmb {x},y)$ and $\hat{p}_g^\delta (\pmb {x},y)$ in Eq. (5) are not good estimates in the continuous scenario because of (P1).
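A quick numerical illustration of why the Dirac-delta estimates break down in the continuous scenario: an arbitrary query label almost surely matches no training label exactly, so the estimate in Eq. (5) assigns it no samples at all, whereas a small vicinity of the query still contains many samples. The variable names and constants below are illustrative.

```python
import random

random.seed(1)
# Draw N continuous regression labels, standing in for a training set
train_labels = [random.uniform(0.0, 1.0) for _ in range(10_000)]

# The Dirac-delta empirical estimate (Eq. 5) places mass only on labels that
# literally occur in training; an arbitrary query label matches none of them.
query = 0.314159
exact_matches = sum(1 for y in train_labels if y == query)

# Labels falling in a small hard vicinity of the query, as HVE/SVE use instead
kappa = 0.01
vicinal = sum(1 for y in train_labels if abs(y - query) <= kappa)
```

With 10,000 uniform labels, `exact_matches` is 0 while roughly 200 labels land within κ = 0.01 of the query.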
+
+To overcome (P1), we propose a novel estimate for each of $p_r(x, y)$ and $p_g(x, y)$ , termed the hard vicinal estimate (HVE). We also provide an intuitive alternative to HVE, named the soft vicinal estimate (SVE). The HVEs of $p_r(x, y)$ and $p_g(x, y)$ are:
+
+$$
+\hat {p} _ {r} ^ {\mathrm {H V E}} (\boldsymbol {x}, y) = C _ {1} \cdot \left[ \frac {1}{N ^ {r}} \sum_ {j = 1} ^ {N ^ {r}} \exp \left(- \frac {(y - y _ {j} ^ {r}) ^ {2}}{2 \sigma^ {2}}\right) \right] \cdot \left[ \frac {1}{N _ {y , \kappa} ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} \mathbb {1} _ {\{| y - y _ {i} ^ {r} | \leq \kappa \}} \delta (\boldsymbol {x} - \boldsymbol {x} _ {i} ^ {r}) \right],
+$$
+
+$$
+\hat {p} _ {g} ^ {\mathrm {H V E}} (\boldsymbol {x}, y) = C _ {2} \cdot \left[ \frac {1}{N ^ {g}} \sum_ {j = 1} ^ {N ^ {g}} \exp \left(- \frac {(y - y _ {j} ^ {g}) ^ {2}}{2 \sigma^ {2}}\right) \right] \cdot \left[ \frac {1}{N _ {y , \kappa} ^ {g}} \sum_ {i = 1} ^ {N ^ {g}} \mathbb {1} _ {\{| y - y _ {i} ^ {g} | \leq \kappa \}} \delta (\boldsymbol {x} - \boldsymbol {x} _ {i} ^ {g}) \right], \tag {6}
+$$
+
+where $\pmb{x}_i^r$ and $\pmb{x}_i^g$ are respectively real image $i$ and fake image $i$ , $y_i^r$ and $y_i^g$ are respectively the labels of $\pmb{x}_i^r$ and $\pmb{x}_i^g$ , $\kappa$ and $\sigma$ are two positive hyper-parameters, $C_1$ and $C_2$ are two constants making these two estimates valid probability density functions, $N_{y,\kappa}^r$ is the number of the $y_i^r$ satisfying $|y - y_i^r| \leq \kappa$ , $N_{y,\kappa}^g$ is the number of the $y_i^g$ satisfying $|y - y_i^g| \leq \kappa$ , and $\mathbb{1}$ is an indicator function with support in the subscript. The terms in the first square brackets of $\hat{p}_r^{\mathrm{HVE}}$ and $\hat{p}_g^{\mathrm{HVE}}$ imply we estimate the marginal label distributions $p_r(y)$ and $p_g(y)$ by kernel density estimates (KDEs) (Silverman, 1986). The terms in the second square brackets are designed based on the assumption that a small perturbation to $y$ results in negligible changes to $p_r(\pmb{x}|y)$ and $p_g(\pmb{x}|y)$ . If this assumption holds, we can use images with labels in a small vicinity of $y$ to estimate $p_r(\pmb{x}|y)$ and $p_g(\pmb{x}|y)$ . The SVEs of $p_r(\pmb{x},y)$ and $p_g(\pmb{x},y)$ are:
+
+$$
+\hat {p} _ {r} ^ {\mathrm {S V E}} (\boldsymbol {x}, y) = C _ {3} \cdot \left[ \frac {1}{N ^ {r}} \sum_ {j = 1} ^ {N ^ {r}} \exp \left(- \frac {(y - y _ {j} ^ {r}) ^ {2}}{2 \sigma^ {2}}\right) \right] \cdot \left[ \frac {\sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) \delta \left(\boldsymbol {x} - \boldsymbol {x} _ {i} ^ {r}\right)}{\sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right)} \right], \tag {7}
+$$
+
+$$
+\hat {p} _ {g} ^ {\mathrm {S V E}} (\pmb {x}, y) = C _ {4} \cdot \left[ \frac {1}{N ^ {g}} \sum_ {j = 1} ^ {N ^ {g}} \exp \left(- \frac {(y - y _ {j} ^ {g}) ^ {2}}{2 \sigma^ {2}}\right) \right] \cdot \left[ \frac {\sum_ {i = 1} ^ {N ^ {g}} w ^ {g} (y _ {i} ^ {g} , y) \delta (\pmb {x} - \pmb {x} _ {i} ^ {g})}{\sum_ {i = 1} ^ {N ^ {g}} w ^ {g} (y _ {i} ^ {g} , y)} \right],
+$$
+
+where $C_3$ and $C_4$ are two constants making these two estimates valid probability density functions,
+
+$$
+w ^ {r} \left(y _ {i} ^ {r}, y\right) = e ^ {- \nu \left(y _ {i} ^ {r} - y\right) ^ {2}} \quad \text {a n d} \quad w ^ {g} \left(y _ {i} ^ {g}, y\right) = e ^ {- \nu \left(y _ {i} ^ {g} - y\right) ^ {2}}, \tag {8}
+$$
+
+and the hyper-parameter $\nu > 0$ . In Eq. (7), similar to the HVEs, we estimate $p_r(y)$ and $p_g(y)$ by KDEs. Instead of using samples in a hard vicinity, the SVEs use all respective samples to estimate $p_r(\boldsymbol{x}|\boldsymbol{y})$ and $p_g(\boldsymbol{x}|\boldsymbol{y})$ , but each sample is assigned a weight based on the distance of its label from $\boldsymbol{y}$ . Two diagrams in Fig. 1 visualize the process of using hard/soft vicinal samples to estimate $p(\boldsymbol{x}|\boldsymbol{y})$ , i.e., a univariate Gaussian distribution conditional on its mean $\boldsymbol{y}$ .
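The contrast between the hard vicinity of Eq. (6) and the soft weights of Eq. (8) can be sketched directly; these helper names are ours.

```python
import math

def hard_vicinal_weight(y_i, y, kappa):
    """HVE (Eq. 6): sample i counts iff its label lies in the hard vicinity
    |y - y_i| <= kappa; the weight is either 0 or 1."""
    return 1.0 if abs(y - y_i) <= kappa else 0.0

def soft_vicinal_weight(y_i, y, nu):
    """SVE (Eqs. 7-8): every sample counts, down-weighted smoothly by
    w(y_i, y) = exp(-nu * (y_i - y)^2)."""
    return math.exp(-nu * (y_i - y) ** 2)
```

For a target label y = 0.5 with κ = 0.05 and ν = 100, a sample at 0.52 gets hard weight 1 and soft weight exp(-0.04) ≈ 0.96, while a sample at 0.6 gets hard weight 0 but still a soft weight of exp(-1) ≈ 0.37.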
+
+
+(a) Hard Vicinity
+
+
+(b) Soft Vicinity
+Figure 1: HVE (Eq. (6)) and SVE (Eq. (7)) estimate $p(\pmb{x}|\pmb{y})$ (a univariate Gaussian conditional on $y$ ) using two samples in hard and soft vicinities, respectively, of $y$ . To estimate $p(\pmb{x}|\pmb{y})$ (the red Gaussian curve) only from samples drawn from $p(\pmb{x}|\pmb{y}_1)$ and $p(\pmb{x}|\pmb{y}_2)$ (the blue Gaussian curves), estimation is based on the samples (red dots) in a hard vicinity (defined by $y \pm \kappa$ ) or a soft vicinity (defined by the weight decay curve) around $y$ . The histograms in blue are samples in the hard or soft vicinity. The labels $y_1, y$ , and $y_2$ on the $x$ -axis denote the means of $x$ conditional on $y_1, y$ , and $y_2$ , respectively.
+
+By plugging Eq. (6) and (7) into Eq. (1), we derive the hard vicinal discriminator loss (HVDL) and the soft vicinal discriminator loss (SVDL) as follows:
+
+$$
+\begin{array}{l} \hat {\mathcal {L}} ^ {\mathrm {H V D L}} (D) = - \frac {C _ {5}}{N ^ {r}} \sum_ {j = 1} ^ {N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} \mathbb {E} _ {\epsilon^ {r} \sim \mathcal {N} (0, \sigma^ {2})} \left[ \frac {\mathbb {1} _ {\{| y _ {j} ^ {r} + \epsilon^ {r} - y _ {i} ^ {r} | \leq \kappa \}}}{N _ {y _ {j} ^ {r} + \epsilon^ {r}, \kappa} ^ {r}} \log \left(D \left(\boldsymbol {x} _ {i} ^ {r}, y _ {j} ^ {r} + \epsilon^ {r}\right)\right) \right] \tag {9} \\ - \frac {C _ {6}}{N ^ {g}} \sum_ {j = 1} ^ {N ^ {g}} \sum_ {i = 1} ^ {N ^ {g}} \mathbb {E} _ {\epsilon^ {g} \sim \mathcal {N} (0, \sigma^ {2})} \left[ \frac {\mathbb {1} _ {\{| y _ {j} ^ {g} + \epsilon^ {g} - y _ {i} ^ {g} | \leq \kappa \}}}{N _ {y _ {j} ^ {g} + \epsilon^ {g} , \kappa} ^ {g}} \log (1 - D (\boldsymbol {x} _ {i} ^ {g}, y _ {j} ^ {g} + \epsilon^ {g})) \right], \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \widehat {\mathcal {L}} ^ {\mathrm {S V D L}} (D) = - \frac {C _ {7}}{N ^ {r}} \sum_ {j = 1} ^ {N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} \mathbb {E} _ {\epsilon^ {r} \sim \mathcal {N} (0, \sigma^ {2})} \left[ \frac {w ^ {r} \left(y _ {i} ^ {r} , y _ {j} ^ {r} + \epsilon^ {r}\right)}{\sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y _ {j} ^ {r} + \epsilon^ {r}\right)} \log \left(D \left(\boldsymbol {x} _ {i} ^ {r}, y _ {j} ^ {r} + \epsilon^ {r}\right)\right) \right] \\ - \frac {C _ {8}}{N ^ {g}} \sum_ {j = 1} ^ {N ^ {g}} \sum_ {i = 1} ^ {N ^ {g}} \mathbb {E} _ {\epsilon^ {g} \sim \mathcal {N} (0, \sigma^ {2})} \left[ \frac {w ^ {g} \left(y _ {i} ^ {g} , y _ {j} ^ {g} + \epsilon^ {g}\right)}{\sum_ {i = 1} ^ {N ^ {g}} w ^ {g} \left(y _ {i} ^ {g} , y _ {j} ^ {g} + \epsilon^ {g}\right)} \log \left(1 - D \left(\boldsymbol {x} _ {i} ^ {g}, y _ {j} ^ {g} + \epsilon^ {g}\right)\right) \right], \tag {10} \\ \end{array}
+$$
+
+where $\epsilon^r \triangleq y - y_j^r$ , $\epsilon^g \triangleq y - y_j^g$ , and $C_5, C_6, C_7$ , and $C_8$ are some constants.
+
+Generator training: The generator of CcGAN is trained by minimizing Eq. (11),
+
+$$
+\widehat {\mathcal {L}} ^ {\epsilon} (G) = - \frac {1}{N ^ {g}} \sum_ {i = 1} ^ {N ^ {g}} \mathbb {E} _ {\epsilon^ {g} \sim \mathcal {N} (0, \sigma^ {2})} \log \left(D \left(G \left(\boldsymbol {z} _ {i}, y _ {i} ^ {g} + \epsilon^ {g}\right), y _ {i} ^ {g} + \epsilon^ {g}\right)\right). \tag {11}
+$$
+
+How do HVDL, SVDL, and Eq. (11) overcome (P1)? The solution (S1) includes:
+
+(i) Given a label $y$ as the condition, we use images in a hard/soft vicinity of $y$ to train the discriminator instead of just using images with label $y$ . It enables us to estimate $p_r(x|y)$ when there are not enough real images with label $y$ .
+(ii) From Eqs. (9) and (10), we can see that the KDEs in Eqs. (6) and (7) are adjusted by adding Gaussian noise to the labels. Moreover, in Eq. (11), we add Gaussian noise to seen labels (assuming the $y_{i}^{g}$ 's are seen) to train the generator to generate images at unseen labels. This enables estimation of $p_r(\pmb {x}|y^{\prime})$ when $y^\prime$ is not in the training set.
+
+How is (P2) solved? We propose a novel label input method. For $G$ , we add the label $y$ element-wise to the output of its first linear layer. For $D$ , an extra linear layer is trained together with $D$ to embed $y$ in a latent space. We then incorporate the embedded label into $D$ via label projection (Miyato & Koyama, 2018). Please refer to Supp. S.3 for more details.
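A minimal sketch of this label input method, under the assumption of a linear label embedding; the function names and placeholder weights are ours, and the real implementation (Supp. S.3) operates on network tensors.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def generator_label_input(hidden, y):
    """Generator side: add the scalar label y element-wise to the output of
    G's first linear layer."""
    return [h + y for h in hidden]

def projection_logit(features, y, embed_weight, psi_weight):
    """Discriminator side (label projection, Miyato & Koyama 2018):
    logit = phi(x) . embed(y) + psi(phi(x)), where phi(x) is the feature
    vector `features`, embed(y) = y * embed_weight is a linear embedding
    trained jointly with D, and psi is sketched here as a linear map."""
    embedded = [y * w for w in embed_weight]
    return dot(features, embedded) + dot(features, psi_weight)
```

Because $y$ enters as a real number rather than a one-hot index, both inputs are well defined for any of the infinitely many regression labels.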
+
+Remark 1. An algorithm is proposed in Supp. S.2 for training CcGAN in practice. Moreover, CcGAN does not require any specific network architecture, so in practice it can also use state-of-the-art architectures such as SNGAN (Miyato et al., 2018) and BigGAN (Brock et al., 2019).
+
+# 3 ERROR BOUNDS
+
+In this section, we derive the error bounds of a discriminator trained with $\widehat{\mathcal{L}}^{\mathrm{HVDL}}$ and $\widehat{\mathcal{L}}^{\mathrm{SVDL}}$ under the theoretical loss $\mathcal{L}$ . First, without loss of generality, we assume $y\in [0,1]$ . Then, we introduce some notations. Let $\mathcal{D}$ stand for the Hypothesis Space of $D$ . Let $\hat{p}_r^{\mathrm{KDE}}(y)$ and $\hat{p}_g^{\mathrm{KDE}}(y)$ stand for the KDEs of $p_r(y)$ and $p_g(y)$ respectively. Let $p_w^r (y'|y)\triangleq \frac{w^r(y',y)p^r(y')}{W^r(y)}$ , $p_w^g (y'|y)\triangleq \frac{w^g(y',y)p^g(y')}{W^g(y)}$ , $W^{r}(y)\triangleq \int w^{r}(y^{\prime},y)p_{r}(y^{\prime})dy^{\prime}$ and $W^{g}(y)\triangleq \int w^{g}(y^{\prime},y)p_{g}(y^{\prime})dy^{\prime}$ . Denote by $D^{*}$ the optimal discriminator (Goodfellow et al., 2014) which minimizes $\mathcal{L}$ but may not be in $\mathcal{D}$ . Let $\widetilde{D}\triangleq \arg \min_{D\in \mathcal{D}}\mathcal{L}(D)$ . Let $\widehat{D}^{\mathrm{HVDL}}\triangleq \arg \min_{D\in \mathcal{D}}\widehat{\mathcal{L}}^{\mathrm{HVDL}}(D)$ ; similarly, we define $\widehat{D}^{\mathrm{SVDL}}$ .
+
+Definition 1. (Hölder Class) Define the Hölder class of functions
+
+$$
+\Sigma (L) \triangleq \left\{p: \forall t _ {1}, t _ {2} \in \mathcal {Y}, \exists L > 0, s. t. | p ^ {\prime} (t _ {1}) - p ^ {\prime} (t _ {2}) | \leq L | t _ {1} - t _ {2} | \right\}. \tag {12}
+$$
+
+Please see Supp. S.5.1 for more details of these notations. Moreover, we will also work with the following assumptions: (A1) All $D$ 's in $\mathcal{D}$ are measurable, and the losses are uniformly bounded: $U \triangleq \max \{\sup_{D \in \mathcal{D}}[-\log D], \sup_{D \in \mathcal{D}}[-\log(1 - D)]\} < \infty$ ; (A2) For $\forall x \in \mathcal{X}$ and $y, y' \in \mathcal{Y}$ , $\exists g^r(x) > 0$ and $M^r > 0$ , s.t. $|p_r(x|y') - p_r(x|y)| \leq g^r(x)|y' - y|$ with $\int g^r(x) d\boldsymbol{x} = M^r$ ; (A3) For $\forall x \in \mathcal{X}$ and $y, y' \in \mathcal{Y}$ , $\exists g^g(x) > 0$ and $M^g > 0$ , s.t. $|p_g(x|y') - p_g(x|y)| \leq g^g(x)|y' - y|$ with $\int g^g(x) d\boldsymbol{x} = M^g$ ; (A4) $p_r(y) \in \Sigma(L^r)$ and $p_g(y) \in \Sigma(L^g)$ .
+
+Theorem 1. Assume that (A1)-(A4) hold; then $\forall \delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+\begin{array}{l} \mathcal {L} \left(\widehat {D} ^ {H V D L}\right) - \mathcal {L} \left(D ^ {*}\right) \\ \leq 2 U \left(\sqrt {\frac {C _ {1 , \delta} ^ {K D E} \log N ^ {r}}{N ^ {r} \sigma}} + L ^ {r} \sigma^ {2}\right) + 2 U \left(\sqrt {\frac {C _ {2 , \delta} ^ {K D E} \log N ^ {g}}{N ^ {g} \sigma}} + L ^ {g} \sigma^ {2}\right) + \kappa U (M ^ {r} + M ^ {g}) \\ + 2 U \sqrt {\frac {1}{2} \log \left(\frac {8}{\delta}\right)} \left(\mathbb {E} _ {y \sim \hat {p} _ {r} ^ {K D E} (y)} \left[ \sqrt {\frac {1}{N _ {y , \kappa} ^ {r}}} \right] + \mathbb {E} _ {y \sim \hat {p} _ {g} ^ {K D E} (y)} \left[ \sqrt {\frac {1}{N _ {y , \kappa} ^ {g}}} \right]\right) + \mathcal {L} (\widetilde {D}) - \mathcal {L} (D ^ {*}), \tag {13} \\ \end{array}
+$$
+
+for some constants $C_{1,\delta}^{KDE}$ , $C_{2,\delta}^{KDE}$ depending on $\delta$ .
+
+Theorem 2. Assume that (A1)-(A4) hold; then $\forall \delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+\begin{array}{l} \mathcal {L} \left(\widehat {D} ^ {S V D L}\right) - \mathcal {L} \left(D ^ {*}\right) \\ \leq 2 U \left(\sqrt {\frac {C _ {1 , \delta} ^ {K D E} \log N ^ {r}}{N ^ {r} \sigma}} + L ^ {r} \sigma^ {2}\right) + 2 U \left(\sqrt {\frac {C _ {2 , \delta} ^ {K D E} \log N ^ {g}}{N ^ {g} \sigma}} + L ^ {g} \sigma^ {2}\right) \\ + 2 U \sqrt {\frac {1}{2} \log \left(\frac {1 6}{\delta}\right)} \left(\frac {1}{\sqrt {N ^ {r}}} \mathbb {E} _ {y \sim \hat {p} _ {r} ^ {K D E} (y)} \left[ \frac {1}{W ^ {r} (y)} \right] + \frac {1}{\sqrt {N ^ {g}}} \mathbb {E} _ {y \sim \hat {p} _ {g} ^ {K D E} (y)} \left[ \frac {1}{W ^ {g} (y)} \right]\right) \tag {14} \\ + U \left(M ^ {r} \mathbb {E} _ {y \sim \hat {p} _ {r} ^ {K D E} (y)} \left[ \mathbb {E} _ {y ^ {\prime} \sim \hat {p} _ {w} ^ {r} (y ^ {\prime} | y)} | y ^ {\prime} - y | \right] + M ^ {g} \mathbb {E} _ {y \sim \hat {p} _ {g} ^ {K D E} (y)} \left[ \mathbb {E} _ {y ^ {\prime} \sim \hat {p} _ {w} ^ {g} (y ^ {\prime} | y)} | y ^ {\prime} - y | \right]\right) \\ + \mathcal {L} (\tilde {D}) - \mathcal {L} (D ^ {*}), \\ \end{array}
+$$
+
+for some constants $C_{1,\delta}^{KDE}$ , $C_{2,\delta}^{KDE}$ depending on $\delta$ .
+
+Remark 2. The error bounds in both theorems reflect the distance of $\widehat{D}^{\mathrm{HVDL}}$ and $\widehat{D}^{\mathrm{SVDL}}$ from $D^{*}$ . Guided by the two upper bounds, when implementing CcGAN, we should (1) avoid letting $D$ output extreme values (close to 0 or 1) so that $U$ is kept at a moderate level; and (2) avoid using a too small or too large $\kappa$ or $\nu$ to keep the third and fourth terms in Eqs. (13) and (14) moderate. Please see Supp. S.5.2.5 for a more detailed interpretation and Supp. S.5.2 for the proofs.
+
+# 4 EXPERIMENT
+
+In this section, we study the effectiveness of CcGAN on three datasets where cGAN (Mirza & Osindero, 2014) cannot generate realistic samples. For a fair comparison, cGAN and CcGAN use the same network architecture (a customized architecture for Circular 2-D Gaussians and the SNGAN (Miyato et al., 2018) architecture for RC-49 and UTKFace) except for the label input modules. For stability, image labels are normalized to $[0,1]$ in the RC-49 and UTKFace datasets during training.
+
+# 4.1 CIRCULAR 2-D GAUSSIANS
+
+We first test on the synthetic data generated from 120 2-D Gaussians with different means.
+
+Experimental setup: The means of the 120 Gaussians are evenly arranged on a unit circle centered at the origin $O$ of a 2-D space. The Gaussians share a common covariance matrix $\tilde{\sigma}^2 I_{2\times 2}$ , where $\tilde{\sigma} = 0.02$ . We generate 10 samples from each Gaussian for training. Fig. 2a shows 1,200 training samples (blue dots) from these Gaussians with their means (red dots) on a unit circle. The unit circle can be seen as a clock where we take the mean at 12 o'clock (point $A$ ) as the baseline point. Given another mean on the circle (point $B$ ), the label $y$ for samples generated from the Gaussian with mean $B$ is defined as the clockwise angle (in radians) between line segments $OA$ and $OB$ . E.g., the label for samples from the Gaussian at $A$ is 0. Both cGAN and our proposed CcGAN are trained on this training set. When implementing cGAN, angles are treated as class labels (each Gaussian is treated as a class); while when implementing CcGAN, angles are treated as real numbers. The network architectures of cGAN and CcGAN are shown in Supp. S.6.1. Both cGAN and CcGAN are trained for 6,000 iterations. We use the rule of thumb formulae in Supp. S.4 to select the hyper-parameters of HVDL and SVDL, i.e., $\sigma \approx 0.074$ , $\kappa \approx 0.017$ and $\nu = 3600$ (see Supp. S.6.2 for details).
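The construction of this training set can be reproduced with a short NumPy sketch (the seed and variable names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_means, n_per = 120, 10
sigma_tilde = 0.02

# Clockwise angle from 12 o'clock: the Gaussian with label y has mean
# (sin(y), cos(y)) on the unit circle, so y = 0 gives point A = (0, 1).
labels = 2 * np.pi * np.arange(n_means) / n_means
means = np.stack([np.sin(labels), np.cos(labels)], axis=1)

# Draw 10 samples per Gaussian with common covariance sigma_tilde^2 * I
samples = means[:, None, :] + sigma_tilde * rng.standard_normal((n_means, n_per, 2))
X_train = samples.reshape(-1, 2)            # 1,200 training samples
y_train = np.repeat(labels, n_per)          # one angle label per sample

print(X_train.shape, y_train.shape)
```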
+
+For testing, we choose 360 points evenly distributed on the unit circle as the means of 360 Gaussians. For each Gaussian, we generate 100 samples, yielding a test set with 36,000 samples. Note that at least 240 of these 360 Gaussians are not used in training; in other words, at least 240 labels in the test set do not appear in the training set. For each test angle, we generate 100 fake samples from each trained GAN, yielding 36,000 fake samples from each GAN in total. The quality of these fake samples is then evaluated. We repeat the whole experiment three times and report in Table 1 the average quality over the three repetitions.
+
+Evaluation metrics and quantitative results: In the label-conditional scenario, each fake sample $\pmb{x}$ with label $y$ is compared with the mean $(\sin(y), \cos(y))$ of the Gaussian on the unit circle with label $y$ . A fake sample is defined as "high-quality" if the Euclidean distance from $\pmb{x}$ to $(\sin(y), \cos(y))$ is smaller than $4\tilde{\sigma} = 0.08$ . A mode (i.e., a Gaussian) is said to be recovered if at least one high-quality sample is assigned to it. We also measure the quality of fake samples with label $y$ by computing the 2-Wasserstein distance $(\mathcal{W}_2)$ (Peyré et al., 2019) between $p_r(\pmb{x}|y) = \mathcal{N}([\sin(y), \cos(y)]^\top, \tilde{\sigma}^2\pmb{I})$ and $p_g(\pmb{x}|y) = \mathcal{N}(\mu_y^g, \Sigma_y^g)$ , where we assume $p_g(\pmb{x}|y)$ is Gaussian and estimate its mean and covariance by the sample mean and sample covariance of the 100 fake samples with label $y$ . In Table 1, we report the average percentage of high-quality fake samples and the average percentage of recovered modes over the three repetitions. We also report the average $\mathcal{W}_2$ over the 360 test angles. We can see that CcGAN substantially outperforms cGAN.
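These criteria can be sketched as follows. The $\mathcal{W}_2$ helper below covers only the isotropic special case (the general Gaussian case adds a covariance cross-term), and the "generator" is an idealized stand-in rather than a trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_tilde = 0.02

def target_mean(y):
    # Gaussian mean on the unit circle for clockwise angle y from 12 o'clock
    return np.array([np.sin(y), np.cos(y)])

def high_quality_fraction(fake, y, thresh=4 * sigma_tilde):
    # a fake sample is "high-quality" if it lies within 4*sigma_tilde of the mean
    d = np.linalg.norm(fake - target_mean(y), axis=1)
    return float(np.mean(d < thresh))

def w2_isotropic(mu1, s1, mu2, s2):
    # closed-form W2 between N(mu1, s1^2 I) and N(mu2, s2^2 I) in 2-D
    return float(np.sqrt(np.sum((mu1 - mu2) ** 2) + 2 * (s1 - s2) ** 2))

y = 0.5
fake = target_mean(y) + sigma_tilde * rng.standard_normal((100, 2))  # idealized generator
frac = high_quality_fraction(fake, y)
mode_recovered = frac > 0            # the mode is recovered if any sample is high-quality
print(frac, mode_recovered, w2_isotropic(target_mean(y), 0.02, target_mean(y), 0.02))
```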
+
+Visual results: We select 12 angles which do not appear in the training set. We then use cGAN and CcGAN to generate 100 samples for each unobserved angle. Fig. 2 visually confirms the observation from the numerical metrics: the fake samples from the two CcGAN methods are more realistic.
+
+# 4.2 RC-49
+
+Since most benchmark datasets in the GAN literature do not have continuous, scalar regression labels, we propose a new benchmark dataset, RC-49, created by rendering 49 3-D chair
+
+Table 1: Average quality of 36,000 fake samples from cGAN and CcGAN over three repetitions with standard deviations after the “±” symbol. “↓” (“↑”) indicates lower (higher) values are preferred.
+
+| Method | % High Quality ↑ | % Recovered Modes ↑ | 2-Wasserstein Dist. ↓ |
+| cGAN (120 classes) | 68.8 ± 4.8 | 81.8 ± 3.9 | 3.32 × 10⁻² ± 3.13 × 10⁻² |
+| CcGAN (HVDL) | 99.3 ± 0.4 | 100.0 ± 0.0 | 3.03 × 10⁻⁴ ± 5.05 × 10⁻⁵ |
+| CcGAN (SVDL) | 99.6 ± 0.1 | 100.0 ± 0.0 | 2.56 × 10⁻⁴ ± 8.95 × 10⁻⁶ |
+
+
+Figure 2: Visual results for the Circular 2-D Gaussians simulation. (a) 1,200 training samples (blue dots) from 120 Gaussians, with 10 samples per Gaussian and the 120 means shown as red dots. (b) cGAN. (c) CcGAN (HVDL). (d) CcGAN (SVDL). In (b) to (d), each GAN generates 100 fake samples at each of 12 means not appearing in the training set, where green and blue dots stand for fake and real samples respectively.
+
+models at different yaw angles. Each of the 49 chair models is rendered at 899 yaw angles ranging from 0.1 to 89.9 with step size 0.1. Therefore, RC-49 consists of 44,051 rendered $64 \times 64$ RGB images with 899 distinct angles. Please see Supp. S.7 for more details of the data generation. Some example images are shown in Fig. 3.
+
+Experimental setup: Not all images are used for cGAN and CcGAN training. A yaw angle is selected for training if its last digit is odd. Moreover, at each selected angle, only 25 images are randomly chosen for training. Thus, the training set includes 11,250 images with 450 distinct angles. The remaining images are held out for evaluation.
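This split can be verified with a few lines of Python:

```python
# All 899 RC-49 yaw angles: 0.1, 0.2, ..., 89.9 (step size 0.1). An angle is
# selected for training iff its last digit (the first decimal place) is odd.
angles = [round(0.1 * k, 1) for k in range(1, 900)]
train_angles = [a for a in angles if round(a * 10) % 2 == 1]

n_train_angles = len(train_angles)        # 450 distinct training angles
n_train_images = 25 * n_train_angles      # 25 images per selected angle
print(len(angles), n_train_angles, n_train_images)   # -> 899 450 11250
```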
+
+When training cGAN, we divide [0.1, 89.9] into 150 equal intervals where each interval is treated as a class. When training CcGAN, we use the rule of thumb formulae in Supp. S.4 to select the three hyper-parameters of HVDL and SVDL, i.e., $\sigma \approx 0.047$ , $\kappa \approx 0.004$ and $\nu = 50625$ . Both cGAN and CcGAN are trained for 30,000 iterations with batch size 256. Afterwards, we evaluate the trained GANs on all 899 angles by generating 200 fake images for each angle. Please see Supp. S.7 for the network architectures and more details about the training/testing setup.
+
+Quantitative and visual results: To evaluate (1) the visual quality, (2) the intra-label diversity, and (3) the label consistency (whether assigned labels of fake images are consistent with their true labels) of fake images, we study an overall metric and three separate metrics here. (i) Intra-FID (Miyato & Koyama, 2018) is utilized as the overall metric. It computes the Fréchet inception distance (FID) (Heusel et al., 2017) separately at each of the 899 evaluation angles and reports the average FID score. (ii) Naturalness Image Quality Evaluator (NIQE) (Mittal et al., 2012) measures the visual quality only. (iii) Diversity is the average entropy of predicted chair types of fake images over evaluation angles. (iv) Label Score is the average absolute error between assigned labels and predicted labels. Please see Supp. S.7.5 for details of these metrics.
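As an illustration, Diversity and Label Score can be sketched as below; the classifier predictions are hypothetical toy inputs, whereas in the experiments they come from pretrained classification and regression networks (Supp. S.7.5):

```python
import math

def diversity(pred_types):
    """Entropy of predicted chair types among fake images at one angle."""
    n = len(pred_types)
    counts = {}
    for t in pred_types:
        counts[t] = counts.get(t, 0) + 1
    return sum((c / n) * math.log(n / c) for c in counts.values())

def label_score(assigned, predicted):
    """Average absolute error between assigned and predicted labels."""
    return sum(abs(a - p) for a, p in zip(assigned, predicted)) / len(assigned)

# A mode-collapsed generator (one chair type at an angle) has zero entropy,
# while a uniform mix over 4 types attains the maximum entropy log(4).
print(diversity(["chair_7"] * 200))                 # -> 0.0
print(round(diversity(["a", "b", "c", "d"] * 50), 4))
print(label_score([30.0, 45.0], [32.0, 41.0]))      # -> 3.0
```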
+
+We report in Table 2 the performance of each GAN. The example fake images in Fig. 3 and the line graphs in Fig. 5 support the quantitative results. cGAN often generates unrealistic, identical images for a target angle (i.e., low visual quality and low intra-label diversity). "Binning" [0.1, 89.9] into other numbers of classes (e.g., 90 classes and 210 classes) is also tried but does not improve cGAN's performance. In contrast, the strikingly better visual quality and higher intra-label diversity of both CcGAN methods are visually evident. Note that CcGAN is designed to sacrifice some (but not too much) label consistency for better visual quality and higher diversity, which explains why CcGAN does not outperform cGAN in terms of the label score in Table 2.
+
+Table 2: Average quality of 179,800 fake RC-49 images from cGAN and CcGAN with standard deviations after the “±” symbol. “↓” (“↑”) indicates lower (higher) values are preferred.
+
+| Method | Intra-FID ↓ | NIQE ↓ | Diversity ↑ | Label Score ↓ |
+| cGAN (150 classes) | 1.720 ± 0.384 | 2.731 ± 0.162 | 0.779 ± 0.199 | 4.815 ± 5.152 |
+| CcGAN (HVDL) | 0.612 ± 0.145 | 1.869 ± 0.181 | 2.353 ± 0.121 | 5.617 ± 4.452 |
+| CcGAN (SVDL) | 0.515 ± 0.181 | 1.853 ± 0.159 | 2.610 ± 0.113 | 4.982 ± 4.439 |
+
+# 4.3 UTKFACE
+
+In this section, we compare CcGAN and cGAN on UTKFace (Zhang et al., 2017), a dataset consisting of RGB images of human faces which are labeled by age.
+
+Experimental setup: In this experiment, we only use images with age in [1, 60]. Some images with bad visual quality or watermarks are also discarded. After this preprocessing, 14,760 images remain, and the number of images per age ranges from 50 to 1,051. We resize all selected images to $64 \times 64$ . Some example UTKFace images are shown in the first image array in Fig. 4.
+
+When implementing cGAN, each age is treated as a class. For CcGAN we use the rule of thumb formulae in Supp. S.4 to select the three hyper-parameters of HVDL and SVDL, i.e., $\sigma \approx 0.041$ , $\kappa \approx 0.017$ and $\nu = 3600$ . Both cGAN and CcGAN are trained for 40,000 iterations with batch size 512. In testing, we generate 1,000 fake images from each trained GAN for each age. Please see Supp. S.8 for more details of data preprocessing, network architectures and training/testing setup.
+
+Quantitative and visual results: As in the RC-49 experiment, we evaluate the quality of fake images by Intra-FID, NIQE, Diversity (here, the entropy of predicted races), and Label Score. We report in Table 3 the average quality of 60,000 fake images. We also show some example fake images from cGAN and CcGAN in Fig. 4 and line graphs of FID/NIQE versus age in Fig. 5. Consistent with the quantitative comparisons, we can see that CcGAN performs much better than cGAN.
+
+Table 3: Average quality of 60,000 fake UTKFace images from cGAN and CcGAN with standard deviations after the “±” symbol. “↓” (“↑”) indicates lower (higher) values are preferred.
+
+| Method | Intra-FID ↓ | NIQE ↓ | Diversity ↑ | Label Score ↓ |
+| cGAN (60 classes) | 4.516 ± 0.965 | 2.315 ± 0.306 | 0.254 ± 0.353 | 11.087 ± 8.119 |
+| CcGAN (HVDL) | 0.572 ± 0.167 | 1.739 ± 0.145 | 1.338 ± 0.178 | 9.782 ± 7.166 |
+| CcGAN (SVDL) | 0.547 ± 0.181 | 1.753 ± 0.196 | 1.326 ± 0.198 | 10.739 ± 8.340 |
+
+
+Figure 3: Three RC-49 example images for each of 10 angles: real images and example fake images from cGAN and two proposed CcGANs, respectively. CcGANs produce chair images with higher visual quality and more diversity.
+
+
+Figure 4: Three UTKFace example images for each of 10 ages: real images and example fake images from cGAN and two proposed CcGANs, respectively. CcGANs produce face images with higher visual quality and more diversity.
+
+
+Figure 5: Line graphs of FID/NIQE versus regression labels on RC-49 and UTKFace: (a) RC-49: FID vs angle; (b) RC-49: NIQE vs angle; (c) UTKFace: FID vs age; (d) UTKFace: NIQE vs age. Figs. 5(a) to 5(d) show that the two CcGANs consistently outperform cGAN across all regression labels. The graphs of the CcGANs also appear smoother than those of cGAN because of HVDL and SVDL.
+
+# 5 CONCLUSION
+
+In this paper, we propose CcGAN, the first generative model for image generation conditional on regression labels. In CcGAN, two novel empirical discriminator losses (HVDL and SVDL), a novel empirical generator loss, and a novel label input method are proposed to overcome the two problems of existing cGANs. The error bounds of a discriminator trained under HVDL and SVDL are also studied. In addition, a new benchmark dataset, RC-49, is proposed for the continuous scenario. Finally, we demonstrate the superiority of the proposed CcGAN over cGAN on the Circular 2-D Gaussians, RC-49, and UTKFace datasets.
+
+# ACKNOWLEDGMENTS
+
+This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) under Grants CRDPJ 476594-14, RGPIN-2019-05019, and RGPAS2017-507965.
+
+# REFERENCES
+
+Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. Label-embedding for image classification. IEEE transactions on pattern analysis and machine intelligence, 38(7):1425-1438, 2015.
+Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 214-223, International Convention Centre, Sydney, Australia, 2017. PMLR.
+Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019.
+Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
+Olivier Chapelle, Jason Weston, Léon Bottou, and Vladimir Vapnik. Vicinal risk minimization. In Advances in neural information processing systems, pp. 416-422, 2001.
+Harm De Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C Courville. Modulating early visual processing by language. In Advances in Neural Information Processing Systems, pp. 6594-6604, 2017.
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014.
+Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626-6637, 2017.
+
+Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of styleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8110-8119, 2020.
+Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
+Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
+Anish Mittal, Rajiv Soundararajan, and Alan C Bovik. Making a "completely blind" image quality analyzer. IEEE Signal processing letters, 20(3):209-212, 2012.
+Takeru Miyato and Masanori Koyama. cGANs with projection discriminator. In International Conference on Learning Representations, 2018.
+Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.
+Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2642-2651, 2017.
+Greg Olmschenk. Semi-supervised Regression with Generative Adversarial Networks Using Minimal Labeled Data. PhD thesis, 2019.
+Greg Olmschenk, Jin Chen, Hao Tang, and Zhigang Zhu. Dense crowd counting convolutional neural networks with minimal data using semi-supervised dual-goal generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition: Learning with Imperfect Data Workshop, 2019.
+Gabriel Peyré, Marco Cuturi, et al. Computational optimal transport. Foundations and Trends® in Machine Learning, 11(5-6):355-607, 2019.
+Mehdi Rezagholiradeh and Md Akmal Haidar. Reg-GAN: Semi-supervised learning based on generative adversarial networks for regression. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2806-2810. IEEE, 2018.
+Mehdi Rezagholizadeh, Md Akmal Haidar, and Dalei Wu. Semi-supervised regression with generative adversarial networks, November 22 2018. US Patent App. 15/789,518.
+Bernard W Silverman. Density estimation for statistics and data analysis, volume 26. CRC press, 1986.
+Vladimir Vapnik. The nature of statistical learning theory. Springer, 2nd edition, 2000.
+Larry Wasserman. Density estimation @ONLINE. http://www.stat.cmu.edu/~larry/=sml/densityestimation.pdf.
+Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pp. 7354–7363, 2019.
+Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5810-5818, 2017.
+Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, and Song Han. Differentiable augmentation for data-efficient gan training. In Advances in neural information processing systems, 2020.
+
+# SUPPLEMENTARY MATERIAL
+
+# S.1 GITHUB REPOSITORY
+
+Please find the code for this paper on GitHub:
+
+https://github.com/UBCDingXin/improved_CcGAN
+
+# S.2 ALGORITHMS FOR CCGAN TRAINING
+
+Algorithm 1: An algorithm for CcGAN training with the proposed HVDL.
+Data: $N^r$ real image-label pairs $\Omega^r = \{(x_1^r, y_1^r), \ldots, (x_{N^r}^r, y_{N^r}^r)\}$ , $N_{\mathrm{uy}}^r$ ordered distinct labels $\Upsilon = \{y_{[1]}^r, \ldots, y_{[N_{\mathrm{uy}}^r]}^r\}$ in the dataset, preset $\sigma$ and $\kappa$ , number of iterations $K$ , the discriminator batch size $m^d$ , and the generator batch size $m^g$ .
+Result: Trained generator $G$ .
+for $k = 1$ to $K$ do
+Train D;
+Draw $m^d$ labels $Y^d$ with replacement from $\Upsilon$ ;
+Create a set of target labels $Y^{d,\epsilon} = \{y_i + \epsilon \mid y_i \in Y^d, \epsilon \sim \mathcal{N}(0, \sigma^2), i = 1, \ldots, m^d\}$ ( $D$ training is conditional on these labels);
+Initialize $\Omega_d^r = \emptyset, \Omega_d^f = \emptyset$ ;
+for $i = 1$ to $m^d$ do
+Randomly choose an image-label pair $(\boldsymbol{x}, y) \in \Omega^r$ satisfying $|y - y_i - \epsilon| \leq \kappa$ where $y_i + \epsilon \in Y^{d,\epsilon}$ and let $\Omega_d^r = \Omega_d^r \cup (\boldsymbol{x}, y_i + \epsilon)$ .
+Randomly draw a label $y'$ from $U(y_i + \epsilon - \kappa, y_i + \epsilon + \kappa)$ and generate a fake image $\boldsymbol{x}'$ by evaluating $G(\boldsymbol{z}, y')$ , where $\boldsymbol{z} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I})$ . Let $\Omega_d^f = \Omega_d^f \cup (\boldsymbol{x}', y_i + \epsilon)$ .
+end
+Update $D$ with samples in set $\Omega_d^r$ and $\Omega_d^f$ via gradient-based optimizers based on Eq.(6);
+Train G;
+Draw $m^g$ labels $Y^g$ with replacement from $\Upsilon$ ;
+Create another set of target labels $Y^{g,\epsilon} = \{y_i + \epsilon \mid y_i \in Y^g, \epsilon \sim \mathcal{N}(0, \sigma^2), i = 1, \ldots, m^g\}$ ( $G$ training is conditional on these labels);
+Generate $m^g$ fake images conditional on $Y^{g,\epsilon}$ and put these image-label pairs in $\Omega_g^f$ ;
+Update $G$ with samples in $\Omega_g^f$ via gradient-based optimizers based on Eq.(11);
+end
+
+Remark S.3. It should be noted that, for computational efficiency, the normalizing constants $N_{y_j^r + \epsilon^r, \kappa}^r, N_{y_j^g + \epsilon^g, \kappa}^g, \sum_{i=1}^{N^r} w^r(y_i^r, y_j^r + \epsilon^r)$ , and $\sum_{i=1}^{N^g} w^g(y_i^g, y_j^g + \epsilon^g)$ in Eq. (9) and (10) are excluded from the training and only used for theoretical analysis.
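To illustrate the vicinity sampling in Algorithm 1, the following minimal Python sketch performs one real-side and one fake-side label draw under HVDL; the label grid is a toy stand-in for a real training set, and $\sigma$, $\kappa$ are the rule-of-thumb values from the simulation:

```python
import random

random.seed(0)
sigma, kappa = 0.074, 0.017         # rule-of-thumb values from the simulation

# Hypothetical normalized training labels (a regular grid stands in for data)
train_labels = [i / 119 for i in range(120)]

y_i = random.choice(train_labels)   # a label drawn for the current batch
eps = random.gauss(0.0, sigma)      # Gaussian vicinity perturbation
target = y_i + eps                  # the label D is conditioned on at this step

# Real side: choose a real pair whose label lies in the hard vicinity
candidates = [y for y in train_labels if abs(y - target) <= kappa]
y_real = random.choice(candidates) if candidates else None

# Fake side: draw a label uniformly from the vicinity and feed it to G
y_fake = random.uniform(target - kappa, target + kappa)

print(round(target, 3), y_real, round(y_fake, 3))
```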
+
+# S.3 MORE DETAILS OF THE PROPOSED LABEL INPUT METHOD IN SECTION 2
+
+We propose a novel way to input labels to the conditional generative adversarial networks. For the generator, we add a regression label element-wise to the feature map of the first linear layer. For the discriminator, labels are first projected to a latent space learned by an extra linear layer. Then, we incorporate the embedded labels into the discriminator by the label projection (Miyato & Koyama, 2018). Figs. S.3.6 and S.3.7 visualize our proposed label input method. Please refer to our codes for more details.
+
+# S.4 A RULE OF THUMB FOR HYPER-PARAMETER SELECTION
+
+In our experiments, we normalize labels to real numbers in $[0,1]$ and the hyper-parameter selection is conducted based on the normalized labels. To be more specific, the hyper-parameter $\sigma$ is computed based on a rule-of-thumb formula for the bandwidth selection of KDE (Silverman, 1986), i.e.,
+
+Algorithm 2: An algorithm for CcGAN training with the proposed SVDL.
+Data: $N^r$ real image-label pairs $\Omega^r = \{(\pmb{x}_1^r, y_1^r), \dots, (\pmb{x}_{N^r}^r, y_{N^r}^r)\}$ , $N_{\mathrm{uy}}^r$ ordered distinct labels $\Upsilon = \{y_{[1]}^r, \dots, y_{[N_{\mathrm{uy}}^r]}^r\}$ in the dataset, preset $\sigma$ and $\nu$ , number of iterations $K$ , the discriminator batch size $m^d$ , and the generator batch size $m^g$ .
+Result: Trained generator $G$ .
+
+for $k = 1$ to $K$ do
+Train D;
+Draw $m^d$ labels $Y^{d}$ with replacement from $\Upsilon$ ;
+Create a set of target labels $Y^{d,\epsilon} = \{y_i + \epsilon \mid y_i\in Y^d,\epsilon \sim \mathcal{N}(0,\sigma^2),i = 1,\dots ,m^d\}$ ( $D$ training is conditional on these labels);
+Initialize $\Omega_d^r = \emptyset ,\Omega_d^f = \emptyset$ ;
+for $i = 1$ to $m^d$ do
+Randomly choose an image-label pair $(\pmb{x},y)\in \Omega^r$ satisfying $e^{-\nu (y - y_i - \epsilon)^2} > 10^{-3}$ where $y_{i} + \epsilon \in Y^{d,\epsilon}$ and let $\Omega_d^r = \Omega_d^r\cup (\pmb {x},y_i + \epsilon)$ . This step is used to exclude real images with too small weights;
+Compute $w_{i}^{r}(y,y_{i} + \epsilon) = e^{-\nu (y_{i} + \epsilon -y)^{2}}$ ;
+Randomly draw a label $y^\prime$ from $U\left(y_{i} + \epsilon -\sqrt{-\frac{\log 10^{-3}}{\nu}},\; y_{i} + \epsilon +\sqrt{-\frac{\log 10^{-3}}{\nu}}\right)$ and generate a fake image $\pmb{x}^{\prime}$ by evaluating $G(\pmb{z},y^{\prime})$ , where $\pmb {z}\sim \mathcal{N}(\pmb {0},\pmb {I})$ . Let $\Omega_d^f = \Omega_d^f\cup (\pmb {x}',y_i + \epsilon)$ ;
+Compute $w_{i}^{g}(y^{\prime},y_{i} + \epsilon) = e^{-\nu (y_{i} + \epsilon -y^{\prime})^{2}}$ ;
+end
+Update $D$ with samples in sets $\Omega_d^r$ and $\Omega_d^f$ via gradient-based optimizers based on Eq.(7);
+Train G;
+Draw $m^g$ labels $Y^{g}$ with replacement from $\Upsilon$ ;
+Create another set of target labels $Y^{g,\epsilon} = \{y_i + \epsilon \mid y_i\in Y^g,\epsilon \sim \mathcal{N}(0,\sigma^2),i = 1,\ldots ,m^g\}$ ( $G$ training is conditional on these labels);
+Generate $m^g$ fake images conditional on $Y^{g,\epsilon}$ and put these image-label pairs in $\Omega_g^f$ ;
+Update $G$ with samples in $\Omega_g^f$ via gradient-based optimizers based on Eq.(11);
+end
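The SVDL weight and the truncation it induces (labels with weight below the $10^{-3}$ cutoff are excluded in Algorithm 2) can be sketched as:

```python
import math

nu = 3600.0   # rule-of-thumb nu = 1 / kappa^2 used in the simulation

def svdl_weight(y_prime, target):
    """Gaussian SVDL weight w(y', target) = exp(-nu * (target - y')^2)."""
    return math.exp(-nu * (target - y_prime) ** 2)

# Labels with weight below 1e-3 are excluded, so the soft vicinity is
# effectively truncated at radius sqrt(-log(1e-3) / nu) around the target.
radius = math.sqrt(-math.log(1e-3) / nu)

print(round(radius, 4))                      # effective vicinity radius
print(round(svdl_weight(0.5, 0.5), 4))       # -> 1.0
```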
+
+
+Figure S.3.6: The label input method for the generator in CcGAN.
+
+
+Figure S.3.7: The label input method for the discriminator in CcGAN.
+
+$\sigma = \left(4\hat{\sigma}_{y^r}^5 / (3N^r)\right)^{1 / 5}$ , where $\hat{\sigma}_{y^r}$ is the sample standard deviation of normalized labels in the training set. Let $\kappa_{\mathrm{base}} = \max \left(y_{[2]}^{r} - y_{[1]}^{r},y_{[3]}^{r} - y_{[2]}^{r},\ldots ,y_{[N_{\mathrm{uy}}^{r}]}^{r} - y_{[N_{\mathrm{uy}}^{r} - 1]}^{r}\right)$ , where $y_{[l]}^{r}$ is the $l$ -th smallest
+
+normalized distinct real label and $N_{\mathrm{uy}}^r$ is the number of normalized distinct labels in the training set. The $\kappa$ is set as a multiple of $\kappa_{\mathrm{base}}$ (i.e., $\kappa = m_{\kappa} \kappa_{\mathrm{base}}$ ), where the multiplier $m_{\kappa}$ stands for 50% of the minimum number of neighboring labels used for estimating $p_r(x|y)$ given a label $y$ . For example, $m_{\kappa} = 1$ implies using 2 neighboring labels (one on the left and one on the right). In our experiments, $m_{\kappa}$ is generally set to 1 or 2. In some extreme cases, where many distinct labels have too few real samples, we may consider increasing $m_{\kappa}$ . We also found that $\nu = 1 / \kappa^2$ works well in our experiments.
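The full rule of thumb can be sketched in a few lines of Python; the label grid below is a toy stand-in resembling the Circular 2-D Gaussians setting (120 distinct normalized labels, 10 samples each):

```python
import math

def rule_of_thumb(labels, m_kappa=2):
    """Sketch of the Supp. S.4 rule of thumb on normalized labels in [0, 1]."""
    n = len(labels)
    mean = sum(labels) / n
    sd = math.sqrt(sum((y - mean) ** 2 for y in labels) / (n - 1))  # sample std
    sigma = (4 * sd ** 5 / (3 * n)) ** 0.2     # Silverman's bandwidth rule

    distinct = sorted(set(labels))
    kappa_base = max(b - a for a, b in zip(distinct, distinct[1:]))
    kappa = m_kappa * kappa_base               # m_kappa ~ half the neighbour count
    nu = 1.0 / kappa ** 2
    return sigma, kappa, nu

labels = [i / 119 for i in range(120)] * 10    # toy training labels
sigma, kappa, nu = rule_of_thumb(labels, m_kappa=2)
print(round(sigma, 3), round(kappa, 4), round(nu))   # -> 0.075 0.0168 3540
```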
+
+# S.5 MORE DETAILS OF THEOREMS S.4 AND S.5
+
+# S.5.1 SOME NECESSARY DEFINITIONS AND NOTATIONS
+
+- The hypothesis space $\mathcal{D}$ is a set of functions that can be represented by $D$ (a neural network with determined architecture).
+- In the HVDL case, denote by $p_r^{y,\kappa}(\pmb{x}) \triangleq \int_{y - \kappa}^{y + \kappa} p_r(\pmb{x} | y') p_r(y') dy'$ the marginal distribution of real images with labels in $[y - \kappa, y + \kappa]$ ; define $p_g^{y,\kappa}(\pmb{x})$ similarly for fake images.
+- In the SVDL case, given $y$ and the weight functions (Eq. (8)), as the numbers of real and fake images go to infinity, the empirical densities converge to $p_r^{y,w^r}(\boldsymbol{x}) \triangleq \int p_r(\boldsymbol{x}|y') \frac{w^r(y',y)p_r(y')}{W^r(y)} dy'$ and $p_g^{y,w^g}(\boldsymbol{x}) \triangleq \int p_g(\boldsymbol{x}|y') \frac{w^g(y',y)p_g(y')}{W^g(y)} dy'$ respectively, where $W^r(y) \triangleq \int w^r(y',y)p_r(y') dy'$ and $W^g(y) \triangleq \int w^g(y',y)p_g(y') dy'$ .
+- Let $p_w^r(y'|y) \triangleq \frac{w^r(y', y)p_r(y')}{W^r(y)}$ and $p_w^g(y'|y) \triangleq \frac{w^g(y', y)p_g(y')}{W^g(y)}$ .
+- The Hölder class defined in Definition 1 is a set of functions whose first derivatives are Lipschitz continuous (equivalently, whose second derivatives, where they exist, are bounded), which controls how fast a function varies as its argument changes. (A4) means the two probability density functions $p_r(y)$ and $p_g(y)$ are assumed to lie in this Hölder class.
+- Given a $G$ , the optimal discriminator which minimizes $\mathcal{L}$ is in the form of
+
+$$
+D ^ {*} (\boldsymbol {x}, y) = \frac {p _ {r} (\boldsymbol {x} , y)}{p _ {r} (\boldsymbol {x} , y) + p _ {g} (\boldsymbol {x} , y)}. \tag {S.15}
+$$
+
+However, $D^{*}$ may not be covered by the hypothesis space $\mathcal{D}$ . $\widetilde{D}$ is the minimizer of $\mathcal{L}$ in the hypothesis space $\mathcal{D}$ ; thus, $\mathcal{L}(\widetilde{D}) - \mathcal{L}(D^{*})$ is a non-negative constant. In CcGAN, we minimize $\widehat{\mathcal{L}}^{\mathrm{HVDL}}(D)$ or $\widehat{\mathcal{L}}^{\mathrm{SVDL}}(D)$ with respect to $D \in \mathcal{D}$ , so we are more interested in the distances of $\widehat{D}^{\mathrm{HVDL}}$ and $\widehat{D}^{\mathrm{SVDL}}$ from $D^{*}$ , i.e., $\mathcal{L}(\widehat{D}^{\mathrm{HVDL}}) - \mathcal{L}(D^{*})$ and $\mathcal{L}(\widehat{D}^{\mathrm{SVDL}}) - \mathcal{L}(D^{*})$ .
+
+# S.5.2 PROOFS OF THEOREMS 1 AND 2
+
+# S.5.2.1 TECHNICAL LEMMAS
+
+Before we move to the proofs of Theorems 1 and 2, we provide several technical lemmas used in the later proof.
+
+Recall the notation and assumptions in Sections 3 and S.5.1. We then derive the following lemmas.
+
+Lemma S.1. Suppose that (A1)-(A2) and (A4) hold; then $\forall \delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \frac {1}{N _ {y , \kappa} ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} \mathbb {1} _ {\{| y - y _ {i} ^ {r} | \leq \kappa \}} [ - \log D \left(\boldsymbol {x} _ {i} ^ {r}, y\right) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ - \log D (\boldsymbol {x}, y) ] \right| \tag {S.16} \\ \leq U \sqrt {\frac {1}{2 N _ {y , \kappa} ^ {r}} \log \left(\frac {2}{\delta}\right)} + \frac {\kappa U M ^ {r}}{2}, \\ \end{array}
+$$
+
+for a given $y$ .
+
+Proof. Triangle inequality yields
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \frac {1}{N _ {y , \kappa} ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} \mathbb {1} _ {\{| y - y _ {i} ^ {r} | \leq \kappa \}} [ - \log D (\boldsymbol {x} _ {i} ^ {r}, y) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ - \log D (\boldsymbol {x}, y) ] \right| \\ \leq \sup _ {D \in \mathcal {D}} \left| \frac {1}{N _ {y , \kappa} ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} \mathbb {1} _ {\{| y - y _ {i} ^ {r} | \leq \kappa \}} [ - \log D (\boldsymbol {x} _ {i} ^ {r}, y) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, \kappa} (\boldsymbol {x})} [ - \log D (\boldsymbol {x}, y) ] \right| \\ + \sup _ {D \in \mathcal {D}} \left| \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, \kappa} (\boldsymbol {x})} [ - \log D (\boldsymbol {x}, y) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ - \log D (\boldsymbol {x}, y) ] \right| \\ \end{array}
+$$
+
+We then bound the two terms of the RHS separately as follows:
+
+1. Real images with labels in $[y - \kappa, y + \kappa]$ can be seen as independent samples from $p_r^{y,\kappa}(\boldsymbol{x})$ . Then the first term can be bounded by applying Hoeffding's inequality as follows: $\forall \delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \frac {1}{N _ {y , \kappa} ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} \mathbb {1} _ {\{| y - y _ {i} ^ {r} | \leq \kappa \}} \left[ U \frac {- \log D (\boldsymbol {x} _ {i} ^ {r} , y)}{U} \right] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, \kappa} (\boldsymbol {x})} \left[ U \frac {- \log D (\boldsymbol {x} , y)}{U} \right] \right| \\ \leq U \sqrt {\frac {1}{2 N _ {y , \kappa} ^ {r}} \log \left(\frac {2}{\delta}\right)}. \tag {S.17} \\ \end{array}
+$$
+
+2. For the second term, by the definition of $p_r^{y,\kappa}(\boldsymbol{x})$ and defining $p_{\kappa}(y') = \frac{\mathbb{1}_{\{|y' - y|\leq\kappa\}}p(y')}{\int\mathbb{1}_{\{|y' - y|\leq\kappa\}}p(y')dy'}$ , we have
+
+$$
+\sup _ {D \in \mathcal {D}} \left| \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, \kappa} (\boldsymbol {x})} [ - \log D (\boldsymbol {x}, y) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ - \log D (\boldsymbol {x}, y) ] \right|
+$$
+
+(by the definition of the total variation distance and the boundedness of $-\log D$ ) (S.18)
+
+$$
+\leq \frac {U}{2} \int \left| p _ {r} ^ {y, \kappa} (\boldsymbol {x}) - p _ {r} (\boldsymbol {x} | y) \right| d \boldsymbol {x}.
+$$
+
+Then, focusing on $|p_r^{y,\kappa}(\pmb{x}) - p_r(\pmb{x}|y)|$ ,
+
+$$
+\begin{array}{l} \left| p _ {r} ^ {y, \kappa} (\boldsymbol {x}) - p _ {r} (\boldsymbol {x} | y) \right| = \left| \int p _ {r} (\boldsymbol {x} | y ^ {\prime}) p _ {\kappa} (y ^ {\prime}) d y ^ {\prime} - p _ {r} (\boldsymbol {x} | y) \right| \\ \leq \int \left| p _ {r} (\boldsymbol {x} | y ^ {\prime}) - p _ {r} (\boldsymbol {x} | y) \right| p _ {\kappa} (y ^ {\prime}) d y ^ {\prime} \\ \leq \int g ^ {r} (\boldsymbol {x}) \left| y ^ {\prime} - y \right| p _ {\kappa} (y ^ {\prime}) d y ^ {\prime} \quad \text {(by (A2))} \\ \leq \kappa g ^ {r} (\boldsymbol {x}). \\ \end{array}
+$$
+
+Thus, Eq. (S.18) is upper bounded as follows,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, \kappa} (\boldsymbol {x})} [ - \log D (\boldsymbol {x}, y) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ - \log D (\boldsymbol {x}, y) ] \right| \\ \leq \frac {U}{2} \int \kappa g ^ {r} (\boldsymbol {x}) d \boldsymbol {x} \tag {S.19} \\ = \frac {\kappa U M ^ {r}}{2}, \quad \text {(by (A2))} \\ \end{array}
+$$
+
+By combining Eq. (S.17) and (S.19), we can get Eq. (S.16), which finishes the proof.
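The Hoeffding step used in the proof above is easy to check numerically. The sketch below (illustrative parameter values, not from the paper) draws repeated samples of $N$ i.i.d. variables bounded in $[0, U]$ and verifies that the empirical mean deviates from its expectation by more than $U\sqrt{\log(2/\delta)/(2N)}$ in at most roughly a $\delta$ fraction of trials:

```python
import math
import random

def hoeffding_radius(U, N, delta):
    # Two-sided Hoeffding bound for N i.i.d. variables in [0, U]:
    # |mean - E| <= U * sqrt(log(2/delta) / (2N)) with probability >= 1 - delta.
    return U * math.sqrt(math.log(2.0 / delta) / (2.0 * N))

def violation_rate(U=1.0, N=400, delta=0.1, trials=2000, seed=0):
    # Fraction of trials in which the empirical mean leaves the Hoeffding band.
    rng = random.Random(seed)
    true_mean = U / 2.0  # mean of Uniform[0, U]
    radius = hoeffding_radius(U, N, delta)
    bad = 0
    for _ in range(trials):
        mean = sum(rng.uniform(0.0, U) for _ in range(N)) / N
        if abs(mean - true_mean) > radius:
            bad += 1
    return bad / trials

if __name__ == "__main__":
    rate = violation_rate()
    print(f"observed violation rate {rate:.3f} (bound guarantees <= 0.1)")
```

With $U = 1$, $N = 400$, and $\delta = 0.1$, the bound is quite loose for uniform variables, so the observed violation rate is typically far below $\delta$.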
+
+Similarly, an identical proof strategy applies to the fake images $\pmb{x}^g$ and the generator distribution $p_g(\pmb{x}|y)$ , yielding the following lemma.
+
+Lemma S.2. Suppose that (A1), (A3) and (A4) hold, then $\forall \delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \frac {1}{N _ {y , \kappa} ^ {g}} \sum_ {i = 1} ^ {N ^ {g}} \mathbb {1} _ {\{| y - y _ {i} ^ {g} | \leq \kappa \}} [ - \log (1 - D (\boldsymbol {x} _ {i} ^ {g}, y)) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {g} (\boldsymbol {x} | y)} [ - \log (1 - D (\boldsymbol {x}, y)) ] \right| \tag {S.20} \\ \leq U \sqrt {\frac {1}{2 N _ {y , \kappa} ^ {g}} \log \left(\frac {2}{\delta}\right)} + \frac {\kappa U M ^ {g}}{2}, \\ \end{array}
+$$
+
+for a given $y$ .
+
+Proof. This proof is omitted because it is almost identical to the one for Lemma S.1.
+
+
+
+The following two lemmas provide the bounds for SVDL.
+
+Lemma S.3. Suppose that (A1), (A2) and (A4) hold, then $\forall \delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) [ - \log D \left(\boldsymbol {x} _ {i} ^ {r} , y\right) ]}{\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right)} - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ - \log D (\boldsymbol {x}, y) ] \right| \tag {S.21} \\ \leq \frac {U}{W ^ {r} (y)} \sqrt {\frac {1}{2 N ^ {r}} \log \left(\frac {4}{\delta}\right)} + \frac {U M ^ {r}}{2} \mathbb {E} _ {y ^ {\prime} \sim p _ {w} ^ {r} \left(y ^ {\prime} | y\right)} \left[ \left| y ^ {\prime} - y \right| \right], \\ \end{array}
+$$
+
+for a given $y$ .
+
+Proof. For brevity, denote $f(\boldsymbol{x},y) = -\log D(\boldsymbol{x},y)$ and $\mathcal{F} = \{-\log D : D \in \mathcal{D}\}$ . Then,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) \left[ - \log D \left(\boldsymbol {x} _ {i} ^ {r} , y\right) \right]}{\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right)} - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} \left[ - \log D (\boldsymbol {x}, y) \right] \right| \\ = \sup _ {f \in \mathcal {F}} \left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) f \left(\boldsymbol {x} _ {i} ^ {r} , y\right)}{\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right)} - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ f (\boldsymbol {x}, y) ] \right| \tag {S.22} \\ \leq \sup _ {f \in \mathcal {F}} \left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) f \left(\boldsymbol {x} _ {i} ^ {r} , y\right)}{\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right)} - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, w ^ {r}} (\boldsymbol {x})} [ f (\boldsymbol {x}, y) ] \right| \\ + \sup _ {f \in \mathcal {F}} \left| \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, w r} (\boldsymbol {x})} [ f (\boldsymbol {x}, y) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ f (\boldsymbol {x}, y) ] \right| \\ \end{array}
+$$
+
+where the inequality follows from the triangle inequality. We then derive bounds for the two terms in the last line.
+
+1. For the first term, we can further split it into two parts,
+
+$$
+\begin{array}{l} \left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r}, y\right) f \left(\boldsymbol {x} _ {i} ^ {r}, y\right)}{\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r}, y\right)} - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, w ^ {r}} (\boldsymbol {x})} [ f (\boldsymbol {x}, y) ] \right| \\ \leq \left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) f \left(\boldsymbol {x} _ {i} ^ {r} , y\right)}{\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right)} - \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) f \left(\boldsymbol {x} _ {i} ^ {r} , y\right)}{W ^ {r} (y)} \right| \tag {S.23} \\ + \left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) f \left(\boldsymbol {x} _ {i} ^ {r} , y\right)}{W ^ {r} (y)} - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, w ^ {r}} (\boldsymbol {x})} [ f (\boldsymbol {x}, y) ] \right| \\ \end{array}
+$$
+
+Focusing on the first part of the RHS of Eq. (S.23): by (A1),
+
+$$
+\begin{array}{l} \left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) f \left(\boldsymbol {x} _ {i} ^ {r} , y\right)}{\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right)} - \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) f \left(\boldsymbol {x} _ {i} ^ {r} , y\right)}{W ^ {r} (y)} \right| \\ \leq U \frac {\left| \frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) - W ^ {r} (y) \right|}{W ^ {r} (y)} \\ \end{array}
+$$
+
+Note that $\forall y, y'$ , $w^r(y', y) = e^{-\nu |y - y'|^2} \leq 1$ , and hence, given $y$ , $w^r(y_i^r, y)$ is a random variable bounded by 1. Applying Hoeffding's inequality to the numerator above yields that, with probability at least $1 - \delta'$ ,
+
+$$
+\left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) f \left(\boldsymbol {x} _ {i} ^ {r} , y\right)}{\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right)} - \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) f \left(\boldsymbol {x} _ {i} ^ {r} , y\right)}{W ^ {r} (y)} \right| \leq \frac {U}{W ^ {r} (y)} \sqrt {\frac {1}{2 N ^ {r}} \log \left(\frac {2}{\delta^ {\prime}}\right)}. \tag {S.24}
+$$
+
+Then, consider the second part of RHS of Eq.(S.23). Recall that $p_r^{y,w^r}(\pmb{x}) \triangleq \int p_r(\pmb{x}|y') \frac{w^r(y',y)p^r(y')}{W^r(y)} dy'$ . Thus,
+
+$$
+\begin{array}{l} \left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) f \left(\boldsymbol {x} _ {i} ^ {r} , y\right)}{W ^ {r} (y)} - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, w ^ {r}} (\boldsymbol {x})} [ f (\boldsymbol {x}, y) ] \right| \\ = \frac {1}{W ^ {r} (y)} \left| \frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r}, y\right) f \left(\boldsymbol {x} _ {i} ^ {r}, y\right) - \mathbb {E} _ {\left(\boldsymbol {x}, y ^ {\prime}\right) \sim p _ {r} \left(\boldsymbol {x}, y ^ {\prime}\right)} \left[ w ^ {r} \left(y ^ {\prime}, y\right) f \left(\boldsymbol {x}, y\right) \right] \right|, \\ \end{array}
+$$
+
+where $p_r(\boldsymbol{x}, y') = p_r(\boldsymbol{x}|y')p^r(y')$ denotes the joint distribution of a real image and its label. Again, since $w^r(y', y)f(\boldsymbol{x}, y)$ is uniformly bounded by $U$ under (A1), we can apply Hoeffding's inequality, which implies that, with probability at least $1 - \delta'$ , the above is upper bounded by
+
+$$
+\frac {U}{W ^ {r} (y)} \sqrt {\frac {1}{2 N ^ {r}} \log \left(\frac {2}{\delta^ {\prime}}\right)}. \tag {S.25}
+$$
+
+Combining Eq. (S.24) and (S.25) and by setting $\delta' = \frac{\delta}{2}$ , we have with probability at least $1 - \delta$ ,
+
+$$
+\left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) f \left(\boldsymbol {x} _ {i} ^ {r} , y\right)}{\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right)} - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, w ^ {r}} (\boldsymbol {x})} [ f (\boldsymbol {x}, y) ] \right| \leq \frac {U}{W ^ {r} (y)} \sqrt {\frac {1}{2 N ^ {r}} \log \left(\frac {4}{\delta}\right)}.
+$$
+
+Since this holds for all $f\in \mathcal{F}$ , taking the supremum over $f$ , we have
+
+$$
+\sup _ {f \in \mathcal {F}} \left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right) f \left(\boldsymbol {x} _ {i} ^ {r} , y\right)}{\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} \left(y _ {i} ^ {r} , y\right)} - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, w ^ {r}} (\boldsymbol {x})} [ f (\boldsymbol {x}, y) ] \right| \leq \frac {U}{W ^ {r} (y)} \sqrt {\frac {1}{2 N ^ {r}} \log \left(\frac {4}{\delta}\right)}. \tag {S.26}
+$$
+
+2. For the second term on the RHS of Eq. (S.22): by (A1), $|f| < U$ , so
+
+$$
+\begin{array}{l} \sup _ {f \in \mathcal {F}} \left| \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, w r} (\boldsymbol {x})} [ f (\boldsymbol {x}, y) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ f (\boldsymbol {x}, y) ] \right| \\ \leq U \left\| p _ {r} ^ {y, w ^ {r}} (\boldsymbol {x}) - p _ {r} (\boldsymbol {x} | y) \right\| _ {T V} \\ = \frac {U}{2} \int \left| p _ {r} ^ {y, w ^ {r}} (\boldsymbol {x}) - p _ {r} (\boldsymbol {x} | y) \right| d \boldsymbol {x}. \\ \end{array}
+$$
+
+Note that by the definition of $p_r^{y, w^r}(\pmb{x}) \triangleq \int p_r(\pmb{x} | y') \frac{w^r(y', y) p_r(y')}{W^r(y)} dy'$ and $p_w^r(y'|y) \triangleq \frac{w^r(y', y) p^r(y')}{W^r(y)}$ , we have
+
+$$
+\begin{array}{l} \left| p _ {r} ^ {y, w ^ {r}} (\boldsymbol {x}) - p _ {r} (\boldsymbol {x} | y) \right| = \left| \int p _ {r} (\boldsymbol {x} | y ^ {\prime}) p _ {w} ^ {r} \left(y ^ {\prime} | y\right) d y ^ {\prime} - p _ {r} (\boldsymbol {x} | y) \right| \\ \leq \int | p _ {r} (\boldsymbol {x} | y ^ {\prime}) - p _ {r} (\boldsymbol {x} | y) | p _ {w} ^ {r} (y ^ {\prime} | y) d y ^ {\prime}. \\ \end{array}
+$$
+
+By (A2) and $y \in [0,1]$ , the above is upper bounded by $g^{r}(\pmb{x})\mathbb{E}_{y^{\prime}\sim p_{w}^{r}(y^{\prime}|y)}\left[|y - y^{\prime}|\right]$ . Thus,
+
+$$
+\begin{array}{l} \sup _ {f \in \mathcal {F}} \left| \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} ^ {y, w r} (\boldsymbol {x})} [ f (\boldsymbol {x}, y) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ f (\boldsymbol {x}, y) ] \right| \\ \leq \frac {U}{2} \int g ^ {r} (\boldsymbol {x}) \mathbb {E} _ {y ^ {\prime} \sim p _ {w} ^ {r} \left(y ^ {\prime} \mid y\right)} \left[ \left| y ^ {\prime} - y \right| \right] d \boldsymbol {x} \tag {S.27} \\ = \frac {U M ^ {r}}{2} \mathbb {E} _ {y ^ {\prime} \sim p _ {w} ^ {r} (y ^ {\prime} | y)} [ | y ^ {\prime} - y | ]. \\ \end{array}
+$$
+
+Therefore, combining both Eq.(S.26) and (S.27), with probability at least $1 - \delta$
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \frac {\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} (y _ {i} ^ {r} , y) [ - \log D (\boldsymbol {x} _ {i} ^ {r} , y) ]}{\frac {1}{N ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} w ^ {r} (y _ {i} ^ {r} , y)} - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ - \log D (\boldsymbol {x}, y) ] \right| \\ \leq \frac {U}{W ^ {r} (y)} \sqrt {\frac {1}{2 N ^ {r}} \log \left(\frac {4}{\delta}\right)} + \frac {U M ^ {r}}{2} \mathbb {E} _ {y ^ {\prime} \sim p _ {w} ^ {r} (y ^ {\prime} | y)} \left[ | y ^ {\prime} - y | \right]. \\ \end{array}
+$$
+
+This finishes the proof.
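For reference, the two vicinal estimators bounded in Lemmas S.1 and S.3 can be sketched as follows. This is a minimal, self-contained illustration with synthetic labels and a constant stand-in discriminator (assumptions for the demo, not the paper's implementation): HVDL averages $-\log D$ over samples whose labels fall in the hard vicinity $[y-\kappa, y+\kappa]$ , while SVDL takes a weighted average with soft weights $w^r(y_i^r, y) = e^{-\nu(y_i^r - y)^2}$ .

```python
import math
import random

def hvdl_term(xs, ys, y, kappa, D):
    # Hard vicinal estimate of E_{x ~ p_r(x|y)}[-log D(x, y)]: average -log D
    # over samples whose labels lie in [y - kappa, y + kappa].
    vals = [-math.log(D(x, y)) for x, yi in zip(xs, ys) if abs(yi - y) <= kappa]
    return sum(vals) / len(vals) if vals else float("nan")

def svdl_term(xs, ys, y, nu, D):
    # Soft vicinal estimate: weighted average with w(y_i, y) = exp(-nu (y_i - y)^2).
    ws = [math.exp(-nu * (yi - y) ** 2) for yi in ys]
    num = sum(w * (-math.log(D(x, y))) for w, x in zip(ws, xs))
    return num / sum(ws)

if __name__ == "__main__":
    rng = random.Random(1)
    ys = [rng.random() for _ in range(1000)]
    xs = list(ys)             # toy "images": just the labels themselves
    D = lambda x, y: 0.5      # constant stand-in discriminator, so -log D = log 2
    print(hvdl_term(xs, ys, 0.5, 0.02, D))   # both estimates equal log 2 here
    print(svdl_term(xs, ys, 0.5, 3600.0, D))
```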
+
+
+
+Lemma S.4. Suppose that (A1), (A3) and (A4) hold, then $\forall \delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \frac {\frac {1}{N ^ {g}} \sum_ {i = 1} ^ {N ^ {g}} w ^ {g} \left(y _ {i} ^ {g} , y\right) \left[ - \log \left(1 - D \left(\boldsymbol {x} _ {i} ^ {g} , y\right)\right) \right]}{\frac {1}{N ^ {g}} \sum_ {i = 1} ^ {N ^ {g}} w ^ {g} \left(y _ {i} ^ {g} , y\right)} - \mathbb {E} _ {\boldsymbol {x} \sim p _ {g} (\boldsymbol {x} | y)} \left[ - \log \left(1 - D (\boldsymbol {x}, y)\right) \right] \right| \tag {S.28} \\ \leq \frac {U}{W ^ {g} (y)} \sqrt {\frac {1}{2 N ^ {g}} \log \left(\frac {4}{\delta}\right)} + \frac {U M ^ {g}}{2} \mathbb {E} _ {y ^ {\prime} \sim p _ {w} ^ {g} (y ^ {\prime} | y)} \left[ | y ^ {\prime} - y | \right], \\ \end{array}
+$$
+
+for a given $y$ .
+
+Proof. This proof is omitted because it is almost identical to the one for Lemma S.3.
+
+
+
+As introduced in Section 2, we use KDE with a Gaussian kernel for the marginal label distributions. The next theorem characterizes the difference between $p_r(y)$ and $p_g(y)$ and their KDEs based on $n$ i.i.d. samples.
+
+Theorem S.3. Let $\hat{p}_r^{KDE}(y)$ and $\hat{p}_g^{KDE}(y)$ stand for the KDEs of $p_r(y)$ and $p_g(y)$ , respectively. Under condition (A4), if the KDEs are based on $n$ i.i.d. samples from $p_r$ / $p_g$ and a bandwidth $\sigma$ , then for all $\delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+\sup _ {y} \left| \hat {p} _ {r} ^ {K D E} (y) - p _ {r} (y) \right| \leq \sqrt {\frac {C _ {1 , \delta} ^ {K D E} \log n}{n \sigma}} + L ^ {r} \sigma^ {2}, \tag {S.29}
+$$
+
+$$
+\sup _ {y} \left| \hat {p} _ {g} ^ {K D E} (y) - p _ {g} (y) \right| \leq \sqrt {\frac {C _ {2 , \delta} ^ {K D E} \log n}{n \sigma}} + L ^ {g} \sigma^ {2}, \tag {S.30}
+$$
+
+for some constants $C_{1,\delta}^{KDE}, C_{2,\delta}^{KDE}$ depending on $\delta$ .
+
+Proof. By (Wasserman, p. 12), for any $p(t) \in \Sigma(L)$ (the Hölder class; see Definition 1), with probability at least $1 - \delta$ ,
+
+$$
+\sup _ {t} \left| \hat {p} ^ {\mathrm {K D E}} (t) - p (t) \right| \leq \sqrt {\frac {C _ {\delta} ^ {\mathrm {K D E}} \log n}{n \sigma}} + c \sigma^ {2}, \tag {S.31}
+$$
+
+for some constants $C_{\delta}^{\mathrm{KDE}}$ and $c$ , where $C_{\delta}^{\mathrm{KDE}}$ depends on $\delta$ and $c = L\int K(s)|s|^2 ds$ . Since $K$ is chosen as the Gaussian kernel in this work, $c = L\int K(s)|s|^2 ds = L$ .
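The Gaussian-kernel KDE analyzed in Theorem S.3 can be illustrated with a short sketch (illustrative density, bandwidths, and grid; not the paper's settings). The sup-norm error over a grid tends to shrink as $n$ grows, consistent with the $O(\sqrt{\log n/(n\sigma)} + \sigma^2)$ rate above:

```python
import math
import random

def gaussian_kde(samples, sigma):
    # Returns p_hat(y) = (1/n) * sum_i N(y; y_i, sigma^2).
    n = len(samples)
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    def p_hat(y):
        return c * sum(math.exp(-0.5 * ((y - yi) / sigma) ** 2) for yi in samples) / n
    return p_hat

def sup_error(n, sigma, seed=0, grid=41):
    # Sup-norm KDE error over a grid, for labels drawn from a standard normal.
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) for _ in range(n)]
    p_hat = gaussian_kde(samples, sigma)
    def true_p(y):
        return math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)
    ys = [-2.0 + 4.0 * i / (grid - 1) for i in range(grid)]
    return max(abs(p_hat(y) - true_p(y)) for y in ys)

if __name__ == "__main__":
    # Error typically shrinks with larger n (and a suitably smaller sigma).
    print(sup_error(200, 0.2), sup_error(5000, 0.1))
```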
+
+# S.5.2.2 ERROR BOUNDS FOR HVDL AND SVDL
+
+Based on the lemmas and theorems in Supp. S.5.2.1, we derive the error bounds of HVDL and SVDL, which will be used in the proofs of Theorems 1 and 2.
+
+Theorem S.4. Assume that (A1)-(A4) hold, then $\forall \delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \widehat {\mathscr {L}} ^ {H V D L} (D) - \mathscr {L} (D) \right| \\ \leq U \left(\sqrt {\frac {C _ {1 , \delta} ^ {K D E} \log N ^ {r}}{N ^ {r} \sigma}} + L ^ {r} \sigma^ {2}\right) + U \left(\sqrt {\frac {C _ {2 , \delta} ^ {K D E} \log N ^ {g}}{N ^ {g} \sigma}} + L ^ {g} \sigma^ {2}\right) + \frac {\kappa U (M ^ {r} + M ^ {g})}{2} \tag {S.32} \\ + U \sqrt {\frac {1}{2} \log \left(\frac {8}{\delta}\right)} \left(\mathbb {E} _ {y \sim \hat {p} _ {r} ^ {K D E} (y)} \left[ \sqrt {\frac {1}{N _ {y , \kappa} ^ {r}}} \right] + \mathbb {E} _ {y \sim \hat {p} _ {g} ^ {K D E} (y)} \left[ \sqrt {\frac {1}{N _ {y , \kappa} ^ {g}}} \right]\right), \\ \end{array}
+$$
+
+for some constants $C_{1,\delta}^{KDE}, C_{2,\delta}^{KDE}$ depending on $\delta$ .
+
+Proof. We first decompose $\sup_{D\in \mathcal{D}}\left|\widehat{\mathcal{L}}^{\mathrm{HVDL}}(D) - \mathcal{L}(D)\right|$ as follows
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \widehat {\mathcal {L}} ^ {\mathrm {H V D L}} (D) - \mathcal {L} (D) \right| \\ \leq \sup _ {D \in \mathcal {D}} \left| \int \left[ \int [ - \log D (\boldsymbol {x}, y) ] p _ {r} (\boldsymbol {x} | y) d \boldsymbol {x} \right] (p _ {r} (y) - \hat {p} _ {r} ^ {\mathrm {K D E}} (y)) d y \right| \\ + \sup _ {D \in \mathcal {D}} \left| \int \left[ \int [ - \log (1 - D (\boldsymbol {x}, y)) ] p _ {g} (\boldsymbol {x} | y) d \boldsymbol {x} \right] (p _ {g} (y) - \hat {p} _ {g} ^ {\mathrm {K D E}} (y)) d y \right| \\ + \sup _ {D \in \mathcal {D}} \left| \int \left[ \frac {1}{N _ {y , \kappa} ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} \mathbb {1} _ {\{| y - y _ {i} ^ {r} | \leq \kappa \}} [ - \log D (\boldsymbol {x} _ {i} ^ {r}, y) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ - \log D (\boldsymbol {x}, y) ] \right] \hat {p} _ {r} ^ {\mathrm {K D E}} (y) d y \right| \\ + \sup _ {D \in \mathcal {D}} \left| \int \left[ \frac {1}{N _ {y , \kappa} ^ {g}} \sum_ {i = 1} ^ {N ^ {g}} \mathbb {1} _ {\{| y - y _ {i} ^ {g} | \leq \kappa \}} [ - \log (1 - D (\boldsymbol {x} _ {i} ^ {g}, y)) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {g} (\boldsymbol {x} | y)} [ - \log (1 - D (\boldsymbol {x}, y)) ] \right] \hat {p} _ {g} ^ {\mathrm {K D E}} (y) d y \right|. \\ \end{array}
+$$
+
+These four terms in the RHS can be bounded separately as follows
+
+1. The first term can be bounded using Theorem S.3, the boundedness of $D$ , and $y \in [0, 1]$ : $\forall \delta_1 \in (0, 1)$ , with probability at least $1 - \delta_1$ ,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \int \left[ \int [ - \log D (\boldsymbol {x}, y) ] p _ {r} (\boldsymbol {x} | y) d \boldsymbol {x} \right] (p _ {r} (y) - \hat {p} _ {r} ^ {\mathrm {K D E}} (y)) d y \right| \\ \leq U \left(\sqrt {\frac {C _ {1 , \delta_ {1}} ^ {\mathrm {K D E}} \log N ^ {r}}{N ^ {r} \sigma}} + L ^ {r} \sigma^ {2}\right), \tag {S.33} \\ \end{array}
+$$
+
+for some constants $C_{1,\delta_1}^{\mathrm{KDE}}$ depending on $\delta_1$ .
+
+2. Similarly, the second term can be bounded using Theorem S.3, the boundedness of $D$ , and $y \in [0,1]$ : $\forall \delta_2 \in (0,1)$ , with probability at least $1 - \delta_{2}$ ,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \int \left[ \int [ - \log (1 - D (\boldsymbol {x}, y)) ] p _ {g} (\boldsymbol {x} | y) d \boldsymbol {x} \right] (p _ {g} (y) - \hat {p} _ {g} ^ {\mathrm {K D E}} (y)) d y \right| \\ \leq U \left(\sqrt {\frac {C _ {2 , \delta_ {2}} ^ {\mathrm {K D E}} \log N ^ {g}}{N ^ {g} \sigma}} + L ^ {g} \sigma^ {2}\right), \tag {S.34} \\ \end{array}
+$$
+
+for some constants $C_{2,\delta_2}^{\mathrm{KDE}}$ depending on $\delta_2$ .
+
+3. The third term can be bounded using Lemma S.1: $\forall \delta_3\in (0,1)$ , with probability at least $1 - \delta_{3}$ ,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \int \left[ \frac {1}{N _ {y , \kappa} ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} \mathbb {1} _ {\{| y - y _ {i} ^ {r} | \leq \kappa \}} [ - \log D (\boldsymbol {x} _ {i} ^ {r}, y) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ - \log D (\boldsymbol {x}, y) ] \right] \hat {p} _ {r} ^ {\mathrm {K D E}} (y) d y \right| \\ \leq \int \sup _ {D \in \mathcal {D}} \left| \frac {1}{N _ {y , \kappa} ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} \mathbb {1} _ {\{| y - y _ {i} ^ {r} | \leq \kappa \}} [ - \log D (\boldsymbol {x} _ {i} ^ {r}, y) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {r} (\boldsymbol {x} | y)} [ - \log D (\boldsymbol {x}, y) ] \right| \hat {p} _ {r} ^ {\mathrm {K D E}} (y) d y \\ \leq \int \left[ U \sqrt {\frac {1}{2 N _ {y , \kappa} ^ {r}} \log \left(\frac {2}{\delta_ {3}}\right)} + \frac {\kappa U M ^ {r}}{2} \right] \hat {p} _ {r} ^ {\mathrm {K D E}} (y) d y. \\ \end{array}
+$$
+
+Note that $N_{y,\kappa}^{r} = \sum_{i=1}^{N^{r}} \mathbb{1}_{\{|y - y_{i}^{r}| \leq \kappa\}}$ , which is a random variable depending on the $y_{i}^{r}$ 's. The above can be expressed as
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \int \left[ \frac {1}{N _ {y , \kappa} ^ {r}} \sum_ {i = 1} ^ {N ^ {r}} \mathbb {1} _ {\{| y - y _ {i} ^ {r} | \leq \kappa \}} \left[ - \log D (\pmb {x} _ {i} ^ {r}, y) \right] - \mathbb {E} _ {\pmb {x} \sim p _ {r} (\pmb {x} | y)} \left[ - \log D (\pmb {x}, y) \right] \right] \hat {p} _ {r} ^ {\mathrm {K D E}} (y) d y \right| \\ \leq \frac {\kappa U M ^ {r}}{2} + U \sqrt {\frac {1}{2} \log \left(\frac {2}{\delta_ {3}}\right)} \mathbb {E} _ {y \sim \hat {p} _ {r} ^ {\mathrm {K D E}} (y)} \left[ \sqrt {\frac {1}{N _ {y , \kappa} ^ {r}}} \right]. \tag {S.35} \\ \end{array}
+$$
+
+4. Similarly, for the fourth term, by Lemma S.2, $\forall \delta_4\in (0,1)$ , with probability at least $1 - \delta_{4}$ ,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \int \left[ \frac {1}{N _ {y , \kappa} ^ {g}} \sum_ {i = 1} ^ {N ^ {g}} \mathbb {1} _ {\{| y - y _ {i} ^ {g} | \leq \kappa \}} [ - \log (1 - D (\boldsymbol {x} _ {i} ^ {g}, y)) ] - \mathbb {E} _ {\boldsymbol {x} \sim p _ {g} (\boldsymbol {x} | y)} [ - \log (1 - D (\boldsymbol {x}, y)) ] \right] \hat {p} _ {g} ^ {\mathrm {K D E}} (y) d y \right| \tag {S.36} \\ \leq \frac {\kappa U M ^ {g}}{2} + U \sqrt {\frac {1}{2} \log \left(\frac {2}{\delta_ {4}}\right)} \mathbb {E} _ {y \sim \hat {p} _ {g} ^ {\mathrm {K D E}} (y)} \left[ \sqrt {\frac {1}{N _ {y , \kappa} ^ {g}}} \right]. \\ \end{array}
+$$
+
+With $\delta_1 = \delta_2 = \delta_3 = \delta_4 = \frac{\delta}{4}$ , combining Eq. (S.33) - (S.36) leads to the upper bound in Theorem S.4.
+
+Theorem S.5. Assume that (A1)-(A4) hold, then $\forall \delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+\begin{array}{l} \sup _ {D \in \mathcal {D}} \left| \widehat {\mathscr {L}} ^ {S V D L} (D) - \mathscr {L} (D) \right| \\ \leq U \left(\sqrt {\frac {C _ {1 , \delta} ^ {K D E} \log N ^ {r}}{N ^ {r} \sigma}} + L ^ {r} \sigma^ {2}\right) + U \left(\sqrt {\frac {C _ {2 , \delta} ^ {K D E} \log N ^ {g}}{N ^ {g} \sigma}} + L ^ {g} \sigma^ {2}\right) \tag {S.37} \\ + U \sqrt {\frac {1}{2} \log \left(\frac {1 6}{\delta}\right)} \left(\frac {1}{\sqrt {N ^ {r}}} \mathbb {E} _ {y \sim \hat {p} _ {r} ^ {K D E} (y)} \left[ \frac {1}{W ^ {r} (y)} \right] + \frac {1}{\sqrt {N ^ {g}}} \mathbb {E} _ {y \sim \hat {p} _ {g} ^ {K D E} (y)} \left[ \frac {1}{W ^ {g} (y)} \right]\right) \\ + \frac {U}{2} \left(M ^ {r} \mathbb {E} _ {y \sim \hat {p} _ {r} ^ {K D E} (y)} \left[ \mathbb {E} _ {y ^ {\prime} \sim p _ {w} ^ {r} (y ^ {\prime} | y)} | y ^ {\prime} - y | \right] + M ^ {g} \mathbb {E} _ {y \sim \hat {p} _ {g} ^ {K D E} (y)} \left[ \mathbb {E} _ {y ^ {\prime} \sim p _ {w} ^ {g} (y ^ {\prime} | y)} | y ^ {\prime} - y | \right]\right) \\ \end{array}
+$$
+
+for some constant $C_{1,\delta}^{KDE}, C_{2,\delta}^{KDE}$ depending on $\delta$ .
+
+Proof. Similar to the decomposition for Theorem S.4, we can decompose $\sup_{D\in \mathcal{D}}\left|\widehat{\mathcal{L}}^{\mathrm{SVDL}}(D) - \mathcal{L}(D)\right|$ into four terms, which can be bounded by using Theorem S.3, the boundedness of $D$ , Lemma S.3, and Lemma S.4. The details are omitted because the proof is almost identical to that of Theorem S.4.
+
+# S.5.2.3 PROOF OF THEOREM 1
+
+Based on Theorem S.4, we derive Theorem 1.
+
+Proof. We first decompose $\mathcal{L}(\widehat{D}^{\mathrm{HVDL}}) - \mathcal{L}(D^{*})$ as follows
+
+$$
+\begin{array}{l} \mathcal {L} \left(\widehat {D} ^ {\mathrm {H V D L}}\right) - \mathcal {L} \left(D ^ {*}\right) \\ = \mathcal {L} (\widehat {D} ^ {\mathrm {H V D L}}) - \widehat {\mathcal {L}} ^ {\mathrm {H V D L}} (\widehat {D} ^ {\mathrm {H V D L}}) + \widehat {\mathcal {L}} ^ {\mathrm {H V D L}} (\widehat {D} ^ {\mathrm {H V D L}}) - \widehat {\mathcal {L}} ^ {\mathrm {H V D L}} (\widetilde {D}) + \widehat {\mathcal {L}} ^ {\mathrm {H V D L}} (\widetilde {D}) - \mathcal {L} (\widetilde {D}) + \mathcal {L} (\widetilde {D}) - \mathcal {L} (D ^ {*}) \\ \left(\text {by } \widehat {\mathcal {L}} ^ {\mathrm {H V D L}} (\widehat {D} ^ {\mathrm {H V D L}}) - \widehat {\mathcal {L}} ^ {\mathrm {H V D L}} (\widetilde {D}) \leq 0 \right) \\ \leq 2 \sup _ {D \in \mathcal {D}} \left| \widehat {\mathcal {L}} ^ {\mathrm {H V D L}} (D) - \mathcal {L} (D) \right| + \mathcal {L} (\widetilde {D}) - \mathcal {L} (D ^ {*}) \\ \end{array}
+$$
+
+(by Theorem S.4)
+
+$$
+\begin{array}{l} \leq 2 U \left(\sqrt {\frac {C _ {1 , \delta} ^ {\mathrm {K D E}} \log N ^ {r}}{N ^ {r} \sigma}} + L ^ {r} \sigma^ {2}\right) + 2 U \left(\sqrt {\frac {C _ {2 , \delta} ^ {\mathrm {K D E}} \log N ^ {g}}{N ^ {g} \sigma}} + L ^ {g} \sigma^ {2}\right) + \kappa U (M ^ {r} + M ^ {g}) \\ + 2 U \sqrt {\frac {1}{2} \log \left(\frac {8}{\delta}\right)} \left(\mathbb {E} _ {y \sim \hat {p} _ {r} ^ {\mathrm {K D E}} (y)} \left[ \sqrt {\frac {1}{N _ {y , \kappa} ^ {r}}} \right] + \mathbb {E} _ {y \sim \hat {p} _ {g} ^ {\mathrm {K D E}} (y)} \left[ \sqrt {\frac {1}{N _ {y , \kappa} ^ {g}}} \right]\right) + \mathcal {L} (\widetilde {D}) - \mathcal {L} (D ^ {*}). \tag {S.38} \\ \end{array}
+$$
+
+
+
+# S.5.2.4 PROOF OF THEOREM 2
+
+Based on Theorem S.5, we derive Theorem 2.
+
+Proof. The detail is omitted because it is almost identical to the one of Theorem 1 in Supp. S.5.2.3. $\square$
+
+# S.5.2.5 INTERPRETATION OF THEOREMS 1 AND 2
+
+Both theorems imply that HVDL and SVDL perform well if the output of $D$ is not too close to 0 or 1 (i.e., they favor a small $U$ ). The first two terms in both upper bounds control the quality of the KDE, which works better with larger $N^r$ and $N^g$ and a smaller $\sigma$ . The remaining terms of the two bounds differ. In the HVDL case, we favor smaller $\kappa$ , $M^r$ , and $M^g$ . However, we should avoid setting $\kappa$ too small because we prefer larger $N_{y,\kappa}^r$ and $N_{y,\kappa}^g$ . In the SVDL case, we prefer small $M^r$ and $M^g$ but large $W^r(y)$ and $W^g(y)$ . Large $W^r(y)$ and $W^g(y)$ imply that the weight function decays slowly (i.e., small $\nu$ ; similar to large $N_{y,\kappa}^r$ and $N_{y,\kappa}^g$ in Eq. (S.32)). However, we should avoid setting $\nu$ too small because a small $\nu$ leads to large $\mathbb{E}_{y' \sim p_w^r(y'|y)} |y' - y|$ and $\mathbb{E}_{y' \sim p_w^g(y'|y)} |y' - y|$ (i.e., $y'$ 's that are far away from $y$ receive large weights). In our experiments, we use rule-of-thumb formulae to select $\kappa$ and $\nu$ ; a refined hyper-parameter selection method is left as future work.
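The trade-offs above can be made concrete with a small numerical sketch (synthetic uniform labels and illustrative $\kappa$ , $\nu$ values): shrinking $\kappa$ reduces the vicinity count $N_{y,\kappa}$ , while shrinking $\nu$ enlarges the empirical $W(y)$ at the cost of a larger expected label distance $\mathbb{E}_{y'\sim p_w(y'|y)}[|y'-y|]$ .

```python
import math
import random

def vicinity_count(ys, y, kappa):
    # N_{y,kappa}: number of labels in the hard vicinity [y - kappa, y + kappa].
    return sum(1 for yi in ys if abs(yi - y) <= kappa)

def soft_weight_stats(ys, y, nu):
    # Empirical W(y) (mean soft weight) and E_{y' ~ p_w(y'|y)}[|y' - y|].
    ws = [math.exp(-nu * (yi - y) ** 2) for yi in ys]
    W = sum(ws) / len(ws)
    mean_dist = sum(w * abs(yi - y) for w, yi in zip(ws, ys)) / sum(ws)
    return W, mean_dist

if __name__ == "__main__":
    rng = random.Random(0)
    ys = [rng.random() for _ in range(2000)]
    # Smaller kappa -> fewer samples in the hard vicinity.
    print(vicinity_count(ys, 0.5, 0.02), vicinity_count(ys, 0.5, 0.005))
    # Smaller nu -> larger W(y) but larger expected label distance.
    for nu in (100.0, 3600.0):
        print(nu, soft_weight_stats(ys, 0.5, nu))
```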
+
+# S.6 MORE DETAILS OF THE SIMULATION IN SECTION 4.1
+
+# S.6.1 NETWORK ARCHITECTURES
+
+Please refer to Table S.6.1 and Table S.6.2 for the network architectures adopted for cGAN and CcGAN in our simulation experiments.
+
+# S.6.2 TRAINING SETUPS
+
+The cGAN and CcGAN are trained for 6000 iterations on the training set with the Adam optimizer (Kingma & Ba, 2015) (with $\beta_{1} = 0.5$ and $\beta_{2} = 0.999$ ), a constant learning rate of $5 \times 10^{-5}$ , and a batch size of 128. The rule-of-thumb formulae in Section S.4 are used to select the hyper-parameters for HVDL and SVDL, where we let $m_{\kappa} = 2$ . Thus, the three hyper-parameters in this experiment are set as follows: $\sigma = 0.074$ , $\kappa = 0.017$ , $\nu = 3600$ .
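The rule-of-thumb formulae of Section S.4 are not reproduced in this section; the sketch below encodes one plausible reading of them (Silverman's rule for $\sigma$ , $\kappa$ as $m_\kappa$ times the largest gap between consecutive distinct normalized labels, and $\nu = 1/\kappa^2$ ), which reproduces $\kappa \approx 0.017$ and $\nu = 3600$ for 120 equally spaced labels. The exact formulae in the code are assumptions for illustration, not quotations from the paper.

```python
import statistics

def rule_of_thumb(labels, m_kappa=2):
    # Hypothetical reading of the Section S.4 rules (assumptions, not verbatim):
    #  - sigma: Silverman's rule of thumb for the KDE bandwidth,
    #  - kappa: m_kappa times the largest gap between consecutive distinct labels,
    #  - nu:    1 / kappa^2, so the soft vicinity decays over the same scale.
    n = len(labels)
    sigma = 1.06 * statistics.stdev(labels) * n ** (-1.0 / 5.0)
    distinct = sorted(set(labels))
    max_gap = max(b - a for a, b in zip(distinct, distinct[1:]))
    kappa = m_kappa * max_gap
    nu = 1.0 / kappa ** 2
    return sigma, kappa, nu

if __name__ == "__main__":
    # 120 equally spaced normalized labels, as in the simulation.
    labels = [(i + 1) / 120 for i in range(120)]
    sigma, kappa, nu = rule_of_thumb(labels)
    print(round(kappa, 4), round(nu, 1))  # kappa = 2/120, nu = 1/kappa^2 = 3600
```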
+
+Table S.6.1: Network architectures for the generator and discriminator of cGAN in the simulation. "fc" denotes a fully-connected layer. "BN" stands for batch normalization. The label $y$ is treated as a class label and encoded by label embedding (Akata et al., 2015), so its dimension equals the number of distinct angles in the training set (i.e., $y \in \mathbb{R}^{120}$ ).
+
+| (a) Generator | (b) Discriminator |
| --- | --- |
| z ∈ R2 ∼ N(0, I); y ∈ R120 | A sample x ∈ R2 |
| concat(z, y) ∈ R122 | fc → 100; ReLU |
| fc → 100; BN; ReLU | fc → 100; ReLU |
| fc → 100; BN; ReLU | fc → 100; ReLU |
| fc → 100; BN; ReLU | fc → 100; ReLU |
| fc → 100; BN; ReLU | concat(output of previous layer, y) ∈ R220, where y ∈ R120 is the label of x. |
| fc → 100; BN; ReLU | fc → 100; ReLU |
| fc → 2 | fc → 1; Sigmoid |
+
+Table S.6.2: Network architectures for the generator and discriminator of our proposed CcGAN in the simulation. The label $y$ is treated as a real scalar, so its dimension is 1. We do not directly input $y$ into the generator and discriminator. We first convert each $y$ into the coordinates of the mean represented by this $y$ , i.e., $(\sin(y), \cos(y))$ . Then we feed these coordinates into the networks.
+
| (a) Generator | (b) Discriminator |
| --- | --- |
| z ∈ R2 ∼ N(0, I); y ∈ R | A sample x ∈ R2 with label y ∈ R |
| concat(z, sin(y), cos(y)) ∈ R4 | concat(x, sin(y), cos(y)) ∈ R4 |
| fc→ 100; BN; ReLU | fc→ 100; ReLU |
| fc→ 100; BN; ReLU | fc→ 100; ReLU |
| fc→ 100; BN; ReLU | fc→ 100; ReLU |
| fc→ 100; BN; ReLU | fc→ 100; ReLU |
| fc→ 100; BN; ReLU | fc→ 1; Sigmoid |
| fc→ 2 | |
+
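The label input scheme of Table S.6.2 (feeding $(\sin(y), \cos(y))$ instead of the raw scalar $y$ ) amounts to the following input construction; this sketch shows only the concatenation step, not the network layers:

```python
import math

def generator_input(z, y):
    # Concatenate the 2-D noise vector with the (sin y, cos y) label encoding.
    return list(z) + [math.sin(y), math.cos(y)]

def discriminator_input(x, y):
    # Concatenate the 2-D sample with the same label encoding.
    return list(x) + [math.sin(y), math.cos(y)]

if __name__ == "__main__":
    print(generator_input((0.1, -0.3), math.pi / 2))  # 4-D network input
```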
+# S.6.3 TESTING SETUPS
+
+When evaluating the trained cGAN, if a test label $y'$ is unseen in the training set, we first find its closest seen label $y$ and then generate samples from the trained cGAN at $y$ instead of $y'$ . In contrast, generating samples from CcGAN at unseen labels is well-defined.
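The nearest-seen-label rule for evaluating cGAN can be sketched in a few lines (the label grid below is hypothetical, for illustration only):

```python
def closest_seen_label(y_test, seen_labels):
    # cGAN can only condition on labels seen during training, so an unseen
    # test label is replaced by its nearest seen label before sampling.
    return min(seen_labels, key=lambda y: abs(y - y_test))

if __name__ == "__main__":
    seen = [i / 120 for i in range(1, 121)]  # hypothetical normalized label grid
    print(closest_seen_label(0.123, seen))   # → 0.125
```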
+
+# S.6.4 EXTRA EXPERIMENTS
+
+# S.6.4.1 VARYING NUMBER OF GAUSSIANS FOR TRAINING DATA GENERATION
+
+In this section, we study the influence of the number of Gaussians used for training data generation on the performance of cGAN and CcGAN. We vary the number of Gaussians from 120 down to 10 with step size 10 but keep the other settings in Section 4.1 unchanged, and plot the line graphs of the 2-Wasserstein distance (log scale) versus the number of Gaussians in Fig. S.6.1. Reducing the number of Gaussians for training implies a larger gap between any two consecutive distinct angles in the training set. As the number of Gaussians decreases, the continuous scenario gradually degenerates to the categorical scenario; therefore, the assumption that a small perturbation to $y$ results in a negligible change to $p(x|y)$ is no longer satisfied. Consequently, the 2-Wasserstein distances of the two proposed CcGAN methods gradually increase and eventually surpass that of cGAN when the number of Gaussians is small (e.g., less than 40). Note that reducing the number of Gaussians in the training data generation does not improve the performance of cGAN in the testing stage, because many angles seen in the testing stage (we evaluate each method on 360 angles) do not appear in the training set.
+
+
+Figure S.6.1: Line graphs of 2-Wasserstein distance (log scale) versus the number of Gaussians used for training data generation. As the number of Gaussians decreases, the continuous scenario gradually degenerates to the categorical scenario, so the assumption that a small perturbation to $y$ results in a negligible change to $p(x|y)$ no longer holds. Consequently, the 2-Wasserstein distances of the two CcGAN methods gradually increase and eventually surpass that of cGAN when the number of Gaussians is small (e.g., fewer than 40).
+
+# S.7 MORE DETAILS OF THE EXPERIMENT ON RC-49 IN SECTION 4.2
+
+# S.7.1 DESCRIPTION OF RC-49
+
+To generate RC-49, we first randomly select 49 3-D chair models from the "Chair" category of ShapeNet (Chang et al., 2015). We then render these models with Blender v2.79. Specifically, during rendering, we rotate each chair model about the yaw axis through angles from $0.1^{\circ}$ to $89.9^{\circ}$ with a resolution of $0.1^{\circ}$, using the scene image mode to compose our dataset. The rendered images are converted from the RGBA to the RGB color model. In total, the RC-49 dataset consists of 44051 images of size $64 \times 64$ in the PNG format.
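
The dataset size follows directly from the rendering grid: 899 distinct yaw angles ($0.1^{\circ}$ to $89.9^{\circ}$ in $0.1^{\circ}$ steps) times 49 chair models. A quick check:

```python
# Yaw angles 0.1, 0.2, ..., 89.9 at 0.1-degree resolution.
angles = [round(0.1 * k, 1) for k in range(1, 900)]
num_angles = len(angles)         # 899 distinct yaw angles
num_chairs = 49                  # chair models selected from ShapeNet
total_images = num_angles * num_chairs
print(num_angles, total_images)  # 899 44051
```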
+
+# S.7.2 NETWORK ARCHITECTURES
+
+The RC-49 dataset is more complex than the simulation, so it requires deeper networks. In both cGAN and CcGAN, we employ the SNGAN architecture (Miyato et al., 2018), which consists of residual blocks for the generator and the discriminator. For the generator in cGAN, the regression labels are input into the network via label embedding (Akata et al., 2015) and conditional batch normalization (De Vries et al., 2017). For the discriminator in cGAN, the regression labels are fed into the network via label embedding and label projection (Miyato & Koyama, 2018). For CcGAN, the regression labels are fed into the networks via our proposed label input method in Section 2. Please refer to our code for more details about the network specifications of cGAN and CcGAN.
+
+# S.7.3 TRAINING SETUPS
+
+The cGAN and CcGAN are trained for 30,000 iterations on the training set with the Adam (Kingma & Ba, 2015) optimizer (with $\beta_{1} = 0.5$ and $\beta_{2} = 0.999$), a constant learning rate of $10^{-4}$, and batch size 256. The rule-of-thumb formulae in Section S.4 are used to select the hyper-parameters for HVDL and SVDL, where we let $m_{\kappa} = 2$. Thus, the three hyper-parameters in this experiment are set as follows: $\sigma = 0.0473$, $\kappa = 0.004$, $\nu = 50625$.
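
As a rough illustration of how $\kappa$ and $\nu$ enter training (hedged: the exact HVDL/SVDL losses are defined in the main text, not here), the hard vicinity keeps training samples whose labels fall within $\kappa$ of a target label, while the soft vicinity down-weights samples by a Gaussian-type factor governed by $\nu$; the weight form below is an illustrative stand-in consistent with that description:

```python
import numpy as np

kappa, nu = 0.004, 50625.0  # values used in this experiment

def hard_vicinity_mask(train_labels, y):
    """HVDL-style indicator: is a training label within kappa of y?"""
    return np.abs(np.asarray(train_labels) - y) <= kappa

def soft_vicinity_weight(train_labels, y):
    """SVDL-style soft weight exp(-nu * (y_i - y)^2) per training label."""
    d = np.asarray(train_labels) - y
    return np.exp(-nu * d ** 2)

labels = np.array([0.100, 0.103, 0.120])
mask = hard_vicinity_mask(labels, 0.101)   # nearby labels kept
w = soft_vicinity_weight(labels, 0.101)    # weights decay with distance
```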
+
+# S.7.4 TESTING SETUPS
+
+The RC-49 dataset consists of 899 distinct yaw angles, and at each angle there are 49 images (corresponding to the 49 chair types). At the test stage, we ask the trained cGAN or CcGAN to generate 200 fake images at each of these 899 yaw angles. Please note that, among these 899 yaw angles, only 450 are seen at the training stage, so real images at the remaining 449 angles are not used in training.
+
+We evaluate the quality of the fake images from three perspectives, i.e., visual quality, intra-label diversity, and label consistency. One overall metric (Intra-FID) and three separate metrics (NIQE, Diversity, and Label Score) are used. Their details are shown in Supp. S.7.5.
+
+# S.7.5 PERFORMANCE MEASURES
+
+Before conducting the evaluation in terms of the four metrics, we first train an autoencoder (AE), a regression CNN, and a classification CNN on all real images in RC-49. The bottleneck dimension of the AE is 512, and the AE is trained to reconstruct the real images in RC-49 with the MSE loss. The regression CNN is trained to predict the yaw angle of a given image. The classification CNN is trained to predict the chair type of a given image. The autoencoder and both CNNs are trained for 200 epochs with batch size 256.
+
+- Intra-FID (Miyato & Koyama, 2018): We take Intra-FID as the overall score to evaluate the quality of fake images; smaller is better. At each evaluation angle, we compute the FID (Heusel et al., 2017) between the 49 real images and the 200 fake images in terms of the bottleneck features of the pre-trained AE. The Intra-FID score is the average FID over all 899 evaluation angles. Please note that we also tried using the classification CNN to compute Intra-FID, but the resulting scores vary over a very wide range and sometimes clearly contradict the three separate metrics.
+- NIQE (Mittal et al., 2012): NIQE evaluates the visual quality of fake images with the real images as reference; smaller is better. We train one NIQE model on the 49 real images at each of the 899 angles, so we have 899 NIQE models. During evaluation, a NIQE score is computed for each evaluation angle based on the NIQE model at that angle. Finally, we report the average and standard deviation of the 899 NIQE scores over the 899 yaw angles. Note that NIQE is implemented with the NIQE module in MATLAB.
+- Diversity: Diversity evaluates the intra-label diversity; larger is better. In RC-49, there are 49 chair types. At each evaluation angle, we ask the pre-trained classification CNN to predict the chair types of the 200 fake images, and an entropy is computed from these predicted chair types. The diversity reported in Table 2 is the average of the 899 entropies over all evaluation angles.
+- Label Score: Label Score evaluates the label consistency; smaller is better. We ask the pre-trained regression CNN to predict the yaw angles of all fake images, and the predicted angles are then compared with the assigned angles. The Label Score is defined as the average absolute distance between the predicted and assigned angles over all fake images, which is equivalent to the mean absolute error (MAE).
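
The two simplest metrics above can be sketched directly (assuming predictions from the pre-trained CNNs are already available; the function names are ours):

```python
import numpy as np

def diversity(predicted_types, num_types=49):
    """Entropy of predicted chair types among fake images at one angle."""
    counts = np.bincount(predicted_types, minlength=num_types)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty classes; 0*log(0) is taken as 0
    return float(-(p * np.log(p)).sum())

def label_score(predicted_angles, assigned_angles):
    """Mean absolute error between predicted and assigned labels."""
    return float(np.mean(np.abs(np.asarray(predicted_angles)
                                - np.asarray(assigned_angles))))

# A mode-collapsed generator (one chair type only) has zero entropy:
assert diversity([7] * 200) == 0.0
```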
+
+# S.7.6 EXTRA EXPERIMENTS
+
+# S.7.6.1 MORE LINE GRAPHS
+
+# S.7.6.2 INTERPOLATION
+
+In Fig. S.7.3, we present some interpolation results of the two CcGAN methods (i.e., HVDL and SVDL). For an input pair $(z,y)$, we fix the noise $z$ but perform label-wise interpolation, i.e., varying the label $y$ from 4.5 to 85.5. Clearly, all generated images are visually realistic, and we can see the chair distribution change smoothly over continuous angles. Please note that Fig. S.7.3 is meant to show the smooth change of the chair distribution rather than of one single chair, so the chair type may change over angles. This confirms that CcGAN captures the underlying conditional image distribution rather than simply memorizing the training data.
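
The interpolation in Fig. S.7.3 amounts to holding $z$ fixed while sweeping the conditioning label (a sketch; `generator` is a stand-in for the trained CcGAN generator):

```python
import numpy as np

def label_interpolation(generator, z, y_start=4.5, y_end=85.5, steps=10):
    """Generate images for one fixed noise z across a sweep of labels."""
    labels = np.linspace(y_start, y_end, steps)
    return [(y, generator(z, y)) for y in labels]

rng = np.random.default_rng(0)
z = rng.standard_normal(128)   # fixed noise vector
fake_gen = lambda z, y: y      # stand-in for the trained generator
results = label_interpolation(fake_gen, z)
```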
+
+
+(a) FID vs Angle
+
+
+(b) NIQE vs Angle
+
+
+(c) Diversity vs Angle
+
+
+Figure S.7.2: Line graphs of FID/NIQE/Diversity versus yaw angles on RC-49. Figs. S.7.2(a) to S.7.2(c) show that two CcGANs consistently outperform cGAN across all angles. The graphs of CcGANs also appear smoother than those of cGAN because of HVDL and SVDL.
+Figure S.7.3: Some example RC-49 fake images from the two CcGAN methods. We fix the noise $z$ but vary the label $y$ .
+
+# S.7.6.3 DEGENERATED CCGAN
+
+In this experiment, we consider the extreme case of the proposed CcGAN (the degenerated CcGAN), i.e., $\sigma \rightarrow 0$ and $\kappa \rightarrow 0$ (or $\nu \rightarrow +\infty$). We train the degenerated CcGAN with the same experimental setting as for the CcGANs. Some examples from degenerated CcGANs are shown in Fig. S.7.4. Since, at each angle, the degenerated CcGAN only uses the images at that angle, it suffers from mode collapse (e.g., the row in the yellow rectangle) and poor visual quality (e.g., the images in the red rectangle) at some angles.
+
+Note that the degenerated CcGAN still differs from cGAN, since we still treat $y$ as a continuous scalar instead of a class label and use the proposed label input method to incorporate $y$ into the generator and the discriminator.
+
+# S.7.6.4 CGAN: DIFFERENT NUMBER OF CLASSES
+
+In this experiment, we show that cGAN still fails even when we bin $[0.1, 89.9]$ into other numbers of classes. We experimented with three different bin settings, grouping labels into 90, 150, and 210 classes, respectively. Experimental results are shown in Fig. S.7.5, and we observe that all three cGANs fail.
+
+# S.7.6.5 VARYING SAMPLE SIZE FOR EACH DISTINCT ANGLE
+
+To test cGAN and CcGAN under more challenging scenarios, we vary the sample size for each distinct angle in the training set from 45 down to 5. We plot the line graphs of Intra-FID versus the sample size for each distinct training angle in Fig. S.7.6. From this figure, we can see that the two CcGAN methods substantially outperform cGAN regardless of the sample size for each distinct angle in the training set. The overall trend also shows that a smaller sample size reduces the performance of both cGAN and CcGAN.
+
+# S.7.6.6 VARYING THE STRENGTH OF THE CORRELATION BETWEEN THE IMAGE AND ITS LABEL
+
+To study how the strength of the correlation between the image $\pmb{x}$ and its label $y$ influences the performance of cGAN and CcGAN, in this study we add Gaussian noise with a preset standard deviation to the raw regression labels in the training set. The strength of the correlation is controlled by the standard deviation of the Gaussian noise, which varies from
+
+
+Figure S.7.4: Some example RC-49 fake images from a degenerated CcGAN.
+
+
+Figure S.7.5: Example RC-49 fake images from cGAN when we bin the yaw angle range into different numbers of classes.
+
+
+Figure S.7.6: Line graphs of Intra-FID versus the sample size for each distinct training angle. The grey vertical dashed line indicates the sample size used in the main study of the RC-49 experiment in Section 4.2. The two CcGAN methods substantially outperform cGAN regardless of the sample size for each distinct angle in the training set. The overall trend in this figure shows that a smaller sample size deteriorates the performance of both cGAN and CcGAN.
+
+0 (no Gaussian noise) to 25 (in degrees). A large standard deviation corresponds to a weak correlation. The training setup is consistent with the main study in Section 4.2 except that we replicate the training set five times and randomly add Gaussian noise to the raw regression labels in the replicated training set; therefore, each training sample has five noisy labels. We plot the line graphs of Intra-FID versus the standard deviation of the Gaussian noise in Fig. S.7.7. From Fig. S.7.7, we can see that the performance of the two CcGAN methods deteriorates as the standard deviation increases; in contrast, the line graph for cGAN shows no clear increasing or decreasing trend, and the Intra-FID of cGAN always stays at a high level.
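
The label-noise setup (replicate the training set five times, then perturb each copy's labels) can be sketched as follows (function name is ours):

```python
import numpy as np

def make_noisy_training_labels(labels, std, replicas=5, seed=0):
    """Replicate labels `replicas` times and add Gaussian noise with a
    preset standard deviation, so each sample gets several noisy labels."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels, dtype=float)
    tiled = np.tile(labels, replicas)
    if std == 0:
        return tiled  # std 0 corresponds to the noise-free baseline
    return tiled + rng.normal(0.0, std, size=tiled.shape)

y = [10.0, 20.0, 30.0]
noisy = make_noisy_training_labels(y, std=5.0)  # 15 noisy labels
```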
+
+
+Figure S.7.7: Line graphs of Intra-FID versus the standard deviation of the Gaussian noise. The overall trend in the figure shows that the performance of the two CcGAN methods deteriorates as the standard deviation increases.
+
+# S.8 MORE DETAILS OF THE EXPERIMENT ON THE UTKFACE DATASET IN SECTION 4.3
+
+# S.8.1 DESCRIPTION OF THE UTKFACE DATASET
+
+The UTKFace dataset is an age regression dataset (Zhang et al., 2017) with human face images collected in the wild. We use the preprocessed version (cropped and aligned), with ages spanning from 1 to 60. After data cleaning (i.e., removing images of very low quality or with clearly wrong labels), the overall number of images is 14760. Images are resized to $64 \times 64$. The histogram of the UTKFace dataset w.r.t. ages 1-60 is shown in Fig. S.8.8.
+
+From Fig. S.8.8, we can see that the UTKFace dataset is very imbalanced, so samples from the minority age groups are unlikely to be drawn at each iteration during GAN training. Consequently, cGAN and CcGAN may not be well-trained at these minority age groups. To increase the chance of drawing these minority samples during training, we randomly replicate samples in the minority age groups to ensure that the sample size at each age is more than 200.
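
The oversampling step (replicating minority-age samples so every age reaches a minimum count) can be sketched as follows (function name is ours):

```python
import random
from collections import defaultdict

def oversample_minority(samples, min_per_label=200, seed=0):
    """Randomly replicate samples of under-represented labels until each
    label has at least `min_per_label` samples.  `samples` is a list of
    (image, label) pairs; the images themselves are left untouched."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for s in samples:
        by_label[s[1]].append(s)
    out = list(samples)
    for label, group in by_label.items():
        deficit = min_per_label - len(group)
        if deficit > 0:
            out.extend(rng.choices(group, k=deficit))
    return out

# Age 25 is a minority (50 images) and gets topped up to 200.
data = [("img", 25)] * 50 + [("img", 30)] * 300
balanced = oversample_minority(data)
```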
+
+
+Figure S.8.8: The histogram of the UTKFace dataset with ages ranging from 1 to 60.
+
+# S.8.2 NETWORK ARCHITECTURES
+
+The network architectures used in this experiment are similar to those in the RC-49 experiment. Please refer to our code for more details about the network specifications.
+
+# S.8.3 TRAINING SETUPS
+
+The cGAN and CcGAN are trained for 40,000 iterations on the training set with the Adam (Kingma & Ba, 2015) optimizer (with $\beta_{1} = 0.5$ and $\beta_{2} = 0.999$), a constant learning rate of $10^{-4}$, and batch size 512. The rule-of-thumb formulae in Section S.4 are used to select the hyper-parameters for HVDL and SVDL, where we let $m_{\kappa} = 1$.
+
+# S.8.4 PERFORMANCE MEASURES
+
+Similar to the RC-49 experiment, we evaluate the quality of fake images by Intra-FID, NIQE, Diversity, and Label Score. We also train an AE (with bottleneck dimension 512), a classification CNN, and a regression CNN on all images. Please note that the UTKFace dataset consists of face images from 5 races, on which we base the classification CNN. The AE and both CNNs are trained for 200 epochs with batch size 256.
+
+# S.8.5 EXTRA EXPERIMENTS
+
+# S.8.5.1 MORE LINE GRAPHS
+
+
+(a) FID vs Age
+
+
+(b) NIQE vs Age
+
+
+(c) Diversity vs Age
+Figure S.8.9: Line graphs of FID/NIQE/Diversity versus ages on UTKFace. Figs. S.8.9(a) to S.8.9(c) show that two CcGANs consistently outperform cGAN across almost all ages. The graphs of CcGANs also appear smoother than those of cGAN because of HVDL and SVDL.
+
+# S.8.5.2 INTERPOLATION
+
+To perform label interpolation experiments, we keep the noise vector $z$ fixed and vary the label from age 3 to age 57 for CcGANs with the HVDL and SVDL losses. The interpolation results are shown in Fig. S.8.10. As the age $y$ increases, we observe that the synthetic face gradually becomes older in appearance. This observation convincingly shows that both HVDL- and SVDL-based CcGANs do not simply memorize or overfit the training set. Indeed, our CcGANs demonstrate continuous control over synthetic images with respect to age.
+
+
+Figure S.8.10: Some examples of generated UTKFace images from CcGAN when the discriminator is trained with HVDL and SVDL. We fix the noise $z$ but vary the label $y$ from 3 to 57.
+
+# S.8.5.3 DEGENERATED CCGAN
+
+We consider the extreme cases of the proposed CcGANs on the UTKFace dataset. As shown in Fig. S.8.11, the degenerated CcGANs fail to generate facial images at some ages (e.g., 51 and 57) because the sample sizes there are too small.
+
+# S.8.5.4 CGAN: DIFFERENT NUMBER OF CLASSES
+
+In the last experiment, we bin samples into different numbers of classes based on their ground-truth labels, in order to increase the number of training samples in each class. We then train cGAN on the binned classes. We experimented with two different bin settings, i.e., binning the images into 60 classes and 40 classes, respectively. The results are reported in Fig. S.8.12. They demonstrate that cGANs consistently fail to generate diverse synthetic images with labels aligned with their conditioning information. Moreover, the image quality is much worse than that of the proposed CcGANs. In conclusion, compared with existing cGANs, our proposed CcGANs perform substantially better in terms of image quality and diversity.
+
+
+Figure S.8.11: Some example UTKFace fake images from a degenerated CcGAN.
+
+
+Figure S.8.12: Example UTKFace fake images from cGAN when we bin the age range into different numbers of classes.
+
+# S.8.5.5 TRAINING ON IMAGES FOR ODD AGES ONLY AND TESTING ON EVEN-NUMBERED AGES
+
+In this section, we train cGAN and CcGAN on images for odd ages only and test them on even-numbered ages. Since the training set in this experiment is half of the one in the main study in Section 4.3, we reduce the number of iterations for GAN training from 40,000 to 20,000 while keeping the other settings unchanged. The quantitative results for cGAN and the two CcGAN methods are summarized in Table S.8.3. From Table S.8.3, we can see that the two CcGAN methods are still much better than cGAN in terms of all metrics except Label Score, since CcGAN is designed to sacrifice some (but not too much) label consistency for much better visual quality and diversity.
+
+Table S.8.3: Training cGAN and CcGAN on images for odd ages only and testing them on even-numbered ages.
+
+| Method | Intra-FID ↓ | NIQE ↓ | Diversity ↑ | Label Score ↓ |
+| cGAN (30 classes) | 4.724 ± 1.339 | 2.763 ± 0.384 | 0.299 ± 0.349 | 9.114 ± 7.398 |
+| CcGAN (HVDL) | 0.724 ± 0.161 | 1.795 ± 0.230 | 1.133 ± 0.257 | 10.341 ± 3.931 |
+| CcGAN (SVDL) | 0.777 ± 0.248 | 1.803 ± 0.214 | 1.257 ± 0.112 | 13.141 ± 5.862 |
+
+# S.8.5.6 TRAINING WITH SMALLER SAMPLE SIZES
+
+The histogram in Fig. S.8.8 shows that the UTKFace dataset is highly imbalanced. To balance the training data and also test the performance of cGAN and CcGAN under smaller sample sizes, we vary the maximum sample size for each distinct age in the training set from 200 down to 50. Note that, in the main study in Section 4.3, we do not restrict the maximum sample size. Since we have a much smaller sample size, we reduce the number of iterations for GAN training from 40,000 to 20,000 and slightly increase $m_{\kappa}$ in Supp. S.4 from 1 to 2 (i.e., we use a wider hard/soft vicinity). We plot the line graphs of Intra-FID versus the maximum sample size for each age for cGAN and CcGAN in Fig. S.8.13. From the figure, we can clearly see that a smaller sample size worsens the performance of both cGAN and CcGAN. Moreover, the Intra-FID scores of cGAN always stay at a high level and are much larger than those of the two CcGAN methods.
+
+
+Figure S.8.13: Line graphs of Intra-FID versus the maximum sample size for each distinct age in the training set.
+
+# S.8.5.7 DIFFAUGMENT CANNOT SAVE CGAN IN THE CONTINUOUS SCENARIO
+
+DiffAugment (Zhao et al., 2020) is a concurrent work that applies differentiable transformations (i.e., translation, random brightness, contrast, saturation, and cutout) to both real and fake images during GAN training. This method shows promising results in improving the performance of unconditional GANs (e.g., StyleGAN2 (Karras et al., 2020)) and class-conditional GANs (e.g., BigGAN (Brock et al., 2019)) when training samples are limited. However, DiffAugment is fundamentally different from CcGAN since it is designed for the unconditional and class-conditional scenarios rather than the continuous scenario. Even when DiffAugment is incorporated into cGAN training in the continuous scenario, the two problems (P1) and (P2) discussed in Section 1 remain unsolved. First, (P1) remains unsolved because DiffAugment does not provide a solution better than binning the regression labels into a series of disjoint intervals to handle regression labels that do not exist in the training set. Second, since DiffAugment is designed for the unconditional and class-conditional scenarios, where the number of distinct conditions is always finite and known, it does not provide a solution to (P2). Besides these two unsolved problems, another concern with DiffAugment in the continuous scenario is that the ordinal information in the regression labels is not utilized, whereas our CcGAN implicitly uses this ordinal information to construct the soft/hard vicinity.
+
+To support our arguments, we incorporate DiffAugment into cGAN training in the UTKFace experiment while keeping the other settings unchanged. When implementing DiffAugment, we use the official code from the GitHub repository of DiffAugment $^{2}$ . The strongest transformation combination (Color + Translation + Cutout) is used in the cGAN training. Quantitative results for cGAN+DiffAugment are summarized in Table S.8.4, and some example images are shown in Fig. S.8.14. The quantitative results show that DiffAugment substantially improves the visual quality and diversity of the baseline cGAN; however, the performance of cGAN+DiffAugment is still much worse than that of the two proposed CcGAN methods. The visual results support the quantitative evaluations. Therefore, cGAN+DiffAugment still does not solve the two fundamental problems in the continuous scenario, since it is not designed for this purpose.
+
+Table S.8.4: Average quality of 60,000 fake UTKFace images from cGAN and CcGAN with standard deviations after the “±” symbol. “↓” (“↑”) indicates lower (higher) values are preferred.
+
+| Method | Intra-FID ↓ | NIQE ↓ | Diversity ↑ | Label Score ↓ |
+| cGAN (60 classes) | 4.516 ± 0.965 | 2.315 ± 0.306 | 0.254 ± 0.353 | 11.087 ± 8.119 |
+| cGAN (60 classes) + DiffAugment | 1.328 ± 0.156 | 2.077 ± 0.245 | 1.102 ± 0.183 | 11.212 ± 8.329 |
+| CcGAN (HVDL) | 0.572 ± 0.167 | 1.739 ± 0.145 | 1.338 ± 0.178 | 9.782 ± 7.166 |
+| CcGAN (SVDL) | 0.547 ± 0.181 | 1.753 ± 0.196 | 1.326 ± 0.198 | 10.739 ± 8.340 |
+
+
+Figure S.8.14: Some example UTKFace fake images from cGAN+DiffAugment. Even with the help of DiffAugment, cGAN still has poor visual quality in the continuous scenario.
+
+# S.9 POTENTIAL APPLICATIONS AND IMPACTS OF CCGANS
+
+Generally, there are three label scenarios in which we can apply CcGANs: Scenario I, mathematically continuous labels (e.g., angles); Scenario II, discrete but ordinal labels (e.g., ages); and Scenario III, discrete, categorical labels with close relationships among the label categories (e.g., fine-grained bird image generation). CcGANs have potential applications in all three scenarios. For example, in Scenario I, CcGANs could impact autonomous driving, which involves predicting the steering angle (a continuous scalar) to gain better control over autonomous cars. In Scenario II, the proposed methods are potentially useful in medical applications. For instance, an important task in medical experiments is cell counting, where a regression model needs to predict the number of cells (i.e., ordinal integers) in a microscopic image. Even with limited microscopic cell images, the proposed CcGAN can generate visually realistic and diverse microscopic images for training the regression model. In this way, CcGAN may save medical researchers the tedious effort of gathering microscopic images. In Scenario III, as suggested by AnonReviewer 5 (Q3), CcGAN could be used on fine-grained image classification datasets, e.g., a bird dataset where birds of different categories may share close similarities. The generated bird images can be used to enhance fine-grained bird image classifiers, and potentially help us better recognize birds and protect the environment. More generally, CcGANs can be used for image generation on regression datasets (associated with scalar labels $y$). In summary, CcGANs cover a wide range of tasks and applications that could potentially benefit society.
\ No newline at end of file
diff --git a/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/images.zip b/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6900dab1021e759a4bbdfbd4a6f1703af3c5f791
--- /dev/null
+++ b/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b781f752068bb1770ea29d49411631a7fb9dfe18bfd9efd93a9c014ce466ba7b
+size 1965098
diff --git a/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/layout.json b/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8730736c72d35129d544d9af56f0c6afdc52059d
--- /dev/null
+++ b/ccgancontinuousconditionalgenerativeadversarialnetworksforimagegeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a12d1f96661e59b03ae4863a65bbaace766deb885f72818e842140adba774e4
+size 1187420
diff --git a/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/38753308-3c99-465f-8b5a-a516c8a9b66b_content_list.json b/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/38753308-3c99-465f-8b5a-a516c8a9b66b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..598e9c024b9f1630b09dd68462bfe9df059a3b12
--- /dev/null
+++ b/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/38753308-3c99-465f-8b5a-a516c8a9b66b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50ad52bc21e824e89d287fcb530308e61dfa387678d0b3214fe12e2c9b25addb
+size 131722
diff --git a/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/38753308-3c99-465f-8b5a-a516c8a9b66b_model.json b/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/38753308-3c99-465f-8b5a-a516c8a9b66b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2920ab3e872ebc7bc5c86cd33b00dc59f78f0b5c
--- /dev/null
+++ b/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/38753308-3c99-465f-8b5a-a516c8a9b66b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:936e143f12ed6a2cde97883e8f9fc57442a219c1c05aab8b67bd98b0bd6740f8
+size 155769
diff --git a/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/38753308-3c99-465f-8b5a-a516c8a9b66b_origin.pdf b/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/38753308-3c99-465f-8b5a-a516c8a9b66b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3f1d152b53a570afba2e68e59544e46fc5596a5f
--- /dev/null
+++ b/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/38753308-3c99-465f-8b5a-a516c8a9b66b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:058a5d01fcc21094affcb368ac04dca9c8f4a25333f7b309a09e06fd660f5f70
+size 613321
diff --git a/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/full.md b/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..576a8c5c8cab7857016b50d4b06b4b5e87a162f0
--- /dev/null
+++ b/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/full.md
@@ -0,0 +1,555 @@
+# CERTIFY OR PREDICT: BOOSTING CERTIFIED ROBUSTNESS WITH COMPOSITIONAL ARCHITECTURES
+
+Mark Niklas Müller, Mislav Balunović, Martin Vechev
+
+Department of Computer Science, ETH Zurich, Switzerland
+
+{mark.mueller, mislav.balunovic, martin.vechev}@inf.ethz.ch
+
+# ABSTRACT
+
+A core challenge with existing certified defense mechanisms is that while they improve certified robustness, they also tend to drastically decrease natural accuracy, making it difficult to use these methods in practice. In this work, we propose a new architecture which addresses this challenge and enables one to boost the certified robustness of any state-of-the-art deep network, while controlling the overall accuracy loss, without requiring retraining. The key idea is to combine this model with a (smaller) certified network where at inference time, an adaptive selection mechanism decides on the network used to process the input sample. The approach is compositional: one can combine any pair of state-of-the-art (e.g., EfficientNet or ResNet) and certified networks, without restriction. The resulting architecture enables much higher natural accuracy than previously possible with certified defenses alone, while substantially boosting the certified robustness of deep networks. We demonstrate the effectiveness of this adaptive approach on a variety of datasets and architectures. For instance, on CIFAR-10 with an $\ell_{\infty}$ perturbation of 2/255, we are the first to obtain a high natural accuracy $(90.1\%)$ with non-trivial certified robustness $(27.5\%)$ . Notably, prior state-of-the-art methods incur a substantial drop in accuracy for a similar certified robustness.
+
+# 1 INTRODUCTION
+
+Most recent defenses against adversarial examples have been broken by stronger and more adaptive attacks (Athalye et al., 2018; Tramer et al., 2020), highlighting the importance of investigating certified defenses with suitable robustness guarantees (Raghunathan et al., 2018; Wong & Kolter, 2018; Zhang et al., 2020; Balunovic & Vechev, 2020). And while there has been much progress in developing new certified defenses, a fundamental roadblock to their practical adoption is that they tend to produce networks with an unsatisfying natural accuracy.
+
+In this work we propose a novel architecture which brings certified defenses closer to practical use: the architecture enables boosting certified robustness of state-of-the-art deep neural networks without incurring significant accuracy loss and without requiring retraining. Our proposed architecture is compositional and consists of three components: (i) a core-network with high natural accuracy, (ii) a certification-network with high certifiable robustness (need not have high accuracy), and (iii) a selection mechanism that adaptively decides which one of the two networks should process the input sample. The benefit of this architecture is that we can plug in any state-of-the-art deep neural network as a core-network and any certified defense for the certification-network, thus benefiting from any future advances in standard training and certified defenses.
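
The compositional inference described above can be sketched as follows (a schematic only; the actual selection mechanisms and their certification are defined later in the paper, and all names here are ours):

```python
def ace_forward(core_net, cert_net, select, x):
    """ACE inference: route x to the certification-network when the
    selection mechanism says so, otherwise to the core-network."""
    if select(x):
        return cert_net(x)   # certifiably robust branch
    return core_net(x)       # high-accuracy branch

# Stand-ins: route "easy" inputs (short strings here) to the certified branch.
core = lambda x: "core:" + x
cert = lambda x: "cert:" + x
easy = lambda x: len(x) < 5
out1 = ace_forward(core, cert, easy, "cat")       # certified branch
out2 = ace_forward(core, cert, easy, "elephant")  # core branch
```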
+
+A key challenge with certifying the robustness of a decision made by the composed architecture is obtaining a certifiable selection mechanism. Towards that, we propose two different selection mechanisms, one based on an auxiliary selection-network and another based on entropy, and design effective ways to certify both. Experimentally, we demonstrate the promise of this architecture: we are able to train a model with much higher natural accuracy than models trained using prior certified defenses while obtaining non-trivial certified robustness. For example, on the challenging CIFAR-10 dataset with an $\ell_{\infty}$ perturbation of 2/255, we obtain $90.1\%$ natural accuracy and a certified robustness of $27.5\%$ . On the same task, prior approaches cannot obtain the same natural accuracies for any non-trivial certified robustness.
+
+Main contributions Our main contributions are:
+
+- A new architecture, called ACE (short for Architecture for Certification), which boosts certified robustness of networks with high natural accuracy (e.g., EfficientNet).
+- Methods to train our newly proposed architecture and to certify the robustness of the entire composed network, including the certification of the selection mechanism.
+- Experimental evaluation on the CIFAR-10, TinyImageNet and ImageNet200 datasets, demonstrating the promise of ACE: at the same non-trivial certified robustness levels, we can achieve significantly higher accuracies than prior work.
+- We release our code as open source: https://github.com/eth-sri/ACE
+
+# 2 RELATED WORK
+
+There has been much recent work on certified defenses, that is, training neural networks with provable robustness guarantees. These works include methods based on semidefinite relaxations (Raghunathan et al., 2018), linear relaxations and duality (Wong & Kolter, 2018; Wong et al., 2018; Xu et al., 2020), abstract interpretation (Mirman et al., 2018), and interval bound propagation (Gowal et al., 2018). The three most recent advances are COLT (Balunovic & Vechev, 2020), based on convex layer-wise adversarial training, CROWN-IBP (Zhang et al., 2020), based on a combination of linear relaxations Zhang et al. (2018) and interval propagation, and LiRPA (Xu et al., 2020) scaling to problems with many more classes by directly bounding the cross entropy loss instead of logit margins. As mentioned earlier, a key challenge with these methods is that in order to gain certified robustness, they tend to incur a drastic drop in natural accuracy.
+
+In parallel to certified defenses, there has also been interest in certifying already trained models (Katz et al., 2017; Tjeng et al., 2017; Gehr et al., 2018; Weng et al., 2018; Bunel et al., 2018; Wang et al., 2018a; Singh et al., 2019). While these methods were initially focused mostly on $\ell_{p}$ robustness, these works (as well as ours) can be naturally extended to other notions of robustness, such as geometric (Balunović et al., 2019) or semantic (Mohapatra et al., 2020) perturbations. A line of work that weakens deterministic guarantees so as to scale to larger networks is that of randomized smoothing, which offers probabilistic guarantees (Lecuyer et al., 2018; Cohen et al., 2019; Salman et al., 2019a). While interesting, this technique incurs overhead at inference time due to additional sampling, and further, generalizing smoothing to richer transformations (e.g., geometric) is non-trivial (Fischer et al., 2020). In contrast, our work handles large networks while providing deterministic guarantees and, because of its compositional nature, directly benefits from any advancements in certification and certified defenses with richer perturbations.
+
+Our proposed architecture is partially inspired by prior work on designing custom architectures for dynamic routing in neural networks (Teerapittayanon et al., 2016; Bolukbasi et al., 2017; McGill & Perona, 2017; Wang et al., 2018b). While the main goal of these architectures is to speed up inference, our observation is that similar ideas are applicable to the problem of enhancing the certifiable robustness of existing neural networks.
+
+# 3 BACKGROUND
+
+We now present the necessary background needed to define our method.
+
+Adversarial Robustness We define adversarial robustness of a model $h$ as a requirement that $h$ classifies all inputs in a $p$ -norm ball $B_{\epsilon}^{p}(\boldsymbol{x})$ of radius $\epsilon$ around the sample $\boldsymbol{x}$ to the same class:
+
+$$
+\underset{j}{\arg\max}\, h(\boldsymbol{x})_{j} = \underset{j}{\arg\max}\, h\left(\boldsymbol{x}^{\prime}\right)_{j}, \quad \forall \boldsymbol{x}^{\prime} \in B_{\epsilon}^{p}(\boldsymbol{x}) := \left\{\boldsymbol{x}^{\prime} = \boldsymbol{x} + \boldsymbol{\eta} \mid \|\boldsymbol{\eta}\|_{p} \leq \epsilon_{p}\right\} \tag{1}
+$$
+
+In this work we focus on an $\ell_{\infty}$ based threat model and use the notation $\epsilon_p$ to indicate the upper bound to the $\ell_p$ -norm of admissible perturbations. The robust accuracy of a network is derived from this definition as the probability that an unperturbed sample from the test distribution is classified correctly and Equation 1 holds. As it is usually infeasible to compute exact robustness, we define certifiably robust accuracy (also certifiable accuracy or certifiable robustness), as a provable lower
+
+
+(a) SelectionNet ACE
+
+
+(b) Entropy ACE
+Figure 1: Two variants of ACE. Dashed and solid arrows represent selection mechanism and classification networks, respectively. The colored, dashed arrows represent the selection decision obtained by thresholding either the output of the selection network or the entropy. Based on this selection, we output either the result of the certification-network $h_{\theta_b}^b$ (red) or the core-network $h_{\theta_t}^t$ (blue).
+
+bound to the robust accuracy of a network. This lower bound is obtained by attempting to prove adversarial robustness for all correctly classified samples $\pmb{x}$ using any certification method. For a fixed, arbitrary certification method, we introduce the binary function $\operatorname{cert}(\mathbb{X}, f, y)$ as the result of an attempt to certify that $f(\pmb{x}) = y$ , $\forall \pmb{x} \in \mathbb{X}$ . In practice, adversarial robustness is usually evaluated as an upper bound to the exact robust accuracy of a network and denoted as adversarial accuracy. This upper bound is usually computed using an adversarial attack such as PGD (Madry et al., 2017).
+
+Certification and Training with Convex Relaxations Here we briefly summarize robustness certification via convex relaxations. The idea is to start with an initial convex set capturing all admissible perturbations of an original sample $\pmb{x}$, denoted as $\mathbb{C}_0 \supseteq B_\epsilon^\infty(\pmb{x})$, and then propagate this convex set sequentially through all layers of the network. The key challenge is to design a transformer $T_f$ that maps a convex input set to a convex output set for every function $f$ corresponding to a network layer, while ensuring soundness: we need to guarantee that each point $\pmb{x}$ in the convex input set $\mathbb{C}_{in}$ is mapped to a point in the convex output set, i.e. $f(\pmb{x}) \in \mathbb{C}_{out} := T_f(\mathbb{C}_{in})$. Finally, we obtain a convex shape that captures all possible outputs of the network. To prove robustness, we have to show that the output of the target class is greater than that of any other class, which is often simple. Depending on the type of transformer $T_f$ used, this framework leads to various certification methods such as IBP (Gowal et al., 2018) or DeepZ (Singh et al., 2018). A more comprehensive description of these methods can be found in Salman et al. (2019b). There has recently been a plethora of work which uses these methods to train provably robust neural networks, which we reference in Section 2. In this work we use models pretrained with CROWN-IBP (Zhang et al., 2020), LiRPA (Xu et al., 2020) and COLT (Balunović & Vechev, 2020), and train selection- and certification-networks using CROWN-IBP, IBP and COLT. IBP (Mirman et al., 2018; Gowal et al., 2018) and CROWN-IBP compute convex relaxations of the loss using intervals and restricted polyhedra, respectively, and minimize this loss during training. COLT (Balunović & Vechev, 2020) proceeds layer by layer and tries to find adversarial examples inside the convex relaxations of the latent spaces of perturbed samples, using PGD adversarial attacks.
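To make the interval variant concrete, the following sketch (our own illustrative code, not the authors' implementation) propagates $\ell_\infty$ interval bounds through a small feed-forward network and instantiates the binary $\operatorname{cert}$ function from above:

```python
import torch

def ibp_bounds(layers, lb, ub):
    """Propagate the box [lb, ub] through linear and ReLU layers (IBP)."""
    for layer in layers:
        if isinstance(layer, torch.nn.Linear):
            center, radius = (lb + ub) / 2, (ub - lb) / 2
            center = center @ layer.weight.t() + layer.bias
            radius = radius @ layer.weight.abs().t()
            lb, ub = center - radius, center + radius
        elif isinstance(layer, torch.nn.ReLU):
            lb, ub = lb.clamp(min=0), ub.clamp(min=0)
    return lb, ub

def cert(x, eps, layers, y):
    """1 iff IBP proves every point in B_eps^inf(x) is classified as y."""
    lb, ub = ibp_bounds(layers, x - eps, x + eps)
    rivals = torch.cat([ub[:y], ub[y + 1:]])
    return int((lb[y] > rivals.max()).item())
```

Robustness holds when the lower bound of the target logit exceeds the upper bound of every other logit, mirroring the final check described above.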
+
+# 4 COMPOSITIONAL ARCHITECTURE FOR CERTIFIABLE ROBUSTNESS
+
+In this section, we formally introduce our proposed compositional architecture.
+
+Overview We first describe the idea behind the two variants of our architecture, illustrated in Figure 1. The first variant, named SelectionNet ACE, is shown in Figure 1a. Here, the selection mechanism is an auxiliary selection network $h_{\theta_s}^s$ which decides whether to pass sample $x$ through the core- or the certification-network. If the output of the selection network is greater than $T$ , then we pass $x$ through the certification-network $h_{\theta_b}^b$ , and otherwise we pass it through the core-network $h_{\theta_t}^t$ . The second variant, Entropy ACE, is shown in Figure 1b. Here we perform the selection for
+
+every sample $\mathbf{x}$ based on the output probabilities $p(\mathbf{y})$ produced by the certification-network $h_{\theta_b}^b$ via the softmax function. If the entropy $H(p(\mathbf{y}))$ of the output probability distribution exceeds a fixed threshold $T$ , we pass the sample through the core-network $h_{\theta_t}^t$ , otherwise we return the output of the certification-network. In Figure 1 we show the selection mechanism using dashed arrows and the two possible outputs of the end-to-end architecture using solid, red and blue arrows.
+
+Formally, we propose the compositional neural network architecture $h_\theta : \mathcal{X} \to \mathcal{Y}$ defined as
+
+$$
+h _ {\theta} (\boldsymbol {x}) = g _ {\theta_ {s}} (\boldsymbol {x}) \cdot h _ {\theta_ {b}} ^ {b} (\boldsymbol {x}) + \left(1 - g _ {\theta_ {s}} (\boldsymbol {x})\right) \cdot h _ {\theta_ {t}} ^ {t} (\boldsymbol {x}) \tag {2}
+$$
+
+The architecture combines three components: A selection-mechanism $g_{\theta_s} : \mathcal{X} \to \{0,1\}$ decides whether to forward an input $\pmb{x}$ through the core- or certification-network ( $g_{\theta_s}$ is instantiated in two different ways below), while the core-network $h_{\theta_t}^t : \mathcal{X} \to \mathcal{Y}$ and the certification-network $h_{\theta_b}^b : \mathcal{X} \to \mathcal{Y}$ assign an output label $y \in \mathcal{Y}$ to an input $\pmb{x} \in \mathcal{X}$ . We note that arbitrary network architectures and training methods can be used for each of the component networks. We evaluate some of these choices in our experimental evaluation section later.
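A minimal sketch of Equation 2's forward pass, with placeholder callables standing in for the three component networks (the names are illustrative, not the paper's code):

```python
import torch

def ace_forward(x, g, cert_net, core_net):
    """Eq. 2: h(x) = g(x) * h_b(x) + (1 - g(x)) * h_t(x),
    where the selection mechanism g returns 0 or 1."""
    s = float(g(x))  # selection decision in {0, 1}
    return s * cert_net(x) + (1 - s) * core_net(x)
```

Because $g$ is binary, exactly one of the two classification networks determines the output for any given input.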
+
+# 4.1 SELECTION MECHANISM
+
+The core of ACE is a selection mechanism that decides which network to use for inference. Ideally, the selection mechanism should pass inputs for which the certification-network is correct and certifiably robust through the certification-network, and all other inputs through the core-network. To train this mechanism, we set the following selection target for each sample $x$ in the training set:
+
+$$
+y _ {s} \left(h _ {\theta_ {b}} ^ {b}, \boldsymbol {x}\right) = \operatorname {c e r t} \left(B _ {\epsilon} ^ {\infty} (\boldsymbol {x}), h _ {\theta_ {b}} ^ {b}, y\right) \tag {3}
+$$
+
+The output of the binary function cert, explained in Section 3, is 1 if and only if we can certify that network $h_{\theta_b}^b$ classifies all inputs from the region $B_{\epsilon}^{\infty}(\pmb{x})$ to a label $y$ .
+
+If the separation described above were fully accurate and certifiable, the certifiable robustness of the certification-network would be retained by the combined network, while the natural accuracy and adversarial robustness would be lower bounded by those of the core-network. However, as the task of predicting certifiable correctness is a strictly more difficult variant of the meta recognition task (Scheirer et al., 2012), a perfect selection is usually unattainable. Here, we balance a trade-off between certifiable and natural accuracy, as a higher selection rate and consequently recall will generally increase certifiable accuracy at the cost of a larger drop in natural and adversarial accuracy. Clearly, a lower selection rate and consequently higher precision have the opposite effect. Note that, if there exist perturbations $\pmb{x}_1', \pmb{x}_2' \in B_{\epsilon}^{\infty}(\pmb{x})$ such that $g_{\theta_s}(\pmb{x}_1') = 0$ and $g_{\theta_s}(\pmb{x}_2') = 1$ , then we would have to certify the robustness of both the core- and certification-network (we would like to avoid certifying the core-network). Thus, only if we can prove that such a pair does not exist, meaning that the selection is robust, we can certify only one of these two networks.
+
+As the typically large and deep core-network tends to have a very low certifiable accuracy, low robustness of the selection mechanism leads directly to low certifiable accuracy of the composed network (because we would need to certify the core-network more often). An ideal selection mechanism has to be robust, accurate, and allow tuning of the selection rate.
+
+Towards that, we suggest two variants that fit these requirements: (i) SelectionNet, a selection-network trained on the binary selection task, and (ii) Entropy Selection, based on a threshold on the entropy of the output of the certification-network. We refer to the resulting architectures as SelectionNet ACE and Entropy ACE.
+
+# 4.1.1 SELECTIONNET
+
+We propose using a selection-network $h_{\theta_s}^s: \mathcal{X} \to \mathbb{R}$ to make the decision, which leads to the architecture illustrated in Figure 1a with the following inference procedure: (i) Pass sample $\pmb{x}$ through the selection-network resulting in the output $h_{\theta_s}^s(\pmb{x})$ . (ii) If the output is greater than the threshold $T$ , the sample is passed through the certification-network, and otherwise through the core-network. Formally, we define the selection mechanism as $g_{\theta_s}(\pmb{x}) := \mathbb{1}_{h_{\theta_s}^s(\pmb{x}) > T}$ .
+
+Training After training a provable certification-network using any choice of a provable training mechanism, we obtain the selection targets according to Equation 3. We then frame the selection
+
+problem as a binary classification task and train the selection-network directly on the inputs from the training set using the selection targets obtained above as labels. We note that for the selection-network, similarly to the certification-network, one can use any network architecture and provable training mechanism. Further, we propose to reduce the training time by using the certification-network as a feature extractor and only training the last linear layer of the selection-network. This also reduces the selection overhead during inference and certification with convex relaxation based methods. The core-network is trained completely independently, typically using a pre-trained model.
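The feature-extractor variant above can be sketched as follows; the helper name and training hyperparameters are illustrative assumptions, not the paper's implementation. Given frozen penultimate features of the certification-network and the binary selection targets from Equation 3, only a linear head is fit:

```python
import torch
import torch.nn as nn

def train_selection_head(feats, y_s, epochs=200, lr=0.5):
    """Fit a linear selection head on frozen features `feats` (N, d)
    against binary selection targets `y_s` (N,) from Eq. 3."""
    head = nn.Linear(feats.shape[1], 1)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(head(feats).squeeze(-1), y_s).backward()
        opt.step()
    return head  # select x for certification iff head(features(x)) > T
```

Training only this head keeps both the training cost and the certification overhead of the selection mechanism small, as discussed above.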
+
+# 4.1.2 ENTROPY SELECTION
+
+As a second variant we propose using an entropy based selection mechanism $g_{\theta_s}(\pmb{x}) \coloneqq \mathbb{1}_{H(\mathrm{softmax}(h_{\theta_b}^b (\pmb{x}))) < T}$, inspired by Teerapittayanon et al. (2016), which leads to the network architecture illustrated in Figure 1b and a slightly different inference procedure:
+
+We first map the output of the certification-network $\pmb{y}_b = h_{\theta_b}^b(\pmb{x})$ to a discrete probability distribution $p(\pmb{y}_b)$ over the labels, via the softmax function. Then we can compute the entropy $H(p(\pmb{y}_b))$ as
+
+$$
+H \left(p \left(\boldsymbol {y} _ {b}\right)\right) = - \sum_ {j = 1} ^ {n} p \left(\boldsymbol {y} _ {b}\right) _ {j} \log \left(p \left(\boldsymbol {y} _ {b}\right) _ {j}\right). \tag {4}
+$$
+
+If the entropy is below a threshold $T$ , we return the certification-network output, otherwise we pass the sample through the core-network.
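This rule is a direct transcription of Equation 4 followed by a threshold check (the threshold value below is illustrative):

```python
import torch

def entropy_select(y_b, T):
    """True iff the certification-network output y_b should be used:
    the entropy of softmax(y_b) (Eq. 4) lies below the threshold T."""
    p = torch.softmax(y_b, dim=-1)
    entropy = -(p * p.log()).sum(dim=-1)
    return bool(entropy < T)
```

A confident (low-entropy) output keeps the sample with the certification-network; an ambiguous one defers to the core-network.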
+
+Certification Using convex relaxation-based certification methods requires sound overapproximations of all layers. To derive an approximation of the entropy, we first recast Equation 4, including the softmax, and introduce the log-sum-exp trick to improve numerical stability, as
+
+$$
+H \left(p \left(\boldsymbol {y} _ {b}\right)\right) = c + \log \left(\sum_ {i} e ^ {y _ {b, i} - c}\right) - \sum_ {j} y _ {b, j} \exp \left(y _ {b, j} - c - \log \left(\sum_ {i} e ^ {y _ {b, i} - c}\right)\right). \tag {5}
+$$
+
+We provide a proof of this identity in Appendix H.6. We can now construct an entropy transformer from element-wise transformations corresponding to individual operations in Equation 5. In this work we use intervals and zonotopes (Appendix H), however, one can explore other approximations.
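As a quick numerical sanity check of this identity (this is not the certified transformer itself, which must propagate convex sets), one can compare the direct entropy of Equation 4 with the stabilized form of Equation 5:

```python
import torch

def entropy_direct(y):
    """Eq. 4: entropy of the softmax distribution."""
    p = torch.softmax(y, dim=-1)
    return -(p * p.log()).sum()

def entropy_lse(y):
    """Eq. 5 with c = max(y) for numerical stability."""
    c = y.max()
    lse = torch.log(torch.exp(y - c).sum())  # log-sum-exp of y - c
    return c + lse - (y * torch.exp(y - c - lse)).sum()
```

Choosing $c = \max_i y_{b,i}$ keeps all exponentials bounded by 1, so the element-wise transformers composed from Equation 5 stay numerically well behaved.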
+
+Joint Training Using vanilla provable training for the certification-network leads to a significant dependence of the entropy on the difficulty of a sample perturbation. This causes a wide range of possible entropies over the admissible perturbations, reducing the robustness of a thresholding based selection. To address this, we would like to decrease the width of the entropy range, while ideally decreasing the entropy for certifiable samples and increasing it for non-certifiable samples. We do this by introducing an entropy loss term $\mathcal{L}_H(\pmb{y}_b, y_s(\pmb{x})) = \mathrm{sign}(y_s(\pmb{x})) \cdot H(\mathrm{softmax}(\pmb{y}_b))$ using the convention $\mathrm{sign}(0) = -1$ , and the weighting factor $\lambda$ :
+
+$$
+\mathcal {L} _ {\text {j o i n t}} \left(h _ {\theta_ {b}} ^ {b}, \boldsymbol {x}, y\right) = (1 - \lambda) \cdot \mathcal {L} _ {C E} \left(\boldsymbol {y} _ {b}, y\right) + \lambda \cdot \mathcal {L} _ {H} \left(\boldsymbol {y} _ {b}, y _ {s} \left(h _ {\theta_ {b}} ^ {b}, \boldsymbol {x}\right)\right). \tag {6}
+$$
+
+We replace the cross entropy loss $\mathcal{L}_{CE}$ with the joint loss $\mathcal{L}_{joint}$, enabling adversarial and provable training against perturbations targeting both classification and entropy, thereby improving selection robustness. We train the certification-network with this joint loss, completely independently of the core-network. The selection target $y_{s}(h_{\theta_{b}}^{b},\pmb{x})$ for the natural, adversarial, and robust losses is, unlike in Equation 3, computed based on the natural, adversarial, and certifiable correctness of the certification-network, respectively, and not always on the certifiable correctness. Using this loss with networks of low accuracy has the disadvantage that the entropy loss encourages a more ambiguous output distribution for samples that are not classified (provably) correctly. Therefore, we perform pretraining with $\lambda = 0$.
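A sketch of the joint loss in Equation 6, with the $\mathrm{sign}(0) = -1$ convention made explicit; tensor shapes and names are illustrative:

```python
import torch
import torch.nn.functional as F

def joint_loss(y_b, y, y_s, lam):
    """Eq. 6: (1 - lam) * CE(y_b, y) + lam * sign(y_s) * H(softmax(y_b)).
    y_b: logits (N, C); y: labels (N,); y_s: binary selection targets (N,).
    sign(0) = -1, so the entropy of non-certifiable samples is pushed up."""
    p = torch.softmax(y_b, dim=-1)
    entropy = -(p * p.log()).sum(dim=-1)
    sign = 2.0 * (y_s > 0).float() - 1.0  # convention: sign(0) = -1
    return (1 - lam) * F.cross_entropy(y_b, y) + lam * (sign * entropy).mean()
```

With $\lambda = 0$ this reduces to plain cross-entropy, which is why pretraining with $\lambda = 0$ is a special case of the same objective.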
+
+# 4.2 END-TO-END CERTIFICATION
+
+After the network is trained, we need to prove that the classification by the compositional network
+
+$$
+y = \arg \max _ {j} \left[ g _ {\theta_ {s}} \left(\boldsymbol {x} ^ {\prime}\right) \cdot h _ {\theta_ {b}} ^ {b} \left(\boldsymbol {x} ^ {\prime}\right) + \left(1 - g _ {\theta_ {s}} \left(\boldsymbol {x} ^ {\prime}\right)\right) \cdot h _ {\theta_ {t}} ^ {t} \left(\boldsymbol {x} ^ {\prime}\right) \right] _ {j}, \quad \forall \boldsymbol {x} ^ {\prime} \in B _ {\epsilon} ^ {\infty} (\boldsymbol {x}) \tag {7}
+$$
+
+is correct and robust (as defined by Equation 1). Using one of the certification methods introduced in Section 3 to instantiate the certification function $\operatorname{cert}(\mathbb{X}, f, y)$ , the proof of robustness can be broken down into the following steps:
+
+1. Evaluate the selection mechanism $g_{\theta_s}$ on the unperturbed sample $\pmb{x}$ resulting in $g_{\theta_s}(\pmb{x})$ .
+2. Certify robustness of the decision mechanism $\operatorname{cert}(B_{\epsilon}^{\infty}(\pmb{x}), g_{\theta_s}, g_{\theta_s}(\pmb{x}))$ .
+3. If this certification was successful, certify the network selected for the unperturbed sample; otherwise, certify both the core- and the certification-network, i.e., $\operatorname{cert}(B_{\epsilon}^{\infty}(\pmb{x}), h_{\theta_b}^{b}, y)$ and $\operatorname{cert}(B_{\epsilon}^{\infty}(\pmb{x}), h_{\theta_t}^{t}, y)$.
+
+The deep core-network is typically not trained for certifiability and certification is often computationally infeasible. Therefore, we assume that certification of the core-network always fails. Consequently, only samples with a positive natural selection decision can be certified, making certification independent of the core-network. The results of the first and the second step can be used to determine which network will classify the unperturbed sample and which networks could classify a perturbed sample. This information can be used to compute the natural and adversarial accuracy of the combined network without evaluating them jointly.
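The three steps can be sketched as follows, with hypothetical 0/1-valued `cert` stubs standing in for the convex-relaxation certifier; under the assumption above, the core-network never needs to be certified:

```python
def certify_ace(x, eps, g, cert_selection, cert_b, y):
    """Steps 1-3 of Section 4.2. Returns 1 iff the composed network is
    certifiably robust around x, assuming certification of the
    core-network always fails."""
    s = g(x)                       # step 1: selection on the clean sample
    if not cert_selection(x, eps, s):
        return 0                   # step 2 failed: core-network would be needed
    if s == 1:                     # step 3: certify the selected network
        return cert_b(x, eps, y)
    return 0                       # core-network selected: cannot certify
```

Only samples that are provably routed to the certification-network can be certified, which is exactly the independence from the core-network noted above.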
+
+# 5 EXPERIMENTAL EVALUATION
+
+In this section, we demonstrate the effectiveness of ACE by showing that it achieves combinations of high natural accuracy and non-trivial provable accuracy that existing certified defenses cannot reach.
+
+Models and Datasets We evaluate ACE on 3 different certification-network architectures similar to the models used in Gowal et al. (2018) and Balunovic & Vechev (2020), on CIFAR-10, ImageNet200, and TinyImageNet with $\ell_{\infty}$ perturbations between 1/255 and 8/255, reporting Top-1 accuracies. TinyImageNet is a selection of 200 classes from ImageNet with samples cropped to meaningful regions of the image and downscaled to $64\times 64$. ImageNet200 is the full sized ImageNet restricted to the same 200 classes but always center cropped for evaluation. We denote as Conv2, Conv3 and Conv5 feed-forward networks with 2, 3, and 5 convolutional layers, respectively. Conv3 corresponds to the largest network from Balunovic & Vechev (2020), and Conv5 corresponds to the largest network from Zhang et al. (2020), DM-Large. More details can be found in Appendix A, Table 3. As the core-network, we use an EfficientNet-B0 (Tan & Le, 2019) with ImageNet pretraining, trained adversarially rather than naturally, as we believe empirical robustness to also be relevant in domains where deterministic guarantees are desired.
+
+Training and Certification We perform all experiments, with the exception of reference network training, on a single GeForce RTX 2080 Ti GPU and implement training and certification in PyTorch (Paszke et al., 2019). We train selection- and certification-networks using IBP (Gowal et al., 2018), CROWN-IBP (Zhang et al., 2020) and COLT (Balunovic & Vechev, 2020). The hyperparameters can be found in Appendix B. We use adversarial pretraining for COLT trained models. For Entropy Selection we set the joint loss factor to $\lambda = 0.5$. Unless specified otherwise, we use only the relatively fast, convex relaxation-based certification methods IBP (Gowal et al., 2018), CROWN-IBP (Zhang et al., 2020), and DeepZ (Singh et al., 2018) for IBP-, CROWN-IBP-, and COLT-trained networks, respectively. We use 40-step PGD to evaluate adversarial accuracy, using the strategy described in Appendix E to avoid gradient masking effects (Papernot et al., 2017).
+
+Comparison with Existing Architectures We show that ACE can, in contrast to state-of-the-art provable training methods, achieve both high natural and non-trivial provable accuracies, by enabling an efficient trade-off between natural and certifiable accuracy instead of maximizing provable robustness at any cost. As direct comparison with prior work is difficult, we show that using our best effort we were not able to achieve comparable combinations of natural and certifiable accuracy using the state-of-the-art COLT and CROWN-IBP provable training methods.
+
+COLT is computationally expensive, restricting it to relatively small models. We train the biggest model evaluated by Balunović & Vechev (2020), Conv3, on CIFAR-10 at $\epsilon_{\infty} = 2 / 255$ using COLT with a varying natural loss component and use DeepZ for certification. In Figure 2 we show certified vs natural accuracy for each of these models (yellow points). To compare with our approach, we train an ACE model (teal squares) using one of these networks (teal triangle) as certification-network. We observe that the ACE model obtains higher certified accuracies at all natural accuracies, still yielding a certified accuracy of $36.8\%$ at the highest natural accuracy $(85.1\%)$ obtained using an individual, naturally trained Conv3 network. Using the Conv3 model trained by Balunović & Vechev (2020)
+
+
+Figure 2: Natural and certified accuracy of different COLT trained models on CIFAR-10 with $\epsilon_{\infty} = 2/255$. We compare individual Conv3 networks (yellow dots), trained with COLT and varying natural loss components, with different ACE models (squares) based on an EfficientNet-B0 core-network (purple) and different certification-networks (triangles): Conv3 with DeepZ certification (teal) and Conv3 with MILP certification (blue). Further up and to the right is better. The horizontal distance between the yellow and teal line is the increase in natural accuracy due to using ACE instead of changing the natural loss component in COLT training.
+
+
+Figure 3: Natural and certified accuracy on CIFAR-10 with $\epsilon_{\infty} = 8 / 255$ . We compare individual Conv5 networks (yellow dots), trained with CROWN-IBP and varying natural loss components, with different ACE models (squares) based on an EfficientNet-B0 core-network (purple) and different certification-networks (triangles): CROWN-IBP trained Conv5 from Zhang et al. (2020) (red), CROWN-IBP trained Conv5 with $\kappa_{end} = 0.5$ (teal) and IBP trained Conv3 (blue). All selection-networks are IBP trained.
+
+(blue triangle) in combination with MILP certification as certification-network, we obtain an even stronger ACE model (blue squares). We compare to CROWN-IBP trained reference networks in Appendix C, yielding the same conclusion.
+
+CROWN-IBP can be applied to larger models, including an inherent robustness-accuracy trade-off parameter $\kappa$ , weighting the natural and robust loss components, and outperforms COLT on CIFAR-10 at $\epsilon_{\infty} = 8 / 255$ , making it the perfect benchmark for these larger perturbations. Using the original implementation and the largest model Zhang et al. (2020) evaluate on CIFAR-10, Conv5, we vary $\kappa_{end}$ to obtain several models. We show certified vs natural accuracy for each of these CROWN-IBP models (yellow points) in Figure 3. Using the Conv5 network published by Zhang et al. (2020) with $\kappa_{end} = 0.0$ (red triangle) and one we trained with $\kappa_{end} = 0.5$ (teal triangle), we train ACE models (red and teal squares) using IBP for the selection network training, which both outperform the individual Conv5 networks over a wide range of natural accuracies. Even when using a much weaker certification-network, such as an IBP trained Conv3 (blue triangle), we obtain an ACE model (blue squares) yielding more attractive trade-offs at high natural accuracies.
+
+These results show that across different provable training and certification methods, network architectures and perturbation sizes, ACE produces much more favorable robustness-accuracy trade-offs than varying hyperparameters of existing certified defenses. ACE models can always use a certification-network trained at the efficiency sweet spot of the employed provable training method, allowing any improvements in certified defenses to be utilized, while allowing for flexibility in the trade-off between accuracy and robustness. As ACE is truly orthogonal to all of these methods, it should be seen as a complement to, and not a replacement for, provable training methods.
+
+
+Figure 4: Natural and certified accuracy on TinyImageNet with $\epsilon_{\infty} = 1 / 255$ . We compare the LiRPA trained networks from Xu et al. (2020) WideResNet (black triangle), DenseNet (teal), ResNeXt (yellow), and CNN7+BN (red) with an ACE model (black squares) using the same LiRPA trained WRN as certification-network and as feature extractor for the otherwise IBP trained selection-network. The ACE model uses an EfficientNet-B0 core-network (purple).
+
+Table 1: Top-1 natural, adversarial and certifiable accuracy for various ACE models. The training methods COLT, LiRPA with loss fusion, IBP and CROWN-IBP (C-IBP) used for the training of certification- (CERT) and selection-networks (SELECT) are indicated separately.
+
+| Dataset | $\epsilon_\infty$ | Selection Method | CERT | SELECT | Certification Network | Natural [%] | Adversarial [%] | Certified [%] |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| CIFAR-10 | 2/255 | SelectionNet | COLT | COLT | Conv3 | 90.1 | 78.4 | 27.5 |
+| CIFAR-10 | 8/255 | SelectionNet | C-IBP | IBP | Conv5 | 80.1 | 48.8 | 10.5 |
+| ImageNet200 | 1/255 | SelectionNet | COLT | COLT | Conv3 | 70.0 | 60.5 | 3.1 |
+| TinyImageNet | 1/255 | SelectionNet | LiRPA | IBP† | WRN | 50.0 | 35.9 | 10.5 |
+
+$\dagger$ LiRPA trained WideResNet from Xu et al. (2020) used as feature extractor for the selection-network.
+
+The compositional structure of ACE has the additional advantage of permitting every component network to work at a different resolution. For tasks where high-resolution images are available, the core-network can process full-scale images, while down-scaled versions are passed through the selection- and certification-networks. For ImageNet200, we use full-scale images in the core-network to obtain a high natural accuracy (70.0%), while the certification- and selection-networks yield a non-trivial certifiable accuracy (3.1%) at $\epsilon_{\infty} = 1 / 255$ using samples scaled down to $64 \times 64$. Here, the certifiable accuracy is limited by the lack of a strong certifiably robust network to use as a certification-network.
+
+For TinyImageNet no full-size images are available, reducing the advantage of an ACE model. However, using the LiRPA trained WideResNet (black triangle in Figure 4) from Xu et al. (2020) as certification-network and feature extractor for the otherwise IBP trained selection-network, we train an ACE model (black squares) showing a very similar trade-off characteristic as the CIFAR-10 models, demonstrating that ACE scales to larger tasks.
+
+In Table 1 we present results on the CIFAR-10 and TinyImageNet datasets for selected models. An extended table showing more results can be found in Appendix G.
+
+Selection Recall that the key to our compositional architecture is a provably robust selection mechanism that can differentiate samples based on their certifiability by the certification-network. We try to certify the natural selection decision made by a Conv3 selection-network on CIFAR-10 at $\epsilon_{\infty} = 2 / 255$ and split samples into three groups: samples selected for all admissible perturbations (provably selected), samples not selected for any admissible perturbation (provably non-selected), and the remainder for which we cannot prove either decision
+
+Table 2: Certifiable accuracy of the certification-network depending on selection decision using a COLT trained Conv3 selection- and certification-network for CIFAR-10 at $\epsilon_{\infty} = 2 / 255$
+
+| Selection decision | Certifiable Accuracy [%] |
+| --- | --- |
+| provably selected | 72.9 |
+| non-provably selected | 52.3 |
+| provably not selected | 28.4 |
+| full test set | 47.9 |
+
+
+Figure 5: Natural and adversarial accuracy for two ACE networks: one with SelectionNet + Conv3 and one with entropy selection + Conv2. We evaluate classification networks on the full test set and its subsets selected by the two selection mechanisms.
+
+
+Figure 6: Width of the entropy range over admissible perturbations. Samples are denoted as adversarial if we can successfully attack the certification-network and nonadversarial otherwise.
+
+(non-provably selected). In Table 2 we show that for the provably selected samples the certifiable accuracy $(72.9\%)$ is much higher than on the full test set $(47.9\%)$ , while it is much lower for the provably non-selected samples $(28.4\%)$ . This shows that the selection-network successfully separates samples based on certification difficulty.
+
+Evaluating Core- and Certification-Networks Next, we train two ACE networks for CIFAR-10 with $\epsilon_{\infty} = 2 / 255$ using COLT: one with a selection-network and Conv3 certification-network, and another with entropy selection and a Conv2 certification-network. Both use EfficientNet-B0 as a core-network. We evaluate the natural and adversarial accuracy of both the core- and certification-network on the full test set and its subsets selected by the selection mechanism. The results are shown in Figure 5. We observe that the accuracy of both certification-networks is significantly higher on their respective selected datasets, while the accuracy of the core-networks decreases. This indicates that the selection mechanism can successfully separate samples by classification difficulty and assign easier samples to the certification- and more difficult samples to the core-network.
+
+Effectiveness of the Entropy Loss Recall that we introduced the entropy loss to make Entropy Selection more robust, by decreasing the sensitivity to different perturbations. To assess its effectiveness, we train two Conv2 certification-networks using COLT for CIFAR-10 with $\epsilon_{\infty} = 2 / 255$ with and without entropy loss. Note that using entropy loss corresponds to $\lambda = 0.5$ and not using it corresponds to $\lambda = 0.0$ in Equation 6. We split the test set into two groups based on whether an adversarial attack on the certification-network is successful (adversarial) or not (non-adversarial). For each sample, we compute the difference between the largest and the smallest entropy that can be obtained by perturbing it. Figure 6 shows histograms of these differences (or widths) for both the adversarial and non-adversarial group. Clearly, the non-adversarial samples lead to much narrower entropy ranges if an entropy loss was used, while there is no significant difference if no entropy loss is used. This demonstrates that the entropy loss successfully increased the robustness of the entropy selection mechanism for non-adversarial samples (as adversarial ones are not certifiable anyway).
+
+# 6 CONCLUSION
+
+We proposed a new architecture that boosts the certifiable robustness of any state-of-the-art network, while retaining high accuracy and without requiring retraining. The key idea is to combine this network with a provably trained certification-network and a certifiable selection mechanism, which adaptively decides at inference-time which of the two networks to use. We presented two such selection mechanisms with corresponding training and certification methods. Our experiments show that using this method, one can achieve both high natural accuracies and non-trivial certifiable robustness, beyond the reach of state-of-the-art certified defenses. Our architecture is also fully orthogonal to certified defenses, allowing any advances in this field to be carried over directly.
+
+# REFERENCES
+
+Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
+Mislav Balunović and Martin Vechev. Adversarial training and provable defenses: Bridging the gap. In International Conference on Learning Representations, 2020.
+Mislav Balunović, Maximilian Baader, Gagandeep Singh, Timon Gehr, and Martin Vechev. Certifying geometric robustness of neural networks. In Advances in Neural Information Processing Systems, 2019.
+Tolga Bolukbasi, Joseph Wang, Ofer Dekel, and Venkatesh Saligrama. Adaptive neural networks for efficient inference. arXiv preprint arXiv:1702.07811, 2017.
+Rudy R Bunel, Ilker Turkaslan, Philip Torr, Pushmeet Kohli, and Pawan K Mudigonda. A unified view of piecewise linear neural network verification. In Advances in Neural Information Processing Systems, pp. 4790-4799, 2018.
+Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In Proceedings of the 36th International Conference on Machine Learning, 2019.
+Marc Fischer, Maximilian Baader, and Martin Vechev. Certification of semantic perturbations via randomized smoothing. arXiv preprint arXiv:2002.12463, 2020.
+Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. Ai2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (SP), pp. 3-18. IEEE, 2018.
+Khalil Ghorbal, Eric Goubault, and Sylvie Putot. The zonotope abstract domain taylor1+. In International Conference on Computer Aided Verification, pp. 627-633. Springer, 2009.
+Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715, 2018.
+Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. Reluplex: An efficient smt solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pp. 97-117. Springer, 2017.
+Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy (S&P), 2019.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
+Mason McGill and Pietro Perona. Deciding how to decide: Dynamic routing in artificial neural networks. arXiv preprint arXiv:1703.06217, 2017.
+Matthew Mirman, Timon Gehr, and Martin Vechev. Differentiable abstract interpretation for provably robust neural networks. In International Conference on Machine Learning, 2018.
+Jeet Mohapatra, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, and Luca Daniel. Towards verifying robustness of neural networks against a family of semantic perturbations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 244–252, 2020.
+Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506-519, 2017.
+
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in neural information processing systems, pp. 8026-8037, 2019.
+Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. Certified defenses against adversarial examples. In International Conference on Learning Representations, 2018.
+Hadi Salman, Greg Yang, Jerry Li, Pengchuan Zhang, Huan Zhang, Ilya Razenshteyn, and Sebastien Bubeck. Provably robust deep learning via adversarially trained smoothed classifiers. arXiv preprint arXiv:1906.04584, 2019a.
+Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. A convex relaxation barrier to tight robustness verification of neural networks. In Advances in Neural Information Processing Systems, pp. 9835-9846, 2019b.
+Walter J Scheirer, Anderson de Rezende Rocha, Jonathan Parris, and Terrance E Boult. Learning for meta-recognition. IEEE Transactions on Information Forensics and Security, 7(4):1214-1224, 2012.
+Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, and Martin Vechev. Fast and effective robustness certification. In Advances in Neural Information Processing Systems, pp. 10802-10813, 2018.
+Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, and Martin Vechev. Beyond the single neuron convex barrier for neural network certification. In Advances in Neural Information Processing Systems, 2019.
+Mingxing Tan and Quoc V Le. Efficientnet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946, 2019.
+Surat Teerapittayanon, Bradley McDanel, and Hsiang-Tsung Kung. Branchynet: Fast inference via early exiting from deep neural networks. In 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 2464-2469. IEEE, 2016.
+Vincent Tjeng, Kai Xiao, and Russ Tedrake. Evaluating robustness of neural networks with mixed integer programming. arXiv preprint arXiv:1711.07356, 2017.
+Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. arXiv preprint arXiv:2002.08347, 2020.
+Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Efficient formal safety analysis of neural networks. In Advances in Neural Information Processing Systems, pp. 6367-6377, 2018a.
+Xin Wang, Fisher Yu, Zi-Yi Dou, Trevor Darrell, and Joseph E Gonzalez. Skipnet: Learning dynamic routing in convolutional networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 409-424, 2018b.
+Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S Dhillon, and Luca Daniel. Towards fast computation of certified robustness for relu networks. arXiv preprint arXiv:1804.09699, 2018.
+Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In Proceedings of the 35th International Conference on Machine Learning. PMLR, 2018.
+Eric Wong, Frank Schmidt, Jan Hendrik Metzen, and J. Zico Kolter. Scaling provable adversarial defenses. In Advances in Neural Information Processing Systems 31. 2018.
+Kaidi Xu, Zhouxing Shi, Huan Zhang, Minlie Huang, Kai-Wei Chang, Bhavya Kailkhura, Xue Lin, and Cho-Jui Hsieh. Automatic perturbation analysis on general computational graphs. arXiv preprint arXiv:2002.12920, 2020.
+
+Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. Efficient neural network robustness certification with general activation functions. In Advances in neural information processing systems, 2018.
+Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, and Cho-Jui Hsieh. Towards stable and efficient training of verifiably robust neural networks. In International Conference on Learning Representations, 2020.
+
+# A NETWORK ARCHITECTURES
+
+In Table 3 we list the detailed architectures for CIFAR-10 and ImageNet200 used in the experiments described in Section 5. The Conv3 architecture for CIFAR-10 is identical to the largest network from Balunović & Vechev (2020).
+
+Table 3: Network architectures of the certification- and selection-networks used for CIFAR-10 and ImageNet200 (Conv3 (IN)). All layers are followed by a ReLU activation. The last fully connected layer is omitted. For Conv3 (IN), a global average pooling layer precedes this last linear layer. "Conv c h×w/s" corresponds to a 2D convolution with c output channels, an h×w kernel, and a stride of s in both dimensions.
+
+| Conv2 | Conv3 | Conv5 | Conv3 (IN) |
+| --- | --- | --- | --- |
+| Conv 32 4×4/2 | Conv 32 3×3/1 | Conv 64 3×3/1 | Conv 32 5×5/2 |
+| Conv 32 4×4/2 | Conv 32 4×4/2 | Conv 64 3×3/1 | Conv 64 5×5/2 |
+| FC 200 | Conv 128 4×4/2 | Conv 128 3×3/2 | Conv 128 5×5/2 |
+| | FC 250 | Conv 128 3×3/1 | |
+| | | Conv 128 3×3/1 | |
+| | | FC 512 | |
+
+# B TRAINING HYPERPARAMETERS
+
+CIFAR-10 Training For CIFAR-10, IBP training is conducted for 200 epochs, annealing $\epsilon$ and $\kappa$ over the first 100 epochs, with an initial learning rate of 1e-3, halved every 10 epochs after annealing is completed. We choose $\kappa_{end} = 0.5$ for all models. COLT training is conducted for 40 epochs per stage with an initial learning rate of 1e-3, decreased by a factor of 0.75 between stages and by a factor of 0.5 every 5 epochs after an initial loss mixing period of 5 epochs.
+
+ImageNet Training For IBP training on ImageNet200 and TinyImageNet we train for 50 epochs using a batch size of 100, an initial learning rate of 1e-3, reducing it by half every 5 epochs after annealing both $\kappa$ and $\epsilon$ for 10 epochs. We choose $\kappa_{end} = 0.5$ for all models. For COLT training on ImageNet200 we use feature extraction and freeze all weights up to the last linear layer. We train for 20 epochs per stage, using an initial learning rate of 1e-3, reduced by a factor of 0.75 between stages and 0.5 every 3 epochs after an initial loss mixing period of 5 epochs.
+
+# C ADDITIONAL EXPERIMENTS COMPARING WITH CROWN-IBP
+
+In this section we present additional experiments comparing ACE models with CROWN-IBP.
+
+CROWN-IBP trained networks cannot match the performance of COLT trained and MILP certified networks on CIFAR-10 at $\epsilon_{\infty} = 2 / 255$ (Balunović & Vechev, 2020; Zhang et al., 2020). However, as their certification is notably cheaper, we still compare their approach in isolation to an ACE compositional architecture using the largest model Zhang et al. (2020) evaluate on CIFAR-10, Conv5. We follow their instructions for multi-GPU training and obtain several models by varying the parameter $\kappa_{end}$ (yellow dots in Figure 7), obtaining very similar results for the settings they report ($\kappa_{end} \in \{0, 0.5\}$). Note that every setpoint requires about 4 GPU days, while training an ACE model on top of an available certification-network only takes a few hours, and new setpoints can be evaluated in minutes. In Figure 7 we show certifiable vs. natural accuracy for each of these CROWN-IBP models (yellow points). To compare with our method, we choose the CROWN-IBP trained Conv5 with $\kappa_{end} = 0.5$ as certification-network (teal triangle) and use it to train our ACE models with various thresholds $T$, shown as teal squares in Figure 7. Clearly, our models achieve much better robustness-accuracy trade-offs, especially in regions of high natural accuracy. We do not compare the CROWN-IBP results to the MILP-based results reported in the abstract, as the latter require a much more expensive certification approach. However, even when using a much weaker certification-network, such as an IBP trained Conv3 (black triangle), we can still train ACE models (black squares) obtaining more attractive trade-offs at high natural accuracies.
+
+
+Figure 7: Natural and certified accuracy on CIFAR-10 with $\epsilon_{\infty} = 2 / 255$ . We compare individual Conv5 networks (yellow dots), trained with CROWN-IBP and varying natural loss components, with different ACE models (squares) based on an EfficientNet-B0 core-network (purple) and different certification-networks (triangles): CROWN-IBP trained Conv5 (teal) and IBP trained Conv3 (black). Further up and to the right is better.
+
+# D ADDITIONAL EXPERIMENTS ON COLT TRAINED ACE MODELS
+
+In this section we describe additional experiments using COLT trained Conv3 networks.
+
+We use the Conv3 network published by Balunović & Vechev (2020) and certify it using DeepZ (Singh et al., 2018) and MILP (Tjeng et al., 2017). To unlock the full potential of this MILP certified certification-network, we would ideally train our selection-network with MILP based labels. However, certifying the whole training set, or even a significant portion of it, using MILP is infeasible. Therefore, we have to use surrogate labels when training the selection-network. We evaluate three approaches to training a selection-network and present the results in Figure 8:
+
+- Transfer the selection network of a different ACE model based on a certification-network with higher zonotope certified accuracy (teal)
+- Compute selection labels using zonotope certification (brown)
+- Compute selection labels using adversarial correctness (yellow)
+
+We observe that the last approach consistently performs worse than the other two over a range of selection rates, suggesting that the selection-network can learn features that make a sample hard to certify while not necessarily leading to successful adversarial attacks. Interestingly, the selection-network transferred from a different Conv3 network (with a higher zonotope certified accuracy) helps to improve the performance of the ACE model at high natural accuracies not only when certified using MILP, but also when using DeepZ. This suggests that the properties making a sample difficult to certify show at least some level of stability across different networks.
+
+# E ADVERSARIAL ACCURACY COMPUTATION
+
+The adversarial accuracies listed in Table 1 are intended to be purely informative and not to be considered as a strong indicator of the true robust accuracy. Nevertheless, we consider the potential problem of gradient masking (Papernot et al., 2017) caused by the compositional structure and developed the following three approaches to calculate adversarial accuracy:
+
+- We compute separate adversarial samples attacking the core-, selection- and certification-network. If we find any perturbation leading to the selection of a classification network, we consider it reachable and evaluate the corresponding adversarial example. Only if no attack on a reachable network is successful do we consider the adversarial attack to have failed. Note that a successful adversarial example for either classification network would not necessarily be classified by this network. Therefore, this approach does not provide a true upper bound on the robust accuracy.
+- We attack the classification networks as described above, but use certifiable instead of empirical reachability, that is, we consider a network reachable unless we can prove that it cannot be reached. This approach provides an even more conservative adversarial accuracy.
+
+
+Figure 8: Natural and certified accuracy on CIFAR-10 at $\epsilon_{\infty} = 2 / 255$. ACE networks using a COLT trained Conv3 network directly from Balunović & Vechev (2020) as certification-network and three different selection-networks, certified using DeepZ (dots) or MILP (squares). Selection-networks transferred from a different Conv3 network (teal), trained on certifiable correctness (brown), and trained on adversarial correctness (yellow).
+
+- We compute two adversarial attacks, one aimed at the core- and the other at the certification-network. To this end, we combine the classification network's loss term with a loss component from the selection-network, designed to perturb the sample such that the currently targeted network gets selected. We only consider an attack successful if an adversarial example gets classified incorrectly by the compositional network. To reduce the gradient obfuscation problem in this setting, we weight the selection-network loss term based on whether we already select the currently targeted network.
+
+While the gap between the first two approaches is usually very small at less than $1\%$ , the third approach sometimes yields notably higher adversarial accuracies. Therefore, we decided to report the most conservative numbers, obtained using the second approach. We use PGD with 40 steps, a step size of $0.035\epsilon$ and one restart with a random initialization of the perturbation.
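The PGD configuration above can be sketched as follows. We use a plain linear classifier so the input gradient is available in closed form; this is purely illustrative (the paper attacks full networks via automatic differentiation), and all names are ours:

```python
import numpy as np

def pgd_linear(W, x, y, eps, steps=40, step_frac=0.035, rng=None):
    """L_inf PGD sketch matching the setup in this section: 40 steps,
    step size 0.035*eps, one restart with random initialization.

    Shown on a linear classifier logits = W @ x, whose cross-entropy
    gradient with respect to the input is available in closed form."""
    if rng is None:
        rng = np.random.default_rng(0)
    delta = rng.uniform(-eps, eps, size=x.shape)  # random init in the ball
    for _ in range(steps):
        logits = W @ (x + delta)
        z = logits - logits.max()                 # stable softmax
        p = np.exp(z) / np.exp(z).sum()
        g_logits = p.copy()
        g_logits[y] -= 1.0                        # d(cross-entropy)/d(logits)
        grad = W.T @ g_logits                     # chain rule to the input
        delta = np.clip(delta + step_frac * eps * np.sign(grad), -eps, eps)
    return x + delta
```

The sign step and the final clip keep the perturbation inside the $\epsilon$-ball at every iteration.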
+
+# F NATURAL AND ADVERSARIAL CORE NETWORKS
+
+In this section, we compare ACE models obtained using naturally and adversarially trained core-networks.
+
+In both cases we use an EfficientNet-B0 with ImageNet pretraining. On CIFAR-10 at $\epsilon_{\infty} = 2 / 255$, natural training leads to a natural accuracy of $97.4\%$ but only $6.7\%$ adversarial accuracy, while adversarial training yields $95.1\%$ and $85.6\%$, respectively. In Figure 9 we show certified over natural accuracy and compare individual Conv3 networks (yellow dots), trained with COLT and varying natural loss components, with ACE models (squares) using the same Conv3 certification-network (triangles), but different core-networks. The ACE model with a naturally trained core-network is shown in blue and that with an adversarially trained core-network in teal. At very high natural accuracies, the relative drop in natural accuracy from core-network to ACE model is significantly higher when a naturally trained core-network is used. This permits a much higher selection rate, leading to a significant increase in certified accuracy for any given natural accuracy. Despite these improvements, we maintain that using an adversarially trained core-network is more representative of potential real-world usage, where empirical robustness guarantees are also considered.
+
+# G RESULTS ON SELECTED SETPOINTS
+
+In Table 4 we present selected setpoints of ACE models trained on CIFAR-10, TinyImageNet and ImageNet200, demonstrating that ACE can be applied to a range of models, datasets and provable training methods.
+
+
+Figure 9: Natural and certified accuracy of different COLT trained models on CIFAR-10 with $\epsilon_{\infty} = 2 / 255$. We compare individual Conv3 networks (yellow dots), trained with COLT and varying natural loss components, with ACE models (squares) using the same Conv3 certification-network (triangles), but different core-networks: a naturally trained EfficientNet-B0 (blue) and an adversarially trained EfficientNet-B0 (teal).
+
+Table 4: Natural, adversarial and certifiable accuracy for various ACE models. The training methods COLT, LiRPA with loss fusion, IBP and CROWN-IBP (C-IBP) used for the certification- (CERT) and selection-networks (SELECT) are indicated separately.
+
+| Dataset | $\epsilon_{\infty}$ | Selection Method | CERT | SELECT | Certification Network | Top-1 Natural [%] | Adversarial [%] | Certified [%] |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| CIFAR-10 | 2/255 | SelectionNet | COLT | COLT | Conv2 | 90.4 | 79.0 | 20.5 |
+| | | | COLT | COLT | Conv3 | 90.1 | 78.4 | 27.5 |
+| | | | IBP | IBP | Conv3 | 90.5 | 80.5 | 18.5 |
+| | | | C-IBP | C-IBP | Conv5 | 88.6 | 77.6 | 25.6 |
+| | | Entropy | COLT | - | Conv2 | 93.4 | 85.1 | 19.4† |
+| | | | IBP | - | Conv2 | 90.2 | 71.7 | 7.2 |
+| | 8/255 | SelectionNet | COLT | COLT | Conv3 | 77.1 | 46.7 | 7.4 |
+| | | | IBP | IBP | Conv3 | 83.1 | 49.9 | 6.4 |
+| | | | C-IBP | IBP | Conv5 | 80.1 | 48.8 | 10.5 |
+| ImageNet200 | 1/255 | SelectionNet | COLT | COLT | Conv3 | 70.0 | 60.5 | 3.1 |
+| | | | IBP | IBP | Conv3 | 68.3 | 57.4 | 3.8 |
+| TinyImageNet | 1/255 | SelectionNet | LiRPA | IBP‡ | WRN | 50.0 | 35.9 | 10.5 |
+
+† Evaluated using MILP certification (Tjeng et al., 2017) on the first 1000 samples of the test set.
+$\ddagger$ LiRPA trained WideResNet from Xu et al. (2020) used as feature extractor for the selection-network.
+
+# H ZONOTOPE TRANSFORMERS
+
+For both training and certification with convex relaxations it is essential to have precise, also called tight, transformers to reduce the accumulation of errors. Singh et al. (2018) provide such transformers for the ReLU, tanh and sigmoid functions. In this section, we describe a general approach to constructing transformers in the zonotope domain for $C^1$-continuous, concave or convex functions, and introduce transformers for the exponential and logarithm functions as well as for the product of two zonotopes. These can then be combined into a transformer for the entropy function.
+
+# H.1 ZONOTOPE DOMAIN
+
+The zonotope domain (Ghorbal et al., 2009) is a classic numeric abstract domain, shown to be suitable as a convex relaxation for analyzing neural networks (Gehr et al., 2018), as it is exact for affine transformations and efficient abstract transformers for the ReLU, sigmoid and tanh functions exist (Singh et al., 2018). A zonotope approximation $\mathcal{Z} \subseteq \mathbb{R}^n$ of an $n$-dimensional variable $x \in \mathbb{R}^n$ is described by an affine form $\hat{x}_j$ for every dimension $x_j$ as
+
+$$
+\hat {x} _ {j} = a _ {j, 0} + \sum_ {i = 1} ^ {p} a _ {j, i} \cdot \epsilon_ {i}, \quad a _ {j, i} \in \mathbb {R}, \epsilon_ {i} \in [ - 1, 1 ] \tag {8}
+$$
+
+with the center coefficient $a_{j,0}$ , error coefficients $a_{j,i}$ and shared error terms $\epsilon_{i}$ . These shared error terms allow the representation of implicit dependencies between dimensions and make the zonotope domain strictly more powerful than the interval domain (equivalence is given for a diagonal error matrix). Further, the matrix notation
+
+$$
+\hat {\boldsymbol {x}} = \boldsymbol {a} _ {0} + \boldsymbol {A} \epsilon , \quad \boldsymbol {a} _ {0} \in \mathbb {R} ^ {n \times 1}, \boldsymbol {A} \in \mathbb {R} ^ {n \times p}, \epsilon \in [ - 1, 1 ] ^ {p \times 1} \tag {9}
+$$
+
+makes affine transformations of the form $y = Bx + c$ very simple to apply:
+
+$$
+\hat {\boldsymbol {y}} = \underbrace {\boldsymbol {B} \boldsymbol {a} _ {0 , i n} + \boldsymbol {c}} _ {\boldsymbol {a} _ {0, o u t}} + \underbrace {\left(\boldsymbol {B} \boldsymbol {A} _ {i n}\right)} _ {\boldsymbol {A} _ {o u t}} \epsilon = \boldsymbol {a} _ {0, o u t} + \boldsymbol {A} _ {o u t} \epsilon \tag {10}
+$$
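A minimal sketch of this representation and of the exact affine transformer in equation 10 (class and method names ours, not from a specific library):

```python
import numpy as np

class Zonotope:
    """Center vector a0 and error coefficient matrix A over p shared
    error terms eps in [-1, 1]^p (Eq. 8-9)."""

    def __init__(self, a0, A):
        self.a0 = np.asarray(a0, dtype=float)   # shape (n,)
        self.A = np.asarray(A, dtype=float)     # shape (n, p)

    def affine(self, B, c):
        """Exact transformer for y = B x + c (Eq. 10)."""
        return Zonotope(B @ self.a0 + c, B @ self.A)

    def concretize(self):
        """Element-wise interval bounds implied by the zonotope."""
        radius = np.abs(self.A).sum(axis=1)
        return self.a0 - radius, self.a0 + radius
```

Because error terms are shared across dimensions, the difference of two fully correlated coordinates concretizes to a single point, whereas interval arithmetic on the same inputs would return a width-4 interval; this is the sense in which the domain is strictly more powerful than the interval domain.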
+
+# H.2 GENERAL TRANSFORMER CONSTRUCTION
+
+Sound neuron-wise transformers for the zonotope domain can be visualized, in the input-output plane of the function to be approximated, as parallelograms with vertical left and right edges that fully enclose the function on the input interval. They can be described as
+
+$$
+\hat {y} _ {j} = \lambda_ {j} \hat {x} _ {j} + \xi_ {j} + \mu_ {j} \epsilon_ {p + 1} \tag {11}
+$$
+
+for the $j^{\mathrm{th}}$ dimension of the input zonotope $\hat{\pmb{x}}$ with $p$ error terms and the neuron-/dimension-wise parameters: slope $\lambda_{j}$, offset $\xi_{j}$ and looseness $\mu_{j}$.
+
+The height of the parallelogram, $2\mu_{j}$, corresponds to the magnitude of the new error term. As the width depends only on the range of the input, the parallelogram's area depends only on this height. A transformer is strictly more precise than another if the parallelogram representation of the former is fully enclosed in that of the latter. While a smaller area generally corresponds to a smaller loss of precision, no guarantees can be given.
+
+Viability shall be defined as the absence of strictly more precise transformers of the same form. The looseness $\mu$ of viable transformers is uniquely determined by the slope $\lambda$ , as the offset $\xi$ and $\mu$ can be chosen so that both the upper and lower edges touch the function plot in at least one point. All transformers with different offsets and looseness but the same slope will enclose this viable transformer and are therefore strictly less precise.
+
+If one of these contact points lies within the input interval $x \in [x_{lb}, x_{ub}]$, the slope is by definition a subgradient of the function at this point. If a contact point lies on the boundary of the input interval, the slope can be decreased/increased in the direction of the local one-sided gradient until it either becomes the tangent at that point or an additional contact point is made. In both cases, the new slope is by definition a subgradient of the function at both contact points, and the new transformer is strictly more precise than the original one. It follows that all viable slopes are subgradients of the function at some point in the interval.
+
+Convexity can be assumed without loss of generality, as concave functions can be negated to ensure convexity. For convex, $C^1$-continuous functions, all tangents to the graph of the function yield viable transformers; consequently, they can be parametrized by the x-position $x_{lb} \leq t \leq x_{ub}$ of the contact point. Using the mean value theorem and convexity, it follows that there is a point $t_{crit}$ where the upper edge of the parallelogram connects the lower and upper endpoints of the graph. For $t < t_{crit}$ it makes contact at the upper endpoint and for $t > t_{crit}$ at the lower endpoint. This allows us to describe the parameters $\lambda$, $\xi$ and $\mu$ of a zonotope transformer for an element-wise function $f(x): \mathbb{R} \to \mathbb{R}$ on the interval $[x_{lb}, x_{ub}]$ as
+
+$$
+\lambda = f ^ {\prime} (t) \tag {12}
+$$
+
+$$
+\xi = \frac{1}{2} \left( f(t) - \lambda t + \left\{ \begin{array}{ll} f\left(x_{lb}\right) - \lambda x_{lb}, & \text{if } t \geq t_{crit} \\ f\left(x_{ub}\right) - \lambda x_{ub}, & \text{if } t < t_{crit} \end{array} \right. \right) \tag{13}
+$$
+
+$$
+\mu = \frac{1}{2} \left( \lambda t - f(t) + \left\{ \begin{array}{ll} f\left(x_{lb}\right) - \lambda x_{lb}, & \text{if } t \geq t_{crit} \\ f\left(x_{ub}\right) - \lambda x_{ub}, & \text{if } t < t_{crit} \end{array} \right. \right) \tag{14}
+$$
+
+$$
+\left. \nabla_ {x} f (x) \right| _ {x = t _ {c r i t}} = \frac {f \left(x _ {u b}\right) - f \left(x _ {l b}\right)}{x _ {u b} - x _ {l b}} \tag {15}
+$$
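Equations 12 to 14 translate directly into code. The following sketch (function name and branch handling ours) computes $(\lambda, \xi, \mu)$ for a convex, $C^1$ function, given a tangent point $t$ and $t_{crit}$ obtained from equation 15:

```python
def parallelogram_params(f, df, x_lb, x_ub, t, t_crit):
    """Transformer parameters (Eq. 11-14) for a convex, C^1 function f
    on [x_lb, x_ub], with tangent point t; t = t_crit yields the minimum
    area transformer derived below."""
    lam = df(t)                                       # Eq. 12: slope = tangent at t
    # Eq. 13/14: the upper edge touches the endpoint selected by t vs. t_crit.
    end = f(x_lb) - lam * x_lb if t >= t_crit else f(x_ub) - lam * x_ub
    xi = 0.5 * (f(t) - lam * t + end)                 # offset (midline)
    mu = 0.5 * (lam * t - f(t) + end)                 # looseness (half height)
    return lam, xi, mu
```

The lower edge $\lambda x + \xi - \mu$ is the tangent at $t$ and the upper edge $\lambda x + \xi + \mu$ passes through the selected endpoint, so the parallelogram encloses the function on the whole interval.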
+
+Minimum Area Transformer - A minimum area transformer can now be derived by minimizing the looseness $\mu$ separately on $x_{lb} \leq t \leq t_{crit}$ and $t_{crit} \leq t \leq x_{ub}$. This yields the constrained optimization problems:
+
+$$
+\min _ {t} \frac {f ^ {\prime} (t) \left(t - x _ {u b}\right) - f (t) + f \left(x _ {u b}\right)}{2}, \quad s. t. \quad t \geq x _ {l b}, \quad t \leq t _ {c r i t} \tag {16}
+$$
+
+$$
+\min _ {t} \frac {f ^ {\prime} (t) \left(t - x _ {l b}\right) - f (t) + f \left(x _ {l b}\right)}{2}, \quad s. t. \quad t \geq t _ {c r i t}, \quad t \leq x _ {u b} \tag {17}
+$$
+
+These can be solved using the method of Lagrange multipliers. Equation 16 yields the Lagrangian function:
+
+$$
+\mathcal {L} (t, \gamma) = \frac {1}{2} \left(f ^ {\prime} (t) \left(t - x _ {u b}\right) - f (t) + f \left(x _ {u b}\right)\right) - \gamma_ {1} \left(t - x _ {l b}\right) + \gamma_ {2} \left(t - t _ {c r i t}\right) \tag {18}
+$$
+
+$$
+\nabla_t \mathcal{L}(t, \gamma) = \frac{1}{2} \left( f^{\prime\prime}(t) \left(t - x_{ub}\right) + \underbrace{f^{\prime}(t)(1 - 1)}_{=0} \right) - \gamma_1 + \gamma_2 \stackrel{!}{=} 0 \tag{19}
+$$
+
+$$
+\nabla_ {\gamma_ {1}} \mathcal {L} (t, \gamma) = t - x _ {l b} \stackrel {!} {=} 0 \tag {20}
+$$
+
+$$
+\nabla_ {\gamma_ {2}} \mathcal {L} (t, \gamma) = t - t _ {c r i t} \stackrel {!} {=} 0 \tag {21}
+$$
+
+As $x_{lb} < t_{crit} < x_{ub}$ , at most one of the two constraints can be active at any time. This yields three cases:
+
+Case 1: neither constraint is active, $\gamma_{1} = \gamma_{2} = 0$
+
+$$
+\nabla_{t}\mathcal{L}(t,\gamma) = \underbrace{f^{\prime\prime}(t)}_{\substack{\geq 0 \text{ convex} \\ > 0 \text{ strictly convex}}} (t - x_{ub}) = 0
+$$
+
+$$
+t_1 = x_{ub} \quad \text{violates} \quad t \leq t_{crit}
+$$
+
+$$
+f^{\prime\prime}(t_2) = 0 \quad \Rightarrow \quad \text{saddle point}
+$$
+
+Case 2: $t = x_{lb}, \gamma_1 \neq 0, \gamma_2 = 0$
+
+$$
+\gamma_1 = \frac{1}{2} f^{\prime\prime}(x_{lb}) (x_{lb} - x_{ub})
+$$
+
+$$
+t_3 = x_{lb}, \quad \left. \nabla_t \mu(t) \right|_{t = x_{lb}} = \underbrace{f^{\prime\prime}(x_{lb})}_{\geq 0} \underbrace{(x_{lb} - x_{ub})}_{\leq 0} \leq 0 \quad \Rightarrow \quad \text{boundary maximum}
+$$
+
+Case 3: $t = t_{crit}, \gamma_1 = 0, \gamma_2 \neq 0$
+
+$$
+\gamma_2 = \frac{1}{2} f^{\prime\prime}(t_{crit}) (x_{ub} - t_{crit})
+$$
+
+$$
+t_4 = t_{crit}, \quad \left. \nabla_t \mu(t) \right|_{t = t_{crit}} = \underbrace{f^{\prime\prime}(t_{crit})}_{\geq 0} \underbrace{(t_{crit} - x_{ub})}_{\leq 0} \leq 0 \quad \Rightarrow \quad \text{boundary minimum}
+$$
+
+Analogously, equation 17 yields a boundary minimum at $t = t_{\text{crit}}$. Consequently, $t = t_{\text{crit}}$ yields the minimum area transformer for convex functions. $t_{\text{crit}}$ can be computed either analytically or numerically by solving equation 15, i.e., as the point where the local gradient is equal to the mean gradient over the whole interval. It can be observed that this yields the same slope as the minimum area transformer for the ReLU function (Singh et al., 2018), even though this derivation cannot be applied there directly due to the ReLU function's $C^1$ discontinuity.
+
+# H.3 EXPONENTIAL TRANSFORMER
+
+The exponential function has the property that its output is always strictly positive, which is important when this output is used as input to the logarithm function to compute the entropy. Therefore, a guarantee of positivity for the output zonotope is desirable. A constraint yielding such a guarantee can be obtained by inserting $\hat{x}_j = x_{lb}$, $\epsilon_{p + 1} = -\mathrm{sign}(\mu)$ and $\hat{y}_j \geq 0$ with $\lambda(t) = e^t$ into equation 11:
+
+$$
+0 \leq \lambda x_{lb} + \frac{1}{2} \left( f(t) - \lambda t + f\left(x_{ub}\right) - \lambda x_{ub} \right) - \frac{1}{2} \left( \lambda t - f(t) + f\left(x_{ub}\right) - \lambda x_{ub} \right)
+$$
+
+$$
+0 \leq \lambda \left(x_{lb} - t\right) + f(t)
+$$
+
+$$
+0 \leq e^{t} \left(x_{lb} - t + 1\right)
+$$
+
+$$
+t \leq 1 + x_{lb} \equiv t_{crit,2} \tag{22}
+$$
+
+This constitutes an additional upper limit $t_{\text{crit},2}$ on $t$. It is therefore sufficient to re-evaluate equation 16: either the new constraint is inactive for the solution computed previously (if $t_{\text{crit}} \leq t_{\text{crit},2}$), or the constraints of equation 17 become unsatisfiable, ensuring that it has no solutions. If a strictly positive output is required, a small delta can simply be subtracted from the upper limit $t_{\text{crit},2}$. It is easy to see that $t$ is now constrained to $[x_{lb}, \min(x_{ub}, t_{\text{crit},2})]$ and that the minimum area solution is obtained with $t_{\text{opt}} = \min(t_{\text{crit}}, t_{\text{crit},2})$. The critical points can be computed explicitly as $t_{\text{crit}} = \log\left(\frac{e^{x_{ub}} - e^{x_{lb}}}{x_{ub} - x_{lb}}\right)$ and $t_{\text{crit},2} = x_{lb} + 1$. Inserting these into equations 11 to 14 yields a positive, sound and viable transformer, visualized for different choices of $t$ in Figure 10.
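With the explicit formulas for $t_{crit}$ and $t_{crit,2}$, the resulting transformer can be sketched as follows (function name ours):

```python
import math

def exp_transformer(x_lb, x_ub):
    """Sound zonotope transformer for exp on [x_lb, x_ub] with a guaranteed
    non-negative output: t_opt = min(t_crit, t_crit2), with t_crit from
    Eq. 15 and t_crit2 = x_lb + 1 from Eq. 22 (sketch of Section H.3)."""
    t_crit = math.log((math.exp(x_ub) - math.exp(x_lb)) / (x_ub - x_lb))
    t = min(t_crit, x_lb + 1.0)              # positivity constraint (Eq. 22)
    lam = math.exp(t)                        # slope = tangent slope at t
    end = math.exp(x_ub) - lam * x_ub        # t <= t_crit: upper edge at x_ub
    xi = 0.5 * (math.exp(t) - lam * t + end)
    mu = 0.5 * (lam * t - math.exp(t) + end)
    return lam, xi, mu
```

On wide intervals the positivity constraint becomes active, and the lower edge of the parallelogram touches zero exactly at $x_{lb}$.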
+
+
+Figure 10: Illustration of the transformer for the exponential function for, from left to right: $t = x_{lb}$; the minimum area choice $t = t_{crit}$; $t = x_{ub}$; and the minimum area choice under strict positivity, $t = t_{crit,2}$.
+
+If there is no point in the input interval where the gradient of the function to be approximated is 0, as is always the case for the exponential function, the box transformer is not a viable zonotope transformer. However, the viable transformer with the smallest gradient, at $t = x_{lb}$, is strictly more precise than the box transformer (cf. Figure 10).
+
+# H.4 LOGARITHMIC TRANSFORMER
+
+The logarithmic transformer can be constructed by plugging $f(t) = -\log(t)$ and $f^{\prime}(t) = -\frac{1}{t}$ into equations 12 to 14, and their results into equation 11. Equation 15 can be solved to $t_{crit} = \frac{x_{lb} - x_{ub}}{\ln(x_{lb}) - \ln(x_{ub})}$. The resulting transformer is visualized in Figure 11. It becomes apparent that the choice of $\lambda$ can have a significant impact on the looseness of the obtained transformer.
+
+
+Figure 11: Illustration of the transformer for the logarithmic function for, from left to right: $t = x_{lb}$; minimum area, $t = t_{crit}$; and $t = x_{ub}$.
+
+As for the exponential transformer, the box transformer is not a viable logarithmic zonotope transformer, but the viable transformer with the smallest gradient, obtained at $t = x_{ub}$, is strictly more precise than the box transformer (cf. figure 11).
+
+# H.5 PRODUCT TRANSFORMER
+
+The pointwise or Hadamard product differs from the previously introduced transformers in that it involves two zonotopes instead of just one. For this derivation, the two one-dimensional zonotopes $\hat{x}$ and $\hat{z}$ with $p$ shared error terms and $k_{1}$ and $k_{2}$ individual error terms shall be considered. Typically, error terms will be shared up to a certain index (potentially 0) and all following error terms will be individual to one of the zonotopes. In any case, this form can be obtained by reordering the error terms and can therefore be assumed without loss of generality.
+
+$$
+\hat{x} = a_0 + \mathbf{A}_{ind}^{\top} \left[ \begin{array}{c} \epsilon_1 \\ \vdots \\ \epsilon_p \\ \epsilon_{p+1} \\ \vdots \\ \epsilon_{p+k_1} \end{array} \right], \quad \hat{z} = b_0 + \mathbf{B}_{ind}^{\top} \left[ \begin{array}{c} \epsilon_1 \\ \vdots \\ \epsilon_p \\ \epsilon_{p+k_1+1} \\ \vdots \\ \epsilon_{p+k_1+k_2} \end{array} \right], \quad a_0, b_0 \in \mathbb{R},\; \mathbf{A}_{ind} \in \mathbb{R}^{p+k_1},\; \mathbf{B}_{ind} \in \mathbb{R}^{p+k_2}
+$$
+
+A shared error vector $\epsilon \in [-1, 1]^q$ with $q = p + k_{1} + k_{2}$ error terms can be obtained by concatenating the individual error terms of the second zonotope $\hat{z}$ to the error vector of the first and padding the error coefficient matrices correspondingly with zeros:
+
+$$
+\hat{x} = a_0 + \underbrace{\left[ \begin{array}{c} \mathbf{A}_{ind} \\ 0 \\ \vdots \\ 0 \end{array} \right]}_{\mathbf{A}}^{\top} \underbrace{\left[ \begin{array}{c} \epsilon_1 \\ \vdots \\ \epsilon_{p+k_1} \\ \epsilon_{p+k_1+1} \\ \vdots \\ \epsilon_q \end{array} \right]}_{\epsilon}, \quad a_0 \in \mathbb{R},\; \mathbf{A} \in \mathbb{R}^q,\; \epsilon \in [-1, 1]^q
+$$
+
+$$
+\hat{z} = b_0 + \underbrace{\left[ \begin{array}{c} b_1 \\ \vdots \\ b_p \\ 0 \\ \vdots \\ 0 \\ b_{p+1} \\ \vdots \\ b_{p+k_2} \end{array} \right]}_{\mathbf{B}}^{\top} \underbrace{\left[ \begin{array}{c} \epsilon_1 \\ \vdots \\ \epsilon_p \\ \epsilon_{p+1} \\ \vdots \\ \epsilon_{p+k_1} \\ \epsilon_{p+k_1+1} \\ \vdots \\ \epsilon_q \end{array} \right]}_{\epsilon}, \quad b_0 \in \mathbb{R},\; \mathbf{B} \in \mathbb{R}^q,\; \epsilon \in [-1, 1]^q
+$$
+
+Now the Hadamard product can be written as
+
+$$
+\hat{y}^{\prime} = \hat{x} \odot \hat{z} = \underbrace{a_0 b_0}_{c_0^{\prime}} + \underbrace{(a_0 \mathbf{B} + b_0 \mathbf{A})}_{\mathbf{C}^{\prime}} \epsilon + \underbrace{\epsilon^{\top} \mathbf{A} \mathbf{B}^{\top} \epsilon}_{(*)} \tag{23}
+$$
+
+$$
+(*) = \sum_{i} a_i b_i \underbrace{\epsilon_i^2}_{\in [0, 1]} + \sum_{i} \sum_{j=i+1}^{q} \left( a_i b_j + a_j b_i \right) \underbrace{\epsilon_i \epsilon_j}_{\in [-1, 1]} \tag{24}
+$$
+
+To bring $\hat{y}^\prime$ into zonotope form $\hat{y}$, the quadratic term in equation 24 has to be approximated by adding a new error term $\epsilon_{q+1}$ with error coefficient $c_{q+1}$ and a constant offset $c_0^{\prime\prime}$:
+
+$$
+c_{q+1} = \frac{1}{2} \sum_{i} \left| a_i b_i \right| + \sum_{i} \sum_{j=i+1}^{q} \left| a_i b_j + a_j b_i \right| \tag{25}
+$$
+
+$$
+c_0^{\prime\prime} = \frac{1}{2} \sum_{i} a_i b_i \tag{26}
+$$
+
+$$
+\hat{y} = \underbrace{\left( c_0^{\prime} + c_0^{\prime\prime} \right)}_{c_0} + \underbrace{\left[ \begin{array}{ll} \mathbf{C}^{\prime} & c_{q+1} \end{array} \right]}_{\mathbf{C}} \underbrace{\left[ \begin{array}{c} \epsilon \\ \epsilon_{q+1} \end{array} \right]}_{\epsilon_{\text{new}}} \tag{27}
+$$
+
+Unfortunately, evaluating equations 25 and 26 is quadratic in the number of error terms, in time and, when using a matrix formulation that exploits GPU vector operations, also in space. When the number of error terms is too high and the transformer described above becomes infeasible, a switch to the box transformer is possible:
+
+$$
+y_{lb} = \min \left( x_{lb} z_{lb},\; x_{lb} z_{ub},\; x_{ub} z_{lb},\; x_{ub} z_{ub} \right) \tag{28}
+$$
+
+$$
+y_{ub} = \max \left( x_{lb} z_{lb},\; x_{lb} z_{ub},\; x_{ub} z_{lb},\; x_{ub} z_{ub} \right) \tag{29}
+$$
+
+$$
+\hat{y} = \frac{y_{ub} + y_{lb}}{2} + \frac{y_{ub} - y_{lb}}{2} \epsilon_{\text{new}}, \quad \epsilon_{\text{new}} \in [-1, 1]^{1} \tag{30}
+$$
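+A minimal sketch of this low-memory fallback, with the four corner products spelled out (names are illustrative, not from the text):
+
+```python
+import numpy as np
+
+def box_product(x_lb, x_ub, z_lb, z_ub):
+    """Box transformer for the Hadamard product (equations 28 to 30):
+    interval multiplication over the four corner products, returning the
+    centre and the coefficient of a single fresh error term."""
+    corners = np.stack([x_lb * z_lb, x_lb * z_ub, x_ub * z_lb, x_ub * z_ub])
+    y_lb = corners.min(axis=0)
+    y_ub = corners.max(axis=0)
+    return (y_ub + y_lb) / 2, (y_ub - y_lb) / 2
+```
+
+For example, $x \in [1, 2]$ and $z \in [-1, 3]$ give corner products $\{-1, 3, -2, 6\}$, hence $y \in [-2, 6]$ with centre 2 and error coefficient 4.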
+
+# H.6 ENTROPY TRANSFORMER
+
+Based on these elementary transformers, the entropy transformer can be assembled by chaining transformers for the individual component functions according to equation 5, which is reproduced below for convenience.
+
+$$
+\begin{array}{l}
+H(\boldsymbol{y}) = - \sum_{j} \frac{e^{y_j}}{\sum_{i} e^{y_i}} \log \left( \frac{e^{y_j}}{\sum_{i} e^{y_i}} \right) \\
+= - \sum_{j} \frac{e^{y_j}}{\sum_{i} e^{y_i}} \left( y_j - \log \left( \sum_{i} e^{y_i} \right) \right) \\
+= \log \left( \sum_{i} e^{y_i} \right) - \sum_{j} y_j \frac{e^{y_j}}{\sum_{i} e^{y_i}} \\
+= \log \left( \sum_{i} e^{y_i} \right) - \sum_{j} y_j \exp \left( y_j - \log \left( \sum_{i} e^{y_i} \right) \right) \\
+= c + \log \left( \sum_{i} e^{y_i - c} \right) - \sum_{j} y_j \exp \left( y_j - c - \log \left( \sum_{i} e^{y_i - c} \right) \right)
+\end{array}
+$$
+
+The second term requires four transformers (product, exponential, logarithmic, exponential), adding $3n_{class} + 1$ error terms ( $n_{class}, n_{class}, 1, n_{class}$ ) to the output. Since the log-sum-exp term has to be computed only once, the first term does not add any additional error terms, while still increasing the corresponding error coefficients.
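+For concrete inputs (rather than zonotopes), the last line of the decomposition above can be evaluated directly; a small sketch, with the shift $c$ chosen as the maximum input for numerical stability:
+
+```python
+import numpy as np
+
+def entropy_via_lse(y):
+    """Softmax entropy via the shifted log-sum-exp decomposition:
+    H(y) = lse(y) - sum_j y_j * exp(y_j - lse(y)), where
+    lse(y) = c + log sum_i e^{y_i - c} with c = max(y)."""
+    c = np.max(y)
+    lse = c + np.log(np.sum(np.exp(y - c)))   # log sum_i e^{y_i}
+    return lse - np.sum(y * np.exp(y - lse))
+```
+
+Uniform inputs give the maximal entropy $\log n_{class}$, while a strongly peaked input gives entropy close to 0.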
+
+# H.6.1 TIGHTNESS
+
+The tightness of the entropy transformer was evaluated in comparison to a box transformer and an upper bound obtained from an optimization approach. To compute the looseness, random input zonotopes with fully populated error coefficient matrices drawn from $\mathcal{N}(0,\sigma_{\epsilon}^{2})$ and centre coefficients drawn from $\mathcal{N}(1,3^2)$ were created and propagated through an entropy transformer; the looseness was then computed as the difference between the upper and lower bounds. When an input box was required, the zonotopes were converted to a box representation with the same bounds. The mean over 50 samples is reported. The following five transformer versions were considered:
+
+- ZonoIter - The zonotope transformer obtained by chaining the previously introduced transformers and optimizing the slopes $\lambda$ of all transformers sample-wise to minimize looseness.
+- ZonoIterLM - As ZonoIter, but using the box transformer for the product to obtain a low memory requirement transformer.
+- Zono - As ZonoIter, but using minimum area slopes instead.
+- ZonoLM - As Zono, but using the box transformer for the product to obtain a low memory requirement transformer.
+- Box - The transformer obtained by using interval arithmetic to propagate bounds.
+
+An analysis of the looseness over the input error size, illustrated in figure 12, shows that while the box transformer is clearly the least precise over the whole domain, different errors dominate the behaviour of the various zonotope transformers in different regimes.
+
+The high looseness of Zono and ZonoLM at large input errors suggests that minimum-area slopes are not ideal in this regime, while the small penalty for switching to the low memory versions indicates that the error terms incurred from the product transformer are small compared to the log and exp contributions. Reducing the input errors flips this behaviour: the product error then outweighs the differences between minimum-area and optimized slopes by a significant margin.
+
+
+Figure 12: Comparison of the looseness of various versions of the entropy transformer over the standard deviation $\sigma_{\epsilon}$ of the entries of the input zonotope error coefficient matrix, drawn from the distribution $\mathcal{N}(0, \sigma_{\epsilon}^{2})$. Adv is a lower bound on the optimal looseness, obtained by adversarially attacking the input region described by the input zonotope.
\ No newline at end of file
diff --git a/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/images.zip b/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..301c8cdb9a90378cb1951cacb4ba51012740ebdd
--- /dev/null
+++ b/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b76cf9dac1c9dcdf1e84ceea0919e05389baee55757480672e6029ecf6de4e9d
+size 712401
diff --git a/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/layout.json b/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e7bc639dba2c7da6b00656abbe9dbf5b1bbda6a6
--- /dev/null
+++ b/certifyorpredictboostingcertifiedrobustnesswithcompositionalarchitectures/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc38dc89da93d57fd6d8c427188986c891538de03fb887bb109bd66a47a0992b
+size 666877
diff --git a/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/b256e912-9b3f-47a2-a383-a236a9a53041_content_list.json b/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/b256e912-9b3f-47a2-a383-a236a9a53041_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..bd538065f156066cdbedba512499214ef1c9aa36
--- /dev/null
+++ b/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/b256e912-9b3f-47a2-a383-a236a9a53041_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:79d9fbdbb91fc5c00933127ad8e702fe2981a882dbf87dec125e7b21b3cbb6ad
+size 174418
diff --git a/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/b256e912-9b3f-47a2-a383-a236a9a53041_model.json b/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/b256e912-9b3f-47a2-a383-a236a9a53041_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7940ce2b355bf1c308a1cb0d1749190ea3a851f9
--- /dev/null
+++ b/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/b256e912-9b3f-47a2-a383-a236a9a53041_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18ae4cccbdedac55c6a213d4246caea3f169ef0a3e868deef7ccc11f7483f643
+size 200879
diff --git a/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/b256e912-9b3f-47a2-a383-a236a9a53041_origin.pdf b/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/b256e912-9b3f-47a2-a383-a236a9a53041_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e0bd13902848b3a126e6bb9b377c1cee772d953a
--- /dev/null
+++ b/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/b256e912-9b3f-47a2-a383-a236a9a53041_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85fafcf92a971d8c767b0770d66e8c5636e31732560fd2b536e35f2db1ada3ef
+size 536190
diff --git a/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/full.md b/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c8eb07ea3df2bec98d7890db971a0d3965f621ed
--- /dev/null
+++ b/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/full.md
@@ -0,0 +1,738 @@
+# CHAOS OF LEARNING BEYOND ZERO-SUM AND COORDINATION VIA GAME DECOMPOSITIONS
+
+Yun Kuen Cheung
+
+Department of Computer Science Royal Holloway University of London Egham, UK
+
+yunkuen.cheung@rhul.ac.uk
+
+Yixin Tao
+
+Department of Mathematics London School of Economics London, UK
+
+Y.Tao16@lse.ac.uk
+
+# ABSTRACT
+
+It is of primary interest for Machine Learning to understand how agents learn and interact dynamically in competitive environments and games (e.g. GANs). But this has been a difficult task, as irregular behaviors are commonly observed in such systems. This can be explained theoretically, for instance, by the works of Cheung & Piliouras (2019; 2020), which showed that in two-person zero-sum games, if agents employ one of the most well-known learning algorithms, Multiplicative Weights Update (MWU), then Lyapunov chaos occurs everywhere in the cumulative payoff space. In this paper, we study how persistent chaos can occur in the general normal-form game settings, where the agents might have the motivation to coordinate (which is not true for zero-sum games) and the number of agents can be arbitrary.
+
+We characterize bimatrix games where MWU, its optimistic variant (OMWU) or Follow-the-Regularized-Leader (FTRL) algorithms are Lyapunov chaotic almost everywhere in the cumulative payoff space. Our characterizations are derived by extending the volume-expansion argument of Cheung & Piliouras via the canonical game decomposition into zero-sum and coordination components. Interestingly, the two components induce opposite volume-changing behaviors, so the overall behavior can be analyzed by comparing the strengths of the components against each other. The comparison is done via our new notion of "matrix domination" or via a linear program. For multi-player games, we present a local equivalence of volume change between general games and graphical games, which is used for volume and chaos analyses of MWU and OMWU in potential games.
+
+# 1 INTRODUCTION
+
+In Machine Learning (ML), it is of primary interest to understand how agents learn in competitive environments. This is more strongly propelled recently due to the success of Generative Adversarial Networks (GANs), which can be viewed as two neural networks playing a zero-sum game. As such, Evolutionary Game Theory (EGT) (Hofbauer & Sigmund (1998); Sandholm (2010)), a decades-old area devoted to the study of adaptive (learning) behaviors of agents in competitive environments arising from Economics, Biology, and Physics, has drawn attention from the ML community. In contrast with the typical optimization (or no-regret) approach in ML, EGT provides a dynamical-systemic perspective to understand ML processes, which has already provided new insights into a number of ML-related problems. This perspective is particularly helpful in studying "learning in games", where irregular behaviors are commonly observed, but the ML community currently lacks a rigorous method to analyze such systems. In this paper, we study Lyapunov chaos, a central notion that captures instability and unpredictability in dynamical systems. We characterize general normal-form games where popular learning algorithms exhibit chaotic behaviors.
+
+Lyapunov chaos captures the butterfly effect: when the starting point of a dynamical system is slightly perturbed, the resulting trajectories and final outcomes diverge quickly; see Definition 1 for a formal definition. The perturbations correspond to round-off errors of numerical algorithms in ML (and Computer Science in general). While significant efforts have been spent in analyzing and minimizing round-off effects of floating-point computations (Demmel (1997)), they are unavoidable in general. As round-offs are inevitable, and the round-off schemes can vary from machine to machine due to various hardware and software factors, we surely want to avoid chaotic learning that does not fulfill our primary goals in building predictable and reproducible learning systems. Such issues are exemplified by a quote from Ali Rahimi's NIPS'2017 test-of-time award speech:
+
+"Someone on another team changed the default rounding mode of some Tensorflow internals from 'truncate toward zero' to 'round to even'. Our training broke, our error rate went from less than $25\%$ error to $\sim 99.97\%$ error."
+
+To avoid chaotic learning, we first need to understand how it can arise. This is the main motivation of our work. Recently, in the context of "learning in games", Cheung & Piliouras (2019; 2020) presented interesting theoretical analyses to show that in two-person zero-sum and graphical constant-sum games, if the agents employ Multiplicative Weights Update (MWU) or Follow-the-Regularized-Leader (FTRL) algorithms, then Lyapunov chaos occurs everywhere in the cumulative payoff space; the same result holds for the optimistic variant of MWU (OMWU) in coordination games. While zero-sum and coordination games are interesting in their own right, they are rather small subspaces within the family of general normal-form games. In this paper, we present techniques and tools for characterizing general games where MWU, FTRL, or OMWU are Lyapunov chaotic almost everywhere. Next, we give an overview of our contributions and a discussion of related work.
+
+Our Contributions. To show the results about chaos mentioned above, Cheung & Piliouras (2019; 2020) used a classical technique in the study of dynamical systems called volume analysis. Volume analysis considers a set of starting points of positive Lebesgue measure, e.g. a ball centred at a point; volume is an alternative name for Lebesgue measure in this context. When this set of starting points evolves according to the rule of the dynamical system, it becomes a new set with a different volume. Intuitively, volume is a measure of the range of possible outcomes, so the larger it is, the more unpredictable the system is. If the set's volume increases exponentially with time, then its diameter increases exponentially too, which implies Lyapunov chaos. This indicates that when players repeatedly play the games by employing the respective learning algorithms, a slight perturbation of the initial condition can lead to a wide range of possible cumulative payoffs in the long run. This can be shown to imply instability in the mixed strategy space.
+
+The technical starting point of Cheung & Piliouras is to show that when all agents in a bimatrix game $\mathbf{G}$ use MWU with step-size $\epsilon$ , and when a set $S$ in the cumulative payoff space evolves for one time step, its volume change can be expressed as $\epsilon^2 \int_S C_{\mathbf{G}}(s) ds + \mathcal{O}(\epsilon^4)$ , where $C_{\mathbf{G}}$ is a function that depends on the game $\mathbf{G}$ , which we will define in Section 2. Clearly, the sign of $C_{\mathbf{G}}$ dictates the volume change behaviors for all small enough $\epsilon$ . Cheung & Piliouras showed that $C_{\mathbf{G}}$ is a positive function when $\mathbf{G}$ is a two-person zero-sum game. For a large region in the cumulative payoff space, this implies the volume change per time step is $\Omega(\epsilon^2) \cdot \text{volume}(S)$ , i.e. volume expands exponentially. They also showed that $C_{\mathbf{G}}$ is a negative function if $\mathbf{G}$ is a two-person coordination game.
+
+To extend their volume expansion results to general bimatrix games, we first discover that $C_{\mathbf{G}}$ admits a clean decoupling w.r.t. the canonical decomposition of such games into zero-sum and coordination components (Basar & Ho (1974); Kalai & Kalai). Precisely, given any two-person general game $(\mathbf{A}, \mathbf{B})$ , it can be written uniquely as a direct sum of a zero-sum game $(\mathbf{Z}, -\mathbf{Z})$ and a coordination game $(\mathbf{C}, \mathbf{C})$ , where $\mathbf{Z} = (\mathbf{A} - \mathbf{B}) / 2$ and $\mathbf{C} = (\mathbf{A} + \mathbf{B}) / 2$ . Interestingly, we find that $C_{\mathbf{G}}(\cdot) = C_{(\mathbf{Z}, -\mathbf{Z})}(\cdot) + C_{(\mathbf{C}, \mathbf{C})}(\cdot)$ in Lemma 6. Recall from the last paragraph that $C_{(\mathbf{Z}, -\mathbf{Z})}(\cdot)$ is always positive, while $C_{(\mathbf{C}, \mathbf{C})}(\cdot)$ is always negative. Thus, to see whether volume expansion occurs, it boils down to comparing the strengths of $C_{(\mathbf{Z}, -\mathbf{Z})}(\cdot)$ and $-C_{(\mathbf{C}, \mathbf{C})}(\cdot)$ .
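+The decomposition itself is immediate to compute; a minimal sketch (the example matrices in the usage note are ours, not from the text):
+
+```python
+import numpy as np
+
+def decompose(A, B):
+    """Canonical decomposition of a bimatrix game (A, B) into its
+    zero-sum component (Z, -Z) and coordination component (C, C),
+    with Z = (A - B)/2 and C = (A + B)/2."""
+    Z = (A - B) / 2
+    C = (A + B) / 2
+    return Z, C
+```
+
+By construction $\mathbf{A} = \mathbf{Z} + \mathbf{C}$ and $\mathbf{B} = -\mathbf{Z} + \mathbf{C}$; a zero-sum game ($\mathbf{B} = -\mathbf{A}$) has a vanishing coordination component, and a coordination game ($\mathbf{B} = \mathbf{A}$) a vanishing zero-sum component.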
+
+We also discover that the function $C_{\mathbf{G}}$ is invariant upon additions of trivial matrices to the game $\mathbf{G}$ ; see Definition 7 and Lemma 8. An immediate application of trivial matrices is to bimatrix potential games (Monderer & Shapley (1996)), which we show can be transformed into a coordination game via additions of trivial matrices. Together with the result in Cheung & Piliouras (2020), this immediately implies that OMWU in any bimatrix potential game is Lyapunov chaotic everywhere in the cumulative payoff space (Observation 10).
+
+Based on the above discoveries, we identify two characterizations of bimatrix games where MWU and FTRL are Lyapunov chaotic almost everywhere (Theorems 15 and 17). As said before, the key is to compare the strengths of the $C$ -functions of the zero-sum and the coordination components. The comparison is done via our new notion of matrix domination (see Definition 11), and also a linear program (Eqn. (7)) which is designed to prune out the trivial-matrix projection and keep the remaining part minimal. This family of games has positive Lebesgue measure in the bimatrix game space, so it is not confined to any proper game subspace. This justifies the claim that the occurrences of chaos are not only circumstantial, but rather a substantial issue in learning in games. An analogous result holds for OMWU.
+
+For games with any number of players (multi-player games), we use an observation in Cheung & Piliouras (2019), coupled with our new findings about bimatrix games discussed above, to present a new family of graphical games where MWU is Lyapunov chaotic almost everywhere (Theorem 18); the new family strictly includes all graphical constant-sum games. To facilitate volume analyses of learning in multi-player games, we establish their local equivalence of volume change with graphical games. Briefly, we show that $C_{\mathbf{G}}(\mathbf{p})$ for a general game $\mathbf{G}$ is the same as $C_{\mathbf{H}}(\mathbf{p})$ for some graphical game $\mathbf{H}$ ; $\mathbf{H}$ depends on the point $\mathbf{p}$ , which is why we say the equivalence is local (Theorem 19). This provides an intuitive procedure for understanding volume changes, as the volume change of learning in a graphical game is easier to compute. This is used to show that the volume-changing behaviors of MWU and OMWU are opposite to each other. We use these to analyze MWU and OMWU in multi-player potential games; in particular, we show that $C_{\mathbf{G}}(\mathbf{p})$ of a multi-player potential game $\mathbf{G}$ is identical to $C_{\mathbf{C}}(\mathbf{p})$ of a corresponding multi-player graphical coordination game, while $C_{\mathbf{C}}(\mathbf{p}) \leq 0$ for any $\mathbf{p}$ (Proposition 21).
+
+Related Work. MWU and its variant, such as FTRL and Optimistic MWU, play important roles in online learning. We recommend the texts of Cesa-Bianchi & Lugosi (2006) and Hart & Mas-Colell (2013) for a modern overview of online learning from Machine Learning or Economics perspectives.
+
+Recently, there has been a stream of works that examine how learning algorithms behave in games or min-max optimization from a dynamical-systemic perspective. This provides new insights into learning systems that could hardly be obtained with classical tools in ML. For instance, some learning systems are shown to be nearly-periodic under the notion of Poincaré recurrence (Piliouras & Shamma (2014); Mertikopoulos et al. (2018)). Using potential functions first proposed in EGT and other tools from mathematics, surprising behaviors of first-order methods in zero-sum games and min-max optimization were discovered (Daskalakis & Panageas (2018; 2019); Bailey & Piliouras (2018); Cheung (2018)). In Appendix A, we give an account of some further related work.
+
+Empirical evidence of Lyapunov chaos of learning in games was reported by Sato et al. (2002) and Galla & Farmer (2013). Li-Yorke chaos, another classical chaos notion, was proved to occur in several learning-in-game systems (Palaiopanos et al. (2017); Chotibut et al. (2020)).
+
+Volume analysis has long been a technique of interest in the study of population and game dynamics. It was discussed in a number of famous texts; see (Hofbauer & Sigmund, 1998, Section 11), (Fudenberg & Levine, 1998, Section 3) and (Sandholm, 2010, Chapter 9).
+
+We use game decomposition in this paper, which is a natural and generic approach to extend compelling results for a specific family of games to more general games. Let $\mathcal{H}$ denote the specific family of games with some compelling properties. Given a general game, we seek to decompose it into a sum of its projection on $\mathcal{H}$ , plus one or more residue components. If the residues are small, then it is plausible that those compelling properties extend (approximately). In the search for games where learning is stable, game decompositions were used (Candogan et al. (2011; 2013a;b); Letcher et al. (2019)) with $\mathcal{H}$ being potential games.
+
+# 2 PRELIMINARY
+
+In this paper, every bold lower-case alphabet denotes a vector, and every bold upper-case alphabet denotes a matrix or a game. When we say a "game", we always mean a normal-form game. Given $n$ , let $\Delta^n$ denote the mixed strategy space of dimension $n - 1$ , i.e. $\{(z_1, z_2, \dots, z_n) \mid \sum_{j=1}^{n} z_j = 1,\; z_j \geq 0\}$ .
+
+Normal-Form Games. Let $N$ denote the number of players in a game. Let $S_i$ denote the strategy set of Player $i$ , and $S := S_1 \times \dots \times S_N$ . Let $n_i = |S_i|$ . $\mathbf{s} = (s_1, \dots, s_N) \in S$ denotes a strategy profile of all players, and $u_{i}(\mathbf{s})$ denotes the payoff to Player $i$ when each player picks $s_i$ . A mixed strategy profile is denoted by $\mathbf{x} = (\mathbf{x}_1,\dots ,\mathbf{x}_N)\in \Delta^{n_1}\times \dots \times \Delta^{n_N}$ , and $u_{i}$ is extended to take mixed strategies as inputs via $u_{i}(\mathbf{x}) = \mathbb{E}_{\mathbf{s}\sim \mathbf{x}}[u_{i}(\mathbf{s})]$ . We let $-(i_1,\dots ,i_g)$ denote the player set other than players $i_1,\dots ,i_g$ . We also let
+
+$$
+U_{j_1 j_2 \dots j_g}^{i_1 i_2 \dots i_g}(\mathbf{x}) = \mathbb{E}_{\mathbf{s}_{-(i_1, \dots, i_g)} \sim \mathbf{x}_{-(i_1, \dots, i_g)}} \left[ u_{i_1} \left( s_{i_1} = j_1, \dots, s_{i_g} = j_g, \mathbf{s}_{-(i_1, \dots, i_g)} \right) \right], \tag{1}
+$$
+
+which is the expected payoff to Player $i_1$ when: for $1 \leq f \leq g$ , Player $i_f$ picks strategy $j_f$ , while for each player $i \notin \{i_1, \dots, i_g\}$ , she picks a strategy randomly following $\mathbf{x}_i$ . We also use $U_{j_1 j_2 \dots j_g}^{i_1 i_2 \dots i_g}$ if $\mathbf{x}$ is clear from the context. We say a game is a zero-sum game if $\sum_i u_i(\mathbf{s}) = 0$ for all $\mathbf{s} \in S$ , and we say a game is a coordination game if $u_i(\mathbf{s}) = u_k(\mathbf{s})$ for all Players $i$ and $k$ and for all $\mathbf{s} \in S$ .
+
+When $N = 2$ , such games are called bimatrix games, for which we adopt the notations below. Let $(\mathbf{A},\mathbf{B})$ denote a bimatrix game, where for any $j\in S_1$ , $k\in S_2$ , $A_{jk}\coloneqq u_1(j,k)$ , $B_{jk}\coloneqq u_2(j,k)$ . $\mathbf{x}$ and $\mathbf{y}$ denote mixed strategies of Players 1 and 2 respectively. A bimatrix game is a zero-sum game if $\mathbf{A} = -\mathbf{B}$ ; it is a coordination game if $\mathbf{A} = \mathbf{B}$ . Note that $U_j^1 = [\mathbf{A}\mathbf{y}]_j$ , $U_k^2 = [\mathbf{B}^\top \mathbf{x}]_k$ , which we denote by $A_{j},B_{k}$ respectively when $\mathbf{x},\mathbf{y}$ are clear from context; $B_{j},A_{k}$ are defined analogously.
+
+MWU, FTRL and OMWU in Games. All three algorithms have a step-size $\epsilon$ , and can be implemented as updating in the cumulative payoff (dual) space. In each round, the players' actions (mixed strategies) in the strategy space are functions of the cumulative payoff vectors to be defined below, and these actions are then used to determine the payoffs in the next round. For a player with $d$ strategies, let $\mathbf{p}^t \in \mathbb{R}^d$ denote her cumulative payoff vector at time $t$ , and let $\mathbf{p}^0 \in \mathbb{R}^d$ denote the starting point chosen by the player. For MWU in a game, the update rule for Player $i$ is
+
+$$
+p_j^{t+1} = p_j^t + \epsilon \cdot U_j^i(\mathbf{x}^t), \tag{2}
+$$
+
+where $U_{j}^{i}$ is the function defined in Eqn. (1), and $\mathbf{x}^t$ is the mixed strategy as below:
+
+$$
+x_j^t = x_j(\mathbf{p}^t) = \exp(p_j^t) \Big/ \left( \sum_{\ell \in S_i} \exp(p_\ell^t) \right) \tag{3}
+$$
+
+For OMWU in a game, the update rule for Player $i$ starts with $\mathbf{p}^1 = \mathbf{p}^0$ , and for $t \geq 2$ , $p_j^{t+1} = p_j^t + \epsilon \cdot \left[2U_j^i(\mathbf{x}^t) - U_j^i(\mathbf{x}^{t-1})\right]$ , where $\mathbf{x}^t$ is determined by Eqn. (3).
+
+For FTRL in a game, the update rule for Player $i$ is the same as Eqn. (2), but $\mathbf{x}^t$ is determined as below using a convex regularizer function $h_i: \Delta^d \to \mathbb{R}$ : $\mathbf{x}^t = \arg \max_{\mathbf{x} \in \Delta^d} \{\langle \mathbf{p}^t, \mathbf{x} \rangle - h_i(\mathbf{x})\}$ . As all the results for MWU can be directly generalized to FTRL, as discussed in (Cheung & Piliouras, 2019, Appendix D), to keep our exposition simple, in the rest of this paper we focus on MWU and OMWU, and their comparisons. For bimatrix games, we use $\mathbf{p}, \mathbf{q}$ to denote the cumulative payoff vectors of Players 1 and 2 respectively.
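+A minimal sketch of one MWU step for a bimatrix game in the cumulative payoff (dual) space, following Eqns. (2) and (3) (a toy illustration under our own naming, not the authors' code):
+
+```python
+import numpy as np
+
+def softmax(p):
+    """Eqn. (3): the mixed strategy induced by a cumulative payoff
+    vector (shifted by the maximum for numerical stability)."""
+    e = np.exp(p - p.max())
+    return e / e.sum()
+
+def mwu_step(p, q, A, B, eps):
+    """Eqn. (2) for both players of the bimatrix game (A, B):
+    U^1 = A y for Player 1 and U^2 = B^T x for Player 2."""
+    x, y = softmax(p), softmax(q)
+    return p + eps * (A @ y), q + eps * (B.T @ x)
+```
+
+For instance, in matching pennies ($\mathbf{B} = -\mathbf{A}$), uniform play yields zero expected payoffs, so the cumulative payoff vectors stay put for one step.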
+
+Dynamical Systems, Lyapunov Chaos and Volume Analysis. A learning-in-game system can be viewed as a discrete-time dynamical system, for which we present a simplified definition which suits our need. A discrete-time dynamical system in $\mathbb{R}^d$ is determined by a starting point $\mathbf{r}(0) \in \mathbb{R}^d$ and an update rule $\mathbf{r}(t+1) = f(\mathbf{r}(t))$ , where $f: \mathbb{R}^d \to \mathbb{R}^d$ is a function. The sequence $\mathbf{r}(0), \mathbf{r}(1), \mathbf{r}(2), \dots$ is called a trajectory of the dynamical system. When $f$ is clear from the context, we let $\Phi: (\mathbb{N} \cup \{0\}) \times \mathbb{R}^d \to \mathbb{R}^d$ denote the function such that $\Phi(t, \mathbf{r})$ is the value of $\mathbf{r}(t)$ generated by the dynamical system with starting point set to $\mathbf{r}$ . Given a set $\mathcal{U} \subset \mathbb{R}^d$ , we let $\Phi(t, \mathcal{U}) = \{\Phi(t, \mathbf{r}) | \mathbf{r} \in \mathcal{U}\}$ . Let $\mathcal{B}(\mathbf{r}, z)$ denote the open ball with center $\mathbf{r}$ and radius $z$ .
+
+There are several similar but not identical definitions of Lyapunov chaos, all capturing the **butterfly effect:** when the starting point is slightly perturbed, the resulting trajectories diverge quickly. We use the following definition, which was also used by Cheung & Piliouras (2019; 2020) implicitly. Intuitively, a system is Lyapunov chaotic in an open set $\mathcal{O} \subset \mathbb{R}^d$ if for any $\mathbf{r} \in \mathcal{O}$ and any open ball $B$ around $\mathbf{r}$ , as long as $\Phi(t, B)$ remains inside $\mathcal{O}$ , there exists $\mathbf{r}' \in B$ such that $\|\Phi(t, \mathbf{r}') - \Phi(t, \mathbf{r})\|$ grows exponentially with $t$ . Lyapunov exponent in the definition below is a measure of how fast the exponential growth is; the larger it is, the more unpredictable the dynamical system is.
+
+Definition 1. A dynamical system is Lyapunov chaotic in an open set $\mathcal{O} \subset \mathbb{R}^d$ if there exist a constant $\lambda > 0$ and a Lyapunov exponent $\gamma \equiv \gamma(\mathcal{O}) > 0$ , such that for any $\mathbf{r} \in \mathcal{O}$ , for any sufficiently small $\delta > 0$ and for all $t$ satisfying $0 \leq t < \min\{\tau \mid \tau \geq 0, \Phi(\tau, \mathcal{B}(\mathbf{r}, \delta)) \not\subseteq \mathcal{O}\}$ ,
+
+$$
+\sup _ {\mathbf {r} ^ {\prime} \in \mathcal {B} (\mathbf {r}, \delta)} \| \Phi (t, \mathbf {r} ^ {\prime}) - \Phi (t, \mathbf {r}) \| \geq \lambda \cdot \delta \cdot \exp (\gamma t).
+$$
+
+Definition 2. A dynamical system is Lyapunov chaotic everywhere if it is Lyapunov chaotic in any bounded open subset of $\mathbb{R}^d$ .
+
+In the above definitions, all norms and radii are Euclidean. For capturing round-off errors in computer algorithms and ML systems, it is more natural to use the $\ell_{\infty}$ -norm, with $\delta$ being the maximum round-off error per step, say $\sim 10^{-16}$ when IEEE 754 binary64 (standard double) is used.
+
+When $\mathcal{O}$ is a small set, it is easy to determine whether a dynamical system is Lyapunov chaotic in $\mathcal{O}$ , since the dynamics can be locally approximated by a linear dynamical system, where the eigenvalues of the local Jacobian characterize chaotic behaviors. But when $\mathcal{O}$ is large, determining whether Lyapunov chaos occurs is difficult in general. Cheung & Piliouras (2019) found that volume analysis can be useful in this regard, based on the following simple observation.
+
+Proposition 3. In $\mathbb{R}^d$ , if a set $\mathcal{U}$ has volume at least $v$ , then its radius w.r.t. any point $\mathbf{r} \in \mathcal{U}$ is at least $v^{1/d}/2$ . Thus, if the volume of $\Phi(t,\mathcal{U})$ of some dynamical system is $\Omega(\exp(\gamma t))$ for some $\gamma > 0$ , then the radius of $\Phi(t,\mathcal{U})$ w.r.t. any point $\mathbf{r} \in \Phi(t,\mathcal{U})$ is $\Omega(\exp(\frac{\gamma}{d} \cdot t))$ .
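The first claim of Proposition 3 follows from an enclosing-cube argument; a one-line sketch, with $z$ the radius of $\mathcal{U}$ w.r.t. $\mathbf{r}$:

$$
\mathcal{U} \subseteq \mathcal{B}(\mathbf{r}, z) \subseteq \text{(axis-aligned cube of side } 2z) \;\Longrightarrow\; v \leq \operatorname{vol}(\mathcal{U}) \leq (2z)^{d} \;\Longrightarrow\; z \geq \tfrac{1}{2} v^{1/d}.
$$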
+
+Cheung and Piliouras showed Lemma 4 below, which, for bimatrix games, reduces volume analysis to analyzing the sign of the function $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q})$ defined in Eqn. (4) below; the sign also determines the local volume-changing behavior around the point $(\mathbf{p},\mathbf{q})$ when MWU is used. Based on Proposition 3, which converts volume expansion to radius expansion, the sign can be used to determine whether the dynamical system is Lyapunov chaotic.
+
+Let $A_{j} = \sum_{k^{\prime}}A_{jk^{\prime}}y_{k^{\prime}} = \nabla_{\mathbf{x}_{j}}[\mathbf{x}^{\mathsf{T}}\mathbf{A}\mathbf{y}]$ and $A_{k} = \sum_{j^{\prime}}x_{j^{\prime}}A_{j^{\prime}k} = \nabla_{\mathbf{y}_{k}}[\mathbf{x}^{\mathsf{T}}\mathbf{A}\mathbf{y}]$ ; and, similarly, $B_{j} = \sum_{k^{\prime}}B_{jk^{\prime}}y_{k^{\prime}} = \nabla_{\mathbf{x}_{j}}[\mathbf{x}^{\mathsf{T}}\mathbf{B}\mathbf{y}]$ and $B_{k} = \sum_{j^{\prime}}x_{j^{\prime}}B_{j^{\prime}k} = \nabla_{\mathbf{y}_{k}}[\mathbf{x}^{\mathsf{T}}\mathbf{B}\mathbf{y}]$ . Then,
+
+$$
+C _ {(\mathbf {A}, \mathbf {B})} (\mathbf {p}, \mathbf {q}) = - \mathbb {E} _ {\mathbf {x}, \mathbf {y}} \left[ \left(A _ {j k} - A _ {j} - A _ {k}\right) \left(B _ {j k} - B _ {j} - B _ {k}\right) \right] + \mathbb {E} _ {\mathbf {x}, \mathbf {y}} \left[ A _ {j k} \right] \cdot \mathbb {E} _ {\mathbf {x}, \mathbf {y}} \left[ B _ {j k} \right]. \tag {4}
+$$
+
+Note that here $\mathbf{x}$ and $\mathbf{y}$ are the shorthand for $\mathbf{x}(\mathbf{p})$ and $\mathbf{y}(\mathbf{q})$ , which are the mixed strategies (i.e. probability distributions over strategies) of Players 1 and 2 respectively, as computed via Eqn. (3). Also, $\mathbb{E}_{\mathbf{x},\mathbf{y}}[f(j,k)] = \mathbb{E}_{(j,k)\sim (\mathbf{x}(\mathbf{p}),\mathbf{y}(\mathbf{q}))}[f(j,k)]$ is the expected value of $f(j,k)$ when the strategies $j$ and $k$ are randomly chosen according to the distributions $\mathbf{x}(\mathbf{p})$ and $\mathbf{y}(\mathbf{q})$ respectively.
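To make Eqn. (4) concrete, here is a small numerical sketch (helper names are our own; the mixed strategies are produced by a softmax stand-in for Eqn. (3), whose exact convention is not reproduced in this excerpt). It checks two facts discussed later in this section: the zero-sum case is non-negative, and negating $\mathbf{B}$ flips the sign of $C$:

```python
import numpy as np

def softmax(p, eps=0.1):
    # Stand-in for Eqn. (3): x_j proportional to exp(eps * p_j)  (an assumption)
    w = np.exp(eps * (p - p.max()))
    return w / w.sum()

def C_value(A, B, p, q, eps=0.1):
    """Evaluate C_{(A,B)}(p, q) following Eqn. (4)."""
    x, y = softmax(p, eps), softmax(q, eps)
    RA = A - (A @ y)[:, None] - (A.T @ x)[None, :]   # A_jk - A_j - A_k
    RB = B - (B @ y)[:, None] - (B.T @ x)[None, :]   # B_jk - B_j - B_k
    return -(np.outer(x, y) * RA * RB).sum() + (x @ A @ y) * (x @ B @ y)

rng = np.random.default_rng(1)
A, B = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
p, q = rng.normal(size=3), rng.normal(size=4)
assert C_value(A, -A, p, q) >= -1e-12                           # zero-sum: non-negative
assert abs(C_value(A, B, p, q) + C_value(A, -B, p, q)) < 1e-10  # C_{(A,B)} = -C_{(A,-B)}
```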
+
+For a multi-player game $\mathbf{G}$ , the analogous function $C_{\mathbf{G}}(\cdot)$ is given below; the $U$ quantities were defined in Eqn. (1). Lemma 4 is adapted from Cheung & Piliouras (2019) for games with any number of players. The derivation of Eqn. (5) uses the Jacobian of the corresponding dynamical system and integration by substitution; see Appendix D.
+
+$$
+C _ {\mathbf {G}} \left(\mathbf {p} _ {1}, \dots , \mathbf {p} _ {N}\right) = - \sum_ {i \in [ N ], j \in S _ {i}} \sum_ {k > i, \ell \in S _ {k}} x _ {i j} x _ {k \ell} \left(U _ {\ell j} ^ {k i} - U _ {\ell} ^ {k}\right) \left(U _ {j \ell} ^ {i k} - U _ {j} ^ {i}\right). \tag {5}
+$$
+
+Lemma 4. Suppose $\mathcal{O}$ is a set in the cumulative payoff space $\mathbb{R}^d$ where $d = n_1 + \dots +n_N$ , and
+
+$$
+\bar {c} _ {\mathbf {G}} (\mathcal {O}) := \inf _ {(\mathbf {p} _ {1}, \dots , \mathbf {p} _ {N}) \in \mathcal {O}} C _ {\mathbf {G}} \left(\mathbf {p} _ {1}, \dots , \mathbf {p} _ {N}\right) > 0. \tag {6}
+$$
+
+Then the dynamical system in which MWU with any sufficiently small step-size $\epsilon$ is employed to play the game $\mathbf{G}$ is Lyapunov chaotic in $\mathcal{O}$ with Lyapunov exponent $\bar{c}_{\mathbf{G}}(\mathcal{O})\cdot \epsilon^{2} / 2d$ .
+
+If MWU is replaced by OMWU, then the same result holds with condition (6) replaced by $\bar{c}_{\mathbf{G}}(\mathcal{O}) \coloneqq \inf_{(\mathbf{p}_1,\dots ,\mathbf{p}_N)\in \mathcal{O}}[-C_{\mathbf{G}}(\mathbf{p}_1,\dots ,\mathbf{p}_N)] > 0$ .
+
+Note that if we start from a Nash equilibrium in the strategy space, MWU and OMWU will stay at the equilibrium. However, if this equilibrium $(\mathbf{x}^{*},\mathbf{y}^{*})$ satisfies the conditions in Corollary 5 below, there are points arbitrarily close to the equilibrium that keep moving away from the equilibrium (if the region $\{(\mathbf{x}',\mathbf{y}') = (\mathbf{x}(\mathbf{p}),\mathbf{y}(\mathbf{q}))|(\mathbf{p},\mathbf{q})\in \mathcal{O}\}$ is large in the strategy simplex $\Delta^{n_1}\times \Delta^{n_2}$ ).
+
+Corollary 5 (Adapted from (Cheung & Piliouras, 2020, Theorem 5)). Let $(\mathbf{x}^{*},\mathbf{y}^{*})$ be a point in the interior of the strategy space. Suppose that there exists $(\mathbf{p}^{*},\mathbf{q}^{*})$ in the cumulative payoff space such that $\mathbf{x}^{*} = \mathbf{x}(\mathbf{p}^{*})$ and $\mathbf{y}^{*} = \mathbf{y}(\mathbf{q}^{*})$ . Furthermore, suppose $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p}^{*},\mathbf{q}^{*}) > 0$ and $(\mathbf{p}^{*},\mathbf{q}^{*}) \in \mathcal{O}$ , where $\mathcal{O}$ is the set described in Lemma 4. Then there are strategy points arbitrarily close to $(\mathbf{x}^{*},\mathbf{y}^{*})$ such that MWU in the game $(\mathbf{A},\mathbf{B})$ eventually leaves the corresponding strategy set of $\mathcal{O}$ , i.e. $\{(\mathbf{x}',\mathbf{y}') = (\mathbf{x}(\mathbf{p}),\mathbf{y}(\mathbf{q}))|(\mathbf{p},\mathbf{q}) \in \mathcal{O}\}$ .
+
+We give the intuition behind the proof of Corollary 5. Suppose the contrary, i.e. there is an open neighbourhood of $(\mathbf{p}^*, \mathbf{q}^*)$ whose flow never escapes from $\mathcal{O}$ . Then we obtain two contradictory facts. First, the volume of the flow expands at least exponentially with time. Second, by Eqn. (2), each $p_j^t$ grows at most linearly with $t$ (since $|U_j^i| \leq \max_{j,k} \{|A_{jk}|, |B_{jk}|\}$ ), and thus the volume of the flow can expand at most polynomially with time.
+
+When the game is zero-sum, i.e., $\mathbf{B} = -\mathbf{A}$ , we have $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q}) = \mathbb{E}_{\mathbf{x},\mathbf{y}}\left[(A_{jk} - A_j - A_k)^2\right] - \left(\mathbb{E}_{\mathbf{x},\mathbf{y}}\left[A_{jk}\right]\right)^2$ . Since $\mathbb{E}_{\mathbf{x},\mathbf{y}}[A_{jk}] = \mathbb{E}_{\mathbf{x},\mathbf{y}}[A_j] = \mathbb{E}_{\mathbf{x},\mathbf{y}}[A_k]$ , and hence $\mathbb{E}_{\mathbf{x},\mathbf{y}}[A_{jk} - A_j - A_k] = -\mathbb{E}_{\mathbf{x},\mathbf{y}}[A_{jk}]$ , $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q})$ is exactly the variance of the random variable $A_{jk} - A_j - A_k$ , and is thus non-negative. By Eqn. (4), we have $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q}) = -C_{(\mathbf{A}, - \mathbf{B})}(\mathbf{p},\mathbf{q})$ . Thus, for any coordination game $(\mathbf{A},\mathbf{A})$ , we have $C_{(\mathbf{A},\mathbf{A})}(\mathbf{p},\mathbf{q}) = -C_{(\mathbf{A}, - \mathbf{A})}(\mathbf{p},\mathbf{q}) \leq 0$ .
+
+# 3 BIMATRIX GAMES
+
+In this section, we focus on general bimatrix games $(\mathbf{A},\mathbf{B})$ . In Section 3.1, we present two tools for analyzing $C_{(\mathbf{A},\mathbf{B})}(\cdot)$ , and then we provide an example to show how to use these tools. In Section 3.2, we present two characterizations such that the dynamics are Lyapunov chaotic almost everywhere.
+
+# 3.1 TOOLS FOR ANALYZING BIMATRIX GAME
+
+First Tool: Canonical Decomposition for Bimatrix Games. Every bimatrix game $(\mathbf{A},\mathbf{B})$ admits a canonical decomposition (Basar & Ho, 1974; Kalai & Kalai) into the sum of a zero-sum game $(\mathbf{Z}, - \mathbf{Z})$ and a coordination game $(\mathbf{C},\mathbf{C})$ , where $\mathbf{Z} = \frac{1}{2} (\mathbf{A} - \mathbf{B})$ and $\mathbf{C} = \frac{1}{2} (\mathbf{A} + \mathbf{B})$ , i.e.
+
+$$
+(\mathbf {A}, \mathbf {B}) = (\mathbf {Z}, - \mathbf {Z}) + (\mathbf {C}, \mathbf {C}).
+$$
+
+We call $(\mathbf{Z}, -\mathbf{Z})$ the zero-sum part of the game $(\mathbf{A}, \mathbf{B})$ , and $(\mathbf{C}, \mathbf{C})$ the coordination part of the game. Our first result shows that the function $C(\cdot)$ can be decomposed neatly into the two parts too.
+
+Lemma 6. For any bimatrix game $(\mathbf{A},\mathbf{B})$ ,
+
+$$
+C _ {(\mathbf {A}, \mathbf {B})} (\mathbf {p}, \mathbf {q}) \equiv C _ {(\mathbf {Z}, - \mathbf {Z})} (\mathbf {p}, \mathbf {q}) + C _ {(\mathbf {C}, \mathbf {C})} (\mathbf {p}, \mathbf {q}).
+$$
+
+Proof. We use Eqn. (4) to expand the following:
+
+$$
+\begin{array}{l} 4 \cdot C _ {(\mathbf {Z}, - \mathbf {Z})} (\mathbf {p}, \mathbf {q}) + 4 \cdot C _ {(\mathbf {C}, \mathbf {C})} (\mathbf {p}, \mathbf {q}) \\ = \mathbb {E} \left[ \left(A _ {j k} - B _ {j k} - A _ {j} + B _ {j} - A _ {k} + B _ {k}\right) ^ {2} \right] - \mathbb {E} \left[ A _ {j k} - B _ {j k} \right] ^ {2} \\ - \mathbb {E} \left[ \left(A _ {j k} + B _ {j k} - A _ {j} - B _ {j} - A _ {k} - B _ {k}\right) ^ {2} \right] + \mathbb {E} \left[ A _ {j k} + B _ {j k} \right] ^ {2} \\ = \mathbb {E} \left[ \left(A _ {j k} - B _ {j k} - A _ {j} + B _ {j} - A _ {k} + B _ {k}\right) ^ {2} - \left(A _ {j k} + B _ {j k} - A _ {j} - B _ {j} - A _ {k} - B _ {k}\right) ^ {2} \right] \\ - \left(\mathbb {E} \left[ A _ {j k} \right] - \mathbb {E} \left[ B _ {j k} \right]\right) ^ {2} + \left(\mathbb {E} \left[ A _ {j k} \right] + \mathbb {E} \left[ B _ {j k} \right]\right) ^ {2} \\ = \mathbb {E} \left[ 4 \left(- B _ {j k} + B _ {j} + B _ {k}\right) \left(A _ {j k} - A _ {j} - A _ {k}\right) \right] + 4 \cdot \mathbb {E} \left[ A _ {j k} \right] \cdot \mathbb {E} \left[ B _ {j k} \right] = 4 \cdot C _ {\left(\mathbf {A}, \mathbf {B}\right)} (\mathbf {p}, \mathbf {q}). \\ \end{array}
+$$
+
+At the end of Section 2, we discussed that $C_{(\mathbf{Z}, - \mathbf{Z})}(\mathbf{p},\mathbf{q})$ is always non-negative and $C_{(\mathbf{C},\mathbf{C})}(\mathbf{p},\mathbf{q})$ is always non-positive. By the above lemma, we can analyze the volume-changing behavior of a bimatrix game $(\mathbf{A},\mathbf{B})$ by looking at its zero-sum and coordination parts independently. One simple intuition is that if the coordination (resp. zero-sum) part is small, then the volume-changing behavior of $(\mathbf{A},\mathbf{B})$ is closer to the behavior of the zero-sum (resp. coordination) part. We realize this intuition quantitatively in the next subsection.
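The decomposition and Lemma 6 are easy to verify numerically; a sketch, with the mixed strategies supplied directly and `C_value` our own implementation of Eqn. (4):

```python
import numpy as np

def C_value(A, B, x, y):
    """C_{(A,B)} per Eqn. (4), with mixed strategies x, y supplied directly."""
    RA = A - (A @ y)[:, None] - (A.T @ x)[None, :]
    RB = B - (B @ y)[:, None] - (B.T @ x)[None, :]
    return -(np.outer(x, y) * RA * RB).sum() + (x @ A @ y) * (x @ B @ y)

rng = np.random.default_rng(5)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
Z, C = (A - B) / 2, (A + B) / 2          # zero-sum and coordination parts
x, y = rng.dirichlet(np.ones(3)), rng.dirichlet(np.ones(3))

lhs = C_value(A, B, x, y)
rhs = C_value(Z, -Z, x, y) + C_value(C, C, x, y)   # Lemma 6
assert abs(lhs - rhs) < 1e-10
assert C_value(Z, -Z, x, y) >= -1e-12    # zero-sum part: non-negative
assert C_value(C, C, x, y) <= 1e-12      # coordination part: non-positive
```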
+
+Second Tool: Trivial Matrices. Trivial matrices are matrices that do not affect the volume-changing behavior, as Lemma 8 below shows.
+
+Definition 7 (Trivial Matrix). $\mathbf{T} \in \mathbb{R}^{n \times m}$ is a trivial matrix if there exist real numbers $u_1, u_2, \dots, u_n$ and $v_1, v_2, \dots, v_m$ such that $T_{jk} = u_j + v_k$ for all $j \in [n], k \in [m]$ .
+
+Lemma 8. For any two trivial matrices $\mathbf{T}^1, \mathbf{T}^2$ , for any two matrices $\mathbf{A}$ , $\mathbf{B} \in \mathbb{R}^{n \times m}$ ,
+
+$$
+C _ {(\mathbf {A}, \mathbf {B})} (\mathbf {p}, \mathbf {q}) \equiv C _ {(\mathbf {A} + \mathbf {T} ^ {1}, \mathbf {B} + \mathbf {T} ^ {2})} (\mathbf {p}, \mathbf {q}).
+$$
+
+One immediate application of this lemma is for two-player potential games.
+
+Definition 9. A game $\mathbf{G}$ is a potential game if there exists a potential function $\mathcal{P}:S\to \mathbb{R}$ such that for any Player $i$ , any strategy profile $\mathbf{s}\in S$ and any deviation $s_i'\in S_i$ , $\mathcal{P}(s_i,\mathbf{s}_{-i}) - \mathcal{P}(s_i',\mathbf{s}_{-i}) = u_i(s_i,\mathbf{s}_{-i}) - u_i(s_i',\mathbf{s}_{-i})$ .
+
+For potential games, we have the following observation:
+
+Observation 10. For any bimatrix potential game $(\mathbf{A},\mathbf{B})$ , there is a coordination game $(\mathbf{P},\mathbf{P})$ such that $\mathbf{A} - \mathbf{P}$ and $\mathbf{B} - \mathbf{P}$ are trivial matrices, where $\mathbf{P}$ is the matrix representation of the potential function $\mathcal{P}$ .
+
+This observation immediately implies that the volume-changing behavior of a potential game is equivalent to that of a corresponding coordination game.
+
+We give a concrete example to show how these tools help us analyze $C_{(\mathbf{A},\mathbf{B})}(\cdot)$ .
+
+A Simple Example. We will show how to use our tools to demonstrate $C(\cdot) \geq 0$ everywhere for the following game. In the example, each player has three strategies. The payoff bimatrix $(\mathbf{A}, \mathbf{B})$ is given below. The first number gives the payoff of the row player, who chooses a strategy from $\{a, b, c\}$ ; the second number gives the payoff of the column player, who chooses a strategy from $\{1, 2, 3\}$ . We first use our first tool to decompose this game into zero-sum part $(\mathbf{Z}, -\mathbf{Z})$ and
+
+| | Strategy 1 | Strategy 2 | Strategy 3 |
+| --- | --- | --- | --- |
+| Strategy a | (4,4) | (12,-4) | (-6,10) |
+| Strategy b | (-8,8) | (0,0) | (12,-4) |
+| Strategy c | (14,-2) | (-8,8) | (4,4) |
+
+coordination part $(\mathbf{C},\mathbf{C})$ , where $\mathbf{Z} = \left[ \begin{array}{rrr}0 & 8 & -8\\ -8 & 0 & 8\\ 8 & -8 & 0 \end{array} \right]$ and $\mathbf{C} = \left[ \begin{array}{rrr}4 & 4 & 2\\ 0 & 0 & 4\\ 6 & 0 & 4 \end{array} \right]$ . At this point, we still cannot easily tell which of $C_{(\mathbf{Z}, - \mathbf{Z})}(\cdot)$ and $C_{(\mathbf{C}, - \mathbf{C})}(\cdot)$ is larger. However, we can further decompose the coordination part by the second tool: $\mathbf{C} = \left[ \begin{array}{rrr}4 & 2 & 4\\ 2 & 0 & 2\\ 4 & 2 & 4 \end{array} \right] + \left[ \begin{array}{rrr}0 & 2 & -2\\ -2 & 0 & 2\\ 2 & -2 & 0 \end{array} \right]$ , where the first matrix on the RHS is a trivial matrix (in the notation of Definition 7, $u = v = (2,0,2)$ ). It is easy to see that the second matrix on the RHS is $\frac{1}{4}\mathbf{Z}$ . Then by Lemmas 6 and 8, and the definition of the function $C$ , for any point $(\mathbf{p},\mathbf{q})$ in the cumulative payoff space,
+
+$$
+C _ {(\mathbf {A}, \mathbf {B})} (\mathbf {p}, \mathbf {q}) = C _ {(\mathbf {Z}, - \mathbf {Z})} (\mathbf {p}, \mathbf {q}) + C _ {(\frac {1}{4} \mathbf {Z}, \frac {1}{4} \mathbf {Z})} (\mathbf {p}, \mathbf {q}) = \left(1 - (1 / 4) ^ {2}\right) \cdot C _ {(\mathbf {Z}, - \mathbf {Z})} (\mathbf {p}, \mathbf {q}) \geq 0.
+$$
+
+# 3.2 RESULTS FOR BIMATRIX GAMES
+
+In this subsection, we identify several characterizations of general bimatrix games for which the MWU dynamics exhibit chaotic behavior in the following set $S^{\delta}$ in the cumulative payoff space $\mathbb{R}^{n_1 + n_2}$ . Note that when $\delta$ is tiny, its strategy correspondence covers almost the entirety of the strategy simplex $\Delta^{n_1} \times \Delta^{n_2}$ ; thus we informally say that if the dynamical system is Lyapunov chaotic in $S^{\delta}$ for a tiny $\delta$ , then it is Lyapunov chaotic almost everywhere.
+
+$$
+S ^ {\delta} = \left\{\left(\mathbf {p}, \mathbf {q}\right) | \forall j \in S _ {1}, k \in S _ {2}, x _ {j} (\mathbf {p}) \geq \delta \wedge y _ {k} (\mathbf {q}) \geq \delta \right\}.
+$$
+
+In order to show chaotic behavior of MWU in a specific bimatrix game $(\mathbf{A},\mathbf{B})$ , it suffices to show that $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q})$ is strictly positive in the region $S^{\delta}$ , due to Lemma 4. In the previous subsection, we showed that each game $(\mathbf{A},\mathbf{B})$ can be decomposed into a zero-sum part $(\mathbf{Z}, - \mathbf{Z})$ and a coordination part $(\mathbf{C},\mathbf{C})$ , and furthermore $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q}) = C_{(\mathbf{Z}, - \mathbf{Z})}(\mathbf{p},\mathbf{q}) + C_{(\mathbf{C},\mathbf{C})}(\mathbf{p},\mathbf{q})$ . We also raised the intuition that if the zero-sum part is small, then the volume behavior in the game $(\mathbf{A},\mathbf{B})$ is similar to that of the coordination part; conversely, if the coordination part is small, then the volume behavior is similar to that of the zero-sum part. However, we have not yet presented a way to compare the magnitudes of the two parts. This is what we do here.
+
+# 3.2.1 FIRST CHARACTERIZATION: MATRIX DOMINATION
+
+The first characterization we identify is matrix domination. In this part, we show that under certain conditions, the zero-sum part is always no less than the coordination part, i.e. $C_{(\mathbf{Z}, -\mathbf{Z})}(\mathbf{p}, \mathbf{q}) \geq -C_{(\mathbf{C}, \mathbf{C})}(\mathbf{p}, \mathbf{q})$ for all $(\mathbf{p}, \mathbf{q})$ . This directly implies that $C_{(\mathbf{A}, \mathbf{B})}(\mathbf{p}, \mathbf{q})$ is non-negative in the whole cumulative payoff space. Interestingly, the condition we identify is both necessary and sufficient. A similar result can be achieved in the case where the coordination part is always no less than the zero-sum part. We first introduce the definition of matrix domination.
+
+Definition 11. We say matrix $\mathbf{K}$ dominates matrix $\mathbf{L}$ if they are of the same dimension, and for any row indices $j, j'$ and column indices $k, k'$ ,
+
+$$
+\left| \mathbf {K} _ {j k} + \mathbf {K} _ {j ^ {\prime} k ^ {\prime}} - \mathbf {K} _ {j k ^ {\prime}} - \mathbf {K} _ {j ^ {\prime} k} \right| \geq \left| \mathbf {L} _ {j k} + \mathbf {L} _ {j ^ {\prime} k ^ {\prime}} - \mathbf {L} _ {j k ^ {\prime}} - \mathbf {L} _ {j ^ {\prime} k} \right|.
+$$
+
+Note that the domination induces a partial order on all matrices: if $\mathbf{K}$ dominates $\mathbf{L}$ and $\mathbf{L}$ dominates $\mathbf{M}$ , then $\mathbf{K}$ dominates $\mathbf{M}$ . The theorem below gives the necessary and sufficient condition.
+
+Theorem 12. $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q})$ is non-negative for all $(\mathbf{p},\mathbf{q})$ if and only if the matrix $\mathbf{Z}$ of the zero-sum part dominates the matrix $\mathbf{C}$ of the coordination part.
+
+The above theorem is based on the following crucial observation.
+
+Observation 13. For any matrix $\mathbf{Z}$ ,
+
+$$
+C_{(\mathbf{Z}, - \mathbf{Z})}(\mathbf{p},\mathbf{q}) = \frac{1}{4}\sum_{\substack{j,j^{\prime}\in S_{1}\\ k,k^{\prime}\in S_{2}}}x_{j}(\mathbf{p})\cdot y_{k}(\mathbf{q})\cdot x_{j^{\prime}}(\mathbf{p})\cdot y_{k^{\prime}}(\mathbf{q})\cdot (Z_{jk} + Z_{j^{\prime}k^{\prime}} - Z_{jk^{\prime}} - Z_{j^{\prime}k})^{2}.
+$$
+
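Observation 13 can be verified numerically; a sketch, with the mixed strategies supplied directly and helper names our own:

```python
import numpy as np

def C_zero_sum(Z, x, y):
    """C_{(Z,-Z)} via Eqn. (4): E[(Z_jk - Z_j - Z_k)^2] - E[Z_jk]^2."""
    R = Z - (Z @ y)[:, None] - (Z.T @ x)[None, :]
    return (np.outer(x, y) * R * R).sum() - (x @ Z @ y) ** 2

def C_quadruple(Z, x, y):
    """Right-hand side of Observation 13: a weighted sum over quadruples (j, k, j', k')."""
    # W[j, k, j', k'] = Z_jk + Z_j'k' - Z_jk' - Z_j'k
    W = (Z[:, :, None, None] + Z[None, None, :, :]
         - Z[:, None, None, :] - Z.T[None, :, :, None])
    return 0.25 * np.einsum('j,k,J,K,jkJK->', x, y, x, y, W ** 2)

rng = np.random.default_rng(3)
Z = rng.normal(size=(3, 4))
x, y = rng.dirichlet(np.ones(3)), rng.dirichlet(np.ones(4))
assert abs(C_zero_sum(Z, x, y) - C_quadruple(Z, x, y)) < 1e-10
```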
+Matrix domination only implies that $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q})$ is non-negative. In order for $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q})$ to be strictly positive in the set $S^{\delta}$ , we need $\theta$ -domination.
+
+Definition 14. We say matrix $\mathbf{K}$ $\theta$ -dominates $(\theta >0)$ matrix $\mathbf{L}$ if $\mathbf{K}$ dominates $\mathbf{L}$ , and there exist $j$ , $j^{\prime}$ , $k$ , $k^{\prime}$ such that $|\mathbf{K}_{jk} + \mathbf{K}_{j^{\prime}k^{\prime}} - \mathbf{K}_{jk^{\prime}} - \mathbf{K}_{j^{\prime}k}|\geq |\mathbf{L}_{jk} + \mathbf{L}_{j^{\prime}k^{\prime}} - \mathbf{L}_{jk^{\prime}} - \mathbf{L}_{j^{\prime}k}| + \theta$ .
+
+The following theorem holds due to Lemma 4.
+
+Theorem 15. For any general bimatrix game $(\mathbf{A},\mathbf{B})$ which is decomposed into zero-sum part $(\mathbf{Z}, - \mathbf{Z})$ and coordination part $(\mathbf{C},\mathbf{C})$ , if $\mathbf{Z}$ $\theta$ -dominates $\mathbf{C}$ , then MWU with any sufficiently small step-size $\epsilon$ in the game $(\mathbf{A},\mathbf{B})$ is Lyapunov chaotic in $S^{\delta}$ with Lyapunov exponent $\frac{\theta^2\delta^4}{2(n_1 + n_2)}\epsilon^2$ .
+
+Note that in Definition 14, $\mathbf{K}$ $\theta$ -dominates $\mathbf{L}$ if a finite number of inequalities are satisfied. In the context of Theorem 15, there are clearly many games $(\mathbf{A},\mathbf{B})$ such that $\mathbf{Z}$ $\theta$ -dominates $\mathbf{C}$ with all those inequalities satisfied strictly. Thus, there exists an open neighbourhood around each such game in which every game has a zero-sum part that $\theta$ -dominates its coordination part. This shows that this family of games has positive Lebesgue measure.
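Domination and $\theta$-domination reduce to finitely many inequality checks over quadruples of indices; a sketch (helper names our own), tested on the worked example from Section 3.1:

```python
import numpy as np

def gaps(K):
    """|K_jk + K_j'k' - K_jk' - K_j'k| over all quadruples, indexed [j, k, j', k']."""
    G = (K[:, :, None, None] + K[None, None, :, :]
         - K[:, None, None, :] - K.T[None, :, :, None])
    return np.abs(G)

def dominates(K, L):
    return bool(np.all(gaps(K) >= gaps(L) - 1e-12))

def theta_dominates(K, L, theta):
    # Definition 14: dominance, plus at least one quadruple with a gap of theta.
    return dominates(K, L) and bool(np.any(gaps(K) >= gaps(L) + theta))

# Zero-sum and coordination parts of the worked example in Section 3.1.
Z = np.array([[0., 8., -8.], [-8., 0., 8.], [8., -8., 0.]])
C = np.array([[4., 4., 2.], [0., 0., 4.], [6., 0., 4.]])
assert theta_dominates(Z, C, theta=1.0)   # Z theta-dominates C here
assert not dominates(C, Z)
```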
+
+# 3.2.2 SECOND CHARACTERIZATION: LINEAR PROGRAM
+
+Note that matrix domination does not always hold: in some scenarios, the zero-sum matrix might not dominate the coordination matrix. Yet, it is still possible that $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q})$ is strictly positive in the region $S^{\delta}$ when every entry in the coordination matrix is small.
+
+Precisely, for a general bimatrix game $(\mathbf{A},\mathbf{B})$ , if its coordination part $(\mathbf{C},\mathbf{C})$ is small in the sense that the absolute values of all entries of $\mathbf{C}$ are at most some constant $r$ , then we can bound $C_{(\mathbf{C}, - \mathbf{C})}(\cdot)$ by $\mathcal{O}(r^2)$ . This is not the only case in which $C_{(\mathbf{C}, - \mathbf{C})}(\cdot)$ admits a small bound: even if the entries of $\mathbf{C}$ are large, we can use trivial matrices to reduce them without affecting $C_{(\mathbf{C}, - \mathbf{C})}(\cdot)$ . This is done via the linear-programming approach described below.
+
+Given a matrix $\mathbf{K}$ , let $r(\mathbf{K})$ be the optimal value of following linear program:
+
+$$
+\min _ {r \geq 0, \, \mathbf {g}, \mathbf {h}} r \quad \text {such that} \quad \forall j, k: \; - r \leq K _ {j k} - g _ {j} - h _ {k} \leq r. \tag {7}
+$$
+
+Note that $\{g_j + h_k\}_{j,k}$ is a trivial matrix. Let $\mathbf{K}' = \mathbf{K} - \{g_j + h_k\}_{j,k}$ . By Lemma 8, $C_{(\mathbf{K}, - \mathbf{K})}(\cdot) = C_{(\mathbf{K}', - \mathbf{K}')}(\cdot)$ . The following lemma shows that the value of $C_{(\mathbf{K}, - \mathbf{K})}(\cdot)$ is closely related to $r(\mathbf{K})$ .
+
+Lemma 16. For any $(\mathbf{p},\mathbf{q})$ in $S^{\delta} = \{(\mathbf{p},\mathbf{q}) \mid \forall j,k,\ x_j(\mathbf{p})\geq \delta$ and $y_{k}(\mathbf{q})\geq \delta \}$ , we have $(r(\mathbf{K})\cdot \delta)^2 \leq C_{(\mathbf{K}, - \mathbf{K})}(\mathbf{p},\mathbf{q}) \leq r(\mathbf{K})^2$ .
+
+Then the theorem below follows by applying Lemma 16 with Lemma 4.
+
+Theorem 17. For any general bimatrix game $(\mathbf{A},\mathbf{B})$ which is decomposed into zero-sum part $(\mathbf{Z}, - \mathbf{Z})$ and coordination part $(\mathbf{C},\mathbf{C})$ , if there exists $\theta >0$ such that $(r(\mathbf{Z})\cdot \delta)^2\geq (r(\mathbf{C}))^2 + (\theta \delta^{2})^{2}$ , then MWU with any sufficiently small step-size $\epsilon$ in the game $(\mathbf{A},\mathbf{B})$ is Lyapunov chaotic in $S^{\delta}$ with Lyapunov exponent $\frac{\theta^2\delta^4}{2(n_1 + n_2)}\epsilon^2$ .
+
+Intuitively, $r(\mathbf{Z})$ is a distance measure from the zero-sum game $(\mathbf{Z}, -\mathbf{Z})$ to the trivial game space; analogously, $r(\mathbf{C})$ is a distance measure from the coordination game $(\mathbf{C}, \mathbf{C})$ to the trivial game space. Theorem 17 shows that if the coordination part is much closer to the trivial game space than the zero-sum part, then MWU in this game is Lyapunov chaotic in $S^{\delta}$ .
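The LP (7) is small and can be solved directly; a sketch using `scipy.optimize.linprog` (the variable stacking and helper names are our own). For a $2 \times 2$ matrix the optimum is $|K_{11} + K_{22} - K_{12} - K_{21}|/4$, which gives a simple sanity check:

```python
import numpy as np
from scipy.optimize import linprog

def r_value(K):
    """Solve LP (7): min r s.t. -r <= K_jk - g_j - h_k <= r, r >= 0.
    Variables are stacked as [r, g_1..g_n, h_1..h_m]."""
    n, m = K.shape
    nv = 1 + n + m
    c = np.zeros(nv); c[0] = 1.0                  # objective: minimize r
    A_ub, b_ub = [], []
    for j in range(n):
        for k in range(m):
            row = np.zeros(nv); row[0] = -1.0     # K_jk - g_j - h_k <= r
            row[1 + j] = -1.0; row[1 + n + k] = -1.0
            A_ub.append(row); b_ub.append(-K[j, k])
            row = np.zeros(nv); row[0] = -1.0     # g_j + h_k - K_jk <= r
            row[1 + j] = 1.0; row[1 + n + k] = 1.0
            A_ub.append(row); b_ub.append(K[j, k])
    bounds = [(0, None)] + [(None, None)] * (n + m)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.fun

# Trivial matrices have r = 0 (here T_jk = u_j + v_k with u=(0,2), v=(1,2)).
assert abs(r_value(np.array([[1.0, 2.0], [3.0, 4.0]]))) < 1e-7
assert abs(r_value(np.array([[0.0, 1.0], [1.0, 0.0]])) - 0.5) < 1e-7
```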
+
+# 4 EXPERIMENT
+
+To illustrate that volume expansion occurs when MWU is employed in a game with a small coordination part, we simulate MWU in the reduced payoff space$^{10}$ in a game that is the sum of the zero-sum game $\left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}\right)$ and the coordination game $\left(\begin{bmatrix} -0.05 & 0.03 \\ 0.03 & -0.05 \end{bmatrix}, \begin{bmatrix} -0.05 & 0.03 \\ 0.03 & -0.05 \end{bmatrix}\right)$ .
+
+In the strategy space, $\mathbf{x}^{*} = \mathbf{y}^{*} = (0.5, 0.5)$ is the unique Nash equilibrium of the game. In the reduced dual space, the origin corresponds to the equilibrium. We pick a square of side length 0.004 around the origin as the set of starting points (the small red square in the middle of Figure 1). As these starting points are evolved via MWU with step-size 0.02, we take snapshots after every 1900 time steps, which are shown with colors blue, pink, lime, purple, orange and green (then the colors repeat) respectively. As shown in the figure, the volume (which is area in two-dimensional space) increases, and its shape changes from a square to a parallelogram.
+
+
+Figure 1: Volume expansion of MWU in the bimatrix game $\left(\begin{bmatrix} 0.95 & 0.03 \\ 0.03 & 0.95 \end{bmatrix}, \begin{bmatrix} -1.05 & 0.03 \\ 0.03 & -1.05 \end{bmatrix}\right)$ .
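A minimal re-implementation of this experiment can already exhibit the instability: since Eqn. (2)'s exact convention is not reproduced in this excerpt, the sketch below assumes additively accumulated payoffs with strategies given by a softmax at temperature $\epsilon$, and tracks how two nearby starting points separate in the reduced coordinates $(p_1 - p_2, q_1 - q_2)$ (the 5000-step horizon is our choice):

```python
import numpy as np

# The experiment's game: dominant zero-sum part plus a small coordination part.
A = np.array([[0.95, 0.03], [0.03, 0.95]])    # row player's payoffs
B = np.array([[-1.05, 0.03], [0.03, -1.05]])  # column player's payoffs
eps = 0.02                                    # step-size from the text

def softmax(u):
    w = np.exp(u - u.max())
    return w / w.sum()

def run(p, q, steps):
    """Iterate MWU on cumulative payoff vectors p, q (assumed convention)."""
    for _ in range(steps):
        x, y = softmax(eps * p), softmax(eps * q)
        p, q = p + A @ y, q + B.T @ x
    return p, q

# Two starting points 0.004 apart in the reduced coordinate p_1 - p_2,
# mimicking opposite edges of the red square in Figure 1.
pa, qa = run(np.zeros(2), np.zeros(2), 5000)
pb, qb = run(np.array([0.004, 0.0]), np.zeros(2), 5000)

sep0 = 0.004
da = (pb - pa)[0] - (pb - pa)[1]
db = (qb - qa)[0] - (qb - qa)[1]
sep1 = np.hypot(da, db)
assert sep1 > sep0    # nearby trajectories separate: the expansion in Figure 1
```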
+
+# 5 CONCLUSION AND FUTURE WORKS
+
+In this paper, we analyze the volume-changing behavior of several well-known learning algorithms (MWU, OMWU, FTRL) on general bimatrix games and multi-player games, which leads to a Lyapunov chaos analysis. For bimatrix games, we do this by decomposing a game into its zero-sum part and its coordination part. This decomposition turns the volume analysis into comparing the strengths of volume expansion (zero-sum part) and volume contraction (coordination part) of the MWU dynamics. The comparison of strengths is made via the notion of matrix domination and the use of a linear program. For multi-player games, by the local equivalence, we show that the volume-changing behaviors of MWU and OMWU are opposite to each other even in multi-player games. We also show that, for a general multi-player potential game, the key function $C_{\mathbf{G}}$ is equal to that of a corresponding multi-player coordination game, which implies it is non-positive.
+
+Studying learning in matrix (normal-form) games, which are among the most classical game models, is a good theoretical starting point. Matrix games admit mathematically amenable analyses, as demonstrated in our work and many previous works. For future work, we are very interested in chaos analyses in settings that are more relevant to applications in ML, e.g. general GANs and differential games. We believe the techniques we use (volume analysis, game decomposition, etc.) will be applicable there.
+
+# ACKNOWLEDGMENTS
+
+We thank several anonymous reviewers for their suggestions, which helped improve the readability of this paper from its earlier version. Yixin Tao acknowledges NSF grants CCF-1527568 and CCF-1909538, and ERC Starting Grant ScaleOpt-757481. Yun Kuen Cheung acknowledges Singapore NRF 2018 Fellowship NRF-NRFF2018-07.
+
+# REFERENCES
+
+James P. Bailey and Georgios Piliouras. Multiplicative weights update in zero-sum games. In EC, pp. 321-338, 2018.
+Luis Barreira. Poincare recurrence: old and new. In XIVth International Congress on Mathematical Physics. World Scientific., pp. 415-422, 2006.
+Tamer Basar and Yu-Chi Ho. Informational properties of the nash solutions of two stochastic nonzero-sum games. Journal of Economic Theory, 7(4):370-387, 1974.
+Victor Boone and Georgios Piliouras. From darwin to poincaré and von neumann: Recurrence and cycles in evolutionary and algorithmic game theory. In International Conference on Web and Internet Economics (WINE), pp. 85-99. Springer, 2019.
+Ozan Candogan, Ishai Menache, Asuman E. Ozdaglar, and Pablo A. Parrilo. Flows and decompositions of games: Harmonic and potential games. Math. Oper. Res., 36(3):474-503, 2011. doi: 10.1287/moor.1110.0500. URL https://doi.org/10.1287/moor.1110.0500.
+Ozan Candogan, Asuman E. Ozdaglar, and Pablo A. Parrilo. Dynamics in near-potential games. Games Econ. Behav., 82:66-90, 2013a. doi: 10.1016/j.geb.2013.07.001. URL https://doi.org/10.1016/j.geb.2013.07.001.
+Ozan Candogan, Asuman E. Ozdaglar, and Pablo A. Parrilo. Near-potential games: Geometry and dynamics. ACM Trans. Economics and Comput., 1(2):11:1-11:32, 2013b. doi: 10.1145/2465769.2465776. URL https://doi.org/10.1145/2465769.2465776.
+Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
+Yun Kuen Cheung. Multiplicative weights updates with constant step-size in graphical constant-sum games. In NeurIPS, pp. 3532-3542, 2018.
+Yun Kuen Cheung and Georgios Piliouras. Vortices instead of equilibria in minmax optimization: Chaos and butterfly effects of online learning in zero-sum games. In Conference on Learning Theory, COLT 2019, 25-28 June 2019, Phoenix, AZ, USA, pp. 807-834, 2019. URL http://proceedings.mlr.press/v99/cheung19a.html.
+Yun Kuen Cheung and Georgios Piliouras. Chaos, extremism and optimism: Volume analysis of learning in games. In NeurIPS, 2020.
+Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, and Shenghuo Zhu. Online optimization with gradual variations. In COLT 2012 - The 25th Annual Conference on Learning Theory, June 25-27, 2012, Edinburgh, Scotland, pp. 6.1-6.20, 2012. URL http://proceedings.mlr.press/v23/chiang12/chiang12.pdf.
+Thiparat Chotibut, Fryderyk Falniowski, Michał Misiurewicz, and Georgios Piliouras. Family of chaotic maps from game theory. Dynamical Systems, pp. 1-16, 2020. doi: 10.1080/14689367.2020.1795624. Published online; volume and number not yet assigned.
+Constantinos Daskalakis and Ioannis Panageas. The limit points of (optimistic) gradient descent in min-max optimization. In Advances in Neural Information Processing Systems, pp. 9256-9266, 2018.
+Constantinos Daskalakis and Ioannis Panageas. Last-iterate convergence: Zero-sum games and constrained min-max optimization. ITCS, 2019.
+Constantinos Daskalakis, Alan Deckelbaum, and Anthony Kim. Near-optimal no-regret algorithms for zero-sum games. Games and Economic Behavior, 92:327-348, 2015.
+James W. Demmel. Applied Numerical Linear Algebra. SIAM, 1997. doi: 10.1137/1.9781611971446.
+Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT, pp. 23-37, 1995.
+
+Yoav Freund and Robert E. Schapire. Game theory, on-line prediction and boosting. In COLT, pp. 325-332, 1996.
+Drew Fudenberg and David K. Levine. The Theory of Learning in Games. MIT Press Books. The MIT Press, 1998.
+Tobias Galla and J. Doyne Farmer. Complex dynamics in learning complicated games. PNAS, 110 (4):1232-1236, 2013.
+Sergiu Hart and Andreu Mas-Colell. Simple Adaptive Strategies: From Regret-Matching to Uncoupled Dynamics. Number 8408 in World Scientific Books. World Scientific Publishing Co. Pte. Ltd., June 2013. URL https://ideas.repec.org/b/wsi/wsbook/8408.html.
+Elad Hazan and Satyen Kale. Extracting certainty from uncertainty: regret bounded by variation in costs. Mach. Learn., 80(2-3):165-188, 2010. doi: 10.1007/s10994-010-5175-x. URL https://doi.org/10.1007/s10994-010-5175-x.
+Josef Hofbauer and Karl Sigmund. Evolutionary Games and Population Dynamics. Cambridge University Press, 1998. doi: 10.1017/CBO9781139173179.
+Adam Tauman Kalai and Ehud Kalai. Engineering cooperation in two-player games. http://www.robots.ox.ac.uk/~sjrob/Outgoing/GT_talks/kalai.pdf.
+Michael J. Kearns, Michael L. Littman, and Satinder P. Singh. Graphical models for game theory. In UAI '01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, University of Washington, Seattle, Washington, USA, August 2-5, 2001, pp. 253-260, 2001. URL https://dslpitt.org/uai/displayArticleDetails.jsp?mmnu=1&smnu=2&article_id=107&proceeding_id=17.
+Alistair Letcher, David Balduzzi, Sébastien Racanière, James Martens, Jakob N. Foerster, Karl Tuyls, and Thore Graepel. Differentiable game mechanics. J. Mach. Learn. Res., 20:84:1-84:40, 2019. URL http://jmlr.org/papers/v20/19-008.html.
+Nick Littlestone and Manfred K Warmuth. The weighted majority algorithm. Information and computation, 108(2):212-261, 1994.
+Panayotis Mertikopoulos, Christos Papadimitriou, and Georgios Piliouras. Cycles in adversarial regularized learning. In SODA, pp. 2703-2717, 2018.
+D. Monderer and L. S. Shapley. Potential games. Games and Economic Behavior, pp. 124-143, 1996.
+Gerasimos Palaiopanos, Ioannis Panageas, and Georgios Piliouras. Multiplicative weights update with constant step-size in congestion games: Convergence, limit cycles and chaos. In NIPS, pp. 5874-5884, 2017.
Julien Perolat, Remi Munos, Jean-Baptiste Lespiau, Shayegan Omidshafiei, Mark Rowland, Pedro Ortega, Neil Burch, Thomas Anthony, David Balduzzi, Bart De Vylder, et al. From Poincaré recurrence to convergence in imperfect information games: Finding equilibrium via regularization. arXiv preprint arXiv:2002.08456, 2020.
Georgios Piliouras and Jeff S. Shamma. Optimization despite chaos: Convex relaxations to complex limit sets via Poincaré recurrence. In SODA, pp. 861-873, 2014.
+H. Poincaré. Sur le problème des trois corps et les équations de la dynamique. Acta Math, 13:1-270, 1890.
+Ali Rahimi. NIPS 2017 test-of-time award presentation. https://www.youtube.com/watch?v=ORHFOnaEzPc.
+Alexander Rakhlin and Karthik Sridharan. Optimization, learning, and games with predictable sequences. In NIPS, pp. 3066-3074, 2013a.
+
Alexander Rakhlin and Karthik Sridharan. Online learning with predictable sequences. In COLT 2013 - The 26th Annual Conference on Learning Theory, June 12-14, 2013, Princeton University, NJ, USA, pp. 993-1019, 2013b. URL http://jmlr.org/proceedings/papers/v30/Rakhlin13.html.
+William H. Sandholm. Population Games and Evolutionary Dynamics. MIT Press, 2010.
Yuzuru Sato, Eizo Akiyama, and J. Doyne Farmer. Chaos in learning a simple two-person game. PNAS, 99(7):4748-4751, 2002.
+Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, and Robert E. Schapire. Fast convergence of regularized learning in games. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS'15, pp. 2989-2997, 2015.
+Emmanouil-Vasileios Vlatakis-Gkaragkounis, Lampros Flokas, and Georgios Piliouras. Poincaré recurrence, cycles and spurious equilibria in gradient-descent-ascent for non-convex non-concave zero-sum games. In Advances in Neural Information Processing Systems, pp. 10450-10461, 2019.
+
+# A FURTHER RELATED WORK
+
In the study of no-regret learning (e.g. Littlestone & Warmuth (1994); Freund & Schapire (1995)), a vast literature concerns general or even adversarial settings, in which the online arrivals of payoff values follow no pattern or even come from an adversary. More recently, settings where the online payoffs are more well-behaved, under the term "predictable sequence" coined by Rakhlin & Sridharan (2013b), have been studied. These settings include game dynamics, since the online payoffs are determined by the mixed strategy choices of the players, while these choices are updated gradually and somewhat predictably. For these settings, online learning algorithms that perform particularly well, e.g. achieving regret bounds below the canonical $\mathcal{O}(\sqrt{T})$ limit, have been designed and studied (Hazan & Kale (2010); Chiang et al. (2012); Syrgkanis et al. (2015)). For instance, Nesterov's excessive gap technique and optimistic mirror descent achieve near-optimal regret $\mathcal{O}(\log T)$ in zero-sum games (Daskalakis et al. (2015); Rakhlin & Sridharan (2013a)), and thus the empirical average of the learning sequence converges to a Nash equilibrium of the game (see Freund & Schapire (1996) for an explanation). OMWU (with time-varying step-sizes), and more generally optimistic variants of FTRL (Rakhlin & Sridharan (2013b)), are canonical examples of such online learning algorithms.
+
Recently, there has been a stream of work that examines how learning algorithms behave in games or min-max optimization from a dynamical-systems perspective. Replicator dynamics (RD; the continuous-time analogue of MWU) and continuous-time FTRL achieve optimal regret in general settings (Mertikopoulos et al. (2018)). Furthermore, RD in zero-sum games or graphical constant-sum games admits a constant of motion and preserves volume; these two properties are used to show that such dynamical systems are near-periodic (Piliouras & Shamma (2014); Mertikopoulos et al. (2018); Boone & Piliouras (2019); Vlatakis-Gkaragkounis et al. (2019); Perolat et al. (2020)), captured rigorously under the notion of Poincaré recurrence (Poincaré (1890); Barreira (2006)). However, when MWU, the forward Euler discretization of RD, is used in the discrete-time setting in zero-sum games, the near-periodicity is totally destroyed; indeed, the system never visits the same point (or a tiny neighbourhood of it) twice, but instead converges to the boundary of the strategy simplex and fluctuates there irregularly (Bailey & Piliouras (2018); Cheung (2018)). In contrast, (discrete-time) OMWU in zero-sum games is shown to converge to Nash equilibrium (Daskalakis & Panageas (2019)); yet, in the more general setting of min-max optimization, it was found that Optimistic Gradient Descent Ascent (OGDA) can have limit points other than (local) min-max solutions (Daskalakis & Panageas (2018)).
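To make the contrast concrete, the following sketch (ours, not from the literature cited above; the game, starting point, step-size and horizon are arbitrary choices) runs MWU and OMWU on matching pennies and tracks the sum of KL divergences from the uniform Nash equilibrium, which is known to grow under MWU and shrink under OMWU:

```python
import numpy as np

def softmax(p):
    e = np.exp(p - p.max())
    return e / e.sum()

def kl_from_nash(x):
    # KL divergence from the uniform Nash equilibrium (0.5, 0.5) to strategy x
    return float(np.sum(0.5 * np.log(0.5 / x)))

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # matching pennies; Player 2's payoffs are -A
eps, T = 0.1, 500
p1_0 = np.array([0.2, 0.0])               # cumulative payoffs, slightly off equilibrium
p2_0 = np.array([0.2, 0.0])
init = kl_from_nash(softmax(p1_0)) + kl_from_nash(softmax(p2_0))

# MWU: p(t+1) = p(t) + eps * u(t)
p1, p2 = p1_0.copy(), p2_0.copy()
for _ in range(T):
    x, y = softmax(p1), softmax(p2)
    p1, p2 = p1 + eps * (A @ y), p2 + eps * (-A.T @ x)
mwu_final = kl_from_nash(softmax(p1)) + kl_from_nash(softmax(p2))

# OMWU: p(t+1) = p(t) + eps * (2u(t) - u(t-1))
p1, p2 = p1_0.copy(), p2_0.copy()
u1p, u2p = A @ softmax(p2), -A.T @ softmax(p1)
for _ in range(T):
    x, y = softmax(p1), softmax(p2)
    u1, u2 = A @ y, -A.T @ x
    p1, p2 = p1 + eps * (2 * u1 - u1p), p2 + eps * (2 * u2 - u2p)
    u1p, u2p = u1, u2
omwu_final = kl_from_nash(softmax(p1)) + kl_from_nash(softmax(p2))

assert mwu_final > init and omwu_final < init  # MWU drifts away; OMWU converges
```

The assertions reflect the cited theory: the OMWU guarantee requires a sufficiently small step-size, which $\epsilon = 0.1$ satisfies for this payoff scale.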
+
+# B PROOFS IN SECTION 3
+
+Proof of Lemma 8. First, observe that it suffices to prove that the lemma holds when $\mathbf{T}^1$ is a trivial matrix and $\mathbf{T}^2$ is the zero matrix. Then the lemma holds for any trivial matrices $\mathbf{T}^1, \mathbf{T}^2$ due to symmetry: $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q}) = C_{(\mathbf{A} + \mathbf{T}^1,\mathbf{B})}(\mathbf{p},\mathbf{q}) = C_{(\mathbf{A} + \mathbf{T}^1,\mathbf{B} + \mathbf{T}^2)}(\mathbf{p},\mathbf{q})$ .
+
By the definition of a trivial matrix, we can write $T_{jk}^{1} = u_{j} + v_{k}$ . Then
+
+$$
+\begin{array}{l} C _ {(\mathbf {A} + \mathbf {T} ^ {1}, \mathbf {B})} (\mathbf {p}, \mathbf {q}) - C _ {(\mathbf {A}, \mathbf {B})} (\mathbf {p}, \mathbf {q}) \\ = - \mathbb {E} \left[ \left(A _ {j k} + u _ {j} + v _ {k} - A _ {j} - u _ {j} - \sum_ {\ell \in S _ {2}} v _ {\ell} y _ {\ell} - A _ {k} - v _ {k} - \sum_ {\ell \in S _ {1}} u _ {\ell} x _ {\ell}\right) \left(B _ {j k} - B _ {j} - B _ {k}\right) \right] \\ + \mathbb {E} [ A _ {j k} + u _ {j} + v _ {k} ] \cdot \mathbb {E} [ B _ {j k} ] + \mathbb {E} [ (A _ {j k} - A _ {j} - A _ {k}) (B _ {j k} - B _ {j} - B _ {k}) ] - \mathbb {E} [ A _ {j k} ] \cdot \mathbb {E} [ B _ {j k} ] \\ \end{array}
+$$
+
+$$
+\begin{array}{l} = - \mathbb {E} \left[ \left(- \sum_ {\ell \in S _ {2}} v _ {\ell} y _ {\ell} - \sum_ {\ell \in S _ {1}} u _ {\ell} x _ {\ell}\right) \left(B _ {j k} - B _ {j} - B _ {k}\right) \right] + \mathbb {E} \left[ u _ {j} + v _ {k} \right] \cdot \mathbb {E} \left[ B _ {j k} \right] \\ = \mathbb {E} \left[ v _ {k} + u _ {j} \right] \cdot \mathbb {E} \left[ B _ {j k} - B _ {j} - B _ {k} \right] + \mathbb {E} \left[ u _ {j} + v _ {k} \right] \cdot \mathbb {E} \left[ B _ {j k} \right]. \\ \end{array}
+$$
+
+By recalling that $\mathbb{E}[B_{jk} - B_j - B_k] = -\mathbb{E}[B_{jk}]$ , we have $C_{(\mathbf{A} + \mathbf{T}^1,\mathbf{B})}(\mathbf{p},\mathbf{q}) - C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q}) = 0$ .
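The lemma admits a quick numerical sanity check. The sketch below (ours) implements $C_{(\mathbf{A},\mathbf{B})}$ in the form used in the derivation above, $-\mathbb{E}[(A_{jk}-A_j-A_k)(B_{jk}-B_j-B_k)] + \mathbb{E}[A_{jk}]\cdot\mathbb{E}[B_{jk}]$ (our reading of Eqn. (4) via this proof), and verifies invariance under adding trivial matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def C(A, B, x, y):
    # C_{(A,B)}(p,q) = -E[(A_jk - A_j - A_k)(B_jk - B_j - B_k)] + E[A_jk] E[B_jk],
    # with expectation over independent j ~ x, k ~ y
    Aj, Ak, Bj, Bk = A @ y, A.T @ x, B @ y, B.T @ x
    W = np.outer(x, y)
    At = A - Aj[:, None] - Ak[None, :]
    Bt = B - Bj[:, None] - Bk[None, :]
    return -np.sum(W * At * Bt) + (x @ A @ y) * (x @ B @ y)

n, m = 3, 4
A, B = rng.standard_normal((n, m)), rng.standard_normal((n, m))
x = rng.random(n); x /= x.sum()
y = rng.random(m); y /= y.sum()

# trivial matrices: T^1_{jk} = u_j + v_k, and likewise T^2
T1 = rng.standard_normal(n)[:, None] + rng.standard_normal(m)[None, :]
T2 = rng.standard_normal(n)[:, None] + rng.standard_normal(m)[None, :]

assert np.isclose(C(A + T1, B, x, y), C(A, B, x, y))
assert np.isclose(C(A + T1, B + T2, x, y), C(A, B, x, y))
```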
+
Proof of Observation 10. Let $\mathcal{P}_{jk}$ be the potential value of a potential game when Player 1 plays strategy $j$ and Player 2 plays strategy $k$ . Then, according to the definition of the potential function, for any $j_1, j_2$ and $k$ ,
+
+$$
+A _ {j _ {1} k} - A _ {j _ {2} k} = \mathcal {P} _ {j _ {1} k} - \mathcal {P} _ {j _ {2} k}.
+$$
+
+In particular, for any $j, k$ , $A_{jk} = \mathcal{P}_{jk} + A_{1k} - \mathcal{P}_{1k}$ . This implies that there exists $\mathbf{v}$ such that $A_{jk} = \mathcal{P}_{jk} + v_k$ for any $j$ and $k$ .
+
Similarly, there exists $\mathbf{u}$ such that $B_{jk} = \mathcal{P}_{jk} + u_j$ for any $j$ and $k$ . This implies that any two-player potential game is a coordination game plus trivial matrices.
+
+Proof of Theorem 12. We first prove that if $\mathbf{Z}$ dominates $\mathbf{C}$ , then $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q})$ is always non-negative. By Observation 13,
+
+$$
\begin{array}{l} C _ {(\mathbf {A}, \mathbf {B})} (\mathbf {p}, \mathbf {q}) = C _ {(\mathbf {Z}, - \mathbf {Z})} (\mathbf {p}, \mathbf {q}) + C _ {(\mathbf {C}, \mathbf {C})} (\mathbf {p}, \mathbf {q}) \\ = C _ {(\mathbf {Z}, - \mathbf {Z})} (\mathbf {p}, \mathbf {q}) - C _ {(\mathbf {C}, - \mathbf {C})} (\mathbf {p}, \mathbf {q}) \\ = \frac {1}{4} \sum_ {j, j ^ {\prime}, k, k ^ {\prime}} x _ {j} (\mathbf {p}) y _ {k} (\mathbf {q}) x _ {j ^ {\prime}} (\mathbf {p}) y _ {k ^ {\prime}} (\mathbf {q}) \cdot \\ \left(\left(Z _ {j k} + Z _ {j ^ {\prime} k ^ {\prime}} - Z _ {j k ^ {\prime}} - Z _ {j ^ {\prime} k}\right) ^ {2} - \left(C _ {j k} + C _ {j ^ {\prime} k ^ {\prime}} - C _ {j k ^ {\prime}} - C _ {j ^ {\prime} k}\right) ^ {2}\right) \\ \geq 0. \\ \end{array}
+$$
+
+In contrast, if $\mathbf{Z}$ does not dominate $\mathbf{C}$ , then there exist $\hat{j},\hat{j}^{\prime},\hat{k},\hat{k}^{\prime}$ and $\delta >0$ such that
+
+$$
+\left(C _ {\hat {j} \hat {k}} + C _ {\hat {j} ^ {\prime} \hat {k} ^ {\prime}} - C _ {\hat {j} \hat {k} ^ {\prime}} - C _ {\hat {j} ^ {\prime} \hat {k}}\right) ^ {2} \geq \left(Z _ {\hat {j} \hat {k}} + Z _ {\hat {j} ^ {\prime} \hat {k} ^ {\prime}} - Z _ {\hat {j} \hat {k} ^ {\prime}} - Z _ {\hat {j} ^ {\prime} \hat {k}}\right) ^ {2} + \delta .
+$$
+
+For each $\eta > 0$ , we construct $\mathbf{p}$ and $\mathbf{q}$ such that $x_{\hat{j}}(\mathbf{p}) = x_{\hat{j}'}(\mathbf{p}) = y_{\hat{k}}(\mathbf{q}) = y_{\hat{k}'}(\mathbf{q}) = \frac{1 - \eta}{2}$ . Furthermore, we let $\Upsilon$ denote the maximum absolute value of all entries in matrices $\mathbf{A}$ and $\mathbf{B}$ . Then, for all $j$ and $k$ , $|Z_{jk}| \leq \Upsilon$ and $|C_{jk}| \leq \Upsilon$ . Therefore,
+
+$$
+\begin{array}{l} C _ {(\mathbf {A}, \mathbf {B})} (\mathbf {p}, \mathbf {q}) = C _ {(\mathbf {Z}, - \mathbf {Z})} (\mathbf {p}, \mathbf {q}) + C _ {(\mathbf {C}, \mathbf {C})} (\mathbf {p}, \mathbf {q}) \\ = C _ {(\mathbf {Z}, - \mathbf {Z})} (\mathbf {p}, \mathbf {q}) - C _ {(\mathbf {C}, - \mathbf {C})} (\mathbf {p}, \mathbf {q}) \\ = \frac {1}{4} \sum_ {j, j ^ {\prime}, k, k ^ {\prime}} x _ {j} (\mathbf {p}) y _ {k} (\mathbf {q}) x _ {j ^ {\prime}} (\mathbf {p}) y _ {k ^ {\prime}} (\mathbf {q}) \cdot \\ \left(\left(Z _ {j k} + Z _ {j ^ {\prime} k ^ {\prime}} - Z _ {j k ^ {\prime}} - Z _ {j ^ {\prime} k}\right) ^ {2} - \left(C _ {j k} + C _ {j ^ {\prime} k ^ {\prime}} - C _ {j k ^ {\prime}} - C _ {j ^ {\prime} k}\right) ^ {2}\right) \\ \leq - \delta \left(\frac {1 - \eta}{2}\right) ^ {4} + | S _ {1} | ^ {2} \cdot | S _ {2} | ^ {2} \cdot \eta \cdot 1 6 \Upsilon^ {2}. \\ \end{array}
+$$
+
+The last inequality holds as $(C_{jk} + C_{j'k'} - C_{jk'} - C_{j'k})^2 - (Z_{jk} + Z_{j'k'} - Z_{jk'} - Z_{j'k})^2 \leq 16\Upsilon^2$ . The value of $-\delta \left(\frac{1 - \eta}{2}\right)^4 + |S_1|^2 \cdot |S_2|^2 \cdot \eta \cdot 16\Upsilon^2$ will be negative if we pick a small enough $\eta$ .
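Both directions of the proof rest on the decomposition $C_{(\mathbf{A},\mathbf{B})} = C_{(\mathbf{Z},-\mathbf{Z})} + C_{(\mathbf{C},\mathbf{C})}$. The sketch below (ours) checks it numerically, taking $\mathbf{Z} = (\mathbf{A}-\mathbf{B})/2$ and $\mathbf{C} = (\mathbf{A}+\mathbf{B})/2$ — our reading of the zero-sum/coordination split, since the excerpt does not restate the definitions — along with the signs of the two parts:

```python
import numpy as np

rng = np.random.default_rng(1)

def C(A, B, x, y):
    # C_{(A,B)}(p,q) = -E[(A_jk - A_j - A_k)(B_jk - B_j - B_k)] + E[A_jk] E[B_jk]
    Aj, Ak, Bj, Bk = A @ y, A.T @ x, B @ y, B.T @ x
    W = np.outer(x, y)
    At = A - Aj[:, None] - Ak[None, :]
    Bt = B - Bj[:, None] - Bk[None, :]
    return -np.sum(W * At * Bt) + (x @ A @ y) * (x @ B @ y)

n, m = 4, 3
A, B = rng.standard_normal((n, m)), rng.standard_normal((n, m))
Z, Cm = (A - B) / 2, (A + B) / 2      # zero-sum part and coordination part
x = rng.random(n); x /= x.sum()
y = rng.random(m); y /= y.sum()

# decomposition used in the first line of the display above
assert np.isclose(C(A, B, x, y), C(Z, -Z, x, y) + C(Cm, Cm, x, y))
# the two parts are a variance and a negated variance, hence the opposite signs
assert C(Z, -Z, x, y) >= 0.0
assert C(Cm, Cm, x, y) <= 0.0
```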
+
+Proof of Observation 13. Consider a random process, where $j, j' \in S_1$ are randomly picked according to distribution $\mathbf{x}(\mathbf{p})$ , and $k, k' \in S_2$ are randomly picked according to distribution $\mathbf{y}(\mathbf{q})$ . Then the RHS of Observation 13 can be expressed as $\frac{1}{4} \cdot \mathbb{E}\left[(Z_{jk} + Z_{j'k'} - Z_{jk'} - Z_{j'k})^2\right]$ .
+
+Then we expand the squared term in the expectation. Observing the symmetries within the expansion, we immediately have
+
+$$
+\frac {1}{4} \cdot \mathbb {E} \left[ (Z _ {j k} + Z _ {j ^ {\prime} k ^ {\prime}} - Z _ {j k ^ {\prime}} - Z _ {j ^ {\prime} k}) ^ {2} \right] = \mathbb {E} \left[ (Z _ {j k}) ^ {2} \right] - \mathbb {E} \left[ Z _ {j k} Z _ {j k ^ {\prime}} \right] - \mathbb {E} \left[ Z _ {j k} Z _ {j ^ {\prime} k} \right] + \mathbb {E} \left[ Z _ {j k} Z _ {j ^ {\prime} k ^ {\prime}} \right].
+$$
+
Let $Z_{j}\coloneqq [\mathbf{Z}\mathbf{y}]_{j}$ and $Z_{k}\coloneqq [\mathbf{Z}^{\top}\mathbf{x}]_{k}$ . Then we have
+
+$$
+\mathbb {E} \left[ Z _ {j k} Z _ {j k ^ {\prime}} \right] = \sum_ {j, k} x _ {j} y _ {k} Z _ {j k} \sum_ {k ^ {\prime}} y _ {k ^ {\prime}} Z _ {j k ^ {\prime}} = \sum_ {j} x _ {j} [ \mathbf {Z y} ] _ {j} \sum_ {k} y _ {k} Z _ {j k} = \sum_ {j} x _ {j} (Z _ {j}) ^ {2} = \mathbb {E} \left[ (Z _ {j}) ^ {2} \right].
+$$
+
+Similarly, $\mathbb{E}\left[Z_{jk}Z_{j'k}\right] = \mathbb{E}\left[(Z_k)^2\right]$ . Lastly, $\mathbb{E}\left[Z_{jk}Z_{j'k'}\right] = \mathbb{E}\left[Z_{jk}\right]^2$ . Thus, the RHS of Observation 13 is simplified to
+
+$$
+\mathbb {E} \left[ (Z _ {j k}) ^ {2} \right] - \mathbb {E} \left[ (Z _ {j}) ^ {2} \right] - \mathbb {E} \left[ (Z _ {k}) ^ {2} \right] + \mathbb {E} \left[ Z _ {j k} \right] ^ {2}.
+$$
+
+We complete the proof by noting that from the definition of $C_{(\mathbf{Z}, - \mathbf{Z})}(\cdot)$ in Eqn. (4), $C_{(\mathbf{Z}, - \mathbf{Z})}(\cdot)$ can be rewritten as
+
+$$
+\mathbb {E} \left[ (Z _ {j k}) ^ {2} \right] - \mathbb {E} \left[ Z _ {j} Z _ {j k} \right] - \mathbb {E} \left[ Z _ {k} Z _ {j k} \right] + \mathbb {E} \left[ Z _ {j k} \right] ^ {2},
+$$
+
while $\mathbb{E}\left[Z_jZ_{jk}\right] = \sum_jx_jZ_j\sum_ky_kZ_{jk} = \sum_jx_jZ_jZ_j = \mathbb{E}\left[(Z_j)^2\right]$ , and similarly $\mathbb{E}\left[Z_kZ_{jk}\right] = \mathbb{E}\left[(Z_k)^2\right]$ .
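The identity of Observation 13 can be confirmed by brute force. The sketch below (ours; matrix size and distributions are arbitrary) compares the simplified expression derived above against the quadruple sum:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n, m = 3, 3
Z = rng.standard_normal((n, m))
x = rng.random(n); x /= x.sum()
y = rng.random(m); y /= y.sum()

# simplified form: E[(Z_jk)^2] - E[(Z_j)^2] - E[(Z_k)^2] + E[Z_jk]^2
Zj, Zk = Z @ y, Z.T @ x
e = x @ Z @ y
lhs = np.sum(np.outer(x, y) * Z**2) - x @ Zj**2 - y @ Zk**2 + e**2

# quadruple sum: (1/4) sum x_j y_k x_j' y_k' (Z_jk + Z_j'k' - Z_jk' - Z_j'k)^2
rhs = 0.0
for j, jp, k, kp in product(range(n), range(n), range(m), range(m)):
    d = Z[j, k] + Z[jp, kp] - Z[j, kp] - Z[jp, k]
    rhs += x[j] * y[k] * x[jp] * y[kp] * d * d
rhs /= 4.0

assert np.isclose(lhs, rhs)
```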
+
Proof of Theorem 15. By Lemma 4, it suffices to prove that $\bar{c}_{(\mathbf{A},\mathbf{B})}(S^{\delta}) = \inf_{(\mathbf{p},\mathbf{q})\in S^{\delta}}C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q})\geq \theta^2\delta^4$ .
+
+This holds because the matrix $\mathbf{Z}$ $\theta$ -dominates $\mathbf{C}$ , which implies there exist $j, j^{\prime}, k,$ and $k^{\prime}$ such that
+
+$$
+\left(\mathbf {Z} _ {j k} + \mathbf {Z} _ {j ^ {\prime} k ^ {\prime}} - \mathbf {Z} _ {j k ^ {\prime}} - \mathbf {Z} _ {j ^ {\prime} k}\right) ^ {2} \geq \left(\mathbf {C} _ {j k} + \mathbf {C} _ {j ^ {\prime} k ^ {\prime}} - \mathbf {C} _ {j k ^ {\prime}} - \mathbf {C} _ {j ^ {\prime} k}\right) ^ {2} + \theta^ {2}.
+$$
+
+By applying Observation 13, $C_{(\mathbf{Z}, - \mathbf{Z})}(\mathbf{p},\mathbf{q})\geq C_{(\mathbf{C}, - \mathbf{C})}(\mathbf{p},\mathbf{q}) + \theta^2\delta^4$ , because for $(\mathbf{p},\mathbf{q})\in S^{\delta}$ every $x_{j}(\mathbf{p}),y_{k}(\mathbf{q}),x_{j^{\prime}}(\mathbf{p}),y_{k^{\prime}}(\mathbf{q})$ is at least $\delta$ . By noting that $C_{(\mathbf{C},\mathbf{C})}(\mathbf{p},\mathbf{q}) = -C_{(\mathbf{C}, - \mathbf{C})}(\mathbf{p},\mathbf{q})$ and $C_{(\mathbf{A},\mathbf{B})}(\mathbf{p},\mathbf{q}) = C_{(\mathbf{Z}, - \mathbf{Z})}(\mathbf{p},\mathbf{q}) + C_{(\mathbf{C},\mathbf{C})}(\mathbf{p},\mathbf{q})$ , the result follows.
+
+Proof of Lemma 16. A key observation is the following equality:
+
+$$
C _ {(\mathbf {K}, - \mathbf {K})} (\mathbf {p}, \mathbf {q}) = \min _ {\mathbf {g}, \mathbf {h}} F (\mathbf {g}, \mathbf {h}), \quad \text{where } F (\mathbf {g}, \mathbf {h}) = \sum_ {j, k} x _ {j} (\mathbf {p}) \cdot y _ {k} (\mathbf {q}) \cdot \left(K _ {j k} - g _ {j} - h _ {k}\right) ^ {2}. \tag {8}
+$$
+
Recall the notations $K_{j} = \sum_{k}y_{k}K_{jk}$ and $K_{k} = \sum_{j}x_{j}K_{jk}$ , and let $e\coloneqq \sum_{j,k}x_jy_kK_{jk}\equiv \mathbb{E}\left[K_{jk}\right]$ . The equality (8) holds due to the following observations: (i) $F(\mathbf{g},\mathbf{h})$ is a smooth convex function of its variables, thus all minimum points have the same function value; (ii) if $\frac{\partial F}{\partial g_j}$ and $\frac{\partial F}{\partial h_k}$ are all zero at some point $(\mathbf{g},\mathbf{h})$ , then the point is a minimum point of $F$ ; (iii) $C_{(\mathbf{K}, - \mathbf{K})}(\mathbf{p},\mathbf{q})$ is the variance of the random variable $K_{jk} - K_{j} - K_{k}$ (see the end of Section 2), and thus by the definition of variance,
+
+$$
\begin{array}{l} C _ {(\mathbf {K}, - \mathbf {K})} (\mathbf {p}, \mathbf {q}) = \mathbb {E} \left[ \left(K _ {j k} - K _ {j} - K _ {k} - \mathbb {E} \left[ K _ {j k} - K _ {j} - K _ {k} \right]\right) ^ {2} \right] \\ = \mathbb {E} \left[ \left(K _ {j k} - K _ {j} - K _ {k} + e\right) ^ {2} \right] \quad (\text{since } \mathbb {E} \left[ K _ {j k} - K _ {j} - K _ {k} \right] = - \mathbb {E} \left[ K _ {j k} \right] = - e) \\ = \sum_ {j, k} x _ {j} y _ {k} (K _ {j k} - (K _ {j} - e) - K _ {k}) ^ {2} = F (\mathbf {g} ^ {\#}, \mathbf {h} ^ {\#}), \\ \end{array}
+$$
+
+where $g_{j}^{\#} = K_{j} - e$ and $h_k^\# = K_k$ ; and (iv) at $(\mathbf{g}^{\#},\mathbf{h}^{\#})$ , the partial derivatives stated in (ii) are all zero.
+
Comparing this observation with the definition of $r(\cdot)$ , it is easy to see that $C_{(\mathbf{K}, - \mathbf{K})}(\mathbf{p},\mathbf{q})\leq r(\mathbf{K})^2$ .
+
To see $C_{(\mathbf{K}, - \mathbf{K})}(\mathbf{p},\mathbf{q})\geq (r(\mathbf{K})\cdot \delta)^2$ , we first let $\mathbf{g}^*$ and $\mathbf{h}^*$ be the optimal choices of $\mathbf{g}$ and $\mathbf{h}$ in $C_{(\mathbf{K}, - \mathbf{K})}(\mathbf{p},\mathbf{q}) = \min_{\mathbf{g},\mathbf{h}}\sum_{j,k}x_j(\mathbf{p})\cdot y_k(\mathbf{q})\cdot (K_{jk} - g_j - h_k)^2$ . Due to the specification of the linear program (7), we have
+
+$$
+2 \cdot r (\mathbf {K}) \leq \max _ {j, k} \left\{K _ {j k} - g _ {j} ^ {*} - h _ {k} ^ {*} \right\} - \min _ {j, k} \left\{K _ {j k} - g _ {j} ^ {*} - h _ {k} ^ {*} \right\}.
+$$
+
+Therefore,
+
+$$
\max \left\{\left(\max _ {j, k} \left\{K _ {j k} - g _ {j} ^ {*} - h _ {k} ^ {*} \right\}\right) ^ {2}, \left(\min _ {j, k} \left\{K _ {j k} - g _ {j} ^ {*} - h _ {k} ^ {*} \right\}\right) ^ {2} \right\} \geq r (\mathbf {K}) ^ {2}.
+$$
+
This immediately implies that $C_{(\mathbf{K}, - \mathbf{K})}(\mathbf{p},\mathbf{q})\geq (r(\mathbf{K})\cdot \delta)^2$ , since the entry $(j,k)$ attaining the maximum above carries probability weight $x_j(\mathbf{p})\cdot y_k(\mathbf{q})\geq \delta^2$ .
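The key identity in this proof, $C_{(\mathbf{K},-\mathbf{K})}(\mathbf{p},\mathbf{q}) = F(\mathbf{g}^{\#},\mathbf{h}^{\#})$ with $(\mathbf{g}^{\#},\mathbf{h}^{\#})$ a minimizer of $F$, can be checked numerically. A sketch (ours; sizes and distributions arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 5
K = rng.standard_normal((n, m))
x = rng.random(n); x /= x.sum()
y = rng.random(m); y /= y.sum()
W = np.outer(x, y)

def F(g, h):
    # the weighted least-squares objective of Eqn. (8)
    return float(np.sum(W * (K - g[:, None] - h[None, :])**2))

Kj, Kk = K @ y, K.T @ x
e = x @ K @ y
# C_{(K,-K)}(p,q) as the variance of K_jk - K_j - K_k
C_K = float(np.sum(W * (K - Kj[:, None] - Kk[None, :] + e)**2))

g_sharp, h_sharp = Kj - e, Kk          # the minimizer identified in the proof
assert np.isclose(F(g_sharp, h_sharp), C_K)

# F is convex, so no perturbation of (g#, h#) attains a smaller value
for _ in range(100):
    g = g_sharp + 0.5 * rng.standard_normal(n)
    h = h_sharp + 0.5 * rng.standard_normal(m)
    assert F(g, h) >= C_K - 1e-12
```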
+
+# C MULTI-PLAYER GAMES
+
Computing the volume change of a learning algorithm in a multi-player game is slightly more involved than in the two-player case. We present a local equivalence formula for the volume change between normal-form and graphical games, which provides an intuitive procedure for understanding volume changes. Proposition 20 shows that in multi-player games, the volume-changing behaviors of MWU and OMWU are again opposite to each other (as was shown for bimatrix games in Cheung & Piliouras (2020)).
+
Graphical Games. A graphical game (Kearns et al. (2001)) is a special type of $N$ -player game whose payoffs can be compactly represented. In a graphical game $\mathbf{H}$ , for each pair of players $i, k$ , there is an edge-game, which is a bimatrix game between the two players, denoted by $(\mathbf{H}^{i,k}, (\mathbf{H}^{k,i})^{\mathrm{T}})$ , where $\mathbf{H}^{i,k} \in \mathbb{R}^{n_i \times n_k}$ is the payoff matrix denoting the payoffs to Player $i$ . The payoff to Player $i$ at strategy profile $\mathbf{s} = (s_1, s_2, \dots, s_N)$ is then the sum of payoffs to Player $i$ over all her edge-games, i.e. $u_i(\mathbf{s}) = \sum_{k \neq i} H_{s_i, s_k}^{i,k}$ . As is standard, this payoff function is extended via expectation when the inputs are mixed strategies.
+
Here, we first use an observation from Cheung & Piliouras (2019) to construct a family of multi-player graphical games where MWU is Lyapunov chaotic in $S^{N,\delta} \coloneqq \{(\mathbf{p}_1,\dots ,\mathbf{p}_N)\mid \forall i\in [N],j\in S_i,x_{ij}(\mathbf{p}_i)\geq \delta \}$ . It was observed that the function $C_{\mathbf{H}}(\mathbf{p})$ defined in Eqn. (5) is the sum of $C_{(\mathbf{H}^{i,k},(\mathbf{H}^{k,i})^{\top})}(\mathbf{p}_i,\mathbf{p}_k)$ over all pairs of Players $i < k$ . This observation yields Theorem 18.
+
Theorem 18. Let $\mathcal{G}^{\uparrow}$ denote the family of bimatrix games which satisfy the condition either in Theorem 15 or in Theorem 17. In an $N$ -player graphical game where each edge-game is drawn from $\mathcal{G}^{\uparrow}$ , if all players employ MWU with a sufficiently small step-size $\epsilon$ , then the dynamical system is Lyapunov chaotic in $S^{N,\delta}$ with Lyapunov exponent $N(N - 1)\theta^2\delta^4\epsilon^2 / \left(4\sum_{i = 1}^{N}n_i\right)$ .
+
+Local Equivalence of General Games and Graphical Games. Next, we present a theorem which connects the value of $C_{\mathbf{G}}(\mathbf{p})$ of a general game to $C_{\mathbf{H}}(\mathbf{p})$ , where $\mathbf{H}$ is a graphical game.
+
+Theorem 19. Given an $N$ -player normal-form game $\mathbf{G}$ and any point $\mathbf{p}$ in the cumulative payoff space, the value of $C_{\mathbf{G}}(\mathbf{p})$ is the same as $C_{\mathbf{H}}(\mathbf{p})$ , where $\mathbf{H}$ is a graphical game specified as follows: for each pair of Players $i, k$ and $j \in S_i, \ell \in S_k$ , the payoff to Player $i$ in her edge-game with Player $k$ when Player $i$ picks $j$ and Player $k$ picks $\ell$ is $H_{j\ell}^{ik} \coloneqq U_{j\ell}^{ik}$ , where $U_{j\ell}^{ik}$ is defined in Eqn. (1).
+
This theorem shows that for any game $\mathbf{G}$ , the value of $C_{\mathbf{G}}(\mathbf{p})$ is the same as in a particular graphical game, in which each pair of players $(i,k)$ plays a bimatrix game whose utility is exactly the utility of the original game $\mathbf{G}$ , with the expectation taken over the randomness of the other players' strategies. If the original game $\mathbf{G}$ is a graphical game, then in this graphical game $H_{j\ell}^{ik} = U_{j\ell}^{ik} + c_{-i, - k}$ , where $c_{-i, - k}$ is a parameter which does not depend on Players $i$ and $k$ .
+
Theorem 19 will be used in Appendix D to show the following proposition, which shows that the volume-changing behaviors of MWU and OMWU are opposite to each other in multi-player games, generalizing a prior result of Cheung & Piliouras (2020).
+
+Proposition 20. The volume integrands of MWU and OMWU in a multi-player game $\mathbf{G}$ are respectively $1 + C_{\mathbf{G}}(\mathbf{p})\cdot \epsilon^{2} + \mathcal{O}(\epsilon^{3})$ and $1 - C_{\mathbf{G}}(\mathbf{p})\cdot \epsilon^{2} + \mathcal{O}(\epsilon^{3})$ . Thus, volume expands locally around a cumulative payoff point $\mathbf{p}$ for MWU (resp. OMWU) if $C_{\mathbf{G}}(\mathbf{p})$ is positive (resp. negative).
+
Multiplayer Potential Game. By Observation 10, the volume behavior of a two-player potential game is equivalent to that of a corresponding coordination game. In this section, we show that this equivalence extends to the multi-player setting.
+
Proposition 21. Suppose $\mathcal{P}$ is the potential function of a potential game $\mathbf{U}$ . Let $\mathbf{U}^{\mathcal{P}}$ be the game in which every player receives payoff $\mathcal{P}(\mathbf{s})$ when the players play strategy profile $\mathbf{s}$ . Then $C_{\mathbf{U}}(\mathbf{p}) = C_{\mathbf{U}^{\mathcal{P}}}(\mathbf{p}) \leq 0$ .
+
+In Appendix E, we will discuss some situations where $C_{\mathbf{U}}(\mathbf{p})$ is strictly less than 0, thus OMWU is Lyapunov chaotic therein.
+
+# D LOCAL EQUIVALENCE OF VOLUME CHANGE BETWEEN NORMAL-FORM AND GRAPHICAL GAMES
+
Here, we are concerned with the volume change of a learning algorithm in multi-player games. We first recap from Cheung & Piliouras (2020) how the volume change is computed for dynamical systems
+
which are gradual (i.e. those governed by a small step-size), followed by a continuous-time analogue of OMWU in games, which is crucial for analyzing the volume change of discrete-time OMWU. Then we compute the volume changes of MWU and OMWU in multi-player graphical games and normal-form games respectively. Once these are done, the proofs of Proposition 20 and Theorem 17 become apparent.
+
+# D.1 DISCRETE-TIME DYNAMICAL SYSTEMS AND VOLUME OF FLOW
+
+We consider discrete-time dynamical systems in $\mathbb{R}^d$ . Such a dynamical system is determined recursively by a starting point $\mathbf{s}(0) \in \mathbb{R}^d$ and an update rule of the form $\mathbf{s}(t + 1) = G(\mathbf{s}(t))$ , for some function $G: \mathbb{R}^d \to \mathbb{R}^d$ . Here, we focus on the special case when the update rule is gradual, i.e. it is in the form of
+
+$$
+\mathbf {s} (t + 1) = \mathbf {s} (t) + \epsilon \cdot F (\mathbf {s} (t)),
+$$
+
+where $F: \mathbb{R}^d \to \mathbb{R}^d$ is a smooth function and step-size $\epsilon > 0$ . When $F$ and $\epsilon$ are given, the flow of the starting point $\mathbf{s}(0)$ at time $t$ , denoted by $\Phi(t, \mathbf{s}(0))$ , is simply the point $\mathbf{s}(t)$ generated by the above recursive update rule. Then the flow of a set $S \subset \mathbb{R}^d$ at time $t$ , denoted by $\Phi(t, S)$ , is the set $\{\Phi(t, \mathbf{s}) \mid \mathbf{s} \in S\}$ . Since $F$ does not depend on time $t$ , we have the following equality: $\Phi(t_1 + t_2, S) = \Phi(t_2, \Phi(t_1, S))$ .
+
+By equipping $\mathbb{R}^d$ with the standard Lebesgue measure, the volume of a measurable set $S$ , denoted by $\mathrm{vol}(S)$ , is simply its measure. Given a bounded and measurable set $S \subset \mathbb{R}^d$ , if the discrete flow in one time step maps $S$ to $S' = \Phi(1, S)$ injectively, then by integration by substitution for multi-variables,
+
+$$
+\operatorname {v o l} \left(S ^ {\prime}\right) = \int_ {\mathbf {s} \in S} \det \left(\mathbf {I} + \epsilon \cdot \mathbf {J} (\mathbf {s})\right) d V, \tag {9}
+$$
+
+where $\mathbf{I}$ is the identity matrix, and $\mathbf{J}(\mathbf{s})$ is the Jacobian matrix defined below:
+
+$$
+\mathbf {J} (\mathbf {s}) = \left[ \begin{array}{c c c c} \frac {\partial}{\partial s _ {1}} F _ {1} (\mathbf {s}) & \frac {\partial}{\partial s _ {2}} F _ {1} (\mathbf {s}) & \dots & \frac {\partial}{\partial s _ {d}} F _ {1} (\mathbf {s}) \\ \frac {\partial}{\partial s _ {1}} F _ {2} (\mathbf {s}) & \frac {\partial}{\partial s _ {2}} F _ {2} (\mathbf {s}) & \dots & \frac {\partial}{\partial s _ {d}} F _ {2} (\mathbf {s}) \\ \vdots & \vdots & \ddots & \vdots \\ \frac {\partial}{\partial s _ {1}} F _ {d} (\mathbf {s}) & \frac {\partial}{\partial s _ {2}} F _ {d} (\mathbf {s}) & \dots & \frac {\partial}{\partial s _ {d}} F _ {d} (\mathbf {s}) \end{array} \right]. \tag {10}
+$$
+
Clearly, analyzing the determinant in the integrand in Eqn. (9) is crucial in volume analysis; we call it the volume integrand. When the determinant is expanded using the Leibniz formula, it becomes a polynomial of $\epsilon$ , in the form of $1 + C(\mathbf{s}) \cdot \epsilon^{h} + \mathcal{O}(\epsilon^{h+1})$ for some integer $h \geq 1$ . Thus, when the step-size $\epsilon$ is sufficiently small, the sign of $C(\mathbf{s})$ dictates whether the volume expands or contracts.
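This expansion is the characteristic-polynomial identity $\det(\mathbf{I}+\epsilon\mathbf{J}) = \sum_{k\geq 0} e_k(\mathbf{J})\,\epsilon^k$, where $e_k$ is the sum of the $k\times k$ principal minors of $\mathbf{J}$; when $e_1 = \operatorname{tr}\mathbf{J} = 0$, the $\epsilon^2$ coefficient is the leading correction. A small numerical sketch (ours) confirming the identity:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
d = 4
J = rng.standard_normal((d, d))

def e(k):
    # e_k(J): sum of all k x k principal minors of J
    return sum(np.linalg.det(J[np.ix_(idx, idx)]) for idx in combinations(range(d), k))

for eps in (0.1, 0.01):
    lhs = np.linalg.det(np.eye(d) + eps * J)
    rhs = 1.0 + sum(eps**k * e(k) for k in range(1, d + 1))
    assert np.isclose(lhs, rhs)   # det(I + eps*J) is exactly this polynomial in eps
```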
+
+# D.2 CONTINUOUS-TIME ANALOGUE OF OMWU
+
OMWU does not fall into the category of dynamical systems defined above, since its update rule is in the form of $\mathbf{s}(t + 1) = G(\mathbf{s}(t),\mathbf{s}(t - 1))$ . Fortunately, Cheung & Piliouras (2020) showed that OMWU can be well-approximated by the online Euler discretization of a system of ordinary differential equations (ODE), and thus it can be well-approximated by a dynamical system.
+
+The ODE system is given below. $\mathbf{p}$ is a dual (cumulative payoff) vector variable, $\mathbf{u}:\mathbb{R}^{+}\to \mathbb{R}^{d}$ is the function such that $\mathbf{u}(t)$ gives the instantaneous payoff vector at time $t$ . We assume that $\mathbf{u}$ is twice differentiable with bounded second-derivatives, and $\dot{\mathbf{u}}$ denotes the time-derivative of $\mathbf{u}$ .
+
+$$
+\dot {\mathbf {p}} = \mathbf {u} + \epsilon \cdot \dot {\mathbf {u}}, \tag {11}
+$$
+
Online Euler discretization (OED) of Eqn. (11) refers to the following time-discretization of the ODE system. In applications, $\dot{\mathbf{u}}$ might not be explicitly given, and the sequence $\mathbf{u}(0),\mathbf{u}(1),\mathbf{u}(2),\dots$ is available online (i.e., at time $t$ we only have access to $\mathbf{u}(\tau)$ for $\tau = 0,1,\dots ,t$ ). As the discretization step is $\epsilon$ , we approximate $\dot{\mathbf{u}}(t)$ by $(\mathbf{u}(t) - \mathbf{u}(t - 1)) / \epsilon$ . With this approximation, the OED of Eqn. (11) yields
+
+$$
\mathbf {p} (t + 1) = \mathbf {p} (t) + \epsilon \cdot \left[ \mathbf {u} (t) + \epsilon \cdot \frac {\mathbf {u} (t) - \mathbf {u} (t - 1)}{\epsilon} \right] = \mathbf {p} (t) + \epsilon \cdot \left[ 2 \cdot \mathbf {u} (t) - \mathbf {u} (t - 1) \right],
+$$
+
which is exactly the OMWU update rule in a general context.
+
Comparing the OED with the standard Euler discretization
+
+$$
+\mathbf {p} (t + 1) = \mathbf {p} (t) + \epsilon \cdot [ \mathbf {u} (t) + \epsilon \cdot \dot {\mathbf {u}} (t) ],
+$$
+
we see that the OED incurs a local error due to the approximation of $\dot{\mathbf{u}}(t)$ , which can be bounded by $\mathcal{O}(\epsilon^3)$ . Cheung & Piliouras (2020) showed that the volume integrand is of the form $1 + C(\mathbf{s}) \cdot \epsilon^2 + \mathcal{O}(\epsilon^3)$ ; since the local error does not affect the two highest-order terms, it can be ignored henceforth.
+
+# D.3 MWU IN GRAPHICAL GAMES
+
+Let $\mathbf{H}$ be a graphical game of $N$ players, where between every pair of Players $i$ and $k$ , the payoff bimatrices are $(\mathbf{H}^{ik}, (\mathbf{H}^{ki})^{\top})$ . In the cumulative payoff space, let $\mathbf{p} = (\mathbf{p}_1, \dots, \mathbf{p}_N)$ denote the cumulative payoff profile, and let $\mathbf{x} = (\mathbf{x}_1, \dots, \mathbf{x}_N)$ denote the corresponding mixed strategy profile, where $\mathbf{x}_i$ is a function of $\mathbf{p}_i$ . We will write $\mathbf{x}_i$ and $\mathbf{x}_i(\mathbf{p}_i)$ interchangeably. The expected payoff to strategy $j$ of Player $i$ is
+
+$$
+u_{ij}(\mathbf{p}) = \sum_{\substack{k\in [N]\\ k\neq i}}[\mathbf{H}^{ik}\cdot \mathbf{x}_{k}(\mathbf{p}_{k})]_{j},
+$$
+
+which will be used to compute the Jacobian matrices of MWU and OMWU.
+
For MWU, the Jacobian matrix $\mathbf{J}$ is a square matrix with each row and each column indexed by $(i,j)$ , where $i$ is a player and $j\in S_{i}$ . The precise values of its entries are given below:
+
+$$
+\forall j _ {1}, j _ {2} \in S _ {i}, \epsilon J _ {(i, j _ {1}), (i, j _ {2})} = \epsilon \cdot \frac {\partial u _ {i j _ {1}}}{\partial p _ {i j _ {2}}} = 0 \tag {12}
+$$
+
+and
+
+$$
\forall i \neq k, j \in S _ {i}, \ell \in S _ {k}, \epsilon J _ {(i, j), (k, \ell)} = \epsilon \cdot \frac {\partial u _ {i j}}{\partial p _ {k \ell}} = \epsilon x _ {k \ell} \cdot \left(H _ {j \ell} ^ {i k} - [ \mathbf {H} ^ {i k} \cdot \mathbf {x} _ {k} ] _ {j}\right). \tag {13}
+$$
+
Then, by expansion using the Leibniz formula, the determinant of $(\mathbf{I} + \epsilon \cdot \mathbf{J})$ is
+
+$$
+\begin{array}{l} 1 - \sum_{\substack{i\in [N]\\ j\in S_{i}}}\sum_{\substack{k > i\\ \ell \in S_{k}}}(\epsilon J_{(i,j),(k,\ell)})(\epsilon J_{(k,\ell),(i,j)}) + \mathcal{O}(\epsilon^{3}) \\ = 1 - \epsilon^ {2} \cdot \sum_ {\substack {i \in [ N ] \\ j \in S _ {i}}} \sum_ {\substack {k > i \\ \ell \in S _ {k}}} x _ {i j} x _ {k \ell} \left(H _ {\ell j} ^ {k i} - \left[ \mathbf {H} ^ {k i} \cdot \mathbf {x} _ {i} \right] _ {\ell}\right) \left(H _ {j \ell} ^ {i k} - \left[ \mathbf {H} ^ {i k} \cdot \mathbf {x} _ {k} \right] _ {j}\right) + \mathcal {O} (\epsilon^ {3}). \tag{14} \\ \end{array}
+$$
+
+By noting the similarity of the double summation to $C_{(\mathbf{A},\mathbf{B})}(\cdot)$ in Eqn. (4), we can immediately rewrite the above expression as
+
+$$
+1 + \epsilon^ {2} \cdot \sum_ {i, k: 1 \leq i < k \leq N} C _ {\left(\mathbf {H} ^ {i k}, \left(\mathbf {H} ^ {k i}\right) ^ {\top}\right)} \left(\mathbf {p} _ {i}, \mathbf {p} _ {k}\right) + \mathcal {O} \left(\epsilon^ {3}\right). \tag {15}
+$$
+
+# D.4 OMWU IN GRAPHICAL GAMES
+
+For OMWU, as we pointed out already, we first consider its continuous-time analogue. Thus, we need to compute $\dot{\mathbf{u}}$ in the continuous-time setting. By the chain rule, we have
+
+$$
+\dot{u}_{ij}(\mathbf{p}) = \sum_{\substack{k\in [N]\\ k\neq i\\ \ell \in S_{k}}}\frac{\partial[\mathbf{H}^{ik}\cdot\mathbf{x}_{k}(\mathbf{p}_{k})]_{j}}{\partial p_{k\ell}}\cdot \frac{\mathrm{d}p_{k\ell}}{\mathrm{d}t} = \sum_{\substack{k\in [N]\\ k\neq i\\ \ell \in S_{k}}}x_{k\ell}\cdot \big(H_{j\ell}^{ik} - [\mathbf{H}^{ik}\cdot \mathbf{x}_{k}]_{j}\big)\cdot \frac{\mathrm{d}p_{k\ell}}{\mathrm{d}t},
+$$
+
+and hence
+
+$$
+\frac{\mathrm{d}p_{ij}}{\mathrm{d}t} = \sum_{\substack{k\in [N]\\ k\neq i}}[\mathbf{H}^{ik}\cdot \mathbf{x}_{k}]_{j} + \epsilon \cdot \sum_{\substack{k\in [N]\\ k\neq i\\ \ell \in S_{k}}}x_{k\ell}\cdot \left(H_{j\ell}^{ik} - [\mathbf{H}^{ik}\cdot \mathbf{x}_{k}]_{j}\right)\cdot \frac{\mathrm{d}p_{k\ell}}{\mathrm{d}t}.
+$$
+
+Note that this is a recurrence formula for $\frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t}$ . By iterating it$^{12}$, we have
+
+$$
+\frac{\mathrm{d}p_{ij}}{\mathrm{d}t} = \sum_{\substack{k\in [N]\\ k\neq i}}[\mathbf{H}^{ik}\cdot \mathbf{x}_{k}]_{j} + \epsilon \cdot \sum_{\substack{k\in [N]\\ k\neq i\\ \ell \in S_{k}}}x_{k\ell}\cdot \big(H_{j\ell}^{ik} - [\mathbf{H}^{ik}\cdot \mathbf{x}_{k}]_{j}\big)\cdot \left(\sum_{\substack{r\in [N]\\ r\neq k}}[\mathbf{H}^{kr}\cdot \mathbf{x}_{r}]_{\ell}\right) + \mathcal{O}(\epsilon^{2}).
+$$
+
+Hence, its standard Euler discretization, which approximates the ODE with local error $\mathcal{O}(\epsilon^3)$ , can be written as below (where we ignore the $\mathcal{O}(\epsilon^3)$ error terms):
+
+$$
+p_{ij}(t + 1) = p_{ij}(t) + \epsilon \sum_{\substack{k\in [N]\\ k\neq i}}[\mathbf{H}^{ik}\cdot \mathbf{x}_{k}]_{j} + \epsilon^{2}\sum_{\substack{k\in [N]\\ k\neq i\\ \ell \in S_{k}}}x_{k\ell}\cdot \bigl(H^{ik}_{j\ell} - [\mathbf{H}^{ik}\cdot \mathbf{x}_{k}]_{j}\bigr)\cdot \left(\sum_{\substack{r\in [N]\\ r\neq k}}[\mathbf{H}^{kr}\cdot \mathbf{x}_{r}]_{\ell}\right).
+$$
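+For concreteness, the second-order Euler step above can be written out for a two-player instance. This is a sketch under the assumption $\mathbf{x}_k = \mathrm{softmax}(\mathbf{p}_k)$; the loop at the end re-derives the same update index by index as a consistency check.
+
+```python
+import numpy as np
+
+def softmax(p):
+    e = np.exp(p - p.max())
+    return e / e.sum()
+
+rng = np.random.default_rng(2)
+n1, n2 = 3, 4
+H12 = rng.standard_normal((n1, n2))   # H^{12}
+H21 = rng.standard_normal((n2, n1))   # H^{21}
+
+def omwu_step_p1(p1, p2, eps):
+    """One discretized OMWU step for player 1's logits p_1."""
+    x1, x2 = softmax(p1), softmax(p2)
+    u1 = H12 @ x2                     # first-order term [H^{12} x_2]_j
+    u2 = H21 @ x1                     # opponent's first-order term [H^{21} x_1]_l
+    # eps^2 correction: sum_l x_{2l} (H^{12}_{jl} - u1_j) u2_l
+    corr = (x2[None, :] * (H12 - u1[:, None])) @ u2
+    return p1 + eps * u1 + eps**2 * corr
+
+eps = 0.1
+p1, p2 = rng.standard_normal(n1), rng.standard_normal(n2)
+step = omwu_step_p1(p1, p2, eps)
+
+# Loop form of the same update, as a sanity check on the indexing
+x1, x2 = softmax(p1), softmax(p2)
+u1, u2 = H12 @ x2, H21 @ x1
+ref = p1 + eps * u1
+for j in range(n1):
+    for l in range(n2):
+        ref[j] += eps**2 * x2[l] * (H12[j, l] - u1[j]) * u2[l]
+assert np.allclose(step, ref)
+```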
+
+With this, we are ready to compute the Jacobian matrix $\mathbf{J}$ for OMWU. For all $j_1,j_2\in S_i$ ,
+
+$$
+\epsilon J_{(i,j_1),(i,j_2)} = \epsilon^{2} \sum_{\substack{k \in [N] \\ k \neq i \\ \ell \in S_k}} x_{k\ell} \cdot \left(H_{j_1 \ell}^{ik} - \left[\mathbf{H}^{ik}\cdot \mathbf{x}_{k}\right]_{j_1}\right) \cdot x_{ij_2} \cdot \left(H_{\ell j_2}^{ki} - \left[\mathbf{H}^{ki}\cdot \mathbf{x}_{i}\right]_{\ell}\right) \tag{16}
+$$
+
+and for all $i\neq k,j\in S_i,\ell \in S_k$
+
+$$
+\epsilon J_{(i,j),(k,\ell)} = \epsilon x_{k\ell}\left(H_{j\ell}^{ik} - \left[\mathbf{H}^{ik}\cdot \mathbf{x}_{k}\right]_{j}\right) + \mathcal{O}\left(\epsilon^{2}\right). \tag{17}
+$$
+
+Then, by expansion using the Leibniz formula, the determinant of $(\mathbf{I} + \epsilon \cdot \mathbf{J})$ is
+
+$$
+1 + \left(\underbrace{\sum_{\substack{i\in[N]\\ j\in S_{i}}}\epsilon J_{(i,j),(i,j)}}_{T_{1}} - \underbrace{\sum_{\substack{i\in[N]\\ j\in S_{i}}}\sum_{\substack{k > i\\ \ell\in S_{k}}}(\epsilon J_{(i,j),(k,\ell)})(\epsilon J_{(k,\ell),(i,j)})}_{T_{2}}\right) + \mathcal{O}(\epsilon^{3}).
+$$
+
+By direct expansion of $T_{1}$ and $T_{2}$ , it is easy to see that $T_{1} = 2T_{2}$ (after ignoring $\mathcal{O}(\epsilon^{3})$ terms). On the other hand, the coefficient of $\epsilon^2$ in $T_{2}$ is exactly the double summation in Eqn. (14), thus it equals $-\sum_{i,k:1\leq i < k\leq N}C_{(\mathbf{H}^{ik},(\mathbf{H}^{ki})^{\top})}(\mathbf{p}_i,\mathbf{p}_k)$ . Overall, we have shown that the determinant equals
+
+$$
+1 - \epsilon^{2} \cdot \sum_{i,k:\, 1 \leq i < k \leq N} C_{\left(\mathbf{H}^{ik}, \left(\mathbf{H}^{ki}\right)^{\top}\right)}\left(\mathbf{p}_{i}, \mathbf{p}_{k}\right) + \mathcal{O}\left(\epsilon^{3}\right). \tag{18}
+$$
+
+Observation 22. The coefficient of $\epsilon^2$ in Eqn. (18) is the exact negation of the coefficient of $\epsilon^2$ in Eqn. (15).
+
+# D.5 COMPLETING THE LOCAL EQUIVALENCE PROOF
+
+In a multiplayer normal-form game $\mathbf{G}$ , recall the notation of Eqn. (1). We point out the following formulae:
+
+$$
+\frac{\partial U_{j_1 j_2 \cdots j_g}^{i_1 i_2 \cdots i_g}}{\partial p_{ij}} = 0 \qquad \text{if } i \in \{i_1, i_2, \dots, i_g\};
+$$
+
+$$
+\frac{\partial U_{j_1 j_2 \cdots j_g}^{i_1 i_2 \cdots i_g}}{\partial p_{ij}} = x_{ij} \cdot \left(U_{j_1 j_2 \cdots j_g j}^{i_1 i_2 \cdots i_g i} - U_{j_1 j_2 \cdots j_g}^{i_1 i_2 \cdots i_g}\right) \qquad \text{if } i \notin \{i_1, i_2, \dots, i_g\}.
+$$
+
+MWU. Here, the MWU update rule is $p_{ij}(t + 1) = p_{ij}(t) + \epsilon \cdot U_j^i$ . Computing the Jacobian matrix for this update rule using the formulae above, and comparing it with the Jacobian matrix computed in Eqn. (12) and Eqn. (13), it is immediate that they are the same after setting $H_{j\ell}^{ik} = U_{j\ell}^{ik}$ . This derives Eqn. (5), and completes the proof of Theorem 19.
+
+OMWU. As before, we use the continuous analogue and compute $\dot{\mathbf{u}}$ . By the chain rule and the above formulae, we have
+
+$$
+\dot{u}_{ij}(\mathbf{p}) = \sum_{\substack{k\in [N]\\ k\neq i\\ \ell \in S_{k}}}\frac{\partial U_{j}^{i}}{\partial p_{k\ell}}\cdot \frac{\mathrm{d}p_{k\ell}}{\mathrm{d}t} = \sum_{\substack{k\in [N]\\ k\neq i\\ \ell \in S_{k}}}x_{k\ell}\cdot \left(U_{j\ell}^{ik} - U_{j}^{i}\right)\cdot \frac{\mathrm{d}p_{k\ell}}{\mathrm{d}t}
+$$
+
+and hence
+
+$$
+\frac{\mathrm{d}p_{ij}}{\mathrm{d}t} = U_{j}^{i} + \epsilon \cdot \sum_{\substack{k\in [N]\\ k\neq i\\ \ell \in S_{k}}}x_{k\ell}\cdot \left(U_{j\ell}^{ik} - U_{j}^{i}\right)\cdot \frac{\mathrm{d}p_{k\ell}}{\mathrm{d}t}.
+$$
+
+Iterating the above recurrence yields
+
+$$
+\frac{\mathrm{d}p_{ij}}{\mathrm{d}t} = U_{j}^{i} + \epsilon \cdot \sum_{\substack{k\in [N]\\ k\neq i\\ \ell \in S_{k}}}x_{k\ell}\cdot \left(U_{j\ell}^{ik} - U_{j}^{i}\right)\cdot U_{\ell}^{k} + \mathcal{O}(\epsilon^{2}).
+$$
+
+Its standard Euler discretization is
+
+$$
+p_{ij}(t + 1) = p_{ij}(t) + \epsilon \cdot U_{j}^{i} + \epsilon^{2}\cdot \sum_{\substack{k\in [N]\\ k\neq i\\ \ell \in S_{k}}}x_{k\ell}\cdot \left(U_{j\ell}^{ik} - U_{j}^{i}\right)\cdot U_{\ell}^{k}.
+$$
+
+Now we compute the Jacobian matrix for this standard Euler discretization. For $j_1, j_2 \in S_i$ ,
+
+$$
+\epsilon J_{(i,j_{1}),(i,j_{2})} = \epsilon^{2}\sum_{\substack{k\in [N]\\ k\neq i\\ \ell \in S_{k}}}x_{k\ell}\cdot \left(U_{j_{1}\ell}^{ik} - U_{j_{1}}^{i}\right)\cdot x_{ij_{2}}\cdot \left(U_{\ell j_{2}}^{ki} - U_{\ell}^{k}\right)
+$$
+
+and for all $i\neq k,j\in S_i,\ell \in S_k$
+
+$$
+\epsilon J_{(i,j),(k,\ell)} = \epsilon x_{k\ell}\left(U_{j\ell}^{ik} - U_{j}^{i}\right) + \mathcal{O}(\epsilon^{2}).
+$$
+
+By comparing this computed Jacobian matrix with the Jacobian matrix computed in Eqn. (16) and Eqn. (17), it is immediate to see that their determinants are the same (after ignoring all $\mathcal{O}(\epsilon^3)$ terms) by setting $H_{j\ell}^{ik} = U_{j\ell}^{ik}$ . With the result we just derived, together with Observation 22 and Theorem 19, Proposition 20 follows.
+
+# E MULTI-PLAYER POTENTIAL GAME
+
+Proof of Proposition 21. We know that the potential game satisfies the following condition:
+
+$$
+\mathcal{P}\left(s_{i}, s_{-i}\right) - \mathcal{P}\left(s_{i}^{\prime}, s_{-i}\right) = u_{i}\left(s_{i}, s_{-i}\right) - u_{i}\left(s_{i}^{\prime}, s_{-i}\right).
+$$
+
+Therefore, $u_{i}(s_{i}, s_{-i}) = \mathcal{P}(s_{i}, s_{-i}) + v^{i}(s_{-i})$ . Note that $v^{i}(s_{-i})$ does not depend on $s_{i}$ , the strategy of player $i$ .
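+This decomposition is easy to verify on a toy instance: building two-player payoffs as $u_i = \mathcal{P} + v^i(s_{-i})$ satisfies the potential-game condition by construction. All matrices below are hypothetical random data for illustration.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+n1, n2 = 3, 4
+P  = rng.standard_normal((n1, n2))   # potential P(s_1, s_2)
+v1 = rng.standard_normal(n2)         # v^1(s_2): independent of player 1's action
+v2 = rng.standard_normal(n1)         # v^2(s_1): independent of player 2's action
+u1 = P + v1[None, :]                 # player 1's payoff table
+u2 = P + v2[:, None]                 # player 2's payoff table
+
+# Check: P(s_i, s_{-i}) - P(s_i', s_{-i}) = u_i(s_i, s_{-i}) - u_i(s_i', s_{-i})
+for s2 in range(n2):
+    for a, b in [(0, 1), (1, 2)]:
+        assert np.isclose(P[a, s2] - P[b, s2], u1[a, s2] - u1[b, s2])
+for s1 in range(n1):
+    for a, b in [(0, 1), (1, 3)]:
+        assert np.isclose(P[s1, a] - P[s1, b], u2[s1, a] - u2[s1, b])
+```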
+
+Following Theorem 19, let $\mathbf{H}(\mathbf{U})$ be the induced graphical game of $\mathbf{U}$ and $\mathbf{H}(\mathbf{U}^{\mathcal{P}})$ be the induced graphical game of $\mathbf{U}^{\mathcal{P}}$ . Then,
+
+$$
+\begin{array}{ll} C_{\mathbf{U}}(\mathbf{p}) = C_{\mathbf{H}(\mathbf{U})}(\mathbf{p}) & \text{(Theorem 19)} \\ \quad = \sum_{i,k} C_{\left(\mathbf{H}(\mathbf{U})^{ik}, \left(\mathbf{H}(\mathbf{U})^{ki}\right)^{\top}\right)}\left(\mathbf{p}_{i}, \mathbf{p}_{k}\right) & \text{(by Eqn. (15))} \\ \quad = \sum_{i,k} C_{\left(\mathbf{H}\left(\mathbf{U}^{\mathcal{P}}\right)^{ik}, \left(\mathbf{H}\left(\mathbf{U}^{\mathcal{P}}\right)^{ki}\right)^{\top}\right)}\left(\mathbf{p}_{i}, \mathbf{p}_{k}\right) & \text{(see explanation below)} \\ \quad = C_{\mathbf{H}(\mathbf{U}^{\mathcal{P}})}(\mathbf{p}) & \text{(by Eqn. (15))} \\ \quad = C_{\mathbf{U}^{\mathcal{P}}}(\mathbf{p}). & \text{(Theorem 19)} \end{array}
+$$
+
+The third equality holds as the difference between $\mathbf{H}(\mathbf{U})^{ik}$ and $\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ik}$ is a trivial matrix:
+
+$$
+\mathbf{H}(\mathbf{U})_{j\ell}^{ik} = \mathbf{U}_{j\ell}^{ik} = \left(\mathbf{U}^{\mathcal{P}}\right)_{j\ell}^{ik} + \mathbf{E}_{-(i,k)}\left[v^{i}(s_{-i})\right] = \mathbf{H}(\mathbf{U}^{\mathcal{P}})_{j\ell}^{ik} + \mathbf{E}_{-(i,k)}\left[v^{i}(s_{-i})\right];
+$$
+
+where$^{13}$ $\mathbf{E}_{-(i,k)}[v^i(s_{-i})]$ does not depend on $j$ , the strategy of player $i$ , and only depends on $\ell$ , the strategy of player $k$ . The same argument applies to $(\mathbf{H}(\mathbf{U})^{ki})^{\top}$ and $(\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ki})^{\top}$ .
+
+To see $C_{\mathbf{U}}(\mathbf{p}) \leq 0$ , observe that the induced graphical game of $\mathbf{U}^{\mathcal{P}}$ between players $i$ and $k$ , $(\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ik}, (\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ki})^{\top})$ , is also a bimatrix coordination game, which implies $C_{(\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ik}, (\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ki})^{\top})}(\cdot) \leq 0$ . As $C_{\mathbf{U}}(\mathbf{p}) = \sum_{i,k} C_{(\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ik}, (\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ki})^{\top})}(\mathbf{p}_i, \mathbf{p}_k)$ , the result follows.
+
+Next, we identify several cases such that $C_{\mathbf{U}}(\mathbf{p})$ is strictly negative in the region
+
+$$
+S^{\delta} = \{\mathbf{x} \mid \forall i, j:\ x_{ij} > \delta\}.
+$$
+
+The conditions we pose are on the corresponding potential function $\mathcal{P}$ . Note that $\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ik}$ , the induced edge-game between players $i$ and $k$ , is also a coordination game, i.e., $\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ik} = (\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ki})^{\top}$ .
+
+Case 1:
+
+$$
+\min_{\mathbf{x},\mathbf{g},\mathbf{h}}\sum_{1\leq i < k\leq N}\sum_{j\in S_{i},\ell \in S_{k}}\left(P_{j\ell}^{ik} - g_{j}^{ik} - h_{\ell}^{ik}\right)^{2}\geq \theta ,
+$$
+
+where $P_{j\ell}^{ik} = \mathbb{E}_{\mathbf{s}_{-(i,k)}}\left[\mathcal{P}(s_i = j,s_k = \ell ,\mathbf{s}_{-(i,k)})\right]$ . Under this condition, we can prove that $C_{\mathbf{U}}(\mathbf{p})\leq -\theta \delta^2$ for any $\mathbf{p}$ in $S^{\delta}$ . One key observation making this true is that
+
+$$
+\begin{array}{l} C_{\mathbf{U}}(\mathbf{p}) = \sum_{i,k} C_{(\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ik}, (\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ki})^{\top})}(\mathbf{p}_{i}, \mathbf{p}_{k}) \\ \quad = -\sum_{i,k} \sum_{j,\ell} x_{ij}(\mathbf{p}_{i})\, x_{k\ell}(\mathbf{p}_{k}) \left(P_{j\ell}^{ik} - g_{j}^{ik} - h_{\ell}^{ik}\right)^{2}, \end{array}
+$$
+
+as $\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{ik} = (\mathbf{U}^{\mathcal{P}})^{ik} = \mathbf{P}^{ik}$ .
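+In practice, the quantity $C$ for a bimatrix coordination game $(\mathbf{P},\mathbf{P}^{\top})$ is a weighted least-squares distance to the trivial space $\{g_j + h_\ell\}$ , and it can be evaluated numerically. The sketch below uses illustrative random data and uniform mixed strategies; the minimization over $g,h$ is solved as a linear least-squares problem.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(4)
+n1, n2 = 3, 4
+P = rng.standard_normal((n1, n2))      # a (hypothetical) coordination payoff matrix
+x1 = np.full(n1, 1 / n1)               # any interior mixed strategies
+x2 = np.full(n2, 1 / n2)
+
+# Build the weighted linear system for the unknowns (g_1..g_{n1}, h_1..h_{n2}):
+# minimize sum_{j,l} x_{1j} x_{2l} (P_{jl} - g_j - h_l)^2
+w = np.sqrt(np.outer(x1, x2)).ravel()
+A = np.zeros((n1 * n2, n1 + n2))
+for j in range(n1):
+    for l in range(n2):
+        A[j * n2 + l, j] = 1.0         # coefficient of g_j
+        A[j * n2 + l, n1 + l] = 1.0    # coefficient of h_l
+gh, *_ = np.linalg.lstsq(A * w[:, None], P.ravel() * w, rcond=None)
+g, h = gh[:n1], gh[n1:]
+
+residual = P - g[:, None] - h[None, :]
+C = -np.sum(np.outer(x1, x2) * residual**2)
+assert C <= 0                              # coordination games always give C <= 0
+assert C >= -np.sum(np.outer(x1, x2) * P**2)   # optimal g, h beat g = h = 0
+```
+
+The system is rank-deficient (the trivial space has dimension $n_1 + n_2 - 1$ , one less than the number of unknowns), which `lstsq` handles by returning a minimum-norm solution.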
+
+Case 2:
+
+If $\mathbf{U}$ is a graphical game and there exists a pair of players $i_1$ and $i_2$ such that the game between $i_1$ and $i_2$ is a non-trivial game, then $C_{\mathbf{U}}$ will be strictly negative in $S^{\delta}$ .
+
+Case 3:
+
+Consider the payoff matrix of $\mathbf{U}^{\mathcal{P}}$ , the coordination game, between players $i_1$ and $i_2$ given a strategy profile of the other players. There are in total $\prod_{i \neq i_1, i_2} n_i$ such matrices, one for each strategy profile of the other players, and each matrix is of dimension $n_{i_1} \times n_{i_2}$ . We call these matrices the projected matrices for players $i_1, i_2$ .
+
+Let $\mathcal{M}$ denote the space of $n_{i_1} \times n_{i_2}$ matrices. On the other hand, trivial matrices form a subspace of dimension $n_{i_1} + n_{i_2} - 1$ .$^{14}$ Let us call this the trivial space, denoted by $\mathcal{T}$ .
+
+We consider the direct decomposition $\mathcal{M} = \mathcal{T}\oplus \mathcal{V}$ . Let $\mathbf{B}_1,\mathbf{B}_2,\mathbf{B}_3,\dots ,\mathbf{B}_{n_{i_1}n_{i_2}}$ be a basis of $\mathcal{M}$ , where the first $n_{i_1} + n_{i_2} - 1$ elements form a basis of $\mathcal{T}$ , and the remaining elements form a basis of $\mathcal{V}$ . Without loss of generality, we assume that all basis elements are of $\mathrm{L}_2$ norm 1.$^{15}$
+
+Given the above-mentioned basis of $\mathcal{M}$ , each of the projected matrices can be written as a unique linear combination of these basis elements. Now, suppose there is a basis element $\mathbf{B}_{\ell}$ with $\ell \geq n_{i_1} + n_{i_2}$ (i.e., it belongs to the basis of $\mathcal{V}$ ), such that all projected matrices have non-positive (or non-negative) coefficients for this element, and at least one of these projected matrices (which we call a special projected matrix) has a strictly negative (or strictly positive) coefficient for it. Then we claim that $C_{(\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_1i_2}, (\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_2i_1})^\top)}(\mathbf{p}_{i_1}, \mathbf{p}_{i_2})$ will be strictly negative in $S^\delta$ . This is because $\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_1i_2}$ is a convex combination of all those projected matrices, and by our assumption above, when $\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_1i_2}$ is expressed as a linear combination of the basis elements of $\mathcal{M}$ , the coefficient of $\mathbf{B}_{\ell}$ is strictly negative (or strictly positive), thus $\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_1i_2}$ cannot be a trivial matrix.
+
+Suppose further that there exists $\theta > 0$ such that a special projected matrix has a negative (or positive) coefficient for $\mathbf{B}_{\ell}$ which is smaller than $-\theta$ (or bigger than $\theta$ ). Then we are guaranteed that $\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_1 i_2}$ is bounded away from $\mathcal{T}$ by a distance of $\theta \delta^{N-2}$ ,$^{16}$ and hence, as the calculations below show, $C_{(\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_1 i_2}, (\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_2 i_1})^{\top})}(\mathbf{p}_{i_1}, \mathbf{p}_{i_2}) \leq -\theta^2 \delta^{2N-2}$ . If there exists a pair of players $i_1$ and $i_2$ such that this condition holds, then $C_{\mathbf{U}} \leq -\theta^2 \delta^{2N-2}$ .
+
+$$
+\begin{array}{l} C_{(\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_1 i_2}, (\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_2 i_1})^{\top})}\left(\mathbf{p}_{i_1}, \mathbf{p}_{i_2}\right) \\ \quad = -\min_{g,h} \sum_{j\ell} x_{i_1 j}\left(\mathbf{p}_{i_1}\right) x_{i_2 \ell}\left(\mathbf{p}_{i_2}\right) \left(\left[\mathbf{H}\left(\mathbf{U}^{\mathcal{P}}\right)^{i_1 i_2}\right]_{j\ell} - g_{j} - h_{\ell}\right)^{2} \\ \quad \leq -\min_{g,h} \delta^{2} \sum_{j\ell} \left(\left[\mathbf{H}\left(\mathbf{U}^{\mathcal{P}}\right)^{i_1 i_2}\right]_{j\ell} - g_{j} - h_{\ell}\right)^{2} \\ \quad = -\delta^{2} \sum_{j\ell} \left(\left[\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_1 i_2}\right]_{j\ell} - g_{j}^{*} - h_{\ell}^{*}\right)^{2} \\ \quad \leq -\delta^{2}\left(\theta \delta^{N-2}\right)^{2}, \end{array}
+$$
+
+where $\{g_j^* + h_\ell^*\}_{j\ell}$ is the projection of $\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_1 i_2}$ on the trivial space. The first inequality follows as $\mathbf{p} \in S^\delta$ ; the second equality holds as the projection minimizes the distance to the trivial space; and the final inequality comes from the distance from $\mathbf{H}(\mathbf{U}^{\mathcal{P}})^{i_1 i_2}$ to the trivial space.
+
+In all these cases, $C_{\mathbf{U}}$ is strictly negative in the domain $S^{\delta}$ , which implies that OMWU is Lyapunov chaotic in $S^{\delta}$ .
\ No newline at end of file
diff --git a/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/images.zip b/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e4f6330baa83b64db251a7dd3b8b79c50a0dac72
--- /dev/null
+++ b/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7b1353fab8d563ea9fc2d168b0d6cb6fe6cf1803b4cfe6d30680df214e68a6e
+size 693358
diff --git a/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/layout.json b/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..416f12f4838a5d7460fd39995b1460c05dfb24e5
--- /dev/null
+++ b/chaosoflearningbeyondzerosumandcoordinationviagamedecompositions/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:935b44e3f074c160d9e326352cde723673f492651f3c00cae5744960bed5a8bb
+size 1226996
diff --git a/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/ea610c58-6e35-4fb0-b1e5-04c40ccb875c_content_list.json b/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/ea610c58-6e35-4fb0-b1e5-04c40ccb875c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..196b94e208405ec7958e0f29e852f289b9f7ff35
--- /dev/null
+++ b/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/ea610c58-6e35-4fb0-b1e5-04c40ccb875c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7608c55b3237346a36797a8512be8ac427bb45976beaa25631cdbd38264dd23
+size 136502
diff --git a/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/ea610c58-6e35-4fb0-b1e5-04c40ccb875c_model.json b/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/ea610c58-6e35-4fb0-b1e5-04c40ccb875c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e46778d47130b3db91194ab53b6c1c21fad15fb9
--- /dev/null
+++ b/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/ea610c58-6e35-4fb0-b1e5-04c40ccb875c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1e9b5252370d96dfc32d24d57c6e98afaa36bb7cf6ab6fbd88499599c4b36bc
+size 167655
diff --git a/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/ea610c58-6e35-4fb0-b1e5-04c40ccb875c_origin.pdf b/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/ea610c58-6e35-4fb0-b1e5-04c40ccb875c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..653ec317233a3f65b0f9d5fcf29f5fd1184b0730
--- /dev/null
+++ b/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/ea610c58-6e35-4fb0-b1e5-04c40ccb875c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5fe9806f9d90ea2c488e623b701911e28ced6115266894e750971b2a58f8be1c
+size 1208722
diff --git a/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/full.md b/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..919b0000d812ce9490f39d7b173bb8e5f06f73a1
--- /dev/null
+++ b/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/full.md
@@ -0,0 +1,482 @@
+# CHARACTERIZING SIGNAL PROPAGATION TO CLOSE THE PERFORMANCE GAP IN UNNORMALIZED RESNETS
+
+Andrew Brock, Soham De & Samuel L. Smith
+
+Deepmind
+
+{ajbrock, sohamde, slsmith}@google.com
+
+# ABSTRACT
+
+Batch Normalization is a key component in almost all state-of-the-art image classifiers, but it also introduces practical challenges: it breaks the independence between training examples within a batch, can incur compute and memory overhead, and often results in unexpected bugs. Building on recent theoretical analyses of deep ResNets at initialization, we propose a simple set of analysis tools to characterize signal propagation on the forward pass, and leverage these tools to design highly performant ResNets without activation normalization layers. Crucial to our success is an adapted version of the recently proposed Weight Standardization. Our analysis tools show how this technique preserves the signal in networks with ReLU or Swish activation functions by ensuring that the per-channel activation means do not grow with depth. Across a range of FLOP budgets, our networks attain performance competitive with the state-of-the-art EfficientNets on ImageNet. Our code is available at http://dpmd.ai/nfnets.
+
+# 1 INTRODUCTION
+
+BatchNorm has become a core computational primitive in deep learning (Ioffe & Szegedy, 2015), and it is used in almost all state-of-the-art image classifiers (Tan & Le, 2019; Wei et al., 2020). A number of different benefits of BatchNorm have been identified. It smooths the loss landscape (Santurkar et al., 2018), which allows training with larger learning rates (Bjorck et al., 2018), and the noise arising from the minibatch estimates of the batch statistics introduces implicit regularization (Luo et al., 2019). Crucially, recent theoretical work (Balduzzi et al., 2017; De & Smith, 2020) has demonstrated that BatchNorm ensures good signal propagation at initialization in deep residual networks with identity skip connections (He et al., 2016b;a), and this benefit has enabled practitioners to train deep ResNets with hundreds or even thousands of layers (Zhang et al., 2019).
+
+However, BatchNorm also has many disadvantages. Its behavior is strongly dependent on the batch size, performing poorly when the per device batch size is too small or too large (Hoffer et al., 2017), and it introduces a discrepancy between the behavior of the model during training and at inference time. BatchNorm also adds memory overhead (Rota Bulò et al., 2018), and is a common source of implementation errors (Pham et al., 2019). In addition, it is often difficult to replicate batch normalized models trained on different hardware. A number of alternative normalization layers have been proposed (Ba et al., 2016; Wu & He, 2018), but typically these alternatives generalize poorly or introduce their own drawbacks, such as added compute costs at inference.
+
+Another line of work has sought to eliminate layers which normalize hidden activations entirely. A common trend is to initialize residual branches to output zeros (Goyal et al., 2017; Zhang et al., 2019; De & Smith, 2020; Bachlechner et al., 2020), which ensures that the signal is dominated by the skip path early in training. However while this strategy enables us to train deep ResNets with thousands of layers, it still degrades generalization when compared to well-tuned baselines (De & Smith, 2020). These simple initialization strategies are also not applicable to more complicated architectures like EfficientNets (Tan & Le, 2019), the current state of the art on ImageNet (Russakovsky et al., 2015).
+
+This work seeks to establish a general recipe for training deep ResNets without normalization layers, which achieve test accuracy competitive with the state of the art. Our contributions are as follows:
+
+- We introduce Signal Propagation Plots (SPPs): a simple set of visualizations which help us inspect signal propagation at initialization on the forward pass in deep residual networks. Leveraging these SPPs, we show how to design unnormalized ResNets which are constrained to have signal propagation properties similar to batch-normalized ResNets.
+- We identify a key failure mode in unnormalized ResNets with ReLU or Swish activations and Gaussian weights. Because the mean output of these non-linearities is positive, the squared mean of the hidden activations on each channel grows rapidly as the network depth increases. To resolve this, we propose Scaled Weight Standardization, a minor modification of the recently proposed Weight Standardization (Qiao et al., 2019; Huang et al., 2017b), which prevents the growth in the mean signal, leading to a substantial boost in performance.
+- We apply our normalization-free network structure in conjunction with Scaled Weight Standardization to ResNets on ImageNet, where we for the first time attain performance comparable to or better than batch-normalized ResNets on networks as deep as 288 layers.
+- Finally, we apply our normalization-free approach to the RegNet architecture (Radosavovic et al., 2020). By combining this architecture with the compound scaling strategy proposed by Tan & Le (2019), we develop a class of models without normalization layers which are competitive with the current ImageNet state of the art across a range of FLOP budgets.
+
+# 2 BACKGROUND
+
+Deep ResNets at initialization: The combination of BatchNorm (Ioffe & Szegedy, 2015) and skip connections (Srivastava et al., 2015; He et al., 2016a) has allowed practitioners to train deep ResNets with hundreds or thousands of layers. To understand this effect, a number of papers have analyzed signal propagation in normalized ResNets at initialization (Balduzzi et al., 2017; Yang et al., 2019). In a recent work, De & Smith (2020) showed that in normalized ResNets with Gaussian initialization, the activations on the $\ell^{th}$ residual branch are suppressed by factor of $O(\sqrt{\ell})$ , relative to the scale of the activations on the skip path. This biases the residual blocks in deep ResNets towards the identity function at initialization, ensuring well-behaved gradients. In unnormalized networks, one can preserve this benefit by introducing a learnable scalar at the end of each residual branch, initialized to zero (Zhang et al., 2019; De & Smith, 2020; Bachlechner et al., 2020). This simple change is sufficient to train deep ResNets with thousands of layers without normalization. However, while this method is easy to implement and achieves excellent convergence on the training set, it still achieves lower test accuracies than normalized networks when compared to well-tuned baselines.
+
+These insights from studies of batch-normalized ResNets are also supported by theoretical analyses of unnormalized networks (Taki, 2017; Yang & Schoenholz, 2017; Hanin & Rolnick, 2018; Qi et al., 2020). These works suggest that, in ResNets with identity skip connections, if the signal does not explode on the forward pass, the gradients will neither explode nor vanish on the backward pass. Hanin & Rolnick (2018) conclude that multiplying the hidden activations on the residual branch by a factor of $O(1 / d)$ or less, where $d$ denotes the network depth, is sufficient to ensure trainability at initialization.
+
+Alternate normalizers: To counteract the limitations of BatchNorm in different situations, a range of alternative normalization schemes have been proposed, each operating on different components of the hidden activations. These include LayerNorm (Ba et al., 2016), InstanceNorm (Ulyanov et al., 2016), GroupNorm (Wu & He, 2018), and many more (Huang et al., 2020). While these alternatives remove the dependency on the batch size and typically work better than BatchNorm for very small batch sizes, they also introduce limitations of their own, such as introducing additional computational costs during inference time. Furthermore for image classification, these alternatives still tend to achieve lower test accuracies than well-tuned BatchNorm baselines. As one exception, we note that the combination of GroupNorm with Weight Standardization (Qiao et al., 2019) was recently identified as a promising alternative to BatchNorm in ResNet-50 (Kolesnikov et al., 2019).
+
+# 3 SIGNAL PROPAGATION PLOTS
+
+Figure 1: Signal Propagation Plot for a ResNetV2-600 at initialization with BatchNorm, ReLU activations and He init, in response to an $\mathcal{N}(0,1)$ input at 512px resolution. Black dots indicate the end of a stage. Blue plots use the BN-ReLU-Conv ordering while red plots use ReLU-BN-Conv.
+
+Although papers have recently theoretically analyzed signal propagation in ResNets (see Section 2), practitioners rarely empirically evaluate the scales of the hidden activations at different depths inside a specific deep network when designing new models or proposing modifications to existing architectures. By contrast, we have found that plotting the statistics of the hidden activations at different points inside a network, when conditioned on a batch of either random Gaussian inputs or real training examples, can be extremely beneficial. This practice both enables us to immediately detect hidden bugs in our implementation before launching an expensive training run destined to fail, and also allows us to identify surprising phenomena which might be challenging to derive from scratch.
+
+We therefore propose to formalize this good practice by introducing Signal Propagation Plots (SPPs), a simple graphical method for visualizing signal propagation on the forward pass in deep ResNets. We assume identity residual blocks of the form $x_{\ell + 1} = f_{\ell}(x_{\ell}) + x_{\ell}$ , where $x_{\ell}$ denotes the input to the $\ell^{th}$ block and $f_{\ell}$ denotes the function computed by the $\ell^{th}$ residual branch. We consider 4-dimensional input and output tensors with dimensions denoted by $NHWC$ , where $N$ denotes the batch dimension, $C$ denotes the channels, and $H$ and $W$ denote the two spatial dimensions. To generate SPPs, we initialize a single set of weights according to the network initialization scheme, and then provide the network with a batch of input examples sampled from a unit Gaussian distribution. Then, we plot the following hidden activation statistics at the output of each residual block:
+
+- Average Channel Squared Mean, computed as the square of the mean across the $NHW$ axes, and then averaged across the $C$ axis. In a network with good signal propagation, we would expect the mean activations on each channel, averaged across a batch of examples, to be close to zero. Importantly, we note that it is necessary to measure the averaged squared value of the mean, since the means of different channels may have opposite signs.
+- Average Channel Variance, computed by taking the channel variance across the $NHW$ axes, and then averaging across the $C$ axis. We generally find this to be the most informative measure of the signal magnitude, and to clearly show signal explosion or attenuation.
+- Average Channel Variance on the end of the residual branch, before merging with the skip path. This helps assess whether the layers on the residual branch are correctly initialized.
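+The first two statistics are cheap to compute; a minimal numpy sketch (tensor shape and names are illustrative, and the third statistic is simply the second applied to the residual-branch output $f_\ell(x_\ell)$ instead of the block output):
+
+```python
+import numpy as np
+
+def spp_stats(x):
+    """SPP statistics for an activation tensor x of shape (N, H, W, C)."""
+    # Average Channel Squared Mean: per-channel mean over N, H, W, then squared,
+    # then averaged over channels (squaring first avoids sign cancellation)
+    mean_sq = np.mean(np.mean(x, axis=(0, 1, 2)) ** 2)
+    # Average Channel Variance: per-channel variance over N, H, W, averaged over C
+    var = np.mean(np.var(x, axis=(0, 1, 2)))
+    return mean_sq, var
+
+rng = np.random.default_rng(5)
+x = rng.standard_normal((8, 16, 16, 32))   # stand-in for block-output activations
+mean_sq, var = spp_stats(x)
+# For unit-Gaussian activations: squared means near 0, variances near 1
+assert mean_sq < 0.01 and abs(var - 1) < 0.05
+```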
+
+We explore several other possible choices of statistics one could measure in Appendix G, but we have found these three to be the most informative. We also experiment with feeding the network real data samples instead of random noise, but find that this step does not meaningfully affect the key trends. We emphasize that SPPs do not capture every property of signal propagation, and they only consider the statistics of the forward pass. Despite this simplicity, SPPs are surprisingly useful for analyzing deep ResNets in practice. We speculate that this may be because in ResNets, as discussed in Section 2 (Taki, 2017; Yang & Schoenholz, 2017; Hanin & Rolnick, 2018), the backward pass will typically neither explode nor vanish so long as the signal on the forward pass is well behaved.
+
+As an example, in Figure 1 we present the SPP for a 600-layer pre-activation ResNet (He et al., 2016a) with BatchNorm, ReLU activations, and He initialization (He et al., 2015). We compare the standard BN-ReLU-Conv ordering to the less common ReLU-BN-Conv ordering. Several key patterns emerge immediately. First, we note that the Average Channel Variance grows linearly with depth within a given stage, and resets at each transition block to a fixed value close to 1. The linear growth arises because, at initialization, the variance of the activations satisfies $\operatorname{Var}(x_{\ell + 1}) = \operatorname{Var}(x_{\ell}) + \operatorname{Var}(f_{\ell}(x_{\ell}))$ , while BatchNorm ensures that the variance of the activations at the end
+
+
+Figure 2: SPPs for three different variants of the ResNetV2-600 network (with ReLU activations). In red, we show a batch normalized network with ReLU-BN-Conv ordering. In green we show a normalizer-free network with He-init and $\alpha = 1$ . In cyan, we show the same normalizer-free network but with Scaled Weight Standardization. We note that the SPPs for a normalizer-free network with Scaled Weight Standardization are almost identical to those for the batch normalized network.
+
+of each residual branch is independent of depth (De & Smith, 2020). The variance is reset at each transition block because in these blocks the skip connection is replaced by a convolution operating on a normalized input, undoing any signal growth on the skip path in the preceding blocks.
+
+With the BN-ReLU-Conv ordering, the Average Squared Channel Means display similar behavior, growing linearly with depth between transition blocks. This may seem surprising, since we expect BatchNorm to center the activations. However, with this ordering the final convolution on a residual branch receives a rectified input with positive mean. As we show in the following section, this causes the outputs of the branch on any single channel to also have non-zero mean, and explains why $\mathrm{Var}(f_{\ell}(x_{\ell}))\approx 0.68$ for all depths $\ell$ . Although this "mean-shift" is explicitly counteracted by the normalization layers in subsequent residual branches, it will have serious consequences when attempting to remove normalization layers, as discussed below. In contrast, the ReLU-BN-Conv ordering trains equally stably while avoiding this mean-shift issue, with $\mathrm{Var}(f_{\ell}(x_{\ell}))\approx 1$ for all $\ell$ .
+
+# 4 NORMALIZER-FREE RESNETS (NF-RESNETS)
+
+With SPPs in hand to aid our analysis, we now seek to develop ResNet variants without normalization layers, which have good signal propagation, are stable during training, and reach test accuracies competitive with batch-normalized ResNets. We begin with two observations from Section 3. First, for standard initializations, BatchNorm downscales the input to each residual block by a factor proportional to the standard deviation of the input signal (De & Smith, 2020). Second, each residual block increases the variance of the signal by an approximately constant factor. We propose to mimic this effect by using residual blocks of the form $x_{\ell +1} = x_{\ell} + \alpha f_{\ell}(x_{\ell} / \beta_{\ell})$ , where $x_{\ell}$ denotes the input to the $\ell^{th}$ residual block and $f_{\ell}(\cdot)$ denotes the $\ell^{th}$ residual branch. We design the network such that:
+
+- $f_{\ell}(\cdot)$ , the function computed by the residual branch, is parameterized to be variance preserving at initialization, i.e., $\operatorname{Var}(f_{\ell}(z)) = \operatorname{Var}(z)$ for all $\ell$ . This constraint enables us to reason about the signal growth in the network, and to estimate the variances analytically.
+- $\beta_{\ell}$ is a fixed scalar, chosen as $\sqrt{\operatorname{Var}(x_{\ell})}$ , the expected empirical standard deviation of the activations $x_{\ell}$ at initialization. This ensures the input to $f_{\ell}(\cdot)$ has unit variance.
+- $\alpha$ is a scalar hyperparameter which controls the rate of variance growth between blocks.
+
+We compute the expected empirical variance at residual block $\ell$ analytically according to $\mathrm{Var}(x_{\ell}) = \mathrm{Var}(x_{\ell -1}) + \alpha^2$ , with an initial expected variance of $\mathrm{Var}(x_0) = 1$ , and we set $\beta_{\ell} = \sqrt{\mathrm{Var}(x_{\ell})}$ . A similar approach was proposed by Arpit et al. (2016) for non-residual networks. As noted in Section 3, the signal variance in normalized ResNets is reset at each transition layer due to the shortcut convolution receiving a normalized input. We mimic this reset by having the shortcut convolution in transition layers operate on $x_{\ell} / \beta_{\ell}$ rather than $x_{\ell}$ , ensuring unit signal variance at the start of each stage ($\mathrm{Var}(x_{\ell +1}) = 1 + \alpha^2$ following each transition layer). For simplicity, we call residual networks employing this simple scaling strategy Normalizer-Free ResNets (NF-ResNets).
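+This bookkeeping can be computed ahead of training; a small sketch (the function name and stage layout are illustrative, assuming the first block of every stage after the first is a transition block):
+
```python
import math

def beta_schedule(blocks_per_stage, alpha=0.2):
    """Return beta_l = sqrt(Var(x_l)) for each residual block, propagating
    Var(x_l) = Var(x_{l-1}) + alpha**2 and resetting the expected variance
    to 1 + alpha**2 after each transition block."""
    betas, var = [], 1.0  # Var(x_0) = 1
    for stage, n_blocks in enumerate(blocks_per_stage):
        for block in range(n_blocks):
            betas.append(math.sqrt(var))
            if stage > 0 and block == 0:
                var = 1.0 + alpha ** 2  # transition: shortcut conv sees x_l / beta_l
            else:
                var += alpha ** 2
    return betas

betas = beta_schedule([2, 2], alpha=1.0)  # [1, sqrt(2), sqrt(3), sqrt(2)]
```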
+
+# 4.1 RELU ACTIVATIONS INDUCE MEAN SHIFTS
+
+We plot the SPPs for Normalizer-Free ResNets (NF-ResNets) with $\alpha = 1$ in Figure 2. In green, we consider an NF-ResNet which initializes the convolutions with Gaussian weights using He initialization (He et al., 2015). Although one might expect this simple recipe to be sufficient to achieve good signal propagation, we observe two unexpected features in practice. First, the average value of the squared channel mean grows rapidly with depth, achieving large values which exceed the average channel variance. This indicates a large "mean shift", whereby the hidden activations for different training inputs (in this case different vectors sampled from the unit normal) are strongly correlated (Jacot et al., 2019; Ruff et al., 2019). Second, as observed for BN-ReLU-Conv networks in Section 3, the scale of the empirical variances on the residual branch is consistently smaller than one.
+
+To identify the origin of these effects, in Figure 7 (in Appendix F) we provide a similar SPP for a linearized version of ResNetV2-600 without ReLU activation functions. When the ReLU activations are removed, the averaged squared channel means remain close to zero for all block depths, and the empirical variance on the residual branch fluctuates around one. This motivates the following question: why might ReLU activations cause the scale of the mean activations on a channel to grow?
+
+To develop an intuition for this phenomenon, consider the transformation $z = Wg(x)$ , where $W$ is arbitrary and fixed, and $g(\cdot)$ is an activation function that acts component-wise on iid inputs $x$ such that $g(x)$ is also iid. Thus, $g(\cdot)$ can be any popular activation function like ReLU, tanh, SiLU, etc. Let $\mathbb{E}(g(x_i)) = \mu_g$ and $\mathrm{Var}(g(x_i)) = \sigma_g^2$ for all dimensions $i$ . It is straightforward to show that the expected value and the variance of any single unit $i$ of the output $z_i = \sum_j^N W_{i,j} g(x_j)$ is given by:
+
+$$
+\mathbb{E}(z_i) = N \mu_g \mu_{W_{i,\cdot}}, \quad \text{and} \quad \operatorname{Var}(z_i) = N \sigma_g^2 \left(\sigma_{W_{i,\cdot}}^2 + \mu_{W_{i,\cdot}}^2\right), \tag{1}
+$$
+
+where $\mu_{W_{i,\cdot}}$ and $\sigma_{W_{i,\cdot}}$ are the mean and standard deviation of the $i^{th}$ row of $W$ :
+
+$$
+\mu_{W_{i,\cdot}} = \frac{1}{N} \sum_{j=1}^{N} W_{i,j}, \quad \text{and} \quad \sigma_{W_{i,\cdot}}^2 = \frac{1}{N} \sum_{j=1}^{N} W_{i,j}^2 - \mu_{W_{i,\cdot}}^2. \tag{2}
+$$
+
+Now consider $g(\cdot)$ to be the ReLU activation function, i.e., $g(x) = \max (x,0)$ . Then $g(x)\geq 0$ , which implies that the input to the linear layer has positive mean (ignoring the edge case where all inputs are less than or equal to zero). In particular, notice that if $x_{i}\sim \mathcal{N}(0,1)$ for all $i$ , then $\mu_g = 1 / \sqrt{2\pi}$ . Since $\mu_g > 0$ , if $\mu_{W_{i,\cdot}}$ is also non-zero, then the output of the transformation, $z_{i}$ , will also exhibit a non-zero mean. Crucially, even if we sample $W$ from a distribution centred around zero, any specific weight matrix drawn from this distribution will almost surely have a non-zero empirical mean, and consequently the outputs of the residual branches on any specific channel will have non-zero mean values. This simple NF-ResNet model with He-initialized weights is therefore often unstable, and it becomes increasingly difficult to train as the depth increases.
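+This mean shift is easy to reproduce numerically; the toy sketch below (with illustrative sizes) compares the empirical per-unit output means of $z = Wg(x)$ under He initialization against the prediction $\mathbb{E}(z_i) = N\mu_g \mu_{W_{i,\cdot}}$ from equation 1:
+
```python
import numpy as np

rng = np.random.default_rng(42)
N = 256
# He initialization: zero-mean Gaussian weights with variance 2/N.
W = rng.normal(0.0, np.sqrt(2.0 / N), size=(N, N))
x = rng.standard_normal((4096, N))   # a batch of iid unit-normal inputs
z = np.maximum(x, 0.0) @ W.T         # z = W g(x) with g = ReLU

# Equation 1 with mu_g = 1/sqrt(2*pi) for the ReLU of a unit normal: each
# output unit's mean is driven by the empirical mean of its row of W, which
# is almost surely non-zero for any specific draw of W.
predicted = N * (1.0 / np.sqrt(2.0 * np.pi)) * W.mean(axis=1)
empirical = z.mean(axis=0)
```
+
+The empirical means track the prediction closely and sit far from zero, even though $W$ was sampled from a zero-centred distribution.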
+
+# 4.2 SCALED WEIGHT STANDARDIZATION
+
+To prevent the emergence of a mean shift, and to ensure that the residual branch $f_{\ell}(\cdot)$ is variance preserving, we propose Scaled Weight Standardization, a minor modification of the recently proposed Weight Standardization (Qiao et al., 2019), which is also closely related to Centered Weight Normalization (Huang et al., 2017b). We re-parameterize the convolutional layers by imposing
+
+$$
+\hat{W}_{i,j} = \gamma \cdot \frac{W_{i,j} - \mu_{W_{i,\cdot}}}{\sigma_{W_{i,\cdot}} \sqrt{N}}, \tag{3}
+$$
+
+where the mean $\mu_{W_{i,\cdot}}$ and standard deviation $\sigma_{W_{i,\cdot}}$ are computed across the fan-in extent of the convolutional filters. We initialize the underlying parameters $W$ from Gaussian weights, while $\gamma$ is a fixed constant. As in Qiao et al. (2019), we impose this constraint throughout training as a differentiable operation in the forward pass of the network. Recalling equation 1, we can immediately see that the output of the transformation using Scaled WS, $z = \hat{W} g(x)$ , has expected value $\mathbb{E}(z_i) = 0$ for all $i$ , thus eliminating the mean shift. Furthermore, the variance is $\operatorname{Var}(z_i) = \gamma^2\sigma_g^2$ , meaning that for a correctly chosen $\gamma$ , which depends on the non-linearity $g$ , the layer is variance preserving. Scaled Weight Standardization is cheap during training and free at inference, scales well (with the number of parameters rather than the number of activations), introduces no dependence between batch elements and no discrepancy between training and test behavior, and its implementation does not differ in distributed training. These desirable properties make it a compelling alternative to BatchNorm.
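+A minimal sketch of this re-parameterization for a 2-D weight matrix (the function name is ours; for convolutional filters the statistics run over the full fan-in extent, i.e. kernel height × width × input channels):
+
```python
import numpy as np

def scaled_ws(W, gain=1.0, eps=1e-10):
    """Scaled Weight Standardization (equation 3): standardize each row of W
    across its fan-in, scaled so that the row-wise sum of squares is gain**2."""
    fan_in = W.shape[-1]
    mu = W.mean(axis=-1, keepdims=True)
    sigma = W.std(axis=-1, keepdims=True)
    return gain * (W - mu) / (sigma * np.sqrt(fan_in) + eps)

rng = np.random.default_rng(0)
W_hat = scaled_ws(rng.standard_normal((64, 256)), gain=2.0)
# Every row of W_hat has exactly zero mean, so E(z_i) = 0 in equation 1,
# and Var(z_i) = gain**2 * sigma_g**2 for activations g(x) with variance sigma_g**2.
```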
+
+The SPP of a normalizer-free ResNet-600 employing Scaled WS is shown in Figure 2 in cyan. As we can see, Scaled Weight Standardization eliminates the growth of the average channel squared mean at initialization. Indeed, the SPPs are almost identical to the SPPs for a batch-normalized network employing the ReLU-BN-Conv ordering, shown in red. Note that we select the constant $\gamma$ to ensure that the channel variance on the residual branch is close to one (discussed further below). The variance on the residual branch decays slightly near the end of the network due to zero padding.
+
+# 4.3 DETERMINING NONLINEARITY-SPECIFIC CONSTANTS $\gamma$
+
+The final ingredient we need is to determine the value of the gain $\gamma$ , in order to ensure that the variances of the hidden activations on the residual branch are close to 1 at initialization. Note that the value of $\gamma$ will depend on the specific nonlinearity used in the network. We derive the value of $\gamma$ by assuming that the input $x$ to the nonlinearity is sampled iid from $\mathcal{N}(0,1)$ . For ReLU networks, this implies that the outputs $g(x) = \max (x,0)$ will be sampled from the rectified Gaussian distribution with variance $\sigma_g^2 = (1 / 2)(1 - (1 / \pi))$ (Arpit et al., 2016). Since $\operatorname{Var}(\hat{W} g(x)) = \gamma^2\sigma_g^2$ , we set $\gamma = 1 / \sigma_{g} = \frac{\sqrt{2}}{\sqrt{1 - \frac{1}{\pi}}}$ to ensure that $\operatorname{Var}(\hat{W} g(x)) = 1$ . While the assumption $x\sim \mathcal{N}(0,1)$ is not typically true unless the network width is large, we find this approximation to work well in practice.
+
+For simple nonlinearities like ReLU or tanh, the analytical variance of $g(x)$ when $x$ is drawn from the unit normal may be known or easy to derive. For other nonlinearities, such as SiLU (Elfwing et al., 2018; Hendrycks & Gimpel, 2016), recently popularized as Swish (Ramachandran et al., 2018), analytically determining the variance can involve solving difficult integrals, and a closed form may not even exist. In practice, we find that it is sufficient to numerically approximate this value by the simple procedure of drawing many $N$ -dimensional vectors $x$ from the Gaussian distribution, computing the empirical variance $\mathrm{Var}(g(x))$ for each vector, and taking the square root of the average of this empirical variance. We provide an example in Appendix D showing how this can be accomplished for any nonlinearity with just a few lines of code, and we provide reference values.
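+This numerical procedure takes only a few lines; a sketch (the helper name and sample counts are illustrative), which can be checked against the analytic ReLU value derived above:
+
```python
import numpy as np

def estimate_gamma(g, n_samples=1024, dim=256, seed=0):
    """Monte Carlo estimate of gamma = 1 / sqrt(E[Var(g(x))]) for x ~ N(0, I):
    draw many dim-dimensional unit-normal vectors, compute the empirical
    variance of g on each, and invert the square root of the average."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, dim))
    return float(1.0 / np.sqrt(g(x).var(axis=-1).mean()))

gamma_relu = estimate_gamma(lambda x: np.maximum(x, 0.0))
# For ReLU this approaches the analytic gain sqrt(2)/sqrt(1 - 1/pi) ~ 1.713.
```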
+
+# 4.4 OTHER BUILDING BLOCKS AND RELAXED CONSTRAINTS
+
+Our method generally requires that any additional operations used in a network maintain good signal propagation, which means many common building blocks must be modified. As with selecting $\gamma$ values, the necessary modification can be determined analytically or empirically. For example, the popular Squeeze-and-Excitation operation (S+E, Hu et al. (2018)), $y = \text{sigmoid}(MLP(pool(h))) * h$ , involves multiplication by an activation in $[0, 1]$ , and will tend to attenuate the signal and make the model unstable. This attenuation can clearly be seen in the SPP of a normalizer-free ResNet using these blocks (see Figure 9 in Appendix F). If we examine this operation in isolation using the simple numerical procedure explained above, we find that the expected variance is 0.5 (for unit normal inputs), indicating that we simply need to multiply the output by 2 to recover good signal propagation. We empirically verified that this simple change is sufficient to restore training stability.
+
+In practice, we find that either a similarly simple modification to any given operation is sufficient to maintain good signal propagation, or that the network is sufficiently robust to the degradation induced by the operation to train well without modification. We also explore the degree to which we can relax our constraints and still maintain stable training. As an example of this, to recover some of the expressivity of a normal convolution, we introduce learnable affine gains and biases to the Scaled WS layer (the gain is applied to the weight, while the bias is added to the activation, as is typical). While we could constrain these values to enforce good signal propagation by, for example, downscaling the output by a scalar proportional to the values of the gains, we find that this is not necessary for stable training, and that stability is not impacted when these parameters vary freely. Relatedly, we find that using a learnable scalar multiplier at the end of the residual branch initialized to 0 (Goyal et al., 2017; De & Smith, 2020) helps when training networks over 150 layers, even if we ignore this modification when computing $\beta_{\ell}$ . In our final models, we employ several such relaxations without loss of training stability. We provide detailed explanations for each operation and any modifications we make in Appendix C (also detailed in our model code in Appendix D).
+
+# 4.5 SUMMARY
+
+In summary, the core recipe for a Normalizer-Free ResNet (NF-ResNet) is:
+
+1. Compute and forward propagate the expected signal variance $\beta_{\ell}^{2}$ , which grows by $\alpha^2$ after each residual block ( $\beta_0 = 1$ ). Downscale the input to each residual branch by $\beta_{\ell}$ .
+2. Additionally, downscale the input to the convolution on the skip path in transition blocks by $\beta_{\ell}$ , and reset the expected variance to $\beta_{\ell +1}^{2} = 1 + \alpha^2$ following a transition block.
+3. Employ Scaled Weight Standardization in all convolutional layers, computing $\gamma$ , the gain specific to the activation function $g(x)$ , as the reciprocal of the expected standard deviation, $\frac{1}{\sqrt{\operatorname{Var}(g(x))}}$ , assuming $x \sim \mathcal{N}(0, 1)$ .
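+Putting the recipe together, the forward pass of a single block reduces to one line; the toy sketch below (with a random variance-preserving linear map standing in for the residual branch $f_{\ell}$) checks the variance-growth rule from step 1:
+
```python
import numpy as np

def nf_residual_block(x, f, beta, alpha=0.2):
    """One Normalizer-Free residual block: x_{l+1} = x_l + alpha * f(x_l / beta),
    where f is variance preserving and beta**2 is the expected input variance."""
    return x + alpha * f(x / beta)

rng = np.random.default_rng(0)
n = 256
W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # variance-preserving branch
x = rng.standard_normal((4096, n))                  # Var(x_0) = 1, so beta_0 = 1
y = nf_residual_block(x, f=lambda z: z @ W.T, beta=1.0, alpha=1.0)
# Var(x_1) ~= Var(x_0) + alpha**2 = 2, as the recipe predicts.
```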
+
+Code is provided in Appendix D for a reference Normalizer-Free Network.
+
+# 5 EXPERIMENTS
+
+# 5.1 AN EMPIRICAL EVALUATION ON RESNETS
+
+| Model | FixUp (Unreg.) | FixUp (Reg.) | SkipInit (Unreg.) | SkipInit (Reg.) | NF-ResNets (Unreg.) | NF-ResNets (Reg.) | BN-ResNets (Unreg.) | BN-ResNets (Reg.) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| RN50 | 74.0 ± .5 | 75.9 ± .3 | 73.7 ± .2 | 75.8 ± .2 | 75.8 ± .1 | 76.8 ± .1 | 76.8 ± .1 | 76.4 ± .1 |
+| RN101 | 75.4 ± .6 | 77.6 ± .3 | 75.1 ± .1 | 77.3 ± .2 | 77.1 ± .1 | 78.4 ± .1 | 78.0 ± .1 | 78.1 ± .1 |
+| RN152 | 75.8 ± .4 | 78.4 ± .3 | 75.7 ± .2 | 78.0 ± .1 | 77.6 ± .1 | 79.1 ± .1 | 78.6 ± .2 | 78.8 ± .1 |
+| RN200 | 76.2 ± .5 | 78.7 ± .3 | 75.9 ± .2 | 78.2 ± .1 | 77.9 ± .2 | 79.6 ± .1 | 79.0 ± .2 | 79.2 ± .1 |
+| RN288 | 76.2 ± .4 | 78.4 ± .4 | 76.3 ± .2 | 78.7 ± .2 | 78.1 ± .1* | 79.5 ± .1 | 78.8 ± .1 | 79.5 ± .1 |
+
+Table 1: ImageNet Top-1 Accuracy (%) for ResNets with FixUp (Zhang et al., 2019) or SkipInit (De & Smith, 2020), Normalizer-Free ResNets (ours), and Batch-Normalized ResNets. We train all variants both with and without additional regularization (stochastic depth and dropout). Results are given as the median accuracy $\pm$ the standard deviation across 5 random seeds. * indicates a setting where two runs collapsed and results are reported only using the 3 seeds which train successfully.
+
+We begin by investigating the performance of Normalizer-Free pre-activation ResNets on the ILSVRC dataset (Russakovsky et al., 2015), for which we compare our networks to FixUp initialization (Zhang et al., 2019), SkipInit (De & Smith, 2020), and batch-normalized ResNets. We use a training setup based on Goyal et al. (2017), and train our models using SGD (Robbins & Monro, 1951) with Nesterov's Momentum (Nesterov, 1983; Sutskever et al., 2013) for 90 epochs with a batch size of 1024 and a learning rate which warms up from zero to 0.4 over the first 5 epochs, then decays to zero using cosine annealing (Loshchilov & Hutter, 2017). We employ standard baseline preprocessing (sampling and resizing distorted bounding boxes, along with random flips), weight decay of 5e-5, and label smoothing of 0.1 (Szegedy et al., 2016). For Normalizer-Free ResNets (NF-ResNets), we chose $\alpha = 0.2$ based on a small sweep, and employ SkipInit as discussed above. For both FixUp and SkipInit we had to reduce the learning rate to 0.2 to enable stable training.
+
+We find that without additional regularization, our NF-ResNets achieve higher training accuracies but lower test accuracies than their batch-normalized counterparts. This is likely caused by the known regularization effect of BatchNorm (Hoffer et al., 2017; Luo et al., 2019; De & Smith, 2020). We therefore introduce stochastic depth (Huang et al., 2016) with a rate of 0.1, and Dropout (Srivastava et al., 2014) before the final linear layer with a drop probability of 0.25. We note that adding this same regularization does not substantially improve the performance of the normalized ResNets in our setup, suggesting that BatchNorm is indeed already providing some regularization benefit.
+
+In Table 1 we compare performance of our networks (NF-ResNets) against the baseline (BN-ResNets), across a wide range of network depths. After introducing additional regularization, NF-ResNets achieve performance better than FixUp/SkipInit and competitive with BN across all network depths, with our regularized NF-ResNet-288 achieving top-1 accuracy of $79.5\%$ . However, some of the 288 layer normalizer-free models undergo training collapse at the chosen learning rate,
+
+| Model | NF-ResNets (ours) BS=1024 | NF BS=8 | NF BS=4 | BN-ResNets BS=1024 | BN BS=8 | BN BS=4 |
+| --- | --- | --- | --- | --- | --- | --- |
+| ResNet-50 | 69.9 ± 0.1 | 69.6 ± 0.1 | 69.9 ± 0.1 | 70.9 ± 0.1 | 65.7 ± 0.2 | 55.7 ± 0.3 |
+
+Table 2: ImageNet Top-1 Accuracy (%) for Normalizer-Free ResNets and Batch-Normalized ResNet-50s on ImageNet, using very small batch sizes trained for 15 epochs. Results are given as the median accuracy $\pm$ the standard deviation across 5 random seeds. Performance degrades severely for Batch-Normalized networks, while Normalizer-Free ResNets retain good performance.
+
+but only when unregularized. While we can remove this instability by reducing the learning rate to 0.2, this comes at the cost of test accuracy. We investigate this failure mode in Appendix A.
+
+One important limitation of BatchNorm is that its performance degrades when the per-device batch size is small (Hoffer et al., 2017; Wu & He, 2018). To demonstrate that our normalizer-free models overcome this limitation, we train ResNet-50s on ImageNet using very small batch sizes of 8 and 4, and report the results in Table 2. These models are trained for 15 epochs (2.4M and 4.8M training steps, respectively) with a learning rate of 0.025 for batch size 8 and 0.01 for batch size 4. For comparison, we also include the accuracy obtained when training for 15 epochs at batch size 1024 and learning rate 0.4. The NF-ResNet achieves significantly better performance when the batch size is small, and is not affected by the shift from batch size 8 to 4, demonstrating the usefulness of our approach in the microbatch setting. Note that we do not apply stochastic depth or dropout in these experiments, which may explain the superior performance of the BN-ResNet at batch size 1024. We also study the transferability of our learned representations to the downstream tasks of semantic segmentation and depth estimation, and present the results of these experiments in Appendix H.
+
+# 5.2 DESIGNING PERFORMANT NORMALIZER-FREE NETWORKS
+
+We now turn our attention to developing unnormalized networks which are competitive with the state-of-the-art EfficientNet model family across a range of FLOP budgets (Tan & Le, 2019). We focus primarily on the small budget regime (EfficientNets B0-B4), but also report results for B5 and hope to extend our investigation to larger variants in future work.
+
+First, we apply Scaled WS and our Normalizer-Free structure directly to the EfficientNet backbone. While we succeed in training these networks stably without normalization, we find that even after extensive tuning our Normalizer-Free EfficientNets still substantially underperform their batch-normalized baselines. For example, our normalizer-free B0 variant achieves $73.5\%$ top-1, a $3.2\%$ absolute degradation relative to the baseline. We hypothesize that this degradation arises because Weight Standardization imposes a very strong constraint on depth-wise convolutions (which have an input channel count of 1), and this constraint may remove a substantial fraction of the model's expressivity. To support this claim, we note that removing Scaled WS from the depth-wise convolutions improves the test accuracy of Normalizer-Free EfficientNets, although it also reduces training stability.
+
+
+Figure 3: ImageNet Top-1 test accuracy versus FLOPs.
+
+Therefore, to overcome the potentially poor interactions between Weight Standardization and depth-wise convolutions, we decided instead to study Normalizer-Free variants of the RegNet model family (Radosavovic et al., 2020). RegNets are slightly modified variants of ResNeXts (Xie et al., 2017), developed via manual architecture search. Crucially, RegNets employ grouped convolutions, which we anticipate are more compatible with Scaled WS than depth-wise convolutions, since the fraction of the degrees of freedom in the model weights remaining after the weight standardization operation is higher.
+
+We develop a new base model by taking the 0.4B FLOP RegNet variant, and making several minor architectural changes which cumulatively substantially improve the model performance. We describe our final model in full in Appendix C, however we emphasize that most of the architecture changes we introduce simply reflect well-known best practices from the literature (Tan & Le, 2019; He et al., 2019). To assess the performance of our Normalizer-Free RegNets across a range of FLOPS budgets, we apply the EfficientNet compound scaling approach (which increases the width, depth and input resolution in tandem according to a set of three power laws learned using architecture search) to obtain model variants at a range of approximate FLOPS targets. Denoting these models NF-RegNets, we train variants B0-B5 (analogous to EfficientNet variants) using both baseline preprocessing and combined CutMix (Yun et al., 2019) and MixUp (Zhang et al., 2018) augmentation. Note that we follow the same compound scaling hyper-parameters used by EfficientNets, and do not retune these hyper-parameters on our own architecture. We compare the test accuracies of EfficientNets and NF-RegNets on ImageNet in Figure 3, and we provide the corresponding numerical values in Table 3 of Appendix A. We present a comparison of training speeds in Table 5 of Appendix A.
+
+For each FLOPS and augmentation setting, NF-RegNets attain comparable but slightly lower test accuracies than EfficientNets, while being substantially faster to train. In the augmented setting, we report EfficientNet results with AutoAugment (AA) or RandAugment (RA) (Cubuk et al., 2019; 2020), which we find perform better than training EfficientNets with CutMix+MixUp. However, both AA and RA degrade the performance and stability of NF-RegNets, and hence we report results of NF-RegNets with CutMix+MixUp instead. We hypothesize that this occurs because AA and RA were developed by applying architecture search to batch-normalized models, and that they may therefore change the statistics of the dataset in a way that negatively impacts signal propagation when normalization layers are removed. To support this claim, we note that inserting a single BatchNorm layer after the first convolution in an NF-RegNet removes these instabilities and enables us to train stably with either AA or RA, although this approach does not achieve higher test set accuracies.
+
+These observations highlight that, although our models do benefit from most of the architectural improvements and best practices which researchers have developed from the hundreds of thousands of device hours used while tuning batch-normalized models, there are certain aspects of existing state-of-the-art models, like AA and RA, which may implicitly rely on the presence of activation normalization layers in the network. Furthermore there may be other components, like depth-wise convolutions, which are incompatible with promising new primitives like Weight Standardization. It is therefore inevitable that some fine-tuning and model development is necessary to achieve competitive accuracies when removing a component like batch normalization which is crucial to the performance of existing state-of-the-art networks. Our experiments confirm for the first time that it is possible to develop deep ResNets which do not require batch normalization or other activation normalization layers, and which not only train stably and achieve low training losses, but also attain test accuracy competitive with the current state of the art on a challenging benchmark like ImageNet.
+
+# 6 CONCLUSION
+
+We introduce Normalizer-Free Networks, a simple approach for designing residual networks which do not require activation normalization layers. Across a range of FLOP budgets, our models achieve performance competitive with the state-of-the-art EfficientNets on ImageNet. Meanwhile, our empirical analysis of signal propagation suggests that batch normalization resolves two key failure modes at initialization in deep ResNets. First, it suppresses the scale of the hidden activations on the residual branch, preventing signal explosion. Second, it prevents the mean squared scale of the activations on each channel from exceeding the variance of the activations between examples. Our Normalizer-Free Networks were carefully designed to resolve both of these failure modes.
+
+# ACKNOWLEDGMENTS
+
+We would like to thank Karen Simonyan for helpful discussions and guidance, as well as Guillaume Desjardins, Michael Figurnov, Nikolay Savinov, Omar Rivasplata, Relja Arandjelović, and Rishub Jain.
+
+# REFERENCES
+
+Devansh Arpit, Yingbo Zhou, Bhargava Kota, and Venu Govindaraju. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. In International Conference on Machine Learning, pp. 1168-1176, 2016.
+Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
+Thomas Bachlechner, Bodhisattwa Prasad Majumder, Huanru Henry Mao, Garrison W Cottrell, and Julian McAuley. Rezero is all you need: Fast convergence at large depth. arXiv preprint arXiv:2003.04887, 2020.
+David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma, and Brian McWilliams. The shattered gradients problem: If resnets are the answer, then what is the question? In International Conference on Machine Learning, pp. 342-350, 2017.
+Nils Bjorck, Carla P Gomes, Bart Selman, and Kilian Q Weinberger. Understanding batch normalization. In Advances in Neural Information Processing Systems, pp. 7694-7705, 2018.
+James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, and Skye Wanderman-Milne. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
+Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation strategies from data. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 113-123, 2019.
+Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702-703, 2020.
+Soham De and Sam Smith. Batch normalization biases residual blocks towards the identity function in deep networks. Advances in Neural Information Processing Systems, 33, 2020.
+Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks, 107:3-11, 2018.
+Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
+Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33, 2020.
+Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. In Advances in neural information processing systems, 2017.
+Boris Hanin and David Rolnick. How to start training: The effect of initialization and architecture. In Advances in Neural Information Processing Systems, pp. 571-581, 2018.
+
+Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. Nature, 585(7825):357-362, Sep 2020. ISSN 1476-4687.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1026-1034, 2015.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630-645. Springer, 2016a.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016b.
+Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9729-9738, 2020.
+Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, and Mu Li. Bag of tricks for image classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 558-567, 2019.
+Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016.
+Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. In Advances in Neural Information Processing Systems, pp. 1731-1741, 2017.
+Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
+Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7132-7141, 2018.
+Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In European conference on computer vision, pp. 646-661. Springer, 2016.
+Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700-4708, 2017a.
+Lei Huang, Xianglong Liu, Yang Liu, Bo Lang, and Dacheng Tao. Centered weight normalization in accelerating training of deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2803-2811, 2017b.
+Lei Huang, Jie Qin, Yi Zhou, Fan Zhu, Li Liu, and Ling Shao. Normalization techniques in training DNNs: Methodology, analysis and application. arXiv preprint arXiv:2009.12836, 2020.
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
+Arthur Jacot, Franck Gabriel, and Clément Hongler. Freeze and chaos for dnns: an ntk view of batch normalization, checkerboard and boundary effects. arXiv preprint arXiv:1907.05715, 2019.
+Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10215-10224, 2018.
+Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. In Advances in neural information processing systems, pp. 971-980, 2017.
+
+Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Large scale learning of general visual representations for transfer. arXiv preprint arXiv:1912.11370, 2019.
+Iro Laina, Christian Rupprecht, Vasileios Belagiannis, Federico Tombari, and Nassir Navab. Deeper depth prediction with fully convolutional residual networks. In 2016 Fourth international conference on 3D vision (3DV), pp. 239-248. IEEE, 2016.
+Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431-3440, 2015.
+Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with warm restarts. In 5th International Conference on Learning Representations, ICLR, 2017.
+Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR, 2019.
+Ping Luo, Xinjiang Wang, Wenqi Shao, and Zhanglin Peng. Towards understanding regularization in batch normalization. In 7th International Conference on Learning Representations, ICLR, 2019.
+Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In Proceedings of the European conference on computer vision (ECCV), pp. 116-131, 2018.
+Dmytro Mishkin and Jiri Matas. All you need is a good init. In 4th International Conference on Learning Representations, ICLR, 2016.
+Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018.
+Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence $O(1 / k^2)$. Doklady AN USSR, 269:543-547, 1983.
+Art B Owen. A robust hybrid of lasso and ridge regression. 2007.
+Hung Viet Pham, Thibaud Lutellier, Weizhen Qi, and Lin Tan. Cradle: cross-backend validation to detect and localize bugs in deep learning libraries. In 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), pp. 1027-1038. IEEE, 2019.
+Boris Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1-17, 1964.
+Haozhi Qi, Chong You, Xiaolong Wang, Yi Ma, and Jitendra Malik. Deep isometric learning for visual recognition. In International Conference on Machine Learning, pp. 7824-7835. PMLR, 2020.
+Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, and Alan Yuille. Weight standardization. arXiv preprint arXiv:1903.10520, 2019.
+Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollár. Designing network design spaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10428-10436, 2020.
+Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Searching for activation functions. In 6th International Conference on Learning Representations, ICLR, Workshop Track Proceedings, 2018.
+Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400-407, 1951.
+Samuel Rota Bulò, Lorenzo Porzi, and Peter Kontschieder. In-place activated batchnorm for memory-optimized training of dnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5639–5647, 2018.
+
+Brendan Ruff, Taylor Beck, and Joscha Bach. Mean shift rejection: Training deep neural networks without minibatch statistics or normalization. arXiv preprint arXiv:1911.13173, 2019.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, and Michael Bernstein. ImageNet large scale visual recognition challenge. IJCV, 115:211-252, 2015.
+Tim Salimans and Durk P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in neural information processing systems, pp. 901-909, 2016.
+Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4510-4520, 2018.
+Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? In Advances in Neural Information Processing Systems, pp. 2483-2493, 2018.
+Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In European conference on computer vision, pp. 746-760. Springer, 2012.
+Samuel Smith, Erich Olsen, and Soham De. On the generalization benefit of noise in stochastic gradient descent. In International Conference on Machine Learning, pp. 9058-9067. PMLR, 2020.
+Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.
+Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
+Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pp. 1139-1147, 2013.
+C Szegedy, V Vanhoucke, S Ioffe, J Shlens, and Z Wojna. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818-2826, 2016.
+Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pp. 4278–4284, 2017.
+Masato Taki. Deep residual networks and weight initialization. arXiv preprint arXiv:1709.02956, 2017.
+Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pp. 6105-6114, 2019.
+Hugo Touvron, Andrea Vedaldi, Matthijs Douze, and Hervé Jégou. Fixing the train-test resolution discrepancy. In Advances in Neural Information Processing Systems, pp. 8252-8262, 2019.
+Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
+Longhui Wei, An Xiao, Lingxi Xie, Xiaopeng Zhang, Xin Chen, and Qi Tian. Circumventing outliers of autoaugment with knowledge distillation. In ECCV, pp. 608-625, 2020.
+Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3-19, 2018.
+
+Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1492-1500, 2017.
+Yunyang Xiong, Hanxiao Liu, Suyog Gupta, Berkin Akin, Gabriel Bender, Pieter-Jan Kindermans, Mingxing Tan, Vikas Singh, and Bo Chen. Mobiledets: Searching for object detection architectures for mobile accelerators. arXiv preprint arXiv:2004.14525, 2020.
+Ge Yang and Samuel Schoenholz. Mean field residual networks: On the edge of chaos. In Advances in neural information processing systems, pp. 7103-7114, 2017.
+Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. A mean field theory of batch normalization. In 7th International Conference on Learning Representations, ICLR, 2019.
+Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6023-6032, 2019.
+Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In Proceedings of the British Machine Vision Conference 2016, BMVC, 2016.
+Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In 6th International Conference on Learning Representations, ICLR, 2018.
+Hongyi Zhang, Yann N. Dauphin, and Tengyu Ma. Fixup initialization: Residual learning without normalization. In 7th International Conference on Learning Representations, ICLR, 2019.
+
+APPENDIX A EXPERIMENT DETAILS
+
+| Model | #FLOPs | #Params | Top-1 w/o Augs | Top-1 w/ Augs |
| --- | --- | --- | --- | --- |
| NF-RegNet-B0 | 0.44B | 8.3M | 76.8 ± 0.2 | 77.0 ± 0.1 |
| EfficientNet-B0 | 0.39B | 5.3M | 76.7 | 77.1 |
| RegNetY-400MF | 0.40B | 4.3M | 74.1 | - |
| NF-RegNet-B1 | 0.73B | 9.8M | 78.6 ± 0.1 | 78.7 ± 0.1 |
| EfficientNet-B1 | 0.70B | 7.8M | 78.7 | 79.1 |
| RegNetY-600MF | 0.60B | 6.1M | 75.5 | - |
| RegNetY-800MF | 0.80B | 6.3M | 76.3 | - |
| MobileNet (Howard et al., 2017) | 0.60B | 4.2M | 70.6 | - |
| MobileNet v2 (Sandler et al., 2018) | 0.59B | 6.9M | 74.7 | - |
| ShuffleNet v2 (Ma et al., 2018) | 0.59B | - | 74.9 | - |
| NF-RegNet-B2 | 1.09B | 13.4M | 79.6 ± 0.1 | 80.0 ± 0.1 |
| EfficientNet-B2 | 1.00B | 9.2M | 79.8 | 80.1 |
| NF-RegNet-B3 | 1.98B | 17.6M | 80.6 ± 0.1 | 81.2 ± 0.1 |
| EfficientNet-B3 | 1.80B | 12.0M | 81.1 | 81.6 |
| NF-RegNet-B4 | 4.43B | 28.5M | 81.7 ± 0.1 | 82.5 ± 0.1 |
| EfficientNet-B4 | 4.20B | 19.0M | 82.5 | 82.9 |
| RegNetY-4.0GF | 4.00B | 20.6M | 79.4 | - |
| ResNet50 | 4.10B | 26.0M | 76.8 | 78.6 |
| DenseNet-169 (Huang et al., 2017a) | 3.50B | 14.0M | 76.2 | - |
| NF-RegNet-B5 | 10.48B | 47.5M | 82.0 ± 0.2 | 83.0 ± 0.2 |
| EfficientNet-B5 | 9.90B | 30.0M | 83.1 | 83.7 |
| RegNetY-12GF | 12.10B | 51.8M | 80.3 | - |
| ResNet152 | 11.00B | 60.0M | 78.6 | - |
| Inception-v4 (Szegedy et al., 2017) | 13.00B | 48.0M | 80.0 | - |
+
+Table 3: ImageNet Top-1 Accuracy (%) comparison for NF-RegNets and recent state-of-the-art models. "w/ Augs" refers to accuracy with advanced augmentations: for EfficientNets, this is with AutoAugment or RandAugment. For NF-RegNets, this is with CutMix + MixUp. NF-RegNet results are reported as the median and standard deviation across 5 random seeds.
+
+# A.1 STABILITY, LEARNING RATES, AND BATCH SIZES
+
+Previous work (Goyal et al., 2017) has established a fairly robust linear relationship between the optimal learning rate (or highest stable learning rate) and batch size for Batch-Normalized ResNets. As noted in Smith et al. (2020), we also find that this relationship breaks down past batch size 1024 for our unnormalized ResNets, as opposed to 2048 or 4096 for normalized ResNets. Both the optimal learning rate and the highest stable learning rate decrease for higher batch sizes. This also appears to correlate with depth: when not regularized, our deepest models are not always stable with a learning rate of 0.4. While we can mitigate this collapse by reducing the learning rate for deeper nets, this introduces additional tuning expense and is clearly undesirable. It is not presently clear why regularization aids in stability; we leave investigation of this phenomenon to future work.
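As a concrete illustration, the linear scaling heuristic with a stability cap can be sketched as follows. The helper name and the hard cap at batch size 1024 (reflecting the breakdown we observe for unnormalized ResNets) are illustrative assumptions, not the tuning procedure actually used:

```python
def scaled_lr(base_lr, base_batch, batch_size, max_stable_batch=1024):
    """Linear learning-rate scaling (Goyal et al., 2017), with a
    hypothetical cap at the batch size past which the linear
    relationship breaks down for unnormalized ResNets."""
    effective = min(batch_size, max_stable_batch)
    return base_lr * effective / base_batch
```

Beyond the cap, further increasing the batch size no longer increases the learning rate, mirroring the observation that the highest stable learning rate decreases at large batch sizes.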
+
+Taking a closer look at collapsed networks, we find that even though their outputs have exploded (becoming large enough to go NaN), their weight magnitudes are not especially large, even if we remove our relaxed affine transforms and train networks whose layers are purely weight-standardized. The singular values of the weights, however, end up poorly conditioned, meaning that the Lipschitz constant of the network can become quite large, an effect which Scaled WS does not prevent. One might consider adopting one of the many techniques from the GAN literature to regularize or constrain this constant (Gulrajani et al., 2017; Miyato et al., 2018), but we have found that this added complexity and expense is not necessary to develop performant unnormalized networks.
+
+This collapse highlights an important limitation of our approach, and of SPPs: as SPPs only show signal propagation for a given state of a network (i.e., at initialization), no guarantees are provided far from initialization. This fact drives us to prefer parameterizations like Scaled WS rather than relying solely on initialization strategies, and highlights that while good signal propagation is generally necessary for stable optimization, it is not always sufficient.
+
+# A.2 TRAINING SPEED
+
+| Model | NF (BS16) | BN (BS16) | NF (BS32) | BN (BS32) | NF (BS64) | BN (BS64) |
| --- | --- | --- | --- | --- | --- | --- |
| ResNet-50 | 17.3 | 16.42 | 10.5 | 9.45 | 5.79 | 5.24 |
| ResNet-101 | 10.4 | 9.4 | 6.28 | 5.75 | 3.46 | 3.08 |
| ResNet-152 | 7.02 | 5.57 | 4.4 | 3.95 | 2.4 | 2.17 |
| ResNet-200 | 5.22 | 3.53 | 3.26 | 2.61 | OOM | OOM |
| ResNet-288 | 3.0 | 2.25 | OOM | OOM | OOM | OOM |
+
+Table 4: Training speed (in training iterations per second) comparisons of NF-ResNets and BN-ResNets on a single 16GB V100 for various batch sizes.
+
+| Variant | NF-RegNet (BS16) | EffNet (BS16) | NF-RegNet (BS32) | EffNet (BS32) | NF-RegNet (BS64) | EffNet (BS64) |
| --- | --- | --- | --- | --- | --- | --- |
| B0 | 12.2 | 5.23 | 9.25 | 3.61 | 6.51 | 2.61 |
| B1 | 9.06 | 3.0 | 6.5 | 2.14 | 4.69 | 1.55 |
| B2 | 5.84 | 2.22 | 4.05 | 1.68 | 2.7 | 1.16 |
| B3 | 4.49 | 1.57 | 3.1 | 1.05 | 2.13 | OOM |
| B4 | 2.73 | 0.94 | 1.96 | OOM | OOM | OOM |
| B5 | 1.66 | OOM | OOM | OOM | OOM | OOM |
+
+Table 5: Training speed (in training iterations per second) comparisons of NF-RegNets and Batch-Normalized EfficientNets on a single 16GB V100 for various batch sizes.
+
+We evaluate the relative training speed of our normalizer-free models against batch-normalized models, measured as the number of training steps per second. When comparing NF-RegNets against EfficientNets, we measure using the EfficientNet image sizes for each variant to ensure comparable settings; in practice we employ smaller image sizes, so the actual observed training speed of NF-RegNets is faster.
+
+# APPENDIX B MODIFIED BUILDING BLOCKS
+
+In order to maintain good signal propagation in our Normalizer-Free models, we must ensure that any architectural modifications do not compromise our model's conditioning, as we cannot rely on activation normalizers to automatically correct for such changes. However, our models are not so fragile as to be unable to handle slight relaxations in this realm. We leverage this robustness to improve model expressivity and to incorporate known best practices for model design.
+
+# B.1 AFFINE GAINS AND BIASES
+
+First, we add affine gains and biases to our network, similar to those used by activation normalizers. These are applied as a vector gain, each element of which multiplies a given output unit of a reparameterized convolutional weight, and a vector bias, which is added to the output of each convolution. We also experimented with using these as a separate affine transform applied before the ReLU, but moved the parameters next to the weight instead to enable constant-folding for inference. As is common practice with normalizer parameters, we do not weight decay or otherwise regularize these weights.
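A minimal NumPy sketch of folding the affine gain into the standardized weight, assuming an HWIO kernel layout and Scaled WS as described in the main text (the function name and epsilon placement are illustrative, and the nonlinearity gain $\gamma$ is omitted here):

```python
import numpy as np

def scaled_ws_weight(w, gain, eps=1e-4):
    """Weight-standardize an HWIO conv kernel and fold in a
    per-output-channel affine gain. `gain` multiplies each output
    unit; the bias would be added to the convolution output.
    Illustrative sketch, not the exact released implementation."""
    fan_in = np.prod(w.shape[:-1])
    mean = w.mean(axis=(0, 1, 2), keepdims=True)
    var = w.var(axis=(0, 1, 2), keepdims=True)
    w_hat = (w - mean) / np.sqrt(np.maximum(var * fan_in, eps))
    return w_hat * gain  # gain broadcasts over the output-channel axis
```

Because the gain and bias sit next to the weight rather than after the convolution, they can be constant-folded into the kernel at inference time.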
+
+Even though these parameters are allowed to vary freely, we do not find that they are responsible for training instability, even in networks where we observe collapse. Indeed, we find that for settings which collapse (typically due to learning rates being too high), removing the affine transform has no impact on stability. As discussed in Appendix A, we observe that model instability arises as a result of the collapse of the spectra of the weights, rather than any consequence of the affine gains and biases.
+
+# B.2 STOCHASTIC DEPTH
+
+We incorporate Stochastic Depth (Huang et al., 2016), where the output of the residual branch of a block is randomly set to zero during training. This is often implemented such that if the block is kept, its value is divided by the keep probability. We remove this rescaling factor to help maintain signal propagation when the signal is kept, but otherwise do not find it necessary to modify this block.
+
+While it is possible that we might have an example where many blocks are dropped and signals are attenuated, in practice we find that, as with affine gains, removing Stochastic Depth does not improve stability, and adding it does not reduce stability. One might also consider a slightly more principled variant of Stochastic Depth in this context, where the skip connection is upscaled by $1 + \alpha$ if the residual branch is dropped, resulting in the variance growing as expected, but we did not find this strategy necessary.
+
+# B.3 SQUEEZE AND EXCITE LAYERS
+
+As mentioned in Section 4.4, we incorporate Squeeze and Excite layers (Hu et al., 2018), which we empirically find to attenuate the signal magnitude by a factor of 0.5; this is corrected by simply multiplying the output by 2. This correction was determined using a similar procedure to that used to find $\gamma$ values for a given nonlinearity, as demonstrated in Appendix D. We validate it empirically by training NF-RegNet models with unmodified S+E blocks, which do not train stably, and with the additional correcting factor of 2, which do train stably.
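The corrected block can be sketched as below; the weight shapes and the bias-free bottleneck are illustrative simplifications. Intuitively, the sigmoid gate has an expected output near 0.5, so multiplying by 2 restores the signal magnitude:

```python
import numpy as np

def squeeze_excite(x, w1, w2, correction=2.0):
    """Squeeze-and-Excite (Hu et al., 2018) with the factor-of-2
    output correction described above. x is NHWC; w1, w2 are the
    bottleneck weights (a hypothetical minimal form, no biases)."""
    s = x.mean(axis=(1, 2))                       # squeeze: global average pool
    s = np.maximum(s @ w1, 0.0)                   # reduce + ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))           # expand + sigmoid gate in (0, 1)
    return correction * x * s[:, None, None, :]   # excite, rescaled by 2
```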
+
+# B.4 AVERAGE POOLING
+
+In line with best practices determined by He et al. (2019), in our NF-RegNet models we replace the strided $1 \times 1$ convolutions with average pooling followed by $1 \times 1$ convolutions, a common alternative also employed in Zagoruyko & Komodakis (2016). We found that average pooling with a kernel of size $k \times k$ tended to attenuate the signal by a factor of $k$ , but that it was not necessary to apply any correction due to this. While this will result in mis-estimation of $\beta$ values at initialization, it does not harm training (and average pooling in fact improved results over strided $1 \times 1$ convolutions in every case we tried), so we simply include this operation as-is.
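A sketch of this transition shortcut: a 2x2 average pool followed by a $1 \times 1$ convolution, expressed here as a matmul over the channel axis (names and the even-spatial-size assumption are illustrative):

```python
import numpy as np

def pooled_transition(x, proj):
    """Replace a strided 1x1 conv with 2x2 average pooling followed
    by a 1x1 convolution (He et al., 2019). x is NHWC with even
    H and W; proj has shape (C_in, C_out). Sketch only."""
    n, h, w, c = x.shape
    pooled = x.reshape(n, h // 2, 2, w // 2, 2, c).mean(axis=(2, 4))
    return pooled @ proj  # a 1x1 conv is a matmul over the channel axis
```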
+
+# APPENDIX C MODEL DETAILS
+
+We develop the NF-RegNet architecture starting with a RegNetY-400MF architecture (Radosavovic et al., 2020), a low-latency RegNet variant which uses Squeeze+Excite blocks (Hu et al., 2018) and grouped convolutions with a group width of 8. Following EfficientNets, we first add an additional expansion convolution after the final residual block, expanding to $1280w$ channels, where $w$ is a model width multiplier hyperparameter. We find this to be very important for performance: if the classifier layer does not have access to a large enough feature basis, it will tend to underfit (as measured by higher training losses) and underperform. We also experimented with adding an additional linear expansion layer after the global average pooling, but found this not to provide the same benefit.
+
+Next, we replace the strided $1 \times 1$ convolutions in transition layers with average pooling followed by $1 \times 1$ convolutions (following He et al. (2019)), which we also find to improve performance. We switch from ReLU activations to SiLU activations (Elfwing et al., 2018; Hendrycks & Gimpel, 2016; Ramachandran et al., 2018). We find that SiLU's benefits are only realized when used in conjunction with EMA (the model averaging we use, explained below), as in EfficientNets. The performance of the underlying weights does not seem to be affected by the choice of nonlinearity, so the improvement appears to come from SiLU being more amenable to averaging.
+
+We then tune the choice of width $w$ and bottleneck ratio $g$ by sweeping them on the 0.4B FLOP model. Contrary to Radosavovic et al. (2020) which found that inverted bottlenecks (Sandler et al., 2018) were not performant, we find that inverted bottlenecks strongly outperformed their compressive bottleneck counterparts, and select $w = 0.75$ and $g = 2.25$ . Following EfficientNets (Tan & Le, 2019), the very first residual block in a network uses $g = 1$ , a FLOP-reducing strategy that does not appear to harmfully impact performance.
+
+We also modify the S+E layers to be wider by making their hidden channel width a function of the block's expanded width, rather than the block's input width (which is smaller in an inverted bottleneck). This results in our models having higher parameter counts than their equivalent FLOP-target EfficientNets, but has minimal effect on FLOPS while improving performance. While both FLOPS and parameter count play a part in the latency of a deployed model (the quantity which is often most relevant in practice), neither is fully predictive of latency (Xiong et al., 2020). We choose to focus on the FLOPS target instead of parameter count, as one can typically obtain large improvements in accuracy at a given parameter count by, for example, increasing the resolution of the input image, which will dramatically increase the FLOPS.
+
+With our baseline model in hand, we apply the EfficientNet compound scaling (increasing width, depth, and input image resolution) to obtain a family of models at approximately the same FLOP targets as each EfficientNet variant. We directly use the EfficientNet width and depth multipliers for models B0 through B5, and tune the test image resolution to attain similar FLOP counts (although our models tend to have slightly higher FLOP budgets). Again contrary to Radosavovic et al. (2020), which scales models almost entirely by increasing width and group width, we find that the EfficientNet compound scaling works effectively as originally reported, particularly with respect to image size. Improvements might be made by applying further architecture search, such as tuning the $w$ and $g$ values for each variant separately, or by choosing the group width separately for each variant.
+
+Following Touvron et al. (2019), we train on images of slightly lower resolution than we test on, primarily to reduce the resource costs of training. We do not employ the fine-tuning procedure of Touvron et al. (2019). The exact train and test image sizes we use are visible in our model code in Appendix D.
+
+We train using SGD with Nesterov Momentum, using a batch size of 1024 for 360 epochs, chosen to be in line with EfficientNet's schedule of 360 epochs at batch size 4096. We employ a 5 epoch warmup to a learning rate of 0.4 (Goyal et al., 2017), and cosine annealing to 0 over the remaining epochs (Loshchilov & Hutter, 2017). As with EfficientNets, we also take an exponential moving average of the weights (Polyak, 1964), using a decay of 0.99999 with a warmup schedule such that at iteration $i$, the decay is $\mathrm{decay}_i = \min(\mathrm{decay}, \frac{1 + i}{10 + i})$. We choose a larger decay than the EfficientNets value of 0.9999, as EfficientNets also take an EMA of the running average statistics of the BatchNorm layers, resulting in a longer horizon for the averaged model.
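The warmed-up decay schedule, which ramps up early in training and then saturates at the base decay of 0.99999, can be sketched as follows (helper names are ours):

```python
def ema_decay(i, base_decay=0.99999):
    """Warmed-up EMA decay: ramps up as (1 + i) / (10 + i) at early
    iterations, then saturates at the base decay."""
    return min(base_decay, (1.0 + i) / (10.0 + i))

def ema_update(avg, params, i):
    """One exponential-moving-average step at iteration i."""
    d = ema_decay(i)
    return d * avg + (1.0 - d) * params
```

Early on the decay is small, so the average tracks the raw weights closely; once the schedule saturates, the average moves slowly, giving the long horizon discussed above.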
+
+As with EfficientNets, we find that some of our models attain their best performance before the end of training, but unlike EfficientNets we do not employ early stopping, instead simply reporting performance at the end of training. The source of this phenomenon is that as some models (particularly larger models) reach the end of their decay schedule, the rate of change of their weights slows, ultimately resulting in the averaged weights converging back towards the underlying (less performant) weights. Future work in this area might consider examining the interaction between averaging and learning rate schedules.
+
+Following EfficientNets, we also use stochastic depth (modified to remove the rescaling by the keep rate, so as to better preserve signal) with a drop rate that scales from 0 to 0.1 with depth (reduced from the EfficientNets value of 0.2). We swept this value and found the model to not be especially sensitive to it as long as it was not chosen beyond 0.25. We apply Dropout (Srivastava et al., 2014) to the final pooled activations, using the same Dropout rates as EfficientNets for each variant. We also use label smoothing (Szegedy et al., 2016) of 0.1, and weight decay of 5e-5.
+
+# APPENDIX D MODEL CODE
+
+We here provide reference code using Numpy (Harris et al., 2020) and JAX (Bradbury et al., 2018). Our full training code is publicly available at dpmd.ai/nfnets.
+
+# D.1 NUMERICAL APPROXIMATIONS OF NONLINEARITY-SPECIFIC GAINS
+
+It is often faster to determine the nonlinearity-specific constants $\gamma$ empirically, especially when the chosen activation functions are complex or difficult to integrate. One simple way to do this for, e.g., the SiLU function is to sample many (say, 1024) random $C$-dimensional vectors (of size, say, 256), compute the average channel variance of the activation output, and take its inverse square root as an estimate of the constant. Empirically estimating constants to ensure good signal propagation in networks at initialization has previously been proposed in Mishkin & Matas (2016) and Kingma & Dhariwal (2018).
+
+```python
+import jax
+import jax.numpy as jnp
+
+key = jax.random.PRNGKey(2)  # Arbitrary key
+# Produce a large batch of random noise vectors
+x = jax.random.normal(key, (1024, 256))
+y = jax.nn.silu(x)
+# Estimate gamma as the inverse square root of the average channel variance
+gamma = jnp.mean(jnp.var(y, axis=1)) ** -0.5
+```
+
+# APPENDIX E OVERVIEW OF EXISTING BLOCKS
+
+This appendix contains an overview of several different types of residual blocks.
+
+
+Figure 4: Residual Blocks for pre-activation ResNets (He et al., 2016a): (a) Pre-Activation ResNet Block; (b) Pre-Activation ResNet Transition Block. Note that some variants swap the order of the nonlinearity and the BatchNorm, resulting in signal propagation which is more similar to that of our normalizer-free networks.
+
+Figure 5: Residual Blocks for post-activation (original) ResNets (He et al., 2016b): (a) Post-Activation ResNet Block; (b) Post-Activation ResNet Transition Block.
+
+# APPENDIX F ADDITIONAL SPPS
+
+In this appendix, we include additional Signal Propagation Plots. For reference, given an $NHWC$ tensor, we compute the measured properties using the equivalent of the following Numpy (Harris et al., 2020) snippets:
+
+Average Channel Mean Squared: `np.mean(np.mean(y, axis=[0, 1, 2]) ** 2)`
+
+Average Channel Variance: `np.mean(np.var(y, axis=[0, 1, 2]))`
+
+Residual Average Channel Variance: `np.mean(np.var(f(x), axis=[0, 1, 2]))`
+
+
+Figure 6: Signal Propagation Plot for a ResNetV2-600 with ReLU and He initialization, without any normalization, on a semilog scale. The scales of all three properties grow exponentially with depth (linearly on the semilog scale) due to signal explosion.
+
+
+
+
+
+
+Figure 7: Signal Propagation Plot for 600-layer ResNetV2s with linear activations, comparing BatchNorm with normalizer-free scaling. Note that the max-pooling operation in the ResNet stem has been removed here so that the inputs to the first blocks are centered.
+
+
+
+
+
+
+Figure 8: Signal Propagation Plot for a Normalizer-Free ResNetV2-600 with ReLU and Scaled WS, using $\gamma = \sqrt{2}$ , the gain for ReLU from (He et al., 2015). As this gain (derived from $\sqrt{\frac{1}{\mathbb{E}[g(x)^2]}}$ ) is lower than the correct gain ( $\sqrt{\frac{1}{Var(g(x))}}$ ), signals attenuate progressively in the first stage, then are further downscaled at each transition which uses a $\beta$ value that assumes a higher incoming scale.
+
+
+
+
+
+
+Figure 9: Signal Propagation Plot for a Normalizer-Free ResNetV2-600 with ReLU, Scaled WS with correctly chosen gains, and unmodified Squeeze-and-Excite blocks. Similar to underestimating $\gamma$ values, unmodified S+E blocks will attenuate the signal.
+
+
+
+
+
+
+Figure 10: Signal Propagation Plots (a)-(c) for a ResNet-600 with FixUp. Due to the zero-initialized weights, FixUp networks have constant variance in a stage, and still demonstrate variance resets across stages.
+
+
+Figure 11: Signal Propagation Plots (a)-(c) for a ResNet-600 V1 with post-activation ordering. As this variant applies BatchNorm at the end of the residual block, the Residual Average Channel Variance is kept constant at 1 throughout the model. This ordering also applies BatchNorm to shortcut 1x1 convolutions at stage transitions, and thus also displays variance resets.
+
+# APPENDIX G NEGATIVE RESULTS
+
+# G.1 FORWARD MODE VS DECOUPLED WS
+
+Parameterization methods like Weight Standardization (Qiao et al., 2019), Weight Normalization (Salimans & Kingma, 2016), and Spectral Normalization (Miyato et al., 2018) are typically proposed as "forward mode" modifications applied to parameters during the forward pass of a network. This has two consequences: first, the gradients with respect to the underlying parameters are influenced by the parameterization; second, the weights which are optimized may differ substantially from the weights which are actually plugged into the network.
+
+One alternative approach is to implement "decoupled" variants of these parameterizers, by applying them as a projection step in the optimizer. For example, "Decoupled Weight Standardization" can be implemented atop any gradient based optimizer by replacing $W$ with the normalized $\hat{W}$ after the update step. Most papers proposing parameterizations (including the above) argue that the parameterization's gradient influence is helpful for learning, but this is typically argued with respect to simply ignoring the parameterization during the backward pass, rather than with respect to a strategy such as this.
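A sketch of one Decoupled WS update step atop plain SGD (illustrative only; the projection here reuses the Scaled WS statistics, assuming an HWIO kernel layout, and is not the exact experimental implementation):

```python
import numpy as np

def decoupled_ws_step(w, grad, lr, eps=1e-4):
    """'Decoupled' Weight Standardization: take an ordinary SGD step
    on the raw weight, then project it back onto the set of
    standardized weights, so the parameterization never enters the
    backward pass. Hypothetical sketch."""
    w = w - lr * grad  # plain gradient step on the raw parameter
    fan_in = np.prod(w.shape[:-1])
    mean = w.mean(axis=(0, 1, 2), keepdims=True)
    var = w.var(axis=(0, 1, 2), keepdims=True)
    return (w - mean) / np.sqrt(np.maximum(var * fan_in, eps))
```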
+
+Using a forward-mode parameterization may result in interesting interactions with moving averages or weight decay. For example, with WS, taking a moving average of the underlying weights and then applying the WS parameterization to the averaged weights produces different results than taking the EMA of the Weight-Standardized parameters. Weight decay exhibits a similar phenomenon: if one is weight-decaying a parameter which is actually a proxy for a weight-standardized parameter, how does this change the behavior of the regularization?
+
+We experimented with Decoupled WS and found that it reduced sensitivity to weight decay (presumably because of the strength of the projection step) and often improved the accuracy of the EMA weights early in training, but ultimately led to worse performance than using the originally proposed "forward-mode" formulation. We emphasize that our experiments in this regime were only cursory, and suggest that future work might seek to analyze these interactions in more depth.
+
+We also tried applying Scaled WS as a regularizer ("Soft WS") by penalizing the mean squared error between the parameter $W$ and its Scaled WS parameterization, $\hat{W}$ . We implemented this as a direct addition to the parameters following Loshchilov & Hutter (2019) rather than as a differentiated loss, with a scale hyperparameter controlling the strength of the regularization. We found that this scale could not be meaningfully decreased from its maximal value without drastic training instability, indicating that relaxing the WS constraint is better done through other means, such as the affine gains and biases we employ.
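
The "Soft WS" idea can similarly be sketched as a direct, decoupled parameter update (in the spirit of decoupled weight decay), pulling $W$ a fraction of the way toward its standardized projection $\hat{W}$. The gain of Scaled WS is omitted here, and all names are illustrative:

```python
import numpy as np

def weight_standardize(w, eps=1e-5):
    # Zero mean, unit variance per output channel (row)
    mean = w.mean(axis=1, keepdims=True)
    var = w.var(axis=1, keepdims=True)
    return (w - mean) / np.sqrt(var + eps)

def soft_ws_update(w, grad, lr=0.1, ws_scale=0.9):
    # Gradient step, then a decoupled pull toward the WS projection;
    # ws_scale controls the strength of the (soft) constraint
    w = w - lr * grad
    w_hat = weight_standardize(w)
    return w + ws_scale * (w_hat - w)

rng = np.random.default_rng(1)
w = rng.normal(loc=0.5, size=(4, 9))
# With ws_scale=1.0 (its maximal value) this reduces to the hard projection
w_new = soft_ws_update(w, np.zeros_like(w), ws_scale=1.0)
print(np.abs(w_new.mean(axis=1)).max())
```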
+
+# G.2 MISCELLANEOUS
+
+- For SPPs, we initially explored plotting activation mean (np.mean(h)) instead of the average squared channel mean, but found that this was less informative.
+- We also initially explored plotting the average pixel norm: the Frobenius norm of each pixel (reduced across the $C$ axis), averaged across the $NHW$ axes, np.mean(np.linalg.norm(h, axis=-1)). We found that this value did not add any information not already contained in the channel or residual variance measures, and was harder to interpret because it varies with the channel count.
+- We explored NF-ResNet variants which maintained constant signal variance, rather than mimicking Batch-Normalized ResNets with signal growth + resets. The first of two key components in this approach was the use of "rescaled sum junctions," where the sum junction in a residual block was rewritten to downscale the shortcut path as $y = \frac{x + \alpha f(x)}{\sqrt{1 + \alpha^2}}$ , which is approximately norm-preserving if $f(x)$ is orthogonal to $x$ (which we observed to generally hold in practice). The second, in place of Scaled WS, was employing SeLU (Klambauer et al., 2017) activations, which we found to work as advertised in encouraging centering and good scaling.
+
+While these networks could be made to train stably, we found tuning them to be difficult and were not able to easily recover the performance of BN-ResNets as we were with the approach ultimately presented in this paper.
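
A quick numeric check of the norm-preservation claim for the rescaled sum junction. The denominator is taken here as $\sqrt{1+\alpha^2}$, the scaling for which the output norm exactly matches the input norm when $f(x)$ is orthogonal to $x$ and has the same norm; this is an illustrative sketch, not the tuned variant from the experiments.

```python
import numpy as np

def rescaled_sum_junction(x, fx, alpha=0.2):
    # Norm-preserving residual junction (assuming fx orthogonal to x,
    # with matching norms): ||x + alpha*fx||^2 = (1 + alpha^2) * ||x||^2
    return (x + alpha * fx) / np.sqrt(1.0 + alpha ** 2)

x = np.array([1.0, 0.0, 0.0])
fx = np.array([0.0, 1.0, 0.0])  # orthogonal to x, same norm
y = rescaled_sum_junction(x, fx, alpha=0.5)
print(np.linalg.norm(x), np.linalg.norm(y))  # both ~1.0
```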
+
+# APPENDIX H EXPERIMENTS WITH ADDITIONAL TASKS
+
+# H.1 SEMANTIC SEGMENTATION ON PASCAL VOC
+
+| Backbone | NF-ResNets (ours) mIoU | BN-ResNets mIoU |
+| --- | --- | --- |
+| ResNet-50 | 74.4 | 75.4 |
+| ResNet-101 | 76.7 | 77.0 |
+| ResNet-152 | 77.6 | 77.9 |
+| ResNet-200 | 78.4 | 78.0 |
+
+We present additional results investigating the transferability of our normalizer-free models to downstream tasks, beginning with the Pascal VOC semantic segmentation task. We use the FCN architecture (Long et al., 2015) following He et al. (2020) and Grill et al. (2020). We take the ResNet backbones of each variant and modify the $3 \times 3$ convolutions in the final stage to have dilation 2 and stride 1, then add two extra $3 \times 3$ convolutions with dilation 6, and a final $1 \times 1$ convolution for classification. We train for 30000 steps at batch size 16 using SGD with Nesterov momentum of 0.9, a learning rate of 0.003 which is reduced by a factor of 10 at $70\%$ and $90\%$ of training, and weight decay of 5e-5. Training images are augmented with random scaling in the range [0.5, 2.0], random horizontal flips, and random crops. Results in mean Intersection over Union (mIoU) are reported in Table 6 on the val2012 set using a single 513-pixel center crop. We do not add any additional regularization such as stochastic depth or dropout. NF-ResNets obtain comparable performance to their BN-ResNet counterparts across all variants.
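
The step schedule described above (base learning rate 0.003, reduced by 10x at 70% and 90% of training) can be written as a small helper; the function name and signature are illustrative, not from the training code.

```python
def segmentation_lr(step, total_steps=30000, base_lr=0.003):
    # Reduce the learning rate by a factor of 10 at 70% and 90% of training
    lr = base_lr
    if step >= 0.7 * total_steps:
        lr *= 0.1
    if step >= 0.9 * total_steps:
        lr *= 0.1
    return lr

print(segmentation_lr(0), segmentation_lr(21000), segmentation_lr(29000))
```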
+
+Table 6: Results on Pascal VOC Semantic Segmentation.
+
+# H.2 DEPTH ESTIMATION ON NYU DEPTH v2
+
+| Model | pct. < 1.25 (higher better) | pct. < 1.25² (higher better) | pct. < 1.25³ (higher better) | rms (lower better) | rel (lower better) |
+| --- | --- | --- | --- | --- | --- |
+| NF-ResNet-50 | 81.9 | 95.9 | 98.9 | 0.572 | 0.141 |
+| BN-ResNet-50 | 81.7 | 95.7 | 98.8 | 0.579 | 0.141 |
+| NF-ResNet-101 | 82.7 | 96.4 | 99.0 | 0.564 | 0.136 |
+| BN-ResNet-101 | 83.4 | 96.2 | 98.9 | 0.563 | 0.132 |
+| NF-ResNet-152 | 83.2 | 96.4 | 99.1 | 0.558 | 0.135 |
+| BN-ResNet-152 | 81.6 | 96.0 | 98.9 | 0.579 | 0.140 |
+| NF-ResNet-200 | 84.0 | 96.7 | 99.2 | 0.552 | 0.130 |
+| BN-ResNet-200 | 84.6 | 96.6 | 99.1 | 0.548 | 0.125 |
+
+Table 7: Results on NYUv2 Depth Estimation.
+
+We next present results for depth estimation on the NYU v2 dataset (Silberman et al., 2012) using the protocol from Laina et al. (2016). We downsample the images and center crop them to [304, 228] pixels, then randomly flip and apply several color augmentations: grayscale with probability $30\%$ , brightness with a maximum difference of 0.1255, saturation with a random factor picked from [0.5, 1.5], and hue with an adjustment factor picked from [-0.2, 0.2]. We take the features from the final residual stage and feed them into the up-projection blocks from Laina et al. (2016), then train with a reverse Huber loss (Laina et al., 2016; Owen, 2007). We train for 7500 steps at batch size 256 using SGD with Nesterov momentum of 0.9, a learning rate of 0.16, and cosine annealing. We report results in Table 7 using five metrics commonly used for this task: the percentage of pixels where the magnitude of the relative error (taken as the ratio of the predicted depth and the ground truth, where the denominator is whichever of the two is smaller) is below a certain threshold, as well as root-mean-squared and relative error (rms and rel). As with semantic segmentation, we do not apply any additional regularization, and find that our normalizer-free ResNets attain comparable performance across all model variants.
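
The "pct. < threshold" metrics and the reverse Huber (berHu) loss can be sketched as follows; we assume the common cutoff $c = 0.2 \cdot \max|e|$ used by Laina et al. (2016), and all names are illustrative.

```python
import numpy as np

def threshold_accuracy(pred, gt, thresh=1.25):
    # Fraction of pixels whose depth ratio max(pred/gt, gt/pred) is below
    # the threshold (the pct. < 1.25^k metrics)
    ratio = np.maximum(pred / gt, gt / pred)
    return (ratio < thresh).mean()

def berhu_loss(pred, gt, c=None):
    # Reverse Huber: L1 for small errors, scaled L2 beyond the cutoff c
    err = np.abs(pred - gt)
    if c is None:
        c = 0.2 * err.max()  # assumed cutoff choice
    return np.where(err <= c, err, (err ** 2 + c ** 2) / (2 * c)).mean()

pred = np.array([1.0, 2.0, 3.0, 8.0])
gt = np.array([1.1, 2.0, 3.5, 4.0])
print(threshold_accuracy(pred, gt))  # 3 of 4 pixels within 1.25x -> 0.75
print(berhu_loss(pred, gt))
```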
\ No newline at end of file
diff --git a/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/images.zip b/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..678c438194dfdf908e0ebdc6786b18533785e2eb
--- /dev/null
+++ b/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35cf9b7b99937332691705ef209469969c3df93293ab18392d57b434e6d227cd
+size 708991
diff --git a/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/layout.json b/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f096b1d9941242d61a2e9cfe68aa6f4f04090d60
--- /dev/null
+++ b/characterizingsignalpropagationtoclosetheperformancegapinunnormalizedresnets/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c34dc4746c13cbdccfea23bd05eb0697203aeec7b4e37c23a87389d4cf1e6f6
+size 684336
diff --git a/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/dae7ecb0-46b0-470f-a1eb-bf7c62eee568_content_list.json b/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/dae7ecb0-46b0-470f-a1eb-bf7c62eee568_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6ef1e0f29cf50689678fd24ce2a23f664dc8b1bd
--- /dev/null
+++ b/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/dae7ecb0-46b0-470f-a1eb-bf7c62eee568_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68df39e546bd66064037a8f24934a5bb0d01a2c0e7963dc110a20fd9e0e945d2
+size 109707
diff --git a/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/dae7ecb0-46b0-470f-a1eb-bf7c62eee568_model.json b/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/dae7ecb0-46b0-470f-a1eb-bf7c62eee568_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ab93fea1be012bab37c4901283af6e86df083e05
--- /dev/null
+++ b/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/dae7ecb0-46b0-470f-a1eb-bf7c62eee568_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91e0187c4d3f0ee0d74a8d307e28c44685f162fef0bbe3327ed56751b71574eb
+size 129322
diff --git a/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/dae7ecb0-46b0-470f-a1eb-bf7c62eee568_origin.pdf b/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/dae7ecb0-46b0-470f-a1eb-bf7c62eee568_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..83a89cd3a825f04783a20f769cde22c652e28c7e
--- /dev/null
+++ b/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/dae7ecb0-46b0-470f-a1eb-bf7c62eee568_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:79687e846579eb626780d715e9b1c61cd9728606d854b424880a183d57fe50c5
+size 430073
diff --git a/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/full.md b/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..07d125aa3f341cd3db4fbfff0dd5afeefeb477a1
--- /dev/null
+++ b/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/full.md
@@ -0,0 +1,444 @@
+# CHIPNET: BUDGET-AWARE PRUNING WITH HEAVISIDE CONTINUOUS APPROXIMATIONS
+
+Rishabh Tiwari†‡*, Udbhav Bamba†‡*, Arnav Chavan†‡* and Deepak K. Gupta†‡*
+
+†Transmute AI Research, The Netherlands
+$^{\ddagger}$ Indian Institute of Technology, ISM Dhanbad, India
+\*Informatics Institute, University of Amsterdam, The Netherlands
+
+# ABSTRACT
+
+Structured pruning methods are among the effective strategies for extracting small resource-efficient convolutional neural networks from their dense counterparts with minimal loss in accuracy. However, most existing methods still suffer from one or more limitations, that include 1) the need for training the dense model from scratch with pruning-related parameters embedded in the architecture, 2) requiring model-specific hyperparameter settings, 3) inability to include budget-related constraint in the training process, and 4) instability under scenarios of extreme pruning. In this paper, we present ChipNet, a deterministic pruning strategy that employs continuous Heaviside function and a novel crispness loss to identify a highly sparse network out of an existing dense network. Our choice of continuous Heaviside function is inspired by the field of design optimization, where the material distribution task is posed as a continuous optimization problem, but only discrete values (0 or 1) are practically feasible and expected as final outcomes. Our approach's flexible design facilitates its use with different choices of budget constraints while maintaining stability for very low target budgets. Experimental results show that ChipNet outperforms state-of-the-art structured pruning methods by remarkable margins of up to $16.1\%$ in terms of accuracy. Further, we show that the masks obtained with ChipNet are transferable across datasets. For certain cases, it was observed that masks transferred from a model trained on feature-rich teacher dataset provide better performance on the student dataset than those obtained by directly pruning on the student data itself.
+
+# 1 INTRODUCTION
+
+Convolutional Neural Networks (CNNs) have resulted in several breakthroughs across various disciplines of deep learning, especially for their effectiveness in extracting complex features. However, these models demand significantly high computational power, making them hard to use on low-memory hardware platforms that require high inference speed. Moreover, most existing deep networks are heavily over-parameterized, resulting in a high memory footprint (Denil et al., 2013; Frankle & Carbin, 2018). Several strategies have been proposed to tackle this issue, including network pruning (Liu et al., 2018), neural architecture search using methods such as reinforcement learning (Jaafra et al., 2019), and vector quantization (Gong et al., 2014), among others.
+
+Among the methods outlined above, network pruning has proved to be very effective in designing small resource-efficient architectures that perform on par with their dense counterparts. Network pruning refers to the removal of unnecessary weights or filters from a given architecture without compromising its accuracy. It can broadly be classified into two categories: unstructured pruning and structured pruning. Unstructured pruning involves removal of neurons or the corresponding connection weights from the network to make it sparse. While this strategy reduces the number of parameters in the model, the computational requirements remain the same (Li et al., 2017). Structured pruning methods, on the other hand, remove entire channels from the network. This strategy preserves the regular structure, thereby taking advantage of the high degree of parallelism provided by modern hardware (Liu et al., 2017; Gordon et al., 2018).
+
+Several structured pruning approaches have been proposed in the recent literature. A general consensus is that variational approaches using a sparsity prior loss and learnable dropout parameters outperform the deterministic methods (Lemaire et al., 2019). Some of these methods learn sparsity as a part of pretraining, and have proved to perform better than three-stage pretrain-prune-finetune methods. However, since such approaches need to train the model from scratch with pruning-related variables embedded into the network, they cannot benefit from off-the-shelf pretrained weights (Liu et al., 2017; Alvarez & Salzmann, 2017). Others require choosing hyperparameters based on the choice of the network, and cannot be easily adapted to new models (Gordon et al., 2018). Further, with most of these methods, controlled pruning cannot be performed, and a resource-usage constraint can only be satisfied through a trial-and-error approach. Recently, Lemaire et al. (2019) presented a budget-aware pruning method that includes the budget constraint as a part of the training process. A major drawback of this approach and other recent methods is that they are unstable for very low resource budgets, and require additional tricks to work. Overall, a robust budget-aware pruning approach that can be coupled with different budget constraints and maintains stability for very low target budgets is still missing in the existing literature.
+
+In this paper, we present ChipNet, a deterministic strategy for structured pruning that employs continuous Heaviside function and crispness loss to identify a highly sparse network out of an existing pretrained dense network. The abbreviation 'ChipNet' stands for Continuous Heaviside Pruning of Networks. Our pruning strategy draws inspiration from the field of design optimization, where the material distribution task is posed as a continuous optimization problem, but only discrete values (0 or 1) are practically feasible. Thus, only such values are produced as final outcomes through continuous Heaviside projections. We use a similar strategy to obtain the masks in our sparsity learning approach. The flexible design of ChipNet facilitates its use with different choices of budget constraints, such as restriction on the maximum number of parameters, FLOPs, channels or the volume of activations in the network. Through experiments, we show that ChipNet consistently outperforms state-of-the-art pruning methods for different choices of budget constraints.
+
+ChipNet is stable even for very low resource budgets, and we demonstrate this through experiments where the network is pruned to as low as $1\%$ of its parameters. We show that for such extreme cases, ChipNet outperforms the respective baselines by remarkable margins, with an accuracy difference of just over $16\%$ observed in one of the experiments. The masks learnt by ChipNet are transferable across datasets. We show that for certain cases, masks transferred from a model trained on a feature-rich teacher dataset provide better performance on the student dataset than those obtained by pruning directly on the student data itself.
+
+# 2 RELATED WORK
+
+As has been stated in the hypothesis by Frankle & Carbin (2018), most neural networks are overparameterized, with a large portion (as much as $90\%$ ) of the weights being of little significance to the output of the model. Clearly, there exists enormous scope to reduce the size of these networks. Several works have explored the efficiency of network pruning strategies for reducing the storage requirements of these networks and accelerating inference speed (LeCun et al., 1990; Dong et al., 2017). Some early works by Han et al. (2015a;c); Zhu & Gupta (2017) involve removal of individual neurons from a network to make it sparse. This reduces the storage requirements of these networks; however, no improvement in inference speed is observed. Recently, several works have focused on structured network pruning, as it involves pruning entire channels/filters or even layers to maintain the regular structure (Luo et al., 2017; Li et al., 2017; Alvarez & Salzmann, 2016).
+
+The focus of this paper is on structured network pruning; thus, we briefly discuss here the recent works related to this approach. The recent work by Li et al. (2017) identifies less important channels based on the L1-norm. Luo et al. (2017); He et al. (2017) perform channel selection based on their influence on the activation values of the next layer. Liu et al. (2017) perform channel-level pruning by imposing LASSO regularization on the scaling terms in the batchnorm layers, and prune the model based on a global threshold. He et al. (2018b) automatically learn the compression ratio of each layer with reinforcement learning. Louizos et al. (2017); Alvarez & Salzmann (2017; 2016) train and prune the network in a single-stage strategy.
+
+
+Figure 1: Representation of the different functions used in ChipNet for various choices of $\beta$ and $\gamma$ . The plots show (a) logistic curves, (b) continuous Heaviside functions, (c) the outputs of logistic and Heaviside functions shown together for $\beta = 2.0$ and $\gamma = 4.0$ , and (d) the crispness loss function.
+
+
+
+
+
+
+
+The above-mentioned approaches cannot optimize networks for a pre-defined budget constraint. Adding a budget constraint to the pruning process provides direct control over the size of the pruned network. For example, MorphNet imposes this budget by iteratively shrinking and expanding a network through a sparsifying regularizer and a uniform layer-wise width multiplier, respectively, and is adaptable to specific resource constraints (Gordon et al., 2018). However, it requires a model-specific hyperparameter grid search for choosing the regularization factor. Another approach is BAR (Lemaire et al., 2019), which uses a budget-constrained pruning approach based on a variational method. A limitation of this approach is that for low resource budgets, it needs to explicitly ensure that at least one channel remains active in the downsample layer to avoid fatal pruning. The approach proposed in this paper does not require any such tweaking, and is stable even for very low resource budgets.
+
+# 3 PROPOSED APPROACH
+
+# 3.1 LEARNING SPARSITY MASKS
+
+Sparsity learning forms the core of our approach. It refers to learning a set of sparsity masks for a dense convolutional neural network (parent). When designing the smaller pruned network (child), these masks identify the parts of the parent that are to be included in the child network. We first describe here the general idea of learning these masks in the context of our method.
+
+The proposed approach falls in the category of structured pruning where masks are designed for channels and not individual neurons. Let $f: \mathbb{R}^d \to \mathbb{R}^k$ denote a convolutional neural network with weights $\mathbf{W} \in \mathbb{R}^m$ and a set of hidden channels $\mathbf{H} \in \mathbb{R}^p$ . We define $\mathbf{z} \in \mathbb{R}^p$ as a set of sparsity masks, where $z_i \in \mathbf{z}$ refers to the mask associated with the feature map $\mathbf{h}_i \in \mathbf{H}$ . To apply the mask, $z_i$ is multiplied with all the entries of $\mathbf{h}_i$ . The optimization problem can further be stated as
+
+$$
+\min_{\mathbf{W}, \mathbf{z}} \mathcal{L}(f(\mathbf{z} \odot \mathbf{H}(\mathbf{W}); \mathbf{x}), \mathbf{y}) \quad \text{s.t.} \quad \mathcal{V}(\mathbf{z}) = \mathcal{V}_0, \tag{1}
+$$
+
+where $\odot$ denotes elementwise multiplication and $\{\mathbf{x},\mathbf{y}\} \in \mathcal{D}$ are data samples used to train the network $f$ . The desired sparsity of the network is defined in terms of the equality constraint, where $\mathcal{V}(\cdot)$ denotes the budget function and $\mathcal{V}_0$ is the maximum permissible budget. Our proposed formulation of pruning is independent from the choice of the budget function. We later show this through experiments with volume budget as in Lemaire et al. (2019), channel budget similar to Liu et al. (2017), and budget defined in terms of parameters and FLOPs as well.
+
+Originally, $z_{i} \in \mathbf{z}$ would be defined such that $z_{i} \in \{0,1\}$ , and a discrete optimization problem is to be solved. For the sake of using gradient-based methods, we convert it to a continuous optimization problem, such that $z_{i} \in [0,1]$ . Such reformulation would lead to intermediate values of $z$ occurring in the final optimized solution. Any intermediate value of $z$ , for example $z = 0.4$ , would imply that a fraction of the respective channel is to be used, and clearly such a solution is practically infeasible. We propose to overcome this challenge through the use of simple nonlinear projections and a novel loss term, and these are discussed in detail in the next section.
+
+# 3.2 CONTINUOUS HEAVISIDE APPROXIMATION AND LOGISTIC CURVES
+
+At the backbone of our pruning strategy lie three important functions: the commonly used logistic curve, the continuous Heaviside function, and the crispness loss term. Figure 1 presents a graphical representation of these functions. Below, we provide a brief motivation for the choice of these functions as well as their significance in our pruning approach.
+
+Logistic curves. A commonly used function for adding nonlinearity to a neural network (LeCun et al., 1998), the logistic curve projects an input from the real space to the range 0-1 (Figure 1a), and can be mathematically stated as
+
+$$
+\tilde {z} = \frac {1}{1 + e ^ {- \beta (\psi - \psi_ {0})}}, \tag {2}
+$$
+
+where $\psi$ denotes the optimization parameter corresponding to the mask $z$ , $\psi_0$ is the midpoint of the curve, and $\tilde{z}$ denotes the resultant intermediate projection. The additional parameter $\beta$ is used to control the growth rate of the curve, and forms an important ingredient of our approach. While low values of $\beta$ can produce an approximately linear curve between -1 and 1, higher values turn it into a step function. During the initial stages of training, we propose to keep $\beta$ very low, and increase it to higher values at later stages of the optimization process. With increased values of $\beta$ , the values further from 0.5 are made more favorable for $\tilde{z}$ .
+
+In our experience, the logistic curve alone cannot be used to obtain approximately discrete (0-1) solutions for $\mathbf{z}$ in a continuous optimization scheme. The nonlinearity introduced by this function cannot sufficiently penalize the intermediate values between 0 and 1, and optimization algorithm can easily identify values of $\psi$ for which the projected values are far from both. An example experiment demonstrating this issue is presented in Appendix C.2. To circumvent this issue, we add another nonlinear projection using a continuous approximation of the Heaviside function.
+
+Continuous Heaviside function. A continuous approximation to the Heaviside step function, referred to as the continuous Heaviside function in this paper, is a commonly used projection strategy for solving continuous relaxations of binary (0-1) optimization problems in the domain of design optimization (Guest et al., 2004; 2011). The generalized form of this function can be stated as:
+
+$$
+z = 1 - e ^ {- \gamma \tilde {z}} + \tilde {z} e ^ {- \gamma}, \tag {3}
+$$
+
+where the parameter $\gamma$ dictates the curvature of the projection. Figure 1b shows the continuous Heaviside function for several values of $\gamma$ . We see that $z$ is linear in $\tilde{z}$ for $\gamma = 0$ and approaches the Heaviside step function for very large values of $\gamma$ .
+
+The advantages of our projection function are twofold. First, during projection, the values close to 0 and 1 are not affected irrespective of the choice of $\gamma$ . This implies that the masks identified with most confidence in the early stage of training are not directly impacted by the continuation applied on the value of $\gamma$ , thus helping in the convergence of the training process. Here, 'continuation' refers to slowly adapting the value of $\gamma$ during the course of training. Second, even the values of $\tilde{z}$ which are slightly greater than 0 are also nonlinearly projected to close to 1, and this effect is more prominent for larger values of $\gamma$ . The projection adds higher penalty on values between 0 and 1, and makes them extremely unfavorable when higher values of $\gamma$ are chosen.
+
+While the continuous Heaviside function helps to obtain approximately discrete masks, there is still no explicit constraint or penalty function that can regulate this. To overcome this problem, we tie the outputs of the logistic and continuous Heaviside functions together to define a novel loss term, referred to as the crispness loss.
+
+Crispness Loss. This novel loss term explicitly penalizes the model performance for intermediate values of $\mathbf{z}$ , and drives the convergence towards crisp (0-1) masks. It is defined as the squared $L_{2}$ norm of the difference between $\tilde{\mathbf{z}}$ and $\mathbf{z}$ , stated as $\mathcal{L}_{\mathrm{c}} = \| \tilde{\mathbf{z}} - \mathbf{z} \|_2^2$ , and from Figure 1c, we see that $\mathcal{L}_{c}$ achieves its minima when either $\tilde{z} = z = 0$ or $\tilde{z} = z = 1$ . Further, the trend of this loss function with respect to $\psi$ for different values of $\beta$ and $\gamma$ is shown in Figure 1d. It can be seen that for lower values of $\beta$ and $\gamma$ , the loss value is low, and the crispness function plays little to no role in driving the pruning process. When the value of $\gamma$ slowly increases, the peak of the graph shifts upwards as well as towards the left, thereby increasing the penalty associated with values of $\psi$ . This drives the values of $\psi$ to move farther from the origin. The left shift in the graph adds higher penalty on the negative values, forcing them to become even more negative, thus forcing the respective $z$ to move closer to 0.
+
+The additional loss function associated with the model generally favors values towards 1. For example, cross-entropy loss used for classification would prefer to set all values in $\mathbf{z}$ to 1 to be able to maximize the classification accuracy. With increasing values of $\gamma$ forcing the masks towards 0, a balance between the two is identified during the training process. The term $\beta$ acts as a regularizer that to some extent counteracts the too abrupt impact of $\gamma$ and regulates the convergence of the training process.
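
The pieces of this section can be sketched directly from Eqs. 2 and 3. This is an illustrative NumPy sketch (names and example values are not from the released code):

```python
import numpy as np

def logistic(psi, beta=2.0, psi0=0.0):
    # Eq. 2: intermediate projection z-tilde in (0, 1)
    return 1.0 / (1.0 + np.exp(-beta * (psi - psi0)))

def continuous_heaviside(z_tilde, gamma=4.0):
    # Eq. 3: linear for gamma=0, approaches a step function for large gamma
    return 1.0 - np.exp(-gamma * z_tilde) + z_tilde * np.exp(-gamma)

def crispness_loss(z_tilde, z):
    # Squared L2 norm of (z-tilde - z); zero only for crisp 0/1 masks
    return np.sum((z_tilde - z) ** 2)

psi = np.array([-3.0, -0.1, 0.1, 3.0])
zt = logistic(psi, beta=2.0)
z = continuous_heaviside(zt, gamma=4.0)
# 0 and 1 are fixed points of the Heaviside projection for any gamma
print(continuous_heaviside(0.0), continuous_heaviside(1.0))
print(crispness_loss(zt, z))  # nonzero: intermediate masks are penalized
```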
+
+# 3.3 IMPOSING BUDGET CONSTRAINT
+
+The simplicity of our pruning strategy decouples it from the choice of budget constraint. In this paper, we show the working with four different choices of budget constraints: channel, activation volume, parameters and FLOPs. These choices are inspired from some of the recent state-of-the-art methods from the existing literature (Liu et al., 2017; Lemaire et al., 2019).
+
+For budget calculation, values of the masks $\mathbf{z}$ should be close to 0 or 1. However, during the initial iterations of training, masks would contain intermediate values as well. This makes it difficult to accurately calculate the budget for the constraint specified in Eq. 1. Thus, rather than computing it directly over the masks $\mathbf{z}$ , these are computed on $\bar{\mathbf{z}}$ , where $\bar{z}_i \in \bar{\mathbf{z}}$ is obtained by applying a logistic projection on $\mathbf{z}$ with $\psi_0 = 0.5$ (Eq. 2). Further discussion related to it is provided in Appendix C.3.
+
+The budget constraint is imposed using a loss term $\mathcal{L}_b$ , referred to as the budget loss. We define the budget loss as $\mathcal{L}_b = (\mathcal{V}(\mathbf{z}) - \mathcal{V}_0)^2$ , where $\mathcal{V}(\cdot)$ can be one of the four budget functions described below.
+
+Channel budget. It refers to the maximum number of hidden channels $\mathbf{h}$ that can be used across all convolutional layers of the network. Mathematically, it can be stated as $\mathcal{V}^{(c)} = (\sum_{i=1}^{p} \bar{z}_i) / p$ , where $p$ denotes the number of hidden channels in the network. Constraint on the channel budget limits the number of channels, and thus the number of weights in the network.
+
+Volume budget. This budget controls the size of the activations, thereby imposing an upper limit on the memory requirement for the inference step. We define volume budget $\mathcal{V}^{(v)} = (\sum_{j=1}^{\mathcal{N}(h)} \sum_{i=1}^{p_j} A_j \bar{z}_i) / (\sum_{j=1}^{\mathcal{N}(h)} A_j \cdot p_j)$ , where $\mathcal{N}(h)$ denotes the number of convolutional layers, and $A_j$ and $p_j$ denote area of the feature maps and their count, respectively, in the $j^{\text{th}}$ layer.
+
+Parameter budget. This budget directly controls the total number of parameters in the network, and can thus be used to impose an upper limit on the size of the model. For details, see Appendix A.1.
+
+FLOPs budget. This budget can be directly used to control the computational requirement of the model. Mathematical formulae to calculate it is stated in Appendix A.1.
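
The channel and volume budgets above can be sketched as follows. As described earlier, budgets are computed on the softened masks $\bar{\mathbf{z}}$ (a logistic applied to $\mathbf{z}$ with midpoint 0.5); the layer sizes and names below are illustrative.

```python
import numpy as np

def soft_mask(z, beta=50.0):
    # z-bar: logistic applied to the masks with midpoint 0.5 (cf. Eq. 2)
    return 1.0 / (1.0 + np.exp(-beta * (z - 0.5)))

def channel_budget(z_bar_per_layer):
    # Fraction of hidden channels kept across the whole network
    return np.concatenate(z_bar_per_layer).mean()

def volume_budget(z_bar_per_layer, areas):
    # Fraction of total activation volume kept; areas[j] = H_j * W_j
    kept = sum(a * zb.sum() for a, zb in zip(areas, z_bar_per_layer))
    total = sum(a * zb.size for a, zb in zip(areas, z_bar_per_layer))
    return kept / total

masks = [np.array([0.9, 0.8, 0.1, 0.2]), np.array([0.95, 0.05])]
areas = [32 * 32, 16 * 16]          # two layers of a toy network
z_bar = [soft_mask(m) for m in masks]
print(channel_budget(z_bar))        # ~0.5: 3 of 6 channels survive
print(volume_budget(z_bar, areas))  # ~0.5 for these layer sizes
```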
+
+# 3.4 SOFT AND HARD PRUNING
+
+The pruning stage in our approach comprises two steps: soft pruning and hard pruning. After a deep dense network has been pretrained, masks are added to the network and soft pruning is performed. The steps involved in soft pruning are stated in Algorithm 1. During this stage, the network is optimized with the joint loss $\mathcal{L} = \mathcal{L}_{ce} + \alpha_{1}\mathcal{L}_{c} + \alpha_{2}\mathcal{L}_{b}$ , where $\alpha_{1}$ and $\alpha_{2}$ are the weights for the crispness and budget loss terms, respectively.
+
+After every epoch of soft pruning, the performance of the network is evaluated in a hard-pruned manner. For this purpose, the masks $\mathbf{z}$ are used, and a cutoff is chosen using binary search such that the budget constraint is exactly satisfied. Values above this cutoff are set to 1 and those below to 0. Finally, the model with the best performance on the validation set is chosen for fine-tuning.
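
The hard-pruning step can be sketched as a binary search for a cutoff such that the fraction of surviving channels matches a channel budget; ties and exact feasibility are glossed over, and all names are illustrative.

```python
import numpy as np

def hard_prune(z, budget, iters=50):
    # Binary-search a cutoff t so that the fraction of masks kept (z > t)
    # matches the channel budget; hi always satisfies the budget
    lo, hi = z.min(), z.max()
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        if (z > t).mean() > budget:
            lo = t  # too many channels kept: raise the cutoff
        else:
            hi = t
    return (z > hi).astype(np.float64)

z = np.array([0.05, 0.2, 0.4, 0.6, 0.8, 0.95])
pruned = hard_prune(z, budget=0.5)
print(pruned)  # the three largest masks survive
```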
+
+# Algorithm 1: ChipNet Pruning Approach
+
+Input: pretrained network weights $\mathbf{W}$; budget constraint function $\mathcal{V}(\cdot)$; budget value $\mathcal{V}_0$; training data $\mathcal{D}$; pruning iterations $N$
+
+Output: learnt sparsity masks $\mathbf{z}$
+
+$\psi_i \in \Psi \gets$ random initialization
+
+for $k = 1 \dots N$ do
+
+1. $(\mathbf{x}, \mathbf{y}) \gets \mathrm{SAMPLE}(\mathcal{D})$
+2. $\tilde{\mathbf{z}} \gets \mathrm{LOGISTIC}(\psi)$
+3. $\mathbf{z} \gets \mathrm{CONTINUOUSHEAVISIDE}(\tilde{\mathbf{z}})$
+4. $\hat{\mathbf{y}} \gets \mathrm{FORWARD}(\mathbf{x}, \mathbf{W}, \mathbf{z})$
+5. $\mathcal{V} \gets \mathcal{V}(\mathbf{z})$
+6. $\mathcal{L} \gets \mathrm{CHIPNETLOSS}(\mathcal{V}, \mathcal{V}_0, \tilde{\mathbf{z}}, \mathbf{z}, \hat{\mathbf{y}}, \mathbf{y})$
+7. $(\nabla \mathbf{W}, \nabla \psi) \gets \mathrm{BACKWARD}(\mathcal{L})$
+8. $(\mathbf{W}, \psi) \gets \mathrm{OPTIMIZESTEP}(\nabla \mathbf{W}, \nabla \psi)$
+
+end
+
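For illustration, the two mask transformations in Algorithm 1 can be sketched in numpy. LOGISTIC follows Algorithm 2 in Appendix E; the continuous Heaviside form below (a Guest-style projection $1 - e^{-\gamma \tilde{z}} + \tilde{z}e^{-\gamma}$) is an assumption standing in for the definition in section 3.2, which is not reproduced here.

```python
import numpy as np

def logistic(psi, beta):
    # Algorithm 2 (Appendix E): z_tilde = 1 / (1 + exp(-beta * psi))
    return 1.0 / (1.0 + np.exp(-beta * psi))

def continuous_heaviside(z_tilde, gamma):
    # Smooth step mapping 0 -> 0 and 1 -> 1, growing steeper with gamma.
    # This particular functional form is assumed, not quoted from the paper.
    return 1.0 - np.exp(-gamma * z_tilde) + z_tilde * np.exp(-gamma)

psi = np.array([-2.0, 0.0, 2.0])              # mask parameters (illustrative)
z_tilde = logistic(psi, beta=1.0)             # intermediate projection
z = continuous_heaviside(z_tilde, gamma=2.0)  # near-binary soft masks
```

As $\beta$ and $\gamma$ are annealed upward during pruning (see Appendix B.2), both maps sharpen, pushing the soft masks toward crisp 0/1 values.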
+
+Figure 2: Performance comparison of ChipNet with different structured pruning baselines for various choices of volume constraint. Here, volume pruning factor refers to the factor by which the volume budget is being reduced.
+
+
+
+
+
+# 4 EXPERIMENTS
+
+# 4.1 EXPERIMENTAL SETUP
+
+We test the efficacy of our pruning strategy on several network architectures for four different choices of budget constraint functions. The architectures chosen in this study include WideResNet-26-12 (WRN-26-12) (Zagoruyko & Komodakis, 2016), PreResNet-164 (He et al., 2016b), ResNet-50 and ResNet-101 (He et al., 2016a). For datasets, we have chosen CIFAR-10/100 (Krizhevsky, 2009) and Tiny ImageNet (Wu et al.). For the combined loss $\mathcal{L}$ in Eq. 1, the weights $\alpha_{1}$ and $\alpha_{2}$ are set to 10 and 30, respectively, across all experiments. Implementation details related to the pretraining, pruning and finetuning steps, as well as details of the hardware, are described in Appendix B.
+
+# 4.2 RESULTS
+
+Performance of pruned sparse networks. We present here the results obtained using ChipNet for WRN-26-12 and PreResNet-164, pruned with volume and channel constraints, respectively. For the other two constraints, the parameter and FLOPs budgets, we perform a comparative study later in this paper.
+
+Volume budget. Figure 2 compares the performance of WRN-26-12 on the CIFAR-10, CIFAR-100 and Tiny ImageNet datasets when pruned using ChipNet. We compare our results with BAR (Lemaire et al., 2019), MorphNet (Gordon et al., 2018) and LZR (Louizos et al., 2017) for volume pruning factors of 2, 4, 8 and 16. Details related to the three baselines are presented in Appendix D.1. ChipNet consistently outperforms all the baselines across all datasets and for all choices of the budget. For the extreme case of 16-fold pruning on CIFAR-100, the performance of BAR is close to ours, while the other two baselines underperform significantly.
+
+Channel budget. We study here the pruning efficacy of ChipNet coupled with the channel constraint on the PreResNet-164 architecture for the CIFAR-10 and CIFAR-100 datasets. Results are compared with the network slimming approach (Liu et al., 2017); related implementation details can be found in Appendix D.1. As constraints, we use channel budgets of $60\%$, $40\%$, $20\%$ and $10\%$.
+
+Table 1 presents the results for different choices of channel budgets. We also report the number of parameters in the pruned network as well as the associated FLOPs. It is seen that ChipNet outperforms the baseline method in all the experimental settings. For CIFAR-10 in particular, even at the very low channel budget of $10\%$, the accuracy of the pruned network drops by only $3.1\%$. For the $10\%$ channel budget, our method outperforms the network slimming strategy on CIFAR-10 and CIFAR-100 by notable margins of $4.7\%$ and $16.1\%$, respectively.
+
+Note that lower channel usage does not necessarily imply a lower number of parameters or reduced FLOPs in the pruned network, and we analyze this for the various pruning cases considered in Table 1. We see that ChipNet selects channels in a more optimized way, such that better accuracy is achieved with fewer parameters. In terms of FLOPs, both methods perform on par. Although the FLOPs for ChipNet are slightly higher at the $10\%$ channel budget, this overhead is insignificant compared to the gain in accuracy and the reduction in parameters. Overall, we infer that ChipNet couples well with the channel constraint and remains stable even for extreme pruning cases with as little as a $10\%$ channel budget.
+
+Table 1: Performance scores for pruning PreResNet-164 architecture on CIFAR-10 and CIFAR-100 datasets for Network Slimming and ChipNet (ours). The number of parameters and FLOPs for the unpruned networks are 1.72 million and $5.03 \times 10^{8}$ , respectively. Here budget refers to the percentage of total channels remaining. Abbreviations 'Acc.' and 'Params.' refer to accuracy and number of parameters, all scores are reported in $\%$ , and parameters and FLOPs are reported relative to those in the unpruned network.
+
+| Method | Budget (%) | CIFAR-10 Acc. ↑ | CIFAR-10 Params. ↓ | CIFAR-10 FLOPs ↓ | CIFAR-100 Acc. ↑ | CIFAR-100 Params. ↓ | CIFAR-100 FLOPs ↓ |
| Unpruned | - | 94.9 | 100.0 | 100.0 | 77.1 | 100.0 | 100.0 |
| Net-Slim | 60 | 95.3 | 85.1 | 79.0 | 77.5 | 85.9 | 75.1 |
| ChipNet | 60 | 95.3 | 79.3 | 77.9 | 77.8 | 85.0 | 75.2 |
| Net-Slim | 40 | 94.9 | 65.4 | 58.9 | 76.6 | 71.9 | 55.4 |
| ChipNet | 40 | 95.0 | 51.7 | 54.7 | 77.3 | 65.8 | 53.1 |
| Net-Slim | 20 | 93.0 | 33.3 | 29.9 | 70.1 | 44.7 | 25.0 |
| ChipNet | 20 | 94.2 | 24.0 | 28.4 | 72.3 | 31.8 | 23.9 |
| Net-Slim | 10 | 87.1 | 19.0 | 15.3 | 51.2 | 19.2 | 11.1 |
| ChipNet | 10 | 91.8 | 13.8 | 16.4 | 67.3 | 14.6 | 12.6 |
+
+Table 2: Performance scores for pruning ResNet-50 architecture on CIFAR-100 and CIFAR-10 for BAR and ChipNet (ours) with volume budget (V) and channel budget (C). The number of parameters and FLOPS for the unpruned networks are 23.7 million and $2.45 \times 10^{9}$ , respectively. Here budget refers to the percentage of total channels/volume remaining. Abbreviations 'Acc.' and 'Param.' refer to accuracy and number of parameters, all scores are reported in $\%$ , and parameters and FLOPs are reported relative to those in the unpruned network.
+
+| Method | Budget (%) | CIFAR-10 Acc. ↑ | CIFAR-10 Param. ↓ | CIFAR-10 FLOPs ↓ | CIFAR-100 Acc. ↑ | CIFAR-100 Param. ↓ | CIFAR-100 FLOPs ↓ |
| Unpruned | - | 93.3 | 100 | 100 | 73.0 | 100 | 100 |
| ChipNet (C) | 12.5 | 92.8 | 4.5 | 17.7 | 71.1 | 7.3 | 10.9 |
| ChipNet (V) | 12.5 | 91.0 | 2.8 | 5.1 | 65.5 | 22.5 | 9.0 |
| BAR (V) | 12.5 | 88.4 | 1.8 | 3.8 | 63.8 | 5.2 | 4.2 |
| ChipNet (C) | 6.25 | 92.1 | 1.6 | 8.8 | 67.0 | 1.8 | 4.8 |
| ChipNet (V) | 6.25 | 83.6 | 1.3 | 2.0 | 54.7 | 14.5 | 5.1 |
| BAR (V) | 6.25 | 84.0 | 0.9 | 1.3 | 42.9 | 3.7 | 2.0 |
+
+Effect of the choice of budget. Here, we analyze the impact of one budget type over another to understand whether the choice of budget really matters when pruning a network. As a first experiment, we study side by side the results for channel and volume constraints when used to prune ResNet-50 on the CIFAR-10 and CIFAR-100 datasets. Results of this experiment are shown in Table 2. Note that we do not intend to identify a winner among the two, since they are meant to optimize different aspects of the network. For baseline comparison, the network is also pruned using the BAR method. The volume budget variant of ChipNet outperforms BAR by a significant margin. Moreover, for the same volume constraint, the number of parameters used by BAR is lower than for our method in most cases. A reason for the significant drop in performance of BAR could be that its optimization algorithm does not fully exploit the choice of channels to be dropped, thereby choosing a sub-optimal set and losing too many parameters from the network.
+
+Between the results of the volume and channel constraints for ChipNet, at first glance it seems that the channel constraint is better throughout. However, as stated above, a direct comparison between the two is unfair. For example, volume constraints are meant to reduce the number of activations, and in turn would also reduce the FLOPs. This is evident from the results, as the FLOPs reported for the volume constraint are always lower than for the respective channel constraint. For a better understanding of the effects of these budgets, we perform another experiment for pairwise analysis of these constraints.
+
+
+Figure 3: Test accuracy versus the remaining budget for networks pruned using ChipNet with different budget constraints.
+
+
+
+
+
+
+
+Figure 3 shows the pairwise plots of the budgets used to prune WRN-26-12 on CIFAR-100. From the first two plots, we see that the scores reported are higher for any volume budget when the network is optimized with volume constraint, and similarly higher for a certain channel budget when the network is optimized for it. Similar observations can also be made between the number of parameters and FLOPs. In a nutshell, we observe that the pruned network performs best with respect to the constraint for which the masks are trained. Thus, the choice of constraint type should not be arbitrary but based on the practical applications, such as reducing FLOPs, among others.
+
+Stability and robustness. Our pruning strategy is also very stable, and this has already been demonstrated earlier for channel and volume pruning at low resource budgets. Compared to the baselines, networks obtained with ChipNet are found to perform significantly better even without the need for any additional tweaking such as explicitly opening certain channels to ensure network connectivity (Liu et al., 2017; Lemaire et al., 2019). Another example demonstrating the stability is volume pruning $(6.25\%)$ of ResNet-50 on CIFAR-100, where ChipNet performs $11.8\%$ better than BAR.
+
+To assess robustness, we have performed an extensive hyperparameter grid search at a volume budget of $6.25\%$ for WRN-26-12 to identify suitable values for $\alpha_{1}$ and $\alpha_{2}$. We observed that values in the neighborhood of this point do not affect the performance. Details of this grid search are provided in Appendix C.5. Further, the same hyperparameter setting has been used for all the experiments. The consistent results across all datasets show that ChipNet is robust.
+
+Transfer learning of masks. Inspired by knowledge distillation (Hinton et al., 2015), where refined information obtained from a deeper teacher network is transferred to a shallow student network, we study here the transfer of sparsity masks across datasets. For the teacher and student, we use the Tiny ImageNet and CIFAR-100 datasets, respectively, and ResNet-101 is pruned for different choices of channel budgets. Table 3 reports the performance scores for the pruned network on CIFAR-100 when the masks are learnt on CIFAR-100, as well as when they are learnt on Tiny ImageNet and transferred. Interestingly, for the moderate channel budgets of $40\%$ and $60\%$, we see that the models using masks transferred from Tiny ImageNet perform better than those obtained directly on CIFAR-100. This gain in performance from mask transfer could be attributed to the feature-richness of the chosen teacher
+
+Table 3: Accuracy values $(\%)$ on CIFAR-100 dataset for ResNet-101 pruned with different choices of channel budget $(\%)$ on CIFAR-100 (Base) and with masks from Tiny ImageNet (Transfer).
+
+| Budget | Base Acc | Transfer Acc |
| 20 | 71.3 | 68.3 |
| 40 | 71.6 | 72.0 |
| 60 | 71.8 | 72.1 |
| 100 | 73.6 | - |
+
+dataset. We also see that for the very low budget case of $20\%$, masks from the student dataset outperform those from the teacher. For such low budgets, the expressive power of the model is too low to fully exploit the knowledge from the transferred masks.
+
+# 5 CONCLUSION
+
+We have presented ChipNet, a deterministic strategy for structured pruning of CNNs based on a continuous Heaviside function and a crispness loss. Our approach provides the flexibility of use with different budget constraints. Through several experiments, we have demonstrated that ChipNet outperforms other methods on representative benchmark datasets. We have also shown that
+
+ChipNet can generate well-performing pruned architectures even for very low resource budgets. To conclude, given the strongly effective pruning capability that ChipNet exhibits, it can be used by the machine learning community to design efficient neural networks for a variety of applications.
+
+# 6 LIMITATIONS AND FUTURE WORK
+
+In this paper, we have explored the stability and robustness of ChipNet from various perspectives. Through experiments, we have shown that ChipNet performs consistently well across several CNN architectures and datasets. We analyzed it with respect to different choices of budget constraints, performed stability tests for scenarios as extreme as only $1\%$ of parameters remaining, analyzed how the masks get distributed across the network, and studied the transferability of masks. In all these experiments, ChipNet has proved to work well. However, before ChipNet can be considered a foolproof solution for pruning, additional experiments might be needed. For example, the applicability of ChipNet has not yet been explored on large datasets such as ImageNet, and we would like to explore it in our future work.
+
+Further, it would be of interest to explore how the pruned architectures obtained using ChipNet perform for tasks beyond classification, such as segmentation and object tracking. Improving inference speed is an important aspect of object tracking, and it has not been explored from the point of view of network pruning. We would like to see if the recent object tracking algorithms that use backbones such as ResNet-50 and ResNet-101 can be made faster through our pruning method.
+
+# REFERENCES
+
+Jose M Alvarez and Mathieu Salzmann. Learning the number of neurons in deep networks. In Advances in Neural Information Processing Systems, pp. 2270-2278, 2016.
+Jose M Alvarez and Mathieu Salzmann. Compression-aware training of deep networks. In Advances in Neural Information Processing Systems, pp. 856-867, 2017.
+Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando De Freitas. Predicting parameters in deep learning. In Advances in neural information processing systems, pp. 2148-2156, 2013.
+Emily L. Denton, W. Zaremba, Joan Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. ArXiv, abs/1404.0736, 2014.
+Xiaohan Ding, Guiguang Ding, Yuchen Guo, and Jungong Han. Centripetal sgd for pruning very deep convolutional networks with complicated structure. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4943-4953, 2019.
+Xin Dong, Shangyu Chen, and Sinno Pan. Learning to prune deep neural networks via layer-wise optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 4857-4867, 2017.
+Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635, 2018.
+Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
+Ariel Gordon, Elad Eban, Ofir Nachum, Bo Chen, Hao Wu, Tien-Ju Yang, and Edward Choi. Morphnet: Fast & simple resource-constrained structure learning of deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1586-1595, 2018.
+J. K. Guest, J. H. Prevost, and T. Belytschko. Achieving minimum length scale in topology optimization using nodal design variables and projection functions. International Journal for Numerical Methods in Engineering, 61:238-254, 2004.
+J. K. Guest, A. Asadpoure, and S. Ha. Eliminating beta-continuation from heaviside projection and density filter algorithms. Structural and Multidisciplinary Optimization, 44:443-453, 2011.
+
+Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015a.
+Song Han, J. Pool, John Tran, and W. Dally. Learning both weights and connections for efficient neural network. In NIPS, 2015b.
+Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pp. 1135-1143, 2015c.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016a.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630-645. Springer, 2016b.
+Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, and Yi Yang. Soft filter pruning for accelerating deep convolutional neural networks. arXiv preprint arXiv:1808.06866, 2018a.
+Yihui He, X. Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 1398-1406, 2017.
+Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. Amc: Automl for model compression and acceleration on mobile devices. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 784-800, 2018b.
+Geoffrey E. Hinton, Oriol Vinyals, and J. Dean. Distilling the knowledge in a neural network. *ArXiv*, abs/1503.02531, 2015.
+Yesmina Jaafra, Jean Luc Laurent, Aline Deruyver, and Mohamed Saber Naceur. Reinforcement learning for neural architecture search: A review. Image and Vision Computing, 89:57-66, 2019.
+A. Krizhevsky. Learning multiple layers of features from tiny images. 2009.
+Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems, pp. 598-605, 1990.
+Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
+C. Lemaire, A. Achkar, and P. Jodoin. Structured pruning of neural networks with budget-aware regularization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 9108-9116, 2019.
+Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
+Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, and H. Graf. Pruning filters for efficient convnets. ArXiv, abs/1608.08710, 2017.
+Tuanhui Li, Baoyuan Wu, Yujiu Yang, Yanbo Fan, Yong Zhang, and Wei Liu. Compressing convolutional neural networks via factorized convolutional filters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3977-3986, 2019.
+Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
+Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. arXiv preprint arXiv:1810.05270, 2018.
+I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In ICLR, 2019.
+
+Christos Louizos, Max Welling, and Diederik P Kingma. Learning sparse neural networks through $l_0$ regularization. arXiv preprint arXiv:1712.01312, 2017.
+Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. Thinet: A filter level pruning method for deep neural network compression. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 5068-5076, 2017.
+Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440, 2016.
+Jiayu Wu, Qixiang Zhang, and Guoxi Xu. Tiny imagenet challenge. Technical report.
+Huanrui Yang, Wei Wen, and Hai Li. Deephoyer: Learning sparser neural network with differentiable scale-invariant sparsity measures. arXiv preprint arXiv:1908.09979, 2019.
+Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
+Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878, 2017.
+
+# APPENDICES
+
+# A EXTENSION: PROPOSED APPROACH
+
+# A.1 BUDGET CONSTRAINTS
+
+Additional details related to the four budget constraints discussed in this paper are given below.
+
+Channel budget. It refers to the maximum number of hidden channels $\mathbf{h}$ that can be used across all convolutional layers of the network. Mathematically, it can be stated as
+
+$$
+\mathcal {V} ^ {(c)} = \frac {\sum_ {i = 1} ^ {p} \bar {z} _ {i}}{p}, \tag {4}
+$$
+
+where $p$ denotes the number of hidden channels in the network.
+
+Volume budget. This budget controls the size of the activations, thereby imposing an upper limit on the memory requirement for the inference step. We define volume budget as
+
+$$
+\mathcal {V} ^ {(v)} = \frac {\sum_ {j = 1} ^ {\mathcal {N} (h)} \sum_ {i = 1} ^ {p _ {j}} A _ {j} \bar {z} _ {i}}{\sum_ {j = 1} ^ {\mathcal {N} (h)} A _ {j} \cdot p _ {j}}, \tag {5}
+$$
+
+where $\mathcal{N}(h)$ denotes the number of convolutional layers in the network, and $A_{j}$ and $p_j$ denote area of the feature maps and their count, respectively, in the $j^{\mathrm{th}}$ layer.
+
+Parameter budget. This budget term directly controls the total number of parameters in the network, and can thus be used to impose an upper limit on the size of the model parameters. It is defined as
+
+$$
+\mathcal {V} ^ {(p)} = \frac {\sum_ {j = 1} ^ {\mathcal {N} (h)} \left(K _ {j} \cdot \sum_ {i = 1} ^ {p _ {j}} \bar {z} _ {i} ^ {j} \cdot \sum_ {i = 1} ^ {p _ {j - 1}} \bar {z} _ {i} ^ {j - 1} + 2 \cdot \sum_ {i = 1} ^ {p _ {j}} \bar {z} _ {i} ^ {j}\right)}{\sum_ {j = 1} ^ {\mathcal {N} (h)} \left(K _ {j} \cdot p _ {j} \cdot p _ {j - 1} + 2 \cdot p _ {j}\right)}, \tag {6}
+$$
+
+where $K_{j}$ denotes area of the kernel. The two terms in the numerator account for the number of parameters in the convolutional layer and the batchnorm layer.
+
+FLOPs budget. This budget can be directly used to control the computational requirement of the model. Assuming that a sliding window is used to achieve convolution and the nonlinear computational overhead is ignored, the FLOPs budget of the convolution neural network can be defined as in Molchanov et al. (2016):
+
+$$
+\mathcal {V} ^ {(f)} = \frac {\sum_ {j = 1} ^ {\mathcal {N} (h)} \left(K _ {j} \cdot \sum_ {i = 1} ^ {p _ {j - 1}} \bar {z} _ {i} ^ {j - 1} + 1\right) \cdot \sum_ {i = 1} ^ {p _ {j}} \bar {z} _ {i} ^ {j} \cdot A _ {j}}{\sum_ {j = 1} ^ {\mathcal {N} (h)} \left(K _ {j} \cdot p _ {j - 1} + 1\right) \cdot p _ {j} \cdot A _ {j}}. \tag {7}
+$$
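The four budgets can be computed from per-layer mask sums. The following numpy sketch evaluates Eqs. 4-7 on a made-up two-layer stack; the layer shapes, kernel sizes, and the treatment of the unmasked input layer (all $p_0$ input channels assumed active) are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

# Toy network: z_bar[j] holds the soft masks of layer j; A[j] is the
# feature-map area, K[j] the kernel area; p_j = len(z_bar[j]).
z_bar = [np.array([1., 1., 0., 0.]), np.array([1., 0., 0., 0., 0., 0.])]
A = [64.0, 16.0]   # feature-map areas (illustrative)
K = [9.0, 9.0]     # 3x3 kernels
p_in = 3           # input channels to the first layer, assumed unmasked

def channel_budget(z_bar):                       # Eq. 4
    z = np.concatenate(z_bar)
    return z.sum() / z.size

def volume_budget(z_bar, A):                     # Eq. 5
    num = sum(a * z.sum() for z, a in zip(z_bar, A))
    den = sum(a * z.size for z, a in zip(z_bar, A))
    return num / den

def parameter_budget(z_bar, K, p_in):            # Eq. 6
    prev_act, prev_tot = float(p_in), float(p_in)
    num = den = 0.0
    for z, k in zip(z_bar, K):
        num += k * z.sum() * prev_act + 2.0 * z.sum()   # conv + batchnorm
        den += k * z.size * prev_tot + 2.0 * z.size
        prev_act, prev_tot = z.sum(), float(z.size)
    return num / den

def flops_budget(z_bar, K, A, p_in):             # Eq. 7
    prev_act, prev_tot = float(p_in), float(p_in)
    num = den = 0.0
    for z, k, a in zip(z_bar, K, A):
        num += (k * prev_act + 1.0) * z.sum() * a
        den += (k * prev_tot + 1.0) * z.size * a
        prev_act, prev_tot = z.sum(), float(z.size)
    return num / den
```

Each function returns the fraction of the corresponding resource retained by the active masks, matching the normalized form of Eqs. 4-7.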
+
+# B TRAINING PROCEDURE
+
+Details regarding the pretraining, pruning and finetuning steps are discussed below:
+
+# B.1 PRE-TRAINING
+
+WRN-26-12, MobileNetV2, ResNet-50, ResNet-101 and ResNet-110 were trained with a batch size of 128 at an initial learning rate of $5 \times 10^{-2}$ using the SGD optimizer with momentum 0.9 and weight decay $10^{-3}$. We use a step learning rate strategy to decay the learning rate by 0.5 after every 30 epochs. For CIFAR-10 and CIFAR-100, models were trained for 300 epochs, whereas for Tiny ImageNet the number of epochs was halved to maintain the same number of iterations.
+
+PreResNet-164 was trained with a batch size of 64 at an initial learning rate of $10^{-1}$ using the SGD optimizer with momentum 0.9 and weight decay $10^{-4}$. We use a multi-step learning rate strategy to decay the learning rate by 0.1 after the $80^{th}$ and $120^{th}$ epochs. The model was trained for 160 epochs for all datasets. This strategy is adopted from Liu et al. (2017).
+
+# B.2 PRUNING
+
+A common pruning strategy was applied for all models irrespective of the budget type or dataset. AdamW (Loshchilov & Hutter, 2019) with a constant learning rate of $10^{-3}$ and weight decay of $10^{-3}$ was used as the optimizer. The weight decay for $\psi$ was kept at 0. The weights for the budget loss and crispness loss were kept constant at 30 and 10, respectively. $\beta$ increases by $2\times 10^{-2}$ after every epoch starting from 1, and $\gamma$ doubles after every 2 epochs starting from 2.
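The annealing schedule just described can be sketched as follows; reading "starting from 1" and "starting from 2" as the epoch-0 values is our interpretation of the text.

```python
def beta_gamma(epoch):
    # beta: starts at 1 and grows by 2e-2 after every epoch.
    beta = 1.0 + 0.02 * epoch
    # gamma: starts at 2 and doubles after every 2 epochs.
    gamma = 2.0 * (2.0 ** (epoch // 2))
    return beta, gamma
```

Under this schedule both sharpness parameters grow monotonically, so the logistic and continuous Heaviside projections become progressively steeper as pruning proceeds.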
+
+# B.3 FINE-TUNING
+
+The finetuning of the pruned model follows exactly the same procedure as the pre-training step.
+
+# B.4 HARDWARE
+
+All experiments were run on a Google Cloud Platform instance with an NVIDIA V100 GPU (16 GB), 16 GB RAM and a 4-core processor.
+
+# C ADDITIONAL EXPERIMENTS
+
+# C.1 PRUNING WITH VOLUME AND CHANNEL BUDGET
+
+This section shows results of ChipNet along with different baselines pruned with channel and volume budgets. Table 4 is an extension of Table 2 presented in section 4.2. Table 5 shows the numerical values corresponding to Figure 2 discussed in section 4.2.
+
+Table 4: Performance scores for pruning ResNet-50 architecture on CIFAR-100/CIFAR-10 for BAR and ChipNet (ours) with volume budget (V) and channel budget (C). The number of parameters and FLOPS for the unpruned networks are 23.7 million and $2.45 \times 10^{9}$ , respectively. Abbreviations 'Acc.' and 'Param.' refer to accuracy and number of parameters, all scores are reported in %, and parameters and FLOPs are reported relative to those in the unpruned network.
+
+| Method | Budget (%) | CIFAR-10 Acc. ↑ | CIFAR-10 Param. ↓ | CIFAR-10 FLOPs ↓ | CIFAR-100 Acc. ↑ | CIFAR-100 Param. ↓ | CIFAR-100 FLOPs ↓ |
| Unpruned | - | 93.3 | 100 | 100 | 73.0 | 100 | 100 |
| ChipNet (C) | 50 | 93.1 | 36.5 | 58.8 | 72.7 | 44.1 | 40.9 |
| ChipNet (V) | 50 | 93.4 | 18.6 | 29.0 | 72.1 | 58.0 | 38.4 |
| BAR (V) | 50 | 91.4 | 9.5 | 21.3 | 71.5 | 22.5 | 24.9 |
| ChipNet (C) | 25 | 93.0 | 12.3 | 30.0 | 72.6 | 18.7 | 20.7 |
| ChipNet (V) | 25 | 92.9 | 4.9 | 12.3 | 69.9 | 32.1 | 17.2 |
| BAR (V) | 25 | 91.5 | 2.3 | 7.4 | 71.8 | 5.4 | 7.3 |
| ChipNet (C) | 12.5 | 92.8 | 4.5 | 17.7 | 71.1 | 7.3 | 10.9 |
| ChipNet (V) | 12.5 | 91.0 | 2.8 | 5.1 | 65.5 | 22.5 | 9.0 |
| BAR (V) | 12.5 | 88.4 | 1.8 | 3.8 | 63.8 | 5.2 | 4.2 |
| ChipNet (C) | 6.25 | 92.1 | 1.6 | 8.8 | 67.0 | 1.8 | 4.8 |
| ChipNet (V) | 6.25 | 83.6 | 1.3 | 2.0 | 54.7 | 14.5 | 5.1 |
| BAR (V) | 6.25 | 84.0 | 0.9 | 1.3 | 42.9 | 3.7 | 2.0 |
+
+# C.2 PRUNING WITH ONLY LOGISTIC CURVES
+
+As discussed in section 3.2, the continuous Heaviside approximation helps penalize intermediate values of $\mathbf{z}$ so that they attain values closer to 0 or 1. With only logistic curves, the distribution of soft masks gets concentrated at one point, as shown in Figure 4b. Although the budget constraint will be satisfied, this kind of distribution hinders effective channel selection, as the relative importance of $\mathbf{z}$ cannot be determined concretely. Contrary to this, using the Heaviside function with the crispness loss models $\mathbf{z}$ in
+
+Table 5: Performance scores for pruning the WideResNet architecture on CIFAR-10, CIFAR-100 and Tiny ImageNet datasets for BAR (Lemaire et al., 2019), MorphNet (Gordon et al., 2018), ID (Denton et al., 2014), WM (Han et al., 2015b), random pruning and ChipNet (ours). All results are reported in $\%$ accuracy.
+
+| Method | Budget (%) | CIFAR-10 ↑ | Tiny ImageNet ↑ | CIFAR-100 ↑ |
| BAR | 50 | 92.7 | 52.4 | 74.1 |
| BAR | 25 | 92.8 | 52.0 | 73.6 |
| BAR | 12.5 | 92.8 | 51.4 | 72.6 |
| BAR | 6.25 | 91.6 | 52.0 | 70.5 |
| MorphNet | 50 | 93.3 | 58.2 | 73.6 |
| MorphNet | 25 | 92.9 | 55.8 | 70.4 |
| MorphNet | 12.5 | 90.7 | 51.7 | 69.9 |
| MorphNet | 6.25 | 86.4 | 39.2 | 55.5 |
| ID | 50 | 91.09 | 49.96 | 69.29 |
| ID | 25 | 91.44 | 49.55 | 69.75 |
| ID | 12.5 | 90.37 | 45.77 | 66.03 |
| ID | 6.25 | 86.92 | 39.72 | 59.13 |
| WM | 50 | 91.11 | 49.01 | 68.98 |
| WM | 25 | 91.20 | 49.67 | 69.10 |
| WM | 12.5 | 89.68 | 47.72 | 65.42 |
| WM | 6.25 | 86.33 | 40.19 | 58.99 |
| Random | 50 | 89.63 | 48.25 | 67.51 |
| Random | 25 | 88.02 | 46.08 | 63.64 |
| Random | 12.5 | 84.62 | 39.41 | 59.22 |
| Random | 6.25 | 81.36 | 29.53 | 48.88 |
| ChipNet | 50 | 94.7 | 61.6 | 77.3 |
| ChipNet | 25 | 94.4 | 61.5 | 77.1 |
| ChipNet | 12.5 | 93.9 | 59.6 | 75.8 |
| ChipNet | 6.25 | 92.7 | 56.7 | 71.4 |
+
+
+Figure 4: Distribution of zetas obtained on pruning WRN-26-12 with CIFAR-100 for a 16x channel pruning factor: (a) with the proposed approach, (b) without the crispness loss, (c) without the logistic round function.
+
+terms of their relative importance as shown in Figure 4a and hence results in more effective pruning of less important channels.
+
+# C.3 ROLE OF LOGISTIC-ROUNDING IN BUDGET CALCULATION
+
+As discussed in section 3.3, the budget calculation is done on $\bar{\mathbf{z}}$ rather than directly over the masks $\mathbf{z}$, where $\bar{z}_i\in \bar{\mathbf{z}}$ is obtained by applying a logistic projection on $\mathbf{z}$ with $\psi_0 = 0.5$ (Eq. 2). The importance of this projection can be seen in Figure 4c. The distribution of soft masks obtained with the proposed approach (Figure 4a) is clearly much more distinct than the one calculated without the logistic round projection (Figure 4c). Thus, a better threshold can be selected to choose the active sparsity mask that satisfies the budget constraint.
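Eq. 2 itself is not reproduced in this appendix; assuming a standard logistic curve centered at $\psi_0 = 0.5$ with a large slope, the projection can be sketched as follows (the slope value `beta` is an assumption for illustration).

```python
import numpy as np

def logistic_round(z, beta=20.0, center=0.5):
    # Pushes soft-mask values above the center toward 1 and those below
    # toward 0; the exact form and beta are assumptions, not Eq. 2 verbatim.
    return 1.0 / (1.0 + np.exp(-beta * (z - center)))

z = np.array([0.1, 0.45, 0.55, 0.9])
z_bar = logistic_round(z)  # roughly [0.0003, 0.27, 0.73, 0.9997]
```

The projection preserves the ordering of the soft masks while spreading values away from the center, which is what makes a clean threshold easier to pick.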
+
+
+Figure 5: Visualization of the number of channels remaining per convolutional layer in the architectures obtained from ChipNet with different choices of budget constraint and various pruning factors: (a) volume budget, (b) channel budget, (c) parameter budget, (d) FLOPs budget.
+
+# C.4 EFFECT OF THE CHOICE OF BUDGET
+
+We further visualize how ChipNet performs pruning across the various convolutional layers of a network for different choices of budget. Figure 5 shows the number of active channels per convolutional layer for several pruning factors for the 4 budget types. These results have been obtained on WRN-26-12. We see that the pruned networks with low resource budgets are aligned with those with the higher budgets in terms of distribution of active channels across layers. This could mean that the networks pruned for low resource budgets should be achievable hierarchically from those pruned on larger budgets. Further, we also see that there are layers with almost no channels left. The performance of our model is still not affected since these dead layers correspond to the skip connections in the network. ChipNet identifies such extra connections, and eliminates them if the performance of the model is not affected significantly.
+
+# C.5 HYPERPARAMETER GRID SEARCH
+
+An extensive grid search was done over the hyperparameters $\alpha_{1}$, $\alpha_{2}$, $\beta_{inc}$ and $\gamma_{inc}$ for pruning WRN-26-12 on CIFAR-100 at a $6.25\%$ volume budget. Here, $\alpha_{1}$ and $\alpha_{2}$ are the weights given to the crispness loss and budget loss, respectively, in the joint loss shown in section 3.4. $\beta_{inc}$ and $\gamma_{inc}$ refer to the number of epochs after which the value of $\beta$ increases by 0.1 and the value of $\gamma$ doubles; the effect of these hyperparameters is discussed in section 3.2. We found a cloud of values for which the pruning accuracy is comparable, shown in Table 6. We chose the best values from this cloud for all our other experiments, concluding that model pruning is not very sensitive to these hyperparameters.
+
+Table 6: Grid search on WRN-26-12/CIFAR-100 for a 16x volume pruning factor. Here, Acc refers to the validation accuracy of the hard-pruned model during pruning.
+
+| α1 | α2 | βinc | γinc | Acc(%) |
| 10 | 30 | 5 | 2 | 5.5 |
| 10 | 45 | 5 | 1 | 5.4 |
| 15 | 30 | 1 | 1 | 5.3 |
| 15 | 30 | 5 | 1 | 5.2 |
| 10 | 30 | 5 | 1 | 4.8 |
| 10 | 60 | 5 | 1 | 4.7 |
| 15 | 20 | 1 | 1 | 4.7 |
| 5 | 60 | 2 | 2 | 4.6 |
| 15 | 60 | 5 | 2 | 4.5 |
| 1 | 45 | 1 | 1 | 4.5 |
| 15 | 30 | 5 | 2 | 4.3 |
| 5 | 45 | 2 | 2 | 4.2 |
| 15 | 60 | 2 | 1 | 3.9 |
| 5 | 45 | 5 | 1 | 3.9 |
| 10 | 20 | 1 | 1 | 3.8 |
+
+# C.6 SENSITIVITY ANALYSIS
+
+In this section, we show the sensitivity analysis for WRN-26-12 on CIFAR-100 at a 16x volume budget constraint. We ran 5 pruning experiments where the values of all four hyperparameters $\alpha_{1}, \alpha_{2}, \beta_{inc}, \gamma_{inc}$ were sampled from a uniform distribution with up to $10\%$ perturbation from the tuned values.
+
+Table 7: Sensitivity analysis on WRN-26-12/CIFAR-100 for a 16x volume pruning factor. Here, Accuracy refers to the test accuracy of the hard-pruned model after finetuning.
+
| α1 | α2 | βinc | γinc | Accuracy |
| 10.19 | 28.23 | 5.12 | 1.98 | 0.7143 |
| 10.34 | 29.17 | 4.67 | 2.05 | 0.7232 |
| 9.28 | 27.34 | 4.8 | 1.91 | 0.72 |
| 9.24 | 32.25 | 5.13 | 1.89 | 0.7132 |
| 9.24 | 30.47 | 5.48 | 2.07 | 0.7168 |
| | | | Mean | 0.7175 |
| | | | Std dev. | 0.00412 |
+
+# C.7 TRANSFERABILITY OF MASK
+
+Here we show the complete results of Table 3 to depict the transferability of the mask proposed in Section 4.2.
+
+Table 8: Accuracy values (\%) on CIFAR-100 dataset for ResNet-101 pruned with different choices of channel budget (\%) on CIFAR-100 (Base) and with masks from Tiny ImageNet (Host).
+
+| Budget(%) | Tiny ImageNet (Host Acc) | C100 (Base Acc) | C100 (Transfer Acc) |
+| 20 | 51.6 | 71.3 | 68.3 |
+| 40 | 55.2 | 71.6 | 72.0 |
+| 60 | 56.0 | 71.8 | 72.1 |
+| 100 | 63.3 | 73.6 | - |
+
+# D IMPLEMENTATION DETAILS
+
+# D.1 BASELINE METHODS
+
+BAR, LZR, MorphNet, WM, ID on WRN-26-12: All results are taken from Lemaire et al. (2019). We reproduced a few results to cross-check and ensure that there were no large deviations; our reproduced results were very close to those reported in the paper.
+
+BAR on ResNet-50: Results are produced using the code provided by Lemaire et al. (2019). The pruning strategy is adopted from Lemaire et al. (2019), and the number of iterations is adjusted to match ours for a fair comparison.
+
+Network Slimming on PreResNet-164: We reproduced the results using the same pretraining, pruning, and finetuning strategy as Liu et al. (2017); the same pretraining and finetuning strategy is used for our results in order to allow a fair comparison of the pruning algorithms.
+
+# E PSEUDO CODE
+
+Here we present the explanations and pseudo-code for the various functions used in Algorithm 1.
+
+# E.1 LOGISTIC FUNCTION
+
+# Algorithm 2: LOGISTIC
+
+Input : Optimization parameter corresponding to every mask $\psi$ ; Growth rate control parameter $\beta$
+
+Output: Resultant intermediate projection $\tilde{\mathbf{z}}$
+
+$$
+\tilde {\mathbf {z}} \leftarrow \frac {1}{1 + e ^ {- \beta \psi}}
+$$
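Algorithm 2 amounts to an elementwise sigmoid with gain $\beta$. A minimal NumPy sketch (variable names `psi` and `beta` are ours, standing for $\psi$ and $\beta$):

```python
import numpy as np

def logistic(psi, beta):
    """Intermediate projection: z_tilde = 1 / (1 + exp(-beta * psi)).

    As beta grows (increased by 0.1 every beta_inc epochs, per Section C.5),
    the soft mask sharpens toward a hard 0/1 gate on the sign of psi.
    """
    return 1.0 / (1.0 + np.exp(-beta * psi))
```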
+
+# E.2 CONTINUOUS HEAVISIDE FUNCTION
+
+# Algorithm 3: CONTINUOUS HEAVISIDE
+
+Input : Intermediate projection $\tilde{\mathbf{z}}$ ; Curvature regularization parameter $\gamma$
+
+Output: Resultant final projection $\mathbf{z}$
+
+$$
+\mathbf {z} \leftarrow 1 - e ^ {- \gamma \tilde {z}} + \tilde {z} e ^ {- \gamma}
+$$
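A minimal NumPy sketch of Algorithm 3 (names are ours). Note that the projection maps $\tilde{z}=0$ to $0$ and $\tilde{z}=1$ to $1$ exactly for any $\gamma$, and approaches a hard step as $\gamma$ grows:

```python
import numpy as np

def continuous_heaviside(z_tilde, gamma):
    """Final projection: z = 1 - exp(-gamma * z_tilde) + z_tilde * exp(-gamma).

    Maps 0 -> 0 and 1 -> 1 exactly, and approaches a hard step function as
    gamma (doubled every gamma_inc epochs) increases.
    """
    return 1.0 - np.exp(-gamma * z_tilde) + z_tilde * np.exp(-gamma)
```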
+
+# E.3 CHIPNET LOSS FUNCTION
+
+# Algorithm 4: CHIPNET LOSS
+
+Input : Target budget $\mathcal{V}_{0}$ ; Current model budget $\mathcal{V}$ ; Intermediate projection $\tilde{\mathbf{z}}$ ; Final projection $\mathbf{z}$ ; Predicted output $\hat{\mathbf{y}}$ ; Ground truth $\mathbf{y}$ ; Crispness loss weight $\alpha_{1}$ ; Budget loss weight $\alpha_{2}$
+
+Output: Loss $\mathcal{L}$
+
+$$
+\begin{array}{l} \mathcal {L} _ {c e} \leftarrow - \sum \mathbf {y} \log (\hat {\mathbf {y}}) \\ \mathcal {L} _ {c} \leftarrow \| \tilde {\mathbf {z}} - \mathbf {z} \| _ {2} ^ {2} \\ \mathcal {L} _ {b} \leftarrow (\mathcal {V} - \mathcal {V} _ {0}) ^ {2} \\ \mathcal {L} \leftarrow \mathcal {L} _ {c e} + \alpha_ {1} \mathcal {L} _ {c} + \alpha_ {2} \mathcal {L} _ {b} \\ \end{array}
+$$
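A NumPy sketch of Algorithm 4 (our own variable names; it assumes a one-hot ground-truth vector `y` and predicted probabilities `y_hat`):

```python
import numpy as np

def chipnet_loss(y, y_hat, z_tilde, z, budget, target_budget, alpha1, alpha2):
    """Joint loss L = L_ce + alpha1 * L_c + alpha2 * L_b."""
    eps = 1e-12                                   # numerical safety in log
    l_ce = -np.sum(y * np.log(y_hat + eps))       # cross-entropy loss
    l_c = np.sum((z_tilde - z) ** 2)              # crispness loss
    l_b = (budget - target_budget) ** 2           # budget loss
    return l_ce + alpha1 * l_c + alpha2 * l_b
```

With a perfect prediction, crisp masks, and the budget on target, all three terms vanish; deviating from the target budget is penalized quadratically with weight $\alpha_2$.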
+
+# E.4 FORWARD FUNCTION
+
+The forward function takes three inputs: network weights $(\mathbf{W})$ , an input data batch $(\mathbf{x})$ and sparsity masks $(\mathbf{z})$ . It is the forward pass of a regular CNN, with one change: the respective sparsity masks are multiplied with the activations obtained after every batch normalization layer.
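As a toy fully-connected analogue (not the actual ChipNet code; the linear layer and per-batch normalization below merely stand in for convolution and batch norm), the masking step looks like this:

```python
import numpy as np

def masked_forward(weights, x, masks):
    """Toy analogue of the masked forward pass: after each layer's
    (stand-in) normalization, activations are multiplied elementwise by
    that layer's channel mask z before the nonlinearity.
    """
    h = x
    for W, z in zip(weights, masks):
        h = h @ W                                  # linear layer (conv stand-in)
        h = (h - h.mean()) / (h.std() + 1e-8)      # batch-norm stand-in
        h = np.maximum(h * z, 0.0)                 # mask channels, then ReLU
    return h
```

Channels whose mask entry is zero contribute nothing downstream, which is what allows them to be physically removed after hard pruning.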
+
+# E.5 BACKWARD FUNCTION
+
+The backward function is the backpropagation pass of a regular CNN, used to obtain the gradients of the loss with respect to the model parameters $(\mathbf{W}$ and $\psi$ ).
+
+# F ADDITIONAL RESULTS
+
+Table 9: Performance scores for pruning the MobileNetV2 architecture on CIFAR-10 for ChipNet with channel budget.
+
+| Budget(%) | Acc. ↑ |
+| Unpruned | 93.55 |
+| 80 | 92.58 |
+| 60 | 92.44 |
+| 40 | 91.98 |
+| 20 | 90.65 |
+
+Table 10: Performance scores for pruning ResNet-50 architecture on Tiny-ImageNet for ChipNet with volume budget (V) and channel budget (C).
+
+| Method | Budget(%) | Acc. ↑ |
+| Unpruned | - | 61.38 |
+| ChipNet (C) | 50 | 56.65 |
+| | 25 | 54.72 |
+| | 12.5 | 52.73 |
+| | 6.25 | 47.49 |
+| ChipNet (V) | 50 | 54.23 |
+| | 25 | 53 |
+| | 12.5 | 50.03 |
+| | 6.25 | 45.51 |
+
+Table 11: Accuracy values (%) on CIFAR-10 dataset for ResNet-110 pruned using volume budget.
+
+| Model | Base acc. ↑ | Prune acc. ↑ | FLOPs reduction ↑ |
+| Pruning-A (Li et al., 2016) | 93.53% | 93.51% | 1.19x |
+| Pruning-B (Li et al., 2016) | 93.53% | 93.30% | 1.62x |
+| SFP (He et al., 2018a) | 93.68% | 93.86% | 1.69x |
+| C-SGD-5/8 (Ding et al., 2019) | 94.38% | 94.41% | 2.56x |
+| CNN-FCF-A (Li et al., 2019) | 93.58% | 93.67% | 1.76x |
+| CNN-FCF-B (Li et al., 2019) | 93.58% | 92.96% | 3.42x |
+| Group-HS 7e-5 (Yang et al., 2019) | 93.62% | 94.06% | 2.30x |
+| Group-HS 1e-4 (Yang et al., 2019) | 93.62% | 93.80% | 3.09x |
+| Group-HS 1.5e-4 (Yang et al., 2019) | 93.62% | 93.54% | 4.38x |
+| Group-HS 2e-4 (Yang et al., 2019) | 93.62% | 92.97% | 5.84x |
+| ChipNet (Volume-2x) | 93.98% | 93.78% | 2.66x |
+| ChipNet (Volume-4x) | 93.98% | 92.38% | 7.54x |
+| ChipNet (Volume-8x) | 93.98% | 91.36% | 12.53x |
\ No newline at end of file
diff --git a/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/images.zip b/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..608f833710f76d824cad82fc5895af4829e1bfc9
--- /dev/null
+++ b/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86d6a271c270b8aaba5e08e667e2d9c2bfefe1ae9aac5759928dd6889f81f169
+size 673483
diff --git a/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/layout.json b/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..3f99f400cb3bf129cc16ab15cb5f868f589437f3
--- /dev/null
+++ b/chipnetbudgetawarepruningwithheavisidecontinuousapproximations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07f2d10794b18c43c9539234e853c28cceff1cec4e4ab21e6d38607c5e28b5c6
+size 592202
diff --git a/clairvoyanceapipelinetoolkitformedicaltimeseries/9223a420-ae10-46db-aa46-7dfb85bcd90d_content_list.json b/clairvoyanceapipelinetoolkitformedicaltimeseries/9223a420-ae10-46db-aa46-7dfb85bcd90d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ef716f6cc0dfdfde383e830278d6dfc5d28d96e9
--- /dev/null
+++ b/clairvoyanceapipelinetoolkitformedicaltimeseries/9223a420-ae10-46db-aa46-7dfb85bcd90d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e472ea42905427485c9760fed386868bbaf364279a7fca72542c00a984a037c0
+size 172953
diff --git a/clairvoyanceapipelinetoolkitformedicaltimeseries/9223a420-ae10-46db-aa46-7dfb85bcd90d_model.json b/clairvoyanceapipelinetoolkitformedicaltimeseries/9223a420-ae10-46db-aa46-7dfb85bcd90d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..08e7d9236bae620c796e53a7a4a233235df36941
--- /dev/null
+++ b/clairvoyanceapipelinetoolkitformedicaltimeseries/9223a420-ae10-46db-aa46-7dfb85bcd90d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2864254ceb51fed159ccce5aa6a4af3faf376448708de7e3e91f74a12c1e4461
+size 212566
diff --git a/clairvoyanceapipelinetoolkitformedicaltimeseries/9223a420-ae10-46db-aa46-7dfb85bcd90d_origin.pdf b/clairvoyanceapipelinetoolkitformedicaltimeseries/9223a420-ae10-46db-aa46-7dfb85bcd90d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7785120e0d699b86b1c07355e76ea12c71caab3d
--- /dev/null
+++ b/clairvoyanceapipelinetoolkitformedicaltimeseries/9223a420-ae10-46db-aa46-7dfb85bcd90d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a31aa67f2e9eca6ea5fa631304835f6ca02dee254cdef11562e4b3add641999
+size 771700
diff --git a/clairvoyanceapipelinetoolkitformedicaltimeseries/full.md b/clairvoyanceapipelinetoolkitformedicaltimeseries/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9ce113476cc63387566c1ccf1344c16afad6bc06
--- /dev/null
+++ b/clairvoyanceapipelinetoolkitformedicaltimeseries/full.md
@@ -0,0 +1,1005 @@
+# CLAIRVOYANCE: A PIPELINE TOOLKIT FOR MEDICAL TIME SERIES
+
+# Daniel Jarrett*
+
+University of Cambridge, UK
+daniel.jarrett@maths.cam.ac.uk
+
+# Ioana Bica
+
+University of Oxford, UK
+The Alan Turing Institute, UK
+ioana.bica@eng.ox.ac.uk
+
+# Ari Ercole
+
+University of Cambridge, UK
+Cambridge University Hospitals NHS Foundation Trust, UK
+ae105@cam.ac.uk
+
+# Jinsung Yoon*
+
+Google Cloud AI, Sunnyvale, USA
+University of California, Los Angeles, USA
+jinsungyoon@google.com
+
+# Zhaozhi Qian
+
+University of Cambridge, UK
+zhaozhi.qian@maths.cam.ac.uk
+
+# Mihaela van der Schaar
+
+University of Cambridge, UK
+University of California, Los Angeles, USA
+The Alan Turing Institute, UK
+mv472@cam.ac.uk
+
+# ABSTRACT
+
+Time-series learning is the bread and butter of data-driven clinical decision support, and the recent explosion in ML research has demonstrated great potential in various healthcare settings. At the same time, medical time-series problems in the wild are challenging due to their highly composite nature: They entail design choices and interactions among components that preprocess data, impute missing values, select features, issue predictions, estimate uncertainty, and interpret models. Despite exponential growth in electronic patient data, there is a remarkable gap between the potential and realized utilization of ML for clinical research and decision support. In particular, orchestrating a real-world project lifecycle poses challenges in engineering (i.e. hard to build), evaluation (i.e. hard to assess), and efficiency (i.e. hard to optimize). Designed to address these issues simultaneously, Clairvoyance proposes a unified, end-to-end, autoML-friendly pipeline that serves as a (i) software toolkit, (ii) empirical standard, and (iii) interface for optimization. Our ultimate goal lies in facilitating transparent and reproducible experimentation with complex inference workflows, providing integrated pathways for (1) personalized prediction, (2) treatment-effect estimation, and (3) information acquisition. Through illustrative examples on real-world data in outpatient, general wards, and intensive-care settings, we illustrate the applicability of the pipeline paradigm on core tasks in the healthcare journey. To the best of our knowledge, Clairvoyance is the first to demonstrate viability of a comprehensive and automatable pipeline for clinical time-series ML.
+
+Python Software Repository: https://github.com/vanderschaarlab/clairvoyance
+
+# 1 INTRODUCTION
+
+Inference over time series is ubiquitous in medical problems [1-7]. With the increasing availability and accessibility of electronic patient records, machine learning for clinical decision support has made great strides in offering actionable predictive models for real-world questions [8, 9]. In particular, a plethora of methods-based research has focused on addressing specific problems along different stages of the clinical data science pipeline, including preprocessing patient data [10, 11], imputing missing measurements [12-16], issuing diagnoses and prognoses of diseases and biomarkers [17-25], estimating the effects of different treatments [26-31], optimizing measurements [32-36], capturing
+
+uncertainty [37-41], and interpreting learned models [42-46]. On the other hand, these component tasks are often formulated, solved, and implemented as mathematical problems (on their own), resulting in a stylized range of methods that may not acknowledge the complexities and interdependencies within the real-world clinical ML project lifecycle (as a composite). This leads to an often punishing translational barrier between state-of-the-art ML techniques and any actual patient benefit that could be realized from their intended application towards clinical research and decision support [47-51].
+
+Three Challenges To bridge this gap, we argue for a more comprehensive, systematic approach to development, validation, and clinical utilization. Specifically, due to the number of moving pieces, managing real-world clinical time-series inference workflows raises the following concerns:
+
+- First and foremost, the engineering problem is that building complex inference procedures involves significant investment: Over $95\%$ of the work in a typical mature project is consumed by software technicalities, and less than $5\%$ by real scientific questions [52]. As a clinician or healthcare practitioner, however, few resources are available for easily developing and validating complete workflows. What is desired is a simple, consistent development and validation workflow that encapsulates all major aspects of clinical time-series ML—from initial data preprocessing all the way to the end.
+- Second, the evaluation problem is that the performance of any component depends on its context; for instance, the accuracy of a prediction model is intimately tied to the data imputation method that precedes it [13, 14]. As an ML researcher, however, current empirical practices typically examine the merits of each component individually, with surrounding steps configured as convenient for ensuring "all else equal" conditions for assessing performance. What is desired is a structured, realistic, and reproducible method of comparing techniques that honestly reflects interdependencies in the gestalt.
+- Lastly, the efficiency problem is that sophisticated designs tend to be resource-intensive to optimize, and state-of-the-art deep learning approaches require many knobs to be tuned. As a clinical or ML practitioner alike, this computational difficulty may be compounded by pipeline combinations and the potential presence of temporal distribution shifts in time-series datasets [53]. What is desired is a platform on which the process of pipeline configuration and hyperparameter optimization can be automated—and through which new optimization algorithms to that effect may be built and tested.
+
+Contributions We tackle all three issues simultaneously. The Clairvoyance package is a unified, end-to-end, autoML-friendly pipeline for medical time series. (i) As a software toolkit, it enables development through a single unified interface: Modular and composable structures facilitate rapid experimentation and deployment by clinical practitioners, as well as simplifying collaboration and code-sharing. (ii) As an empirical standard, it serves as a complete experimental benchmarking environment: Standardized, end-to-end pipelines provide realistic and systematic context for evaluating novelties within individual component designs, ensuring that comparisons are fair, transparent, and reproducible. (iii) Finally, as an interface for optimization over the pipeline abstraction, Clairvoyance enables leveraging and developing algorithms for automatic pipeline configuration and stepwise selection, accounting for interdependencies among components, hyperparameters, and time steps. Through illustrative examples on real-world medical datasets, we highlight the applicability of the proposed paradigm within personalized prediction, personalized treatment planning, and personalized monitoring. To the best of our knowledge, Clairvoyance is the first coherent effort to demonstrate viability of a comprehensive, structured, and automatable pipeline for clinical time-series learning.
+
+
+Figure 1: Clairvoyance and the Patient Journey. The healthcare lifecycle revolves around asking (1) what outcomes are most likely, (2) which treatments may best improve them, and (3) when taking additional measurements is most informative. Utilizing both static and temporal data, Clairvoyance provides corresponding pathways for personalized prediction of outcomes, personalized estimation of treatment-effects, and personalized monitoring.
+
+# 2 THE CLAIRVOYANCE PIPELINE
+
+The Patient Journey Consider the typical patient's interactions with the healthcare system. Their healthcare lifecycle revolves tightly around (1) forecasting outcomes of interest (i.e. the prediction problem), (2) selecting appropriate interventions (i.e. the treatment effects problem), and (3) arranging followup monitoring (i.e. the active sensing problem). Each of these undertakings involves the full complexity of preparing, modeling, optimizing, and drawing conclusions from clinical time series. Clairvoyance provides model pathways for these core tasks in the patient journey (see Figure 1)—integrated into a single pipeline from start to finish (see Figure 2). Formally, these pathways include:
+
+- Predictions Path. Let $\{(\mathbf{s}_n, \mathbf{x}_{n,1:T_n})\}_{n=1}^N$ denote any medical time-series dataset, where $\mathbf{s}_n$ is the vector of static features for the $n$ -th patient, and $\mathbf{x}_{n,1:T_n} \doteq \{\mathbf{x}_{n,t}\}_{t=1}^{T_n}$ is the vector sequence of temporal features. One-shot problems seek to predict a vector of labels $\mathbf{y}_n$ from $(\mathbf{s}_n, \mathbf{x}_{n,1:T_n})$ : e.g. prediction of mortality or discharge, where $y_n \in \{0,1\}$ . Online problems predict some target vector $\mathbf{y}_{n,t}$ from $(\mathbf{s}_n, \mathbf{x}_{n,1:t})$ at every time step: e.g. $\tau$ -step-ahead prediction of biomarkers $\mathbf{y}_{n,t} \subseteq \mathbf{x}_{n,t + \tau}$ .
+- Treatment Effects Path. For individualized treatment-effect estimation [26-31], we additionally identify interventional actions $\mathbf{a}_{n,t} \subseteq \mathbf{x}_{n,t}$ at each time step (e.g. the choices and dosages of prescribed medication), as well as corresponding measurable outcomes $\mathbf{y}_{n,t} \subseteq \mathbf{x}_{n,t + \tau}$ . The learning problem now consists in quantifying the (factual or counterfactual) potential outcomes $\mathbf{y}_{n,t + \tau}$ that would result from any specific sequence of interventions and patient covariates $(\mathbf{s}_n, \mathbf{x}_{n,1:t}, \mathbf{a}_{n,1:t})$ .
+- Active Sensing Path. In addition to mapping (already-measured) covariates to targets, the very decision of what (and when) to measure is also important under resource constraints. In medical settings, active sensing deals with balancing this trade-off between information gain and acquisition costs [32-36]. With reference to some downstream task (e.g. predicting $\mathbf{y}_{n,t + 1}$ ), the aim is to select a subset of covariates $\kappa_{n,t}$ at each $t$ to maximize the (net) benefit of observing $\{x_{n,t,k}\}_{k\in \kappa_{n,t}}$ .
+
+As a Software Toolkit Engineering complete medical time-series workflows is hard. The primary barrier to collaborative research between ML and medicine seldom lies in any particular algorithm. Instead, the difficulty is operational [6, 48, 54]—i.e. in coordinating the entire data science process, from handling missing/irregularly sampled patient data all the way to validation on different populations [4, 55-60]. Clairvoyance gives a single unified roof under which clinicians and researchers alike can readily address such common issues—with the only requirement that the data conform to the standard EAV open schema for clinical records (i.e. patient key, timestamp, parameter, and value).
+
+Figure 2: Clairvoyance Pipeline Overview. (Dashed) purple cells denote pipeline inputs/outputs, and (solid) gray cells denote pipeline components. Orange options give main pathway models, and blue options give surrounding components. (Solid) gray arrows indicate pipeline workflow, and (dashed) orange the optimization interface.
+
+```python
+"Configure Data Preprocessing"
+preprocessing = PipelineComposer(
+    FilterNegative(...), OneHotEncoder(...),
+    Normalization(...), ...)
+"Configure Problem Specification"
+specification = ProblemMaker(
+    problem_class='online', max_seq_len=24,
+    label=['ventilator'], treatment=None, window=4, ...)
+"Configure Data Imputation"
+imputation = PipelineComposer(
+    Imputation(type='static', model_name='...', ...),
+    Imputation(type='temporal', model_name='...', ...))
+"Configure Feature Selection"
+feature_selection = PipelineComposer(
+    FeatureSelection(type='static', model_name='...', ...),
+    FeatureSelection(type='temporal', model_name='...', ...))
+```
+
+```python
+"Configure Pathway Model"
+prediction_model = Prediction(model_name='...',
+    parameter_dict={'...'}, ...)
+"Load Datasets"
+data_train = DataLoader.load(
+    static_dir='...', temporal_dir='...', ...)
+data_test = DataLoader.load(
+    static_dir='...', temporal_dir='...', ...)
+"Execute Pipeline"
+for component in [preprocessing, specification,
+                  imputation, feature_selection]:
+    data_train = component.fit_transform(data_train)
+    data_test = component.transform(data_test)
+prediction_model.fit(data_train, ...)
+test_output = prediction_model.predict(data_test, ...)
+```
+
+Figure 3: Illustrative Usage. A prototypical structure of API calls for constructing a prediction pathway model. Clairvoyance is modularized to abide by established fit/transform/predict design patterns. (Green) ellipses denote additional configuration; further modules (treatments, sensing, uncertainty, etc.) expose similar interfaces.
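The EAV schema can be made concrete with a small, library-free sketch (the records below are invented for illustration) that pivots such rows into per-patient, per-parameter time series:

```python
from collections import defaultdict

# Hypothetical records in EAV form: (patient key, timestamp, parameter, value).
records = [
    ("patient_1", 0, "heart_rate", 72.0),
    ("patient_1", 1, "heart_rate", 75.0),
    ("patient_1", 0, "sbp", 118.0),
    ("patient_2", 0, "heart_rate", 90.0),
]

def eav_to_series(rows):
    """Pivot EAV rows into {patient: {parameter: [(t, value), ...]}}."""
    series = defaultdict(lambda: defaultdict(list))
    for key, t, param, value in rows:
        series[key][param].append((t, value))
    for channels in series.values():
        for measurements in channels.values():
            measurements.sort()  # order each channel by timestamp
    return series
```

Because the schema is long/narrow rather than wide, new parameters require no schema changes, which is why irregularly sampled clinical records fit it naturally.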
+
+Under a simple, consistent API, Clairvoyance encapsulates all major steps of time-series modeling, including (a.) loading and (b.) preprocessing patient records, (c.) defining the learning problem, handling missing or irregular samples in both (d.) static and (e.) temporal contexts, (f.) conducting feature selection, (g.) fitting prediction models, performing (h.) calibration and (i.) uncertainty estimation of model outputs, (j.) applying global or instance-wise methods for interpreting learned models, (k.) computing evaluation metrics, and (l.) visualizing results. Figure 2 shows a high-level overview of major components in the pipeline, and Figure 3 shows an illustrative example of usage.
+
+All component modules are designed around the established fit-transform-predict paradigms, and the modeling workflow is based around a single chain of API calls. In this manner, each stage in the pipeline is extensible with little effort: Novel techniques developed for specific purposes (e.g. a new state-of-the-art imputation method) can be seamlessly integrated via simple wrappers (see Appendix G for an example of how this can be done for any existing method, e.g. from sklearn). This stepwise composability aims to facilitate rapid experimentation and deployment for research, as well as simplifying collaboration and code-sharing. Package documentation/tutorials give further software details.
+
+As an Empirical Standard Evaluating any algorithm depends on its context. For instance, how well a proposed classifier ultimately performs is invariably coupled with the upstream feature-selection method it is paired with [44]. Likewise, the accuracy of a state-of-the-art imputation method cannot be assessed on its own: With respect to different downstream prediction models, more sophisticated imputation may actually yield inferior performance relative to simpler techniques [13, 14]—especially if components are not jointly optimized [15]. While current research practices typically seek to isolate individual gains through "all-else-equal" configurations in benchmarking experiments, the degree of actual overlap in pipeline configurations across studies is lacking: There is often little commonality in the datasets used, preprocessing done, problem types, model classes, and prediction endpoints. This dearth of empirical standardization may not optimally promote practical assessment/reproducibility, and may obscure/entangle true progress. (Tables 6–7 in Appendix A give a more detailed illustration).
+
+Clairvoyance aims to serve as a structured evaluation framework to provide such an empirical standard. After all, in order to be relevant from a real-world medical standpoint, assessment of any single proposed component (e.g. a novel ICU mortality predictor) can—and should—be contextualized in the entire end-to-end workflow as a whole. Together, the 'problem-maker', 'pipeline-composer', and all the pipeline component modules aim to simplify the process of specifying, benchmarking, and (self-)documenting full-fledged experimental setups for each use case. At the end of the day, while results from external validation of is often heterogeneous [2, 59, 61], improving transparency and reproducibility greatly facilitates code re-use and independent verification [54, 56, 57]. Just as the "environment" abstraction in OpenAI Gym does for reinforcement learning, the "pipeline" abstraction in Clairvoyance seeks to promote accessibility and fair comparison as pertains medical time-series.
+
+(a) Example: SASH 'decomposed' as SMS (fulfilled here by DKL) followed by a combiner (stacking ensemble); (b) Example: SPSC 'decomposed' as PSC (fulfilled here by SKL) followed by SMS (fulfilled here by DKL).
+
+[Figure 4 code listing omitted.]
+
+As an Optimization Interface Especially in cross-disciplinary clinical research—and during initial stages of experimentation—automated optimization may alleviate potential scarcity of expertise in the specifics of design and tuning. The Clairvoyance pipeline abstraction serves as a software interface for optimization algorithms—through which new/existing techniques can be applied, developed, and tested in a more systematic, realistic setting. In particular, by focusing on the temporal aspect of medical time series, this adds a new dimension to classes of autoML problems.
+
+Briefly (see Figure 5), consider the standard task of hyperparameter optimization (for a given model) [62]. By optimizing over classes of algorithms, the combined algorithm selection and hyperparameter optimization ("CASH") problem [63-65] has been approached in healthcare settings by methods such as progressive sampling, filtering, and fine-tuning [50, 66]. By further optimizing over combinations of pipeline components, the pipeline selection and configuration ("PSC") problem [67] has also been tackled in clinical modeling via such techniques as fast linear search ("FLASH") [68] and structured kernel learning ("SKL") [67, 69]. Now, what bears further emphasis is that for clinical time series, the temporal dimension is critical due to the potential for temporal distribution shifts within time-series data—a common phenomenon in the medical setting (we refer to [53, 70, 71] for additional background). Precisely to account for such temporal settings, the stepwise model selection ("SMS") problem [71] has recently been approached by such methods as relaxed parameter sharing ("RPS") [53] as well as deep kernel learning ("DKL") [71, 72]. Further, what the pipeline interface also does is to naturally allow extending this to define the stepwise algorithm selection and hyperparameter optimization ("SASH") problem, or even—in the most general case—the stepwise pipeline selection and configuration ("SPSC") problem. Although these latter two are new—and clearly hard—problems (with no existing solutions), Figure 4 shows simple examples of how the interface allows minimally adapting the SMS and PSC sub-problems (which do have existing solutions) to form feasible (approximate) solutions.
+
+Figure 4: Optimization Interface. Example code using the optimization interface to conduct stepwise (i.e. across time steps) and componentwise (i.e. across the pipeline) configuration. Each interface is implementable by any choice of new/existing algorithms. The DKL implementation of SMS is provided for use in the Section 4 examples.
+
+Figure 5: Degrees of Optimizations. Clairvoyance allows optimizing over algorithms, pipelines, and time steps.
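The intuition behind stepwise selection (a toy sketch of the idea, not the DKL method itself) is that under temporal distribution shift the best model can differ across time steps, so selection is performed independently per step on held-out error:

```python
import numpy as np

def stepwise_model_selection(val_errors):
    """Toy SMS: given validation errors of shape [num_models, num_steps],
    independently pick the best model index at every time step."""
    return np.argmin(val_errors, axis=0)
```

A single globally chosen model corresponds to arguing over the row sums instead; the gap between the two is largest precisely when the error rankings change over time.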
+
+Two distinctions are due: First, Clairvoyance is a pipeline toolkit, not an autoML toolkit. It is not our goal to (re-)implement new/existing optimization algorithms—which abound in the literature. Rather, the standardized interface is precisely what enables existing implementations to be plugged in, as well as allowing new autoML techniques to be developed and validated within a realistic medical pipeline. All that is required is for the optimizing agent to expose an appropriate 'optimize' method given candidate components, and for such candidates to expose a 'get-hyperparameter-space' method. Second—but no less importantly—we must emphasize that we are not advocating removing human oversight from the healthcare loop. Rather, the pipeline simply encourages systematizing the initial development stages in clinical ML, which stands to benefit from existing literature on efficient autoML techniques.
+
+# Design Principles
+
+Our philosophy is based on the authors' experience in prototyping and developing real-world collaborative projects in clinical time series.
+
+- Pipeline First, Models Second: Our first emphasis is on reproducibility: The process of engineering and evaluating complete medical time-series workflows needs to be clear and transparent. Concretely, this manifests in the strict "separation of concerns" enforced by the high-level API of each component module along the pipeline (see e.g. Figure 3). With the 'problem-maker' and 'problem-composer' as first-class objects, the central abstraction here is the pipeline itself, while the intricacies and configurations of individual model choices (e.g. a specific deep learning temporal imputation method) are limited to within each component module.
+- Be Minimal and Unintrusive: Our second emphasis is on standardization: While workflow development needs to be unified and systematic, learning to use the framework should be intuitive as well. Concretely, this manifests in the API's adherence to the existing and popular 'fit-transform-predict' paradigm (see e.g. sklearn) in all component modules—both 'along' the pipeline steps, as well as 'across' the pathways that define the patient's healthcare lifecycle (see Figure 2). This enables easy adoption and rapid prototyping—qualities that are paramount given the degree of collaborative research and cross-disciplinary code-sharing required in healthcare-related research.
+- Encourage Extension: Our third emphasis is on extensibility: Given that novel methods are proposed in the ML community every day, the pipeline components should be easily extensible to incorporate new algorithms. Concretely, this manifests in the encapsulated design for models within each component module: In order to integrate a new component method (e.g. from another researcher's code, or from an external package) into the framework, all that is required is a simple wrapper class that implements the 'fit', 'predict', and 'get-hyperparameter-space' methods; likewise, for an optimization agent (see the subsection on the optimization interface), all that is required is to expose an 'optimize' method.
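To make the wrapper requirement concrete, here is a minimal sketch of such a wrapper (only the three method names come from the text; everything else, including the stand-in external model, is our own illustration, and the package's exact signatures may differ):

```python
class ExternalModelWrapper:
    """Hypothetical wrapper exposing the three methods the text requires
    of a component: 'fit', 'predict', and 'get_hyperparameter_space'."""

    def __init__(self, external_model):
        self.model = external_model  # any estimator with fit/predict

    def fit(self, x, y):
        self.model.fit(x, y)
        return self

    def predict(self, x):
        return self.model.predict(x)

    @staticmethod
    def get_hyperparameter_space():
        # Illustrative search space; a real component declares its own.
        return {"learning_rate": [1e-3, 1e-2, 1e-1]}


class MeanBaseline:
    """Stand-in 'external' model: predicts the training-label mean."""

    def fit(self, x, y):
        self.mean = sum(y) / len(y)
        return self

    def predict(self, x):
        return [self.mean] * len(x)
```

An sklearn-style estimator can be dropped in for `MeanBaseline` unchanged, since it already exposes compatible `fit`/`predict` methods.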
+
+# Worked Examples
+
+For a discussion of the choice of built-in techniques to include with the initial release, see Appendix C. Appendix E gives a worked example of using the Clairvoyance pipeline to train and use a model in a standard setting (for this, we use the predictions pathway). Appendix F gives a worked example of using the optimization interface to perform stepwise model selection (for this, we use the treatment effects pathway for variety). Appendix G gives an example of how a generic wrapper can be written for integrating an external model/algorithm that is not already implemented in the current version of Clairvoyance. Finally, the software repository contains Jupyter notebooks and top-level API code with examples of pathways and optimization.
+
+# 3 RELATED WORK
+
+Clairvoyance is a pipeline toolkit for medical time-series machine learning research and clinical decision support. As such, this broad undertaking lies at the intersection of three concurrent domains of work: Time-series software development, healthcare journey modeling, and automated learning.
+
+Time-series Software First and foremost, Clairvoyance is a software toolkit. Focusing on challenges common to clinical time-series modeling, it is primarily differentiated by the breadth and flexibility of the pipeline. While there exists a variety of sophisticated time-series packages for different purposes, they typically concentrate on implementing collections of algorithms and estimators for specific types of problems, such as classification [79], forecasting [77], feature extraction [76], reductions between tasks [78], or integrating segmentation and transforms with estimators [75]. By contrast, our focus is orthogonal: Clairvoyance aims at end-to-end development along the entire inference workflow, including pathways and pipeline components important to medical problems (see Table 1). Indeed, if so desired—and as mentioned above—specific algorithms from [73-79] can be integrated into Clairvoyance workflows through the usual 'fit-transform-predict' interface with little hassle.
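To make the 'fit-transform-predict' composition concrete, the chaining of steps can be sketched as follows. This is a toy stand-in for illustration: the class names and the two toy steps are our own, not the Clairvoyance API:

```python
import numpy as np

class MinMaxScaler:
    """Toy transform step: 'fit' learns feature ranges, 'transform' rescales."""
    def fit(self, X):
        self.lo, self.hi = X.min(axis=0), X.max(axis=0)
        return self
    def transform(self, X):
        return (X - self.lo) / np.where(self.hi > self.lo, self.hi - self.lo, 1.0)

class NearestMeanClassifier:
    """Toy predictor step: classify by nearest class centroid."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.stack([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.centroids[None, :, :]) ** 2).sum(axis=2)
        return self.classes[d.argmin(axis=1)]

class Pipeline:
    """Chain transforms 'along' the pipeline, then delegate to the predictor."""
    def __init__(self, transforms, predictor):
        self.transforms, self.predictor = transforms, predictor
    def fit(self, X, y):
        for t in self.transforms:
            X = t.fit(X).transform(X)
        self.predictor.fit(X, y)
        return self
    def predict(self, X):
        for t in self.transforms:
            X = t.transform(X)
        return self.predictor.predict(X)

X = np.array([[0.0, 10.0], [1.0, 20.0], [10.0, 10.0], [11.0, 20.0]])
y = np.array([0, 0, 1, 1])
pipe = Pipeline([MinMaxScaler()], NearestMeanClassifier()).fit(X, y)
labels = pipe.predict(X)  # recovers the training labels on this toy data
```

Any estimator exposing the same three methods slots into such a chain, which is the sense in which algorithms from external packages can be integrated.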
+
+Healthcare Lifecycle For specific use cases, clearly a plethora of research exists in support of issuing diagnoses [22-24], prognostic modeling [17-21], treatment-effect estimation [26-31], and optimizing measurements [32-36], among much more. The key proposition that Clairvoyance advances is the underlying commonality across these seemingly disparate problems: It abstracts and integrates along the time-series inference workflow, across outpatient, general wards, and intensive-care environments, and—above all—amongst a patient's journey of interactions through the healthcare system that call for decision support in predictions, treatments, and monitoring (Figure 1). Now, it is also important to state what Clairvoyance is not: It is not an exhaustive list of algorithms; the pipeline includes a collection of popular components, and provides a standardized interface for extension. It is also not a solution to preference- or application-specific considerations: While issues such as data cleaning, algorithmic fairness, and privacy and heterogeneity are important, they are beyond the scope of our software.
+
+Table 1: Clairvoyance and Comparable Software.* Note that vernacularly, "pipelining" simply refers to the procedural workflow (i.e. from inputs to training, cross-validation, outputs, and evaluation); existing packages focus on implementing algorithms for prediction models alone, with minimal preprocessing. In contrast, Clairvoyance provides support along the data science pipeline, and across different healthcare pathways requiring decision support.
+
| | cesium [73] | tslearn [74] | seglearn [75] | tsfresh [76] | pysf [77] | sktime [78] | pyts [79] | Clairvoyance |
|---|---|---|---|---|---|---|---|---|
| Preprocessing | ✓ | ✓ | ✓ | ✓ | | ✓ | ✓ | ✓ |
| Temporal Imputation | ✓ | ✓ | ✓ | ✓ | ✓ | | ✓ | ✓ |
| Feature Selection | | | | ✓ | | | | ✓ |
| Predictions (Static Features) | ✓ | | ✓ | | | | | ✓ |
| Predictions (Online Targets) | | | ✓ | | ✓ | | ✓ | ✓ |
| Predictions | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Treatment Effects | | | | | | | | ✓ |
| Active Sensing | | | | | | | | ✓ |
| Interp. (Feat. Importance) | | | | | | | | ✓ |
| Interp. (Feat. Additivity) | | | | | | | | ✓ |
| Interp. (Instance-wise) | | | | | | | | ✓ |
| Uncertainty Calibration | | | | | | | | ✓ |
| End-to-End Pipelining* | | | | | | | | ✓ |
| Optimization Interface | | | | | | | | ✓ |
+
+Automated Learning Finally, tangentially related is the rich body of work on autoML for hyperparameter optimization [62], algorithm/pipeline configuration [63-65, 67], and stepwise selection [71], as well as specific work for healthcare data [50, 53, 66-68, 70, 71]. In complement to these threads of research, the Clairvoyance pipeline interface enables—if so desired—leveraging existing implementations, or validating novel ones, especially in efficiently accounting for the temporal dimension.
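The 'optimize' contract described earlier can be illustrated with a toy agent. The grid-search strategy and all names here are illustrative stand-ins for whatever optimization logic an agent actually implements:

```python
import itertools

class GridSearchAgent:
    """Minimal optimization agent: the only method it must expose is 'optimize'."""

    def optimize(self, score_fn, space):
        """Exhaustively score every configuration in `space` (dict of lists)
        and return the best configuration together with its score."""
        keys = list(space)
        best_cfg, best_score = None, float("-inf")
        for combo in itertools.product(*(space[k] for k in keys)):
            cfg = dict(zip(keys, combo))
            s = score_fn(cfg)
            if s > best_score:
                best_cfg, best_score = cfg, s
        return best_cfg, best_score

# Toy objective with its peak at depth=2, units=64:
score = lambda cfg: -((cfg["depth"] - 2) ** 2) - abs(cfg["units"] - 64) / 64
agent = GridSearchAgent()
best, val = agent.optimize(score, {"depth": [1, 2, 3], "units": [32, 64, 128]})
```

A Bayesian or stepwise agent would expose the identical method, which is what makes the interface pluggable.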
+
+# 4 ILLUSTRATIVE EXAMPLES
+
+Recall the patient's journey of interactions within the healthcare system (Figure 1). In this section, our goal is to illustrate key usage scenarios for Clairvoyance in this journey—for personalized (1) prediction, (2) treatment, and (3) monitoring—in outpatient, general wards, and intensive-care environments.
+
+Specifically, implicit in all examples is our proposition that: (i) as a software toolkit, constructing an end-to-end solution to each problem is easy, systematic, and self-documenting; (ii) as an empirical standard, evaluating collections of models by varying a single component ensures that comparisons are standardized, explicit, and reproducible; and (iii) as an optimization interface, the flexibility of selecting over the temporal dimension—in and of itself—abstracts out an interesting research avenue.
+
+Medical Environments Our choices of time-series environments are made to reflect the heterogeneity of realistic use cases envisioned for Clairvoyance. For the outpatient setting, we consider a cohort of patients enrolled in the UK Cystic Fibrosis Registry (UKCF) [80], which records longitudinal follow-up data for $\sim 5,800$ individuals with the disease. On the registry, individuals
+
| Medical Environment | Outpatient | General Wards | Intensive Care |
|---|---|---|---|
| Dataset | UKCF [80] | WARDs [81] | MIMIC [82] |
| Duration of Trajectories | Avg. ~5.3 years | Avg. ~9.1 days | Avg. ~85.4 hours |
| Variance (25%-50%-75%) | (4-6-7 years) | (6-9-15 days) | (27-47-91 hours) |
| Frequency of Measurements | Per 6 months | Per 4 hours | Per 1 hour |
| Types of Static and Temporal Features | Demo., Comorbidities, Infections, Treatments | Admiss. Stats, Vital Signs, Lab Tests | Demo., Vital Signs, Lab Tests, Medications |
| Dimensionality of Features | 11 static, 79 temporal | 8 static, 37 temporal | 11 static, 40 temporal |
| Number of Samples | ~5,800 patients | ~6,300 patients | ~23,100 patients |
| Endpoints (cf. Predictions) | FEV1 Result | Admission to ICU | Mechanical Ventilation |
| Class-label Imbalance | (Continuous-valued) | ~5.0%-to-95.0% | ~36.8%-to-63.2% |
+
+Table 2: Medical Environments. We consider the range of settings, incl. outpatient, general wards, and ICU data.
+
+are chronic patients monitored over infrequent visits, for whom long-term decline is generally expected. For the general wards setting, we consider a cohort of $\sim 6,300$ patients hospitalized on the general medicine floor of the Ronald Reagan Medical Center (WARDs) [81]. In contrast, here the population of patients presents with a wide variety of conditions and diagnoses (1,600+ ICD-9 codes), and patients are monitored more frequently. The data is highly non-stationary: on the hospital floor, deterioration is an unexpected event. For the intensive-care setting, we consider $\sim 23,100$ individuals from the Medical Information Mart for Intensive Care (MIMIC) [82]. Here, the setting is one in which more or less "anything can happen", and physiological data streams for each patient are recorded extremely frequently. Varying across the set of environments are such characteristics as the average durations of patient trajectories, the types of static and longitudinal features recorded, their frequencies of measurement, and their patterns and rates of missingness (Table 2 presents some brief statistics).
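For concreteness, per-patient trajectory statistics of the kind reported in Table 2 can be computed along the following lines. The record format and the nearest-rank percentile rule are illustrative assumptions:

```python
from collections import defaultdict

def trajectory_duration_quartiles(records):
    """records: iterable of (patient_id, timestamp) observations.
    Returns the (25th, 50th, 75th) percentiles of per-patient trajectory
    duration (last minus first observation), as summarized in Table 2."""
    times = defaultdict(list)
    for pid, t in records:
        times[pid].append(t)
    durations = sorted(max(ts) - min(ts) for ts in times.values())
    def pct(p):  # nearest-rank percentile, to sidestep interpolation details
        idx = max(0, min(len(durations) - 1, round(p / 100 * (len(durations) - 1))))
        return durations[idx]
    return pct(25), pct(50), pct(75)

# Four toy patients followed for 4, 6, 7, and 9 years respectively:
records = [("a", 0), ("a", 4), ("b", 1), ("b", 7),
           ("c", 2), ("c", 9), ("d", 0), ("d", 9)]
```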
+
+Example 1 (Lung Function in Cystic Fibrosis Patients) The most common genetic disease in Caucasian populations is cystic fibrosis [83], which entails various forms of dysfunction in respiratory and gastrointestinal systems, chiefly resulting in progressive lung damage and recurrent respiratory infections requiring antibiotics—and in severe cases may require hospitalization and even mechanical ventilation in an ICU (see Example 2) [84, 85]. While classical risk scores and survival models utilize only a fraction of up-to-date measurements, recent work has leveraged deep learning to incorporate greater extents of longitudinal biomarkers, comorbidities, and other risk factors [86]. An essential barometer for anticipating the occurrence of respiratory failures is the gauge of lung function by forced expiratory volume (FEV1): Accurate prediction yields an important tool for assessing severity of a patient's disease, describing its onset/progression, and as an input to treatment decisions [85, 87].
+
+This is an archetypal rolling-window time-series problem for Clairvoyance's predictions pathway. Consider the models in Table 3: (i) For a clinical professional, building the pipeline for each—or extending it with additional models through wrappers—has a low barrier to entry (see Figure 3/tutorials/documentation). (ii) For an ML researcher, such comparisons are expressly standardized: here, all results come explicitly from the same pipeline, using min-max normalized features, GAIN for static missing values, M-RNN for temporal imputation, no feature selection, and each model class shown. (iii) Lastly, to highlight the utility of the interface for selection over time, the final row presents results of approaching SASH using the example method of Figure 4(a), with the pipeline kept constant for fair comparison. This simple approach already yields some gains in performance, laying a precedent—and the pipeline infrastructure—for further research.
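Proposition (ii) above, varying a single component while holding the rest of the pipeline fixed, amounts to an experiment grid along the following lines. This is a hypothetical configuration sketch; the keys and string names are illustrative, not the literal Clairvoyance schema:

```python
# Fixed pipeline, as described for Table 3: min-max normalization, GAIN for
# static missing values, M-RNN for temporal imputation, no feature selection.
base_pipeline = {
    "normalization": "min-max",
    "static_imputation": "gain",
    "temporal_imputation": "mrnn",
    "feature_selection": None,
}

# Only the prediction model varies across rows of the comparison.
model_classes = ["attention", "rnn-gru", "rnn-lstm", "temporal-cnn",
                 "transformer", "vanilla-rnn"]

experiments = [dict(base_pipeline, model=m) for m in model_classes]
```

Because every experiment shares the same preprocessing and imputation settings, any performance difference is attributable to the model class alone.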
+
| Dataset (Label) | UKCF (FEV1 Result) | | WARDs (Admission to ICU) | | MIMIC (Mech. Ventilation) | |
|---|---|---|---|---|---|---|
| Evaluation | RMSE | MAE | AUC | APR | AUC | APR |
| Attention | (N/A) | (N/A) | 0.888 ± 0.016 | 0.551 ± 0.024 | (N/A) | (N/A) |
| RNN-GRU | 0.064 ± 0.001 | 0.035 ± 0.001 | 0.865 ± 0.010 | 0.487 ± 0.048 | 0.898 ± 0.001 | 0.774 ± 0.002 |
| RNN-LSTM | 0.062 ± 0.001 | 0.033 ± 0.001 | 0.841 ± 0.014 | 0.412 ± 0.032 | 0.901 ± 0.001 | 0.776 ± 0.002 |
| Temporal CNN | 0.120 ± 0.004 | 0.096 ± 0.003 | 0.826 ± 0.020 | 0.319 ± 0.048 | 0.884 ± 0.004 | 0.749 ± 0.007 |
| Transformer | 0.081 ± 0.002 | 0.050 ± 0.002 | 0.846 ± 0.006 | 0.472 ± 0.045 | 0.889 ± 0.002 | 0.761 ± 0.004 |
| Vanilla RNN | 0.070 ± 0.001 | 0.043 ± 0.001 | 0.794 ± 0.018 | 0.277 ± 0.063 | 0.898 ± 0.001 | 0.771 ± 0.002 |
| SASH | 0.059 ± 0.001 | 0.030 ± 0.001 | 0.891 ± 0.011 | 0.557 ± 0.031 | 0.917 ± 0.006 | 0.809 ± 0.013 |
+
+Table 3: Predictions Pathway Example. In addition to (online) 6-month-ahead predictions of FEV1 in UKCF, we also test (one-shot) predictions of admission to ICU after 48 hours on the floor in WARDs, and (online) 4-hour-ahead predictions of the need for mechanical ventilation in MIMIC (these are extended to treatment and sensing problems below). As the WARDs prediction is one-shot, the entry denoted 'SASH' there excludes the SMS ensembling step. Note that the canonical attention mechanism does not permit (variable-length) online predictions.
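The rolling-window construction behind such online predictions can be sketched as follows (an illustrative helper, not part of the toolkit):

```python
import numpy as np

def rolling_windows(series, history, horizon):
    """Build (input window, target) pairs for horizon-step-ahead online
    prediction, e.g. 6-month-ahead FEV1 with biannual visits (horizon=1)
    or 4-hour-ahead ventilation risk with hourly samples (horizon=4)."""
    X, y = [], []
    for t in range(history, len(series) - horizon + 1):
        X.append(series[t - history:t])    # the last `history` observations
        y.append(series[t + horizon - 1])  # the value `horizon` steps ahead
    return np.array(X), np.array(y)

series = np.arange(10.0)  # toy univariate trajectory
X, y = rolling_windows(series, history=3, horizon=2)
```

A one-shot problem, by contrast, keeps a single fixed cut-off (e.g. the first 48 hours on the floor) and a single target per patient.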
+
+Example 2 (Mechanical Ventilation in Intensive Care) Mechanical ventilation is an invasive, painful, and extremely unpleasant therapy that requires induction of artificial coma, and carries a high risk of mortality [88]. It is also expensive, with a typical ICU ventilator admission costing over \$30,000 [89]. To the patient, the need for mechanical ventilation—due to evidence of respiratory/ventilatory failure—is by itself an adverse outcome, and is unacceptable to some, even if it means they will not survive. It is possible that alternative strategies employed earlier may alleviate the need for ventilation, such as high-flow oxygen, non-invasive ventilation, or—in this example—appropriate use of antibiotics [88]. Little is known about the optimal timing of courses of antibiotics; in most cases a routine number of days is simply chosen, although blood is typically sterile after the first dose. On the one hand, there is a clear biologically plausible mechanism by which incompletely treated infection leads to longer periods of critical care, especially periods requiring ventilation. On the other hand, antibiotic stewardship is crucial: overuse of broad-spectrum antibiotics leads to resistance, and is by itself a global health emergency [90].
+
+This is an archetypal problem for the treatment effects pathway. Table 4 shows the performance of the two state-of-the-art models for estimating the effects of treatment decisions over time while adjusting for time-dependent confounding—that is, for the fact that actions taken in the data may depend on time-varying variables related to the outcome of interest [30, 31]. We refrain from belaboring points (i), (ii), and (iii) above, but their merits should be clear. From the patient's perspective, accurate estimation of the effect of treatment decisions on the risk of ventilation may assist them and their carers in achieving optimal shared decision-making about the care that they would like to receive. From the hospital's perspective, many ICUs around the world operate at $\sim 100\%$ bed occupancy, and delayed admission is typically an independent predictor of mortality [91-94]; accurate estimation of the need for escalation or continued ICU ventilation is therefore logistically important for resource planning and minimization of delays.
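The adjustment for time-dependent confounding that underlies models such as RMSN [30] can be illustrated with stabilized inverse-probability-of-treatment weights. The sketch below takes the propensities as given inputs rather than estimating them with networks, so it is a didactic simplification:

```python
import numpy as np

def stabilized_weights(treatments, p_marginal, p_conditional):
    """treatments: (n_patients, n_steps) binary array; p_marginal[t] is the
    marginal treatment probability at step t; p_conditional[i, t] is the
    history-conditional probability for patient i. Returns per-patient
    weights prod_t p(A_t) / p(A_t | history), which reweight the data so
    that time-varying confounders no longer predict treatment."""
    n, T = treatments.shape
    w = np.ones(n)
    for t in range(T):
        num = np.where(treatments[:, t] == 1, p_marginal[t], 1.0 - p_marginal[t])
        den = np.where(treatments[:, t] == 1,
                       p_conditional[:, t], 1.0 - p_conditional[:, t])
        w *= num / den
    return w

# When conditioning on history adds no information, every weight equals 1:
A = np.array([[1, 0], [0, 1]])
w = stabilized_weights(A, p_marginal=[0.5, 0.5],
                       p_conditional=np.full((2, 2), 0.5))
```

Patients whose observed treatments were unlikely given their histories receive larger weights, which is what removes the confounding bias in expectation.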
+
| Time Horizon | Estimating 1 Day Ahead | | Estimating 2 Days Ahead | | Estimating 3 Days Ahead | |
|---|---|---|---|---|---|---|
| Evaluation | AUC | APR | AUC | APR | AUC | APR |
| RMSN | 0.860 ± 0.005 | 0.889 ± 0.007 | 0.790 ± 0.004 | 0.883 ± 0.003 | 0.726 ± 0.015 | 0.852 ± 0.009 |
| CRN | 0.865 ± 0.003 | 0.892 ± 0.004 | 0.783 ± 0.009 | 0.872 ± 0.013 | 0.767 ± 0.010 | 0.869 ± 0.007 |
| SASH | 0.871 ± 0.007 | 0.902 ± 0.005 | 0.792 ± 0.003 | 0.885 ± 0.009 | 0.771 ± 0.005 | 0.873 ± 0.003 |
+
+Table 4: Treatment Effects Pathway Example. Results for estimation over different horizon lengths. Note that this uses a $\sim 6,000$-patient subset (from those in Table 2) who received antibiotics at any point, based on daily decisions on antibiotic treatment, over spans of up to 20 days, with labels distributed $58.9\%$-to-$41.1\%$ overall.
+
+Example 3 (Clinical Deterioration of Ward Patients) Given the delay-critical nature of ICU admission w.r.t. morbidity/mortality, what is often desired is an automated prognostic decision support system to monitor ward patients and raise (early) alarms for impending admission to ICU (as a result of clinical deterioration) [25, 94, 95]. However, observations are costly, and the question of what (and when) to measure is by itself an active choice under resource constraints [32-36]: For instance, there is less reason to measure a feature whose value can already be confidently estimated on the basis of known quantities, or if its value is not expected to contribute greatly to the prognostic task at hand.
+
+This is an archetypal problem for Clairvoyance's active sensing pathway. Table 5 indicates the performance of different models for balancing this trade-off between information gain and acquisition rate with respect to admissions to ICU of ward patients. At various budget constraints (i.e. amounts of measurements permitted), each active sensing model learns from the training data to identify the most informative features to measure at test-time, so as to maximize the performance of admission predictions. (To allow some measurements to be costlier than others, they can simply be up-weighted when computing the budget constraint). As before, our propositions (i), (ii), and (iii) are implicit here.
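The weighted budget constraint described above can be sketched as a simple greedy acquisition rule. This is illustrative only; the actual sensing models learn acquisition policies from data:

```python
import numpy as np

def greedy_acquisition(informativeness, costs, budget):
    """Select features by informativeness per unit cost until the weighted
    budget is exhausted; costlier measurements are up-weighted in the
    constraint simply by carrying larger entries in `costs`."""
    informativeness = np.asarray(informativeness, dtype=float)
    costs = np.asarray(costs, dtype=float)
    order = np.argsort(-informativeness / costs)  # best value-for-cost first
    mask, spent = np.zeros(len(costs), dtype=int), 0.0
    for j in order:
        if spent + costs[j] <= budget:
            mask[j] = 1
            spent += costs[j]
    return mask, spent

# Three candidate measurements; the third is twice as costly as the others.
mask, spent = greedy_acquisition([3.0, 1.0, 2.0], costs=[1.0, 1.0, 2.0],
                                 budget=2.0)
```

The learned sensing models in Table 5 make these choices adaptively per patient and per time step, rather than from fixed scores.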
+
| Measure Rate | With 50% Measurements | | With 70% Measurements | | With 90% Measurements | |
|---|---|---|---|---|---|---|
| Evaluation | AUC | APR | AUC | APR | AUC | APR |
| ASAC | 0.714 ± 0.018 | 0.235 ± 0.034 | 0.781 ± 0.015 | 0.262 ± 0.037 | 0.841 ± 0.016 | 0.414 ± 0.033 |
| DeepSensing | 0.707 ± 0.020 | 0.230 ± 0.036 | 0.772 ± 0.016 | 0.255 ± 0.033 | 0.829 ± 0.017 | 0.409 ± 0.038 |
| Randomize | 0.677 ± 0.021 | 0.217 ± 0.033 | 0.729 ± 0.019 | 0.249 ± 0.032 | 0.788 ± 0.017 | 0.269 ± 0.039 |
| SASH | 0.725 ± 0.015 | 0.248 ± 0.032 | 0.793 ± 0.013 | 0.278 ± 0.043 | 0.849 ± 0.014 | 0.420 ± 0.037 |
+
+Table 5: Active Sensing Pathway Example. Results at different acquisition rates (using GRUs as base predictors).
+
+# 5 CONCLUSION
+
+Machines will never replace a doctor's medical judgment, nor an ML researcher's technical innovation. But as a matter of data-driven clinical decision support, Clairvoyance enables rapid prototyping, benchmarking, and validation of complex time-series pipelines—so doctors can spend more time on the real scientific problems, and ML researchers can focus on the real technical questions. Moreover, collaborative research between medical practitioners and ML researchers is increasingly common [48]. To help grease the wheels, we developed and presented Clairvoyance, and illustrated its flexibility and capability in answering important and interesting medical questions in real-world environments.
+
+# ACKNOWLEDGMENTS
+
+We would like to thank the reviewers for their generous and invaluable comments and suggestions. This work was supported by Alzheimer's Research UK (ARUK), The Alan Turing Institute (ATI) under the EPSRC grant EP/N510129/1, The US Office of Naval Research (ONR), and the National Science Foundation (NSF) under grant numbers 1407712, 1462245, 1524417, 1533983, and 1722516.
+
+# REFERENCES
+
+[1] Romain Pirracchio. Mortality prediction in the ICU based on MIMIC-II results from the Super ICU Learner Algorithm (SICULA) project. Secondary Analysis of Electronic Health Records, Springer, 2016.
+[2] Alistair EW Johnson, Tom J Pollard, and Roger G Mark. Reproducibility in critical care: a mortality prediction case study. Machine Learning for Healthcare Conference (MLHC), 2017.
+[3] Sanjay Purushotham, Chuizheng Meng, Zhengping Che, and Yan Liu. Benchmarking deep learning models on large healthcare datasets. Journal of Biomedical Informatics, 2018.
+[4] Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M Dai, Nissan Hajaj, Michaela Hardt, Peter J Liu, Xiaobing Liu, Jake Marcus, Mimi Sun, et al. Scalable and accurate deep learning with electronic health records. Nature Digital Medicine, 2018.
+[5] Hrayr Harutyunyan, Hrant Khachatrian, David C Kale, Greg Ver Steeg, and Aram Galstyan. Multitask learning and benchmarking with clinical time series data. Nature Scientific Data, 2019.
+[6] Beau Norgeot, Benjamin S Glicksberg, Laura Trupin, Dmytro Lituiev, Milena Gianfrancesco, Boris Oskotsky, Gabriela Schmajuk, Jinoos Yazdany, and Atul J Butte. Assessment of a deep learning model based on electronic health record data to forecast clinical outcomes in patients with rheumatoid arthritis. Journal of the American Medical Association (JAMA), 2019.
+[7] Carl Waldmann, Neil Soni, and Andrew Rhodes. Critical Care: Oxford Desk Reference. Oxford University Press, 2008.
+[8] Eren Gultepe, Jeffrey P Green, Hien Nguyen, Jason Adams, Timothy Albertson, and Ilias Tagkopoulos. From vital signs to clinical outcomes for patients with sepsis: a machine learning basis for a clinical decision support system. Journal of the American Medical Informatics Association (AMIA), 2014.
+[9] Steven Horng, David A Sontag, Yoni Halpern, Yacine Jernite, Nathan I Shapiro, and Larry A Nathanson. Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning. *PloS one*, 2017.
+[10] Andreas Philipp Hassler, Ernestina Menasalvas, Francisco José García-García, Leocadio Rodríguez-Manas, and Andreas Holzinger. Importance of medical data preprocessing in predictive modeling and risk factor discovery for the frailty syndrome. BMC Medical Informatics and Decision Making, 2019.
+[11] Daniel Trujillo Viedma, Antonio Jesús Rivera Rivas, Francisco Charte Ojeda, and María José del Jesus Díaz. A first approximation to the effects of classical time series preprocessing methods on lstm accuracy. International Work-Conference on Artificial Neural Networks, 2019.
+[12] Dimitris Bertsimas, Agni Orfanoudaki, and Colin Pawlowski. Imputation of clinical covariates in time series. NeurIPS 2018 Workshop on Machine Learning for Health (ML4H), 2018.
+[13] Wei Cao, Dong Wang, Jian Li, Hao Zhou, Lei Li, and Yitan Li. BRITS: Bidirectional recurrent imputation for time series. Advances in Neural Information Processing Systems (NeurIPS), 2018.
+[14] Yonghong Luo, Xiangrui Cai, Ying Zhang, Jun Xu, et al. Multivariate time series imputation with generative adversarial networks. Advances in Neural Information Processing Systems (NeurIPS), pages 1596-1607, 2018.
+[15] Jinsung Yoon, William R Zame, and Mihaela van der Schaar. Estimating missing data in temporal data streams using multi-directional recurrent neural networks. IEEE Transactions on Biomedical Engineering (TBME), 2018.
+
+[16] Paul Nickerson, Raheleh Baharloo, Anis Davoudi, Azra Bihorac, and Parisa Rashidi. Comparison of gaussian processes methods to linear methods for imputation of sparse physiological time series. International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2018.
+[17] Edward Choi, Mohammad Taha Bahadori, Andy Schuetz, Walter F Stewart, and Jimeng Sun. Doctor AI: Predicting clinical events via recurrent neural networks. Machine Learning for Healthcare Conference (MLHC), 2016.
+[18] Joseph Futoma, Mark Sendak, Blake Cameron, and Katherine A Heller. Scalable joint modeling of longitudinal and point process data for disease trajectory prediction and improving management of chronic kidney disease. Conference on Uncertainty in Artificial Intelligence (UAI), 2016.
+[19] Bryan Lim and Mihaela van der Schaar. Disease-atlas: Navigating disease trajectories with deep learning. Machine Learning for Healthcare Conference (MLHC), 2018.
+[20] Ahmed M Alaa and Mihaela van der Schaar. Attentive state-space modeling of disease progression. Advances in Neural Information Processing Systems (NeurIPS), 2019.
+[21] Daniel Jarrett and Mihaela van der Schaar. Target-embedding autoencoders for supervised representation learning. International Conference on Learning Representations (ICLR), 2020.
+[22] Zachary C Lipton, David C Kale, Charles Elkan, and Randall Wetzel. Learning to diagnose with LSTM recurrent neural networks. International Conference on Learning Representations (ICLR), 2016.
+[23] Inci M Baytas, Cao Xiao, Xi Zhang, Fei Wang, Anil K Jain, and Jiayu Zhou. Patient subtyping via time-aware LSTM networks. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2017.
+[24] Huan Song, Deepta Rajan, Jayaraman J Thiagarajan, and Andreas Spanias. Attend and diagnose: Clinical time series analysis using attention models. AAAI Conference on Artificial Intelligence (AAAI), 2018.
+[25] Ahmed M Alaa and Mihaela Van Der Schaar. A hidden absorbing semi-markov model for informatively censored temporal data: Learning and inference. Journal of Machine Learning Research (JMLR), 2018.
+[26] Jason Roy, Kirsten J Lum, and Michael J Daniels. A Bayesian nonparametric approach to marginal structural models for point treatments and a continuous or survival outcome. *Biostatistics*, 2016.
+[27] Yanbo Xu, Yanxun Xu, and Suchi Saria. A Bayesian nonparametric approach for estimating individualized treatment-response curves. Machine Learning for Healthcare Conference (MLHC), 2016.
+[28] Peter Schulam and Suchi Saria. Reliable decision support using counterfactual models. Advances in Neural Information Processing Systems (NeurIPS), 2017.
+[29] Hossein Soleimani, Adarsh Subbaswamy, and Suchi Saria. Treatment-response models for counterfactual reasoning with continuous-time, continuous-valued interventions. arXiv preprint, 2017.
+[30] Bryan Lim. Forecasting treatment responses over time using recurrent marginal structural networks. Advances in Neural Information Processing Systems (NeurIPS), 2018.
+[31] Ioana Bica, Ahmed M Alaa, James Jordon, and Mihaela van der Schaar. Estimating counterfactual treatment outcomes over time through adversarially balanced representations. International Conference on Learning Representations (ICLR), 2020.
+[32] Shipeng Yu, Balaji Krishnapuram, Romer Rosales, and R. Bharat Rao. Active sensing. International Conference on Artificial Intelligence and Statistics (AISTATS), 2009.
+[33] Kartik Ahuja, William Zame, and Mihaela van der Schaar. DPSCREEN: Dynamic personalized screening. Advances in Neural Information Processing Systems (NeurIPS), 2017.
+[34] Jinsung Yoon, William R Zame, and Mihaela van der Schaar. Deep sensing: Active sensing using multi-directional recurrent neural networks. International Conference on Learning Representations (ICLR), 2018.
+
+[35] Jaromír Janisch, Tomáš Pevný, and Viliam Lisý. Classification with costly features using deep reinforcement learning. AAAI Conference on Artificial Intelligence (AAAI), 2019.
+[36] Jinsung Yoon, James Jordon, and Mihaela van der Schaar. ASAC: Active sensing using actor-critic models. Machine Learning for Healthcare Conference (MLHC), 2019.
+[37] Joseph Futoma, Sanjay Hariharan, and Katherine Heller. Learning to detect sepsis with a multitask Gaussian process RNN classifier. International Conference on Machine Learning (ICML), 2017.
+[38] Li-Fang Cheng, Gregory Darnell, Bianca Dumitrascu, Corey Chivers, Michael E Draugelis, Kai Li, and Barbara E Engelhardt. Sparse multi-output Gaussian processes for medical time series prediction. arXiv preprint, 2017.
+[39] Edmon Begoli, Tanmoy Bhattacharya, and Dimitri Kusnezov. The need for uncertainty quantification in machine-assisted medical decision making. Nature Machine Intelligence, 2019.
+[40] Marco Lorenzi, Maurizio Filippone, Giovanni B Frisoni, Daniel C Alexander, Sébastien Ourselin, Alzheimer's Disease Neuroimaging Initiative, et al. Probabilistic disease progression modeling to characterize diagnostic uncertainty: application to staging and prediction in Alzheimer's disease. NeuroImage, 2019.
+[41] Aven Samareh and Shuai Huang. UQ-CHI: An uncertainty quantification-based contemporaneous health index for degenerative disease monitoring. IEEE Journal of Biomedical and Health Informatics (JBHI), 2019.
+[42] Edward Choi, Mohammad Taha Bahadori, Jimeng Sun, Joshua Kulas, Andy Schuetz, and Walter Stewart. RETAIN: An interpretable predictive model for healthcare using reverse time attention mechanism. Advances in Neural Information Processing Systems (NeurIPS), 2016.
+[43] Tian Bai, Shanshan Zhang, Brian L Egleston, and Slobodan Vucetic. Interpretable representation learning for healthcare via capturing disease progression through time. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2018.
+[44] Jinsung Yoon, James Jordon, and Mihaela van der Schaar. INVASE: Instance-wise variable selection using neural networks. International Conference on Learning Representations (ICLR), 2018.
+[45] Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, and Phil Blunsom. Can I trust the explainer? Verifying post-hoc explanatory methods. NeurIPS 2019 Workshop on Safety and Robustness in Decision Making, 2019.
+[46] Sana Tonekaboni, Shalmali Joshi, David Duvenaud, and Anna Goldenberg. What went wrong and when? instance-wise feature importance for time-series models. arXiv preprint, 2020.
+[47] Beau Norgeot, Benjamin S Glicksberg, and Atul J Butte. A call for deep-learning healthcare. Nature Medicine, 2019.
+[48] Brett Beaulieu-Jones, Samuel G Finlayson, Corey Chivers, Irene Chen, Matthew McDermott, Jaz Kandola, Adrian V Dalca, Andrew Beam, Madalina Fiterau, and Tristan Naumann. Trends and focus of machine learning applications for health research. Journal of the American Medical Association (JAMA), 2019.
+[49] Raag Agrawal and Sudhakaran Prabakaran. Big data in digital healthcare: lessons learnt and recommendations for general practice. Nature Heredity, 2020.
+[50] Gang Luo, Bryan L Stone, Michael D Johnson, Peter Tarczy-Hornoch, Adam B Wilcox, Sean D Mooney, Xiaoming Sheng, Peter J Haug, and Flory L Nkoy. Automating construction of machine learning models with clinical big data: proposal rationale and methods. JMIR Research Protocols, 2017.
+[51] Duncan Shillan, Jonathan AC Sterne, Alan Champneys, and Ben Gibbison. Use of machine learning to analyse routinely collected intensive care unit data: a systematic review. Critical Care, 23(1):284, 2019.
+[52] D Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, and Michael Young. Hidden technical debt in machine learning systems. Advances in Neural Information Processing Systems (NeurIPS), 2014.
+[53] Jeeheh Oh, Jiaxuan Wang, Shengpu Tang, Michael Sjoding, and Jenna Wiens. Relaxed weight sharing: Effectively modeling time-varying relationships in clinical time-series. Machine Learning for Healthcare Conference (MLHC), 2019.
+[54] Fei Wang, Lawrence Peter Casalino, and Dhruv Khullar. Deep learning in medicine—promise, progress, and challenges. Journal of the American Medical Association (JAMA), 2019.
+[55] Maarten van Smeden, Ben Van Calster, and Rolf HH Groenwold. Machine learning compared with pathologist assessment. Journal of the American Medical Association (JAMA), 2018.
+[56] Nilay D Shah, Ewout W Steyerberg, and David M Kent. Big data and predictive analytics: recalibrating expectations. Journal of the American Medical Association (JAMA), 2018.
+[57] Eric J Topol. High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 2019.
+[58] Cao Xiao, Edward Choi, and Jimeng Sun. Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review. Journal of the American Medical Informatics Association (AMIA), 2018.
+[59] Ben Van Calster, Ewout W Steyerberg, and Gary S Collins. Artificial intelligence algorithms for medical prediction should be nonproprietary and readily available. Journal of the American Medical Association (JAMA), 2019.
+[60] Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. Recurrent neural networks for multivariate time series with missing values. Nature Scientific Reports, 2018.
+[61] Richard D Riley, Joie Ensor, Kym IE Snell, Thomas PA Debray, Doug G Altman, Karel GM Moons, and Gary S Collins. External validation of clinical prediction models using big datasets from e-health records or ipd meta-analysis: opportunities and challenges. The British Medical Journal (BMJ), 2016.
+[62] Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren. Hyperparameter optimization. Chapter 1 of Automated Machine Learning, 2019.
+[63] Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, and Frank Hutter. Efficient and robust automated machine learning. Advances in Neural Information Processing Systems (NeurIPS), 2015.
+[64] Lars Kotthoff, Chris Thornton, Holger H Hoos, Frank Hutter, and Kevin Leyton-Brown. Auto-WEKA 2.0: Automatic model selection and hyperparameter optimization in WEKA. Journal of Machine Learning Research (JMLR), 2017.
+[65] Randal S Olson and Jason H Moore. TPOT: A tree-based pipeline optimization tool for automating machine learning. ICML Workshop on Automatic Machine Learning, 2016.
+[66] Xueqiang Zeng and Gang Luo. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection. Health Information Science and Systems, 2017.
+[67] Ahmed M Alaa and Mihaela van der Schaar. AutoPrognosis: Automated clinical prognostic modeling via Bayesian optimization with structured kernel learning. International Conference on Machine Learning (ICML), 2018.
+[68] Yuyu Zhang, Mohammad Taha Bahadori, Hang Su, and Jimeng Sun. FLASH: Fast Bayesian optimization for data analytic pipelines. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2016.
+[69] Zi Wang, Chengtao Li, Stefanie Jegelka, and Pushmeet Kohli. Batched high-dimensional Bayesian optimization via structural kernel learning. International Conference on Machine Learning (ICML), 2017.
+[70] Jenna Wiens, John Guttag, and Eric Horvitz. Patient risk stratification with time-varying parameters: a multitask learning approach. Journal of Machine Learning Research (JMLR), 2016.
+[71] Yao Zhang, Daniel Jarrett, and Mihaela van der Schaar. Stepwise model selection for sequence prediction via deep kernel learning. International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
+[72] Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P Xing. Deep kernel learning. International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.
+[73] Brett Naul, Stefan van der Walt, Arien Crellin-Quick, Joshua S Bloom, and Fernando Pérez. Cesium: open-source platform for time-series inference. Annual Scientific Computing with Python Conference (SciPy), 2016.
+[74] Romain Tavenard, Johann Faouzi, Gilles Vandewiele, Felix Divo, Guillaume Androz, Chester Holtz, Marie Payne, Roman Yurchak, Marc Rußwurm, Kushal Kolar, and Eli Woods. tslearn: A machine learning toolkit dedicated to time-series data. GitHub, 2017.
+[75] David M Burns and Cari M Whyne. Seglearn: A python package for learning sequences and time series. Journal of Machine Learning Research (JMLR), 2018.
+[76] Maximilian Christ, Nils Braun, Julius Neuffer, and Andreas W Kempa-Liehr. tsfresh: time series feature extraction on basis of scalable hypothesis tests. Neurocomputing, 2018.
+[77] Ahmed Guecioueur. pysf: supervised forecasting of sequential data in python. GitHub, 2018.
+[78] Markus Löning, Anthony Bagnall, Sajaysurya Ganesh, Viktor Kazakov, Jason Lines, and Franz J Kiraly. sktime: A unified interface for machine learning with time series. NeurIPS 2019 Workshop on Systems for ML, 2019.
+[79] Johann Faouzi and Hicham Janati. pyts: A python package for time series classification. Journal of Machine Learning Research (JMLR), 2020.
+[80] David Taylor-Robinson, Olia Archangelidi, Siobhán B Carr, Rebecca Cosgriff, Elaine Gunn, Ruth H Keogh, Amy MacDougall, Simon Newsome, Daniela K Schluter, Sanja Stanojevic, et al. Data resource profile: the uk cystic fibrosis registry. International Journal of Epidemiology, 2018.
+[81] Ahmed M Alaa, Jinsung Yoon, Scott Hu, and Mihaela Van der Schaar. Personalized risk scoring for critical care prognosis using mixtures of gaussian processes. IEEE Transactions on Biomedical Engineering, 2017.
+[82] Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. Mimic-iii, a freely accessible critical care database. Nature Scientific Data, 2016.
+[83] Shawn D Aaron, Anne L Stephenson, Donald W Cameron, and George A Whitmore. A statistical model to predict one-year risk of death in patients with cystic fibrosis. Journal of Clinical Epidemiology, 2015.
+[84] Theodore G Liou, Frederick R Adler, and David Huang. Use of lung transplantation survival models to refine patient selection in cystic fibrosis. American Journal of Respiratory and Critical Care Medicine, 2005.
+[85] Lionelle Nkam, Jérôme Lambert, Aurélien Latouche, Gil Bellis, Pierre-Régis Burgel, and MN Hocine. A 3-year prognostic score for adults with cystic fibrosis. Journal of Cystic Fibrosis, 2017.
+[86] Changhee Lee, Jinsung Yoon, and Mihaela Van Der Schaar. Dynamic-deephit: A deep learning approach for dynamic survival analysis with competing risks based on longitudinal data. IEEE Transactions on Biomedical Engineering, 2019.
+[87] Dan Li, Ruth Keogh, John P Clancy, and Rhonda D Szczesniak. Flexible semiparametric joint modeling: an application to estimate individual lung function decline and risk of pulmonary exacerbations in cystic fibrosis. Emerging Themes in Epidemiology, 2017.
+[88] Joëlle Texereau, Dany Jamal, Gérald Choukroun, Pierre-Régis Burgel, Jean-Luc Diehl, Antoine Rabbat, Philippe Loirat, Antoine Parrot, Alexandre Duguet, Joel Coste, et al. Determinants of mortality for adults with cystic fibrosis admitted in intensive care unit: a multicenter study. Respiratory Research, 2006.
+[89] Joseph F Dasta, Trent P McLaughlin, Samir H Mody, and Catherine Tak Piech. Daily cost of an intensive care unit day: the contribution of mechanical ventilation. Critical Care Medicine, 2005.
+[90] World Health Organization. Antibiotic resistance. World Health Organization Newsroom: Antibiotic Resistance Fact Sheet, Geneva, 2020.
+[91] Jason Phua, Wang Jee Ngerng, and Tow Keang Lim. The impact of a delay in intensive care unit admission for community-acquired pneumonia. European Respiratory Journal, 2010.
+[92] Vincent Liu, Patricia Kipnis, Norman W Rizk, and Gabriel J Escobar. Adverse outcomes associated with delayed intensive care unit transfers in an integrated healthcare system. Journal of Hospital Medicine, 2012.
+
+[93] Michael P Young, Valerie J Gooder, Karen McBride, Brent James, and Elliott S Fisher. Inpatient transfers to the intensive care unit. Journal of General Internal Medicine, 2003.
+[94] Jinsung Yoon, Ahmed Alaa, Scott Hu, and Mihaela van der Schaar. Forecasticu: a prognostic decision support system for timely prediction of intensive care unit admission. International Conference on Machine Learning (ICML), 2016.
+[95] Ahmed M Alaa, Scott Hu, and Mihaela van der Schaar. Learning from clinical judgments: Semi-markov-modulated marked hawkes processes for risk prognosis. International Conference on Machine Learning (ICML), 2017.
+[96] Ioana Bica, Ahmed M Alaa, and Mihaela van der Schaar. Time series deconfounder: Estimating treatment effects over time in the presence of hidden confounders. International Conference on Machine Learning (ICML), 2020.
+[97] Jinsung Yoon, James Jordon, and Mihaela van der Schaar. Gain: Missing data imputation using generative adversarial nets. arXiv preprint arXiv:1806.02920, 2018.
+[98] Jinsung Yoon, James Jordon, and Mihaela van der Schaar. Ganite: Estimation of individualized treatment effects using generative adversarial nets. International Conference on Learning Representations (ICLR), 2018.
+
+# A NEED FOR EMPIRICAL STANDARDIZATION
+
+'All else' is seldom 'equal'. As an example, a review of recent state-of-the-art research on medical time-series imputation and prediction models demonstrates the following: while the benchmarking performed within each individual study strives to isolate sources of gain through "all-else-equal" experiments, pipeline settings overlap very little across studies. This lack of empirical standardization makes it difficult to assess true research progress:
+
+| Imputation Technique | Imputation Evaluation | Problem Type | Prediction Endpoint | Prediction Model(s) | Prediction Evaluation | Dataset(s) Used |
+| --- | --- | --- | --- | --- | --- | --- |
+| med. impute [12] | Imputation MSE | One-shot Classification | 10-year Risk of Stroke | LogR | Prediction ROC | Framingham Heart St. (FHS) |
+| BRITS [13] | Imputation MAE, MRE | One-shot Classification | In-Hospital Death | NN | Prediction ROC | PhysioNet ICU (MIMIC) |
+| GRUI-GAN [14] | None | One-shot Classification | In-Hospital Death | LogR, SVM, RF, RNN | Prediction ROC | PhysioNet ICU (MIMIC) |
+| GP-based [16] | Imputation MSE | None | N/A | N/A | N/A | UF Shands Hospital Data |
+| M-RNN [15] | Imputation MSE | Online Classification | Various Endpoints¹ | NN, RF, LogR, XGB | Prediction ROC | Various Medical Datasets² |
+
+Table 6: Research on Medical Time-series Data Imputation. Typically, proposed imputation methods rely on some downstream dummy prediction task for evaluating utility. However, the datasets, problem types, prediction endpoints, and time-series models themselves often do not coincide. This lack of standardized benchmarking hinders direct comparison across proposed techniques.
+
+| Prediction Technique | Prediction Endpoint | Prediction Evaluation | Imputation Used | Imputation Model(s) | Imputation Evaluation | Dataset(s) Used |
+| --- | --- | --- | --- | --- | --- | --- |
+| LSTM-DO-TR [22] | Multi-label Diagnosis | ROC, F1, precision at 10 | Yes | Forw-, Back-, Mean-fill | None | Ch. Hospital LA PICU |
+| T-LSTM [23] | Regression; Subtyping | MSC; tests for group effects | Data Pre-imputed | N/A | None | Parkinson's (PPMI) |
+| MGP-RNN [37] | Sepsis Onset Prediction | ROC, PRC, precision | Yes | Multitask GPs | None | Duke UHS Inpatient |
+| D-Atlas [19] | Survival; Forecasting | ROC, PRC, MSE | Yes | Median-, Mean-fill | None | Cystic Fibrosis (UKCF) |
+| SAND [24] | Regression; Classification | ROC, PRC, MSE, MAPE | Masking as Input | N/A | None | PhysioNet ICU (MIMIC) |
+
+Table 7: Research on Medical Time-series Prediction Models. Typically, proposals of prediction models pay little attention to the choices regarding upstream imputation of missing and/or irregularly sampled data. In comparative experiments, a single imputation method is usually fixed across all prediction models for evaluation. The datasets, imputation methods, and even prediction endpoints themselves have little overlap across studies.
+
+# B BACKGROUND REFERENCES FOR EXPERIMENTS
+
+| | Example 1 | Example 2 | Example 3 |
+| --- | --- | --- | --- |
+| Modeling question | Lung Function in Cystic Fibrosis Patients | Mechanical Ventilation in Intensive Care | Clinical Deterioration of Ward Patients |
+| Original description of dataset | Data Resource Profile: The UK Cystic Fibrosis Registry [80] | The Medical Information Mart for Intensive Care [82] | The Ronald Reagan Medical Center General Wards Dataset [81] |
+| Initial data cleaning/selection used | As specified in the original study in [86], specifically their Appendix A | As specified in the software accompanying the implementation of the study in [96] | As specified in the original study in [34], specifically their Section 5 |
+| Theoretical background on the model pathway involved | This is vanilla time-series prediction, so any technical study should provide adequate background, e.g. our references [17] through [25] in the introduction of this paper | Decision support with counterfactuals [28], marginal structural models [26] and their deep equivalents [30], and most recently counterfactual recurrent networks [31] | Active sensing as an original problem [32], models for personalized screening [33], and the most recent deep learning methods for the active sensing problem [34-36] |
+| Original works describing and justifying the modeling task | Background on cystic fibrosis [83, 84], studies of classical risk scores [85], studies of joint modeling approaches for survival analysis [87], as well as more recent deep learning approaches to survival analysis [86] | Mechanical ventilation as an adverse outcome of interest [88, 89], ventilation as it pertains to mortality [91-94], and the importance of antibiotic stewardship vs. reduction in the need for critical care [90] | Prior methods and the importance of forecasting deterioration of patients in general wards [25, 95], and specifically developing forecasting systems for clinical decision support [94] |
+| Data collection, data coverage, ethics, etc. | See Sections 1-3 in [80] | See Sections 2-3 in [82] | See Section 4 in [81] |
+
+Table 8: Background References. Additional table of background references for the experiments in Section 4.
+
+# C NOTE ON CHOICE OF BUILT-IN TECHNIQUES
+
+Given our "Pipeline First" focus (see "Key Design Principles" in Section 2), especially in the context of medical applications, rather than (re-)implementing every time-series model in existence, our primary contribution is in unifying all three key pathways in a patient's healthcare lifecycle (i.e. predictions, treatments, and monitoring tasks; see Section 2: "The Patient Journey") through a single end-to-end pipeline abstraction—for which Clairvoyance is the first (see Table 1: "Clairvoyance and Comparable Software"). For the predictions pathway, while there is a virtually infinite variety of time-series models in the wild, we choose to include standard and popular classes of deep learning models, given their ability to handle large amounts and dimensions of data, as well as the explosion of their usage in medical time-series studies (see e.g. virtually any of the paper references in Section 1). For both the treatment effects and active sensing pathways, there is much less existing work available; for these, we provide state-of-the-art models (e.g. CRN, R-MSN, ASAC, DeepSensing) implemented exactly as given in their original research papers. With that said, as noted throughout, recall that all component modules (including the various other pipeline components) are easily extensible: For instance, if more traditional time-series baselines from classical literature were desired for comparison purposes, existing algorithms from packages such as [73-79] can be integrated into Clairvoyance by using simple wrapper classes with little hassle (for an explicit demonstration of this, see Appendix G).
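To make the wrapper-class idea concrete, the sketch below (hypothetical names and interface; not Clairvoyance's actual wrapper API) shows how an external estimator exposing a scikit-learn-style fit/predict interface over 2-D arrays could be adapted to 3-D time-series tensors by flattening the temporal axis:

```python
import numpy as np

class ExternalModelWrapper:
    """Hypothetical wrapper adapting a third-party estimator (anything
    exposing fit/predict on 2-D arrays) to 3-D time-series input by
    flattening the temporal axis into the feature axis."""

    def __init__(self, base_model):
        self.base_model = base_model

    def fit(self, x, y):
        # x has shape (n_samples, seq_len, n_features).
        self.base_model.fit(x.reshape(x.shape[0], -1), y)
        return self

    def predict(self, x):
        return self.base_model.predict(x.reshape(x.shape[0], -1))

class MeanRegressor:
    """Stand-in for an external estimator with a fit/predict API."""
    def fit(self, x, y):
        self.mean_ = float(np.mean(y))
        return self
    def predict(self, x):
        return np.full(len(x), self.mean_)

wrapped = ExternalModelWrapper(MeanRegressor())
x = np.random.rand(4, 3, 2)          # 4 samples, 3 time steps, 2 features
y = np.array([1.0, 2.0, 3.0, 4.0])
print(wrapped.fit(x, y).predict(x))  # -> [2.5 2.5 2.5 2.5]
```

An actual integration would delegate to the real package's estimator (e.g. one of [73-79]) in place of `MeanRegressor`, while the wrapper handles the tensor reshaping.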
+
+Note on Time to Train: Our computations for the examples included in Section 4 were performed using a single NVIDIA GeForce GTX 1080 Ti GPU, and each experiment took approximately 24-72 hours. Of course, this duration may be shortened through the use of multiple GPUs in parallel.
+
+# D ADDITIONAL DETAIL ON EXPERIMENTS
+
+In our experiments for UKCF (used in Example 1), out of the total of 10,995 entries in the registry data, we focused on the 5,883 adult patients with follow-up data available from January 2009 through December 2015, which excludes pediatric patients and patients with no follow-up data from January 2009. This gives a total of 90 features (11 static covariates and 79 time-varying covariates), covering basic demographic features, genetic mutations, lung function scores, hospitalizations, bacterial lung infections, comorbidities, and therapeutic management. Of the 5,883 patients, 605 were followed until death (the most common causes of which were complications due to transplantation and CF-associated liver disease); the remaining 5,278 patients were right-censored.
+
+In our experiments for WARDS (used in Examples 1 and 3), the data comes from 6,321 patients who were hospitalized in the general medicine floor during the period March 2013 through February 2016, and excludes patients who were reverse transfers from the ICU (i.e. initially admitted to the ICU, and then returned to the ward subsequent to stabilization in condition). The heterogeneity in patient conditions mentioned in the main text includes such conditions as shortness of breath, hypertension, septicemia, sepsis, fever, pneumonia, and renal failure. Many patients had diagnoses of leukemia or lymphoma and had received chemotherapy, allogeneic or autologous stem cell transplantation, or other treatments that cause severe immunosuppression, placing them at risk of developing further complications that may require ICU admission. Here, the recorded features include 8 static variables (admission-time statistics) and 37 temporal physiological data streams (vital signs and laboratory tests); vital signs were taken approximately every 4 hours, and lab tests approximately every 24 hours.
+
+In our experiments for MIMIC (used in Example 1 for predictions, and Example 2 for estimating treatment effects), for the predictions example we focus on 22,803 patients who were admitted to the ICU after 2008, and consider 11 static variables (demographic information) and 40 physiological data streams in total, comprising the 20 vital signs that were most frequently measured and had the lowest missing rates (e.g. heart rate, respiratory rate), as well as 20 laboratory tests (e.g. creatinine, chloride); vital signs were taken approximately every 1 hour, and laboratory tests approximately every 24 hours. For the treatment effects pathway (used in Example 2), we focus on the 6,033 patients who had received antibiotics at any point in time, based on daily decisions on antibiotic treatment, with a maximum sequence length of 20 days. Note that the class-label imbalance between the pure prediction task (Example 1) and the treatment effects task (Example 2) differs slightly per the different populations included, so the numerical results should not be compared directly. The code for extracting this data is included under 'mimic_data_extraction' in the repository.
+
+In all experiments, the entire dataset is first randomly partitioned into training (64%), validation (16%), and testing (20%) sets. The training set is used for model training, the validation set for hyperparameter tuning, and the testing set for the final evaluation—which generates the performance metrics. This process is then repeated for a total of 10 random splits, with the means and spreads of each result used in generating Tables 3-5. As usual, the entire pipeline (with the exception of the pathway model corresponding to each row) is fixed across all rows, which in this case uses min-max normalized features, GAIN for static missing values, M-RNN for temporal imputation, and no prior feature selection; where hyperparameters for such pipeline components are involved (i.e. GAIN and M-RNN here), these are also—as they should be—held constant across all rows.
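The splitting protocol above can be sketched as follows (an illustrative reimplementation of the described procedure, not the toolkit's own code; function name and seed handling are assumptions):

```python
import numpy as np

def repeated_splits(n_samples, n_repeats=10, seed=0):
    """Randomly partition sample indices into 64% train / 16%
    validation / 20% test, repeated n_repeats times so that means
    and spreads of the resulting metrics can be reported."""
    rng = np.random.RandomState(seed)
    splits = []
    for _ in range(n_repeats):
        idx = rng.permutation(n_samples)
        n_train = int(0.64 * n_samples)
        n_val = int(0.16 * n_samples)
        splits.append((idx[:n_train],                    # training
                       idx[n_train:n_train + n_val],     # validation
                       idx[n_train + n_val:]))           # testing
    return splits
```

Each of the 10 tuples would then drive one full train/tune/evaluate cycle with the pipeline components held fixed.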
+
+In order to highlight our emphasis on the temporal dimension of autoML in Clairvoyance, the results for SASH isolate precisely this effect alone: Each result for SASH is generated using the simple approach of Figure 4(a)—that is, by 'naively' decomposing SASH into a collection of SMS problems (for each model class considered), subsequent to which the stepwise models for each class are further ensembled through stacking. Note that the point here is not to argue for this specific technique, but merely to show that even this (simplistic) approach already yields some gains, thereby illustrating the potential for further autoML research (which can be conveniently performed over Clairvoyance's pipeline abstraction) to investigate perhaps more efficient solutions with respect to this temporal dimension. Briefly, in DKL the validation performance for each time step is treated as a noisy version of a black box function, which leads to a multiple black-box function optimization problem (which DKL solves jointly and efficiently); we refer to [71] for their original exposition. In our experiments we complete 100 iterations of Bayesian optimization in DKL for each model class. For reproducibility, the code for our implementation of DKL used for experiments is included in the repository.
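The 'decompose-then-stack' idea can be illustrated with a minimal numerical sketch on synthetic arrays (illustrative only; the random predictions stand in for per-class SMS outputs, and this is unrelated to the actual DKL implementation): given stepwise validation predictions from each model class, stacking fits per-step least-squares weights across classes:

```python
import numpy as np

rng = np.random.RandomState(0)
n_val, n_steps, n_classes = 50, 5, 3

# Synthetic stand-ins: validation targets and per-class stepwise predictions.
y_val = rng.rand(n_val, n_steps)
preds = rng.rand(n_classes, n_val, n_steps)

# Stacking: per time step, fit least-squares weights across model classes.
weights = np.zeros((n_steps, n_classes))
for t in range(n_steps):
    a = preds[:, :, t].T                      # (n_val, n_classes)
    weights[t] = np.linalg.lstsq(a, y_val[:, t], rcond=None)[0]

# Ensembled stepwise predictions: weighted sum over model classes.
ensembled = np.einsum('cnt,tc->nt', preds, weights)

mse = lambda p: float(np.mean((p - y_val) ** 2))
# On the validation data, the stacked ensemble can do no worse than
# any single class, since each class is itself a feasible weighting.
print(mse(ensembled) <= min(mse(preds[c]) for c in range(n_classes)))  # -> True
```

In the paper's actual experiments the per-step models come from DKL-driven SMS rather than random draws, but the ensembling step has this same shape.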
+
+# E WORKED EXAMPLE: USING THE FULL PIPELINE
+
+This section gives a fully worked example of using the Clairvoyance pipeline (via the predictions pathway). To follow along, the user should have their own static and temporal datasets for training and testing, named as follows—where ‘data_name’ is replaced by the appropriate name of the dataset:
+
+- data_name_temporal_train_data_eav.csv.gz
+- data_name_static_train_data.csv.gz
+- data_name_temporal_test_data_eav.csv.gz
+- data_name_static_test_data.csv.gz
+
+and placed within the directory ‘../datasets/data/data_name/’. As described in Section 2 (“As a Software Toolkit”), the requirement is that the data conform to the standard EAV open schema for clinical records (i.e. patient key, timestamp, parameter, and value). See Figure 6 for a summary of the pipeline workflow that we shall be walking through and executing in the following subsections:
+
+1. Load Dataset: Extract csv files from the original raw datasets located in the data directory.
+2. Preprocess Dataset: Preprocess the raw data using various filters, such as replacing negative values with NaN, one-hot encoding certain features, and normalizing feature values.
+3. Define Problem: Set the prediction problem (one-shot or online), the label (the target of predictions), the maximum sequence length, and (optionally) the treatment features (not used here). Also define the metric for evaluation and the task itself (classification or regression).
+4. Impute Dataset: Impute missing values in the preprocessed static and temporal datasets—selecting an imputation method of choice for each—and return the completed datasets.
+5. Feature Selection: Select the static and temporal features relevant to the labels (e.g. recursive or greedy addition/deletion), or simply skip this step by setting the method to None.
+6. Model Training and Prediction: After finishing the data preparation steps, we define the model used for time-series prediction, and train the model using the training dataset. After training is finished, we use the trained model to predict the labels using the testing dataset.
+7. Estimate Uncertainty: Estimate uncertainty of the predictions made by the predictor model.
+8. Interpret Predictions: Compute the (instance-wise) feature and temporal importance weights.
+9. Visualize Results: Output predictions, performance metrics, uncertainties, and importances.
+
+
+Figure 6: Pipeline Workflow. Step-by-step schematic corresponding to the procedure in this worked example.
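For concreteness, a temporal EAV file such as 'data_name_temporal_train_data_eav.csv.gz' contains one row per (patient key, timestamp, parameter, value) tuple. The rows below are hypothetical (the column names and values are assumptions for illustration, not taken from the actual datasets):

```python
import csv, io

# Hypothetical rows in the EAV open schema: one record per
# (patient key, timestamp, parameter, value).
rows = [
    ('id', 'time', 'variable', 'value'),   # header row (names assumed)
    (101, 0, 'heart_rate', 82.0),
    (101, 4, 'heart_rate', 90.0),
    (101, 4, 'creatinine', 1.1),
    (102, 0, 'heart_rate', 75.0),
]

buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue().splitlines()[1])  # -> 101,0,heart_rate,82.0
```

The long (EAV) layout means irregularly sampled variables simply contribute more or fewer rows, rather than forcing a padded wide table.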
+
+Import necessary packages for this example:
+
+```python
+# Necessary packages
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import warnings; warnings.filterwarnings('ignore')
+import sys; sys.path.append('../')
+
+from utils import PipelineComposer
+```
+
+# E.1 LOAD DATASET
+
+Extract csv files from the original raw datasets located in the data directory. The CSVLoader is responsible for loading csv files from the '../datasets/data/data_name/' directory. In this example we use data from MIMIC, so here 'data_name' is 'mimic' throughout:
+
+Load Dataset
+```python
+from datasets import CSVLoader
+
+# Define data name
+data_name = 'mimic'
+# Define data directory
+data_directory = '../datasets/data/' + data_name + '/' + data_name + '_'
+
+# Load train and test datasets
+dataloader_training = \
+    CSVLoader(static_file=data_directory + 'static_train_data.csv.gz',
+              temporal_file=data_directory + 'temporal_train_data_eav.csv.gz')
+dataloader_testing = \
+    CSVLoader(static_file=data_directory + 'static_test_data.csv.gz',
+              temporal_file=data_directory + 'temporal_test_data_eav.csv.gz')
+
+dataset_training = dataloader_training.load()
+dataset_testing = dataloader_testing.load()
+print('Finish data loading.')
+```
+
+# E.2 PREPROCESS DATASET
+
+Preprocess the raw data using multiple filters. In this example, we replace all negative values with NaN (using FilterNegative), one-hot encode the 'admission_type' feature (using OneHotEncoder), and apply MinMax normalization (using Normalizer). Preprocessing is done for both training and testing datasets; note that—as should be the case—the 'fit_transform' method is called on the training dataset, while only the 'transform' method is executed on the testing dataset:
+
+Preprocess Dataset
+```python
+from preprocessing import FilterNegative, OneHotEncoder, Normalizer
+
+# (1) Filter out negative values
+negative_filter = FilterNegative()
+# (2) One-hot encode categorical features
+one_hot_encoding = 'admission_type'
+onehot_encoder = OneHotEncoder(one_hot_encoding_features=[one_hot_encoding])
+# (3) Normalize features: 3 options (minmax, standard, none)
+normalization = 'minmax'
+normalizer = Normalizer(normalization)
+
+# Data preprocessing
+filter_pipeline = PipelineComposer(negative_filter, onehot_encoder, normalizer)
+dataset_training = filter_pipeline.fit_transform(dataset_training)
+dataset_testing = filter_pipeline.transform(dataset_testing)
+print('Finish preprocessing.')
+```
+
+# E.3 DEFINE PROBLEM
+
+The prediction problem can be defined as 'one-shot' (a single prediction at the end of the sequence) or 'online' (rolling-window prediction). The 'max_seq_len' is the maximum length of the time-series sequences. The 'label_name' is the column name of the label(s) selected as the prediction target. The 'treatment' is the column name of the actions selected as treatments (not used in this example, which follows the predictions pathway). The 'window' specifies the prediction horizon (i.e. how many hours ahead to predict). The 'metric_name' specifies the performance metric of interest (e.g. 'auc', 'apr', 'mse', 'mae'), and the 'task' is either classification or regression. In this example, we are interested in issuing online predictions of whether the patient will require mechanical ventilation 4 hours ahead:
+
+Define Problem
+
+```python
+from preprocessing import ProblemMaker
+
+# Define parameters
+problem = 'online'
+max_seq_len = 24
+label_name = 'ventilator'
+treatment = None
+window = 4
+
+# Define problem
+problem_maker = \
+    ProblemMaker(question=problem, label=[label_name],
+                 max_seq_len=max_seq_len, treatment=treatment, window=window)
+dataset_training = problem_maker.fit_transform(dataset_training)
+dataset_testing = problem_maker.fit_transform(dataset_testing)
+
+# Set other parameters
+metric_name = 'auc'
+task = 'classification'
+metric_sets = [metric_name]
+metric_parameters = {'problem': problem, 'label_name': [label_name]}
+print('Finish defining problem.')
+```
+
+# E.4 IMPUTE DATASET
+
+For static imputation there are options such as mean, median, mice, missforest, knn, gain. For temporal imputation there are options such as mean, median, linear, quadratic, cubic, spline, mrnn, tgain. In this example we simply select median imputation for both static and temporal data:
+
+Impute Dataset
+
+```python
+from imputation import Imputation
+
+# Set imputation models
+static_imputation_model = 'median'
+temporal_imputation_model = 'median'
+
+# Impute the missing data
+static_imputation = Imputation(
+    imputation_model_name=static_imputation_model, data_type='static')
+temporal_imputation = Imputation(
+    imputation_model_name=temporal_imputation_model, data_type='temporal')
+
+imputation_pipeline = PipelineComposer(static_imputation, temporal_imputation)
+dataset_training = imputation_pipeline.fit_transform(dataset_training)
+dataset_testing = imputation_pipeline.transform(dataset_testing)
+print('Finish imputation.')
+```
+
+# E.5 FEATURE SELECTION
+
+In this step, we can select the static and temporal features most relevant to the labels. In the simplest case, we can skip the feature selection step entirely (as we do here). The user can select from among greedy-addition, greedy-deletion, recursive-addition, recursive-deletion, and None. The 'feature_number' parameter specifies the number of selected features:
+
+Feature Selection
+
+```python
+from feature_selection import FeatureSelection
+
+# Set feature selection parameters
+static_feature_selection_model = None
+temporal_feature_selection_model = None
+static_feature_selection_number = None
+temporal_feature_selection_number = None
+
+# Select relevant features
+static_feature_selection = \
+    FeatureSelection(feature_selection_model_name=static_feature_selection_model,
+                     feature_type='static',
+                     feature_number=static_feature_selection_number,
+                     task=task, metric_name=metric_name,
+                     metric_parameters=metric_parameters)
+temporal_feature_selection = \
+    FeatureSelection(feature_selection_model_name=temporal_feature_selection_model,
+                     feature_type='temporal',
+                     feature_number=temporal_feature_selection_number,
+                     task=task, metric_name=metric_name,
+                     metric_parameters=metric_parameters)
+
+feature_selection_pipeline = \
+    PipelineComposer(static_feature_selection, temporal_feature_selection)
+dataset_training = feature_selection_pipeline.fit_transform(dataset_training)
+dataset_testing = feature_selection_pipeline.transform(dataset_testing)
+print('Finish feature selection.')
+```
+
+# E.6 TRAINING AND PREDICTION
+
+After finishing the data preparation, we define the predictive model. Existing options include RNN, GRU, LSTM, Attention, Temporal CNN, and Transformer, and—as with the other pipeline modules, and as discussed in Section 2—this set is easily extensible through the standard fit-transform-predict paradigm. We then train the model on the training dataset, setting aside 20% of the training set for validation, used for early stopping and for saving the best model. After training, we use the trained model to predict the labels of the testing dataset. Here the parameters include model_name: rnn, gru, lstm, attention, tcn, transformer; and model_parameters: the network parameters, such as h_dim: hidden dimensions, n_layer: number of layers, n_head: number of heads (for the transformer model), batch_size: number of samples per mini-batch, epoch: number of epochs, learning_rate: learning rate, static_mode: method of incorporating static features (e.g. by concatenation), time_mode: method of incorporating temporal information (e.g. concatenation), etc.:
+
+Training and Prediction
+```python
+from prediction import prediction
+# Set predictive model
+model_name = 'gru'
+# Set model parameters
+model_parameters = {'h_dim': 100, 'n_layer': 2, 'n_head': 2,
+                    'batch_size': 128, 'epoch': 20,
+                    'model_type': model_name, 'learning_rate': 0.001,
+                    'static_mode': 'Concatenate', 'time_mode': 'Concatenate',
+                    'verbose': True}
+
+# Set up validation for early stopping and best model saving
+dataset_training.train_val_test_split(prob_val=0.2, prob_test=0.0)
+
+# Train the predictive model
+pred_class = prediction(model_name, model_parameters, task)
+pred_class.fit(dataset_training)
+
+# Return the predictions on the testing set
+test_y_hat = pred_class.predict(dataset_testing)
+print('Finish predictor model training and testing.')
+```
+
+# E.7 ESTIMATE UNCERTAINTY
+
+Estimate the uncertainty of the predictions (which we name 'test_ci_hat' below) made by the predictor model. In this example, we use ensembling to model the uncertainty of the prediction outputs:
+
+Estimate Uncertainty
+```python
+from uncertainty import uncertainty
+
+# Set uncertainty model
+uncertainty_model_name = 'ensemble'
+
+# Train uncertainty model
+uncertainty_model = uncertainty(uncertainty_model_name, model_parameters,
+                                pred_class, task)
+uncertainty_model.fit(dataset_training)
+
+# Return uncertainty of the trained predictive model
+test_ci_hat = uncertainty_model.predict(dataset_testing)
+print('Finish uncertainty estimation.')
+```
+
+# E.8 INTERPRET PREDICTIONS
+
+Compute feature importance weights (which we name 'test_s_hat' below). In this example, we use the method of (temporal) INVASE to model instance-wise feature/temporal importance weights:
+
+Interpret Predictions
+```python
+from interpretation import interpretation
+
+# Set interpretation model
+interpretation_model_name = 'tinvase'
+
+# Train interpretation model
+interpreter = interpretation(interpretation_model_name, model_parameters,
+                             pred_class, task)
+interpreter.fit(dataset_training)
+
+# Return instance-wise temporal and static feature importance
+test_s_hat = interpreter.predict(dataset_testing)
+print('Finish model interpretation.')
+```
+
+# E.9 VISUALIZE RESULTS
+
+Here we visualize the performance of the trained model (using the print_performance method):
+
+Visualize Performance
+```python
+from evaluation import Metrics
+from evaluation import print_performance
+
+# Evaluate predictor model
+result = Metrics(metric_sets, metric_parameters).evaluate(
+    dataset_testing.label, test_y_hat)
+print('Finish predictor model evaluation.')
+
+# Print overall performance
+print('Overall performance')
+print_performance(result, metric_sets, metric_parameters)
+```
+
+
+
+Similar methods can be used to visualize model predictions, uncertainties, and importances (by importing the print_prediction, print_uncertainty, and print_interpretation methods). See the Jupyter notebook tutorial for complete sample code, including inputs, outputs, and visualizations:
+
+Other Visualizations
+```python
+from evaluation import print_prediction, print Uncertainty, print_interpretation
+# Set the patient index for visualization
+index = [1]
+print('Each prediction')
+print_prediction(test_y_hat[index], metric_parameters)
+print('Uncertainty estimations')
+printuncertainty (test_y_hat[index], test_ci_hat[index], metric_parameters)
+print('Model interpretation')
+print_interpretation(test_s_hat[index], dataset_training.feature_name,
+                     metric_parameters, model_parameters)
+```
+
+# F WORKED EXAMPLE: USING THE AUTOML INTERFACE
+
+This section gives a fully worked example of using the Clairvoyance optimization interface (via the treatment effects pathway). Here the basic structure remains the same as in Section E, but in the model training step (here we use CRN for the treatment effects model) we show an example of performing stepwise model selection (SMS) as well. We assume the reader is familiar with the details as in Section E, and do not repeat similar descriptions. Instead, we organize the code as in a standard experiment—using a 'main' function wrapper with top-level arguments to enable ease of inspection.
+
+Import necessary packages, begin main function, and set basic parameters:
+
+```python
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+import argparse
+import numpy as np
+import warnings; warnings.filterwarnings('ignore')
+import sys; sys.path.append('/')
+from datasets import CSVLoader
+from preprocessing import FilterNegative, OneHotEncoder, Normalizer, ProblemMaker
+from imputation import Imputation
+from feature_selection import FeatureSelection
+from treatments.CRN.CRN_model import CRN_model
+from prediction import AutoEnsemble
+from automl.model import AutoTS
+from evaluation import Metrics, BOMetric
+from evaluation import print_performance, print_prediction
+from utils import PipelineComposer
+```
+
+Begin Main Function
+```python
+def main(args):
+    '''Args:
+    - data loading parameters:
+        - data_name: mimic, ward, cf, mimic_antibiotics
+    - preprocess parameters:
+        - normalization: minmax, standard, None
+        - one_hot_encoding: input features that need to be one-hot encoded
+        - problem: 'one-shot' or 'online'
+            - 'one-shot': one-time prediction at the end of the time-series
+            - 'online': prediction at every time stamp of the time-series
+        - max_seq_len: maximum sequence length after padding
+        - label_name: the column name for the label(s)
+        - treatment: the column name for treatments
+    - imputation parameters:
+        - static_imputation_model: mean, median, mice, missforest, knn, gain, etc.
+        - temporal_imputation_model: mean, median, linear, quadratic, cubic, spline, etc.
+    - feature selection parameters:
+        - feature_selection_model: greedy-addition, recursive-addition, etc.
+        - feature_number: selected feature number
+    - predictor parameters:
+        - epochs: number of epochs
+        - bo_itr: Bayesian optimization iterations
+        - static_mode: how to utilize static features (concatenate or None)
+        - time_mode: how to utilize time information (concatenate or None)
+        - task: classification or regression
+        - metric_name: auc, apr, mae, mse
+    '''
+    # Set basic parameters
+    metric_sets = [args.metric_name]
+    metric_parameters = {'problem': args.problem, 'label_name': [args.label_name]}
+```
+
+# F.1 LOAD DATASET
+
+Load Dataset
+```python
+# (continued within 'def main')
+# File names
+data_directory = '/datasets/data/' + args.data_name + '/'
+dataloader_training = CSVLoader(
+    static_file=data_directory + 'static_train_data.csv.gz',
+    temporal_file=data_directory + 'temporal_train_data_eav.csv.gz')
+dataloader_testing = CSVLoader(
+    static_file=data_directory + 'static_test_data.csv.gz',
+    temporal_file=data_directory + 'temporal_test_data_eav.csv.gz')
+dataset_training = dataloader_training.load()
+dataset_testing = dataloader_testing.load()
+print('Finish data loading.')
+```
+
+# F.2 PREPROCESS DATASET
+
+Preprocess Dataset
+```python
+# (continued within 'def main')
+# (0) filter out negative values (automatically)
+negative_filter = FilterNegative()
+# (1) one-hot encode categorical features
+onehot_encoder = OneHotEncoder(one_hot_encoding_features=[args.one_hot_encoding])
+# (2) normalize features: 3 options (minmax, standard, none)
+normalizer = Normalizer(args.normalization)
+filter_pipeline = PipelineComposer(negative_filter, onehot_encoder, normalizer)
+dataset_training = filter_pipeline.fit_transform(dataset_training)
+dataset_testing = filter_pipeline.transform(dataset_testing)
+print('Finish preprocessing.')
+```
+
+# F.3 DEFINE PROBLEM
+
+Define Problem
+```python
+# (continued within 'def main')
+problem_maker = ProblemMaker(
+    problem=args.problem, label=[args.label_name],
+    max_seq_len=args.max_seq_len, treatment=[args.treatment])
+dataset_training = problem_maker.fit_transform(dataset_training)
+dataset_testing = problem_maker.fit_transform(dataset_testing)
+print('Finish defining problem.')
+```
+
+# F.4 IMPUTE DATASET
+
+Impute Dataset
+```python
+# (continued within 'def main')
+static_imputation = Imputation(
+    imputation_model_name=args.static_imputation_model, data_type='static')
+temporal_imputation = Imputation(
+    imputation_model_name=args.temporal_imputation_model, data_type='temporal')
+imputation_pipeline = PipelineComposer(static_imputation, temporal_imputation)
+dataset_training = imputation_pipeline.fit_transform(dataset_training)
+dataset_testing = imputation_pipeline.transform(dataset_testing)
+print('Finish imputation.')
+```
+
+# F.5 FEATURE SELECTION
+
+Feature Selection
+```python
+# (continued within 'def main')
+static_feature_selection = FeatureSelection(
+    feature_selection_model_name=args.static_feature_selection_model,
+    feature_type='static',
+    feature_number=args.static_feature_selection_number,
+    task=args.task,
+    metric_name=args.metric_name,
+    metric_parameters=metric_parameters)
+temporal_feature_selection = FeatureSelection(
+    feature_selection_model_name=args.temporal_feature_selection_model,
+    feature_type='temporal',
+    feature_number=args.temporal_feature_selection_number,
+    task=args.task,
+    metric_name=args.metric_name,
+    metric_parameters=metric_parameters)
+feature_selection_pipeline = PipelineComposer(static_feature_selection,
+                                              temporal_feature_selection)
+dataset_training = feature_selection_pipeline.fit_transform(dataset_training)
+dataset_testing = feature_selection_pipeline.transform(dataset_testing)
+print('Finish feature selection.')
+```
+
+# F.6 OPTIMIZATION AND PREDICTION
+
+Since we want to do stepwise model selection, this step differs from that in Section E. In particular, here we are not just relying on a single pathway model (CRN); we are calling the 'AutoTS' module to perform Bayesian optimization—which implements SMS by DKL, exactly as described in [71]:
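Conceptually, such an optimization loop proposes a configuration, trains and scores a model, and keeps the best. As a rough sketch only, the snippet below uses a deterministic grid search as a stand-in for the DKL-based Bayesian optimization of [71]; all names and values are illustrative, not Clairvoyance APIs:

```python
from itertools import product

def grid_model_selection(train_and_score, space):
    # Exhaustive stand-in for the BO loop: score every configuration, keep the best.
    best_params, best_score = None, float('-inf')
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy "model": pretend the validation score grows with hidden size
space = {'hidden_dim': [32, 64, 128], 'lr': [1e-3, 1e-2]}
best, score = grid_model_selection(lambda p: p['hidden_dim'] / 128, space)
print(best['hidden_dim'])  # 128
```

Bayesian optimization replaces the exhaustive sweep with a surrogate model that proposes promising configurations, which matters once each evaluation requires training a deep network.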
+
+Optimization and Prediction
+```python
+# (continued within 'def main')
+# CRN model
+model_parameters = {'projection_horizon': 5,
+                    'encoder_max_alpha': 1,
+                    'decoder_max_alpha': 1,
+                    'static_mode': 'concatenate',
+                    'time_mode': 'concatenate'}
+crn_model = CRN_model(task='classification')
+crn_model.set_parameters(**model_parameters)
+model_class = crn_model
+# Train/validation split
+dataset_training.train_val_test_split(prob_val=0.2, prob_test=0.2)
+```
+
+Bayesian Optimization Start
+```python
+metric = BOMetric(metric='auc', fold=0, split='test')
+# Run BO for the selected model class
+BO_model = AutoTS(dataset_training, model_class, metric)
+models, bo_score = BO_model.training_loop(num_iter=20)
+auto_ens_model = AutoEnsemble(models, bo_score)
+# Prediction of treatment effects
+test_y_hat = auto_ens_model.predict(dataset_testing, test_split='test')
+print('Finish AutoML model training and testing.')
+```
+
+# F.7 MODEL EVALUATION
+
+Output performance evaluation for the final trained model, using metric parameters as defined above:
+
+Model Evaluation
+```python
+# (continued within 'def main')
+result = Metrics(metric_sets, metric_parameters).evaluate(dataset_testing.label, test_y_hat)
+print('Finish ITE model evaluation.')
+print('Overall performance')
+print_performance(result, metric_sets, metric_parameters)
+```
+
+# F.8 TOP-LEVEL PARSER
+
+Define and Parse Arguments
+```python
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument(
+ "--data_name",
+ choices=['mimic', 'ward', 'cf', 'mimic_antibiotics'],
+ default='mimic_antibiotics',
+ type=str)
+ parser.add_argument(
+ "--normalization",
+ choices=['minmax', 'standard', None],
+ default='minmax',
+ type=str)
+ parser.add_argument(
+ "--one_hot_encoding",
+ default='admission_type',
+ type=str)
+ parser.add_argument(
+ "--problem",
+ choices=['online', 'one-shot'],
+ default='online',
+ type=str)
+ parser.add_argument(
+ "--max_seq_len",
+ help='maximum sequence length',
+ default=20,
+ type=int)
+ parser.add_argument(
+ "--label_name",
+ default='ventilator',
+ type=str)
+ parser.add_argument(
+ "--treatment",
+ default='antibiotics',
+ type=str)
+```
+
+```python
+parser.add_argument(
+'--static_imputation_model',
+choices=['mean', 'median', 'mice', 'missforest', 'knn', 'gain'],
+default='median',
+type=str)
+parser.add_argument(
+'--temporal_imputation_model',
+choices=['mean', 'median', 'linear', 'quadratic', 'cubic', 'spline',
+'mrnn', 'tgain'],
+default='median',
+type=str)
+parser.add_argument(
+'--static_feature_selection_model',
+choices=['greedy-addition', 'greedy-deletion', 'recursive-addition',
+'recursive-deletion', None],
+default=None,
+type=str)
+parser.add_argument(
+'--static_feature_selection_number',
+default=10,
+type=int)
+parser.add_argument(
+'--temporal_feature_selection_model',
+choices=['greedy-addition', 'greedy-deletion', 'recursive-addition',
+'recursive-deletion', None],
+default=None,
+type=str)
+parser.add_argument(
+'--temporal_feature_selection_number',
+default=10,
+type=int)
+parser.add_argument(
+'--epochs',
+default=20,
+type=int)
+parser.add_argument(
+'--bo_itr',
+default=20,
+type=int)
+parser.add_argument(
+'--static_mode',
+choices=['concatenate', None],
+default='concatenate',
+type=str)
+parser.add_argument(
+'--time_mode',
+choices=['concatenate', None],
+default='concatenate',
+type=str)
+parser.add_argument(
+'--task',
+choices=['classification', 'regression'],
+default='classification',
+type=str)
+parser.add_argument(
+'--metric_name',
+choices=['auc', 'apr', 'mse', 'mae'],
+default='auc',
+type=str)
+# Call main function
+args = parser.parse_args()
+main(args)
+```
+
+# G EXTENSIBILITY: EXAMPLE WRAPPER CLASS
+
+Since novel methods are proposed in the ML community every day, the pipeline components should be easily extensible to incorporate new algorithms. To integrate a new component method (e.g. from another researcher's code, or from an external package) into the framework, all that is required is a simple wrapper class that implements the 'fit', 'predict', and 'get_hyperparameter_space' methods. Here we show an example of how a classical time-series prediction model (ARIMA) can be integrated.
+
+ARIMA Wrapper Class
+```python
+# Necessary packages
+import os
+import pmdarima as pm
+from datetime import datetime
+from base import BaseEstimator, PredictorMixin
+import numpy as np
+```
+
+```python
+class ARIMA(BaseEstimator, PredictorMixin):
+    '''Attributes:
+    - task: classification or regression
+    - p: AR order
+    - d: differencing degree
+    - q: MA order
+    - time_mode: 'concatenate' or None
+    - model_id: the name of the model
+    - model_path: model path for saving
+    - verbose: print intermediate process
+    '''
+```
+
+```python
+def __init__(self, task=None, p=None, d=None, q=None, time_mode=None,
+             model_id='auto_arima', model_path='tmp', verbose=False):
+    super().__init__(task)
+    self.task = task
+    self.p = p
+    self.d = d
+    self.q = q
+    self.time_mode = time_mode
+    self.model_path = model_path
+    self.model_id = model_id
+    self.verbose = verbose
+    # Predictor model & optimizer define
+    self.predictor_model = None
+    if self.task == 'classification':
+        raise ValueError('ARIMA model cannot be used for classification')
+    # Set path for model saving
+    if not os.path.exists(model_path):
+        os.makedirs(model_path)
+    self.save_file_name = '{}/{}'.format(model_path, model_id) + \
+        datetime.now().strftime('%H%M%S') + '.hdf5'
+
+def new(self, model_id):
+    '''Create a new model with the same parameters as the existing one.
+
+    Args:
+        - model_id: a unique identifier for the new model
+```
+
+```python
+    Returns:
+        - a new ARIMA model
+    '''
+    return ARIMA(self.task, self.p, self.d, self.q, self.time_mode,
+                 model_id, self.model_path, self.verbose)
+```
+
+# fit
+
+```python
+def fit(self, dataset, fold=0, train_split='train', valid_split='val'):
+    '''ARIMA model fitting does not require an independent training set.'''
+    pass
+```
+
+# predict
+
+```python
+def predict(self, dataset, fold=0, test_split='test'):
+    '''Return the predictions based on the trained model.
+
+    Args:
+        - dataset: temporal, static, label, time, treatment information
+        - fold: cross-validation fold
+        - test_split: testing set splitting parameter
+
+    Returns:
+        - test_y_hat: predictions on testing set
+    '''
+    test_x, test_y = self._data_preprocess(dataset, fold, test_split)
+    # y: N_sample, max_seq_len, dim
+    assert test_y.shape[2] == 1
+    fitted_list = []
+    for i in range(test_y.shape[0]):
+        y0 = test_y[i, :, 0]
+        model = pm.arima.ARIMA(order=(self.p, self.d, self.q),
+                               suppress_warnings=True)
+        try:
+            model.fit(y0)
+            y_hat = model.predict_in_sample(dynamic=True)
+        except Exception:
+            y_hat = np.zeros_like(y0)
+        fitted_list.append(y_hat)
+    y_hat = np.stack(fitted_list, axis=0)[:, :, None]
+    return y_hat
+```
+
+# get_hyperparameter_space
+
+```python
+@staticmethod
+def get_hyperparameter_space():
+    hyp_ = [{'name': 'p', 'type': 'discrete', 'domain': list(range(1, 6)), 'dimensionality': 1},
+            {'name': 'd', 'type': 'discrete', 'domain': list(range(1, 6)), 'dimensionality': 1},
+            {'name': 'q', 'type': 'discrete', 'domain': list(range(1, 6)), 'dimensionality': 1}]
+    return hyp_
+```
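The same three-method contract is all a new component needs. As a minimal, self-contained illustration (a hypothetical `LastValuePredictor`, not part of the toolkit), any estimator exposing `fit`, `predict`, and `get_hyperparameter_space` in this shape can be wrapped the same way:

```python
class LastValuePredictor:
    """Toy forecaster: predicts that every future value equals the last observed one."""

    def __init__(self, horizon=1):
        self.horizon = horizon
        self.last_value = None

    def fit(self, series):
        # Remember the final observation of the training series.
        self.last_value = series[-1]
        return self

    def predict(self, n_steps):
        # Naive forecast: repeat the last observed value.
        return [self.last_value] * n_steps

    @staticmethod
    def get_hyperparameter_space():
        # Same schema as the ARIMA wrapper above: a list of search-space dicts.
        return [{'name': 'horizon', 'type': 'discrete',
                 'domain': list(range(1, 6)), 'dimensionality': 1}]

model = LastValuePredictor().fit([1.0, 2.0, 3.0])
print(model.predict(2))  # [3.0, 3.0]
```

Because the hyperparameter space uses the same dict schema, an autoML search can treat such a wrapped component interchangeably with the built-in ones.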
+
+# H SOME FREQUENTLY ASKED QUESTIONS
+
+Q1. Does Clairvoyance include every time-series model under the sun?
+
+A1. That is not our purpose in providing the pipeline abstraction (see Section 2: "As a Software Toolkit"), and it would in any case be impossible. We do include standard classes of models (e.g. popular deep learning models for prediction), and an important contribution is in unifying all three key tasks involved in a patient's healthcare lifecycle under a single roof, including the treatment effects pathway and active sensing pathway (for both of which we provide state-of-the-art time-series models) in addition to the predictions pathway (see Section 2: "The Patient Journey", Figures 1-2, as well as Table 1). Moreover, as noted throughout, modules are easily extensible: for instance, if more traditional time-series baselines from the classical literature are desired for comparison purposes, existing algorithms from [73-79] can be integrated into Clairvoyance via wrappers with little hassle.
+
+Q2. Isn't preprocessing, imputation, selection, etc. already always performed?
+
+A2. Yes, and we are not claiming that there is anything wrong with individual studies per se. However (per Section 2: "As an Empirical Standard", and Appendix A: Tables 6-7), while current research practices typically seek to isolate individual gains, the degree of clarity and/or overlap in pipeline configurations across studies is lacking. This dearth of empirical standardization may not optimally promote practical assessment/reproducibility, and may obscure/entangle true progress. By providing a software toolkit and empirical standard, constructing an end-to-end solution to each problem is easy, systematic, and self-documenting (see Figures 2-3), and evaluating collections of models by varying a single component ensures that comparisons are standardized, explicit, and reproducible.
+
+Q3. How about other issues like regulations, privacy, and federated learning?
+
+A3. Per the discussion in Section 3, Clairvoyance is not a solution for preference-/application-specific considerations such as cohort construction, data cleaning and heterogeneity, patient privacy, algorithmic fairness, federated learning, or compliance with government regulations. While such issues are real/important concerns (with plenty of research), they are firmly beyond the scope of our software; it is designed to operate in service to clinical decision support—not at all to replace humans in the loop.
+
+Q4. What are these interdependencies among components and time steps?
+
+A4. Componentwise interdependencies occur for any number of reasons. We have discussed several examples (see Section 2: "As an Empirical Standard"), but it is not our mission to convince the reader from scratch: For that, there exists a plethora of existing autoML/medical literature (see e.g. Section 3). However, the pipeline abstraction serves as a succinct and standardized interface to anyone's favorite autoML algorithm (see Section 2: "As an Optimization Interface"). Moreover, here we do specifically highlight the temporal dimension of model selection opened up by the time-series nature of the pipeline (see Figure 5). In particular, each example in Section 4 specifically illustrates the gains in performance that already occur—ceteris paribus—using a simple approach to SASH as in Figure 4(a).
+
+Q5. Where is all the background and theory on each module?
+
+A5. The scope of the software toolkit is purposefully broad, but it is not our intention to provide a technical introduction to each of the topics involved (which would—in any case—be impossible in the scope of a paper). While Clairvoyance lowers the barrier to entry in terms of engineering/evaluation, it is not intended to be used as a black-box solution. For instance, we expect a user desiring to conduct treatment effects estimation using the CRN component to be familiar with its basic theory and limitations. That said, in addition to the various references provided throughout the description of each aspect of Clairvoyance, the following may serve as more concise background information on original problem formulations and solutions: For treatment effects estimation over time we refer to [31]; for active sensing we refer to [34]; for time-series data imputation we refer to [15]; for interpretation by individualized variable selection we refer to [44]; for autoML in general we refer to [63]; for the pipeline selection and configuration (PSC) problem we refer to Section 3.1 in [67]; and for the stepwise model selection (SMS) problem we refer to Sections 2–3 in [71]; moreover, Figure 5 shows how new problems (e.g. SASH) directly result from combining their optimization domains.
+
+Q6. How do you know what clinicians want?
+
+A6. With clinicians as developers/authors, it is our central goal to understand realistic usage scenarios.
+
+# I GLOSSARY OF ACRONYMS
+
+ASAC: Active sensing by actor critic, first defined in [36].
+
+APR: Area under the precision-recall curve.
+
+AUC: Area under the receiver-operating characteristic curve.
+
+CASH: Combined algorithm selection and hyperparameter optimization, first defined in [63].
+
+CRN: Counterfactual recurrent network, first defined in [31].
+
+DKL: Deep kernel learning, first defined in [72].
+
+FLASH: Fast linear search, first defined in [68].
+
+GAIN: Generative adversarial imputation network, first defined in [97].
+
+GANITE: Generative adversarial network for individualized treatment effects, first defined in [98].
+
+GRU: Gated Recurrent Units, a type of recurrent neural network.
+
+INVASE: Instance-wise variable selection, first defined in [44].
+
+LSTM: Long-short term memory, a type of recurrent neural network.
+
+MAE: Mean absolute error.
+
+MICE: Multiple imputation by chained equations.
+
+MissForest: Missing value imputation using random forest.
+
+MRNN: Multi-directional recurrent neural networks, first defined in [15].
+
+PSC: Pipeline selection and configuration, first defined in [67].
+
+RMSE: Root mean squared error.
+
+RMSN: Recurrent marginal structural network, first defined in [30].
+
+RPS: Relaxed parameter sharing, first defined in [53].
+
+SASH: Stepwise algorithm selection and hyperparameter optimization, first defined in Section 2.
+
+SKL: Structured kernel learning, first defined in [69].
+
+SMS: Stepwise model selection, first defined in [71].
+
+TCN: Temporal convolutional network.
+
+Note: A prefix of "T" to certain techniques simply indicates its temporal counterpart (e.g. "T-GAIN" refers to the method of GAIN using a recurrent neural network for handling the temporal dimension).
\ No newline at end of file
diff --git a/clairvoyanceapipelinetoolkitformedicaltimeseries/images.zip b/clairvoyanceapipelinetoolkitformedicaltimeseries/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..14bdada46173b809621eb2deb2263dbe84405e51
--- /dev/null
+++ b/clairvoyanceapipelinetoolkitformedicaltimeseries/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ba4378f09cdd9c92312f995ea33158faeee4a7ff1345ed9c3002e59c6cccb51
+size 840894
diff --git a/clairvoyanceapipelinetoolkitformedicaltimeseries/layout.json b/clairvoyanceapipelinetoolkitformedicaltimeseries/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..df0199d98d60ca0365561e6da565b2a91aa9f4f8
--- /dev/null
+++ b/clairvoyanceapipelinetoolkitformedicaltimeseries/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd46c77271069325e510af722dab782b511e79ad18bbb7a1d644ba66e65ae3ea
+size 831044
diff --git a/classnormalizationforcontinualgeneralizedzeroshotlearning/7671a75f-9b93-4dca-9979-6caf5cd09ab9_content_list.json b/classnormalizationforcontinualgeneralizedzeroshotlearning/7671a75f-9b93-4dca-9979-6caf5cd09ab9_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b3b5c068f675f326a8c6effbfaa55b5892a6957d
--- /dev/null
+++ b/classnormalizationforcontinualgeneralizedzeroshotlearning/7671a75f-9b93-4dca-9979-6caf5cd09ab9_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b48ed94492210ee15e5f750b55e2d1172bf07cf1e94298c189612ac95e9b9657
+size 208259
diff --git a/classnormalizationforcontinualgeneralizedzeroshotlearning/7671a75f-9b93-4dca-9979-6caf5cd09ab9_model.json b/classnormalizationforcontinualgeneralizedzeroshotlearning/7671a75f-9b93-4dca-9979-6caf5cd09ab9_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..466b10d6b3eec0349ca6c2340531ac25c20228f0
--- /dev/null
+++ b/classnormalizationforcontinualgeneralizedzeroshotlearning/7671a75f-9b93-4dca-9979-6caf5cd09ab9_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aeb32b0a8ef4efcbd33e436d6703058c044dc60a13dce0b2ae88510663f2c591
+size 253870
diff --git a/classnormalizationforcontinualgeneralizedzeroshotlearning/7671a75f-9b93-4dca-9979-6caf5cd09ab9_origin.pdf b/classnormalizationforcontinualgeneralizedzeroshotlearning/7671a75f-9b93-4dca-9979-6caf5cd09ab9_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..bdde47d3277bdd22006e9db8f3215f6a39734f38
--- /dev/null
+++ b/classnormalizationforcontinualgeneralizedzeroshotlearning/7671a75f-9b93-4dca-9979-6caf5cd09ab9_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:675bcd3172f6bc5046a1520af74661bf5bb5c7ddf696a1fd0baccb67dc348b34
+size 2441945
diff --git a/classnormalizationforcontinualgeneralizedzeroshotlearning/full.md b/classnormalizationforcontinualgeneralizedzeroshotlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..76489d420dea3d6a41746b71e035a2eba89221b0
--- /dev/null
+++ b/classnormalizationforcontinualgeneralizedzeroshotlearning/full.md
@@ -0,0 +1,932 @@
+# CLASS NORMALIZATION FOR (CONTINUAL)? GENERALIZED ZERO-SHOT LEARNING
+
+Ivan Skorokhodov1,2
+
+Thuwal, Saudi Arabia
+
+iskorokhodov@gmail.com
+
+Mohamed Elhoseiny
+
+Thuwal, Saudi Arabia
+
+mohamed.elhoseiny@kaust.edu.sa
+
+$^{1}$ King Abdullah University of Science and Technology (KAUST), Saudi Arabia
+
+$^{2}$ Moscow Institute of Physics and Technology (MIPT), Russia
+
+# ABSTRACT
+
+Normalization techniques have proved to be a crucial ingredient of successful training in a traditional supervised learning regime. However, in the zero-shot learning (ZSL) world, these ideas have received only marginal attention. This work studies normalization in ZSL scenario from both theoretical and practical perspectives. First, we give a theoretical explanation to two popular tricks used in zero-shot learning: normalize+scale and attributes normalization and show that they help training by preserving variance during a forward pass. Next, we demonstrate that they are insufficient to normalize a deep ZSL model and propose Class Normalization (CN): a normalization scheme, which alleviates this issue both provably and in practice. Third, we show that ZSL models typically have more irregular loss surface compared to traditional classifiers and that the proposed method partially remedies this problem. Then, we test our approach on 4 standard ZSL datasets and outperform sophisticated modern SotA with a simple MLP optimized without any bells and whistles and having $\approx 50$ times faster training speed. Finally, we generalize ZSL to a broader problem — continual ZSL, and introduce some principled metrics and rigorous baselines for this new setup. The source code is available at https://github.com/universome/class-norm.
+
+# 1 INTRODUCTION
+
+Zero-shot learning (ZSL) aims to understand new concepts based on their semantic descriptions instead of numerous input-output learning pairs. It is a key element of human intelligence and our best machines still struggle to master it (Ferrari & Zisserman, 2008; Lampert et al., 2009; Xian et al., 2018a). Normalization techniques like batch/layer/group normalization (Ioffe & Szegedy, 2015; Ba et al., 2016; Wu & He, 2018) are now a common and important practice of modern deep learning. But despite their popularity in traditional supervised training, not much is explored in the realm of zero-shot learning, which motivated us to study and investigate normalization in ZSL models.
+
+We start by analyzing two ubiquitous tricks employed by ZSL and representation learning practitioners: normalize+scale (NS) and attributes normalization (AN) (Bell et al., 2016; Zhang et al., 2019; Guo et al., 2020; Chaudhry et al., 2019). Their dramatic influence on performance can be observed from Table 1. When these two tricks are employed, a vanilla MLP model, described in Sec 3.1, can outperform some recent sophisticated ZSL methods.
+
+Normalize+scale (NS) changes logits computation from usual dot-product to scaled cosine similarity:
+
+$$
+\hat {y} _ {c} = \boldsymbol {z} ^ {\top} \boldsymbol {p} _ {c} \Longrightarrow \hat {y} _ {c} = \left(\gamma \cdot \frac {\boldsymbol {z}}{\| \boldsymbol {z} \| _ {2}}\right) ^ {\top} \left(\gamma \cdot \frac {\boldsymbol {p} _ {c}}{\| \boldsymbol {p} _ {c} \| _ {2}}\right) \tag {1}
+$$
+
+where $z$ is an image feature, $p_c$ is $c$ -th class prototype and $\gamma$ is a hyperparameter, usually picked from [5, 10] interval (Li et al., 2019; Zhang et al., 2019). Scaling by $\gamma$ is equivalent to setting a high temperature of $\gamma^2$ in softmax. In Sec. 3.2, we theoretically justify the need for this trick and explain why the value of $\gamma$ must be so high.
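For concreteness, Eq. (1) can be sketched in a few lines of numpy; the feature dimension, class count, and $\gamma$ value below are illustrative only, not taken from the paper's experiments:

```python
import numpy as np

# Sketch of Eq. (1): logits as scaled cosine similarity between an image
# feature z and the class prototypes p_c, instead of a plain dot product.
def ns_logits(z, prototypes, gamma=5.0):
    z_hat = gamma * z / np.linalg.norm(z)
    p_hat = gamma * prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return p_hat @ z_hat  # one logit per class

rng = np.random.default_rng(0)
z = rng.normal(size=64)                # image feature
prototypes = rng.normal(size=(5, 64))  # one prototype per class
logits = ns_logits(z, prototypes)
# Cosine similarity lies in [-1, 1], so every logit is bounded by gamma**2 = 25,
# which is why scaling by gamma acts like a softmax temperature of gamma**2.
assert np.all(np.abs(logits) <= 25.0)
```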
+
+Table 1: Effectiveness of Normalize+Scale, Attributes Normalization and Class Normalization. When NS and AN are integrated into a basic ZSL model, its performance is boosted up to a level of some sophisticated SotA methods and additionally using CN allows to outperform them. $\pm$ NS and $\pm$ AN denote if normalize+scale or attributes normalization are being used. Bold/normal blue font denote best/second-best results. Extended results are in Table 2, 5 and 8.
+
+| Method | SUN U | SUN S | SUN H | CUB U | CUB S | CUB H | AwA1 U | AwA1 S | AwA1 H | AwA2 U | AwA2 S | AwA2 H | Avg training time |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| DCN Liu et al. (2018) | 25.5 | 37.0 | 30.2 | 28.4 | 60.7 | 38.7 | - | - | - | 25.5 | 84.2 | 39.1 | 50 minutes |
+| SGAL Yu & Lee (2019) | 42.9 | 31.2 | 36.1 | 47.1 | 44.7 | 45.9 | 52.7 | 75.7 | 62.2 | 55.1 | 81.2 | 65.6 | 50 minutes |
+| LsrGAN Vyas et al. (2020) | 44.8 | 37.7 | 40.9 | 48.1 | 59.1 | 53.0 | - | - | - | 54.6 | 74.6 | 63.0 | 1.25 hours |
+| Vanilla MLP -NS -AN | 4.7 | 27.2 | 8.0 | 5.9 | 26.0 | 9.7 | 43.1 | 81.3 | 56.3 | 37.7 | 84.3 | 52.1 | 30 seconds |
+| Vanilla MLP -NS +AN | 9.6 | 34.0 | 14.9 | 8.8 | 4.6 | 6.0 | 28.6 | 84.4 | 42.7 | 23.3 | 87.4 | 36.8 | |
+| Vanilla MLP +NS -AN | 34.7 | 38.5 | 36.5 | 46.9 | 42.8 | 44.9 | 57.0 | 69.9 | 62.8 | 49.7 | 76.4 | 60.2 | |
+| Vanilla MLP +NS +AN | 31.4 | 40.4 | 35.3 | 45.2 | 50.7 | 47.8 | 58.1 | 70.3 | 63.6 | 58.2 | 73.0 | 64.8 | |
+| Vanilla MLP +NS +AN +CN | 44.7 | 41.6 | 43.1 | 49.9 | 50.7 | 50.3 | 63.1 | 73.4 | 67.8 | 60.2 | 77.1 | 67.6 | 30 seconds |
+
+Attributes Normalization (AN) technique simply divides class attributes by their $L_{2}$ norms:
+
+$$
+\boldsymbol {a} _ {c} \longmapsto \boldsymbol {a} _ {c} / \| \boldsymbol {a} _ {c} \| _ {2} \tag {2}
+$$
+
+While this may look inconsiderable, it is surprising to see it being preferred in practice (Li et al., 2019; Narayan et al., 2020; Chaudhry et al., 2019) over the traditional zero-mean and unit-variance data standardization (Glorot & Bengio, 2010). In Sec 3, we show that it helps preserve the signal's variance, and we ablate its importance in Table 1 and Appx D.
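A one-line numpy illustration of Eq. (2); the attribute values here are made up for the example:

```python
import numpy as np

# Sketch of Eq. (2): divide each class-attribute vector by its L2 norm.
attrs = np.array([[3.0, 4.0],
                  [0.0, 2.0]])  # two classes, two attributes (illustrative)
attrs_normed = attrs / np.linalg.norm(attrs, axis=1, keepdims=True)
print(attrs_normed[0])  # [0.6 0.8] -- every row now has unit L2 norm
```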
+
+These two tricks work well and normalize the variance to a unit value when the underlying ZSL model is linear (see Figure 1), but they fail when we use a multi-layer architecture. To remedy this issue, we introduce Class Normalization (CN): a novel normalization scheme, which is based on a different initialization and a class-wise standardization transform. Modern ZSL methods either utilize sophisticated architectural design like training generative models (Narayan et al., 2020; Felix et al., 2018) or use heavy optimization schemes like episode-based training (Yu et al., 2020; Li et al., 2019). In contrast, we show that simply adding Class Normalization on top of a vanilla MLP is enough to set new state-of-the-art results on several standard ZSL datasets (see Table 2). Moreover, since it is optimized with plain gradient descent without any bells and whistles, training time for us takes 50-100 times less and runs in about 1 minute. We also demonstrate that many ZSL models tend to have more irregular loss surface compared to traditional supervised learning classifiers and apply the results of Santurkar et al. (2018) to show that our CN partially remedies the issue. We discuss and empirically validate this in Sec 3.5 and Appx F.
+
+Apart from the theoretical exposition and a new normalization scheme, we also propose a broader ZSL setup: continual zero-shot learning (CZSL). Continual learning (CL) is an ability to acquire new knowledge without forgetting (e.g. (Kirkpatrick et al., 2017)), which is scarcely investigated in ZSL. We develop the ideas of lifelong learning with class attributes, originally proposed by Chaudhry et al. (2019) and extended by Wei et al. (2020a), propose several principled metrics for it and test several classical CL methods in this new setup.
+
+# 2 RELATED WORK
+
+Zero-shot learning. Zero-shot learning (ZSL) aims at understanding examples of unseen classes from their language or semantic descriptions. Earlier ZSL methods directly predict attribute confidence from images to facilitate zero-shot recognition (e.g., Lampert et al. (2009); Farhadi et al. (2009); Lampert et al. (2013b)). Recent ZSL methods for image classification can be categorized into two groups: generative-based and embedding-based. The main goal for generative-based approaches is to build a conditional generative model (e.g., GANs Goodfellow et al. (2014) and VAEs (Kingma & Welling, 2014)) to synthesize visual generations conditioned on class descriptors (e.g., Xian et al. (2018b); Zhu et al. (2018); Elhoseiny & Elfeki (2019); Guo et al. (2017); Kumar Verma et al. (2018)). At test time, the trained generator is expected to produce synthetic/fake data of unseen classes given their semantic descriptors. The fake data is then used to train a traditional classifier or to perform a simple kNN-classification on the test images. Embedding-based approaches learn a mapping that projects semantic attributes and images into a common space where the distance between a class projection and the corresponding images is minimized (e.g., Romera-Paredes & Torr (2015); Frome et al. (2013); Lei Ba et al. (2015); Akata et al. (2016a); Zhang et al. (2017); Akata et al. (2015; 2016b)). One question that arises is what space to choose to project the attributes or images to. Previous works projected images to the semantic space (Elhoseiny et al., 2013; Frome et al., 2013; Lampert et al., 2013a) or some common space (Zhang & Saligrama, 2015; Akata et al., 2015), but our approach follows the idea of Zhang et al. (2016); Li et al. (2019) that shows that projecting attributes to the image space reduces the bias towards seen data.
+
+Normalize+scale and attributes normalization. It was observed both in ZSL (e.g., Li et al. (2019); Zhang et al. (2019); Bell et al. (2016)) and in representation learning (e.g., Sohn (2016); Guo et al. (2020); Ye et al. (2020)) that normalize+scale (i.e. (1)) and attributes normalization (i.e. (2)) tend to significantly improve the performance of a learning system. In the literature, these two techniques lack rigorous motivation and are usually introduced as practical heuristics that aid training (Changpinyo et al., 2017; Zhang et al., 2019; 2021). One of the earliest works to employ attributes normalization was done by Norouzi et al. (2013), and Changpinyo et al. (2016a) also ablate its importance. The main consumers of the normalize+scale trick have been similarity learning algorithms, which employ it to refine the distance metric between representations (Bellet et al., 2013; Guo et al., 2020; Shi et al., 2020). Luo et al. (2018) proposed to use cosine similarity in the final output projection matrix as a normalization procedure, but did not analyze how it affects the variance. They also did not use the scaling, which our experiments in Table 5 show to be crucial. Gidaris & Komodakis (2018) demonstrated greatly superior performance of an NS-enriched model compared to a dot-product-based one in their setup, where the classifying matrix is constructed dynamically. Li et al. (2019) motivated their usage of NS by variance reduction, but did not elaborate on this in their subsequent analysis. Chen et al. (2020) related the use of the normalized temperature-scaled cross-entropy loss (NT-Xent) to a different weighting of negative examples in the contrastive learning framework. Overall, to the best of our knowledge, there is no precise understanding of how these two tricks influence the optimization process and what benefits they provide.
+
+Initialization schemes. In the seminal work on Xavier initialization, Glorot & Bengio (2010) showed how to preserve the variance during a forward pass. He et al. (2015) applied a similar analysis while taking ReLU nonlinearities into account. There is also a growing interest in two-step (Jia et al., 2014), data-dependent (Krahenbuhl et al., 2015), and orthogonal (Hu et al., 2020) initialization schemes. However, the importance of a good initialization for multi-modal embedding functions like attribute embedding is less studied and not well understood. We propose a proper initialization scheme based on a different initialization variance and a dynamic standardization layer. Our variance analysis is similar in nature to Chang et al. (2020), since the attribute embedder may be seen as a hypernetwork (Ha et al., 2016) that outputs a linear classifier. But the exact embedding transformation is different from a hypernetwork, since it has matrix-wise input, and in our derivations we have to use looser assumptions about the attributes distribution (see Sec 3 and Appx H).
+
+Normalization techniques. A closely related branch of research is the development of normalization layers for deep neural networks (Ioffe & Szegedy, 2015), since they also influence a signal's variance. BatchNorm, being the most popular one, normalizes the location and scale of activations. It is applied in a batch-wise fashion, which is why its performance is highly dependent on the batch size (Singh & Krishnan, 2020). Several normalization techniques have therefore been proposed to eliminate the batch-size dependency (Wu & He, 2018; Ba et al., 2016; Singh & Krishnan, 2020). The proposed class normalization is very similar to the standardization procedure that underlies BatchNorm, but it is applied class-wise in the attribute embedder. This also makes it independent of the batch size.
+
+Continual zero-shot learning. We introduce continual zero-shot learning: a new benchmark for ZSL agents that is inspired by the continual learning literature (e.g., Kirkpatrick et al. (2017)). It is a development of the scenario proposed in Chaudhry et al. (2019), but the authors there focused on ZSL performance only a single task ahead, while in our case we consider the performance on all seen data (previous tasks) and all unseen data (future tasks). This also contrasts our work with the very recent work by Wei et al. (2020b), where a sequence of seen class splits of existing ZSL benchmarks is trained and the zero-shot performance is reported for every task individually at test time. In contrast, in our setup the label space is not restricted and covers the spectrum of all previous tasks (seen tasks so far) and future tasks (unseen tasks so far). Due to this difference, we need to introduce a set of new metrics and benchmarks to measure this continual generalized ZSL skill over time. From the lifelong learning perspective, the idea of considering all the processed data when evaluating the model is not new and was previously explored by Elhoseiny et al. (2018); van de Ven & Tolias (2019). This contrasts with the common practice of providing the task identity at test time, which limits the prediction space of the model and makes the problem easier (Kirkpatrick et al., 2017; Aljundi et al., 2017). Isele et al. (2016); Lopez-Paz & Ranzato (2017) motivate the use of task descriptors for zero-shot knowledge transfer, but in our work we consider class descriptors instead. We define CZSL as a continual version of generalized ZSL, which allows us to naturally extend all the existing ZSL metrics (Xian et al., 2018a; Chao et al., 2016) to our new continual setup.
+
+# 3 NORMALIZATION IN ZERO-SHOT LEARNING
+
+The goal of a good normalization scheme is to protect the signal inside a model from severe fluctuations and to keep it in regions that are appropriate for the subsequent transformations. For example, for ReLU activations, we want the input activations to be zero-centered and not scaled too much: otherwise, we risk finding ourselves in all-zero or all-linear activation regimes, disrupting the model's performance. For logits, we want a close-to-unit variance, since too small a variance leads to poor gradients of the subsequent cross-entropy loss, while too large a variance indicates poor scaling of the preceding weight matrix. For linear layers, we want their inputs to be zero-centered: otherwise, they would produce overly biased outputs, which is undesirable.
+
+In traditional supervised learning, we have a variety of normalization and initialization techniques to control the signal flow. In zero-shot learning (ZSL), however, the set of tools is extremely limited. In this section, we justify the popularity of the Normalize+Scale (NS) and Attributes Normalization (AN) techniques by demonstrating that their role is to preserve the signal's variance. We then show that they are not enough to normalize a deep ZSL model and propose class normalization to regulate the signal inside such a model. We empirically evaluate our study in Sec. 5 and appendices A, B, D and F.
+
+# 3.1 NOTATION
+
+A ZSL setup assumes access to datasets of seen and unseen images with the corresponding labels $D^{\mathrm{s}} = \{x_{i}^{s},y_{i}^{s}\}_{i = 1}^{N_{s}}$ and $D^{\mathrm{u}} = \{x_i^u,y_i^u\}_{i = 1}^{N_u}$ respectively. Each class $c$ is described by its class attribute vector $\pmb{a}_c\in \mathbb{R}^{d_a}$ . All attribute vectors are partitioned into non-overlapping seen and unseen sets as well: $A^s = \{a_i\}_{i = 1}^{K_s}$ and $A^u = \{a_i\}_{i = 1}^{K_u}$ . Here $N_{s},N_{u},K_{s},K_{u}$ denote the numbers of seen images, unseen images, seen classes, and unseen classes respectively. In modern ZSL, all images are usually transformed via some standard feature extractor $E:\pmb {x}\mapsto \pmb {z}\in \mathbb{R}^{d_z}$ (Xian et al., 2018a). Then, a typical ZSL method trains an attribute embedder $P_{\theta}:a_{c}\rightarrow p_{c}\in \mathbb{R}^{d_{z}}$ which projects class attributes $\pmb{a}_c^s$ onto the feature space $\mathbb{R}^{d_z}$ in such a way that each projection lies close to the exemplar features $z^s$ of its class $c$ .
+
+This is done by solving a classification task, where the logits are computed using formula (1). This way, at test time, we are able to classify unseen images by projecting unseen attribute vectors $\pmb{a}_c^u$ into the feature space and computing the similarity with the provided features $z^u$ . The attribute embedder $P_{\theta}$ is usually a very simple neural network (Li et al., 2019), in many cases even linear (Romera-Paredes & Torr, 2015; Elhoseiny et al., 2013), so it is the training procedure and the different regularization schemes that carry the load. We denote the final projection matrix and the body of $P_{\theta}$ as $V$ and $H_{\varphi}$ respectively, i.e. $P_{\theta}(a_c) = VH_{\varphi}(a_c)$ . During training, the embedder receives a matrix of class attributes $A = [a_1, \dots, a_{K_s}]$ of size $K_s \times d_a$ and outputs a matrix $W = P_{\theta}(A)$ of size $K_s \times d_z$ . Then $W$ is used to compute class logits for a batch of image feature vectors $z_1, \dots, z_{N_s}$ .
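As a minimal illustration of this notation, the training-time forward pass can be sketched in a few lines of NumPy. This is a self-contained toy example: the linear embedder `V`, its scaling, and the dimensions (chosen to be CUB-like) are illustrative assumptions, not the exact model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K_s, d_a, d_z, N_s = 150, 312, 2048, 64   # seen classes, attr dim, feature dim, batch

A = rng.normal(size=(K_s, d_a))           # class-attribute matrix (one row per class)
Z = rng.normal(size=(N_s, d_z))           # a batch of image features z_1..z_{N_s}
V = rng.normal(size=(d_a, d_z)) / np.sqrt(d_a)  # toy linear embedder P_theta

W = A @ V            # projected class prototypes, shape (K_s, d_z)
logits = Z @ W.T     # one logit per (image, seen class), shape (N_s, K_s)
```

At test time, the same computation is repeated with the unseen attribute matrix in place of `A`.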
+
+# 3.2 UNDERSTANDING NORMALIZE + SCALE TRICK
+
+One of the most popular tricks in ZSL and deep learning is using the scaled cosine similarity instead of a simple dot product in logits computation (Li et al., 2019; Zhang et al., 2019; Ye et al., 2020):
+
+$$
+\hat{y}_c = \boldsymbol{z}^{\top}\boldsymbol{p}_c \;\Longrightarrow\; \hat{y}_c = \gamma^{2}\,\frac{\boldsymbol{z}^{\top}\boldsymbol{p}_c}{\|\boldsymbol{z}\|\,\|\boldsymbol{p}_c\|} \tag{3}
+$$
+
+where the hyperparameter $\gamma$ is usually picked from the [5, 10] interval. Both using the cosine similarity and scaling it afterwards by a large value are critical for obtaining good performance; see Appendix D. To our knowledge, it has not been clear why exactly this trick has such a big influence and why the value of $\gamma$ must be so large. The following statement provides an answer to these questions.
+
+Statement 1 (informal). The normalize+scale trick forces the variance of $\hat{y}_c$ to be approximately:
+
+$$
+\operatorname{Var}\left[\hat{y}_c\right] \approx \gamma^{4}\,\frac{d_z}{(d_z - 2)^{2}}, \tag{4}
+$$
+
+where $d_{z}$ is the dimensionality of the feature space. See Appendix A for the assumptions, derivation and the empirical study. Formula (4) demonstrates two things:
+
+1. When we use the cosine similarity, the variance of $\hat{y}_c$ becomes independent of the variance of $W = P_{\theta}(A)$ , leading to better stability.
+2. If one uses Eq. (3) without scaling (i.e. $\gamma = 1$ ), then $\operatorname{Var}[\hat{y}_c]$ will be extremely low (especially for large $d_z$ ), the model will always output a near-uniform distribution, and training will stall. That is why we need very large values of $\gamma$ .
+
+Usually, the optimal value of $\gamma$ is found via a hyperparameter search (Li et al., 2019), but our formula suggests another strategy: one can obtain any desired variance $\nu = \mathrm{Var}\left[\hat{y}_c\right]$ by setting $\gamma$ to:
+
+$$
+\gamma = \left(\frac {\nu \cdot \left(d _ {z} - 2\right) ^ {2}}{d _ {z}}\right) ^ {\frac {1}{4}} \tag {5}
+$$
+
+For example, for $\operatorname{Var}[\hat{y}_c] = \nu = 1$ and $d_z = 2048$ we obtain $\gamma \approx 6.78$ , which falls right in the middle of [5, 10], the usual search region for $\gamma$ used by ZSL and representation learning practitioners (Li et al., 2019; Zhang et al., 2019; Guo et al., 2020). The above consideration not only gives a theoretical understanding of the trick, which we believe is important in its own right, but also allows us to speed up the search by either picking the predicted "optimal" value of $\gamma$ or by searching in its vicinity.
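Statement 1 and Eq. (5) are easy to check numerically. The sketch below is an illustrative Monte-Carlo experiment under simplifying assumptions (Gaussian features and prototypes; these distributions are our choice, not the paper's): it picks $\gamma$ via Eq. (5) for a target variance $\nu = 1$ and estimates the resulting logit variance.

```python
import numpy as np

def gamma_for_variance(nu, d_z):
    """Eq. (5): the scale gamma that yields a target logit variance nu."""
    return (nu * (d_z - 2) ** 2 / d_z) ** 0.25

rng = np.random.default_rng(0)
d_z, n = 2048, 20000
gamma = gamma_for_variance(1.0, d_z)   # lands inside the usual [5, 10] search range

# Monte-Carlo estimate of Var[y_hat] for scaled cosine similarity, Eq. (3)
z = rng.normal(size=(n, d_z))
p = rng.normal(size=(n, d_z))
cos = (z * p).sum(1) / (np.linalg.norm(z, axis=1) * np.linalg.norm(p, axis=1))
logits = gamma ** 2 * cos
print(logits.var())                    # close to 1, matching Eq. (4)
```

With $\gamma = 1$ instead, the same estimate collapses to roughly $1/d_z$, which is the "uniform output" failure mode described above.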
+
+# 3.3 UNDERSTANDING ATTRIBUTES NORMALIZATION TRICK
+
+We showed in the previous subsection that the normalize+scale trick makes the variance of $\hat{y}_c$ independent of the variances of the weights, features, and attributes. This may create the impression that it does not matter how we initialize the weights, since normalization would undo any fluctuations. However, this is not true: it still matters how the signal flows under the hood, i.e. what happens to the unnormalized and unscaled logit value $\tilde{y}_c = z^\top p_c$ . Another common trick in ZSL is the normalization of attribute vectors to unit norm: $a_c \longmapsto \frac{a_c}{\|a_c\|_2}$ . We provide some theoretical underpinnings of its importance.
+
+Let's first consider the linear case for $P_{\theta}$ , i.e. $H_{\varphi}$ is the identity and thus $\tilde{y}_c = z^\top p_c = z^\top V a_c$ . Then the way we initialize $V$ is crucial, since $\operatorname{Var}[\tilde{y}_c]$ depends on it. To derive an initialization scheme, prior work makes three strong assumptions about the inputs (Glorot & Bengio, 2010; He et al., 2015; Chang et al., 2020): 1) they are zero-centered; 2) they are independent of each other; and 3) they have a covariance matrix of the form $\sigma^2 I$ . But in the ZSL setting, we have two sources of inputs: image features $z$ and class attributes $a_c$ . These assumptions are safe only for $z$ , not for $a_c$ , because they do not hold for the standard datasets (see Appendix H). To account for this, we derive the variance $\operatorname{Var}[\tilde{y}_c]$ without relying on these assumptions for $a_c$ (see Appendix B):
+
+$$
+\operatorname{Var}\left[\tilde{y}_c\right] = d_z \cdot \operatorname{Var}\left[z_i\right] \cdot \operatorname{Var}\left[V_{ij}\right] \cdot \underset{\boldsymbol{a}}{\mathbb{E}}\left[\|\boldsymbol{a}\|_2^2\right] \tag{6}
+$$
+
+From equation (6) one can see that, after giving up the invalid assumptions for $\mathbf{a}_c$ , the pre-logits variance $\mathrm{Var}[\tilde{y}_c]$ becomes dependent on $\| \mathbf{a}_c\|_2$ , which is not captured by the traditional Glorot & Bengio (2010) and He et al. (2015) initialization schemes and thus leads to poor variance control. The attributes normalization trick rectifies this limitation, as summarized in the following statement.
+
+Statement 2 (informal). The attributes normalization trick leads to the same pre-logits variance as Xavier fan-out initialization (see Appendix B for the formal statement and the derivation).
+
+Xavier fan-out initialization selects a scale for a linear layer such that the variance of the backward-pass representations is preserved across the model (in the absence of non-linearities). The fact that attributes normalization results in a scaling of $P_{\theta}$ equivalent to Xavier fan-out scaling, and not some other one, is not a coincidence and reveals the underlying meaning of this procedure.
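Equation (6) can also be verified empirically. In the sketch below (an illustrative linear embedder with arbitrary dimensions and synthetic, deliberately non-centered attributes of our own choosing), normalizing each attribute vector to unit norm makes $\mathbb{E}[\|a\|_2^2] = 1$, so the Xavier fan-out scale $\operatorname{Var}[V_{ij}] = 1/d_z$ leaves $\operatorname{Var}[\tilde{y}_c]$ equal to $\operatorname{Var}[z_i]$:

```python
import numpy as np

rng = np.random.default_rng(0)
d_a, d_z, K, n = 85, 1024, 50, 4000

A = rng.uniform(0.0, 1.0, size=(K, d_a))               # raw, non-centered attributes
A = A / np.linalg.norm(A, axis=1, keepdims=True)       # attributes normalization
V = rng.normal(size=(d_z, d_a)) * np.sqrt(1.0 / d_z)   # Xavier fan-out scale

Z = rng.normal(size=(n, d_z))        # unit-variance image features
logits = Z @ (A @ V.T).T             # pre-logits  y~_c = z^T V a_c
print(logits.var())                  # ~= d_z * Var[z_i] * (1/d_z) * 1 = 1, per Eq. (6)
```

Dropping the normalization line makes the resulting variance scale with $\|a_c\|_2^2$, exactly the failure mode Eq. (6) predicts.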
+
+
+Figure 1: Logit variances (two left plots) and the approximate loss landscape smoothness (right plot), measured during training for different models. From the variance plots, one can observe the following: a linear model without NS (normalize+scale) and AN (attributes normalization) has a diverging variance, but adding NS+AN fixes this, pushing the variance to 1. For a non-linear model, using NS and AN on their own is not enough and the variance deteriorates, but class normalization (CN) rectifies it back to 1; see additional analysis in Appx E. The right plot shows the maximum gradient magnitude over 10 batches at a given iteration for different classification models. As stated in Sec. 3.5, ZSL attribute embedders have more irregular loss surfaces than traditional models: large gradient norms indicate abrupt changes in the landscape. Class normalization makes the surface smoother; see more analysis in Appx F.
+
+# 3.4 CLASS NORMALIZATION
+
+What happens when $P_{\theta}$ is not linear? Let $h_c = H_{\varphi}(a_c)$ be the output of $H_{\varphi}$ . The analysis of this case is equivalent to the previous one, but with $h_c$ plugged in everywhere in place of $a_c$ . This leads to:
+
+$$
+\operatorname{Var}\left[\tilde{y}_c\right] = d_z \cdot \operatorname{Var}\left[z_i\right] \cdot \operatorname{Var}\left[V_{ij}\right] \cdot \underset{\boldsymbol{h}}{\mathbb{E}}\left[\|\boldsymbol{h}\|_2^2\right] \tag{7}
+$$
+
+As a result, to obtain the property $\operatorname{Var}\left[\tilde{y}_c\right] = \operatorname{Var}\left[z_i\right]$ , we need to initialize $\operatorname{Var}\left[V_{ij}\right]$ as follows:
+
+$$
+\operatorname {V a r} \left[ V _ {i j} \right] = \left(d _ {z} \cdot \underset {\boldsymbol {h} _ {c}} {\mathbb {E}} \left[ \| \boldsymbol {h} _ {c} \| _ {2} ^ {2} \right]\right) ^ {- 1} \tag {8}
+$$
+
+This makes the initialization dependent on the magnitude of $\| h_c\|$ instead of $\| a_{c}\|$ , so normalizing attributes to unit norm is no longer sufficient to preserve the variance. Initializing the weights of $V$ with this formula would require a two-step data-dependent procedure: first initialize $H_{\varphi}$ , then compute the average $\| h_c\| _2^2$ , and only then initialize $V$ . However, this is not reliable, since $\| h_c\| _2^2$ changes on each iteration, so we propose a more elegant solution: standardize $h_c$
+
+$$
+S \left(\boldsymbol {h} _ {c}\right) = \left(\boldsymbol {h} _ {c} - \hat {\boldsymbol {\mu}}\right) / \hat {\boldsymbol {\sigma}} \tag {9}
+$$
+
+As one can note, this is similar to the BatchNorm standardization without the subsequent affine transform, but we apply it class-wise on top of the attribute embeddings $h_c$ . We plug it in right before $V$ , i.e. $P_{\theta}(a_c) = VS(H_{\varphi}(a_c))$ . This does not add any parameters and has negligible computational overhead. At test time, we use statistics accumulated during training, similar to BatchNorm. Standardization (9) makes the inputs to $V$ have constant norm, which makes it trivial to pick a proper value to initialize $V$ :
+
+$$
+\underset {\boldsymbol {h} _ {c}} {\mathbb {E}} \left[ \| S \left(\boldsymbol {h} _ {c}\right) \| _ {2} ^ {2} \right] = d _ {h} \Longrightarrow \operatorname {V a r} \left[ V _ {i j} \right] = \frac {1}{d _ {z} d _ {h}}. \tag {10}
+$$
+
+We coin the simultaneous use of (9) and (10) class normalization and summarize its effect in the following statement. See Fig. 3 for the model diagram, Fig. 1 for an empirical study of its impact, and Appendix C for the assumptions, proof, and additional details.
+
+Statement 3 (informal). The standardization procedure (9), together with the variance formula (10), preserves the variance between $z$ and $\tilde{y}$ for a multi-layer attribute embedder $P_{\theta}$ .
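Putting (9) and (10) together, the forward pass with class normalization can be sketched as follows. This is a minimal NumPy illustration with hypothetical layer sizes and a synthetic attribute matrix; a real implementation would also track running statistics for test time, as noted above, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
d_a, d_h, d_z, K = 85, 256, 512, 100

A = rng.normal(size=(K, d_a))                            # synthetic class attributes
W1 = rng.normal(size=(d_a, d_h)) * np.sqrt(2.0 / d_a)    # He init for the ReLU body
H = np.maximum(A @ W1, 0.0)                              # h_c = H_phi(a_c)

S = (H - H.mean(0)) / (H.std(0) + 1e-8)                  # Eq. (9), class-wise
V = rng.normal(size=(d_h, d_z)) * np.sqrt(1.0 / (d_z * d_h))  # Eq. (10)
W = S @ V                                                # projected class prototypes

Z = rng.normal(size=(4000, d_z))                         # unit-variance image features
logits = Z @ W.T
print(logits.var())                                      # ~= Var[z_i] = 1
```

Replacing the `S` line with `S = H` (no standardization) makes the resulting variance depend on the magnitude of the hidden activations, which is exactly what Eq. (7) predicts.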
+
+# 3.5 IMPROVED SMOOTHNESS
+
+We also analyze the loss surface smoothness for $P_{\theta}$ . There are many ways to measure this notion (Hochreiter & Schmidhuber, 1997; Keskar et al., 2016; Dinh et al., 2017; Skorokhodov & Burtsev, 2019), but following Santurkar et al. (2018), we define it in a "per-layer" fashion via Lipschitzness:
+
+$$
+g ^ {\ell} = \max _ {\| X ^ {\ell - 1} \| _ {2} \leq \lambda} \| \nabla_ {W ^ {\ell}} \mathcal {L} \| _ {2} ^ {2}, \tag {11}
+$$
+
+where $\ell$ is the layer index and $X^{\ell -1}$ is its input data matrix. This definition is intuitive: larger gradient magnitudes indicate that the loss surface is prone to abrupt changes. We demonstrate two things:
+
+1. For each example in a batch, the parameters of a ZSL attribute embedder receive $K$ times more updates than those of a typical non-ZSL classifier, where $K$ is the number of classes. This suggests the hypothesis that it has a larger overall gradient magnitude and hence a more irregular loss surface.
+2. Our standardization procedure (9) makes the surface smoother. We demonstrate this by directly applying Theorem 4.4 from Santurkar et al. (2018).
+
+Due to the space constraints, we defer the exposition on this to Appendix F.
+
+# 4 CONTINUAL ZERO-SHOT LEARNING
+
+# 4.1 PROBLEM FORMULATION
+
+In continual learning (CL), a model is trained on a sequence of tasks that arrive one by one. Each task is defined by a dataset $D^{t} = \{x_{i}^{t},y_{i}^{t}\}_{i = 1}^{N_{t}}$ of size $N_{t}$ . The goal is to learn all the tasks sequentially in such a way that at each task $t$ the model performs well both on the current task and on all the previously observed ones. In this section, we develop the ideas of Chaudhry et al. (2019) and formulate the Continual Zero-Shot Learning (CZSL) problem. Like CL, CZSL assumes a sequence of tasks, but now each task is a generalized zero-shot learning problem. This means that apart from $D^{t}$ we also receive a set of corresponding class descriptions $A^{t}$ for each task $t$ . In this way, traditional zero-shot learning can be seen as a special case of CZSL with just two tasks. In Chaudhry et al. (2019), the authors evaluate their zero-shot models on each task individually, without considering the classification space across tasks and looking only one step ahead, which gives a limited picture of a model's quality. Instead, we borrow ideas from Generalized ZSL (Chao et al., 2016; Xian et al., 2018a) and propose to measure the performance on all the seen and all the unseen data for each task. More formally, for timestep $t$ we have the datasets:
+
+$$
+D ^ {\leq t} = \bigcup_ {r = 1} ^ {t} D ^ {r} \quad D ^ {> t} = \bigcup_ {r = t + 1} ^ {T} D ^ {r} \quad A ^ {\leq t} = \bigcup_ {r = 1} ^ {t} A ^ {r} \quad A ^ {> t} = \bigcup_ {r = t + 1} ^ {T} A ^ {r} \tag {12}
+$$
+
+which are the datasets of all seen data (learned tasks), all unseen data (future tasks), seen class attributes, and unseen class attributes, respectively. In our proposed CZSL, the model at timestep $t$ has access only to the data $D^{t}$ and attributes $A^{t}$ , but its goal is to perform well on all seen data $D^{\leq t}$ and all unseen data $D^{>t}$ with the corresponding attribute sets $A^{\leq t}$ and $A^{>t}$ . For $T = 2$ , this is equivalent to traditional generalized zero-shot learning, but for $T > 2$ it is a novel and much more challenging problem.
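The partition in (12) is mechanical to construct. A minimal sketch, where each "task" is just a list of class labels purely for illustration (real tasks would carry images and attributes as well):

```python
def czsl_splits(tasks, t):
    """Return the unions D^{<=t} (seen so far) and D^{>t} (future) for 1-indexed t."""
    seen = [x for task in tasks[:t] for x in task]
    unseen = [x for task in tasks[t:] for x in task]
    return seen, unseen

tasks = [[0, 1], [2, 3], [4, 5]]     # T = 3 toy tasks, two classes each
print(czsl_splits(tasks, 2))         # ([0, 1, 2, 3], [4, 5])
```

At $t = T$ the unseen union is empty, which is why evaluation at every intermediate timestep is what makes the setup informative.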
+
+# 4.2 PROPOSED EVALUATION METRICS
+
+Our metrics for CZSL use GZSL metrics under the hood and are based on the generalized accuracy (GA) (Chao et al., 2016; Xian et al., 2018a). "Traditional" seen (unseen) accuracy computation discards unseen (seen) classes from the prediction space, thus making the problem easier, since the model has fewer classes to be distracted by. Generalized accuracy always considers the joint space of both seen and unseen classes, and this is how GZSL-S and GZSL-U are constructed. We use this notion to construct the mean seen (mS), mean unseen (mU), and mean harmonic (mH) accuracies by measuring GZSL-S/GZSL-U/GZSL-H at each timestep, considering all past data as seen and all future data as unseen. Another set of CZSL metrics are the mean joint accuracy (mJA), which measures the performance across all classes, and the mean area under the seen/unseen curve (mAUC), an adaptation of the AUSUC measure of Xian et al. (2018a). A more rigorous formulation of these metrics is presented in Appendix G.2. We also employ the popular forgetting measure (Lopez-Paz & Ranzato, 2017).
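For concreteness, the mH metric described above can be sketched as follows. The per-timestep GZSL-S/GZSL-U values are assumed to be given (the numbers below are made up for illustration); the accuracy computation itself is the standard generalized one.

```python
def harmonic(s, u):
    """GZSL-H at one timestep: harmonic mean of generalized seen/unseen accuracy."""
    return 2 * s * u / (s + u) if (s + u) > 0 else 0.0

def mean_harmonic(seen_accs, unseen_accs):
    """mH: average of per-timestep GZSL-H values over the task sequence."""
    return sum(harmonic(s, u) for s, u in zip(seen_accs, unseen_accs)) / len(seen_accs)

# Two timesteps with hypothetical (GZSL-S, GZSL-U) pairs (0.6, 0.3) and (0.5, 0.4)
print(round(mean_harmonic([0.6, 0.5], [0.3, 0.4]), 4))
```

mS and mU are the analogous plain averages of the per-timestep GZSL-S and GZSL-U values.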
+
+# 5 EXPERIMENTS
+
+# 5.1 ZSL EXPERIMENTS
+
+Experiment details. We use 4 standard datasets: SUN (Patterson et al., 2014), CUB (Welinder et al., 2010), AwA1, and AwA2, with the seen/unseen splits from Xian et al. (2018a). They have 645/72, 150/50, 40/10, and 40/10 seen/unseen classes respectively, with $d_{a}$ equal to 102, 312, 85, and 85 respectively. Following standard practice, we use ResNet101 image features (with $d_{z} = 2048$ ) from Xian et al. (2018a). Our attribute embedder $P_{\theta}$ is a vanilla 3-layer MLP augmented with the standardization procedure (9) and the corrected output matrix initialization (10). For all the datasets, we train the model with the Adam optimizer for 50 epochs and evaluate it at the end of training. We also employ the NS and AN techniques, with $\gamma = 5$ for NS. Additional hyperparameters are reported in Appx D. To perform cross-validation, we first allocate $10\%$ of the seen classes as validation unseen data (for AwA1 and AwA2 we allocate $15\%$ since there are only 40 seen classes). Then we allocate $10\%$ out of the remaining $85\%$ of the data as validation seen data. In total, we thus allocate $\approx 30\%$ of all the seen data for validation. It is known (Xian et al., 2018a; Min et al., 2020) that the GZSL-H score can be slightly improved by reducing the weight of seen-class logits during accuracy computation, since this partially relieves the bias towards seen classes. We also employ this trick by multiplying the seen-class logits by a value $s$ during evaluation, and we find its optimal value via cross-validation together with the other hyperparameters. In Figure 4 in Appendix D.4, we provide validation/test accuracy curves showing how it influences the performance.
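The seen-class calibration trick just mentioned amounts to one line at prediction time. A hedged sketch (the value `s = 0.7` and the toy logits are arbitrary illustrations; in practice $s$ is found by cross-validation as described above):

```python
import numpy as np

def calibrated_predict(logits, seen_mask, s=0.7):
    """Down-weight seen-class logits to reduce the bias towards seen classes."""
    adjusted = np.where(seen_mask, logits * s, logits)
    return adjusted.argmax(axis=-1)

logits = np.array([[2.0, 1.8, 1.9]])           # classes 0, 1 seen; class 2 unseen
seen_mask = np.array([True, True, False])
print(calibrated_predict(logits, seen_mask))   # class 2 wins: 2.0 * 0.7 = 1.4 < 1.9
```

Without calibration, the biased seen class 0 would win despite the unseen class 2 being nearly as confident.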
+
+Evaluation and discussion. We evaluate the model on the corresponding test sets using the 3 metrics proposed by Xian et al. (2018a): generalized unseen accuracy (GZSL-U), generalized seen accuracy (GZSL-S), and their harmonic mean (GZSL-H), which is considered to be the main ZSL metric. Table 2 shows that our model sets the state of the art on 3 out of 4 datasets.
+
+Training speed results. We conducted a survey and reran several recent SotA methods from their official implementations to check their training speed; the details are reported in Appx D. Table 2 shows the average training time for each method. Since our model is just a vanilla MLP and does not use any sophisticated training scheme, it trains 30 to 500 times faster than the other methods while outperforming them in final performance.
+
+# 5.2 CZSL EXPERIMENTS
+
+Datasets. We test our approach in the CZSL scenario on two datasets: CUB (Welinder et al., 2010) and SUN (Patterson et al., 2014). CUB contains 200 classes and is randomly split into 10 tasks with 20 classes per task. SUN contains 717 classes, which we randomly split into 15 tasks: the first 3 tasks have 47 classes each and the remaining ones have 48 (717 classes cannot be split evenly). We use the official train/test splits for training and testing the model.
+
+Table 2: Generalized Zero-Shot Learning results. S, U denote generalized seen/unseen accuracy and H is their harmonic mean. Bold/normal blue font denotes the best/second-best result.
+
+| Method | SUN U | SUN S | SUN H | CUB U | CUB S | CUB H | AwA1 U | AwA1 S | AwA1 H | AwA2 U | AwA2 S | AwA2 H | Avg training time |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| DCN (Liu et al., 2018) | 25.5 | 37.0 | 30.2 | 28.4 | 60.7 | 38.7 | - | - | - | 25.5 | 84.2 | 39.1 | 50 min |
+| RN (Sung et al., 2018) | - | - | - | 38.1 | 61.4 | 47.0 | 31.4 | 91.3 | 46.7 | 30.9 | 93.4 | 45.3 | 35 min |
+| f-CLSWGAN (Xian et al., 2018b) | 42.6 | 36.6 | 39.4 | 57.7 | 43.7 | 49.7 | 57.9 | 61.4 | 59.6 | - | - | - | - |
+| CIZSL (Elhoseiny & Elfeki, 2019) | - | - | 27.8 | - | - | - | - | - | - | - | - | 24.6 | 2 hours |
+| CVC-ZSL (Li et al., 2019) | 36.3 | 42.8 | 39.3 | 47.4 | 47.6 | 47.5 | 62.7 | 77.0 | 69.1 | 56.4 | 81.4 | 66.7 | 3 hours |
+| SGMA (Zhu et al., 2019) | - | - | - | 36.7 | 71.3 | 48.5 | - | - | - | 37.6 | 87.1 | 52.5 | - |
+| SGAL (Yu & Lee, 2019) | 42.9 | 31.2 | 36.1 | 47.1 | 44.7 | 45.9 | 52.7 | 75.7 | 62.2 | 55.1 | 81.2 | 65.6 | 50 min |
+| DASCN (Ni et al., 2019) | 42.4 | 38.5 | 40.3 | 45.9 | 59.0 | 51.6 | 59.3 | 68.0 | 63.4 | - | - | - | - |
+| F-VAEGAN-D2 (Xian et al., 2019) | 45.1 | 38.0 | 41.3 | 48.4 | 60.1 | 53.6 | - | - | - | 57.6 | 70.6 | 63.5 | - |
+| TF-VAEGAN (Narayan et al., 2020) | 45.6 | 40.7 | 43.0 | 52.8 | 64.7 | 58.1 | - | - | - | 59.8 | 75.1 | 66.6 | 1.75 hours |
+| EPGN (Yu et al., 2020) | - | - | - | 52.0 | 61.1 | 56.2 | 62.1 | 83.4 | 71.2 | 52.6 | 83.5 | 64.6 | - |
+| DVBE (Min et al., 2020) | 45.0 | 37.2 | 40.7 | 53.2 | 60.2 | 56.5 | - | - | - | 63.6 | 70.8 | 67.0 | - |
+| LsrGAN (Vyas et al., 2020) | 44.8 | 37.7 | 40.9 | 48.1 | 59.1 | 53.0 | - | - | - | 54.6 | 74.6 | 63.0 | 1.25 hours |
+| ZSML (Verma et al., 2020) | - | - | - | 60.0 | 52.1 | 55.7 | 57.4 | 71.1 | 63.5 | 58.9 | 74.6 | 65.8 | - |
+| 3-layer MLP | 31.4 | 40.4 | 35.3 | 45.2 | 48.4 | 46.7 | 57.0 | 69.9 | 62.8 | 54.5 | 72.2 | 62.1 | 30 seconds |
+| 3-layer MLP + Eq. (9) | 41.5 | 41.3 | 41.4 | 49.4 | 48.6 | 49.0 | 60.1 | 73.0 | 65.9 | 60.3 | 75.6 | 67.1 | |
+| 3-layer MLP + Eq. (10) | 24.1 | 37.9 | 29.5 | 45.3 | 44.5 | 44.9 | 58.4 | 70.7 | 64.0 | 52.1 | 72.0 | 60.5 | |
+| 3-layer MLP + CN (i.e. (9) + (10)) | 44.7 | 41.6 | 43.1 | 49.9 | 50.7 | 50.3 | 63.1 | 73.4 | 67.8 | 60.2 | 77.1 | 67.6 | |
+
+Table 3: Continual Zero-Shot Learning results with and without CN. Best scores are in bold blue.
+
+| Method | CUB mAUC↑ | CUB mH↑ | CUB mJA↑ | CUB Forgetting↓ | SUN mAUC↑ | SUN mH↑ | SUN mJA↑ | SUN Forgetting↓ |
+|---|---|---|---|---|---|---|---|---|
+| EWC-online (Schwarz et al., 2018) | 11.6 | 18.0 | 25.4 | 0.08 | 2.7 | 9.6 | 11.4 | 0.02 |
+| EWC-online + ClassNorm | 14.1 (+22%) | 23.3 (+29%) | 28.6 (+13%) | 0.04 (-50%) | 4.8 (+78%) | 14.3 (+49%) | 15.8 (+39%) | 0.03 (+50%) |
+| MAS-online (Aljundi et al., 2017) | 11.4 | 17.7 | 25.1 | 0.08 | 2.5 | 9.4 | 11.0 | 0.02 |
+| MAS-online + ClassNorm | 14.0 (+23%) | 23.8 (+34%) | 28.5 (+14%) | 0.05 (-37%) | 4.8 (+92%) | 14.2 (+51%) | 15.8 (+44%) | 0.03 (+50%) |
+| A-GEM (Chaudhry et al., 2019) | 10.4 | 17.3 | 23.6 | 0.16 | 2.4 | 9.6 | 10.8 | 0.05 |
+| A-GEM + ClassNorm | 13.8 (+33%) | 23.8 (+38%) | 28.2 (+19%) | 0.06 (-62%) | 4.6 (+92%) | 14.2 (+48%) | 15.4 (+43%) | 0.04 (-20%) |
+| Sequential | 9.7 | 17.2 | 22.6 | 0.17 | 2.3 | 9.3 | 10.4 | 0.05 |
+| Sequential + ClassNorm | 13.5 (+39%) | 23.0 (+34%) | 27.9 (+23%) | 0.05 (-71%) | 4.6 (+99%) | 14.0 (+51%) | 15.3 (+47%) | 0.03 (-40%) |
+| Multi-task | 23.4 | 24.3 | 39.6 | 0.00 | 4.2 | 12.5 | 14.9 | 0.00 |
+| Multi-task + ClassNorm | 26.5 (+13%) | 30.0 (+23%) | 42.6 (+8%) | 0.01 | 6.2 (+48%) | 14.8 (+18%) | 18.5 (+24%) | 0.01 |
+
+Model and optimization. We follow the cross-validation procedure proposed by Chaudhry et al. (2019). Namely, for each run we allocate the first 3 tasks for the hyperparameter search, validating on their test data. After that, we reinitialize the model from scratch, discard the first 3 tasks, and train on the rest of the data. This reduces the effective number of tasks by 3, but provides a fairer way to perform cross-validation (Chaudhry et al., 2019). We use an ImageNet-pretrained ResNet-18 model as the image encoder $E(\boldsymbol{x})$ , which is optimized jointly with $P_{\theta}$ . For the CZSL experiments, $P_{\theta}$ is a 2-layer MLP, and we test the proposed CN procedure. All the details can be found in Appendix G.
+
+We test our approach on 3 continual learning methods: EWC (Kirkpatrick et al., 2017), MAS (Aljundi et al., 2017), and A-GEM (Chaudhry et al., 2019), and 2 baselines: a Multi-Task model and a Sequential model. EWC and MAS fight forgetting by regularizing the weight update for a new task in such a way that the important parameters are preserved. A-GEM maintains a memory bank of previously encountered examples and performs each gradient step in such a manner that the loss on them does not increase. Multi-Task is an "upper bound" baseline: a model which has access to all the previously encountered data and trains on it jointly. Sequential is a "lower bound" baseline: a model which does not employ any anti-forgetting technique at all. We give each model an equal number of update iterations on each task. This makes the comparison of the Multi-Task baseline to the other methods fairer: otherwise, since its dataset grows with time, it would make $t$ times more updates on task $t$ than the other methods.
+
+Evaluation and discussion. Results for the proposed metrics mU, mS, mH, mAUC, mJA, and the forgetting measure from Lopez-Paz & Ranzato (2017) are reported in Table 3 and Appendix G. As one can observe, class normalization boosts the performance of classical regularization-based and replay-based continual learning methods by up to $100\%$ and leads to less forgetting. However, we are still far behind traditional supervised classifiers, as one can infer from the mJA metric. For example, some state-of-the-art approaches on CUB surpass $90\%$ accuracy (Ge et al., 2019), which is drastically higher than what the considered approaches achieve.
+
+# 6 CONCLUSION
+
We investigated and developed normalization techniques for zero-shot learning. We provided theoretical grounding for two popular tricks, normalize+scale and attribute normalization, and showed both provably and in practice that they aid training by controlling a signal's variance during the forward pass. Next, we demonstrated that they are not enough to protect a signal from fluctuations in a deep ZSL model. This motivated us to develop class normalization: a new normalization scheme that fixes the problem and obtains SotA results on 4 standard ZSL datasets in terms of both quantitative performance and training speed. We also showed that ZSL attribute embedders tend to have a more irregular loss landscape than traditional classifiers and that class normalization partially remedies this issue. Finally, we generalized ZSL to the broader setting of continual zero-shot learning and proposed a set of principled metrics and baselines for it. We believe that our work will spur the development of stronger zero-shot systems and motivate their deployment in real-world applications.
+
+# REFERENCES
+
+Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, and Bernt Schiele. Evaluation of output embeddings for fine-grained image classification. In CVPR, 2015.
+Zeynep Akata, Mateusz Malinowski, Mario Fritz, and Bernt Schiele. Multi-cue zero-shot learning with strong supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 59-68, 2016a.
+Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. Label-embedding for image classification. PAMI, 38(7):1425-1438, 2016b.
+Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. CoRR, abs/1711.09601, 2017. URL http://arxiv.org/abs/1711.09601.
+Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
+Sean Bell, C. Lawrence Zitnick, Kavita Bala, and Ross Girshick. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
+Aurelien Bellet, Amaury Habrard, and Marc Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013.
+Oscar Chang, Lampros Flokas, and Hod Lipson. Principled weight initialization for hypernetworks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=H1lma24tPB.
+Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, and Fei Sha. Synthesized classifiers for zero-shot learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016a.
+Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, and Fei Sha. Synthesized classifiers for zero-shot learning. In CVPR, pp. 5327-5336, 2016b.
+Soravit Changpinyo, Wei-Lun Chao, and Fei Sha. Predicting visual exemplars of unseen classes for zero-shot learning. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
+Wei-Lun Chao, Soravit Changpinyo, Boqing Gong, and Fei Sha. An empirical study and analysis of generalized zero-shot learning for object recognition in the wild. In ECCV (2), pp. 52-68, 2016. URL https://doi.org/10.1007/978-3-319-46475-6_4.
+Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with a-GEM. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Hkf2_sC5FX.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Hal Daume III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 1597-1607. PMLR, 13-18 Jul 2020. URL http://proceedings.mlr.press/v119/chen20j.html.
+Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. arXiv preprint arXiv:1703.04933, 2017.
+Mohamed Elhoseiny and Mohamed Elfeki. Creativity inspired zero-shot learning. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+Mohamed Elhoseiny, Babak Saleh, and Ahmed Elgammal. Write a classifier: Zero-shot learning using purely textual descriptions. In The IEEE International Conference on Computer Vision (ICCV), December 2013.
+Mohamed Elhoseiny, Francesca Babiloni, Rahaf Aljundi, Marcus Rohrbach, Manohar Paluri, and Tinne Tuytelaars. Exploring the challenges towards lifelong fact learning. CoRR, abs/1812.10524, 2018. URL http://arxiv.org/abs/1812.10524.
+Ali Farhadi, Ian Endres, Derek Hoiem, and David Forsyth. Describing objects by their attributes. In CVPR 2009., pp. 1778-1785. IEEE, 2009.
+
+Rafael Felix, Vijay B. G. Kumar, Ian Reid, and Gustavo Carneiro. Multi-modal cycle-consistent generalized zero-shot learning. In The European Conference on Computer Vision (ECCV), September 2018.
+Vittorio Ferrari and Andrew Zisserman. Learning visual attributes. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis (eds.), Advances in Neural Information Processing Systems 20, pp. 433-440. Curran Associates, Inc., 2008. URL http://papers.nips.cc/paper/3217-learning-visual-attributes.pdf.
+Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Marc Aurelio Ranzato, and Tomas Mikolov. Devise: A deep visual-semantic embedding model. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 26, pp. 2121-2129. Curran Associates, Inc., 2013. URL http://papers.nips.cc/paper/5204-devise-a-deep-visual-semantic-embedding-model.pdf.
+Weifeng Ge, Xiangru Lin, and Yizhou Yu. Weakly supervised complementary parts models for fine-grained image classification from the bottom up. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
+Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4367-4375, 2018.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Yee Whye Teh and Mike Titterington (eds.), Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pp. 249-256, Chia Laguna Resort, Sardinia, Italy, 13-15 May 2010. PMLR. URL http://proceedings.mlr.press/v9/glorot10a.html.
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014.
+Jianzhu Guo, Xiangyu Zhu, Chenxu Zhao, Dong Cao, Zhen Lei, and Stan Z. Li. Learning meta face recognition in unseen domains. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
+Y. Guo, G. Ding, J. Han, and Y. Gao. Zero-shot learning with transferred samples. IEEE Transactions on Image Processing, 26(7):3277-3290, 2017.
+Yuchen Guo, Guiguang Ding, Jungong Han, and Yue Gao. Synthesizing samples for zero-shot learning. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pp. 1774-1780, 2017. doi: 10.24963/ijcai.2017/246. URL https://doi.org/10.24963/ijcai.2017/246.
+David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
+K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1026-1034, 2015.
+Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1-42, 1997.
+Wei Hu, Lechao Xiao, and Jeffrey Pennington. Provable benefit of orthogonal initialization in optimizing deep linear networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rkgqN1SYvr.
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 448-456, Lille, France, 07-09 Jul 2015. PMLR. URL http://proceedings.mlr.press/v37/ioffe15.html.
+David Isele, Mohammad Rostami, and Eric Eaton. Using task features for zero-shot knowledge transfer in lifelong learning. In International Joint Conferences on Artificial Intelligence, 2016.
+Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
+Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
+
+Diederik P Kingma and Max Welling. Auto-encoding variational bayes. stat, 1050:1, 2014.
+James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521-3526, 2017.
+Philipp Krahenbuhl, Carl Doersch, Jeff Donahue, and Trevor Darrell. Data-dependent initializations of convolutional neural networks. arXiv preprint arXiv:1511.06856, 2015.
+Vinay Kumar Verma, Gundeep Arora, Ashish Mishra, and Piyush Rai. Generalized zero-shot learning via synthesized examples. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4281-4289, 2018.
+Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, pp. 951-958. IEEE, 2009.
+Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. Attribute-based classification for zero-shot visual object categorization. IEEE transactions on pattern analysis and machine intelligence, 36(3):453-465, 2013a.
+Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. Attribute-based classification for zero-shot visual object categorization. IEEE transactions on pattern analysis and machine intelligence, 36(3):453-465, 2013b.
+Jimmy Lei Ba, Kevin Swersky, Sanja Fidler, et al. Predicting deep zero-shot convolutional neural networks using textual descriptions. In ICCV, 2015.
+Kai Li, Martin Renqiang Min, and Yun Fu. Rethinking zero-shot learning: A conditional visual classification perspective. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+Shichen Liu, Mingsheng Long, Jianmin Wang, and Michael I Jordan. Generalized zero-shot learning with deep calibration network. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 2005-2015. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7471-generalized-zero-shot-learning-with-deep-calibration-network.pdf.
+David Lopez-Paz and Marc Aurelio Ranzato. Gradient episodic memory for continual learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 6467-6476. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7225-gradient-episodic-memory-for-continual-learning.pdf.
+Chunjie Luo, Jianfeng Zhan, Xiaohe Xue, Lei Wang, Rui Ren, and Qiang Yang. Cosine normalization: Using cosine similarity instead of dot product in neural networks. In International Conference on Artificial Neural Networks, pp. 382-391. Springer, 2018.
+Shaobo Min, Hantao Yao, Hongtao Xie, Chaoqun Wang, Zheng-Jun Zha, and Yongdong Zhang. Domain-aware visual bias eliminating for generalized zero-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
+Sanath Narayan, Akshita Gupta, Fahad Shahbaz Khan, Cees GM Snoek, and Ling Shao. Latent embedding feedback and discriminative features for zero-shot classification. arXiv preprint arXiv:2003.07833, 2020.
+Jian Ni, Shanghang Zhang, and Haiyong Xie. Dual adversarial semantics-consistent network for generalized zero-shot learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 6146-6157. Curran Associates, Inc., 2019.
+Mohammad Norouzi, Tomas Mikolov, Samy Bengio, Yoram Singer, Jonathon Shlens, Andrea Frome, Greg S Corrado, and Jeffrey Dean. Zero-shot learning by convex combination of semantic embeddings. arXiv preprint arXiv:1312.5650, 2013.
+Genevieve Patterson, Chen Xu, Hang Su, and James Hays. The sun attribute database: Beyond categories for deeper scene understanding. International Journal of Computer Vision, 108(1-2):59-81, 2014.
+Bernardino Romera-Paredes and Philip Torr. An embarrassingly simple approach to zero-shot learning. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 2152-2161, Lille, France, 07-09 Jul 2015. PMLR. URL http://proceedings.mlr.press/v37/romera-paredes15.html.
+
+Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 2483-2493. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7515-how-does-batch-normalization-help-optimization.pdf.
+Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. In International Conference on Machine Learning, pp. 4528-4537, 2018.
+Yichun Shi, Xiang Yu, Kihyuk Sohn, Manmohan Chandraker, and Anil K. Jain. Towards universal representation learning for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
+Saurabh Singh and Shankar Krishnan. Filter response normalization layer: Eliminating batch dependence in the training of deep neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
+Ivan Skorokhodov and Mikhail S. Burtsev. Loss landscape sightseeing with multi-point optimization. CoRR, abs/1910.03867, 2019. URL http://arxiv.org/abs/1910.03867.
+Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pp. 1857-1865, 2016.
+Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
+Gido M. van de Ven and Andreas S. Tolias. Three scenarios for continual learning. CoRR, abs/1904.07734, 2019. URL http://arxiv.org/abs/1904.07734.
+Vinay Kumar Verma, Dhanajit Brahma, and Piyush Rai. Meta-learning for generalized zero-shot learning. In AAAI, 2020.
+Maunil R Vyas, Hemanth Venkateswara, and Sethuraman Panchanathan. Leveraging seen and unseen semantic relationships for generative zero-shot learning. In European Conference on Computer Vision (ECCV), 2020.
+Kun Wei, Cheng Deng, and Xu Yang. Lifelong zero-shot learning. In Christian Bessiere (ed.), Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pp. 551-557. International Joint Conferences on Artificial Intelligence Organization, 7 2020a. URL https://doi.org/10.24963/ijcai.2020/77. Main track.
Kun Wei, Cheng Deng, and Xu Yang. Lifelong zero-shot learning. In IJCAI, 2020b.
+P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
+Yuxin Wu and Kaiming He. Group normalization. In The European Conference on Computer Vision (ECCV), September 2018.
+Yongqin Xian, Christoph H Lampert, Bernt Schiele, and Zeynep Akata. Zero-shot learning-a comprehensive evaluation of the good, the bad and the ugly. PAMI, 2018a.
+Yongqin Xian, Tobias Lorenz, Bernt Schiele, and Zeynep Akata. Feature generating networks for zero-shot learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018b.
+Yongqin Xian, Saurabh Sharma, Bernt Schiele, and Zeynep Akata. F-vaegan-d2: A feature generating framework for any-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
+Mang Ye, Jianbing Shen, Gaojie Lin, Tao Xiang, Ling Shao, and Steven C. H. Hoi. Deep learning for person re-identification: A survey and outlook, 2020.
+Hyeonwoo Yu and Beomhee Lee. Zero-shot learning via simultaneous generating and learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 46-56. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/8300-zero-shot-learning-via-simultaneous-generating-and-learning.pdf.
+
+Yunlong Yu, Zhong Ji, Jungong Han, and Zhongfei Zhang. Episode-based prototype generating network for zero-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
+Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang. Cross-modal contrastive learning for text-to-image generation. arXiv preprint arXiv:2101.04702, 2021.
+Ji Zhang, Yannis Kalantidis, Marcus Rohrbach, Manohar Paluri, Ahmed Elgammal, and Mohamed Elhoseiny. Large-scale visual relationship understanding. In AAAI, 2019.
+Li Zhang, Tao Xiang, and Shaogang Gong. Learning a deep embedding model for zero-shot learning. In CVPR, 2016.
+Li Zhang, Tao Xiang, and Shaogang Gong. Learning a deep embedding model for zero-shot learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
+Ziming Zhang and Venkatesh Saligrama. Zero-shot learning via semantic similarity embedding. In ICCV, pp. 4166-4174, 2015.
+Yizhe Zhu, Mohamed Elhoseiny, Bingchen Liu, Xi Peng, and Ahmed Elgammal. A generative adversarial approach for zero-shot learning from noisy texts. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
+Yizhe Zhu, Jianwen Xie, Zhiqiang Tang, Xi Peng, and Ahmed Elgammal. Semantic-guided multi-attention localization for zero-shot learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 14943-14953. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/9632-semantic-guided-multi-attention-localization-for-zero-shot-learning.pdf.
+
+# A “NORMALIZE + SCALE” TRICK
+
As discussed in Section 3.2, the "normalize+scale" trick changes the logits computation from the usual dot product to a scaled cosine similarity:
+
+$$
+\hat {y} _ {c} = \left\langle \boldsymbol {z}, \boldsymbol {p} _ {c} \right\rangle \Longrightarrow \hat {y} _ {c} = \left\langle \gamma \frac {\boldsymbol {z}}{\| \boldsymbol {z} \|}, \gamma \frac {\boldsymbol {p} _ {c}}{\| \boldsymbol {p} _ {c} \|} \right\rangle , \tag {13}
+$$
+
+where $\hat{y}_c$ is the logit value for class $c$ ; $\mathbf{z}$ is an image feature vector; $\pmb{p}_c$ is the attribute embedding for class $c$ :
+
+$$
\boldsymbol {p} _ {c} = P _ {\boldsymbol {\theta}} \left(\boldsymbol {a} _ {c}\right) = V H _ {\boldsymbol {\varphi}} \left(\boldsymbol {a} _ {c}\right) \tag {14}
+$$
+
and $\gamma$ is the scaling hyperparameter. Let us denote the penultimate hidden representation of $P_{\theta}$ as $\pmb{h}_c = H_{\varphi}(\pmb{a}_c)$ ; note that in the case of a linear $P_{\theta}$ we have $\pmb{h}_c = \pmb{a}_c$ . Let us also denote the dimensionalities of $\pmb{z}$ and $\pmb{h}_c$ by $d_z$ and $d_h$ , respectively.
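In this notation, the logit computation of Eq. (13) can be sketched as follows (a minimal NumPy illustration of ours; the function name and shapes are not from the paper):

```python
import numpy as np

def normalize_scale_logits(z: np.ndarray, P: np.ndarray, gamma: float) -> np.ndarray:
    """Compute y_hat_c = <gamma * z/||z||, gamma * p_c/||p_c||> for all classes.

    z: image feature vector of shape (d_z,)
    P: stacked attribute embeddings p_c of shape (num_classes, d_z)
    """
    z_unit = z / np.linalg.norm(z)
    P_unit = P / np.linalg.norm(P, axis=1, keepdims=True)
    # Each logit equals gamma^2 * cos(z, p_c), so it is bounded by gamma^2.
    return gamma ** 2 * (P_unit @ z_unit)

rng = np.random.default_rng(0)
logits = normalize_scale_logits(rng.normal(size=64), rng.normal(size=(10, 64)), gamma=5.0)
assert np.all(np.abs(logits) <= 25.0 + 1e-9)
```

The bounded range (at most $\gamma^2$ in absolute value) is exactly what makes the variance analysis below tractable.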
+
+# A.1 ASSUMPTIONS
+
+To derive the approximate variance formula for $\hat{y}_c$ we will use the following assumptions and approximate identities:
+
(i) All weights in matrix $V$ :
+
+- are independent from each other and from $z_{k}$ and $h_{c,i}$ (for all $k, i$ );
+- $\mathbb{E}[V_{ij}] = 0$ for all $i,j$ ;
+- $\operatorname{Var}[V_{ij}] = s_v$ for all $i, j$ .
+
(ii) There exists $\epsilon > 0$ such that the $(2 + \epsilon)$ -th central moment exists for each of $h_{c,1}, \ldots, h_{c,d_h}$ . We require this technical condition to be able to apply the central limit theorem for variables with non-equal variances.
(iii) All $h_{c,i}, h_{c,j}$ are independent of each other for $i \neq j$ . This is the least realistic assumption on the list, because in the case of a linear $P_{\theta}$ it would be equivalent to independence of the coordinates of the attribute vector $\mathbf{a}_c$ . We do not use it in other statements, and, as we show in Appendix A.3, it works well in practice.
(iv) All $p_{c,i}, p_{c,j}$ are independent of each other for $i \neq j$ . This is also a strong assumption, but a safer one in practice (for example, it is easy to demonstrate that $\operatorname{Cov}[p_{c,i}, p_{c,j}] = 0$ for $i \neq j$ ). We use it only in the derivation of the approximate variance formula for normalize+scale.
(v) $z \sim \mathcal{N}(\mathbf{0}, s^z I)$ . This property is safe to assume since $z$ is usually a hidden representation of a deep neural network, and each of its coordinates is computed as a vector-vector product between independent vectors, which results in an approximately normal distribution (see the proof below for $p_c \rightsquigarrow \mathcal{N}(\mathbf{0}, s^p I)$ ).
+(vi) For $\pmb {\xi}\in \{\pmb {z},\pmb{p}_c\}$ we will use the approximations:
+
+$$
+\mathbb {E} \left[ \xi_ {i} \cdot \frac {1}{\| \boldsymbol {\xi} \| _ {2}} \right] \approx \mathbb {E} \left[ \xi_ {i} \right] \cdot \mathbb {E} \left[ \frac {1}{\| \boldsymbol {\xi} \| _ {2}} \right] \quad \text {a n d} \quad \mathbb {E} \left[ \xi_ {i} \xi_ {j} \cdot \frac {1}{\| \boldsymbol {\xi} \| _ {2} ^ {2}} \right] \approx \mathbb {E} \left[ \xi_ {i} \xi_ {j} \right] \cdot \mathbb {E} \left[ \frac {1}{\| \boldsymbol {\xi} \| _ {2} ^ {2}} \right] \tag {15}
+$$
+
This approximation is safe to use when the dimensionality of $\xi$ is large enough (for neural networks this is definitely the case), because the contribution of each individual $\xi_{i}$ to the norm $\| \pmb {\xi}\| _2$ becomes negligible.
+
Assumptions (i)-(v) are typical for this kind of analysis and can also be found in Glorot & Bengio (2010); He et al. (2015); Chang et al. (2020). Assumption (vi), as noted, holds only for high-dimensional inputs, but this is exactly our case, and we validate in Figure 2 that using it leads to a decent approximation.
+
+# A.2 FORMAL STATEMENT AND THE PROOF
+
+Statement 1 (Normalize+scale trick). If conditions (i)-(vi) hold, then:
+
+$$
+\operatorname {V a r} \left[ \hat {y} _ {c} \right] = \operatorname {V a r} \left[ \left\langle \gamma \frac {\boldsymbol {z}}{\| \boldsymbol {z} \|}, \gamma \frac {\boldsymbol {p} _ {c}}{\| \boldsymbol {p} _ {c} \|} \right\rangle \right] \approx \frac {\gamma^ {4} d _ {z}}{(d _ {z} - 2) ^ {2}} \tag {16}
+$$
+
+Proof. First of all, we need to show that $p_{c,i} \rightsquigarrow \mathcal{N}(0, s^p)$ for some constant $s^p$ . Since
+
+$$
+p _ {c, i} = \sum_ {j = 1} ^ {d _ {h}} V _ {i, j} h _ {c, j} \tag {17}
+$$
+
+from assumption (i) we can easily compute its mean:
+
+$$
+\begin{array}{l} \mathbb {E} \left[ p _ {c, i} \right] = \mathbb {E} \left[ \sum_ {j = 1} ^ {d _ {h}} V _ {i, j} h _ {c, j} \right] (18) \\ = \sum_ {j = 1} ^ {d _ {h}} \mathbb {E} \left[ V _ {i, j} h _ {c, j} \right] (19) \\ = \sum_ {j = 1} ^ {d _ {h}} \mathbb {E} \left[ V _ {i, j} \right] \cdot \mathbb {E} \left[ h _ {c, j} \right] (20) \\ = \sum_ {j = 1} ^ {d _ {h}} 0 \cdot \mathbb {E} \left[ h _ {c, j} \right] (21) \\ = 0. (22) \\ \end{array}
+$$
+
+and the variance:
+
+$$
+\begin{array}{l} \operatorname {V a r} \left[ p _ {c, i} \right] = \mathbb {E} \left[ p _ {c, i} ^ {2} \right] - \left(\mathbb {E} \left[ p _ {c, i} \right]\right) ^ {2} (23) \\ = \mathbb {E} \left[ p _ {c, i} ^ {2} \right] (24) \\ = \mathbb {E} \left[ \left(\sum_ {j = 1} ^ {d _ {h}} V _ {i, j} h _ {c, j}\right) ^ {2} \right] (25) \\ = \mathbb {E} \left[ \sum_ {j, k = 1} ^ {d _ {h}} V _ {i, j} V _ {i, k} h _ {c, j} h _ {c, k} \right] (26) \\ \end{array}
+$$
+
+Using $\mathbb{E}\left[V_{i,j}V_{i,k}\right] = 0$ for $k\neq j$ , we have:
+
+$$
+\begin{array}{l} = \mathbb {E} \left[ \sum_ {j} ^ {d _ {h}} V _ {i, j} ^ {2} h _ {c, j} ^ {2} \right] (27) \\ = \sum_ {j} ^ {d _ {h}} \mathbb {E} \left[ V _ {i, j} ^ {2} \right] \mathbb {E} \left[ h _ {c, j} ^ {2} \right] (28) \\ \end{array}
+$$
+
+Since $s_v = \operatorname{Var}[V_{i,j}] = \mathbb{E}[V_{i,j}^2] - \mathbb{E}[V_{i,j}]^2 = \mathbb{E}[V_{i,j}^2]$ , we have:
+
+$$
\begin{array}{l} = \sum_ {j = 1} ^ {d _ {h}} s _ {v} \mathbb {E} \left[ h _ {c, j} ^ {2} \right] (29) \\ = s _ {v} \mathbb {E} \left[ \sum_ {j = 1} ^ {d _ {h}} h _ {c, j} ^ {2} \right] (30) \\ = s _ {v} \mathbb {E} \left[ \| \boldsymbol {h} _ {c} \| _ {2} ^ {2} \right] (31) \\ = s ^ {p} (32) \\ \end{array}
+$$
+
+Now, from the assumptions (ii) and (iii) we can apply Lyapunov's Central Limit theorem to $p_{c,i}$ , which gives us:
+
+$$
+\frac {1}{\sqrt {s ^ {p}}} p _ {c, i} \rightsquigarrow \mathcal {N} (0, 1) \tag {34}
+$$
+
+For finite $d_h$ , this allows us say that:
+
+$$
+p _ {c, i} \sim \mathcal {N} \left(0, s ^ {p}\right) \tag {35}
+$$
+
+Now note that from (vi) we have:
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \hat {y} _ {c} \right] = \mathbb {E} \left[ \left\langle \gamma \frac {\boldsymbol {z}}{\| \boldsymbol {z} \|}, \gamma \frac {\boldsymbol {p} _ {c}}{\| \boldsymbol {p} _ {c} \|} \right\rangle \right] (36) \\ = \gamma^ {2} \left\langle \mathbb {E} \left[ \frac {\boldsymbol {z}}{\| \boldsymbol {z} \|} \right], \mathbb {E} \left[ \frac {\boldsymbol {p} _ {\mathrm {c}}}{\| \boldsymbol {p} _ {\mathrm {c}} \|} \right] \right\rangle (37) \\ \approx \gamma^ {2} \mathbb {E} \left[ \frac {1}{\| \boldsymbol {z} \|} \right] \cdot \mathbb {E} \left[ \frac {1}{\| \boldsymbol {p} _ {c} \|} \right] \cdot \langle \mathbb {E} [ \boldsymbol {z} ], \mathbb {E} [ \boldsymbol {p} _ {c} ] \rangle (38) \\ = \gamma^ {2} \mathbb {E} \left[ \frac {1}{\| \boldsymbol {z} \|} \right] \cdot \mathbb {E} \left[ \frac {1}{\| \boldsymbol {p} _ {c} \|} \right] \cdot \langle \mathbf {0}, \mathbf {0} \rangle (39) \\ = 0 (40) \\ \end{array}
+$$
+
Since $\pmb{\xi} \sim \mathcal{N}(\mathbf{0}, s^{\xi} I)$ for $\pmb{\xi} \in \{\pmb{z}, \pmb{p}_c\}$ , the quantity $\frac{d_{\xi}}{\|\pmb{\xi}\|_2^2}$ follows a scaled inverse chi-squared distribution with inverse variance $\tau = 1 / s^{\xi}$ , whose expectation has a known closed form:
+
+$$
+\mathbb {E} \left[ \frac {d _ {\xi}}{\| \boldsymbol {\xi} \| _ {2} ^ {2}} \right] = \frac {\tau d _ {\xi}}{d _ {\xi} - 2} = \frac {d _ {\xi}}{s ^ {\xi} (d _ {\xi} - 2)} \tag {41}
+$$
+
Now we are left with using approximation (vi) and plugging the above expression into the variance formula:
+
+$$
\begin{array}{l} \operatorname {Var} \left[ \hat {y} _ {c} \right] = \operatorname {Var} \left[ \left\langle \gamma \frac {\boldsymbol {z}}{\| \boldsymbol {z} \|}, \gamma \frac {\boldsymbol {p} _ {c}}{\| \boldsymbol {p} _ {c} \|} \right\rangle \right] (42) \\ = \mathbb {E} \left[ \left\langle \gamma \frac {\boldsymbol {z}}{\| \boldsymbol {z} \|}, \gamma \frac {\boldsymbol {p} _ {c}}{\| \boldsymbol {p} _ {c} \|} \right\rangle^ {2} \right] - \mathbb {E} \left[ \left\langle \gamma \frac {\boldsymbol {z}}{\| \boldsymbol {z} \|}, \gamma \frac {\boldsymbol {p} _ {c}}{\| \boldsymbol {p} _ {c} \|} \right\rangle \right] ^ {2} (43) \\ \approx \mathbb {E} \left[ \left\langle \gamma \frac {\boldsymbol {z}}{\| \boldsymbol {z} \|}, \gamma \frac {\boldsymbol {p} _ {c}}{\| \boldsymbol {p} _ {c} \|} \right\rangle^ {2} \right] - 0 (44) \\ = \gamma^ {4} \mathbb {E} \left[ \frac {\left(\boldsymbol {z} ^ {\top} \boldsymbol {p} _ {c}\right) ^ {2}}{\| \boldsymbol {z} \| ^ {2} \| \boldsymbol {p} _ {c} \| ^ {2}} \right] (45) \\ \approx \gamma^ {4} \mathbb {E} \left[ \left(\boldsymbol {z} ^ {\top} \boldsymbol {p} _ {c}\right) ^ {2} \right] \cdot \mathbb {E} \left[ \frac {1}{d _ {z}} \cdot \frac {d _ {z}}{\| \boldsymbol {z} \| ^ {2}} \right] \cdot \mathbb {E} \left[ \frac {1}{d _ {z}} \cdot \frac {d _ {z}}{\| \boldsymbol {p} _ {c} \| ^ {2}} \right] (46) \\ = \frac {\gamma^ {4}}{d _ {z} ^ {2}} \cdot \underset {\boldsymbol {p} _ {c}} {\mathbb {E}} \left[ \boldsymbol {p} _ {c} ^ {\top} \underset {\boldsymbol {z}} {\mathbb {E}} \left[ \boldsymbol {z} \boldsymbol {z} ^ {\top} \right] \boldsymbol {p} _ {c} \right] \cdot \frac {d _ {z}}{s ^ {z} (d _ {z} - 2)} \cdot \frac {d _ {z}}{s ^ {p} (d _ {z} - 2)} (47) \\ = \gamma^ {4} \underset {\boldsymbol {p} _ {c}} {\mathbb {E}} \left[ \boldsymbol {p} _ {c} ^ {\top} s ^ {z} I _ {d _ {z}} \boldsymbol {p} _ {c} \right] \cdot \frac {1}{s ^ {z} s ^ {p} (d _ {z} - 2) ^ {2}} (48) \\ = \gamma^ {4} \mathbb {E} \left[ \sum_ {i = 1} ^ {d _ {z}} p _ {c, i} ^ {2} \right] \cdot \frac {1}{s ^ {p} \left(d _ {z} - 2\right) ^ {2}} (49) \\ = \gamma^ {4} d _ {z} \cdot s ^ {p} \cdot \frac {1}{s ^ {p} (d _ {z} - 2) ^ {2}} (50) \\ = \frac {\gamma^ {4} d _ {z}}{\left(d _ {z} - 2\right) ^ {2}} (51) \\ \end{array}
+$$
+
+# A.3 EMPIRICAL VALIDATION
+
In this subsection, we validate the derived approximation empirically (for the empirical validation of the variance analysis, see Appendix E). We perform two experiments:
+
- Synthetic data. We sample $\mathbf{x} \sim \mathcal{N}(\mathbf{0}, I_d)$ and $\mathbf{y} \sim \mathcal{N}(\mathbf{0}, I_d)$ for different dimensionalities $d = 32, 64, 128, \dots, 8192$ and compute the cosine similarity:
+
+$$
+z = \left\langle \frac {\gamma \boldsymbol {x}}{\| \boldsymbol {x} \|}, \frac {\gamma \boldsymbol {y}}{\| \boldsymbol {y} \|} \right\rangle \tag {53}
+$$
+
After that, we estimate $\operatorname{Var}[z]$ across different samples. The result is presented in Figure 2a.
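This synthetic experiment can be reproduced in a few lines (our own sketch; the sample count and the subset of dimensionalities are illustrative, and the agreement tightens as $d$ grows):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, n = 1.0, 10_000

for d in (256, 1024):
    x = rng.normal(size=(n, d))
    y = rng.normal(size=(n, d))
    cos = np.sum(x * y, axis=1) / (np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1))
    z = gamma ** 2 * cos
    # Predicted variance from Eq. (16): gamma^4 * d / (d - 2)^2.
    predicted = gamma ** 4 * d / (d - 2) ** 2
    assert abs(np.var(z) - predicted) / predicted < 0.1
```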
+
Figure 2: Empirical validation of the derived variance approximation (4): (a) on synthetic data; (b) on real-world data and a real-world model.
+
+
+
- Real data. We take ImageNet-pretrained ResNet101 features and real (unnormalized) class attributes for the SUN, CUB, AwA1, AwA2, and aPY datasets. We then initialize a random 2-layer MLP with 512 hidden units and compute real logits (without scaling). Finally, we compute the mean empirical variance and the corresponding standard deviation over different batches of size 4096. The resulting boxplots are presented in Figure 2b.
+
In both experiments, we computed the logits with $\gamma = 1$ . As one can see, despite our demanding assumptions, the predicted variance formula is accurate for both synthetic and real-world data.
+
+# B ATTRIBUTES NORMALIZATION
+
We will use the same notation as in Appendix A. The attribute normalization trick rescales the attributes to unit $L_{2}$ -norm:
+
+$$
+\boldsymbol {a} _ {c} \longmapsto \frac {\boldsymbol {a} _ {c}}{\| \boldsymbol {a} _ {c} \| _ {2}} \tag {54}
+$$
+
We will show that it helps to preserve the variance of the pre-logit computation when the attribute embedder $P_{\theta}$ is linear:
+
+$$
\tilde {y} _ {c} = \boldsymbol {z} ^ {\top} P _ {\boldsymbol {\theta}} (\boldsymbol {a} _ {c}) = \boldsymbol {z} ^ {\top} V \boldsymbol {a} _ {c} \tag {55}
+$$
+
+For a non-linear attribute embedder this no longer holds, which is why we need the proposed initialization scheme.
+
+# B.1 ASSUMPTIONS
+
+We will need the following assumptions:
+
+(i) Feature vector $z$ has the properties:
+
+- $\mathbb{E}[z] = 0$
+- $\operatorname{Var}[z_i] = s^z$ for all $i = 1, \dots, d_z$ .
+- All $z_{i}$ are independent from each other and from all $p_{c,j}$ .
+
+(ii) The entries of weight matrix $V$ are initialized with Xavier fan-out mode, i.e. $\mathrm{Var}\left[V_{ij}\right] = 1 / d_z$, and are independent from each other.
+
+Note that we do not make any assumptions on $\pmb{a}_c$. This is the core difference from Chang et al. (2020) and is an essential condition for ZSL (see Appendix H).
+
+# B.2 FORMAL STATEMENT AND THE PROOF
+
+Statement 2 (Attributes normalization for a linear embedder). If assumptions (i)-(ii) are satisfied and $\| \pmb{a}_{\mathrm{c}}\|_{2} = 1$ then:
+
+$$
+\operatorname {V a r} \left[ \tilde {y} _ {c} \right] = \operatorname {V a r} \left[ z _ {i} \right] = s ^ {z} \tag {56}
+$$
+
+Proof. First, note that:
+
+$$
+\mathbb{E}\left[\tilde{y}_c\right] = \mathbb{E}\left[\boldsymbol{z}^{\top} V \boldsymbol{a}_c\right] = \mathbb{E}_{V, \boldsymbol{a}_c}\left[\mathbb{E}_{\boldsymbol{z}}\left[\boldsymbol{z}^{\top}\right] V \boldsymbol{a}_c\right] = \mathbb{E}\left[\boldsymbol{0}^{\top} V \boldsymbol{a}_c\right] = 0 \tag{57}
+$$
+
+Then the variance for $\tilde{y}_c$ has the following form:
+
+$$
+\begin{aligned}
+\operatorname{Var}\left[\tilde{y}_c\right] &= \mathbb{E}\left[\tilde{y}_c^2\right] - \mathbb{E}\left[\tilde{y}_c\right]^2 && (58) \\
+&= \mathbb{E}\left[\tilde{y}_c^2\right] && (59) \\
+&= \mathbb{E}\left[\left(\boldsymbol{z}^{\top} V \boldsymbol{a}_c\right)^2\right] && (60) \\
+&= \mathbb{E}\left[\boldsymbol{a}_c^{\top} V^{\top} \boldsymbol{z} \boldsymbol{z}^{\top} V \boldsymbol{a}_c\right] && (61) \\
+&= \mathbb{E}_{\boldsymbol{a}_c}\left[\mathbb{E}_V\left[\mathbb{E}_{\boldsymbol{z}}\left[\boldsymbol{a}_c^{\top} V^{\top} \boldsymbol{z} \boldsymbol{z}^{\top} V \boldsymbol{a}_c\right]\right]\right] && (62) \\
+&= \mathbb{E}_{\boldsymbol{a}_c}\left[\boldsymbol{a}_c^{\top} \mathbb{E}_V\left[V^{\top} \mathbb{E}_{\boldsymbol{z}}\left[\boldsymbol{z} \boldsymbol{z}^{\top}\right] V\right] \boldsymbol{a}_c\right] && (63) \\
+&= s^z \mathbb{E}_{\boldsymbol{a}_c}\left[\boldsymbol{a}_c^{\top} \mathbb{E}_V\left[V^{\top} V\right] \boldsymbol{a}_c\right] && (64) \\
+&= s^z s^v d_z \mathbb{E}_{\boldsymbol{a}_c}\left[\boldsymbol{a}_c^{\top} \boldsymbol{a}_c\right] && (65)
+\end{aligned}
+$$
+
+Since $s^v = 1 / d_z$, this equals:
+
+$$
+= s ^ {z} \mathbb {E} \left[ \| \boldsymbol {a} _ {c} \| _ {2} ^ {2} \right] \tag {66}
+$$
+
+and since attributes are normalized, i.e. $\| \pmb{a}_{c}\|_{2} = 1$, we obtain:
+
+$$
+= s ^ {z} \tag {67}
+$$
+
+# C NORMALIZATION FOR A DEEP ATTRIBUTE EMBEDDER
+
+# C.1 FORMAL STATEMENT AND THE PROOF
+
+Using the same derivation as in Appendix B, one can show that for a deep attribute embedder:
+
+$$
+P _ {\boldsymbol {\theta}} \left(\boldsymbol {a} _ {c}\right) = V \circ H _ {\varphi} \left(\boldsymbol {a} _ {c}\right) \tag {68}
+$$
+
+normalizing attributes is not enough to preserve $\operatorname{Var}[\tilde{y}_c]$, because
+
+$$
+\operatorname {V a r} \left[ \tilde {y} _ {c} \right] = s ^ {z} \mathbb {E} \left[ \| h _ {c} \| _ {2} ^ {2} \right] \tag {69}
+$$
+
+and $h_c = H_\varphi(a_c)$ is not normalized to a unit norm.
+
+To fix the issue, we are going to use two mechanisms:
+
+1. A different initialization scheme:
+
+$$
+\operatorname {V a r} \left[ V _ {i j} \right] = \frac {1}{d _ {z} d _ {h}} \tag {70}
+$$
+
+2. Using the standardization layer before the final projection matrix:
+
+$$
+S (\boldsymbol {x}) = \left(\boldsymbol {x} - \hat {\boldsymbol {\mu}} _ {x}\right) \oslash \hat {\boldsymbol {\sigma}} _ {x}, \tag {71}
+$$
+
+where $\hat{\pmb{\mu}}_{x}, \hat{\pmb{\sigma}}_{x}$ are the sample mean and standard deviation, and $\oslash$ denotes element-wise division.
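A minimal sketch of the standardization layer (71), applied per feature across the class axis (the small $\epsilon$ guard against division by zero is our addition):

```python
import numpy as np

def standardize(H, eps=1e-8):
    """S(x) = (x - mu) / sigma, computed per feature across the class axis.
    Rows of H are per-class hidden vectors h_c."""
    mu = H.mean(axis=0, keepdims=True)      # sample mean over classes
    sigma = H.std(axis=0, keepdims=True)    # sample standard deviation over classes
    return (H - mu) / (sigma + eps)

# 200 classes with d_h = 64 hidden features (illustrative sizes).
H = np.random.default_rng(0).normal(3.0, 5.0, size=(200, 64))
H_bar = standardize(H)
```

After standardization, each feature has zero mean and unit variance, so $\mathbb{E}\left[\|\bar{\pmb{h}}_c\|_2^2\right] \approx d_h$, which is what the proof of Statement 3 relies on.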
+
+# C.2 ASSUMPTIONS
+
+We will need the following assumption:
+
+(i) Feature vector $z$ has the properties:
+
+- $\mathbb{E}[z] = 0$ ;
+- $\operatorname{Var}[z_i] = s^z$ for all $i = 1, \dots, d_z$ .
+- All $z_{i}$ are independent from each other and from all $p_{c,j}$ .
+
+
+Figure 3: Our architecture: a plain MLP with the standardization procedure (9) inserted before the final projection, and the output matrix $V$ initialized using (10).
+
+# C.3 FORMAL STATEMENT AND THE PROOF
+
+Statement 3. If the assumption (i) is satisfied, an attribute embedder has the form $P_{\theta} = V \circ S \circ H_{\varphi}$ and we initialize output matrix $V$ s.t. $\text{Var}[V_{ij}] = \frac{1}{d_z d_h}$ , then the variance for $\tilde{y}_c$ is preserved:
+
+$$
+\operatorname {V a r} \left[ \tilde {y} _ {c} \right] \approx \operatorname {V a r} \left[ z _ {i} \right] = s ^ {z} \tag {72}
+$$
+
+Proof. With some abuse of notation, let $\bar{\pmb{h}}_c = S(\pmb{h}_c)$ (in practice, $S$ receives a batch of $\pmb{h}_c$ instead of a single vector). This leads to:
+
+$$
+\mathbb{E}\left[\bar{\boldsymbol{h}}_c\right] = \boldsymbol{0} \quad \text{and} \quad \operatorname{Var}\left[\bar{h}_{c,i}\right] \approx 1 \tag{73}
+$$
+
+Using the same reasoning as in Appendix B, one can show that:
+
+$$
+\operatorname {V a r} \left[ \tilde {y} _ {c} \right] = d _ {z} \cdot s ^ {z} \cdot \operatorname {V a r} \left[ V _ {i j} \right] \cdot \mathbb {E} \left[ \| \bar {\boldsymbol {h}} _ {c} \| _ {2} ^ {2} \right] = \frac {s ^ {z}}{d _ {h}} \cdot \mathbb {E} \left[ \| \bar {\boldsymbol {h}} _ {c} \| _ {2} ^ {2} \right] \tag {74}
+$$
+
+It remains to demonstrate that $\mathbb{E}\left[\| \bar{\pmb{h}}_c\| _2^2\right] \approx d_h$:
+
+$$
+\mathbb {E} \left[ \| \bar {\boldsymbol {h}} _ {c} \| _ {2} ^ {2} \right] = \sum_ {i = 1} ^ {d _ {h}} \mathbb {E} \left[ \bar {h} _ {c, i} ^ {2} \right] = \sum_ {i = 1} ^ {d _ {h}} \operatorname {V a r} \left[ \bar {h} _ {c, i} \right] \approx d _ {h} \tag {75}
+$$
+
+# C.4 ADDITIONAL EMPIRICAL STUDIES
+
+# D ZSL DETAILS
+
+# D.1 EXPERIMENTS DETAILS
+
+In this section, we cover hyperparameter and training details for our ZSL experiments and also provide the extended training speed comparisons with other methods.
+
+We depict the architecture of our model in Figure 3. As mentioned, $P_{\theta}$ is a simple MLP with the standardization procedure (9) and the adjusted output layer initialization (10).
+
+Besides, for some datasets we also found it useful to use an entropy regularizer (the same one that is often used in policy gradient methods to encourage exploration):
+
+$$
+\mathcal {L} _ {\text {e n t}} (\cdot) = - H (\hat {\boldsymbol {p}}) = \sum_ {c = 1} ^ {K} \hat {p} _ {c} \log \hat {p} _ {c} \tag {76}
+$$
+
+We train the model with the Adam optimizer with default $\beta_{1}$ and $\beta_{2}$ hyperparameters. The list of hyperparameters is presented in Table 4. In those ablation experiments where we do not use attributes normalization (2), we apply simple standardization to convert attributes to zero mean and unit variance.
+
+# D.2 ADDITIONAL EXPERIMENTS AND ABLATIONS
+
+In this section we present additional experiments and ablation studies for our approach (results are presented in Table 5). We additionally validate the following:
+
+Table 4: Hyperparameters for ZSL experiments
+
+ | Hyperparameter | SUN | CUB | AwA1 | AwA2 |
| --- | --- | --- | --- | --- |
| Batch size | 128 | 512 | 128 | 128 |
| Learning rate | 0.0005 | 0.005 | 0.005 | 0.002 |
| Number of epochs | 50 | 50 | 50 | 50 |
| $\mathcal{L}_{\text{ent}}$ weight | 0.001 | 0.001 | 0.001 | 0.001 |
| Number of hidden layers | 2 | 2 | 2 | 2 |
| Hidden dimension | 2048 | 2048 | 1024 | 512 |
| γ | 5 | 5 | 5 | 5 |
+
+Table 5: Additional GZSL ablation studies. From this table one can observe the sensitivity of normalize+scale to the $\gamma$ value. We also highlight the importance of $\gamma$ in Section 3.
+
+ | Model | SUN U | SUN S | SUN H | CUB U | CUB S | CUB H | AwA1 U | AwA1 S | AwA1 H | AwA2 U | AwA2 S | AwA2 H |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Linear +NS+AN γ = 1 | 11.7 | 35.1 | 17.6 | 5.1 | 44.8 | 9.1 | 20.3 | 66.5 | 31.1 | 20.6 | 71.1 | 31.9 |
| Linear +NS+AN γ = 3 | 11.2 | 37.1 | 17.2 | 10.3 | 53.7 | 17.3 | 13.6 | 72.8 | 23.0 | 21.0 | 50.8 | 29.7 |
| Linear +NS+AN γ = 5 | 13.8 | 41.0 | 20.6 | 16.8 | 62.0 | 26.4 | 16.8 | 74.2 | 27.4 | 18.9 | 73.2 | 30.0 |
| Linear +NS+AN γ = 10 | 17.1 | 40.9 | 24.1 | 14.9 | 61.7 | 24.0 | 36.4 | 46.2 | 40.7 | 27.9 | 86.8 | 42.2 |
| Linear +NS+AN γ = 20 | 13.9 | 35.5 | 20.0 | 13.8 | 52.7 | 21.9 | 46.4 | 43.3 | 44.8 | 47.9 | 59.4 | 53.1 |
| 2-layer MLP +NS+AN γ = 1 | 34.0 | 36.4 | 35.1 | 38.7 | 37.4 | 38.0 | 49.6 | 61.9 | 55.1 | 49.7 | 65.9 | 56.7 |
| 2-layer MLP +NS+AN γ = 3 | 32.0 | 37.4 | 34.5 | 42.0 | 43.1 | 42.6 | 53.6 | 68.9 | 60.3 | 53.7 | 72.3 | 61.6 |
| 2-layer MLP +NS+AN γ = 5 | 34.4 | 39.6 | 36.8 | 46.9 | 45.0 | 45.9 | 57.3 | 73.8 | 64.5 | 55.4 | 77.1 | 64.5 |
| 2-layer MLP +NS+AN γ = 10 | 31.7 | 37.5 | 34.4 | 47.0 | 43.3 | 45.1 | 54.1 | 65.5 | 59.3 | 56.0 | 72.4 | 63.2 |
| 2-layer MLP +NS+AN γ = 20 | 56.4 | 11.0 | 18.4 | 44.4 | 35.9 | 39.7 | 51.4 | 69.3 | 59.1 | 46.4 | 73.7 | 56.9 |
| 3-layer MLP +NS+AN γ = 1 | 18.6 | 37.9 | 25.0 | 23.0 | 42.3 | 29.8 | 50.1 | 60.9 | 55.0 | 48.4 | 64.0 | 55.1 |
| 3-layer MLP +NS+AN γ = 3 | 23.9 | 37.3 | 29.1 | 35.5 | 48.6 | 41.0 | 57.3 | 67.3 | 61.9 | 57.3 | 70.3 | 63.2 |
| 3-layer MLP +NS+AN γ = 5 | 31.4 | 40.4 | 35.3 | 45.2 | 50.7 | 47.8 | 58.1 | 70.3 | 63.6 | 58.2 | 73.0 | 64.8 |
| 3-layer MLP +NS+AN γ = 10 | 29.7 | 37.8 | 33.3 | 40.7 | 40.5 | 40.6 | 55.4 | 63.0 | 58.9 | 53.8 | 69.6 | 60.7 |
| 3-layer MLP +NS+AN γ = 20 | 15.8 | 39.5 | 22.6 | 22.2 | 54.0 | 31.4 | 53.8 | 63.8 | 58.3 | 49.2 | 69.4 | 57.6 |
| Dynamic Normalization | 31.9 | 39.8 | 35.5 | 22.7 | 56.6 | 32.4 | 58.5 | 68.4 | 63.1 | 55.5 | 70.3 | 62.0 |
| Xavier + (9) | 41.5 | 41.3 | 41.4 | 49.3 | 49.2 | 49.2 | 60.2 | 73.1 | 66.0 | 58.3 | 76.2 | 66.0 |
| Kaiming fan-in + (9) | 42.0 | 41.4 | 41.7 | 51.1 | 49.2 | 50.1 | 59.8 | 74.3 | 66.2 | 55.4 | 75.6 | 63.9 |
| Kaiming fan-out + (9) | 42.8 | 41.2 | 42.0 | 51.0 | 49.0 | 50.0 | 60.3 | 73.2 | 66.1 | 56.8 | 76.9 | 65.4 |
+
+- Dynamic normalization. As one can see from formula (8), to achieve the desired variance it would be enough to initialize $V$ s.t. $\operatorname{Var}\left[V_{ij}\right] = 1 / d_z$ (equivalent to Xavier fan-out) and use a dynamic normalization:
+
+$$
+\mathrm {D N} (\boldsymbol {h}) = \boldsymbol {h} / \mathbb {E} \left[ \| \boldsymbol {h} \| _ {2} ^ {2} \right] \tag {77}
+$$
+
+between $V$ and $H_{\varphi}$, i.e. $P_{\theta}(\pmb{a}_c) = V \cdot \mathrm{DN}(H_{\varphi}(\pmb{a}_c))$. The expectation $\mathbb{E}\left[\| \pmb{h}\| _2^2\right]$ is computed over a batch on each iteration. A downside of this approach is that if the dimensionality is large, then many dimensions get suppressed, leading to poor signal propagation. Besides, one has to compute running statistics to use at test time, which is cumbersome.
+
+- Traditional initializations + standardization procedure (9). These experiments ablate the necessity of using the corrected variance formula (10).
+- Performance of NS for different scaling values of $\gamma$ and different number of layers.
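The dynamic normalization alternative can be sketched as follows; the running-statistic bookkeeping for test time is our assumption about an implementation, not the paper's code:

```python
import numpy as np

class DynamicNorm:
    """Dynamic normalization DN(h) = h / E[||h||_2^2], as in (77).
    The expectation is taken over the current batch at train time;
    a running estimate (an assumed detail) is used at test time."""

    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.running_sqnorm = 1.0

    def __call__(self, h, training=True):
        if training:
            sqnorm = float(np.mean(np.sum(h ** 2, axis=1)))  # E[||h||^2] over the batch
            self.running_sqnorm = (self.momentum * self.running_sqnorm
                                   + (1.0 - self.momentum) * sqnorm)
        else:
            sqnorm = self.running_sqnorm
        return h / sqnorm
```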
+
+# D.3 MEASURING TRAINING SPEED
+
+We conducted a survey and searched for open-source implementations of classification ZSL papers recently published at top conferences. This was done by 1) checking the papers for code URLs; 2) checking their supplementary materials; 3) searching for implementations on github.com; and 4) searching for the authors by name on github.com and checking their repository lists. As a result, we found 8 open-source implementations of recent methods, but one of them was discarded since the corresponding data was not provided. We reran all these
+
+Table 6: Training time for the recent ZSL methods that made their official implementations publicly available. We reran them on the corresponding datasets with the official hyperparameters and training setups. All the comparisons are done on the same machine and hardware: NVidia GeForce RTX 2080 Ti GPU, Intel Xeon Gold 6142 CPU and 64 GB RAM. N/C stands for "no code" meaning that authors didn't release the code for a particular dataset.
+
+ | Method | SUN | CUB | AwA1 | AwA2 |
| --- | --- | --- | --- | --- |
| RelationNet Sung et al. (2018) | - | 25 min | 40 min | 40 min |
| DCN Liu et al. (2018) | 40 min | 50 min | - | 55 min |
| CIZSL Elhoseiny & Elfeki (2019) | 3 hours | 2 hours | 3 hours | 3 hours |
| CVC-ZSL Li et al. (2019) | 3 hours | 3 hours | 1.5 hours | 1.5 hours |
| SGAL Yu & Lee (2019) | N/C | N/C | 50 min | N/C |
| LsrGAN Vyas et al. (2020) | 1.1 hours | 1.25 hours | - | 1.5 hours |
| TF-VAEGAN Narayan et al. (2020) | 1.5 hours | 1.75 hours | - | 2 hours |
| Ours | 20 sec | 20 sec | 30 sec | 30 sec |
+
+methods with the official hyperparameters on the corresponding datasets and report their training time in Table 6.
+
+All runs are made with the official hyperparameters and training setups on the same hardware: an NVidia GeForce RTX 2080 Ti GPU, a 16-core Intel Xeon Gold 6142 CPU and 128 GB RAM.
+
+As one can see, the proposed method trains 50-100 times faster than the recent SotA. This is because it does not use sophisticated architectures employing generative models (Xian et al., 2018b; Narayan et al., 2020) or optimization schemes like episode-based training (Li et al., 2019; Yu et al., 2020).
+
+# D.4 CHOOSING SCALE $s$ FOR SEEN CLASSES
+
+As mentioned in Section 5, we reweigh seen class logits by multiplying them by a scale value $s$. This is similar to the strategy considered by Xian et al. (2018a); Min et al. (2020), but we found multiplying by a value more intuitive than adding one. We find the optimal scale value $s$ by cross-validation together with all other hyperparameters on the grid [1.0, 0.95, 0.9, 0.85, 0.8]. In Figure 4, we depict how $s$ influences GZSL-U/GZSL-S/GZSL-H for each dataset.
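The rescaling itself is a one-line evaluation-time tweak; the toy logits below are made up to show how $s < 1$ can flip a prediction from a seen to an unseen class:

```python
import numpy as np

def rescale_seen_logits(logits, seen_mask, s=0.95):
    """Multiply seen-class logits by a scale s <= 1 at evaluation time."""
    scaled = logits.copy()
    scaled[:, seen_mask] *= s
    return scaled

# Toy example: classes 0 and 1 are seen, class 2 is unseen.
logits = np.array([[3.0, 1.0, 2.9]])
seen = np.array([True, True, False])
pred_before = int(logits.argmax(axis=1)[0])                                    # 0 (seen)
pred_after = int(rescale_seen_logits(logits, seen, s=0.9).argmax(axis=1)[0])   # 2 (unseen)
```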
+
+# D.5 INCORPORATING CN FOR OTHER ATTRIBUTE EMBEDDERS
+
+In this section, we incorporate our proposed class normalization into two other methods: RelationNet (Sung et al., 2018) and CVC-ZSL (Li et al., 2019). We build upon the officially provided source code and use the official hyperparameters for all the setups. For RelationNet, the authors provided the running commands. For CVC-ZSL, we used the per-dataset hyperparameters specified in their paper. That included using different weight decays of $1e{-}4$, $1e{-}3$, $1e{-}3$ and $1e{-}5$ for AwA1, AwA2, CUB and SUN respectively, as stated in Section 4.2 of the paper (Li et al., 2019). We incorporated our class normalization procedure into these attribute embedders and launched them on the corresponding datasets. The results are reported in Table 7. For some reason, we could not reproduce the official results for either method, so we additionally report the published numbers. As one can see from the presented results, our method gives $+2.0$ and $+1.8$ GZSL-H improvement on average for these two methods respectively, which emphasizes once again its favorable influence on the learned representations.
+
+# D.6 ADDITIONAL ABLATION ON AN AND NS TRICKS
+
+In Table 8 we provide additional ablations on the attributes normalization and normalize+scale tricks. As one can see, they greatly influence the performance of ZSL attribute embedders.
+
+Figure 4: Optimal value of the seen logits scale $s$ for different datasets: SUN $(s^{*} = 0.95)$, CUB $(s^{*} = 1.0)$, AwA1 $(s^{*} = 0.95)$ and AwA2 $(s^{*} = 0.95)$. Multiplying seen logits by some scale $s < 1$ during evaluation sacrifices GZSL-S for an increase in GZSL-U, which results in an increased GZSL-H value (Xian et al., 2018a; Min et al., 2020). The high gap between validation and test accuracy is caused by the different numbers of classes in these two sets. The lower test GZSL-H compared to Table 2 is caused by splitting the train set into train/validation sets for the presented run, i.e. using less data for training: to construct these plots, we allocated 50, 30, 5 and 5 seen classes as validation unseen classes for SUN, CUB, AwA1 and AwA2 respectively, and an equal amount of data was used as validation seen data, i.e. we "lost" ≈ 25% of the train data in total. As one can see from these plots, the trick works for those datasets where the gap between GZSL-S and GZSL-U is large and gives no benefit for CUB, where seen and unseen logits are already well-balanced.
+
+Table 7: Incorporating class normalization into RelationNet (Sung et al., 2018) and CVC-ZSL (Li et al., 2019) based on the official source code and running hyperparameters. For some reason, our results differ considerably from the reported ones on AwA2 for RelationNet and on SUN for CVC-ZSL. Adding CN provides an improvement in all the setups.
+
+ | Method | SUN U | SUN S | SUN H | CUB U | CUB S | CUB H | AwA1 U | AwA1 S | AwA1 H | AwA2 U | AwA2 S | AwA2 H |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RelationNet (official code) | - | - | - | 38.3 | 62.4 | 47.5 | 28.5 | 87.8 | 43.1 | 10.2 | 88.1 | 18.3 |
| RelationNet (official code) + CN | - | - | - | 40.1 | 62.8 | 48.9 | 29.8 | 88.4 | 44.6 | 12.7 | 88.8 | 22.3 |
| CVC-ZSL (official code) | 20.7 | 43.0 | 28.0 | 42.6 | 47.8 | 45.1 | 58.1 | 78.1 | 66.6 | 51.4 | 79.9 | 62.5 |
| CVC-ZSL (official code) + CN | 24.6 | 42.5 | 31.1 | 44.6 | 48.7 | 46.6 | 58.8 | 79.7 | 67.7 | 53.2 | 80.4 | 64.0 |
| RelationNet (reported) | - | - | - | 38.1 | 61.4 | 47.0 | 31.4 | 91.3 | 46.7 | 30.9 | 93.4 | 45.3 |
| CVC-ZSL (reported) | 36.3 | 42.8 | 39.3 | 47.4 | 47.6 | 47.5 | 62.7 | 77.0 | 69.1 | 56.4 | 81.4 | 66.7 |
+
+Table 8: Ablating other methods for AN and NS importance. For CVC-ZSL, we used the officially provided code with the official hyperparameters. When we do not employ AN, we standardize the attributes to zero mean and unit variance: otherwise training diverges due to excessively large attribute magnitudes.
+
+ | Method | SUN U | SUN S | SUN H | CUB U | CUB S | CUB H | AwA1 U | AwA1 S | AwA1 H | AwA2 U | AwA2 S | AwA2 H |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Linear | 41.0 | 33.4 | 36.8 | 26.9 | 58.1 | 36.8 | 40.6 | 76.6 | 53.1 | 38.4 | 81.7 | 52.2 |
| Linear -AN | 13.8 | 41.0 | 20.6 | 16.8 | 62.0 | 26.4 | 16.8 | 74.2 | 27.4 | 18.9 | 73.2 | 30.0 |
| Linear -NS | 38.7 | 3.5 | 6.4 | 33.5 | 6.7 | 11.2 | 44.0 | 42.7 | 43.3 | 44.9 | 48.8 | 46.8 |
| Linear -AN -NS | 17.7 | 2.6 | 4.5 | 3.0 | 0.0 | 0.0 | 13.2 | 0.0 | 0.1 | 23.3 | 0.0 | 0.0 |
| 2-layer MLP | 34.4 | 39.6 | 36.8 | 46.9 | 45.0 | 45.9 | 57.3 | 73.8 | 64.5 | 55.4 | 77.1 | 64.5 |
| 2-layer MLP -AN | 33.8 | 40.1 | 36.7 | 44.9 | 42.9 | 43.9 | 61.9 | 72.5 | 66.8 | 59.1 | 74.1 | 65.8 |
| 2-layer MLP -NS | 51.0 | 11.1 | 18.3 | 40.6 | 22.4 | 28.9 | 40.9 | 68.9 | 51.3 | 40.7 | 69.3 | 51.2 |
| 2-layer MLP -AN -NS | 20.9 | 24.0 | 22.3 | 11.6 | 13.1 | 12.3 | 40.7 | 50.2 | 45.0 | 30.8 | 61.1 | 41.0 |
| CVC-ZSL (official code) | 20.7 | 43.0 | 28.0 | 42.6 | 47.8 | 45.1 | 58.1 | 78.1 | 66.6 | 51.4 | 79.9 | 62.5 |
| CVC-ZSL (official code) -NS | 18.4 | 40.5 | 25.3 | 23.7 | 56.6 | 33.4 | 15.9 | 65.3 | 25.6 | 14.9 | 49.7 | 22.9 |
| CVC-ZSL (official code) -AN | 31.4 | 30.8 | 31.1 | 24.8 | 57.1 | 34.6 | 44.4 | 82.8 | 57.8 | 17.6 | 89.9 | 29.5 |
| CVC-ZSL (official code) -NS -AN | 18.2 | 36.9 | 24.3 | 21.6 | 54.7 | 30.9 | 14.1 | 59.4 | 22.8 | 14.6 | 45.9 | 22.2 |
| CVC-ZSL (reported) | 36.3 | 42.8 | 39.3 | 47.4 | 47.6 | 47.5 | 62.7 | 77.0 | 69.1 | 56.4 | 81.4 | 66.7 |
+
+
+Figure 5: Variance plots for different models for the SUN dataset. See Appendix E for the experimental details.
+
+
+
+
+
+# E ADDITIONAL VARIANCE ANALYSIS
+
+In this section, we provide the extended variance analysis for different setups and datasets. The following models are used:
+
+1. A linear ZSL model with/without normalize+scale (NS) and/or attributes normalization (AN).
+2. A 3-layer ZSL model with/without NS and/or AN.
+3. A 3-layer ZSL model with class normalization, with/without NS and/or AN.
+
+These models are trained on 4 standard ZSL datasets (SUN, CUB, AwA1 and AwA2), and their logits variance is computed and reported on each iteration. The same batch size, learning rate, number of epochs and hidden dimensionalities were used. Results are presented in Figures 5, 6, 7 and 8, which illustrate the same trend:
+
+- A traditional linear model without NS and AN has poor variance.
+- Adding NS with a proper scaling of 5 and AN improves it and keeps it bounded close to 1.
+- After introducing new layers, NS and AN stop "working" and the variance vanishes below 1.
+- Incorporating class normalization pushes it back to 1.
+
+# F LOSS LANDSCAPE SMOOTHNESS ANALYSIS
+
+# F.1 OVERVIEW
+
+As mentioned, we demonstrate two things:
+
+
+Figure 6: Variance plots for different models for the CUB dataset. See Appendix E for the experimental details.
+
+
+
+
+
+
+Figure 7: Variance plots for different models for the AwA1 dataset. See Appendix E for the experimental details.
+
+
+
+
+
+
+Figure 8: Variance plots for different models for the AwA2 dataset. See Appendix E for the experimental details.
+
+
+
+
+
+1. For each example in a batch, the parameters of a ZSL attribute embedder receive $K$ times more gradient terms than those of a typical non-ZSL classifier, where $K$ is the number of classes. This suggests the hypothesis that it has a larger overall gradient magnitude and hence a more irregular loss surface.
+2. Our standardization procedure (9) makes the surface smoother. We demonstrate this by applying Theorem 4.4 from (Santurkar et al., 2018).
+
+To see the first point, one just needs to compute the derivative with respect to weight $W_{ij}$ on the $n$-th data sample for the loss $\mathcal{L}_n^{\mathrm{SL}}$ of a traditional model and the loss $\mathcal{L}_n^{\mathrm{ZSL}}$ of a ZSL embedder:
+
+$$
+\frac {\partial \mathcal {L} _ {n} ^ {\mathrm {S L}}}{\partial W _ {i j} ^ {\ell}} = \frac {\partial \mathcal {L} _ {n} ^ {\mathrm {S L}}}{\partial y _ {i} ^ {(n)}} x _ {j} ^ {(n)} \quad \frac {\partial \mathcal {L} _ {n} ^ {\mathrm {Z S L}}}{\partial W _ {i j} ^ {\ell}} = \sum_ {c = 1} ^ {K} \frac {\partial \mathcal {L} _ {n} ^ {\mathrm {Z S L}}}{\partial y _ {i}} x _ {j} (\boldsymbol {a} _ {c}) \tag {78}
+$$
+
+Since the gradient has $K$ times more terms and these updates are not independent from each other (the final representations are combined into a single logits vector after a dot product with $\mathbf{z}_n$), this may lead to an increased overall gradient magnitude. We verify this empirically by computing the gradient magnitudes for our model and its non-ZSL "equivalent": a model with the same number of layers and hidden dimensionalities, but trained to classify objects in a non-ZSL fashion.
+
+To show that our class standardization procedure (9) smooths the landscape, we apply Theorem 4.4 from Santurkar et al. (2018), which demonstrates that a model augmented with batch normalization (BN) has a smaller Lipschitz constant. This is easily done after noticing that (9) is equivalent to BN without scaling/shifting, applied in a class-wise instead of a batch-wise fashion.
+
+We empirically validate the above observations on Figures 1 and 9.
+
+# F.2 FORMAL REASONING
+
+ZSL embedders are prone to having a more irregular loss surface. We demonstrate that the loss surface of attribute embedder $P_{\theta}$ is more irregular compared to that of a traditional neural network. The reason is that its output vectors $\pmb{p}_c = P_{\theta}(\pmb{a}_c)$ are not used independently later, but are instead combined into a single matrix $W = [p_1, \dots, p_{K_s}]$ to compute the logits vector $y = Wz$. Because of this, the gradient update for $\theta_i$ receives $K$ signals instead of just one, as for a traditional model, where $K$ is the number of classes.
+
+Consider a classification neural network $F_{\psi}(\pmb{x})$ optimized with loss $\mathcal{L}$ and some intermediate transformation $\pmb{y} = W^{\ell}\pmb{x}$ in it. Then the gradient of $\mathcal{L}_n$ on the $n$-th training example with respect to $W_{ij}^{\ell}$ is computed as:
+
+$$
+\boldsymbol {y} = W ^ {\ell} \boldsymbol {x} \Longrightarrow \frac {\partial \mathcal {L} _ {n}}{\partial W _ {i j} ^ {\ell}} = \frac {\partial \mathcal {L} _ {n}}{\partial y _ {i} ^ {(n)}} \frac {\partial y _ {i} ^ {(n)}}{\partial W _ {i j} ^ {\ell}} = \frac {\partial \mathcal {L} _ {n}}{\partial y _ {i} ^ {(n)}} x _ {j} ^ {(n)} \tag {79}
+$$
+
+For attribute embedder $P_{\theta}(a_c)$, in contrast, we have $K$ times more terms in the above sum, since we perform $K$ forward passes, one for each individual class attribute vector $a_c$. The gradient on the $n$-th training example for its inner transformation $\mathbf{y} = W^{\ell}\mathbf{x}(a_c)$ is computed as:
+
+$$
+\boldsymbol {y} = W ^ {\ell} \boldsymbol {x} \left(\boldsymbol {a} _ {c}\right) \Longrightarrow \frac {\partial \mathcal {L} _ {n}}{\partial W _ {i j} ^ {\ell}} = \sum_ {c = 1} ^ {K} \frac {\partial \mathcal {L} _ {n}}{\partial y _ {i}} x _ {j} \left(\boldsymbol {a} _ {c}\right) \tag {80}
+$$
+
+From this, we can see that the average gradient for $P_{\theta}$ is $K$ times larger, which may lead to an increased overall gradient magnitude and hence a more irregular loss surface, as defined in Section 3.5.
+
+CN smooths the loss landscape. In contrast to the previous point, we can prove this rigorously by applying Theorem 4.4 of Santurkar et al. (2018), who showed that performing standardization across hidden representations smooths the loss surface of neural networks. Namely, Santurkar et al. (2018) proved the following:
+
+Theorem 4.4 from (Santurkar et al., 2018). For a network with BatchNorm with loss $\widehat{\mathcal{L}}$ and a network without BatchNorm with loss $\mathcal{L}$, if:
+
+$$
+g _ {\ell} = \max _ {\| X \| \leq \lambda} \| \nabla_ {W} \mathcal {L} \| ^ {2}, \quad \hat {g} _ {\ell} = \max _ {\| X \| \leq \lambda} \left\| \nabla_ {W} \widehat {\mathcal {L}} \right\| ^ {2} \tag {81}
+$$
+
+then:
+
+$$
+\hat {g} _ {\ell} \leq \frac {\gamma^ {2}}{\sigma_ {\ell} ^ {2}} \left(g _ {\ell} ^ {2} - m \mu_ {g _ {\ell}} ^ {2} - \lambda^ {2} \left\langle \nabla_ {\boldsymbol {y} _ {\ell}} \mathcal {L}, \hat {\boldsymbol {y}} _ {\ell} \right\rangle^ {2}\right) \tag {82}
+$$
+
+where $\pmb{y}_{\ell},\hat{\pmb{y}}_{\ell}$ are the hidden representations at the $\ell$-th layer, $m$ is their dimensionality, $\sigma_{\ell}$ is their standard deviation, $\mu_{g} = \frac{1}{m}\left\langle \mathbf{1},\partial \widehat{\mathcal{L}} /\partial \pmb{z}_\ell \right\rangle$ for $\pmb{z}_{\ell} = \gamma \pmb{y}_{\ell} + \beta$ is the average gradient norm, $\gamma$ is the BN scaling parameter, and $X$ is the input data matrix at layer $\ell$.
+
+Now, it is easy to see that our class standardization (9) is "equivalent" to BN (and thus the above theorem can be applied to our model):
+
+
+Figure 9: Empirical validation of the more irregular loss surface of ZSL models and the smoothing effect of class normalization on other datasets. As in Figure 1, we observe that the gradient norms for traditional MLPs are much lower compared to a basic ZSL model, but class normalization partially remedies this problem.
+
+
+
+
+
+- First, set $\gamma = 1$ (i.e. remove scaling) and $\beta = 0$ (i.e. remove bias addition).
+- Second, apply this modified BN inside attribute embedder $P_{\theta}$ on top of the attribute representations $H = [h_1, \dots, h_K]$ across the $K$-axis (class dimension) instead of the object representations $X = [x_1, \dots, x_B]$ across the $B$-axis (batch dimension).
+
+It is important to note here that there are no restrictive assumptions on the loss function or on the data $X$ being used. Thus Theorem 4.4 of Santurkar et al. (2018) is applicable to our model, which means that CN smooths its loss surface.
+
+# F.3 EMPIRICAL VALIDATION
+
+To validate the above claim empirically, we approximate the quantity (11), computed on each iteration for all the parameters of the model instead of a single layer. We do this by taking 10 random batches of size 256 from the dataset, adding $\epsilon \sim \mathcal{N}(\mathbf{0}, I)$ noise to each batch, computing the gradient of the loss with respect to the parameters, and then computing its norm scaled by a $1/n$ factor to account for the small difference in the number of parameters ($\approx 0.9$) between a ZSL model and a non-ZSL one. This approximates the quantity (11), but around real data points instead of $\mathbf{0}$, which is more practically relevant. We run the described experiment for three models:
+
+1. A vanilla MLP classifier, i.e. without any class attributes. For each dataset, it receives feature vector $\mathbf{z}$ and produces logits.
+2. A vanilla MLP zero-shot classifier, as described in section 3.
+3. An MLP zero-shot classifier with class normalization.
+
+All three models were trained with the cross-entropy loss and the same optimization hyperparameters: a learning rate of 0.0001, a batch size of 256 and 2500 iterations. They all had the same number of layers, equal to 3. The results are illustrated in Figures 1 (left) and 9. As one can see, traditional MLP models indeed have a flatter loss surface, as evidenced by a smaller gradient norm, but class normalization helps to reduce the gap.
+
+# G CONTINUAL ZERO-SHOT LEARNING DETAILS
+
+# G.1 CZSL EXPERIMENT DETAILS
+
+As mentioned, we use the validation sequence approach from Chaudhry et al. (2019) to find the best hyperparameters for each method. We allocate the first 3 tasks to perform a grid search over a fixed range. After the best hyperparameters have been found, we train the model from scratch on the rest of the tasks. The hyperparameter ranges for the CZSL experiments are presented in Table 9 (we use the same ranges for all the experiments).
+
+We train the model for 5 epochs on each task with the SGD optimizer. We also found it beneficial to decrease the learning rate after each task by a factor of 0.9. This is equivalent to using a step-wise learning rate schedule with the step size equal to the number of epochs per task. As mentioned, for the CZSL experiments we use an ImageNet-pretrained ResNet-18 model as our image encoder. In contrast to the ZSL experiments, we do not keep it fixed during training. The results for our mSA, mUA, mH, mJA and mAUC metrics, as well as the forgetting measure (Lopez-Paz & Ranzato, 2017), are presented in Figures 10 and 11.
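The per-task learning-rate decay described above can be sketched as follows (the base rate is taken from the searched range in Table 9; the helper is ours):

```python
def task_learning_rate(base_lr, task_idx, decay=0.9):
    """Learning rate used on task t when the rate is multiplied by `decay`
    after each task (a step-wise schedule with step = epochs per task)."""
    return base_lr * decay ** task_idx

# e.g. base_lr = 0.005 from the searched range:
rates = [task_learning_rate(0.005, t) for t in range(3)]   # [0.005, 0.0045, 0.00405]
```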
+
+Table 9: Hyperparameters range for CZSL experiments
+
+| Hyperparameter | Range |
| --- | --- |
| Sampling distribution | uniform, normal |
| Gradient clipping value | 10, 100 |
| Attribute embedder learning rate | 0.001, 0.005 |
| Attribute embedder momentum | 0.9, 0.95 |
| Image encoder learning rate | 0.001, 0.005 |
+
+Figure 10: Additional CZSL results for the CUB dataset: (a) GZSL-S, (b) GZSL-U, (c) GZSL-H, (d) mAUSUC, (e) Joint Accuracy, (f) Forgetting Measure.
+
+Figure 11: Additional CZSL results for the SUN dataset: (a) GZSL-S, (b) GZSL-U, (c) GZSL-H, (d) mAUSUC, (e) Joint Accuracy, (f) Forgetting Measure.
+
+As one can clearly see, adding class normalization significantly improves the results, at some timesteps even surpassing the multi-task baseline without ClassNorm.
+
+# G.2 ADDITIONAL CZSL METRICS
+
+In this subsection, we describe our proposed CZSL metrics that are used to assess a model's performance. Subscripts "tr"/"ts" denote train/test data.
+
+Mean Seen Accuracy (mSA). We compute GZSL-S after tasks $t = 1, \dots, T$ and take the average:
+
+$$
+\mathrm {m S A} (F) = \frac {1}{T} \sum_ {t = 1} ^ {T} \text {G Z S L - S} \left(F, D _ {\mathrm {t s}} ^ {\leq t}, A ^ {\leq t}\right) \tag {83}
+$$
+
+Mean Unseen Accuracy (mUA). We compute GZSL-U after tasks $t = 1, \dots, T - 1$ (we do not compute it after task $T$ since $D^{>T} = \emptyset$ ) and take the average:
+
+$$
+\mathrm {m U A} (F) = \frac {1}{T - 1} \sum_ {t = 1} ^ {T - 1} \text {G Z S L - U} \left(F, D _ {\mathrm {t s}} ^ {> t}, A ^ {> t}\right) \tag {84}
+$$
+
+Mean Harmonic Seen/Unseen Accuracy (mH). We compute GZSL-H after tasks $t = 1, \dots, T - 1$ and take the average:
+
+$$
+\mathrm{mH}(F) = \frac{1}{T-1} \sum_{t=1}^{T-1} \text{GZSL-H}\left(F, D_{\mathrm{ts}}^{\leq t}, D_{\mathrm{ts}}^{>t}, A\right) \tag{85}
+$$
+
+Mean Area Under Seen/Unseen Curve (mAUC). We compute AUSUC Chao et al. (2016) after tasks $t = 1, \dots, T - 1$ and take the average:
+
+$$
+\mathrm{mAUC}(F) = \frac{1}{T-1} \sum_{t=1}^{T-1} \mathrm{AUSUC}\left(F, D_{\mathrm{ts}}^{\leq t}, D_{\mathrm{ts}}^{>t}, A\right) \tag{86}
+$$
+
+AUSUC is a performance metric that detects a model's bias towards seen or unseen data; in our case, it measures this bias in a continual fashion.
+
+Mean Joint Accuracy (mJA). On each task $t$ we compute the generalized accuracy on all the test data we have for the entire problem:
+
+$$
+\mathrm{mJA}(F) = \frac{1}{T} \sum_{t=1}^{T} \mathrm{ACC}\left(F, D_{\mathrm{ts}}, A\right) \tag{87}
+$$
+
+This evaluation measure allows us to understand how far a model lags behind traditional supervised classifiers. A perfect model would generalize to all the unseen classes from the very first task and maintain performance on par with standard classifiers.
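As a concrete illustration of equations (83) and (84), the mean metrics are plain averages of per-task GZSL scores; a minimal sketch (the per-task numbers below are hypothetical):

```python
def mean_seen_accuracy(gzsl_s_per_task):
    """mSA: average of GZSL-S computed after each of the T tasks (eq. 83)."""
    return sum(gzsl_s_per_task) / len(gzsl_s_per_task)

def mean_unseen_accuracy(gzsl_u_per_task):
    """mUA: average of GZSL-U over tasks 1..T-1 (eq. 84); the input list is
    assumed to already exclude task T, after which no unseen data remains."""
    return sum(gzsl_u_per_task) / len(gzsl_u_per_task)

# Hypothetical per-task scores for T = 4 tasks.
msa = mean_seen_accuracy([0.6, 0.55, 0.5, 0.45])   # averaged over T tasks
mua = mean_unseen_accuracy([0.2, 0.25, 0.3])       # averaged over T - 1 tasks
```
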
+
+# H WHY CAN WE NOT ASSUME INDEPENDENCE, ZERO MEAN AND SAME VARIANCE FOR ATTRIBUTES IN ZSL?
+
+Usually, when deriving an initialization scheme, one assumes that the random vectors have zero mean, the same variance in each coordinate, and independent coordinates. In the paper, we stated that these are unrealistic assumptions for class attributes in ZSL; in this section, we elaborate on why.
+
+Attribute values for the common datasets need to be standardized to satisfy the zero-mean and unit-variance (or any other same-variance) assumption. But this is not a sensible thing to do if the data does not follow a normal distribution, because one is then likely to encounter a skewed long-tail distribution like the one illustrated in Figure 14. In reality, this does not break our theoretical derivations, but it creates an additional optimization issue which hampers training and which we illustrate in Table 8. This observation is also confirmed by Changpinyo et al. (2016b).
+
+If we do not use these assumptions but instead assume only zero mean and unit variance (and enforce it during training), then formula (6) transforms into:
+
+$$
+\operatorname{Var}\left[\tilde{y}_c\right] = d_z \cdot \operatorname{Var}\left[z_i\right] \cdot \operatorname{Var}\left[V_{ij}\right] \cdot \underset{\boldsymbol{a}}{\mathbb{E}}\left[\|\boldsymbol{a}\|_2^2\right] = d_z \cdot \operatorname{Var}\left[z_i\right] \cdot \operatorname{Var}\left[V_{ij}\right] \cdot d_a \tag{88}
+$$
+
+Table 10: Checking how a model performs when we replace AN with the plain standardization procedure, and with the standardization procedure adjusted by the $\frac{1}{d_a}$ factor from (88). In the latter case, the performance is noticeably improved.
+
+| Model | SUN U | SUN S | SUN H | CUB U | CUB S | CUB H | AwA1 U | AwA1 S | AwA1 H | AwA2 U | AwA2 S | AwA2 H |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Linear | 41.0 | 33.4 | 36.8 | 26.9 | 58.1 | 36.8 | 40.6 | 76.6 | 53.1 | 38.4 | 81.7 | 52.2 |
+| Linear −AN | 13.8 | 41.0 | 20.6 | 16.8 | 62.0 | 26.4 | 16.8 | 74.2 | 27.4 | 18.9 | 73.2 | 30.0 |
+| Linear −AN + $1/d_a$ | 36.2 | 33.0 | 34.5 | 36.0 | 39.0 | 37.4 | 49.2 | 72.3 | 58.6 | 46.9 | 79.5 | 59.0 |
+| 2-layer MLP | 34.4 | 39.6 | 36.8 | 46.9 | 45.0 | 45.9 | 57.3 | 73.8 | 64.5 | 55.4 | 77.1 | 64.5 |
+| 2-layer MLP −AN | 40.5 | 38.4 | 39.4 | 48.0 | 40.1 | 43.7 | 59.7 | 68.5 | 63.8 | 54.9 | 69.4 | 61.3 |
+| 2-layer MLP −AN + $1/d_a$ | 37.1 | 38.4 | 37.7 | 50.8 | 33.3 | 40.2 | 60.1 | 66.3 | 63.1 | 50.5 | 71.8 | 59.3 |
+| 3-layer MLP | 31.4 | 40.4 | 35.3 | 45.2 | 48.4 | 46.7 | 55.6 | 73.0 | 63.1 | 54.5 | 72.2 | 62.1 |
+| 3-layer MLP −AN | 34.7 | 38.5 | 36.5 | 46.9 | 42.8 | 44.9 | 57.0 | 69.9 | 62.8 | 49.7 | 76.4 | 60.2 |
+| 3-layer MLP −AN + $1/d_a$ | 42.0 | 33.4 | 37.2 | 50.4 | 30.6 | 38.1 | 57.1 | 64.7 | 60.6 | 55.2 | 69.0 | 61.4 |
+
+
+Figure 12: Results of the normality test for class attributes of real-world datasets. Panels: (a) $\chi^2$ statistics of the normality test; (b) corresponding $p$-values. Higher values mean that the distribution is further from a normal one; for truly normal random variables, these statistics usually lie in $[0, 5]$. As panel (a) shows, the real-world distribution of attributes does not follow a normal one and cannot easily be transformed into it.
+
+This means that we need to adjust the initialization by a factor of $1/d_a$ to preserve the variance, i.e. we initialize the first projection matrix with the variance:
+
+$$
+\operatorname{Var}\left[V_{ij}\right] = \frac{1}{d_z} \cdot \frac{1}{d_a}. \tag{89}
+$$
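The effect of (88) and (89) can be verified empirically. Below is a NumPy sketch with arbitrarily chosen dimensions (not the actual dataset dimensions): with $\operatorname{Var}[V_{ij}] = 1/(d_z d_a)$ and standardized attributes, each coordinate of $V\boldsymbol{a}$ has variance $d_a \cdot \operatorname{Var}[V_{ij}] = 1/d_z$, so the $d_z$-dimensional dot product in (88) keeps unit variance.

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_a = 300, 85   # illustrative latent and attribute dimensions

# First projection matrix initialized with variance 1/(d_z * d_a), eq. (89).
V = rng.normal(0.0, np.sqrt(1.0 / (d_z * d_a)), size=(d_z, d_a))

# Standardized attributes: zero mean, unit variance per coordinate.
a = rng.normal(0.0, 1.0, size=(d_a, 10_000))

# Each coordinate of V @ a has variance d_a * Var[V_ij] = 1 / d_z.
out_var = (V @ a).var()
```
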
+
+In Table 10, we show what happens if we do and do not account for this factor. As one can see, standardizing the attributes without accounting for the $\frac{1}{d_a}$ factor leads to worse performance.
+
+To show more rigorously that attributes do not follow a normal distribution and are not independent of each other, we report two statistical results:
+
+- Results of a normality test based on D'Agostino and Pearson's test, which ships with SciPy's stats library. We run it for each attribute dimension of each dataset and report the distribution of the resulting $\chi^2$-statistics, with the corresponding $p$-values, in Figure 12.
+- The distribution of absolute values of correlation coefficients between attribute dimensions. The results, presented in Figure 13, demonstrate that attribute dimensions are not independent of each other in practice, so we cannot use the common independence assumption when deriving an initialization scheme for a ZSL embedder.
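The normality test in the first bullet can be reproduced with SciPy's `scipy.stats.normaltest` (the D'Agostino–Pearson test). The attribute matrix below is a synthetic, deliberately skewed stand-in, not a real dataset:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for a dataset's attribute matrix: rows are classes,
# columns are attribute dimensions; exponential draws are heavily skewed.
attributes = rng.exponential(scale=1.0, size=(200, 5))

# D'Agostino & Pearson normality test, run per attribute dimension,
# analogous to the procedure behind Figure 12.
chi2_stats, p_values = stats.normaltest(attributes, axis=0)

# For truly normal data the statistics are small (roughly in [0, 5]);
# for the skewed columns above they are large and the p-values tiny.
```
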
+
+We note, however, that the attribute distribution is uni-modal and, in theory, it is possible to transform it into a normal one (e.g. with log, inverse, or square-root transforms), but such an approach scales poorly: transforming a non-normal distribution into a normal one is tricky and must be done either manually, by finding a proper transformation, or by solving an optimization task. This is tedious to repeat for each dataset.
+
+
+Figure 13: Distribution of mean absolute correlation values between different attribute dimensions. This figure shows that attributes are not independent, which is why it would be unreasonable to use such an assumption. If attributes were independent of each other, then, for example, "having black stripes" would be independent of "being orange", an assumption tigers would argue is not natural.
+
+
+Figure 14: Histograms of standardized attribute values for (a) the SUN dataset and (b) the AwA2 dataset. These figures demonstrate that the distribution is typically long-tailed and skewed, so it is far from being normal.
\ No newline at end of file
diff --git a/classnormalizationforcontinualgeneralizedzeroshotlearning/images.zip b/classnormalizationforcontinualgeneralizedzeroshotlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b8178ba8d71609aa846e898c4e45a0269058f72d
--- /dev/null
+++ b/classnormalizationforcontinualgeneralizedzeroshotlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad21521305c1859ccce92f1dd743fc89a9f1bce63c355d5bdb1771422713591a
+size 1962686
diff --git a/classnormalizationforcontinualgeneralizedzeroshotlearning/layout.json b/classnormalizationforcontinualgeneralizedzeroshotlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..3c83abcf85fa367aac43c930e9feffce1ce34c48
--- /dev/null
+++ b/classnormalizationforcontinualgeneralizedzeroshotlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09f19bd05d02340884264031d260a1abbbd7ddc8640da2c62deb84b5c6ba4c8b
+size 1128390
diff --git a/clearninghorizonawarecumulativeaccessibilityestimation/1ff0f360-688b-4644-aa56-ccf588fdb02b_content_list.json b/clearninghorizonawarecumulativeaccessibilityestimation/1ff0f360-688b-4644-aa56-ccf588fdb02b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2d0887e95388c0d5150ec7a2c7b267438e8975ee
--- /dev/null
+++ b/clearninghorizonawarecumulativeaccessibilityestimation/1ff0f360-688b-4644-aa56-ccf588fdb02b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a9797b39e2c570f1952c6c1194ae8a6a16aafbbf75d286445e93462a22ca94d3
+size 126107
diff --git a/clearninghorizonawarecumulativeaccessibilityestimation/1ff0f360-688b-4644-aa56-ccf588fdb02b_model.json b/clearninghorizonawarecumulativeaccessibilityestimation/1ff0f360-688b-4644-aa56-ccf588fdb02b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7401f98b791ae8c57fee214ef01a86beb1db9053
--- /dev/null
+++ b/clearninghorizonawarecumulativeaccessibilityestimation/1ff0f360-688b-4644-aa56-ccf588fdb02b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5f1fda024d885aee0da89938920285b4e6c272de1bd7e41078b00251a21e5b4
+size 145646
diff --git a/clearninghorizonawarecumulativeaccessibilityestimation/1ff0f360-688b-4644-aa56-ccf588fdb02b_origin.pdf b/clearninghorizonawarecumulativeaccessibilityestimation/1ff0f360-688b-4644-aa56-ccf588fdb02b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c5caac2899ce15b3d4d865f80f6b0648faa5cdc4
--- /dev/null
+++ b/clearninghorizonawarecumulativeaccessibilityestimation/1ff0f360-688b-4644-aa56-ccf588fdb02b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c241953765ceb8f3c343af86d8a972d5cef572be260b5b3e5aeb4b0d50fc1c9
+size 1994833
diff --git a/clearninghorizonawarecumulativeaccessibilityestimation/full.md b/clearninghorizonawarecumulativeaccessibilityestimation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ae76411eeed3dfbadd7f549a12c93f37b52a95bb
--- /dev/null
+++ b/clearninghorizonawarecumulativeaccessibilityestimation/full.md
@@ -0,0 +1,550 @@
+# C-LEARNING: HORIZON-AWARE CUMULATIVE ACCESSIBILITY ESTIMATION
+
+Panteha Naderian, Gabriel Loaiza-Ganem, Harry J. Braviner, Anthony L. Caterini, Jesse C. Cresswell & Tong Li
+
+Layer 6 AI
+
+{panteha, gabriel, harry, anthony, jesse, tong}@layer6.ai
+
+# Animesh Garg
+
+University of Toronto, Vector Institute, Nvidia
+
+garg@cs.toronto.edu
+
+# ABSTRACT
+
+Multi-goal reaching is an important problem in reinforcement learning needed to achieve algorithmic generalization. Despite recent advances in this field, current algorithms suffer from three major challenges: high sample complexity, learning only a single way of reaching the goals, and difficulties in solving complex motion planning tasks. In order to address these limitations, we introduce the concept of cumulative accessibility functions, which measure the reachability of a goal from a given state within a specified horizon. We show that these functions obey a recurrence relation, which enables learning from offline interactions. We also prove that optimal cumulative accessibility functions are monotonic in the planning horizon. Additionally, our method can trade off speed and reliability in goal-reaching by suggesting multiple paths to a single goal depending on the provided horizon. We evaluate our approach on a set of multi-goal discrete and continuous control tasks. We show that our method outperforms state-of-the-art goal-reaching algorithms in success rate, sample complexity, and path optimality. Our code is available at https://github.com/layer6ai-labs/CAE, and additional visualizations can be found at https://sites.google.com/view/learning-cae/.
+
+# 1 INTRODUCTION
+
+Multi-goal reinforcement learning tackles the challenging problem of reaching multiple goals, and as a result, is an ideal framework for real-world agents that solve a diverse set of tasks. Despite progress in this field (Kaelbling, 1993; Schaul et al., 2015; Andrychowicz et al., 2017; Ghosh et al., 2019), current algorithms suffer from a set of limitations: an inability to find multiple paths to a goal, high sample complexity, and poor results in complex motion planning tasks. In this paper we propose $C$ -learning, a method which addresses all of these shortcomings.
+
+Many multi-goal reinforcement learning algorithms are limited by learning only a single policy $\pi(a|s, g)$ over actions $a$ to reach goal $g$ from state $s$ . There is an unexplored trade-off between reaching the goal reliably and reaching it quickly. We illustrate this shortcoming in Figure 1a, which represents an environment where an agent must reach a goal on the opposite side of some predator. Shorter paths can reach the goal faster at the cost of a higher probability of being eaten. Existing algorithms do not allow a dynamic choice of whether to act safely or quickly at test time.
+
+The second limitation is sample complexity. Despite significant improvements (Andrychowicz et al., 2017; Ghosh et al., 2019), multi-goal reaching still requires a very large amount of environment interactions for effective learning. We argue that the optimal $Q$ -function must be learned to high accuracy for the agent to achieve reasonable performance, and this leads to sample inefficiency. The same drawback of optimal $Q$ -functions often causes agents to learn sub-optimal ways of reaching the intended goal. This issue is particularly true for motion planning tasks (Qureshi et al., 2020), where current algorithms struggle.
+
+
+(a)
+
+
+(b)
+Figure 1: (a) A continuous spectrum of paths allow the mouse to reach its goal faster, at an increased risk of disturbing the cat and being eaten. (b) $Q^{*}$ (with $\gamma = 0.99$ ) needs to be learned more accurately than $C^{*}$ to act optimally. The goal $g$ can be reached in $h^{*} = 5$ steps from $s$ , so that $Q^{*}(s,g,a^{*}) = 0.99^{5}$ and $Q^{*}(s,g,a_{-}) = 0.99^{7}$ ; while $C^{*}(s,a^{*},g,h^{*}) = 1$ and $C^{*}(s,a_{-},g,h^{*}) = 0$ .
+
+We propose to address these limitations by learning horizon-aware policies $\pi(a|s, g, h)$ , which should be followed to reach goal $g$ from state $s$ in at most $h$ steps. The introduction of a time horizon $h$ naturally allows us to tune the speed/reliability trade-off, as an agent wishing to reach the goal faster should select a policy with a suitably small $h$ value. To learn these policies, we introduce the optimal cumulative accessibility function $C^*(s, a, g, h)$ . This is a generalization of the state-action value function and corresponds to the probability of reaching goal $g$ from state $s$ after at most $h$ steps if action $a$ is taken, and the agent acts optimally thereafter. Intuitively it is similar to the optimal $Q$ -function, but $Q$ -functions rarely correspond to probabilities, whereas the $C^*$ -function does so by construction. We derive Bellman backup update rules for $C^*$ , which allow it to be learned via minimization of unbiased estimates of the cross-entropy loss – this is in contrast to $Q$ -learning, which optimizes biased estimates of the squared error. Policies $\pi(a|s, g, h)$ can then be recovered from the $C^*$ function. We call our method cumulative accessibility estimation, or $C$ -learning. Pong et al. (2018) proposed TDMs, a method involving horizon-aware policies. We point out that their method is roughly related to a non-cumulative version of ours with a different loss that does not enable the speed/reliability trade-off and is ill-suited for sparse rewards. We include a detailed discussion of TDMs in section 4.
+
+One might expect that adding an extra dimension to the learning task, namely $h$ , would increase the difficulty - as $C^*$ effectively contains the information of several optimal $Q$ -functions for different discount factors. However, we argue that $C^*$ does not need to be learned to the same degree of accuracy as the optimal $Q$ -function for the agent to solve the task. As a result, learning $C^*$ is more efficient, and converges in fewer environmental interactions. This property, combined with our proposed goal sampling technique and replay buffer used during training, provides empirical improvements over $Q$ -function based methods.
+
+In addition to these advantages, learning $C^*$ is itself useful, containing information that the horizon-aware policies do not. It estimates whether a goal $g$ is reachable from the current state $s$ within $h$ steps. In contrast, $\pi(a|s, g, h)$ simply returns some action, even for unreachable goals. We show that $C^*$ can be used to determine reachability with examples in a nonholonomic environment.
+
+Summary of contributions: $(i)$ introducing $C$ -functions and cumulative accessibility estimation for both discrete and continuous action spaces; $(ii)$ highlighting the importance of the speed vs reliability trade-off in finite horizon reinforcement learning; $(iii)$ introducing a novel replay buffer specially tailored for learning $C^*$ which builds on HER (Andrychowicz et al., 2017); and $(iv)$ empirically showing the effectiveness of our method for goal-reaching as compared to existing alternatives, particularly in the context of complex motion planning tasks.
+
+# 2 BACKGROUND AND RELATED WORK
+
+Let us extend the Markov Decision Process (MDP) formalism (Sutton et al., 1998) for goal-reaching. We consider a set of actions $\mathcal{A}$ , a state space $\mathcal{S}$ , and a goal set $\mathcal{G}$ . We assume access to a goal checking function $G: \mathcal{S} \times \mathcal{G} \to \{0,1\}$ such that $G(s,g) = 1$ if and only if state $s$ achieves goal $g$ . For example, achieving the goal could mean exactly reaching a certain state, in which case $\mathcal{G} = \mathcal{S}$ and
+
+
+Figure 2: Graphical model depicting trajectories from $\mathbb{P}_{\pi (\cdot |\cdot ,g,h)}(\cdot |s_0 = s,a_0 = a)$ . Gray nodes denote fixed values, and white nodes stochastic ones. Nodes $a,g$ and $s$ are non-stochastic simply because they are conditioned on, not because they are always fixed within the environment. Note that the values of $h$ decrease deterministically. Nodes corresponding to horizons could be separated from states, but are not for a more concise graph.
+
+$G(s,g) = \mathbb{1}(s = g)$ . For many continuous state-spaces, hitting a state exactly has zero probability. Here we can still take $\mathcal{G} = \mathcal{S}$ , but let $G(s,g) = \mathbb{1}(d(s,g)\leq \epsilon)$ for some radius $\epsilon$ and metric $d$ . More general choices are possible. For example, in the Dubins' Car environment which we describe in more detail later, the state consists of both the location and orientation of the car: $\mathcal{S} = \mathbb{R}^2\times S^1$ . We take $\mathcal{G} = \mathbb{R}^2$ , and $G(s,g)$ checks that the location of the car is within some small radius of $g$ , ignoring the direction entirely. For a fixed $g$ , $G(s,g)$ can be thought of as a sparse reward function.
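A goal-checking function of the form $G(s,g) = \mathbb{1}(d(s,g) \leq \epsilon)$ from the paragraph above can be sketched as follows (the Euclidean metric and the radius 0.1 are illustrative choices; in the Dubins' Car example the orientation component of the state would simply be dropped before the distance check):

```python
import math

def goal_check(s, g, eps=0.1):
    """G(s, g) = 1(d(s, g) <= eps) for a Euclidean goal region."""
    return 1 if math.dist(s, g) <= eps else 0

goal_check((0.0, 0.05), (0.0, 0.0))   # within the radius -> 1
goal_check((1.0, 0.0), (0.0, 0.0))    # outside the radius -> 0
```
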
+
+In the goal-reaching setting, a policy $\pi : \mathcal{S} \times \mathcal{G} \to \mathcal{P}(\mathcal{A})$ , where $\mathcal{P}(\mathcal{A})$ denotes the set of distributions over $\mathcal{A}$ , maps state-goal pairs to an action distribution. The environment dynamics are given by a starting distribution $p(s_0, g)$ , usually taken as $p(s_0)p(g)$ , and transition probabilities $p(s_{t+1}|s_t, a_t)$ . States for which $G(s, g) = 1$ are considered terminal.
+
+Q-Learning: A $Q$ -function (Watkins & Dayan, 1992) for multi-goal reaching, $Q^{\pi}: \mathcal{S} \times \mathcal{G} \times \mathcal{A} \to \mathbb{R}$ , is defined by $Q^{\pi}(s_t, g, a_t) = \mathbb{E}_{\pi}[\sum_{i=t}^{\infty} \gamma^{i-t} G(s_i, g)|s_t, a_t]$ , where $\gamma \in [0,1]$ is a discount factor and the expectation is with respect to state-action trajectories obtained by using $\pi(a|s_i, g)$ . If $\pi^*$ is an optimal policy in the sense that $Q^{\pi^*}(s, g, a) \geq Q^{\pi}(s, g, a)$ for every $\pi$ and $(s, g, a) \in \mathcal{S} \times \mathcal{G} \times \mathcal{A}$ , then $Q^{\pi^*}$ matches the optimal $Q$ -function, $Q^*$ , which obeys the Bellman equation:
+
+$$
+Q^{*}(s, g, a) = \mathbb{E}_{s' \sim p(\cdot|s,a)}\left[G(s, g) + \gamma \max_{a' \in \mathcal{A}} Q^{*}\left(s', g, a'\right)\right]. \tag{1}
+$$
+
+In deep $Q$ -learning (Mnih et al., 2015), $Q^{*}$ is parameterized with a neural network and learning is achieved by enforcing the relationship from equation 1. This is done by minimizing $\sum_{i} \mathcal{L}(Q^{*}(s_{i}, g_{i}, a_{i}), y_{i})$ , where $y_{i}$ corresponds to the expectation in equation 1 and is estimated using a replay buffer of stored tuples $(s_{i}, a_{i}, g_{i}, s_{i}')$ . Note that $s_{i}'$ is the state the environment transitioned to after taking action $a_{i}$ from state $s_{i}$ , and determines the value of $y_{i}$ . Typically $\mathcal{L}$ is chosen as a squared error loss, and the dependency of $y_{i}$ on $Q^{*}$ is ignored for backpropagation in order to stabilize training. Once $Q^{*}$ is learned, the optimal policy is recovered by $\pi^{*}(a|s, g) = \mathbb{1}(a = \arg \max_{a'} Q^{*}(s, g, a'))$ .
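The target computation described above can be sketched as follows. This is a minimal illustration of equation 1, not the paper's implementation; the batch values are made up, and in practice `q_next` comes from a separate target network:

```python
import numpy as np

def q_targets(q_next, goal_reward, gamma=0.99):
    """Bootstrapped targets y_i = G(s_i, g_i) + gamma * max_a' Q*(s'_i, g_i, a'),
    i.e. stochastic estimates of the right-hand side of equation 1.

    q_next[i, a'] holds the (target network's) Q-values at the next state s'_i;
    goal_reward[i] is the sparse reward G(s_i, g_i).
    """
    return goal_reward + gamma * q_next.max(axis=1)

# Illustrative batch of two replay tuples (s_i, a_i, g_i, s'_i).
y = q_targets(np.array([[0.5, 0.7], [0.1, 0.0]]), np.array([0.0, 1.0]))
```
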
+
+There is ample work extending and improving upon deep $Q$ -learning (Haarnoja et al., 2018). For example, Lillicrap et al. (2015) extend it to the continuous action space setting, and Fujimoto et al. (2018) further stabilize training. These improvements are fully compatible with goal-reaching (Pong et al., 2019; Bharadhwaj et al., 2020a; Ghosh et al., 2019). Andrychowicz et al. (2017) proposed Hindsight Experience Replay (HER), which relabels past experience as achieved goals, and allows sample efficient learning from sparse rewards (Nachum et al., 2018).
+
+# 3 CUMULATIVE ACCESSIBILITY FUNCTIONS
+
+We now consider horizon-aware policies $\pi : \mathcal{S} \times \mathcal{G} \times \mathbb{N} \to \mathcal{P}(\mathcal{A})$ , and define the cumulative accessibility function $C^{\pi}(s, a, g, h)$ , or $C$ -function, as the probability of reaching goal $g$ from state $s$ in at most $h$ steps by taking action $a$ and following the policy $\pi$ thereafter. By "following the policy $\pi$ thereafter" we mean that after $a$ , the next action $a_1$ is sampled from $\pi(\cdot | s_1, g, h - 1)$ , $a_2$ is sampled from $\pi(\cdot | s_2, g, h - 2)$ and so on. See Figure 2 for a graphical model depiction of how these trajectories are obtained. Importantly, an agent need not always act the same way at a particular state in order to reach a particular goal, thanks to horizon-awareness. We use $\mathbb{P}_{\pi(\cdot | s, g, h)}(\cdot | s_0 = s, a_0 = a)$ to denote probabilities in which actions are drawn in this manner and transitions are drawn according to the environment $p(s_{t + 1} | s_t, a)$ . More formally, $C^{\pi}$ is given by:
+
+$$
+C^{\pi}(s, a, g, h) = \mathbb{P}_{\pi(\cdot|\cdot, g, h)}\left(\max_{t = 0, \dots, h} G(s_t, g) = 1 \mid s_0 = s, a_0 = a\right). \tag{2}
+$$
+
+Proposition 1: $C^\pi$ can be framed as a $Q$ -function within the MDP formalism, and if $\pi^*$ is optimal in the sense that $C^{\pi^*}(s,a,g,h) \geq C^\pi (s,a,g,h)$ for every $\pi$ and $(s,a,g,h) \in S \times \mathcal{A} \times \mathcal{G} \times \mathbb{N}$ , then $C^{\pi^{*}}$ matches the optimal $C$ -function, $C^*$ , which obeys the following equation:
+
+$$
+C^{*}(s, a, g, h) = \begin{cases} \mathbb{E}_{s' \sim p(\cdot|s,a)}\left[\max_{a' \in \mathcal{A}} C^{*}\left(s', a', g, h-1\right)\right] & \text{if } G(s, g) = 0 \text{ and } h \geq 1, \\ G(s, g) & \text{otherwise.} \end{cases} \tag{3}
+$$
+
+See appendix A for a detailed mathematical proof of this proposition. The proof proceeds by first deriving a recurrence relationship that holds for any $C^\pi$ . In an analogous manner to the Bellman equation in $Q$ -learning, this recurrence involves an expectation over $\pi(\cdot | s', g, h - 1)$ , which, when replaced by a max returns the recursion for $C^*$ .
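The recursion in Proposition 1 can be sketched as tabular dynamic programming. The toy deterministic chain below is our own illustrative environment, not one from the paper; with deterministic transitions the expectation over $s'$ collapses to a single successor state:

```python
# Toy deterministic chain: states 0..4, actions move left/right, goal is state 4.
N_STATES, ACTIONS, GOAL = 5, (-1, 1), 4

def step(s, a):
    """Deterministic transition p(s' | s, a): move and clip to the chain."""
    return min(max(s + a, 0), N_STATES - 1)

def optimal_c(max_h):
    """Tabular C*[h, s, a] computed with the recursion of equation 3."""
    C = {}
    for h in range(max_h + 1):
        for s in range(N_STATES):
            for a in ACTIONS:
                if s == GOAL:                      # goal already achieved
                    C[h, s, a] = 1.0
                elif h == 0:                       # no steps remaining
                    C[h, s, a] = 0.0
                else:                              # bootstrap from horizon h - 1
                    s_next = step(s, a)
                    C[h, s, a] = max(C[h - 1, s_next, a2] for a2 in ACTIONS)
    return C

C = optimal_c(6)
# The goal is 4 steps from state 0: reachable within h = 4 but not h = 3,
# and the table is non-decreasing in h, as Proposition 2 guarantees.
```
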
+
+Proposition 1 is relevant as it allows us to learn $C^*$ , enabling goal-reaching policies to be recovered:
+
+$$
+\pi^{*}(a|s, g, h) = \mathbb{1}\left(a = \underset{a'}{\arg\max}\, C^{*}(s, a', g, h)\right). \tag{4}
+$$
+
+$C^*$ itself is useful for determining reachability. After maximizing over actions, it estimates whether a given goal is reachable from a state within some horizon. Comparing these probabilities for different horizons allows us to make a speed / reliability trade-off for reaching goals.
+
+We observe that an optimal $C^*$ -function is non-decreasing in $h$ , but this does not necessarily hold for non-optimal $C$ -functions. For example, a horizon-aware policy could actively try to avoid the goal for high values of $h$ , and the $C^\pi$ -function constructed from it would show lower probabilities of success for larger $h$ . See appendix A for a concrete example of this counter-intuitive behavior.
+
+# Proposition 2: $C^*$ is non-decreasing in $h$ .
+
+See appendix A for a detailed mathematical proof. Intuitively, the proof consists of showing that an optimal policy can not exhibit the pathology mentioned above. Given an optimal policy $\pi^{*}(a|s,g,h)$ for a fixed horizon $h$ we construct a policy $\tilde{\pi}$ for $h + 1$ which always performs better, and lower bounds the performance of $\pi^{*}(a|s,g,h + 1)$ .
+
+In addition to being an elegant theoretical property, proposition 2 suggests that there is additional structure in a $C^*$ function which mitigates the added complexity from using horizon-aware policies. Indeed, in our preliminary experiments we used a non-cumulative version of $C$ -functions (see section 3.3) and obtained significantly improved performance upon changing to $C$ -functions. Moreover, monotonicity in $h$ could be encoded in the architecture of $C^*$ (Sill, 1998; Wehenkel & Louppe, 2019). However, we found that actively doing so hurt empirical performance (appendix F).
+
+# 3.1 SHORTCOMINGS OF $Q$ -LEARNING
+
+Before describing our method for learning $C^*$ , we highlight a shortcoming of $Q$ -learning. Consider a 2D navigation environment where an agent can move deterministically in the cardinal directions, and fix $s$ and $g$ . For an optimal action $a^*$ , the optimal $Q$ function will achieve some value $Q^*(s, g, a^*) \in [0,1]$ in the sparse reward setting. Taking a sub-optimal action $a^-$ initially results in the agent taking two extra steps to reach the intended goal, given that the agent acts optimally after the first action, so that $Q^*(s, g, a^-) = \gamma^2 Q^*(s, g, a^*)$ . The value of $\gamma$ is typically chosen close to 1, for example 0.99, to ensure that future rewards are not too heavily discounted. As a consequence $\gamma^2 \approx 1$ and thus the value of $Q^*$ at the optimal action is very close to its value at a sub-optimal action. We illustrate this issue in Figure 1b. In this scenario, recovering an optimal policy requires that the error between the learned $Q$ -function and $Q^*$ should be at most $(1 - \gamma^2)/2$ ; this is reflected empirically by $Q$ -learning having high sample complexity and learning sub-optimal paths. This shortcoming surfaces in any environment where taking a sub-optimal action results in a slightly longer path than an optimal one, as in e.g. motion planning tasks.
+
+The $C^*$ function does not have this shortcoming. Consider the same 2D navigation example, and let $h^*$ be the smallest horizon for which $g$ can be reached from $s$ . $h^*$ can be easily obtained from $C^*$ as the smallest $h$ such that $\max_a C^*(s, a, g, h) = 1$ . Again, denoting $a^*$ as an optimal action and $a^-$ as a sub-optimal one, we have that $C^*(s, a^*, g, h^*) = 1$ whereas $C^*(s, a^-, g, h^*) = 0$ , which is illustrated in Figure 1b. Therefore, the threshold for error is much higher when learning the $C^*$ function. This property results in fewer interactions with the environment needed to learn $C^*$ and more efficient solutions.
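The numbers in Figure 1b can be reproduced directly, contrasting the narrow value gap $Q$-learning must resolve with the full-range gap offered by $C^*$:

```python
# Figure 1b's example: gamma = 0.99, optimal horizon h* = 5, and a
# sub-optimal first action that costs two extra steps.
gamma, h_star = 0.99, 5

q_opt = gamma ** h_star            # Q*(s, g, a*) = 0.99^5 ~ 0.951
q_sub = gamma ** (h_star + 2)      # Q*(s, g, a-) = 0.99^7 ~ 0.932
q_gap = q_opt - q_sub              # ~0.019: tiny gap Q-learning must resolve
q_tol = (1 - gamma ** 2) / 2       # ~0.00995: max admissible Q-learning error

c_gap = 1.0 - 0.0                  # C*(s, a*, g, h*) - C*(s, a-, g, h*)
```
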
+
+# 3.2 HORIZON-INDEPENDENT POLICIES
+
+Given a $C^*$ -function, equation 4 lets us recover a horizon-aware policy. At test time, using small values of $h$ can achieve goals faster, while large values of $h$ will result in safe policies. However, a natural question arises: how small is small and how large is large? In this section we develop a method to quantitatively recover reasonable values of $h$ which adequately balance the speed/reliability trade-off. First, a safety threshold $\alpha \in (0,1]$ is chosen, which indicates the percentage of the maximum value of $C^*$ we are willing to accept as safe enough. Smaller values of $\alpha$ will thus result in quicker policies, while larger values will result in safer ones. Then we consider a range of viable horizons, $\mathcal{H}$ , and find the maximal $C^*$ value, $M(s,g) = \max_{h\in \mathcal{H},a\in \mathcal{A}}C^{*}(s,a,g,h)$ . We then compute:
+
+$$
+h_{\alpha}(s, g) = \underset{h \in \mathcal{H}}{\arg\min}\left\{\max_{a \in \mathcal{A}} C^{*}(s, a, g, h) : \max_{a \in \mathcal{A}} C^{*}(s, a, g, h) \geq \alpha M(s, g)\right\}, \tag{5}
+$$
+
+and take $\pi_{\alpha}^{*}(a|s,g) = \mathbb{1}\left(a = \arg \max_{a'}C^{*}(s,a',g,h_{\alpha}(s,g))\right)$ . This procedure also allows us to recover horizon-independent policies from $C^*$ by using a fixed value of $\alpha$ , which makes comparing against horizon-unaware methods straightforward. We used horizon-unaware policies with added randomness as the behaviour policy when interacting with the environment for exploration.
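The horizon-selection rule of equation 5 can be sketched as follows. The table of per-horizon maxima is a hypothetical precomputed input; by Proposition 2 these maxima are non-decreasing in $h$, so the argmin of the value coincides with the smallest feasible horizon:

```python
def select_horizon(c_max, alpha):
    """h_alpha from equation 5: among horizons h whose success probability
    max_a C*(s, a, g, h) clears alpha * M(s, g), pick the one with the
    smallest such probability (i.e. the quickest acceptable horizon)."""
    big_m = max(c_max.values())                         # M(s, g)
    feasible = {h: v for h, v in c_max.items() if v >= alpha * big_m}
    return min(feasible, key=feasible.get)

# Hypothetical maxima of C* over actions for one (s, g) pair.
c_max = {2: 0.3, 5: 0.7, 10: 0.9, 20: 0.95}
fast = select_horizon(c_max, alpha=0.5)    # quicker but riskier horizon
safe = select_horizon(c_max, alpha=1.0)    # safest available horizon
```
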
+
+# 3.3 ALTERNATIVE RECURSIONS
+
+One could consider defining a non-cumulative version of the $C$ -function, which we call an $A$ -function for "accessibility" (not for "advantage"), yielding the probability of reaching a goal in exactly $h$ steps. However, this version is more susceptible to pathological behaviors that hinder learning for certain environments. For illustration, consider an agent that must move one step at a time in the cardinal directions on a checkerboard. Starting on a dark square, the probability of reaching a light square in an even number of steps is always zero, but may be non-zero for odd numbers. An optimal $A$ -function would fluctuate wildly as the step horizon $h$ is increased, resulting in a harder target to learn. Nonetheless, $A$ -functions admit a similar recursion to equation 18, which we include in appendix C for completeness. In any case, the $C$ -function provides a notion of reachability which is more closely tied to related work (Savinov et al., 2018; Venkattaramanujam et al., 2019; Ghosh et al., 2018; Bharadhwaj et al., 2020a).
+
+In $Q$ -learning, discount factors close to 1 encourage safe policies, while discount factors close to 0 encourage fast policies. One could then also consider discount-conditioned policies $\pi(a|s, g, \gamma)$ as a way to achieve horizon-awareness. In appendix D we introduce $D$ -functions (for "discount"), $D(s, a, g, \gamma)$ , which allow recovery of discount-conditioned policies. However $D$ -functions suffer from the same limitation as $Q$ -functions in that they need to be learned to a high degree of accuracy.
+
+# 4 CUMULATIVE ACCESSIBILITY ESTIMATION
+
+Our training algorithm, which we call cumulative accessibility estimation (CAE), or $C$ -learning, is detailed in algorithm 1. Similarly to $Q$ -learning, the $C^*$ function can be modelled as a neural network with parameters $\theta$ , denoted $C_{\theta}^{*}$ , which can be learned by minimizing:
+
+$$
+-\sum_{i}\left[y_i \log C_{\theta}^{*}\left(s_i, a_i, g_i, h_i\right) + \left(1 - y_i\right) \log\left(1 - C_{\theta}^{*}\left(s_i, a_i, g_i, h_i\right)\right)\right], \tag{6}
+$$
+
+where $y_{i}$ corresponds to a stochastic estimate of the right hand side of equation 18. The sum is over tuples $(s_i, a_i, s_i', g_i, h_i)$ drawn from a replay buffer which we specially tailor to successfully learn $C^*$ . Since $C^*$ corresponds to a probability, we change the usual squared loss to be the binary cross entropy loss. Note that even if the targets $y_{i}$ are not necessarily binary, the use of binary cross entropy is still justified as it is equivalent to minimizing the KL divergence between Bernoulli distributions with parameters $y_{i}$ and $C_{\theta}^{*}(s_i, a_i, g_i, h_i)$ (Bellemare et al., 2017). Since the targets used do not correspond to the right-hand side of equation 18 but to a stochastic estimate (through $s_i'$ ) of it, using this loss instead of the square loss results in an unbiased estimate of $\sum_{i} \mathcal{L}(C_{\theta}^{*}(s_i, a_i, g_i, h_i), y_i^{\mathrm{true}})$ , where $y_i^{\mathrm{true}}$ corresponds to the right-hand side of equation 18 at $(s_i, a_i, g_i, h_i)$ (see appendix B for a detailed explanation). This confers another benefit of $C$ -learning over $Q$ -learning, as passing a stochastic estimate of equation 1 through the typical squared loss results in biased estimators. As in Double $Q$ -learning (Van Hasselt et al., 2015), we use a second network $C_{\theta'}^*$ to compute the $y_{i}$ targets, and do not minimize equation 6 with respect to $\theta'$ . We periodically update $\theta'$ to match $\theta$ . We now explain our algorithmic details.
+
+Algorithm 1: Training C-learning Network
+Parameter: $N_{\mathrm{explore}}$ : Number of random exploration episodes
+Parameter: $N_{\mathrm{GD}}$ : Number of goal-directed episodes
+Parameter: $N_{\mathrm{train}}$ : Number of batches to train on per goal-directed episode
+Parameter: $N_{\mathrm{copy}}$ : Number of batches between target network updates
+Parameter: $\alpha$ : Learning rate
+1 $\theta \gets$ Initial weights for $C_\theta^*$
+2 $\theta^{\prime}\gets \theta$ // Copy weights to target network
+3 $\mathcal{D}\gets []$ // Initialize experience replay buffer
+4 $n_{\mathrm{b}}\gets 0$ // Counter for training batches
+5 repeat $N_{\mathrm{explore}}$ times
+6 $\mathcal{E}\gets$ get_rollout($\pi_{\mathrm{random}}$) // Do a random rollout
+7 $\mathcal{D}$.append($\mathcal{E}$) // Save the rollout to the buffer
+8 repeat $N_{\mathrm{GD}}$ times
+9 $g\gets$ goal_sample($n_{\mathrm{b}}$) // Sample a goal
+10 $\mathcal{E}\gets$ get_rollout($\pi_{\mathrm{behavior}}$) // Try to reach the goal
+11 $\mathcal{D}$.append($\mathcal{E}$) // Save the rollout
+12 repeat $N_{\mathrm{train}}$ times
+13 if $n_{\mathrm{b}}$ mod $N_{\mathrm{copy}} = 0$ then
+14 $\theta'\gets \theta$ // Copy weights periodically
+15 $\mathcal{B}:= \{s_i,a_i,s_i',g_i,h_i\}_{i = 1}^{|\mathcal{B}|}\gets$ sample_batch($\mathcal{D}$) // Sample a batch
+16 $\{y_i\}_{i = 1}^{|\mathcal{B}|}\gets$ get_targets($\mathcal{B}$, $\theta'$) // Estimate RHS of equation 18
+17 $\hat{\mathcal{L}}\gets -\frac{1}{|\mathcal{B}|}\sum_{i = 1}^{|\mathcal{B}|}\left[y_i\log C_\theta^* (s_i,a_i,g_i,h_i) + (1 - y_i)\log (1 - C_\theta^* (s_i,a_i,g_i,h_i))\right]$
+18 $\theta \gets \theta -\alpha \nabla_{\theta}\hat{\mathcal{L}}$ // Update weights
+19 $n_{\mathrm{b}}\gets n_{\mathrm{b}} + 1$ // Trained one batch
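To make lines 16–17 concrete, the target construction and loss admit a short sketch. The tabular `C_target` callable, the goal test `G`, and the batch tuple layout are illustrative assumptions, not the paper's implementation (which uses a neural network and automatic differentiation):

```python
import numpy as np

def c_targets(C_target, batch, actions, G):
    """Single-sample bootstrap targets y_i, as in get_targets (line 16):
    y_i = max_a' C_target(s'_i, a', g_i, h_i - 1) when the goal is not yet
    reached and h_i >= 1, and y_i = G(s_i, g_i) otherwise."""
    ys = []
    for (s, a, s_next, g, h) in batch:
        if G(s, g) == 0 and h >= 1:
            ys.append(max(C_target(s_next, ap, g, h - 1) for ap in actions))
        else:
            ys.append(float(G(s, g)))
    return np.array(ys)

def bce_loss(c_pred, y):
    """Binary cross-entropy between targets y_i and predictions (line 17)."""
    eps = 1e-12  # guard against log(0)
    return -np.mean(y * np.log(c_pred + eps) + (1 - y) * np.log(1 - c_pred + eps))
```

In the unbiasedness argument of appendix B, it is exactly the linearity of this loss in the targets that lets the single-sample `y_i` stand in for its expectation.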
+
+Reachability-guided sampling: When sampling a batch (line 15 of algorithm 1), for each $i$ we first sample an episode from $\mathcal{D}$ , and then a transition $(s_i, a_i, s_i')$ from that episode. We sample $h_i$ favoring smaller $h$ 's at the beginning of training. We achieve this by selecting $h_i = h$ with probability proportional to $h^{-\kappa n_{GD} / N_{GD}}$ , where $n_{GD}$ is the current episode number, $N_{GD}$ the total number of episodes, and $\kappa$ is a hyperparameter. For deterministic environments, we follow HER (Andrychowicz et al., 2017) and select $g_i$ uniformly at random from the states observed in the episode after $s_i$ . For stochastic environments we use a slightly modified version, which addresses the bias incurred by HER (Matthias et al., 2018) in the presence of stochasticity. All details are included in appendix G.
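The horizon schedule above can be sketched in a few lines (the maximum horizon `h_max` is an assumed hyperparameter, and NumPy is used for illustration):

```python
import numpy as np

def sample_horizon(h_max, n_gd, N_gd, kappa, rng):
    """Sample h in {1, ..., h_max} with P(h) proportional to
    h^(-kappa * n_gd / N_gd), per the reachability-guided schedule."""
    hs = np.arange(1, h_max + 1)
    weights = hs ** (-kappa * n_gd / N_gd)
    return int(rng.choice(hs, p=weights / weights.sum()))
```

At $n_{GD} = 0$ the exponent vanishes and the distribution over horizons is uniform; as $n_{GD}$ approaches $N_{GD}$ the mass concentrates on small $h$.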
+
+Extension to continuous action spaces: Note that constructing the targets $y_{i}$ requires taking a maximum over the action space. While this is straightforward in discrete action spaces, it is not so for continuous actions. Lillicrap et al. (2015) proposed DDPG, a method enabling deep $Q$ -learning in continuous action spaces. Similarly, Fujimoto et al. (2018) proposed TD3, a method for further stabilizing $Q$ -learning. We note that $C$ -learning is compatible with the ideas underpinning both of these methods. We present a TD3 version of $C$ -learning in appendix E.
+
+We finish this section by highlighting differences between $C$ -learning and related work. Ghosh et al. (2019) proposed GCSL, a method for goal-reaching inspired by supervised learning. In their derivations, they also include a horizon $h$ which their policies can depend on, but they drop this dependence in their experiments as they did not see a practical benefit by including $h$ . We find the opposite for $C$ -learning. TDMs (Pong et al., 2018) use horizon-aware policies and a similar recursion to ours. In practice they use a negative $L_{1}$ distance reward, which significantly differs from our goal-reaching indicators. This is an important difference as TDMs operate under dense rewards, while we use sparse rewards, making the problem significantly more challenging. Additionally, distance between states in nonholonomic environments is very poorly described by $L_{1}$ distance, resulting in TDMs being ill-suited for motion planning tasks. We also highlight that even if the reward in TDMs was swapped with our goal checking function $G$ , the resulting objective would be much closer to the non-cumulative version of $C$ -learning presented in appendix C. TDMs recover policies from their horizon-aware $Q$ -function in a different manner to ours, not allowing a speed vs reliability trade-off. The recursion presented for TDMs is used by definition, whereas we have
+
+
+Figure 3: Experimental environments, from left to right: frozen lake, Dubins' car, FetchPickAndPlace-v1 and HandManipulatePenFull-v0. Arrows represent actions (only their direction should be considered, their magnitude is not representative of the distance travelled by an agent taking an action). See text for description.
+
+provided a mathematical derivation of the Bellman equation for $C$ -functions along with proofs of additional structure. As a result of the $C$ -function being a probability we use a different loss function (binary cross-entropy) which results in unbiased estimates of the objective's gradients. Finally, we point out that TDMs sample horizons uniformly at random, which differs from our specifically tailored replay buffer.
+
+# 5 EXPERIMENTS
+
+# 5.1 SETUP
+
+Our experiments are designed to show that (i) $C$ -learning enables the speed vs reliability trade-off, (ii) the $C^*$ function recovered through $C$ -learning meaningfully matches reachability in nonholonomic environments, and (iii) $C$ -learning scales to complex and high-dimensional motion planning tasks, resulting in improved sample complexity and goal-reaching. We use success rate (percentage of trajectories that reach the goal) and path length (number of steps needed to reach the goal, averaged over goals and successful trajectories) as metrics.
+
+We compare $C$ -learning against GCSL (Ghosh et al., 2019) (for discrete action spaces only) and deep $Q$ -learning with HER (Andrychowicz et al., 2017) across several environments, which are depicted in Figure 3. All experimental details and hyperparameters are given in appendix G. We also provide ablations, and comparisons against TDMs and the alternative recursions from section 3.3 in appendix F. We evaluate $C$ -learning on the following domains:
+
+1. Frozen lake is a 2D navigation environment where the agent must navigate without falling into the holes (dark blue). Both the state space (a $7 \times 5$ grid with two $3 \times 1$ holes) and the action space are discrete. Falling into a hole terminates an episode. The agent's actions correspond to intended directions. The agent moves in the intended direction with probability 0.8, and in each perpendicular direction with probability 0.1. Moving against the boundaries of the environment has no effect. We take $\mathcal{G} = \mathcal{S}$ and $G(s, g) = \mathbb{1}(g = s)$.
+2. Dubins' car is a more challenging deterministic 2D navigation environment, where the agent drives a car which cannot turn by more than $10^{\circ}$ . The states, with spatial coordinates in $[0,15]^2$ , are continuous and include the direction of the car. There are 7 actions: the 6 combinations of $\{\text{left } 10^{\circ}, \text{straight}, \text{right } 10^{\circ}\} \times \{\text{forward } 1, \text{reverse } 1\}$ , and the option to not move. As such, the set of reachable states is not simply a ball around the current state. The environment also has walls through which the car cannot drive. We take $\mathcal{G} = [0,15]^2$ and the goal is considered to be reached when the car is within an $L_{\infty}$ distance of 0.5 from the goal, regardless of its orientation.
+3. FetchPickAndPlace-v1 (Brockman et al., 2016) is a complex, higher-dimensional environment in which a robotic arm needs to pick up a block and move it to the goal location. Goals are defined by their 3-dimensional coordinates. The state space is 25-dimensional, and the action space is continuous and 4-dimensional.
+4. HandManipulatePenFull-v0 (Brockman et al., 2016) is a realistic environment known to be a difficult goal-reaching problem, where deep $Q$-learning with HER shows very limited success (Plappert et al., 2018). The environment has a continuous action space of dimension 20, a 63-dimensional state space, and 7-dimensional goals. Note that we are considering the more challenging version of the environment, where both target location and orientation are chosen randomly.
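As an aside on environment 1 above, its slip dynamics are simple to state in code (the grid-coordinate convention and action names are assumptions for illustration; holes are omitted):

```python
import numpy as np

MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
PERP = {"up": ("left", "right"), "down": ("left", "right"),
        "left": ("up", "down"), "right": ("up", "down")}

def frozen_lake_step(s, a, width=7, height=5, rng=None):
    """Move in the intended direction with probability 0.8 and in each
    perpendicular direction with probability 0.1; moving against a
    boundary leaves the corresponding coordinate unchanged."""
    if rng is None:
        rng = np.random.default_rng()
    direction = rng.choice([a, *PERP[a]], p=[0.8, 0.1, 0.1])
    x = min(max(s[0] + MOVES[direction][0], 0), width - 1)
    y = min(max(s[1] + MOVES[direction][1], 0), height - 1)
    return (x, y)
```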
+
+Figure 4: (a) Multimodal policies recovered by $C$ -learning in frozen lake for different values of $h$ for reaching $(G)$ from $(S)$ (top); unimodal policies recovered by GCSL and HER (bottom). (b) Heatmaps over the goal space of $\max_{a} C^{*}(s, a, g, h)$ with a fixed $s$ and $h$ for Dubins' car, with $h = 7$ (top), $h = 15$ (middle) and $h = 25$ (bottom). (c) Trajectories learned by $C$ -learning (top), GCSL (middle) and HER (bottom) for Dubins' car. (d) Success rate throughout training for Dubins' car (top), FetchPickAndPlace-v1 (middle) and HandManipulatePenFull-v0 (bottom) for $C$ -learning (blue), HER (red) and GCSL (green).
+
+# 5.2 RESULTS
+
+Speed vs reliability trade-off: We use frozen lake to illustrate this trade-off. At test time, we set the starting state and goal as shown in Figure 4a. Notice that, given enough time, an agent can reach the goal with near certainty by going around the holes on the right side of the lake. However, if the agent is time-constrained, the optimal policy must accept the risk of falling in a hole. We see that $C$ -learning does indeed learn both the risky and safe policies. Other methods, as previously explained, can only learn one. To avoid re-plotting the environment for every horizon, we have plotted arrows in Figure 4a corresponding to $\arg \max_{a} \pi^{*}(a|s_{0}, g, h)$ , $\arg \max_{a} \pi^{*}(a|s_{1}, g, h - 1)$ , $\arg \max_{a} \pi^{*}(a|s_{2}, g, h - 2)$ and so on, where $s_{t + 1}$ is obtained from the previous state and action while ignoring the randomness in the environment (i.e. $s_{t + 1} = \arg \max_{s} p(s|s_{t}, a_{t})$ ). In other words, we are plotting the most likely trajectory of the agent. When given the minimal amount of time to reach the goal ( $h = 6$ ), the CAE agent learns correctly to accept the risk of falling in a hole by taking the direct path. When given four times as long ( $h = 24$ ), the agent takes a safer path by going around the hole. Notice that the agent makes a single mistake at the upper right corner, which is quickly corrected when the value of $h$ is decreased. On the other hand, we can see on the bottom panel that both GCSL and HER recover a single policy, thus not enabling the speed vs reliability trade-off. Surprisingly, GCSL learns to take the high-risk path, despite Ghosh et al. (2019)'s intention to incentivize paths that are guaranteed to reach the goal.
+
+Reachability learning: To demonstrate that we are adequately learning reachability, we removed the walls from Dubins' car and further restricted the turning angles to $5^{\circ}$ , thus making the dumbbell shape of the true reachable region more extreme. In Figure 4b, we show that our learned $C^*$ function correctly learns which goals are reachable from which states for different time horizons in Dubins' car: not only is the learned $C^*$ function increasing in $h$ , but the shapes are as expected. None of the competing alternatives recover this information in any way, and thus comparisons are not available. As previously mentioned, the optimal $C^*$ -function in this task is not trivial. Reachability is defined by the "geodesics" in $\mathcal{S} = [0,15]^2 \times S^1$ that are constrained by the finite turning radius, and thus follow a more intricate structure than a ball in $\mathbb{R}^2$ .
+
+Table 1: Comparison of $C$ -Learning against relevant benchmarks in three environments, averaged across five random seeds. Runs in bold are either the best on the given metric in that environment, or have a mean score within the error bars of the best.
+
+| ENVIRONMENT | METHOD | SUCCESS RATE | PATH LENGTH |
+| --- | --- | --- | --- |
+| Dubins' Car | CAE | 86.15% ± 2.44% | 16.45 ± 0.99 |
+| Dubins' Car | GCSL | 79.69% ± 6.35% | 32.64 ± 6.16 |
+| Dubins' Car | HER | 51.25% ± 6.48% | 20.13 ± 1.66 |
+| FetchPickAndPlace-v1 | CAE (TD3) | 99.34% ± 0.27% | 8.53 ± 0.09 |
+| FetchPickAndPlace-v1 | HER (TD3) | 98.25% ± 2.51% | 8.86 ± 0.14 |
+| HandManipulatePenFull-v0 | CAE (TD3) | 39.83% ± 1.64% | 7.68 ± 0.90 |
+| HandManipulatePenFull-v0 | HER (TD3) | 21.99% ± 2.46% | 15.03 ± 0.71 |
+
+Motion planning and goal-reaching: For a qualitative comparison of performance for goal-reaching, Figure 4c shows trajectories for $C$ -learning and competing alternatives for Dubins' car. We can see that $C$ -learning finds the optimal route, which is a combination of forward and backward movement, while other methods find inefficient paths. We also evaluated our method on challenging goal reaching environments, and observed that $C$ -learning achieves state-of-the-art results both in sample complexity and success rate. Figure 4d shows that $C$ -learning is able to learn considerably faster in the FetchPickAndPlace-v1 environment. More importantly, $C$ -learning achieves an absolute $20\%$ improvement in its success rate over the current state-of-the-art algorithm HER (TD3) in the HandManipulatePenFull-v0 environment. Quantitative results are shown in Table 1. On Dubins' car, we also note that $C$ -learning ends up with a smaller final $L_{\infty}$ distance to the goal: $0.93 \pm 0.15$ vs. $1.28 \pm 0.51$ for GCSL.
+
+# 6 DISCUSSION
+
+We have shown that $C$ -learning enables simultaneous learning of how to reach goals quickly and how to reach them safely, which current methods cannot do. We point out that reaching the goal safely in our setting means doing so at test time, which differs from what is usually considered in the safety literature, where safe exploration is desired (Chow et al., 2018; Bharadhwaj et al., 2020b). Additionally, learning $C^*$ effectively learns reachability within an environment, and could thus naturally lend itself to incorporation into other frameworks, for example, in goal setting for hierarchical RL tasks (Nachum et al., 2018) where intermediate, reachable goals need to be selected sequentially. We believe further investigations on using $C$ -functions in safety-critical environments requiring adaptation (Peng et al., 2018; Zhang et al., 2020), or for hierarchical RL, are promising directions for further research.
+
+We have also argued that $C$ -functions are more tolerant of errors during learning than $Q$ -functions, which increases sample efficiency. This is verified empirically: $C$ -learning is able to solve goal-reaching tasks earlier in training than $Q$ -learning.
+
+We finish by noticing a parallel between $C$ -learning and the options framework (Sutton et al., 1999; Bacon et al., 2017), which introduces temporal abstraction and allows agents to not always follow the same policy when at a given state $s$ . However, our work does not fit within this framework, as options evolve stochastically and new options are selected according only to the current state, while horizons evolve deterministically and depend on the previous horizon only, not the state. Additionally, and unlike $C$ -learning, nothing encourages different options to learn safe or quick policies, and there is no reachability information contained in options.
+
+# 7 CONCLUSIONS
+
+In this paper we introduced $C$ -learning, a $Q$ -learning-inspired method for goal-reaching. Unlike previous approaches, we propose the use of horizon-aware policies, and show that not only can these policies be tuned to reach the goal faster or more reliably, but they also outperform horizon-unaware approaches for goal-reaching on complex motion planning tasks. We hope our method will inspire further research into horizon-aware policies and their benefits.
+
+# REFERENCES
+
+Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in neural information processing systems, pp. 5048-5058, 2017.
+Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
+Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. arXiv preprint arXiv:1707.06887, 2017.
+Homanga Bharadhwaj, Animesh Garg, and Florian Shkurti. Leaf: Latent exploration along the frontier. arXiv preprint arXiv:2005.10934, 2020a.
+Homanga Bharadhwaj, Aviral Kumar, Nicholas Rhinehart, Sergey Levine, Florian Shkurti, and Animesh Garg. Conservative safety critics for exploration. arXiv preprint arXiv:2010.14497, 2020b.
+Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym, 2016.
+Yinlam Chow, Ofir Nachum, Edgar Duenez-Guzman, and Mohammad Ghavamzadeh. A lyapunov-based approach to safe reinforcement learning. In Advances in neural information processing systems, pp. 8092-8101, 2018.
+Scott Fujimoto, Herke Van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477, 2018.
+Dibya Ghosh, Abhishek Gupta, and Sergey Levine. Learning actionable representations with goal-conditioned policies. arXiv preprint arXiv:1811.07819, 2018.
+Dibya Ghosh, Abhishek Gupta, Ashwin Reddy, Justin Fu, Coline Devin, Benjamin Eysenbach, and Sergey Levine. Learning to Reach Goals via Iterated Supervised Learning. arXiv e-prints, art. arXiv:1912.06088, December 2019.
+Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
+Leslie Pack Kaelbling. Learning to achieve goals. In *IJCAI*, pp. 1094–1099. CiteSeer, 1993.
+Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
+Plappert Matthias, Marcin Andrychowicz, Alex Ray, Bob McGrew, Bowen Baker, Glenn Powell, Jonas Schneider, Josh Tobin, Maciek Chociej, Peter Welinder, Vikash Kumar, and Wojciech Zaremba. Multi-goal reinforcement learning: Challenging robotics environments and request for research. arXiv preprint arXiv:1802.09464, 2018.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. nature, 518(7540):529-533, 2015.
+Ofir Nachum, Shixiang Shane Gu, Honglak Lee, and Sergey Levine. Data-efficient hierarchical reinforcement learning. In Advances in Neural Information Processing Systems, pp. 3303-3313, 2018.
+Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 1-8. IEEE, 2018.
+
+Matthias Plappert, Marcin Andrychowicz, Alex Ray, Bob McGrew, Bowen Baker, Glenn Powell, Jonas Schneider, Josh Tobin, Maciek Chociej, Peter Welinder, et al. Multi-goal reinforcement learning: Challenging robotics environments and request for research. arXiv preprint arXiv:1802.09464, 2018.
+Vitchyr Pong, Shixiang Gu, Murtaza Dalal, and Sergey Levine. Temporal difference models: Model-free deep rl for model-based control. arXiv preprint arXiv:1802.09081, 2018.
+Vitchyr H Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl, and Sergey Levine. Skewfit: State-covering self-supervised reinforcement learning. arXiv preprint arXiv:1903.03698, 2019.
+Ahmed Qureshi, Yinglong Miao, Anthony Simeonov, and Michael Yip. Motion planning networks: Bridging the gap between learning-based and classical motion planners. arXiv preprint arXiv:1907.06013, 2020.
+Nikolay Savinov, Anton Raichuk, Raphaël Marinier, Damien Vincent, Marc Pollefeys, Timothy Lillicrap, and Sylvain Gelly. Episodic curiosity through reachability. arXiv preprint arXiv:1810.02274, 2018.
+Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International conference on machine learning, pp. 1312-1320, 2015.
+Joseph Sill. Monotonic networks. In Advances in neural information processing systems, pp. 661-667, 1998.
+Richard S Sutton, Andrew G Barto, et al. Introduction to reinforcement learning, volume 135. MIT press Cambridge, 1998.
+Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1-2):181-211, 1999.
+Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. arXiv preprint arXiv:1509.06461, 2015.
+Srinivas Venkattaramanujam, Eric Crawford, Thang Doan, and Doina Precup. Self-supervised learning of distance functions for goal-conditioned reinforcement learning. arXiv preprint arXiv:1907.02998, 2019.
+Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279-292, 1992.
+Antoine Wehenkel and Gilles Louppe. Unconstrained monotonic neural networks. In Advances in Neural Information Processing Systems, pp. 1545-1555, 2019.
+Jesse Zhang, Brian Cheung, Chelsea Finn, Sergey Levine, and Dinesh Jayaraman. Cautious adaptation for reinforcement learning in safety-critical settings. arXiv preprint arXiv:2008.06622, 2020.
+
+# A PROOFS OF PROPOSITIONS
+
+Proposition 1: $C^\pi$ can be framed as a $Q$ -function within the MDP formalism, and if $\pi^*$ is optimal in the sense that $C^{\pi^*}(s,a,g,h) \geq C^\pi (s,a,g,h)$ for every $\pi$ and $(s,a,g,h) \in S \times \mathcal{A} \times \mathcal{G} \times \mathbb{N}$ , then $C^{\pi^{*}}$ matches the optimal $C$ -function, $C^*$ , which obeys the following equation:
+
+$$
+C^*(s, a, g, h) = \left\{ \begin{array}{ll} \mathbb{E}_{s' \sim p(\cdot | s, a)} \left[ \max_{a' \in \mathcal{A}} C^*(s', a', g, h - 1) \right] & \text{if } G(s, g) = 0 \text{ and } h \geq 1, \\ G(s, g) & \text{otherwise.} \end{array} \right.
+$$
+
+Proof: First, we frame $C$ -functions within the MDP formalism. Consider a state space given by $S' = S \times \mathcal{G} \times \mathbb{N}$ , where states corresponding to reached goals (i.e. $G(s,g) = 1$ ) or with last coordinate equal to 0 (i.e. $h = 0$ ) are considered terminal. The dynamics in this environment are given by:
+
+$$
+p \left(s _ {t + 1}, g _ {t + 1}, h _ {t + 1} \mid s _ {t}, g _ {t}, h _ {t}, a _ {t}\right) = p \left(s _ {t + 1} \mid s _ {t}, a _ {t}\right) \mathbb {1} \left(g _ {t + 1} = g _ {t}\right) \mathbb {1} \left(h _ {t + 1} = h _ {t} - 1\right). \tag {7}
+$$
+
+That is, states evolve according to the original dynamics, while goals remain unchanged and horizons decrease by one at every step. The initial distribution is given by:
+
+$$
+p \left(s _ {0}, g _ {0}, h _ {0}\right) = p \left(s _ {0}\right) p \left(g _ {0}, h _ {0} \mid s _ {0}\right), \tag {8}
+$$
+
+where $p(s_0)$ corresponds to the original starting distribution and $p(g_0,h_0|s_0)$ needs to be specified beforehand. Together, the initial distribution and the dynamics, along with a policy $\pi (a|s,g,h)$ properly define a distribution over trajectories, and again, we use $\mathbb{P}_{\pi (\cdot |\cdot ,g,h)}$ to denote probabilities with respect to this distribution. The reward function $r:\mathcal{S}^{\prime}\times \mathcal{A}\to \mathbb{R}$ is given by:
+
+$$
+r (s, a, g, h) = G (s, g). \tag {9}
+$$
+
+By taking the discount factor to be 1, and since we take states for which $G(s,g) = 1$ to be terminal, the return (sum of rewards) over a trajectory $(s_0,s_1,\ldots)$ with $h_0 = h$ is given by:
+
+$$
+\max _ {t = 0, \dots , h} G \left(s _ {t}, g\right). \tag {10}
+$$
+
+For notational simplicity we take $G(s_{t},g) = 0$ whenever $t$ is greater than the length of the trajectory to properly account for terminal states. Since the return is binary, its expectation matches its probability of being equal to 1, so that indeed, the $Q$ -functions in this MDP correspond to $C^\pi$ :
+
+$$
+C^{\pi}(s,a,g,h) = \mathbb{P}_{\pi (\cdot | \cdot ,g,h)}\left(\max_{t = 0,\ldots ,h}G(S_{t},g) = 1\bigg|s_{0} = s,a_{0} = a\right).
+$$
+
+Now, we derive the Bellman equation for our $C$ -functions. Trivially:
+
+$$
+C ^ {\pi} (s, a, g, h) = G (s, g), \tag {11}
+$$
+
+whenever $G(s, g) = 1$ or $h = 0$ . For the rest of the derivation, we assume that $G(s, g) = 0$ and $h \geq 1$ . We note that the probability of reaching $g$ from $s$ in at most $h$ steps is given by the probability of reaching it in exactly one step, plus the probability of not reaching it in the first step and reaching it in at most $h - 1$ steps thereafter. Formally:
+
+$$
+\begin{array}{l} C ^ {\pi} (s, a, g, h) \tag {12} \\ = C ^ {\pi} (s, a, g, 1) + \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} \left[ (1 - G (s ^ {\prime}, g)) \mathbb {E} _ {a ^ {\prime} \sim \pi (\cdot | s ^ {\prime}, g, h - 1)} \left[ C ^ {\pi} (s ^ {\prime}, a ^ {\prime}, g, h - 1) \right] \right] \\ = \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} [ G (s ^ {\prime}, g) ] + \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} \left[ (1 - G (s ^ {\prime}, g)) \mathbb {E} _ {a ^ {\prime} \sim \pi (\cdot | s ^ {\prime}, g, h - 1)} \left[ C ^ {\pi} (s ^ {\prime}, a ^ {\prime}, g, h - 1) \right] \right] \\ = \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} \left[ \mathbb {E} _ {a ^ {\prime} \sim \pi (\cdot | s ^ {\prime}, g, h - 1)} \left[ C ^ {\pi} \left(s ^ {\prime}, a ^ {\prime}, g, h - 1\right) \right] \right], \\ \end{array}
+$$
+
+where the last equality follows from the fact that $C^\pi(s', a', g, h - 1) = 1$ whenever $G(s', g) = 1$ . Putting everything together, we obtain the Bellman equation for $C^\pi$ :
+
+$$
+C^{\pi}(s, a, g, h) = \left\{ \begin{array}{ll} \mathbb{E}_{s' \sim p(\cdot | s, a)} \left[ \mathbb{E}_{a' \sim \pi(\cdot | s', g, h - 1)} \left[ C^{\pi}(s', a', g, h - 1) \right] \right] & \text{if } G(s, g) = 0 \text{ and } h \geq 1, \\ G(s, g) & \text{otherwise.} \end{array} \right. \tag{13}
+$$
+
+Recall that the optimal policy is defined by the fact that $C^* \geq C^\pi$ for any arguments and for any policy $\pi$. We can see by maximizing equation 13 that, given the $C^*(\cdot, \cdot, \cdot, h - 1)$ values, the optimal policy at horizon $h - 1$ must be $\pi^*(a | s, g, h - 1) = \mathbb{1}(a = \arg \max_{a'} C^*(s, a', g, h - 1))$.
+
+We thus obtain:
+
+$$
+C^*(s, a, g, h) = \left\{ \begin{array}{ll} \mathbb{E}_{s' \sim p(\cdot | s, a)} \left[ \max_{a' \in \mathcal{A}} C^*(s', a', g, h - 1) \right] & \text{if } G(s, g) = 0 \text{ and } h \geq 1, \\ G(s, g) & \text{otherwise.} \end{array} \right.
+$$
+
+Note that, as in $Q$ -learning, the Bellman equation for the optimal policy has been obtained by replacing the expectation with respect to $a^\prime$ with a max.
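In code, the resulting greedy policy and its horizon-aware rollout look as follows (a sketch: `C_star` stands for the learned $C^*$ network or table, and `env_step` for the environment transition; neither is specified by the proof):

```python
def greedy_action(C_star, s, g, h, actions):
    """pi*(a | s, g, h): pick the action maximizing C*(s, a, g, h)."""
    return max(actions, key=lambda a: C_star(s, a, g, h))

def greedy_rollout(env_step, C_star, s0, g, h, actions, G):
    """Act greedily while decrementing the remaining horizon each step,
    stopping when the goal is reached or the step budget runs out."""
    s, path = s0, [s0]
    while h > 0 and not G(s, g):
        s = env_step(s, greedy_action(C_star, s, g, h, actions))
        path.append(s)
        h -= 1
    return path
```

Decrementing $h$ at each step mirrors the deterministic horizon dynamics $h_{t+1} = h_t - 1$ of the augmented MDP above.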
+
+Proposition 2: $C^*$ is non-decreasing in $h$ .
+
+As mentioned in the main manuscript, one might naively think that this monotonicity property should hold for $C^\pi$ for any $\pi$ . However this is not quite the case, as $\pi$ depends on $h$ and a perverse policy may actively avoid the goal for large values of $h$ . Restricting to optimal $C^*$ -functions, such pathologies are avoided. As an example, consider an environment with three states, $\{0,1,2\}$ , and two actions $\{-1, + 1\}$ . The transition rule is $s_{t + 1} = \max (0,\min (2,s_t + a_t))$ , that is, the agent moves deterministically in the direction of the action unless doing so would move it out of the domain. Goals are defined as states. Let $\pi$ be such that $\pi (a|s = 1,g = 2,h = 1) = \mathbb{1}(a = 1)$ and $\pi (a|s = 1,g = 2,h = 2) = \mathbb{1}(a = -1)$ . While clearly a terrible policy, $\pi$ is such that $C^\pi (s = 0,a = 1,g = 2,h = 2) = 1$ and $C^\pi (s = 0,a = 1,g = 2,h = 3) = 0$ , so that $C^\pi$ can decrease with $h$ .
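The counterexample above can be checked by evaluating the recursion of equation 13 directly (a minimal dynamic-programming sketch; the policy is fixed arbitrarily to $+1$ wherever the text leaves it unspecified):

```python
def step(s, a):
    """Deterministic dynamics: s_{t+1} = max(0, min(2, s_t + a_t))."""
    return max(0, min(2, s + a))

def G(s, g):
    return 1 if s == g else 0

def pi(s, g, h):
    """The perverse policy from the text: it moves toward the goal when
    h = 1 but away from it when h = 2 (elsewhere fixed to +1)."""
    return -1 if (s, g, h) == (1, 2, 2) else 1

def C_pi(s, a, g, h):
    """Evaluate C^pi by unrolling the recursion of equation 13
    (deterministic dynamics, deterministic policy)."""
    if G(s, g) == 1 or h == 0:
        return G(s, g)
    s_next = step(s, a)
    return C_pi(s_next, pi(s_next, g, h - 1), g, h - 1)
```

Evaluating `C_pi(0, 1, 2, 2)` gives 1 while `C_pi(0, 1, 2, 3)` gives 0, confirming that $C^\pi$ can decrease with $h$ for this policy.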
+
+Proof: Fix a distribution $p$ on the action space. For any policy $\pi(a|s, g, h)$ , we define a new policy $\tilde{\pi}$ as:
+
+$$
+\tilde{\pi}(a | s, g, h + 1) = \left\{ \begin{array}{ll} \pi(a | s, g, h) & \text{if } h > 0, \\ p(a) & \text{otherwise.} \end{array} \right. \tag{14}
+$$
+
+The new policy $\tilde{\pi}$ acts the same as $\pi$ for the first $h$ steps and on the last step it samples an action from the fixed distribution $p$ . The final step can only increase the cumulative probability of reaching the goal, therefore:
+
+$$
+C ^ {\tilde {\pi}} (s, a, g, h + 1) \geq C ^ {\pi} (s, a, g, h). \tag {15}
+$$
+
+Since equation 15 holds for all policies $\pi$ , taking the maximum over policies gives:
+
+$$
+\begin{array}{l} C ^ {*} (s, a, g, h + 1) \geq \max _ {\pi} C ^ {\tilde {\pi}} (s, a, g, h + 1) \\ \geq \max _ {\pi} C ^ {\pi} (s, a, g, h) \\ = C ^ {*} (s, a, g, h). \tag {16} \\ \end{array}
+$$
+
+
+
+# B UNBIASEDNESS OF THE CROSS ENTROPY LOSS
+
+For given $s_i$ , $a_i$ , $g_i$ and $h_i$ from the replay buffer, we denote the right-hand side of equation 2 as $y_i^{true}$ :
+
+$$
+y_i^{\text{true}} = \left\{ \begin{array}{ll} \mathbb{E}_{s' \sim p(\cdot | s_i, a_i)} \left[ \max_{a' \in \mathcal{A}} C_\theta^{*}(s', a', g_i, h_i - 1) \right] & \text{if } G(s_i, g_i) = 0 \text{ and } h_i \geq 1, \\ G(s_i, g_i) & \text{otherwise.} \end{array} \right. \tag{17}
+$$
+
+Note that $y_{i}^{true}$ cannot be evaluated exactly, as the expectation over $s'$ would require knowledge of the environment dynamics to compute in closed form. However, using $s_{i}'$ , the next state after $s_{i}$ in the replay buffer, we can obtain a single-sample Monte Carlo estimate of $y_{i}^{true}$ , $y_{i}$ as:
+
+$$
+y_i = \left\{ \begin{array}{ll} \max_{a' \in \mathcal{A}} C_\theta^{*}(s_i', a', g_i, h_i - 1) & \text{if } G(s_i, g_i) = 0 \text{ and } h_i \geq 1, \\ G(s_i, g_i) & \text{otherwise.} \end{array} \right. \tag{18}
+$$
+
+Clearly $y_{i}$ is an unbiased estimate of $y_{i}^{true}$ . However, the optimization objective is given by:
+
+$$
+\sum_ {i} \mathcal {L} \left(C _ {\theta} ^ {*} \left(s _ {i}, a _ {i}, g _ {i}, h _ {i}\right), y _ {i} ^ {\text {true}}\right) \tag {19}
+$$
+
+where the sum is over tuples in the replay buffer and $\mathcal{L}$ is the loss being used. Simply replacing $y_{i}^{true}$ with its estimate $y_{i}$ , while commonly done in $Q$ -learning, will in general result in a biased estimate of the loss:
+
+$$
+\begin{array}{l} \sum_ {i} \mathcal {L} \left(C _ {\theta} ^ {*} \left(s _ {i}, a _ {i}, g _ {i}, h _ {i}\right), y _ {i} ^ {\text {true}}\right) = \sum_ {i} \mathcal {L} \left(C _ {\theta} ^ {*} \left(s _ {i}, a _ {i}, g _ {i}, h _ {i}\right), \mathbb {E} _ {s _ {i} ^ {\prime} | s _ {i}, a _ {i}} [ y _ {i} ]\right) \\ \neq \sum_ {i} \mathbb {E} _ {s _ {i} ^ {\prime} | s _ {i}, a _ {i}} [ \mathcal {L} \left(C _ {\theta} ^ {*} \left(s _ {i}, a _ {i}, g _ {i}, h _ {i}\right), y _ {i}\right) ] \tag {20} \\ \end{array}
+$$
+
+since in general the expectation of a function need not match the function of the expectation. In other words, pulling the expectation with respect to $s_i'$ outside of the loss will in general incur bias. However, if $\mathcal{L}$ is linear in its second argument, as is the case with binary cross entropy but not with the squared loss, then one indeed recovers:
+
+$$
+\sum_ {i} \mathcal {L} \left(C _ {\theta} ^ {*} \left(s _ {i}, a _ {i}, g _ {i}, h _ {i}\right), y _ {i} ^ {\text {true}}\right) = \sum_ {i} \mathbb {E} _ {s _ {i} ^ {\prime} | s _ {i}, a _ {i}} \left[ \mathcal {L} \left(C _ {\theta} ^ {*} \left(s _ {i}, a _ {i}, g _ {i}, h _ {i}\right), y _ {i}\right) \right] \tag {21}
+$$
+
+so that replacing $y_{i}^{true}$ with $y_{i}$ does indeed recover an unbiased estimate of the loss.
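This distinction is easy to verify numerically. The sketch below (plain Python with made-up numbers, added here for illustration) checks that the binary cross entropy is affine in its target argument, so substituting a single-sample target leaves the expected loss unchanged, while the squared loss picks up a bias equal to the variance of the target.

```python
import math
import random

def bce(q, y):
    # binary cross entropy; affine in the target y
    return -(y * math.log(q) + (1 - y) * math.log(1 - q))

def sq(q, y):
    # squared loss; not affine in y
    return (q - y) ** 2

random.seed(0)
q = 0.7                                          # a fixed prediction C_theta
ys = [random.random() for _ in range(100_000)]   # simulated samples of y_i
y_mean = sum(ys) / len(ys)

# affine loss: loss at the mean target equals the mean of per-sample losses
bce_gap = abs(bce(q, y_mean) - sum(bce(q, y) for y in ys) / len(ys))

# squared loss: mean of per-sample losses exceeds loss at the mean by Var[y]
sq_gap = sum(sq(q, y) for y in ys) / len(ys) - sq(q, y_mean)
```

Here `bce_gap` vanishes up to floating-point error, while `sq_gap` is close to the target variance (1/12 for uniform samples), which is exactly the Jensen gap described above.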
+
+# C NON-CUMULATIVE CASE
+
+We originally considered a non-cumulative version of the $C$ -function, giving the probability of reaching the goal in exactly $h$ steps. We call this function the accessibility function, $A^{\pi}$ (despite our notation, this is unrelated to the commonly used advantage function), defined by:
+
+$$
+A ^ {\pi} (s, a, g, h) = \mathbb {P} _ {\pi (\cdot | \cdot , g, h)} \left(G \left(s _ {h}, g\right) = 1 \mid s _ {0} = s, a _ {0} = a\right). \tag {22}
+$$
+
+Here the trivial case is only when $h = 0$ , where we have:
+
+$$
+A ^ {\pi} (s, a, g, 0) = G (s, g) \tag {23}
+$$
+
+If $h \geq 1$ , we can obtain a similar recursion to that of the $C$ -functions. Here we no longer assume that states which reach the goal are terminal. The probability of reaching the goal in exactly $h$ steps is equal to the probability of reaching it in $h - 1$ steps after taking the first step. After the first action $a$ , subsequent actions are sampled from the policy $\pi$ :
+
+$$
+\begin{array}{l} A ^ {\pi} (s, a, g, h) = \mathbb {P} _ {\pi (\cdot \mid \cdot , g, h)} (G (s _ {h}, g) = 1 \mid s _ {0} = s, a _ {0} = a) \\ = \mathbb {E} _ {s ^ {\prime} \sim p (\cdot \mid s, a)} \left[ \mathbb {E} _ {a ^ {\prime} \sim \pi (\cdot \mid s ^ {\prime}, g, h - 1)} \left[ \mathbb {P} _ {\pi (\cdot \mid \cdot , g, h)} (G (s _ {h}, g) = 1 \mid s _ {1} = s ^ {\prime}, a _ {1} = a ^ {\prime}) \right] \right] \\ = \mathbb {E} _ {s ^ {\prime} \sim p (\cdot \mid s, a)} \left[ \mathbb {E} _ {a ^ {\prime} \sim \pi (\cdot \mid s ^ {\prime}, g, h - 1)} \left[ \mathbb {P} _ {\pi (\cdot \mid \cdot , g, h - 1)} (G (s _ {h - 1}, g) = 1 \mid s _ {0} = s ^ {\prime}, a _ {0} = a ^ {\prime}) \right] \right] \\ = \mathbb {E} _ {s ^ {\prime} \sim p (\cdot \mid s, a)} \left[ \mathbb {E} _ {a ^ {\prime} \sim \pi (\cdot \mid s ^ {\prime}, g, h - 1)} \left[ A ^ {\pi} \left(s ^ {\prime}, a ^ {\prime}, g, h - 1\right) \right] \right]. \tag {24} \\ \end{array}
+$$
+
+By the same argument we employed in proposition 1, the recursion for the optimal $A$ -function, $A^*$ , is obtained by replacing the expectation with respect to $a'$ with $\max$ . Putting this together with the base case, we have:
+
+$$
+A ^ {*} (s, a, g, h) = \left\{ \begin{array}{l l} \mathbb {E} _ {s ^ {\prime} \sim p (\cdot \mid s, a)} \left[ \max _ {a ^ {\prime} \in \mathcal {A}} A ^ {*} \left(s ^ {\prime}, a ^ {\prime}, g, h - 1\right) \right] & \text{if } h \geq 1, \\ G (s, g) & \text{otherwise.} \end{array} \right. \tag {25}
+$$
+
+Note that this recursion is extremely similar to that of $C^*$ , and differs only in the base cases for the recursion. The difference is nonetheless empirically relevant, for the two reasons that make $C^*$ easier to learn: $C^*$ is monotonic in $h$ , and the cumulative probability is generally better behaved.
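As a concrete (if artificial) illustration of this difference, the following sketch runs both recursions exactly on a tiny deterministic ring MDP of our own construction: the cumulative value is monotone non-decreasing in $h$ , while the non-cumulative value oscillates with the parity of $h$ .

```python
# Toy 4-state ring MDP (illustrative, not from the paper):
# actions move one step clockwise (+1) or counter-clockwise (-1).
N, GOAL = 4, 2
ACTIONS = (-1, +1)

def step(s, a):
    return (s + a) % N

def G(s):
    return 1.0 if s == GOAL else 0.0

def C_star(s, a, h):
    # cumulative recursion: goal-satisfying states are absorbing
    if G(s) == 1.0 or h == 0:
        return G(s)
    return max(C_star(step(s, a), a2, h - 1) for a2 in ACTIONS)

def A_star(s, a, h):
    # non-cumulative recursion: the goal must hold at exactly step h
    if h == 0:
        return G(s)
    return max(A_star(step(s, a), a2, h - 1) for a2 in ACTIONS)

c_vals = [max(C_star(0, a, h) for a in ACTIONS) for h in range(6)]
a_vals = [max(A_star(0, a, h) for a in ACTIONS) for h in range(6)]
```

Here `c_vals` comes out as [0, 0, 1, 1, 1, 1], non-decreasing in $h$ , whereas `a_vals` is [0, 0, 1, 0, 1, 0]: the goal sits two steps away on an even-sized ring, so it can only be occupied at even horizons.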
+
+# D DISCOUNTED CASE
+
+Another approach which would allow a test-time trade-off between speed and reliability is to learn the following $D$ -function ( $D$ standing for "discounted accessibility").
+
+$$
+D ^ {\pi} (s, a, g, \gamma) = \mathbb {E} _ {\pi} \left[ \gamma^ {T - 1} | s _ {0} = s, a _ {0} = a \right], \tag {26}
+$$
+
+where the random variable $T$ is the smallest positive number such that $G(s_{T}, g) = 1$ . If no such state occurs during an episode then $\gamma^{T-1}$ is interpreted as zero. This is the discounted future return of an environment in which satisfying the goal returns a reward of 1 and terminates the episode.
+
+We may derive a recursion relation for this formalism too:
+
+$$
+\begin{array}{l} D ^ {\pi} (s, a, g, \gamma) = \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} \left[ G (s ^ {\prime}, g) + \gamma (1 - G (s ^ {\prime}, g)) \mathbb {E} _ {a ^ {\prime} \sim \pi (\cdot | s ^ {\prime})} \left[ \mathbb {E} _ {\pi} \left[ \gamma^ {T - 2} | s ^ {\prime}, a ^ {\prime} \right] \right] \right] \\ = \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} \left[ G (s ^ {\prime}, g) + \gamma (1 - G (s ^ {\prime}, g)) \mathbb {E} _ {a ^ {\prime} \sim \pi (\cdot | s ^ {\prime})} \left[ D ^ {\pi} \left(s ^ {\prime}, a ^ {\prime}, g, \gamma\right) \right] \right]. \tag {27} \\ \end{array}
+$$
+
+By the same argument as employed for the $C$ and $A$ functions, the $D$ -function of the optimal policy is obtained by replacing the expectation over actions with a max, giving
+
+$$
+D ^ {*} (s, a, g, \gamma) = \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} \left[ G \left(s ^ {\prime}, g\right) + \gamma \left(1 - G \left(s ^ {\prime}, g\right)\right) \max _ {a ^ {\prime}} D ^ {*} \left(s ^ {\prime}, a ^ {\prime}, g, \gamma\right) \right]. \tag {28}
+$$
+
+Learning such a $D$ -function would allow a test time trade-off between speed and reliability, and might well be more efficient than training independent models for different values of $\gamma$ . We did not pursue this experimentally for two reasons. Firstly, our initial motivation was to allow some higher-level operator (either human or a controlling program) to trade off speed for reliability, and a hard horizon is usually more interpretable than a discounting factor. Secondly, we noticed that $C$ -learning performs strongly at goal-reaching in deterministic environments, and we attribute this to the hard horizon allowing the optimal policy to be executed even with significant errors in the $C$ -value. Conversely, $Q$ -learning can typically only tolerate small errors before the actions selected become sub-optimal. $D$ -learning would suffer from the same issue.
+
+# E CONTINUOUS ACTION SPACE CASE
+
+As mentioned in the main manuscript, $C$ -learning is compatible with the ideas that underpin DDPG and TD3 (Lillicrap et al., 2015; Fujimoto et al., 2018), allowing it to be used in continuous action spaces. This requires the introduction of another neural network approximating a deterministic policy, $\mu_{\phi}: S \times \mathcal{G} \times \mathbb{N} \to \mathcal{A}$ , which is intended to return $\mu(s, g, h) = \arg \max_{a} C^{*}(s, a, g, h)$ . This network is trained alongside $C_{\theta}^{*}$ as detailed in Algorithm 2.
+
+We do point out that the procedure to obtain a horizon-agnostic policy, and thus $\pi_{\mathrm{behavior}}$ , is also modified from the discrete version. Here, $M(s,g) = \max_{h\in \mathcal{H}}C^{*}(s,\mu (s,g,h),g,h)$ , and:
+
+$$
+h _ {\gamma} (s, g) = \underset {h \in \mathcal {H}} {\arg \min } \left\{C ^ {*} (s, \mu (s, g, h), g, h): C ^ {*} (s, \mu (s, g, h), g, h) \geq \gamma M (s, g) \right\}, \tag {29}
+$$
+
+where we now take $\pi_{\gamma}^{*}(a|s,g)$ as a point mass at $\mu (s,g,h_{\gamma}(s,g))$ .
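A minimal sketch of this horizon-selection rule follows; the names and the $C$ -value profile are made up for illustration, and `c_value(h)` stands in for $C^{*}(s,\mu (s,g,h),g,h)$ at a fixed state and goal.

```python
def select_horizon(c_value, horizons, gamma):
    # eq. (29): among horizons whose C-value clears the threshold
    # gamma * M(s, g), pick the one with the smallest C-value
    m = max(c_value(h) for h in horizons)                 # M(s, g)
    feasible = [h for h in horizons if c_value(h) >= gamma * m]
    return min(feasible, key=c_value)

# made-up monotone C-value profile over H = {1, ..., 10}
c_table = {h: min(1.0, 0.1 * h) for h in range(1, 11)}
h_star = select_horizon(lambda h: c_table[h], range(1, 11), gamma=0.85)
```

With this profile the threshold is first cleared at `h_star = 9`: a larger $\gamma$ demands more reliability and therefore selects a longer horizon.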
+
+# F ADDITIONAL EXPERIMENTS
+
+In this section we study the performance of $C$ -learning across different types of goals. We first evaluate $C$ -learning for the Mini maze and Dubins' car environments, and partition their goal space into easy, medium and hard as shown in Figure 5. We run experiments for 3 (Mini maze) or 5 (Dubins' car) choices of random seed and take the average for each metric and goal stratification, plus/minus the standard deviation across runs.
+
+The Mini maze results are shown in Table 2. We see that $C$ -learning beats GCSL on success rate, although only marginally, with the bulk of the improvement observed on hard goals. Both methods handily beat HER on success rate; note that the path length results in the HER row are unreliable because very few runs reach the goal.
+
+The Dubins' car results are shown in Table 3. We see that $C$ -learning is the clear winner here across the board, with GCSL achieving a somewhat close success rate, and HER ending up with paths which are almost as efficient. Note that this result is the quantitative counterpart to the qualitative trajectory visualizations from Figure 4c. As mentioned in the main manuscript, we also tried explicitly enforcing monotonicity of our $C$ -functions using the method of Sill (1998), but we obtained success rates below $50\%$ in Dubins' car when doing so.
+
+We also compare $C$ -learning to naive horizon-aware $Q$ -learning, where instead of sampling $h$ as in $C$ -learning, where consider it as part of the state space and thus sample it along the state in the replay buffer. Results are shown in the left panel of Figure 6. We can see that our sampling of $h$ achieves the same performance in roughly half the time. Additionally, we compare against sampling $h$ uniformly in the right panel of Figure 6, and observe similar results.
+
+Algorithm 2: Training C-learning (TD3 Version)
+Parameter: $N_{\mathrm{explore}}$ : Number of random exploration episodes
+Parameter: $N_{\mathrm{GD}}$ : Number of goal-directed episodes
+Parameter: $N_{\mathrm{train}}$ : Number of batches to train on per goal-directed episode
+Parameter: $N_{\mathrm{copy}}$ : Number of batches between target network updates
+Parameter: $\alpha$ : Learning rate
+1 $\theta_1,\theta_2\gets$ Initial weights for $C_{\theta_1}^*,C_{\theta_2}^*$
+2 $\theta_1',\theta_2'\gets \theta_1,\theta_2$ // Copy weights to target networks
+3 $\phi \leftarrow$ Initial weights for $\mu_{\phi}$
+4 $\mathcal{D}\gets []$ // Initialize experience replay buffer
+5 $n_{\mathrm{b}}\gets 0$ // Counter for training batches
+6 repeat $N_{\mathrm{explore}}$ times
+7 $\mathcal{E}\gets$ get_rollout($\pi_{\mathrm{random}}$) // Do a random rollout
+8 $\mathcal{D}$.append($\mathcal{E}$) // Save the rollout to the buffer
+9 repeat $N_{\mathrm{GD}}$ times
+10 $g\gets$ goal_sample($n_{\mathrm{b}}$) // Sample a goal
+11 $\mathcal{E}\gets$ get_rollout($\pi_{\mathrm{behavior}}$) // Try to reach the goal
+12 $\mathcal{D}$.append($\mathcal{E}$) // Save the rollout
+13 repeat $N_{\mathrm{train}}$ times
+14 $\mathcal{B}:= \{s_i,a_i,s_i',g_i,h_i\}_{i = 1}^{|\mathcal{B}|}\gets$ sample_batch($\mathcal{D}$) // Sample a batch
+15 $\{y_i\}_{i = 1}^{|\mathcal{B}|}\gets$ get_targets($\mathcal{B}$, $\theta_1',\theta_2'$) // Estimate RHS of equation 18
+16 $\hat{\mathcal{L}}_1\gets -\frac{1}{|\mathcal{B}|}\sum_{i = 1}^{|\mathcal{B}|}y_i\log C_{\theta_1}^* (s_i,a_i,g_i,h_i) + (1 - y_i)\log \big(1 - C_{\theta_1}^* (s_i,a_i,g_i,h_i)\big)$
+17 $\hat{\mathcal{L}}_2\gets -\frac{1}{|\mathcal{B}|}\sum_{i = 1}^{|\mathcal{B}|}y_i\log C_{\theta_2}^* (s_i,a_i,g_i,h_i) + (1 - y_i)\log \big(1 - C_{\theta_2}^* (s_i,a_i,g_i,h_i)\big)$
+18 $\theta_1\gets \theta_1 - \alpha \nabla_{\theta_1}\hat{\mathcal{L}}_1$ // Update weights
+19 $\theta_2\gets \theta_2 - \alpha \nabla_{\theta_2}\hat{\mathcal{L}}_2$ // Update weights
+20 if $n_{\mathrm{b}}$ mod policy_delay $= 0$ then
+21 $\hat{\mathcal{L}}_{\mathrm{actor}}\gets -\frac{1}{|\mathcal{B}|}\sum_{i = 1}^{|\mathcal{B}|}C_{\theta_1}^* (s_i,\mu_\phi (s_i,g_i,h_i),g_i,h_i)$
+22 $\phi \gets \phi -\alpha \nabla_{\phi}\hat{\mathcal{L}}_{\mathrm{actor}}$ // Update actor weights
+23 $\theta_1'\gets \theta_1(1 - \tau) + \theta_1'\tau$ // Update target networks
+24 $\theta_2'\gets \theta_2(1 - \tau) + \theta_2'\tau$
+25 $\phi '\gets \phi (1 - \tau) + \phi '\tau$
+26 $n_{\mathrm{b}}\gets n_{\mathrm{b}} + 1$ // Trained one batch
+
+
+Figure 5: Partitioning of environments into easy (green), medium (yellow) and hard (red) goals to reach from a given starting state (blue). The left panel shows mini maze, and the right shows Dubins' car.
+
+
+
+Table 2: Relevant metrics for the Mini maze environment, stratified by difficulty of goal (see Figure 5 (left) from appendix F). Runs in bold are either the best on the given metric in that environment, or have a mean score within the error bars of the best.
+
+| METHOD | SUCCESS RATE | | | | PATH LENGTH | | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| | EASY | MED | HARD | ALL | EASY | MED | HARD | ALL |
+| C-learning | 99.44% ± 0.79% | 97.52% ± 0.89% | 53.17% ± 25.77% | 86.89% ± 6.56% | 5.68 ± 0.02 | 14.84 ± 0.05 | 25.76 ± 0.61 | 13.81 ± 0.80 |
+| GCSL | 99.17% ± 0.68% | 99.72% ± 0.19% | 37.74% ± 1.40% | 84.06% ± 0.61% | 5.65 ± 0.04 | 14.81 ± 0.02 | 23.46 ± 0.22 | 13.10 ± 0.02 |
+| HER | 22.78% ± 15.73% | 21.63% ± 9.68% | 5.51% ± 7.79% | 17.87% ± 7.90% | 4.73 ± 0.63 | 13.94 ± 1.40 | 23.50 ± 0.00 | 11.58 ± 2.12 |
+
+Table 3: Relevant metrics for the Dubins' car environment, stratified by difficulty of goal (see Figure 5 (right) from appendix F). Runs in bold are either the best on the given metric in that environment, or have a mean score within the error bars of the best.
+
+| METHOD | SUCCESS RATE | | | | PATH LENGTH | | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| | EASY | MED | HARD | ALL | EASY | MED | HARD | ALL |
+| C-learning | 97.44% ± 2.23% | 86.15% ± 4.06% | 81.14% ± 4.02% | 86.15% ± 2.44% | 6.66 ± 0.91 | 16.16 ± 1.47 | 21.01 ± 1.50 | 16.45 ± 0.99 |
+| GCSL | 93.85% ± 4.76% | 76.62% ± 7.11% | 75.68% ± 11.05% | 79.69% ± 6.35% | 17.58 ± 5.81 | 31.36 ± 3.99 | 41.85 ± 9.50 | 32.64 ± 6.16 |
+| HER | 51.79% ± 12.60% | 55.38% ± 3.77% | 47.95% ± 10.20% | 51.25% ± 6.48% | 9.30 ± 2.37 | 18.06 ± 1.90 | 26.72 ± 2.84 | 20.13 ± 1.66 |
+
+We also compare $C$ -learning against TDMs Pong et al. (2018) for motion planning tasks. Results are in Figure 7. As mentioned in the main manuscript, TDMs assume a dense reward function, which $C$ -learning does not have access to. In spite of this, $C$ -learning significantly outperforms TDMs.
+
+Finally, we also compare against the non-cumulative version of $C$ -learning ( $A$ -learning) and the discount-based recursion ( $D$ -learning) in Dubins' car. For $D$ -learning, we selected $\gamma$ uniformly at random in $[0, 1]$ . Results are shown in Table 4. We can see that $C$ -learning indeed outperforms both alternatives. Curiously, while they underperform on success rate, $A$ -learning and $D$ -learning seem to obtain shorter paths among successful trials.
+
+# G EXPERIMENTAL DETAILS
+
+C-learning for stochastic environments: As mentioned in the main manuscript, we modify the replay buffer in order to avoid the bias incurred by HER sampling (Matthias et al., 2018) in non-deterministic environments. We sample goals independently of the chosen episode: we sample $g_{i}$ to be a potentially reachable goal from $s_i$ in $h_i$ steps. For example, if given access to a distance $d$ between states and goals such that the distance can be decreased by at most 1 unit after a single step, we sample $g_{i}$ from the set of $g$ 's such that $d(s_i, g) \leq h_i$ . Moreover, when constructing $y_{i}$ (line 16 of algorithm 1), if we know for a fact that $g_{i}$ cannot be reached from $s_i'$ in $h_i - 1$ steps, we output 0 instead of $\max_{a'} C_{\theta'}^* (s_i', a', g_i, h_i - 1)$ . For example, $d(s_i', g_i) > h_i - 1$ allows us to set $y_{i} = 0$ . We found this practice, combined with the sampling of $g_{i}$ described above, to significantly improve performance. While this requires some knowledge of the environment, for example a metric over states, such knowledge is available in many environments. For frozen lake, we use the $L_1$ metric to determine if a goal is not reachable from a state, while ignoring the holes in the environment so as not to use too much information about the environment.
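A sketch of this relabeling scheme for a grid-world with integer $(x, y)$ states and the $L_1$ metric follows; the function names and the environment are illustrative, not the paper's code.

```python
import random

def l1(s, g):
    # L1 metric between an (x, y) state and a goal
    return abs(s[0] - g[0]) + abs(s[1] - g[1])

def sample_reachable_goal(s, h, candidate_goals, rng):
    # sample g only among goals that could possibly be reached within h steps
    feasible = [g for g in candidate_goals if l1(s, g) <= h]
    return rng.choice(feasible) if feasible else rng.choice(candidate_goals)

def target(s_next, g, h, max_next_c):
    # if g is provably unreachable from s' in h - 1 steps, clamp y_i to 0
    if l1(s_next, g) > h - 1:
        return 0.0
    return max_next_c  # otherwise the usual max_a' C(s', a', g, h - 1)

rng = random.Random(0)
goals = [(x, y) for x in range(5) for y in range(5)]
g = sample_reachable_goal((0, 0), 3, goals, rng)
y_clamped = target((0, 0), (4, 4), h=3, max_next_c=0.7)
```

In the last line the goal is 8 steps away under $L_1$ but only 2 steps remain, so the target is clamped to zero regardless of the bootstrapped value.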
+
+
+Figure 6: Success rate throughout training of $C$ -learning (blue) and naive horizon-aware $Q$ -learning (red) on FetchPickAndPlace-v1 (left); and $C$ -learning (blue) and $C$ -learning with uniform $h$ sampling (red) on FetchPickAndPlace-v1 (right).
+
+
+
+
+Figure 7: Success rate throughout training of $C$ -learning (blue) and TDMs (red) in FetchPickAndPlace-v1 (left) and HandManipulatePenFull-v0 (right).
+
+
+
+Table 4: Relevant metrics for the Dubins' car environment, stratified by difficulty of goal (see Figure 5 (right) from appendix F). Runs in bold are either the best on the given metric in that environment, or have a mean score within the error bars of the best.
+
+| METHOD | SUCCESS RATE | | | | PATH LENGTH | | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| | EASY | MED | HARD | ALL | EASY | MED | HARD | ALL |
+| C-learning | 97.44% ± 2.23% | 86.15% ± 4.06% | 81.14% ± 4.02% | 86.15% ± 2.44% | 6.66 ± 0.91 | 16.16 ± 1.47 | 21.01 ± 1.50 | 16.45 ± 0.99 |
+| A-learning | 93.16% ± 3.20% | 89.74% ± 4.41% | 62.88% ± 16.50% | 78.12% ± 6.75% | 5.84 ± 0.59 | 13.29 ± 0.28 | 16.94 ± 0.18 | 12.83 ± 0.39 |
+| D-learning | 86.32% ± 12.27% | 34.36% ± 15.71% | 2.27% ± 3.21% | 30.21% ± 8.47% | 5.62 ± 0.25 | 11.44 ± 0.26 | 13.97 ± 0.00 | 7.91 ± 0.82 |
+
+# G.1 FROZEN LAKE
+
+For all methods, we train for 300 episodes, each of maximal length 50 steps; we use a learning rate of $10^{-3}$ , a batch size of 256, and train for 64 gradient steps per episode. We use a 0.1-greedy behavior policy. We use a neural network with two hidden layers of respective sizes 60 and 40 with ReLU activations. We use 15 fully random exploration episodes before we start training. We take $p(s_0)$ as uniform among non-hole states during training, and set it as a point mass at $(1,0)$ for testing. We set $p(g)$ as uniform among states during training, and we evaluate at every goal during testing. For $C$ -learning, we use $\kappa = 3$ , and copy the target network every 10 steps. We take the metric $d$ to be the $L_{1}$ norm, completely ignoring the holes so as to not use too much environment information. For the horizon-independent policy, we used $\mathcal{H} = \{1,2,\dots,50\}$ and $\alpha = 0.9$ . We do point out that, while $C$ -learning did manage to recover the safe path for large $h$ , it did not always do so: we suspect that the policy of going directly up is more likely to be explored. However, we never observed GCSL taking the safe path.
+
+# G.2 MINI MAZE
+
+We again train for 3000 episodes, now of maximal length 200. We train 32 gradient steps per episode, and additionally decay the exploration noise of the $C$ -learning behavior policy throughout training according to $\epsilon = 0.5 / (1 + n_{GD} / 1000)$ . The network we used has two hidden layers of respective sizes 200 and 100 with ReLU activations. While the environment is deterministic, we use the same replay buffer as for frozen lake, and take the metric $d$ to be the $L_{1}$ norm, completely ignoring walls. We found this helped improve performance and allowed $C$ -learning to still learn to reach far away goals. We also define $\mathcal{H}(s,g) \coloneqq \{\| s - g\|_1, \| s - g\|_1 + 1, \dots, h_{\max}\}$ , with $h_{\max} \coloneqq 50$ , as the set of horizons over which we search when rolling out the policies. We take $p(s_0)$ as a point mass both for training and testing. We use $p(g)$ during training as specified in Algorithm 1, and evaluate all the methods on a fixed set of goals. Additionally, we lower the learning rate by a factor of 10 after 2000 episodes. All the other hyperparameters are as in frozen lake. Figure 5 shows the split between easy, medium, and hard goals.
+
+# G.3 DUBINS' CAR
+
+We train for 4500 episodes, each one with a maximal length of 100 steps and using 80 gradient steps per episode. We use a neural network with two hidden layers of respective sizes 400 and 300 with ReLU activations. We take the metric $d$ to be the $L_{\infty}$ norm, completely ignoring walls, which we only use to decide whether or not we have reached the goal. We take $p(s_0)$ as a point mass both for training and testing. We use $p(g)$ during training as specified in Algorithm 1, and evaluate all the methods on a fixed set of goals. All other hyperparameters are as in frozen lake. The partition of goals into easy, medium and hard is specified in Figure 5, where the agent always starts at the upper left corner.
+
+# G.4 FETCHPICKANDPLACE-V1
+
+We train with a learning rate of 0.0001 and batch size of 256. We take 64 gradient steps per episode, and only update $\phi$ with half the frequency of $\theta$ . We use 40000 episodes for FetchPickAndPlace-v1. All the other hyperparameters are as in Dubins' car.
+
+# G.5 HANDMANIPULATEPENFULL-V0
+
+We train with a learning rate of 0.0001 and batch size of 256. We take 80 gradient steps per episode, and only update $\phi$ with half the frequency of $\theta$ . We use 60000 episodes for HandManipulatePenFull-v0. All the other hyperparameters are as in Dubins' car.
\ No newline at end of file
diff --git a/clearninghorizonawarecumulativeaccessibilityestimation/images.zip b/clearninghorizonawarecumulativeaccessibilityestimation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..99856bc582588a829cd26e34cdfcef2b009da325
--- /dev/null
+++ b/clearninghorizonawarecumulativeaccessibilityestimation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19e45f33464d89b1595aa33edd5fae68a1548547d9f9d8fa7e6e2dae14c043bc
+size 616109
diff --git a/clearninghorizonawarecumulativeaccessibilityestimation/layout.json b/clearninghorizonawarecumulativeaccessibilityestimation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5a4983f0eee5a93da12f2688518f990cfdfb3c00
--- /dev/null
+++ b/clearninghorizonawarecumulativeaccessibilityestimation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f9f26ca1c88a57d632e07620842524e36c918b55f9cd85d3159bbd88408865e
+size 982830
diff --git a/clearninglearningtoachievegoalsviarecursiveclassification/e6f1245b-463c-47ed-be53-77028fe7a3a3_content_list.json b/clearninglearningtoachievegoalsviarecursiveclassification/e6f1245b-463c-47ed-be53-77028fe7a3a3_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..030f698c9bcec16743d315ab5cf3c745cc5b2094
--- /dev/null
+++ b/clearninglearningtoachievegoalsviarecursiveclassification/e6f1245b-463c-47ed-be53-77028fe7a3a3_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f6c2aaf7dcaa8480be4c04c1f640450beba189565350d9b3cad5898aaaa2f4f
+size 195985
diff --git a/clearninglearningtoachievegoalsviarecursiveclassification/e6f1245b-463c-47ed-be53-77028fe7a3a3_model.json b/clearninglearningtoachievegoalsviarecursiveclassification/e6f1245b-463c-47ed-be53-77028fe7a3a3_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..262890933a363e22806d6df60572229b5f2b60e0
--- /dev/null
+++ b/clearninglearningtoachievegoalsviarecursiveclassification/e6f1245b-463c-47ed-be53-77028fe7a3a3_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ecb0e74b7240892be464835dab2010ec84b123895731988c6367a23e7c8b32eb
+size 233250
diff --git a/clearninglearningtoachievegoalsviarecursiveclassification/e6f1245b-463c-47ed-be53-77028fe7a3a3_origin.pdf b/clearninglearningtoachievegoalsviarecursiveclassification/e6f1245b-463c-47ed-be53-77028fe7a3a3_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2524a943840e38434357e2bac2d42b23f48ba3fe
--- /dev/null
+++ b/clearninglearningtoachievegoalsviarecursiveclassification/e6f1245b-463c-47ed-be53-77028fe7a3a3_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:765fc79f63adbd1c2722dee43b4bbec7b18128a1fbf1530bc86632cb6c38ad34
+size 11014932
diff --git a/clearninglearningtoachievegoalsviarecursiveclassification/full.md b/clearninglearningtoachievegoalsviarecursiveclassification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7a6b86402ff24ab5f78f8249b9ba56e669996d2f
--- /dev/null
+++ b/clearninglearningtoachievegoalsviarecursiveclassification/full.md
@@ -0,0 +1,896 @@
+# C-LEARNING: LEARNING TO ACHIEVE GOALS VIA RECURSIVE CLASSIFICATION
+
+Benjamin Eysenbach
+
+CMU, Google Brain
+
+beysenba@cs.cmu.edu
+
+Ruslan Salakhutdinov
+
+CMU
+
+Sergey Levine
+
+UC Berkeley, Google Brain
+
+# ABSTRACT
+
+We study the problem of predicting and controlling the future state distribution of an autonomous agent. This problem, which can be viewed as a reframing of goal-conditioned reinforcement learning (RL), is centered around learning a conditional probability density function over future states. Instead of directly estimating this density function, we indirectly estimate this density function by training a classifier to predict whether an observation comes from the future. Via Bayes' rule, predictions from our classifier can be transformed into predictions over future states. Importantly, an off-policy variant of our algorithm allows us to predict the future state distribution of a new policy, without collecting new experience. This variant allows us to optimize functionals of a policy's future state distribution, such as the density of reaching a particular goal state. While conceptually similar to Q-learning, our work lays a principled foundation for goal-conditioned RL as density estimation, providing justification for goal-conditioned methods used in prior work. This foundation makes hypotheses about Q-learning, including the optimal goal-sampling ratio, which we confirm experimentally. Moreover, our proposed method is competitive with prior goal-conditioned RL methods.1
+
+# 1 INTRODUCTION
+
+In this paper, we aim to reframe the goal-conditioned reinforcement learning (RL) problem as one of predicting and controlling the future state of the world. This reframing is useful not only because it suggests a new algorithm for goal-conditioned RL, but also because it explains a commonly used heuristic in prior methods, and suggests how to automatically choose an important hyperparameter. The problem of predicting the future amounts to learning a probability density function over future states, agnostic of the time that a future state is reached. The future depends on the actions taken by the policy, so our predictions should depend on the agent's policy. While we could simply witness the future, and fit a density model to the observed states, we will be primarily interested in the following prediction question: Given experience collected from one policy, can we predict what states a different policy will visit? Once we can predict the future states of a different policy, we can control the future by choosing a policy that effects a desired future.
+
+While conceptually similar to Q-learning, our perspective is different in that we make no reliance on reward functions. Instead, an agent can solve the prediction problem before being given a reward function, similar to models in model-based RL. Reward functions can require human supervision to construct and evaluate, so a fully autonomous agent can learn to solve this prediction problem before being provided any human supervision, and reuse its predictions to solve many different downstream tasks. Nonetheless, when a reward function is provided, the agent can estimate its expected reward under the predicted future state distribution. This perspective is different from prior approaches. For example, directly fitting a density model to future states only solves the prediction problem in the on-policy setting, precluding us from predicting where a different policy will go. Model-based approaches, which learn an explicit dynamics model, do allow us to predict the future state distribution of different policies, but require a reward function or distance metric to learn goal-reaching policies for controlling the future. Methods based on temporal difference (TD) learning (Sutton, 1988) have been used to predict the future state distribution (Dayan, 1993;
+
+Szepesvari et al., 2014; Barreto et al., 2017) and to learn goal-reaching policies (Kaelbling, 1993; Schaul et al., 2015). Section 3 will explain why these approaches do not learn a true Q function in continuous environments with sparse rewards, and it remains unclear what the learned Q function corresponds to. In contrast, our method will estimate a well defined classifier.
+
+Since it is unclear how to use Q-learning to estimate such a density, we instead adopt a contrastive approach, learning a classifier to distinguish "future states" from random states, akin to Gutmann & Hyvarinen (2010). After learning this binary classifier, we apply Bayes' rule to obtain a probability density function for the future state distribution, thus solving our prediction problem. While this initial approach requires on-policy data, we then develop a bootstrapping variant for estimating the future state distribution for different policies. This bootstrapping procedure is the core of our goal-conditioned RL algorithm.
+
+The main contribution of our paper is a reframing of goal-conditioned RL as estimating the probability density over future states. We derive a method for solving this problem, C-learning, which we use to construct a complete algorithm for goal-conditioned RL. Our reframing lends insight into goal-conditioned Q-learning, leading to a hypothesis for the optimal ratio for sampling goals, which we demonstrate empirically. Experiments demonstrate that C-learning more accurately estimates the density over future states, while remaining competitive with recent goal-conditioned RL methods across a suite of simulated robotic tasks.
+
+# 2 RELATED WORK
+
+Common goal-conditioned RL algorithms are based on behavior cloning (Ghosh et al., 2019; Ding et al., 2019; Gupta et al., 2019; Eysenbach et al., 2020; Lynch et al., 2020; Oh et al., 2018; Sun et al., 2019), model-based approaches (Nair et al., 2020; Ebert et al., 2018), Q-learning (Kaelbling, 1993; Schaul et al., 2015; Pong et al., 2018), and semi-parametric planning (Savinov et al., 2018; Eysenbach et al., 2019; Nasiriany et al., 2019; Chaplot et al., 2020). Most prior work on goal-conditioned RL relies on manually-specified reward functions or distance metric, limiting the applicability to high-dimensional tasks. Our method will be most similar to the Q-learning methods, which are applicable to off-policy data. These Q-learning methods often employ hindsight relabeling (Kaelbling, 1993; Andrychowicz et al., 2017), whereby experience is modified by changing the commanded goal. New goals are often taken to be a future state or a random state, with the precise ratio being a sensitive hyperparameter. We emphasize that our discussion of goal sampling concerns relabeling previously-collected experience, not on the orthogonal problem of sampling goals for exploration (Pong et al., 2018; Fang et al., 2019; Pitis et al., 2020).
+
+Our work is closely related to prior methods that use TD-learning to predict the future state distribution, such as successor features (Dayan, 1993; Barreto et al., 2017; 2019; Szepesvari et al., 2014) and generalized value functions (Sutton & Tanner, 2005; Schaul et al., 2015; Schroecker & Isbell, 2020). Our approach bears a resemblance to these prior TD-learning methods, offering insight into why they work and how hyperparameters such as the goal-sampling ratio should be selected. Our approach differs in that it does not require a reward function or manually designed relabeling strategies, with the corresponding components being derived from first principles. While prior work on off-policy evaluation (Liu et al., 2018; Nachum et al., 2019) also aims to predict the future state distribution, our work differs in that we describe how to control the future state distribution, leading to a goal-conditioned RL algorithm.
+
+Our approach is similar to prior work on noise contrastive estimation (Gutmann & Hyvarinen, 2010), mutual-information based representation learning (Oord et al., 2018; Nachum et al., 2018), and variational inference methods (Bickel et al., 2007; Uehara et al., 2016; Dumoulin et al., 2016; Huszár, 2017; Sønderby et al., 2016). Like prior work on the probabilistic perspective on RL (Kappen, 2005; Todorov, 2008; Theodorou et al., 2010; Ziebart, 2010; Rawlik et al., 2013; Ortega & Braun, 2013; Levine, 2018), we treat control as a density estimation problem, but our main contribution is orthogonal: we propose a method for estimating the future state distribution, which can be used as a subroutine in both standard RL and these probabilistic RL methods.
+
+# 3 PRELIMINARIES
+
+We start by introducing notation and prior approaches to goal-conditioned RL. We define a controlled Markov process by an initial state distribution $p_1(\mathbf{s}_1)$ and dynamics function $p(\mathbf{s}_{t + 1} \mid \mathbf{s}_t, \mathbf{a}_t)$ .
+
+We control this process by a Markovian policy $\pi_{\theta}(\mathbf{a}_{\mathbf{t}}\mid \mathbf{s}_{\mathbf{t}})$ with parameters $\theta$ . We use $\pi_{\theta}(\mathbf{a}_{\mathbf{t}}\mid \mathbf{s}_{\mathbf{t}},\mathbf{g})$ to denote a goal-oriented policy, which is additionally conditioned on a goal $\mathbf{g}\in S$ . We use $\mathbf{s}_{\mathbf{t}+}$ to denote the random variable representing a future observation, defined by the following distribution:
+
+Definition 1. The future $\gamma$ -discounted state density function is
+
+$$
+p_{+}^{\pi}\left(\mathbf{s}_{\mathbf{t}+} \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}\right) \triangleq (1 - \gamma) \sum_{\Delta = 1}^{\infty} \gamma^{\Delta - 1}\, p_{\Delta}^{\pi}\left(\mathbf{s}_{\mathbf{t}+\Delta} = \mathbf{s}_{\mathbf{t}+} \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}\right),
+$$
+
+where $\mathbf{s}_{\mathbf{t}+\Delta}$ denotes the state exactly $\Delta$ steps in the future, and the constant $(1 - \gamma)$ ensures that this density function integrates to 1.
+
+This density reflects the states that an agent would visit if we collected many infinite-length trajectories and weighted states in the near-term future more highly. Equivalently, $p(\mathbf{s}_{t+})$ can be seen as the distribution over terminal states we would obtain if we (hypothetically) terminated episodes at a random time step, sampled from a geometric distribution. We need not introduce a reward function to define the problems of predicting and controlling the future.
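+
+As a concrete illustration of this geometric-termination view, the sketch below estimates $p_+^{\pi}$ by Monte Carlo in a toy two-state chain (the chain and all names are hypothetical, not from the paper):
+
```python
import random

def step(s, rng):
    """Toy 2-state chain: from state 0, move to state 1 w.p. 0.9, else stay;
    state 1 is absorbing. (Purely illustrative dynamics.)"""
    if s == 0:
        return 1 if rng.random() < 0.9 else 0
    return 1

def sample_future_state(s, gamma, rng):
    """Sample s_{t+} ~ p_+(. | s) by terminating at a Geometric(1 - gamma)
    time: take at least one step, then keep stepping w.p. gamma."""
    s = step(s, rng)
    while rng.random() < gamma:
        s = step(s, rng)
    return s

rng = random.Random(0)
gamma, n = 0.9, 200_000
counts = [0, 0]
for _ in range(n):
    counts[sample_future_state(0, gamma, rng)] += 1
est = counts[0] / n
# Closed form for this chain: p_+(0 | s=0) = 0.1 * (1 - gamma) / (1 - 0.1 * gamma)
exact = 0.1 * (1 - gamma) / (1 - 0.1 * gamma)
```
+
+The empirical frequencies match the discounted state density without evaluating an infinite sum, mirroring the hypothetical random-termination interpretation above.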
+
+In discrete state spaces, we can convert the problem of estimating the future state distribution into an RL problem by defining a reward function $r_{\mathbf{s}_{t+}}(\mathbf{s}_t, \mathbf{a}_t) = \mathbb{1}(\mathbf{s}_t = \mathbf{s}_{t+})$ and terminating the episode when the agent arrives at the goal. The Q-function, which typically represents the expected discounted sum of future rewards, can then be interpreted as a (scaled) probability mass function:
+
+$$
+Q^{\pi}(\mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}, \mathbf{s}_{\mathbf{t}+}) = \mathbb{E}_{\pi}\left[\sum_{\Delta = 1}^{\infty} \gamma^{\Delta - 1} r_{\mathbf{s}_{\mathbf{t}+}}(\mathbf{s}_{\mathbf{t}+\Delta}, \mathbf{a}_{\mathbf{t}+\Delta})\right] = \sum_{\Delta = 1}^{\infty} \gamma^{\Delta - 1}\, \mathbb{P}_{\pi}(\mathbf{s}_{\mathbf{t}+\Delta} = \mathbf{s}_{\mathbf{t}+}) = \frac{1}{1 - \gamma} p_{+}^{\pi}(\mathbf{s}_{\mathbf{t}+} \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}).
+$$
+
+However, in continuous state spaces with some stochasticity in the policy or dynamics, the probability that any state exactly matches the goal state is zero.
+
+Remark 1. In a stochastic, continuous environment, for any policy $\pi$ the $Q$ -function for the reward function $r_{\mathbf{s}_{t+}} = \mathbb{1}(\mathbf{s}_t = \mathbf{s}_{t+})$ is always zero: $Q^{\pi}(\mathbf{s}_t, \mathbf{a}_t, \mathbf{s}_{t+}) = 0$ .
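+
+A quick simulation illustrates Remark 1 (a hypothetical Gaussian random-walk environment, not from the paper): in rollouts, no sampled state ever equals the goal exactly, so every indicator reward is zero.
+
```python
import numpy as np

# Continuous stochastic dynamics: a Gaussian random walk. The event
# "s_t exactly equals the goal" has probability zero, so the indicator
# reward 1(s_t = g) is zero on every step of every rollout, and a
# Q-function fit to these rewards is identically zero.
rng = np.random.default_rng(0)
goal = 0.3
hits = 0
for _ in range(1_000):
    s = 0.0
    for _ in range(50):
        s += rng.normal(0.0, 0.1)  # stochastic transition
        hits += int(s == goal)     # sparse indicator reward
```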
+
+This Q-function is not useful for predicting or controlling the future state distribution. Fundamentally, this problem arises because prior work leaves unclear the relationship between the reward function, the Q-function, and the future state distribution. Prior work avoids this issue by manually defining reward functions (Andrychowicz et al., 2017) or distance metrics (Schaul et al., 2015; Pong et al., 2018; Zhao et al., 2019; Schroecker & Isbell, 2020). An alternative is to use hindsight relabeling, changing the commanded goal to be the goal actually reached. This form of hindsight relabeling does not require a reward function, and indeed learns Q-functions that are not zero (Lin et al., 2019). However, taken literally, Q-functions learned in this way must be incorrect: they do not reflect the expected discounted reward. An alternative hypothesis is that these Q-functions reflect probability density functions over future states. However, this also cannot be true:
+
+Remark 2. For any MDP with the sparse reward function $\mathbb{1}(\mathbf{s}_{\mathbf{t}} = \mathbf{s}_{\mathbf{t} + })$ where the episode terminates upon reaching the goal, $Q$ -learning with hindsight relabeling acquires a $Q$ -function in the range $Q^{\pi}(\mathbf{s}_{\mathbf{t}},\mathbf{a}_{\mathbf{t}},\mathbf{s}_{\mathbf{t} + })\in [0,1]$ , but the probability density function $p_{+}^{\pi}(\mathbf{s}_{\mathbf{t} + }|\mathbf{s}_{\mathbf{t}},\mathbf{a}_{\mathbf{t}})$ has a range $[0,\infty)$ .
+
+For example, if the state space is $\mathcal{S} = [0,\frac{1}{2}]$, then there must exist some state $s_{t+}$ such that $Q^{\pi}(s_t,a_t,s_{t+})\leq 1 < p_+^\pi (\mathbf{s}_{\mathbf{t}+} = s_{t+}\mid s_t,a_t)$: any density on an interval of length $\frac{1}{2}$ must exceed 1 somewhere, since it integrates to 1. See Appendix H for two worked examples. Thus, Q-learning with hindsight relabeling also fails to learn the future state distribution. In fact, it is unclear what quantity Q-learning with hindsight relabeling optimizes. In the rest of this paper, we will define goal reaching in continuous state spaces in a way that is consistent and admits well-defined solutions (Sec. 4), and then present a practical algorithm for finding these solutions (Sec. 5).
+
+# 4 FRAMING GOAL CONDITIONED RL AS DENSITY ESTIMATION
+
+This section presents a novel framing of the goal-conditioned RL problem, which resolves the ambiguity discussed in the previous section. Our main idea is to view goal-conditioned RL as a problem of estimating the density $p_{+}^{\pi}(\mathbf{s}_{t + }|\mathbf{s}_{t},\mathbf{a}_{t})$ over future states that a policy $\pi$ will visit, a problem that Q-learning does not solve (see Section 3). Section 5 will then explain how to use this estimated distribution as the core of a complete goal-conditioned RL algorithm.
+
+Definition 2. Given policy $\pi$ , the future state density estimation problem is to estimate the $\gamma$ -discounted state distribution of $\pi$ : $f_{\theta}^{\pi}(\mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t) \approx p_+^{\pi}(\mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t)$ .
+
+The next section will show how to estimate $f_{\theta}^{\pi}$ . Once we have found $f_{\theta}^{\pi}$ , we can determine the probability that a future state belongs to a set $S_{t+}$ by integrating over that set: $\mathbb{P}(\mathbf{s}_{t+} \in S_{t+}) = \int f_{\theta}^{\pi}(\mathbf{s}_{t+} \mid \mathbf{s}_{t}, \mathbf{a}_{t}) \mathbb{1}(\mathbf{s}_{t+} \in S_{t+}) ds_{t+}$ . Appendix A discusses a similar relationship with partially observed goals. There is a close connection between this integral and a goal-conditioned Q-function:
+
+Remark 3. For a goal $g$, define a reward function as an $\epsilon$-ball around the true goal: $r_g(\mathbf{s_t},\mathbf{a_t}) = \mathbb{1}(\mathbf{s}_{t}\in \mathcal{B}(g;\epsilon))$. Then the true $Q$-function is a scaled version of the probability density, integrated over the set $S_{t + } = \mathcal{B}(g;\epsilon)$: $Q^{\pi}(\mathbf{s_t},\mathbf{a_t},g) = \mathbb{E}_\pi\left[\sum_{\Delta = 1}^{\infty}\gamma^{\Delta - 1} r_g(\mathbf{s}_{t+\Delta},\mathbf{a}_{t+\Delta})\right] = \frac{1}{1 - \gamma}\mathbb{P}(\mathbf{s}_{t + }\in \mathcal{B}(g;\epsilon))$.
+
+# 5 C-LEARNING
+
+We now derive an algorithm (C-learning) for solving the future state density estimation problem (Def. 2). First (Sec. 5.1), we assume that the policy is fixed, and present on-policy and off-policy solutions. Based on these ideas, Section 5.2 builds a complete goal-conditioned RL algorithm for learning an optimal goal-reaching policy. Our algorithm bears a resemblance to Q-learning, and our derivation makes two hypotheses about when and where Q-learning will work best (Sec. 5.3).
+
+# 5.1 LEARNING THE CLASSIFIER
+
+Rather than estimating the future state density directly, we will estimate it indirectly by learning a classifier. Not only is classification generally an easier problem than density estimation, but it will also allow us to develop an off-policy algorithm in the next section. We call our approach $C$-learning. We start by deriving an on-policy Monte Carlo algorithm (Monte Carlo $C$-learning), and then modify it to obtain an off-policy, bootstrapping algorithm (off-policy $C$-learning). After learning this classifier, we can apply Bayes' rule to convert its binary predictions into future state density estimates. Given a distribution over state-action pairs, $p(\mathbf{s}_{t},\mathbf{a}_{t})$, we define the marginal future state distribution $p(\mathbf{s}_{t + }) = \int p_{+}^{\pi}(\mathbf{s}_{t + }|\mathbf{s}_{t},\mathbf{a}_{t})p(\mathbf{s}_{t},\mathbf{a}_{t})d\mathbf{s}_{t}d\mathbf{a}_{t}$. The classifier
+
+takes as input a state-action pair $(\mathbf{s_t},\mathbf{a_t})$ together with another state $\mathbf{s}_{\mathbf{t} + }$ , and predicts whether $\mathbf{s}_{\mathbf{t} + }$ was sampled from the future state density $p_{+}^{\pi}(\mathbf{s}_{\mathbf{t} + }|\mathbf{s}_{\mathbf{t}},\mathbf{a}_{\mathbf{t}})$ $(F = 1)$ or the marginal state density $p(\mathbf{s}_{\mathbf{t} + })$ $(F = 0)$ . The Bayes optimal classifier is
+
+# Algorithm 1 Monte Carlo C-learning
+
+Input trajectories $\{\tau_i\}$
+
+Define $p(s_t, a_t) \leftarrow \operatorname{Unif}\left(\{(s_t, a_t)\}_{(s_t, a_t) \sim \tau}\right)$, $\quad p(s_{t+}) \leftarrow \operatorname{Unif}\left(\{s_t\}_{s_t \sim \tau, t > 1}\right)$
+
+while not converged do
+
+$$
+\begin{array}{l}
+\quad \text{Sample } (s_t, a_t) \sim p(s_t, a_t), \quad s_{t+}^{(0)} \sim p(s_{t+}), \quad \Delta \sim \operatorname{GEOM}(1 - \gamma) \\
+\quad \text{Set } s_{t+}^{(1)} \leftarrow s_{t+\Delta} \\
+\quad \mathcal{F}(\theta) \leftarrow \log C_{\theta}^{\pi}(F = 1 \mid s_t, a_t, s_{t+}^{(1)}) + \log C_{\theta}^{\pi}(F = 0 \mid s_t, a_t, s_{t+}^{(0)}) \\
+\quad \theta \leftarrow \theta + \eta \nabla_{\theta} \mathcal{F}(\theta)
+\end{array}
+$$
+
+Return classifier $C_{\theta}$
+
+$$
+p \left(F = 1 \mid \mathbf {s} _ {\mathbf {t}}, \mathbf {a} _ {\mathbf {t}}, \mathbf {s} _ {\mathbf {t} +}\right) = \frac {p _ {+} ^ {\pi} \left(\mathbf {s} _ {\mathbf {t} +} \mid \mathbf {s} _ {\mathbf {t}} , \mathbf {a} _ {\mathbf {t}}\right)}{p _ {+} ^ {\pi} \left(\mathbf {s} _ {\mathbf {t} +} \mid \mathbf {s} _ {\mathbf {t}} , \mathbf {a} _ {\mathbf {t}}\right) + p \left(\mathbf {s} _ {\mathbf {t} +}\right)}. \tag {1}
+$$
+
+Thus, using $C_{\theta}^{\pi}(F = 1 \mid \mathbf{s}_{t}, \mathbf{a}_{t}, \mathbf{s}_{t + })$ to denote our learned classifier, we can obtain an estimate $f_{\theta}^{\pi}(\mathbf{s}_{t + } \mid \mathbf{s}_{t}, \mathbf{a}_{t})$ for the future state density function using our classifier's predictions as follows:
+
+$$
+f _ {\theta} ^ {\pi} \left(\mathbf {s} _ {\mathbf {t} +} \mid \mathbf {s} _ {\mathbf {t}}, \mathbf {a} _ {\mathbf {t}}\right) = \frac {C _ {\theta} ^ {\pi} (F = 1 \mid \mathbf {s} _ {\mathbf {t}} , \mathbf {a} _ {\mathbf {t}} , \mathbf {s} _ {\mathbf {t} +})}{C _ {\theta} ^ {\pi} (F = 0 \mid \mathbf {s} _ {\mathbf {t}} , \mathbf {a} _ {\mathbf {t}} , \mathbf {s} _ {\mathbf {t} +})} p \left(\mathbf {s} _ {\mathbf {t} +}\right). \tag {2}
+$$
+
+While our estimated density $f_{\theta}$ depends on the marginal density $p(\mathbf{s}_{t+})$, our goal-conditioned RL algorithm (Sec. 5.2) will not require estimating this marginal density. In particular, we will learn a policy that chooses the action $\mathbf{a}_t$ that maximizes this density, and the solution to this maximization problem does not depend on the marginal $p(\mathbf{s}_{t+})$.
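+
+For a classifier with a sigmoid output, Eq. 2 is a one-line conversion: the ratio $C(F=1)/C(F=0) = p/(1-p)$ is just the exponentiated logit. A minimal sketch (the function name is ours, not the paper's):
+
```python
import math

def future_density(logit, marginal_density):
    """Eq. 2: f(s_{t+} | s_t, a_t) = [C(F=1)/C(F=0)] * p(s_{t+}).
    For a sigmoid classifier, C(F=1)/C(F=0) = p/(1-p) = exp(logit)."""
    return math.exp(logit) * marginal_density
```
+
+Because the marginal multiplies the density by the same positive constant for every action, maximizing the density over actions is equivalent to maximizing the logit alone.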
+
+We now present an on-policy approach for learning the classifier, which we call Monte Carlo C-learning. After sampling a state-action pair $(\mathbf{s_t},\mathbf{a_t})\sim p(\mathbf{s_t},\mathbf{a_t})$, we can either sample a future state $\mathbf{s}_{t+}^{(1)}\sim p_{+}^{\pi}(\mathbf{s}_{t+}\mid \mathbf{s}_{t},\mathbf{a}_{t})$ with a label $F = 1$, or sample $\mathbf{s}_{t+}^{(0)}\sim p(\mathbf{s}_{t+})$ with a label $F = 0$. We then train the classifier to maximize the log likelihood (i.e., the negative cross-entropy loss):
+
+$$
+\mathcal{F}(\theta) \triangleq \mathbb{E}_{\substack{\mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}} \sim p(\mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}) \\ \mathbf{s}_{\mathbf{t}+}^{(1)} \sim p_{+}^{\pi}(\mathbf{s}_{\mathbf{t}+} \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}})}}\left[\log C_{\theta}^{\pi}(F = 1 \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}, \mathbf{s}_{\mathbf{t}+}^{(1)})\right] + \mathbb{E}_{\substack{\mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}} \sim p(\mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}) \\ \mathbf{s}_{\mathbf{t}+}^{(0)} \sim p(\mathbf{s}_{\mathbf{t}+})}}\left[\log C_{\theta}^{\pi}(F = 0 \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}, \mathbf{s}_{\mathbf{t}+}^{(0)})\right]. \tag{3}
+$$
+
+To sample future states, we note that the density $p_{+}^{\pi}(\mathbf{s}_{t+}\mid\mathbf{s}_{t},\mathbf{a}_{t})$ is a weighted mixture of the distributions $p(\mathbf{s}_{t+\Delta}\mid \mathbf{s}_t,\mathbf{a}_t)$ over the state exactly $\Delta$ steps in the future:
+
+$$
+p_{+}^{\pi}\left(\mathbf{s}_{\mathbf{t}+} \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}\right) = \sum_{\Delta = 1}^{\infty} p\left(\mathbf{s}_{\mathbf{t}+\Delta} \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}\right) p(\Delta) \quad \text{where} \quad p(\Delta) = (1 - \gamma)\gamma^{\Delta - 1} = \operatorname{GEOM}(\Delta; 1 - \gamma),
+$$
+
+where $\operatorname{GEOM}$ is the geometric distribution with support $\Delta \in \{1, 2, \ldots\}$. Thus, we sample a future state $\mathbf{s}_{\mathbf{t}+}$ via ancestral sampling: first sample $\Delta \sim \operatorname{GEOM}(1 - \gamma)$ and then, looking at the trajectory containing $(\mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}})$, return the state that is $\Delta$ steps ahead of $(\mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}})$. We summarize Monte Carlo C-learning in Alg. 1.
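+
+In code, this ancestral-sampling step is a one-liner over a stored trajectory. The sketch below (a hypothetical helper, not the paper's implementation) clips $\Delta$ at the end of a finite trajectory, a boundary approximation that the infinite-horizon definition does not need:
+
```python
import numpy as np

def sample_positive(trajectory, t, gamma, rng):
    """Positive example s_{t+}^{(1)} for Monte Carlo C-learning:
    Delta ~ GEOM(1 - gamma), support {1, 2, ...} (numpy's convention),
    then return the state Delta steps ahead of index t."""
    delta = int(rng.geometric(1 - gamma))
    return trajectory[min(t + delta, len(trajectory) - 1)]

rng = np.random.default_rng(0)
traj = list(range(100))  # toy trajectory whose state equals its time index
samples = [sample_positive(traj, 10, 0.9, rng) for _ in range(1_000)]
```
+
+With $\gamma = 0.9$, the sampled states concentrate around 10 steps ahead of $t$, as expected from the mean $1/(1-\gamma)$ of the geometric distribution.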
+
+While conceptually simple, this algorithm requires on-policy data, as the distribution $p_{+}^{\pi}(\mathbf{s}_{t + }|\mathbf{s}_{t},\mathbf{a}_{t})$ depends on the current policy $\pi$ and the commanded goal. Even if we fixed the policy parameters, we cannot use experience collected when commanding one goal to learn a classifier for another goal. This limitation precludes an important benefit of goal-conditioned learning: the ability to readily share experience across tasks. To lift this limitation, the next section will develop a bootstrapped version of this algorithm that works with off-policy data.
+
+We now extend the Monte Carlo algorithm introduced above to work in the off-policy setting, so that we can estimate the future state density for different policies. In the off-policy setting, we are given a dataset of transitions $(\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t + 1})$ and a new policy $\pi$, which we will use to generate actions for the next time step, $\mathbf{a}_{t + 1}\sim \pi (\mathbf{a}_{t + 1}\mid \mathbf{s}_{t + 1})$. The main challenge is sampling from $p_{+}^{\pi}(\mathbf{s}_{t + }\mid\mathbf{s}_{t},\mathbf{a}_{t})$, which depends on the new policy $\pi$. We address this challenge in two steps. First, we note a recursive relationship between the future state density at the current time step and at the next time step:
+
+$$
+\underbrace{p_{+}^{\pi}\left(\mathbf{s}_{\mathbf{t}+} = s_{t+} \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}\right)}_{\text{future state density at current time step}} = (1 - \gamma) \underbrace{p\left(\mathbf{s}_{\mathbf{t}+1} = s_{t+} \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}\right)}_{\text{environment dynamics}} + \gamma \mathbb{E}_{\substack{p(\mathbf{s}_{\mathbf{t}+1} \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}) \\ \pi(\mathbf{a}_{\mathbf{t}+1} \mid \mathbf{s}_{\mathbf{t}+1})}}\Big[\underbrace{p_{+}^{\pi}\left(\mathbf{s}_{\mathbf{t}+} = s_{t+} \mid \mathbf{s}_{\mathbf{t}+1}, \mathbf{a}_{\mathbf{t}+1}\right)}_{\text{future state density at next time step}}\Big]. \tag{4}
+$$
+
+We can now rewrite our classification objective in Eq. 3 as
+
+$$
+\begin{array}{l} \mathcal{F}(\theta ,\pi) = \mathbb{E}_{\substack{p(\mathbf{s}_{t},\mathbf{a}_{t}), p(\mathbf{s}_{t + 1}|\mathbf{s}_{t},\mathbf{a}_{t}),\\ \pi (\mathbf{a}_{t + 1}|\mathbf{s}_{t + 1}), p_{+}^{\pi}(\mathbf{s}_{t + }|\mathbf{s}_{t + 1},\mathbf{a}_{t + 1})}}[(1 - \gamma)\log C_{\theta}^{\pi}(F = 1|\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t + 1}) + \gamma \log C_{\theta}^{\pi}(F = 1|\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t + })] \\ + \mathbb {E} _ {p (\mathbf {s} _ {\mathbf {t}}, \mathbf {a} _ {\mathbf {t}}), p (\mathbf {s} _ {\mathbf {t} +})} \left[ \log C _ {\theta} ^ {\pi} (F = 0 \mid \mathbf {s} _ {\mathbf {t}}, \mathbf {a} _ {\mathbf {t}}, \mathbf {s} _ {\mathbf {t} +}) \right]. \tag {5} \\ \end{array}
+$$
+
+Unlike the Monte Carlo objective (Eq. 3), this objective samples the next action from the new policy; however, it still requires sampling from $p_{+}^{\pi}(\mathbf{s}_{\mathbf{t}+} \mid \mathbf{s}_{\mathbf{t}+1}, \mathbf{a}_{\mathbf{t}+1})$, which likewise depends on the new policy. Our second step is to observe that we can estimate expectations under $p_{+}^{\pi}(\mathbf{s}_{\mathbf{t}+} \mid \mathbf{s}_{\mathbf{t}+1}, \mathbf{a}_{\mathbf{t}+1})$ by sampling from the marginal $\mathbf{s}_{\mathbf{t}+} \sim p(\mathbf{s}_{\mathbf{t}+})$ and then weighting those samples by an importance weight, which we can estimate using our learned classifier:
+
+$$
+w \left(\mathbf {s} _ {\mathbf {t} + 1}, \mathbf {a} _ {\mathbf {t} + 1}, \mathbf {s} _ {\mathbf {t} +}\right) \triangleq \frac {p _ {+} ^ {\pi} \left(\mathbf {s} _ {\mathbf {t} +} \mid \mathbf {s} _ {\mathbf {t} + 1} , \mathbf {a} _ {\mathbf {t} + 1}\right)}{p \left(\mathbf {s} _ {\mathbf {t} +}\right)} = \frac {C _ {\theta} ^ {\pi} (F = 1 \mid \mathbf {s} _ {\mathbf {t} + 1} , \mathbf {a} _ {\mathbf {t} + 1} , \mathbf {s} _ {\mathbf {t} +})}{C _ {\theta} ^ {\pi} (F = 0 \mid \mathbf {s} _ {\mathbf {t} + 1} , \mathbf {a} _ {\mathbf {t} + 1} , \mathbf {s} _ {\mathbf {t} +})}. \tag {6}
+$$
+
+The second equality is obtained by taking Eq. 2 and dividing both sides by $p(\mathbf{s}_{\mathbf{t}+})$. In effect, these weights account for the effect of the new policy on the future state density. We can now rewrite our objective by using the identity in Eq. 6 to replace the expectation over $p_{+}^{\pi}(\mathbf{s}_{\mathbf{t}+} \mid \mathbf{s}_{\mathbf{t}+1}, \mathbf{a}_{\mathbf{t}+1})$ in Eq. 5 with an importance-weighted expectation over $p(\mathbf{s}_{\mathbf{t}+})$. The resulting objective is $\mathcal{F}(\theta ,\pi) =$
+
+$$
+\begin{array}{l} \mathbb{E}_{\substack{p(\mathbf{s_{t}},\mathbf{a_{t}}), p(\mathbf{s_{t + 1}}|\mathbf{s_{t}},\mathbf{a_{t}}),\\ p(\mathbf{s_{t + }}), \pi (\mathbf{a_{t + 1}}|\mathbf{s_{t + 1}})}}[(1 - \gamma) \log C_{\theta}^{\pi}(F = 1|\mathbf{s_{t}},\mathbf{a_{t}},\mathbf{s_{t + 1}}) + \gamma \left\lfloor w(\mathbf{s_{t + 1}},\mathbf{a_{t + 1}},\mathbf{s_{t + }})\right\rfloor_{\mathrm{sg}}\log C_{\theta}^{\pi}(F = 1 | \mathbf{s_{t}},\mathbf{a_{t}},\mathbf{s_{t + }}) \\ + \log C _ {\theta} ^ {\pi} (F = 0 | \mathbf {s} _ {\mathbf {t}}, \mathbf {a} _ {\mathbf {t}}, \mathbf {s} _ {\mathbf {t} +}) ]. \tag {7} \\ \end{array}
+$$
+
+We use $\lfloor \cdot \rfloor_{\mathrm{sg}}$ as a reminder that the gradient of an importance-weighted objective should not depend on the gradients of the importance weights. Intuitively, this loss says that next states should be labeled as positive examples and states sampled from the marginal should be labeled as negative examples, but that marginal samples, reweighted by $w$, should also be treated as positive examples.
+
+Algorithm summary. Alg. 2 summarizes off-policy C-learning, which takes as input a policy and a dataset of transitions. At each iteration, we sample a transition $(\mathbf{s_t},\mathbf{a_t},\mathbf{s}_{t + 1})$ from the dataset, a potential future state $\mathbf{s}_{t + }\sim p(\mathbf{s}_{t + })$, and the next action $\mathbf{a}_{t + 1}\sim \pi (\mathbf{a}_{t + 1}\mid \mathbf{s}_{t + 1})$. We compute the importance weight using the current estimate from the classifier, and then plug the importance weight into the loss from Eq. 7. We then update the classifier using the gradient of this objective.
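+
+A minimal numpy sketch of the per-sample objective in Eq. 7, assuming the classifier outputs probabilities (in a deep-learning framework, the weight $w$ would additionally be wrapped in a stop-gradient so no gradient flows through it):
+
```python
import numpy as np

def c_learning_objective(c_next, c_rand, c_next_rand, gamma):
    """Per-sample off-policy C-learning objective (Eq. 7), to be maximized.
    c_next:      C(F=1 | s_t, a_t, s_{t+1})        -- observed next state
    c_rand:      C(F=1 | s_t, a_t, s_{t+})         -- random marginal sample
    c_next_rand: C(F=1 | s_{t+1}, a_{t+1}, s_{t+}) -- enters only through the
                 importance weight w (treated as a constant, i.e., stop-grad).
    """
    w = c_next_rand / (1.0 - c_next_rand)  # Eq. 6
    return ((1.0 - gamma) * np.log(c_next)
            + gamma * w * np.log(c_rand)
            + np.log(1.0 - c_rand))
```
+
+The three terms correspond exactly to the three labeled cases in the intuition above: next states as positives, marginal samples as negatives, and reweighted marginal samples as positives.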
+
+# Algorithm 2 Off-Policy C-learning
+
+Input transitions $\{(s_t, a_t, s_{t+1})\}$, policy $\pi_{\phi}$
+
+while not converged do
+
+$$
+\begin{array}{l}
+\quad \text{Sample } (s_t, a_t, s_{t+1}) \sim p(s_t, a_t, s_{t+1}), \quad s_{t+} \sim p(s_{t+}), \quad a_{t+1} \sim \pi_{\phi}(a_{t+1} \mid s_{t+1}) \\
+\quad w \leftarrow \text{stop-grad}\left(\frac{C_{\theta}^{\pi}(F = 1 \mid s_{t+1}, a_{t+1}, s_{t+})}{C_{\theta}^{\pi}(F = 0 \mid s_{t+1}, a_{t+1}, s_{t+})}\right) \\
+\quad \mathcal{F}(\theta, \pi) \leftarrow (1 - \gamma) \log C_{\theta}^{\pi}(F = 1 \mid s_t, a_t, s_{t+1}) + \log C_{\theta}^{\pi}(F = 0 \mid s_t, a_t, s_{t+}) + \gamma w \log C_{\theta}^{\pi}(F = 1 \mid s_t, a_t, s_{t+}) \\
+\quad \theta \leftarrow \theta + \eta \nabla_{\theta} \mathcal{F}(\theta, \pi)
+\end{array}
+$$
+
+Return classifier $C_{\theta}^{\pi}$
+
+# Algorithm 3 Goal-Conditioned C-learning
+
+Input transitions $\{(s_t, a_t, s_{t+1})\}$
+
+while not converged do
+
+$$
+\begin{array}{l}
+\quad \text{Sample } (s_t, a_t, s_{t+1}) \sim p(s_t, a_t, s_{t+1}), \quad s_{t+} \sim p(s_{t+}), \quad a_{t+1} \sim \pi_{\phi}(a_{t+1} \mid s_{t+1}, g = s_{t+}) \\
+\quad w \leftarrow \text{stop-grad}\left(\frac{C_{\theta}^{\pi}(F = 1 \mid s_{t+1}, a_{t+1}, s_{t+})}{C_{\theta}^{\pi}(F = 0 \mid s_{t+1}, a_{t+1}, s_{t+})}\right) \\
+\quad \mathcal{F}(\theta, \pi) \leftarrow (1 - \gamma) \log C_{\theta}^{\pi}(F = 1 \mid s_t, a_t, s_{t+1}) + \log C_{\theta}^{\pi}(F = 0 \mid s_t, a_t, s_{t+}) + \gamma w \log C_{\theta}^{\pi}(F = 1 \mid s_t, a_t, s_{t+}) \\
+\quad \theta \leftarrow \theta + \eta \nabla_{\theta} \mathcal{F}(\theta, \pi) \\
+\quad \mathcal{G}(\phi) \leftarrow \mathbb{E}_{\pi_{\phi}(a_t \mid s_t, g = s_{t+})}\left[\log C_{\theta}^{\pi}(F = 1 \mid s_t, a_t, s_{t+})\right] \\
+\quad \phi \leftarrow \phi + \eta \nabla_{\phi} \mathcal{G}(\phi)
+\end{array}
+$$
+
+Return policy $\pi_{\phi}$
+
+C-learning Bellman Equations. In Appendix D.1, we provide a convergence proof for off-policy C-learning in the tabular setting. Our proof hinges on the fact that the TD C-learning update rule has the same effect as applying the following (unknown) Bellman operator:
+
+$$
+\frac{C_{\theta}^{\pi}(F = 1\mid\mathbf{s}_{\mathbf{t}},\mathbf{a}_{\mathbf{t}},\mathbf{s}_{\mathbf{t}+})}{C_{\theta}^{\pi}(F = 0\mid\mathbf{s}_{\mathbf{t}},\mathbf{a}_{\mathbf{t}},\mathbf{s}_{\mathbf{t}+})} = (1 - \gamma)\frac{p(\mathbf{s}_{\mathbf{t}+1} = \mathbf{s}_{\mathbf{t}+} \mid\mathbf{s}_{\mathbf{t}},\mathbf{a}_{\mathbf{t}})}{p(\mathbf{s}_{\mathbf{t}+})} + \gamma\, \mathbb{E}_{\substack{p(\mathbf{s}_{\mathbf{t}+1}\mid\mathbf{s}_{\mathbf{t}},\mathbf{a}_{\mathbf{t}}) \\ \pi(\mathbf{a}_{\mathbf{t}+1}\mid\mathbf{s}_{\mathbf{t}+1})}}\left[\frac{C_{\theta}^{\pi}(F = 1\mid\mathbf{s}_{\mathbf{t}+1},\mathbf{a}_{\mathbf{t}+1},\mathbf{s}_{\mathbf{t}+})}{C_{\theta}^{\pi}(F = 0\mid\mathbf{s}_{\mathbf{t}+1},\mathbf{a}_{\mathbf{t}+1},\mathbf{s}_{\mathbf{t}+})}\right]
+$$
+
+This equation tells us that C-learning is equivalent to maximizing the reward function $r_{\mathbf{s}_{t + }}(\mathbf{s}_t,\mathbf{a}_t) = p(\mathbf{s}_{t + 1} = \mathbf{s}_{t + }\mid\mathbf{s}_t,\mathbf{a}_t) / p(\mathbf{s}_{t + })$, but does so without having to estimate either the dynamics $p(\mathbf{s}_{t + 1}\mid\mathbf{s}_t,\mathbf{a}_t)$ or the marginal distribution $p(\mathbf{s}_{t+})$.
+
+# 5.2 GOAL-CONDITIONED RL VIA C-LEARNING
+
+We now build a complete algorithm for goal-conditioned RL based on C-learning. We will derive this algorithm in two steps. First, while Section 5.1 shows how to estimate the future state density of a single policy, for goal-conditioned RL we will want to estimate the future state density of a conditional policy, which may be conditioned on many goals. Second, we will discuss how to update a policy using the learned density.
+
+To acquire a classifier for a goal-conditioned policy, we need to apply our objective function (Eq. 7) to all policies $\{\pi_{\phi}(a \mid s, g) \mid g \in S\}$ . We can do this efficiently by additionally conditioning the classifier and the policy on the commanded goal $g \in S$ . However, for learning a goal-reaching policy, we will only need to query the classifier on inputs where $\mathbf{s}_{t+} = g$ . Thus, we only need to learn a classifier conditioned on inputs where $\mathbf{s}_{t+} = g$ , resulting in the following objective:
+
+$$
+\begin{array}{l}
+\mathbb{E}_{\substack{p(\mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}),\ p(\mathbf{s}_{\mathbf{t}+1} \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}), \\ p(\mathbf{s}_{\mathbf{t}+}),\ \pi(\mathbf{a}_{\mathbf{t}+1} \mid \mathbf{s}_{\mathbf{t}+1}, \mathbf{g} = \mathbf{s}_{\mathbf{t}+})}}\Big[(1 - \gamma) \log C_{\theta}^{\pi}(F = 1 \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}, \mathbf{s}_{\mathbf{t}+1}) + \log C_{\theta}^{\pi}(F = 0 \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}, \mathbf{s}_{\mathbf{t}+}) \\
+\qquad + \gamma \left\lfloor w\left(\mathbf{s}_{\mathbf{t}+1}, \mathbf{a}_{\mathbf{t}+1}, \mathbf{s}_{\mathbf{t}+}\right)\right\rfloor_{\mathrm{sg}} \log C_{\theta}^{\pi}(F = 1 \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}, \mathbf{s}_{\mathbf{t}+})\Big]. \tag{8}
+\end{array}
+$$
+
+The difference between this objective and the one derived in Section 5.1 (Eq. 7) is that the next action is sampled from a goal-conditioned policy. The density function obtained from this classifier (Eq. 2) represents the future state density of $\mathbf{s}_{t+}$, given that the policy was commanded to reach the goal $g = s_{t+}$: $f_{\theta}^{\pi}(\mathbf{s}_{t+} = s_{t+} \mid \mathbf{s}_{t}, \mathbf{a}_{t}) = p_{+}^{\pi}(\mathbf{s}_{t+} = s_{t+} \mid \mathbf{s}_{t}, \mathbf{a}_{t}, g = s_{t+})$.
+
+Now that we can estimate the future state density of a goal-conditioned policy, our second step is to optimize the policy w.r.t. this learned density function. We need to define a reward function that says how good a particular future state density is for reaching a particular goal. While we can use any functional of future state density, a natural choice is the KL divergence between a Dirac density centered at the commanded goal and the future state density of the goal-conditioned policy:
+
+$$
+- D _ {\mathrm {K L}} \left(\mathbb {1} \left(\mathbf {s} _ {\mathbf {t} +} = g\right) \| p _ {+} ^ {\pi} \left(\mathbf {s} _ {\mathbf {t} +} \mid \mathbf {s} _ {\mathbf {t}}, \mathbf {a} _ {\mathbf {t}}, g\right)\right) = \log p _ {+} ^ {\pi} \left(\mathbf {s} _ {\mathbf {t} +} = g \mid \mathbf {s} _ {\mathbf {t}}, \mathbf {a} _ {\mathbf {t}}, g\right).
+$$
+
+Importantly, computing this KL only requires the future state density of the commanded goal. Since $p_{+}^{\pi}(\mathbf{s}_{\mathbf{t} + } \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}, g = \mathbf{s}_{\mathbf{t} + })$ is a monotone increasing function of the classifier predictions (see Eq. 2), we can write the policy objective in terms of the classifier predictions:
+
+$$
+\max_{\phi}\ \mathcal{G}(\phi), \quad \text{where} \quad \mathcal{G}(\phi) = \mathbb{E}_{\pi_{\phi}(\mathbf{a}_{\mathbf{t}} \mid \mathbf{s}_{\mathbf{t}}, g)}\left[\log C_{\theta}^{\pi}(F = 1 \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}, \mathbf{s}_{\mathbf{t}+} = g)\right].
+$$
+
+If we collect new experience during training, then the marginal distribution $p(\mathbf{s}_{\mathbf{t} + })$ will change throughout training. While this makes the learning problem for the classifier non-stationary, the learning problem for the policy (whose solution is independent of $p(\mathbf{s}_{\mathbf{t} + })$ ) remains stationary.
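+
+A small check of why the policy's problem stays stationary: the marginal $p(\mathbf{s}_{t+})$ multiplies the density identically for every action, so the maximizing action depends only on the classifier ratio. (Toy numbers below stand in for $C(F = 1 \mid \mathbf{s}_t, \mathbf{a}, g)$ across three discrete actions.)
+
```python
import numpy as np

c1 = np.array([0.2, 0.7, 0.5])  # hypothetical classifier outputs per action
ratio = c1 / (1.0 - c1)         # proportional to the future state density
best = int(np.argmax(ratio))
# Rescaling by any positive marginal never changes the argmax:
for marginal in (0.01, 1.0, 37.0):
    assert int(np.argmax(ratio * marginal)) == best
```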
+
+
+Figure 1: Testing hypotheses about Q-learning. (a) Hypothesis 1: underestimating the density. As predicted, Q-values often sum to less than 1. (b) Hypothesis 2: optimal goal sampling ratio. The performance of Q-learning is sensitive to the relabeling ratio; our analysis predicts that the optimal relabeling ratio is approximately $\lambda = \frac{1}{2}(1 + \gamma)$. C-learning (dashed orange) does not require tuning this ratio and outperforms Q-learning, even when the relabeling ratio for Q-learning is chosen optimally.
+
+Algorithm Summary: We summarize our approach, which we call goal-conditioned C-learning, in Alg. 3. Given a dataset of transitions, we alternate between estimating the future state density of the goal-conditioned policy and updating the policy to maximize the probability density of reaching the commanded goal. This algorithm is simple to implement: take a standard actor-critic RL algorithm and change the loss function for the critic (a few lines of code). In the tabular setting, goal-conditioned C-learning converges to the optimal policy (proof in Appendix D.3).
+
+# 5.3 IMPLICATIONS FOR Q-LEARNING AND HINDSIGHT RELABELING
+
+Off-policy C-learning (Alg. 2) bears a resemblance to Q-learning with hindsight relabeling, so we now compare these two algorithms to make hypotheses about Q-learning, which we will test in Section 6. We start by writing the objective for both methods using the cross-entropy loss, $\mathcal{C}\mathcal{E}(\cdot ,\cdot)$ :
+
+$$
+\begin{array}{l}
+\mathcal{F}_{\text{C-learning}}(\theta, \pi) = (1 - \gamma)\, \mathcal{CE}\left(C_{\theta}^{\pi}\left(F \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}, \mathbf{s}_{\mathbf{t}+1}\right),\ y = 1\right) \\
+\qquad + (1 + \gamma w)\, \mathcal{CE}\left(C_{\theta}^{\pi}\left(F \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}, \mathbf{s}_{\mathbf{t}+}\right),\ y = \frac{\gamma w}{\gamma w + 1} = \frac{\gamma C_{\theta}'}{\gamma C_{\theta}' + \left(1 - C_{\theta}'\right)}\right), \tag{9}
+\end{array}
+$$
+
+$$
+\begin{array}{l}
+\mathcal{F}_{\text{Q-learning}}(\theta, \pi) = (1 - \lambda)\, \mathcal{CE}\left(Q_{\theta}^{\pi}\left(\mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}, g = \mathbf{s}_{\mathbf{t}+1}\right),\ y = 1\right) \\
+\qquad + \lambda\, \mathcal{CE}\left(Q_{\theta}^{\pi}\left(\mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}}, g = \mathbf{s}_{\mathbf{t}+}\right),\ y = \gamma Q_{\theta}^{\pi}\left(\mathbf{s}_{\mathbf{t}+1}, \mathbf{a}_{\mathbf{t}+1}, \mathbf{s}_{\mathbf{t}+}\right)\right), \tag{10}
+\end{array}
+$$
+
+where $C_{\theta}' = C_{\theta}^{\pi}(F = 1 \mid \mathbf{s}_{t+1}, \mathbf{a}_{t+1}, \mathbf{s}_{t+})$ is the classifier prediction at the next state and $\lambda \in [0,1]$ denotes the relabeling ratio used in Q-learning, corresponding to the fraction of goals sampled from $p(\mathbf{s}_{t+})$. There are two differences between these equations, which lead us to make two hypotheses about the performance of Q-learning. The first difference is how the predicted targets are scaled for random goals: Q-learning scales the next-state prediction by $\gamma$, while C-learning effectively scales it by the larger factor $\gamma / (\gamma C_{\theta}' + (1 - C_{\theta}'))$. Since Q-learning uses a smaller scale, we make the following hypothesis:
+
+Hypothesis 1. $Q$ -learning will predict smaller future state densities and therefore underestimate the true future state density function.
+
+This hypothesis is interesting because it predicts that prior methods based on Q-learning will not learn a proper density function, and therefore fail to solve the future state density estimation problem. The second difference between C-learning and Q-learning is that Q-learning contains a tunable parameter $\lambda$ , which controls the ratio with which next-states and random states are used as goals. This ratio is equivalent to a weight on the two loss terms, and our experiments will show that Q-learning with hindsight relabeling is sensitive to this parameter. In contrast, C-learning does not require specification of this hyperparameter. Matching the coefficients in the Q-learning loss (Eq. 10) with those in our loss (Eq. 9) (i.e., $[1 - \lambda, \lambda] \propto [1 - \gamma, 1 + \gamma w]$ ), we make the following hypothesis:
+
+Hypothesis 2. $Q$ -learning with hindsight relabeling will most accurately solve the future state density estimation problem (Def. 2) when random future states are sampled with probability $\lambda = \frac{1 + \gamma}{2}$ .
+
+Prior work has found that this goal sampling ratio is a sensitive hyperparameter (Andrychowicz et al., 2017; Pong et al., 2018; Zhao et al., 2019); this hypothesis is useful because it offers an automatic way to choose the hyperparameter. The next section will experimentally test these hypotheses.
+
+# 6 EXPERIMENTS
+
+We aim our experiments at answering the following questions:
+
+1. Do Q-learning and C-learning accurately estimate the future state density (Problem 2)?
+
+2. (Hypothesis 1) Does Q-learning underestimate the future state density function ( $\S$ 5.3)?
+3. (Hypothesis 2) Is the predicted relabeling ratio $\lambda = (1 + \gamma) / 2$ optimal for Q-learning ( $\S 5.3$ )?
+4. How does C-learning compare with prior goal-conditioned RL methods on benchmark tasks?
+
+Do Q-learning and C-learning accurately predict the future? Our first experiment studies how well Q-learning and C-learning solve the future state density estimation problem (Def. 2). We use a continuous version of a gridworld for this task and measure how close the predicted future state density is to the true future state density using a KL divergence. Since this environment is continuous and stochastic, Q-learning without hindsight relabeling learns $Q = 0$ on this environment. In the on-policy setting, MC C-learning and TD C-learning perform similarly, while the prediction error for Q-learning (with hindsight relabeling) is more than three times worse. In the off-policy setting, TD C-learning is more accurate than Q-learning (with hindsight relabeling), achieving a KL divergence that is $14\%$ lower than that of Q-learning. As expected, TD C-learning performs better than MC C-learning in the off-policy setting. These experiments demonstrate that C-learning yields a more accurate solution to the future state density estimation problem, as compared with Q-learning. See Appendix G.1 for full experimental details and results.
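The error metric used in this comparison can be sketched in a few lines of NumPy. This is a minimal sketch: the arrays `p` and `q` below are illustrative stand-ins for the analytic and predicted future state densities, not the paper's actual data.

```python
import numpy as np

def forward_kl(p_true, p_pred, eps=1e-12):
    """D_KL(P || Q), averaged over (state, action) pairs.

    p_true, p_pred: arrays of shape [n_states, n_actions, n_goals],
    where each [s, a] slice is a distribution over future states."""
    p = np.clip(p_true, eps, None)
    q = np.clip(p_pred, eps, None)
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))

# Illustrative check: the divergence is zero when the prediction is exact,
# and positive for a mismatched prediction.
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(25), size=(25, 4))
q = rng.dirichlet(np.ones(25), size=(25, 4))
assert np.isclose(forward_kl(p, p), 0.0)
assert forward_kl(p, q) > 0.0
```

The forward direction $D_{\mathrm{KL}}(P \parallel Q)$ penalizes predictions that assign near-zero probability to states the true distribution visits.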
+
+Our next experiment studies the ability of C-learning to predict the future in higher-dimensional continuous control tasks. We collected a dataset of experience from agents pretrained to solve three locomotion tasks from OpenAI Gym. We applied C-learning to each dataset, and used the resulting classifier to predict the expected future state. As a baseline, we trained a 1-step dynamics model on this same dataset and unrolled this model autoregressively to obtain a prediction for the expected future state. Varying the discount factor,
+
+
+(a) Walker2d-v2
+(b) Predicted future states
+
+
+Figure 2: Predicting the Future: C-learning makes accurate predictions of the expected future state across a range of tasks and discount values. In contrast, learning a 1-step dynamics model and unrolling that model results in high error for large discount values.
+
+we compared each method on Walker2d-v2 in Fig. 2 and on the other tasks in Appendix Fig. 7. The 1-step dynamics model is accurate over short horizons, but its performance degrades for larger values of $\gamma$, likely because prediction errors accumulate over time. In contrast, the predictions obtained by MC C-learning and TD C-learning remain accurate for large values of $\gamma$. Appendix G.2 contains full experimental details; Appendix I and the project website contain more visualizations.
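The ground-truth target in this comparison, the expected state under the discounted future state distribution, can be estimated directly from a trajectory. A minimal sketch, assuming the geometric offset starts at the next state and truncating the infinite sum at the trajectory's end:

```python
import numpy as np

def expected_future_state(trajectory, t, gamma):
    """Estimate E[s_{t+}] = (1 - gamma) * sum_{Delta >= 0} gamma^Delta * s_{t+1+Delta}
    from a single trajectory of shape [T, state_dim]."""
    future = trajectory[t + 1:]
    weights = (1 - gamma) * gamma ** np.arange(len(future))
    # Renormalize to correct for truncating the infinite geometric sum.
    return weights @ future / weights.sum()

# Toy check: for a constant trajectory, the expected future state is that constant.
traj = np.ones((100, 3))
assert np.allclose(expected_future_state(traj, t=0, gamma=0.9), 1.0)
```

For C-learning, the analogous prediction is obtained by reweighting sampled states by the classifier ratio rather than by explicit geometric weights.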
+
+Testing our hypotheses about Q-learning: We now test the two hypotheses made in Section 5.3. The first hypothesis is that Q-learning will underestimate the future state density function. To test this hypothesis, we compute the sum over the predicted future state density function, $\int p_{+}^{\pi}(\mathbf{s_{t+}} = s_{t+} \mid \mathbf{s_t}, \mathbf{a_t})\, d s_{t+}$, which in theory should equal one. We compared the predictions from MC C-learning and Q-learning using on-policy data (details in Appendix G.1). As shown in Fig. 1a, the predictions from C-learning summed to one, but the predictions from Q-learning consistently summed to less than one, especially for large values of $\lambda$. However, our next experiment shows that Q-learning works best when using large values of $\lambda$, suggesting that the hyperparameters for which Q-learning succeeds are precisely those for which it does not learn a proper density function.
+
+Our second hypothesis is that Q-learning will perform best when the relabeling ratio is chosen to be $\lambda = (1 + \gamma) / 2$. Fig. 1b shows the results from this experiment. First, the performance of Q-learning is highly sensitive to the relabeling ratio: values of $\lambda$ that are too large or too small cause Q-learning to perform poorly, worse than simply predicting a uniform distribution. Second, not only does the optimal choice of $\lambda$ increase with $\gamma$, but our theoretical prediction of $\lambda = (1 + \gamma) / 2$ almost exactly matches the optimal value of $\lambda$. Third, C-learning, which uses a 50-50 sampling ratio, consistently does better than Q-learning, even for the best choice of $\lambda$. These experiments support our hypothesis for the choice of relabeling ratio while reaffirming that our principled approach to future state density estimation obtains a more accurate solution.
+
+Goal-Conditioned RL for continuous control tasks: Our last set of experiments applies goal-conditioned C-learning (Alg. 3) to benchmark continuous control tasks from prior work, shown in Fig. 3. These tasks range in difficulty from the 6-dimensional Sawyer Reach task to the 45-dimensional Pen task (see Appendix G). The aim of these experiments is to show that C-learning is competitive with prior goal-conditioned RL methods, without requiring careful tuning of the goal sampling ratio. We compare C-learning with a number of prior methods based on Q-learning, which
+
+
+
+
+
+
+Figure 3: Goal-conditioned RL: C-learning is competitive with prior goal-conditioned RL methods across a suite of benchmark tasks, without requiring careful tuning of the relabeling distribution.
+
+(Figure 3 legend: C-learning (ours); TD3 [Fujimoto 2018]; [Lin 2019]; TD3 + 100% HER [Andrychowicz 2017]; TD3 + 50% HER [Andrychowicz 2017])
+
+differ in how goals are sampled during training: TD3 (Fujimoto et al., 2018) does no relabeling, Lin et al. (2019) uses $50\%$ next state goals and $50\%$ random goals, and HER (Andrychowicz et al., 2017) uses final state relabeling (we compare against both $100\%$ and $50\%$ relabeling). None of these methods require a reward function or distance function for training; for evaluation, we use the L2 metric between the commanded goal and the terminal state (the average distance to goal and minimum distance to goal show the same trends). As shown in Fig. 3, C-learning is competitive with the best of these baselines across all tasks, and substantially better than all baselines on the Sawyer manipulation tasks. These manipulation tasks are more complex than the others because they require indirect manipulation of objects in the environment. Visualizing the learned policies, we observe that C-learning has discovered regrasping and fine-grained adjustment behaviors, behaviors that typically require complex reward functions to learn (Popov et al., 2017). On the Sawyer Push and Sawyer Printer tasks, we found that a hybrid of TD C-learning and MC C-learning performed better than standard C-learning. This variant, which is analogous to an "n-step" version of C-learning, is simple to implement and is described in Appendix E. In summary, C-learning performs as well as prior methods on simpler tasks and better on complex tasks, does not depend on a sensitive hyperparameter (the goal sampling ratio), and maximizes a well-defined objective function.
+
+Predicting the goal sampling ratio for goal-conditioned RL: While C-learning prescribes a precise method for sampling goals, prior hindsight relabeling methods are sensitive to the goal sampling ratio. To visualize this, we varied the goal sampling ratio used by Lin et al. (2019) on the maze2d-umaze-v0 task from Fu et al. (2020). As shown in Fig. 4, properly choosing this ratio can result in a $50\%$ decrease in final distance. Additionally, our hypothesis that the optimal goal sampling ratio is $\lambda = (1 + \gamma) / 2$ accurately predicts the best value for this ratio.
+
+
+Figure 4: Q-learning is sensitive to the relabeling ratio. Our analysis predicts the optimal relabeling ratio.
+
+# 7 CONCLUSION
+
+A goal-oriented agent should be able to predict and control the future state of its environment. In this paper, we used this idea to reformulate the standard goal-conditioned RL problem as one of estimating and optimizing the future state density function. We showed that Q-learning does not directly solve this problem in (stochastic) environments with continuous states, and that hindsight relabeling produces, at best, a mediocre solution to an unclear objective function. In contrast, C-learning yields more accurate solutions. Moreover, our analysis makes two hypotheses about when and where hindsight relabeling will most effectively solve this problem, both of which are validated in our experiments. Our experiments also demonstrate that C-learning scales to high-dimensional continuous control tasks, where its performance is competitive with state-of-the-art goal-conditioned RL methods while offering an automatic and principled mechanism for hindsight relabeling.
+
+# ACKNOWLEDGEMENTS
+
+We thank Dibya Ghosh and Vitchyr Pong for discussions about this work, and thank Vincent Vanhoucke, Ofir Nachum, and the anonymous reviewers for providing feedback on early versions of this work. This work is supported by the Fannie and John Hertz Foundation, the National Science Foundation (DGE-1745016, IIS-1763562), and the US Army (W911NF1920104).
+
+# REFERENCES
+
+Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in neural information processing systems, pp. 5048-5058, 2017.
+André Barreto, Will Dabney, Rémi Munos, Jonathan J Hunt, Tom Schaul, Hado P van Hasselt, and David Silver. Successor features for transfer in reinforcement learning. In Advances in neural information processing systems, pp. 4055-4065, 2017.
+Andre Barreto, Diana Borsa, Shaobo Hou, Gheorghe Comanici, Eser Aygün, Philippe Hamel, Daniel Toyama, Shibl Mourad, David Silver, Doina Precup, et al. The option keyboard: Combining skills in reinforcement learning. In Advances in Neural Information Processing Systems, pp. 13052-13062, 2019.
+Steffen Bickel, Michael Brückner, and Tobias Scheffer. Discriminative learning for differing training and test distributions. In Proceedings of the 24th international conference on Machine learning, pp. 81-88, 2007.
+Devendra Singh Chaplot, Ruslan Salakhutdinov, Abhinav Gupta, and Saurabh Gupta. Neural topological slam for visual navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12875-12884, 2020.
+Peter Dayan. Improving generalization for temporal difference learning: The successor representation. *Neural Computation*, 5(4):613-624, 1993.
+Yiming Ding, Carlos Florensa, Pieter Abbeel, and Mariano Phielipp. Goal-conditioned imitation learning. In Advances in Neural Information Processing Systems, pp. 15324-15335, 2019.
+Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
+Frederik Ebert, Chelsea Finn, Sudeep Dasari, Annie Xie, Alex Lee, and Sergey Levine. Visual foresight: Model-based deep reinforcement learning for vision-based robotic control. arXiv preprint arXiv:1812.00568, 2018.
+Benjamin Eysenbach, Russ R Salakhutdinov, and Sergey Levine. Search on the replay buffer: Bridging planning and reinforcement learning. In Advances in Neural Information Processing Systems, pp. 15246-15257, 2019.
+Benjamin Eysenbach, Xinyang Geng, Sergey Levine, and Ruslan Salakhutdinov. Rewriting history with inverse rl: Hindsight inference for policy improvement. arXiv preprint arXiv:2002.11089, 2020.
+Meng Fang, Tianyi Zhou, Yali Du, Lei Han, and Zhengyou Zhang. Curriculum-guided hindsight experience replay. In Advances in Neural Information Processing Systems, pp. 12623-12634, 2019.
+Justin Fu, Aviral Kumar, Matthew Soh, and Sergey Levine. Diagnosing bottlenecks in deep q-learning algorithms. arXiv preprint arXiv:1902.10250, 2019.
+Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.
+Scott Fujimoto, Herke Van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477, 2018.
+Dibya Ghosh, Abhishek Gupta, Justin Fu, Ashwin Reddy, Coline Devin, Benjamin Eysenbach, and Sergey Levine. Learning to reach goals without reinforcement learning. arXiv preprint arXiv:1912.06088, 2019.
+Karol Gregor, George Papamakarios, Frederic Besse, Lars Buesing, and Theophane Weber. Temporal difference variational auto-encoder. arXiv preprint arXiv:1806.03107, 2018.
+Sergio Guadarrama, Anoop Korattikara, Oscar Ramirez, Pablo Castro, Ethan Holly, Sam Fishman, Ke Wang, Ekaterina Gonina, Neal Wu, Chris Harris, et al. Tf-agents: A library for reinforcement learning in tensorflow, 2018.
+
+Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, and Karol Hausman. Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. arXiv preprint arXiv:1910.11956, 2019.
+Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 297-304, 2010.
+Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
+Ferenc Huszár. Variational inference using implicit distributions. arXiv preprint arXiv:1702.08235, 2017.
+Tommi Jaakkola, Michael I Jordan, and Satinder P Singh. On the convergence of stochastic iterative dynamic programming algorithms. Neural computation, 6(6):1185-1201, 1994.
+Leslie Pack Kaelbling. Learning to achieve goals. In IJCAI, pp. 1094-1099. CiteSeer, 1993.
+Hilbert J Kappen. Path integrals and symmetry breaking for optimal control theory. Journal of statistical mechanics: theory and experiment, 2005(11):P11011, 2005.
+Sergey Levine. Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909, 2018.
+Xingyu Lin, Harjatin Singh Baweja, and David Held. Reinforcement learning without ground-truth state. arXiv preprint arXiv:1905.07866, 2019.
+Qiang Liu, Lihong Li, Ziyang Tang, and Dengyong Zhou. Breaking the curse of horizon: Infinite-horizon off-policy estimation. In Advances in Neural Information Processing Systems, pp. 5356-5366, 2018.
+Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, and Pierre Sermanet. Learning latent plans from play. In Conference on Robot Learning, pp. 1113-1132, 2020.
+Ofir Nachum, Shixiang Gu, Honglak Lee, and Sergey Levine. Near-optimal representation learning for hierarchical reinforcement learning. arXiv preprint arXiv:1810.01257, 2018.
+Ofir Nachum, Yinlam Chow, Bo Dai, and Lihong Li. Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections. In Advances in Neural Information Processing Systems, pp. 2318-2328, 2019.
+Suraj Nair, Silvio Savarese, and Chelsea Finn. Goal-aware prediction: Learning to model what matters. arXiv preprint arXiv:2007.07170, 2020.
+Soroush Nasiriany, Vitchyr Pong, Steven Lin, and Sergey Levine. Planning with goal-conditioned policies. In Advances in Neural Information Processing Systems, pp. 14843-14854, 2019.
+Junhyuk Oh, Yijie Guo, Satinder Singh, and Honglak Lee. Self-imitation learning. arXiv preprint arXiv:1806.05635, 2018.
+Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
+Pedro A Ortega and Daniel A Braun. Thermodynamics as a theory of decision-making with information-processing costs. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 469(2153):20120683, 2013.
+Silviu Pitis, Harris Chan, Stephen Zhao, Bradly Stadie, and Jimmy Ba. Maximum entropy gain exploration for long horizon multi-goal reinforcement learning. arXiv preprint arXiv:2007.02832, 2020.
+Vitchyr Pong, Shixiang Gu, Murtaza Dalal, and Sergey Levine. Temporal difference models: Model-free deep rl for model-based control. arXiv preprint arXiv:1802.09081, 2018.
+Vitchyr H Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl, and Sergey Levine. Skew-fit: Statecovering self-supervised reinforcement learning. arXiv preprint arXiv:1903.03698, 2019.
+Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, and Martin Riedmiller. Data-efficient deep reinforcement learning for dexterous manipulation. arXiv preprint arXiv:1704.03073, 2017.
+
+Konrad Rawlik, Marc Toussaint, and Sethu Vijayakumar. On stochastic optimal control and reinforcement learning by approximate inference. In Twenty-third international joint conference on artificial intelligence, 2013.
+Nikolay Savinov, Alexey Dosovitskiy, and Vladlen Koltun. Semi-parametric topological memory for navigation. arXiv preprint arXiv:1803.00653, 2018.
+Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International conference on machine learning, pp. 1312-1320, 2015.
+Yannick Schroecker and Charles Isbell. Universal value density estimation for imitation learning and goal-conditioned reinforcement learning. arXiv preprint arXiv:2002.06473, 2020.
+Evan Shelhamer, Parsa Mahmoudieh, Max Argus, and Trevor Darrell. Loss is its own reward: Self-supervision for reinforcement learning. arXiv preprint arXiv:1612.07307, 2016.
+Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised map inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
+Hao Sun, Zhizhong Li, Xiaotong Liu, Bolei Zhou, and Dahua Lin. Policy continuation with hindsight inverse dynamics. In Advances in Neural Information Processing Systems, pp. 10265-10275, 2019.
+Richard S Sutton. Learning to predict by the methods of temporal differences. Machine learning, 3(1):9-44, 1988.
+Richard S Sutton and Brian Tanner. Temporal-difference networks. In Advances in neural information processing systems, pp. 1377-1384, 2005.
+Csaba Szepesvari, Richard S Sutton, Joseph Modayil, Shalabh Bhatnagar, et al. Universal option models. In Advances in Neural Information Processing Systems, pp. 990-998, 2014.
+Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. Deepmind control suite. arXiv preprint arXiv:1801.00690, 2018.
+Yuval Tassa, Saran Tunyasuvunakool, Alistair Muldal, Yotam Doron, Siqi Liu, Steven Bohez, Josh Merel, Tom Erez, Timothy Lillicrap, and Nicolas Heess. dm_control: Software and tasks for continuous control. arXiv preprint arXiv:2006.12983, 2020.
+Evangelos Theodorou, Jonas Buchli, and Stefan Schaal. A generalized path integral control approach to reinforcement learning. The Journal of Machine Learning Research, 11:3137-3181, 2010.
+Emanuel Todorov. General duality between optimal control and estimation. In 2008 47th IEEE Conference on Decision and Control, pp. 4286-4292. IEEE, 2008.
+Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920, 2016.
+Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning, pp. 1094-1100, 2020.
+Rui Zhao, Xudong Sun, and Volker Tresp. Maximum entropy-regularized multi-goal reinforcement learning. arXiv preprint arXiv:1905.08786, 2019.
+Brian D Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. 2010.
+
+# A PARTIALLY-OBSERVED GOALS
+
+In many realistic scenarios we have uncertainty over the true goal, with many goals having some probability of being the user's true goal. These scenarios might arise because of sensor noise or because the user wants the agent to focus on a subset of the goal observation (e.g., a robot's center of mass).
+
+Noisy Goals. In many prior works, this setting is approached by assuming that a noisy measurement of the goal $z \sim p(z \mid \mathbf{s}_{\mathbf{t}+} = g)$ is observed, and conditioning the Q-function on this measurement. For example, the measurement $z$ might be the VAE latent of the image $g$ (Pong et al., 2019; Gregor et al., 2018; Shelhamer et al., 2016). In this setting, we will instead aim to estimate the future discounted measurement distribution,
+
+$$
+p \left(\mathbf {z} _ {\mathbf {t} +} \mid \mathbf {s} _ {\mathbf {t}}, \mathbf {a} _ {\mathbf {t}}\right) = (1 - \gamma) \sum_ {\Delta = 0} ^ {\infty} \gamma^ {\Delta} \mathbb {E} _ {\mathbf {s} _ {\mathbf {t} + \Delta} \sim p \left(\mathbf {s} _ {\mathbf {t} + \Delta} \mid \mathbf {s} _ {\mathbf {t}}, \mathbf {a} _ {\mathbf {t}}\right)} \left[ p \left(\mathbf {z} _ {\mathbf {t} + \Delta} \mid \mathbf {s} _ {\mathbf {t} + \Delta}\right) \right].
+$$
+
+Whereas the goal-conditioned setting viewed $f_{\theta}^{\pi}(\mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t)$ as defining a measure over goals, here we view $f_{\theta}^{\pi}(\mathbf{z}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t)$ as an implicitly-defined distribution over measurements.
+
+Partial Goals. In some settings, the user may want the agent to pay attention to part of the goal, ignoring certain coordinates or attributes. Applying C-learning to this setting is easy. Let $\mathrm{crop}(\mathbf{s_t})$ be a user-provided function that extracts relevant coordinates or aspects of the goal state. Then, the user can simply parametrize the C-learning classifier as $C_{\theta}^{\pi}(F|\mathbf{s_t},\mathbf{a_t},g = \mathrm{crop}(\mathbf{s_{t + }}))$ .
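A minimal sketch of this parametrization, assuming state and action dimensions and a `crop` that keeps the first two coordinates purely for illustration:

```python
import numpy as np

def crop(state, goal_indices=(0, 1)):
    """User-provided function extracting the goal-relevant coordinates
    (here, illustratively, the first two state dimensions)."""
    return state[..., list(goal_indices)]

def classifier_input(s_t, a_t, s_future):
    # The classifier C(F | s_t, a_t, g = crop(s_{t+})) sees only the cropped goal.
    return np.concatenate([s_t, a_t, crop(s_future)], axis=-1)

x = classifier_input(np.zeros(4), np.zeros(2), np.arange(4.0))
assert x.shape == (8,)  # 4 state dims + 2 action dims + 2 cropped goal dims
```

Because only the classifier's input changes, training proceeds exactly as in standard C-learning.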
+
+# B ANALYTIC FUTURE STATE DISTRIBUTION
+
+In this section we describe how to analytically compute the discounted future state distribution for the gridworld examples. We started by creating two matrices:
+
+$$
+T \in \mathbb{R}^{25 \times 25}: \quad T[s, s'] = \sum_{a} \mathbb{1}\left(f(s, a) = s'\right) \pi(a \mid s),
+$$
+
+$$
+T_0 \in \mathbb{R}^{25 \times 4 \times 25}: \quad T_0[s, a, s'] = \mathbb{1}\left(f(s, a) = s'\right),
+$$
+
+where $f(s, a)$ denotes the deterministic transition function. The future discounted state distribution is then given by:
+
+$$
+\begin{array}{l} P = (1 - \gamma)\left[T_0 + \gamma T_0 T + \gamma^2 T_0 T^2 + \gamma^3 T_0 T^3 + \dots\right] \\ \quad\; = (1 - \gamma)\, T_0 \left[I + \gamma T + \gamma^2 T^2 + \gamma^3 T^3 + \dots\right] \\ \quad\; = (1 - \gamma)\, T_0 \left(I - \gamma T\right)^{-1}. \end{array}
+$$
+
+The tensor-matrix product $T_0 T$ is computed as $\mathrm{einsum}(\text{"ijk,kh} \rightarrow \text{ijh"}, T_0, T)$. To measure the error in our estimate, we use the forward KL divergence, $D_{\mathrm{KL}}(P \parallel Q)$, where $Q$ is the tensor of predictions:
+
+$$
+Q \in \mathbb{R}^{25 \times 4 \times 25}: \quad Q[s, a, g] = q(g \mid s, a).
+$$
+
+# C ASSIGNMENT EQUATIONS FOR THE MSE LOSS
+
+In Section 5.3, we derived the assignment equations for C-learning under the cross entropy loss. In this section we show that using the mean squared error (MSE) loss results in the same update equations. Equivalently, this can be viewed as using a Gaussian model for Q values instead of a logistic model. This result suggests that the difference in next-state scaling between C-learning and Q-learning is not just a quirk of the loss function.
+
+To start, we write the loss for C-learning using the MSE and then complete the square:
+
+$$
+\begin{array}{l} L(\theta, \pi) = (1 - \gamma)\left(C_{\theta}^{\pi}(F = 1 \mid \mathbf{s_t}, \mathbf{a_t}, \mathbf{s_{t+1}}) - 1\right)^2 + \gamma w \left(C_{\theta}^{\pi}(F = 1 \mid \mathbf{s_t}, \mathbf{a_t}, \mathbf{s_{t+}}) - 1\right)^2 \\ \quad + \left(C_{\theta}^{\pi}(F = 1 \mid \mathbf{s_t}, \mathbf{a_t}, \mathbf{s_{t+}}) - 0\right)^2 \\ = (1 - \gamma)\left(C_{\theta}^{\pi}(F = 1 \mid \mathbf{s_t}, \mathbf{a_t}, \mathbf{s_{t+1}}) - 1\right)^2 + (\gamma w + 1)\left(C_{\theta}^{\pi}(F = 1 \mid \mathbf{s_t}, \mathbf{a_t}, \mathbf{s_{t+}}) - \frac{\gamma w}{\gamma w + 1}\right)^2 \\ \quad + \underbrace{\gamma w - \frac{(\gamma w)^2}{\gamma w + 1}}_{\text{constant w.r.t. } C_{\theta}}. \end{array}
+$$
+
+The optimal values for $C_{\theta}$ for both cases of goal are the same as for the cross entropy loss:
+
+$$
+C_{\theta}^{\pi}(F = 1 \mid \mathbf{s_t}, \mathbf{a_t}, \mathbf{s_{t+}}) \leftarrow \begin{cases} 1 & \text{if } \mathbf{s_{t+1}} = \mathbf{s_{t+}} \\ \frac{\gamma w}{\gamma w + 1} & \text{otherwise} \end{cases}.
+$$
+
+Additionally, observe that the weights on the two loss terms match those of the cross entropy loss: the next-state goal loss is scaled by $(1 - \gamma)$, while the combined random goal loss is scaled by $\gamma w + 1$.
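The completed square can be verified numerically; a small sketch, where the sampled values of $\gamma$, $w$, and the classifier output $C$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
for gamma, w, c in rng.uniform(0.01, 0.99, size=(100, 3)):
    gw = gamma * w
    # Random-goal terms of the MSE loss: gw * (C - 1)^2 + (C - 0)^2 ...
    lhs = gw * (c - 1.0) ** 2 + c ** 2
    # ... equal a completed square plus a constant that does not depend on C.
    const = gw - gw ** 2 / (gw + 1.0)
    rhs = (gw + 1.0) * (c - gw / (gw + 1.0)) ** 2 + const
    assert np.isclose(lhs, rhs)
```

Since the constant does not depend on $C_{\theta}$, the minimizer over $C_{\theta}$ is the target $\gamma w / (\gamma w + 1)$ exactly as in the cross entropy case.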
+
+# D A BELLMAN EQUATION FOR C-LEARNING AND CONVERGENCE GUARANTEES
+
+The aim of this section is to show that off-policy C-learning converges, and that the fixed point corresponds to the Bayes-optimal classifier. This result guarantees that C-learning will accurately evaluate the future state density of a given policy. We then provide a policy improvement theorem, which guarantees that goal-conditioned C-learning converges to the optimal goal-conditioned policy.
+
+# D.1 BELLMAN EQUATIONS FOR C-LEARNING
+
+We start by introducing a new Bellman equation for C-learning, which will be satisfied by the Bayes optimal classifier. While actually evaluating this Bellman equation requires privileged knowledge of the transition dynamics and the marginal state density, if we knew these quantities we could turn this Bellman equation into a convergent value iteration procedure. In the next section, we will show that the updates of off-policy C-learning are equivalent to this value iteration procedure, but do not require knowledge of the transition dynamics or marginal state density. This equivalence allows us to conclude that C-learning converges to the Bayes-optimal classifier.
+
+Our Bellman equation says that the future state density function $f_{\theta}$ induced by a classifier $C_{\theta}$ should satisfy the recursive relationship noted in Eq. 4.
+
+Lemma 1 (C-learning Bellman Equation). Let a policy $\pi(\mathbf{a_t} \mid \mathbf{s_t})$, dynamics function $p(\mathbf{s_{t+1}} \mid \mathbf{s_t}, \mathbf{a_t})$, and marginal distribution $p(\mathbf{s_{t+}})$ be given. If a classifier $C_{\theta}$ is the Bayes-optimal classifier, then it satisfies the following identity for all states $\mathbf{s_t}$, actions $\mathbf{a_t}$, and potential future states $\mathbf{s_{t+}}$:
+
+$$
+\frac{C_{\theta}^{\pi}\left(F = 1 \mid \mathbf{s_t}, \mathbf{a_t}, \mathbf{s_{t+}}\right)}{C_{\theta}^{\pi}\left(F = 0 \mid \mathbf{s_t}, \mathbf{a_t}, \mathbf{s_{t+}}\right)} = (1 - \gamma) \frac{p\left(\mathbf{s_{t+1}} = \mathbf{s_{t+}} \mid \mathbf{s_t}, \mathbf{a_t}\right)}{p\left(\mathbf{s_{t+}}\right)} + \gamma \mathbb{E}_{\substack{p\left(\mathbf{s_{t+1}} \mid \mathbf{s_t}, \mathbf{a_t}\right) \\ \pi\left(\mathbf{a_{t+1}} \mid \mathbf{s_{t+1}}\right)}}\left[\frac{C_{\theta}^{\pi}\left(F = 1 \mid \mathbf{s_{t+1}}, \mathbf{a_{t+1}}, \mathbf{s_{t+}}\right)}{C_{\theta}^{\pi}\left(F = 0 \mid \mathbf{s_{t+1}}, \mathbf{a_{t+1}}, \mathbf{s_{t+}}\right)}\right] \tag{11}
+$$
+
+Proof. If $C_{\theta}$ is the Bayes-optimal classifier, then $f_{\theta}^{\pi}(\mathbf{s_{t+}} \mid \mathbf{s_t}, \mathbf{a_t}) = p_+^{\pi}(\mathbf{s_{t+}} \mid \mathbf{s_t}, \mathbf{a_t})$. Substituting the definition of $f_{\theta}$ (Eq. 2) into Eq. 4 yields the Bellman equation above (Eq. 11). $\square$
+
+This Bellman equation resembles the standard Bellman equation with a goal-conditioned reward function $r_{\mathbf{s_{t+}}}(\mathbf{s_t}, \mathbf{a_t}) = p(\mathbf{s_{t+1}} = \mathbf{s_{t+}} \mid \mathbf{s_t}, \mathbf{a_t}) / p(\mathbf{s_{t+}})$, where the Q-function corresponds to the classifier ratio $C_{\theta}^{\pi}(F = 1 \mid \mathbf{s_t}, \mathbf{a_t}, \mathbf{s_{t+}}) / C_{\theta}^{\pi}(F = 0 \mid \mathbf{s_t}, \mathbf{a_t}, \mathbf{s_{t+}})$. However, actually computing this reward function to evaluate the Bellman equation requires knowledge of the densities $p(\mathbf{s_{t+1}} \mid \mathbf{s_t}, \mathbf{a_t})$ and $p(\mathbf{s_{t+}})$, both of which we assume are unknown to our agent. Nonetheless, if we had this privileged information,
+
+we could readily turn this Bellman equation into the following assignment equation:
+
+$$
+\frac{C_{\theta}^{\pi}\left(F = 1 \mid \mathbf{s_t}, \mathbf{a_t}, \mathbf{s_{t+}}\right)}{C_{\theta}^{\pi}\left(F = 0 \mid \mathbf{s_t}, \mathbf{a_t}, \mathbf{s_{t+}}\right)} \leftarrow (1 - \gamma) \frac{p\left(\mathbf{s_{t+1}} = \mathbf{s_{t+}} \mid \mathbf{s_t}, \mathbf{a_t}\right)}{p\left(\mathbf{s_{t+}}\right)} + \gamma \mathbb{E}_{\substack{p\left(\mathbf{s_{t+1}} \mid \mathbf{s_t}, \mathbf{a_t}\right) \\ \pi\left(\mathbf{a_{t+1}} \mid \mathbf{s_{t+1}}\right)}}\left[\frac{C_{\theta}^{\pi}\left(F = 1 \mid \mathbf{s_{t+1}}, \mathbf{a_{t+1}}, \mathbf{s_{t+}}\right)}{C_{\theta}^{\pi}\left(F = 0 \mid \mathbf{s_{t+1}}, \mathbf{a_{t+1}}, \mathbf{s_{t+}}\right)}\right] \tag{12}
+$$
+
+Lemma 2. If we use a tabular representation for the ratio $\frac{C_{\theta}^{\pi}(F = 1|\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t + })}{C_{\theta}^{\pi}(F = 0|\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t + })}$ , then iterating the assignment equation (Eq. 12) converges to the optimal classifier.
+
+Proof. Eq. 12 can be viewed as doing value iteration with a goal-conditioned Q function parametrized as $Q(\mathbf{s_t},\mathbf{a_t},\mathbf{s_{t + }}) = \frac{C_\theta^\pi(F = 1|\mathbf{s_t},\mathbf{a_t},\mathbf{s_{t + }})}{C_\theta^\pi(F = 0|\mathbf{s_t},\mathbf{a_t},\mathbf{s_{t + }})}$ and a goal-conditioned reward function $r_{\mathbf{s}_{\mathbf{t} + }}(\mathbf{s}_{\mathbf{t}},\mathbf{a}_{\mathbf{t}}) = \frac{p(\mathbf{s}_{\mathbf{t} + 1} = \mathbf{s}_{\mathbf{t} + }|\mathbf{s}_{\mathbf{t}},\mathbf{a}_{\mathbf{t}})}{p(\mathbf{s}_{\mathbf{t} + })}$ . We can then employ standard convergence proofs for Q-learning to guarantee convergence (Jaakkola et al., 1994, Theorem 1).
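As a sanity check, the assignment iteration in Lemma 2 can be simulated in a small tabular MDP. The sketch below is ours, not the paper's code: it assumes a randomly generated MDP, a uniform marginal $p(\mathbf{s}_{t+})$, and hypothetical variable names. It iterates Eq. 12 on a tabular ratio and compares the fixed point against the discounted future-state density computed analytically.

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_a, gamma = 4, 2, 0.8

# Random dynamics P[s, a, s'] and a fixed stochastic policy pi[s, a].
P = rng.random((n_s, n_a, n_s)); P /= P.sum(-1, keepdims=True)
pi = rng.random((n_s, n_a)); pi /= pi.sum(-1, keepdims=True)
p_marg = np.full(n_s, 1.0 / n_s)       # marginal p(s_{t+}), assumed uniform

# Goal-conditioned "reward" r[s, a, g] = p(s_{t+1} = g | s, a) / p(g).
r = P / p_marg                         # broadcasts over the goal axis

# Iterate the assignment equation (Eq. 12) on the tabular ratio Q[s, a, g].
Q = np.zeros((n_s, n_a, n_s))
for _ in range(500):
    V = np.einsum('xb,xbg->xg', pi, Q)             # E_{a'~pi}[Q(s', a', g)]
    Q = (1 - gamma) * r + gamma * np.einsum('sax,xg->sag', P, V)

# Analytic discounted future-state density:
#   p_+(g | s, a) = (1-gamma) [P (I - gamma P_pi)^{-1}]_{(s,a), g},
# where P_pi[s, s'] = sum_a pi(a|s) P[s, a, s'].
P_pi = np.einsum('sa,sax->sx', pi, P)
occ = (1 - gamma) * P.reshape(n_s * n_a, n_s) @ np.linalg.inv(
    np.eye(n_s) - gamma * P_pi)
ratio_true = (occ / p_marg).reshape(n_s, n_a, n_s)

print(np.max(np.abs(Q - ratio_true)))  # ~0: the iteration reached p_+ / p
```

Since the iteration contracts at rate $\gamma$, a few hundred sweeps suffice at $\gamma = 0.8$.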
+
+# D.2 OFF-POLICY C-LEARNING CONVERGES
+
+In this section we show that off-policy C-learning converges to the Bayes-optimal classifier, and thus recovers the true future state density function. The main idea is to show that the updates for off-policy C-learning have the same effect as the assignment equation above (Eq. 12), without relying on knowledge of the dynamics function or marginal density function.
+
Lemma 3. Off-policy $C$ -learning results in the same updates to the classifier as the assignment equations for the $C$ -learning Bellman equation (Eq. 12).
+
+Proof. We start by viewing the off-policy C-learning loss (Eq. 9) as a probabilistic assignment equation. A given triplet $(\mathbf{s_t},\mathbf{a_t},\mathbf{s}_{t+})$ can appear in Eq. 9 in two ways:
+
+1. We sample a "positive" $\mathbf{s}_{\mathbf{t}+} = \mathbf{s}_{\mathbf{t}+1}$ , which happens with probability $(1 - \gamma)p(\mathbf{s}_{\mathbf{t}+1} = \mathbf{s}_{\mathbf{t}+} \mid \mathbf{s}_{\mathbf{t}}, \mathbf{a}_{\mathbf{t}})$ , and results in the label $y = 1$ .
+2. We sample a "negative" $\mathbf{s}_{\mathbf{t}+}$ , which happens with probability $(1 + \gamma w)p(\mathbf{s}_{\mathbf{t}+})$ and results in the label $y = \frac{\gamma w}{\gamma w + 1}$ .
+
+Thus, conditioned on the given triplet containing $\mathbf{s}_{\mathbf{t} + }$ , the expected target value $y$ is
+
+$$
\begin{aligned}
\mathbb{E}[y \mid \mathbf{s}_t, \mathbf{a}_t, \mathbf{s}_{t+}] &= \frac{(1 - \gamma)\, p(\mathbf{s}_{t+1} = \mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t) \cdot 1 + \mathbb{E}\left[(1 + \gamma w)\, p(\mathbf{s}_{t+}) \cdot \frac{\gamma w}{\gamma w + 1}\right]}{(1 - \gamma)\, p(\mathbf{s}_{t+1} = \mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t) + \mathbb{E}\left[(1 + \gamma w)\, p(\mathbf{s}_{t+})\right]} \\
&= \frac{(1 - \gamma)\frac{p(\mathbf{s}_{t+1} = \mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t)}{p(\mathbf{s}_{t+})} + \gamma\, \mathbb{E}[w]}{(1 - \gamma)\frac{p(\mathbf{s}_{t+1} = \mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t)}{p(\mathbf{s}_{t+})} + \gamma\, \mathbb{E}[w] + 1}. \tag{13}
\end{aligned}
+$$
+
+Note that $w$ is a random variable because it depends on $\mathbf{s}_{\mathbf{t} + 1}$ and $\mathbf{a}_{\mathbf{t} + 1}$ , so we take its expectation above. We can write the assignment equation for $C$ as
+
+$$
+C _ {\theta} ^ {\pi} (F = 1 \mid \mathbf {s} _ {t}, \mathbf {a} _ {t}, \mathbf {s} _ {t +}) \leftarrow \mathbb {E} [ y \mid \mathbf {s} _ {t}, \mathbf {a} _ {t}, \mathbf {s} _ {t +} ].
+$$
+
+Noting that the function $\frac{C}{1 - C}$ is strictly monotone increasing, the assignment equation is equivalent to the following assignment for the ratio $\frac{C_{\theta}^{\pi}(F = 1|\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t + })}{C_{\theta}^{\pi}(F = 0|\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t + })}$ :
+
+$$
+\frac {C _ {\theta} ^ {\pi} (F = 1 \mid \mathbf {s} _ {\mathbf {t}} , \mathbf {a} _ {\mathbf {t}} , \mathbf {s} _ {\mathbf {t} +})}{C _ {\theta} ^ {\pi} (F = 0 \mid \mathbf {s} _ {\mathbf {t}} , \mathbf {a} _ {\mathbf {t}} , \mathbf {s} _ {\mathbf {t} +})} \gets \frac {\mathbb {E} [ y \mid \mathbf {s} _ {\mathbf {t}} , \mathbf {a} _ {\mathbf {t}} , \mathbf {s} _ {\mathbf {t} +} ]}{1 - \mathbb {E} [ y \mid \mathbf {s} _ {\mathbf {t}} , \mathbf {a} _ {\mathbf {t}} , \mathbf {s} _ {\mathbf {t} +} ]} = (1 - \gamma) \frac {p (\mathbf {s} _ {\mathbf {t} + 1} = \mathbf {s} _ {\mathbf {t} +} \mid \mathbf {s} _ {\mathbf {t}} , \mathbf {a} _ {\mathbf {t}})}{p (\mathbf {s} _ {\mathbf {t} +})} + \gamma \mathbb {E} [ w ].
+$$
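The algebra behind this simplification is easy to check numerically. In the snippet below, all quantities ($\gamma$, the two densities, and the samples of $w$) are made-up scalars rather than values from any experiment: we compute $\mathbb{E}[y]$ from the unsimplified sampling form and confirm that $\mathbb{E}[y]/(1 - \mathbb{E}[y])$ equals $(1-\gamma)\frac{p(\mathbf{s}_{t+1} = \mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t)}{p(\mathbf{s}_{t+})} + \gamma\,\mathbb{E}[w]$.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 0.9
p1, pg = 0.3, 0.05               # p(s_{t+1}=s_{t+}|s_t,a_t) and p(s_{t+})
w = rng.random(10_000) * 5       # samples of the importance weight w
Ew = w.mean()

# Unsimplified expected label (numerator and denominator of Eq. 13):
num = (1 - gamma) * p1 * 1.0 + np.mean(
    (1 + gamma * w) * pg * (gamma * w) / (gamma * w + 1))
den = (1 - gamma) * p1 + np.mean((1 + gamma * w) * pg)
Ey = num / den

# The claimed assignment target for the ratio C/(1-C):
target = (1 - gamma) * p1 / pg + gamma * Ew

print(Ey / (1 - Ey), target)     # the two values agree
```

The per-sample identity $(1 + \gamma w)\frac{\gamma w}{\gamma w + 1} = \gamma w$ is what makes the two expressions match exactly.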
+
+The equality follows from substituting Eq. 13 and then simplifying. Substituting our definition of $w$ , we observe that the assignment equation for off-policy C-learning is exactly the same as the assignment equation for the C-learning Bellman equation (Eq. 11):
+
+$$
\frac{C_{\theta}^{\pi}(F = 1 \mid \mathbf{s}_t, \mathbf{a}_t, \mathbf{s}_{t+})}{C_{\theta}^{\pi}(F = 0 \mid \mathbf{s}_t, \mathbf{a}_t, \mathbf{s}_{t+})} \leftarrow (1 - \gamma)\frac{p(\mathbf{s}_{t+1} = \mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t)}{p(\mathbf{s}_{t+})} + \gamma\, \mathbb{E}_{\substack{p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t),\\ \pi(\mathbf{a}_{t+1} \mid \mathbf{s}_{t+1}, \mathbf{s}_{t+})}}\left[\frac{C_{\theta}^{\pi}(F = 1 \mid \mathbf{s}_{t+1}, \mathbf{a}_{t+1}, \mathbf{s}_{t+})}{C_{\theta}^{\pi}(F = 0 \mid \mathbf{s}_{t+1}, \mathbf{a}_{t+1}, \mathbf{s}_{t+})}\right].
+$$
+
+
+
Since the off-policy C-learning assignments are equivalent to the assignments of the C-learning Bellman equation, any convergence guarantee that applies to the latter must apply to the former. Thus, Lemma 2 tells us that off-policy C-learning must also converge to the Bayes-optimal classifier. We state this final result formally:
+
+Corollary 3.1. If we use a tabular representation for the classifier, then off-policy $C$ -learning converges to the Bayes-optimal classifier. In this case, the predicted future state density (Eq. 2) also converges to the true future state density.
+
+# D.3 GOAL-CONDITIONED C-LEARNING CONVERGES
+
+In this section we prove that the version of policy improvement done by C-learning is guaranteed to improve performance. We start by noting a Bellman optimality equation for goal-conditioned C-learning, which indicates whether a goal-conditioned policy is optimal:
+
Lemma 4 (C-learning Bellman Optimality Equation). Let the dynamics function $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$ and marginal distribution $p(\mathbf{s}_{t+})$ be given. If a classifier $C_{\theta}$ is the Bayes-optimal classifier, then it satisfies the following identity for all states $\mathbf{s}_t$, actions $\mathbf{a}_t$, and goals $g = \mathbf{s}_{t+}$:
+
+$$
+\frac {C _ {\theta} ^ {\pi} (F = 1 \mid \mathbf {s} _ {\mathbf {t}} , \mathbf {a} _ {\mathbf {t}} , \mathbf {s} _ {\mathbf {t} +})}{C _ {\theta} ^ {\pi} (F = 0 \mid \mathbf {s} _ {\mathbf {t}} , \mathbf {a} _ {\mathbf {t}} , \mathbf {s} _ {\mathbf {t} +})} = (1 - \gamma) \frac {p \left(\mathbf {s} _ {\mathbf {t} + 1} = \mathbf {s} _ {\mathbf {t} +} \mid \mathbf {s} _ {\mathbf {t}} , \mathbf {a} _ {\mathbf {t}}\right)}{p \left(\mathbf {s} _ {\mathbf {t} +}\right)} + \gamma \mathbb {E} _ {p \left(\mathbf {s} _ {\mathbf {t} + 1} \mid \mathbf {s} _ {\mathbf {t}}, \mathbf {a} _ {\mathbf {t}}\right)} \left[ \max _ {\mathbf {a} _ {\mathbf {t} + 1}} \frac {C _ {\theta} ^ {\pi} (F = 1 \mid \mathbf {s} _ {\mathbf {t} + 1} , \mathbf {a} _ {\mathbf {t} + 1} , \mathbf {s} _ {\mathbf {t} +})}{C _ {\theta} ^ {\pi} (F = 0 \mid \mathbf {s} _ {\mathbf {t} + 1} , \mathbf {a} _ {\mathbf {t} + 1} , \mathbf {s} _ {\mathbf {t} +})} \right] \tag {14}
+$$
+
+We now apply the standard policy improvement theorem to C-learning.
+
+Lemma 5. If the estimate of the future state density is accurate, then updating the policy according to Eq. 5.2 guarantees improvement at each step.
+
+Proof. We use $\pi$ to denote the current policy and $\pi'$ to denote the policy that acts greedily w.r.t. the current density function:
+
+$$
\pi^{\prime}\left(\mathbf{a}_t \mid \mathbf{s}_t, \mathbf{s}_{t+}\right) = \mathbb{1}\left(\mathbf{a}_t = \underset{\mathbf{a}}{\arg\max}\; p^{\pi}\left(\mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}\right)\right)
+$$
+
+The proof is quite similar to the standard policy improvement proof for Q-learning.
+
$$
\begin{aligned}
p^{\pi}(\mathbf{s}_{t+} \mid \mathbf{s}_t) &= \mathbb{E}_{\pi(\mathbf{a}_t \mid \mathbf{s}_t, \mathbf{s}_{t+})}\left[p^{\pi}(\mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t)\right] \\
&= \mathbb{E}_{\pi(\mathbf{a}_t \mid \mathbf{s}_t, \mathbf{s}_{t+})}\Big[(1 - \gamma)\, p(\mathbf{s}_{t+1} = \mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t) + \gamma\, \mathbb{E}_{\substack{p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t),\\ \pi(\mathbf{a}_{t+1} \mid \mathbf{s}_{t+1}, \mathbf{s}_{t+})}}\big[p^{\pi}(\mathbf{s}_{t+} \mid \mathbf{s}_{t+1}, \mathbf{a}_{t+1})\big]\Big] \\
&\leq \mathbb{E}_{\pi^{\prime}(\mathbf{a}_t \mid \mathbf{s}_t, \mathbf{s}_{t+})}\Big[(1 - \gamma)\, p(\mathbf{s}_{t+1} = \mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t) + \gamma\, \mathbb{E}_{\substack{p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t),\\ \pi(\mathbf{a}_{t+1} \mid \mathbf{s}_{t+1}, \mathbf{s}_{t+})}}\big[p^{\pi}(\mathbf{s}_{t+} \mid \mathbf{s}_{t+1}, \mathbf{a}_{t+1})\big]\Big] \\
&= \mathbb{E}_{\pi^{\prime}(\mathbf{a}_t \mid \mathbf{s}_t, \mathbf{s}_{t+})}\Big[(1 - \gamma)\, p(\mathbf{s}_{t+1} = \mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t) \\
&\qquad + \gamma\, \mathbb{E}_{\substack{p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t),\\ \pi(\mathbf{a}_{t+1} \mid \mathbf{s}_{t+1}, \mathbf{s}_{t+})}}\big[(1 - \gamma)\, p(\mathbf{s}_{t+2} = \mathbf{s}_{t+} \mid \mathbf{s}_{t+1}, \mathbf{a}_{t+1}) + \gamma\, p^{\pi}(\mathbf{s}_{t+} \mid \mathbf{s}_{t+2}, \mathbf{a}_{t+2})\big]\Big] \\
&\leq \mathbb{E}_{\pi^{\prime}(\mathbf{a}_t \mid \mathbf{s}_t, \mathbf{s}_{t+})}\Big[(1 - \gamma)\, p(\mathbf{s}_{t+1} = \mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t) \\
&\qquad + \gamma\, \mathbb{E}_{\substack{p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t),\\ \pi^{\prime}(\mathbf{a}_{t+1} \mid \mathbf{s}_{t+1}, \mathbf{s}_{t+})}}\big[(1 - \gamma)\, p(\mathbf{s}_{t+2} = \mathbf{s}_{t+} \mid \mathbf{s}_{t+1}, \mathbf{a}_{t+1}) + \gamma\, p^{\pi}(\mathbf{s}_{t+} \mid \mathbf{s}_{t+2}, \mathbf{a}_{t+2})\big]\Big] \\
&\;\;\vdots \\
&\leq p^{\pi^{\prime}}(\mathbf{s}_{t+} \mid \mathbf{s}_t).
\end{aligned}
$$
+
+Taken together with the convergence of off-policy C-learning, this proof guarantees that goal-conditioned C-learning converges to the optimal goal-reaching policy (w.r.t. the functional in Eq. 5.2) in the tabular setting.
+
+# E MIXING TD C-LEARNING WITH MC C-LEARNING
+
Recall that the main challenge in constructing an off-policy procedure for learning the classifier was getting samples from the future state distribution of a new policy. TD C-learning (Alg. 3) uses importance weighting to estimate expectations under this new distribution, where the importance weights are computed using the learned classifier. However, this approach can result in high variance, especially when the new policy has a future state distribution that is very different from the background distribution. In this section we describe how to decrease the variance of this importance weighting estimator at the cost of increasing bias.
+
The main idea is to combine TD C-learning with MC C-learning. We will modify off-policy C-learning to also use samples from the empirical future state distribution $\hat{p}(\mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t)$ of previous policies as positive examples. These samples will be drawn from trajectories in the replay buffer, in the same way that samples were generated for MC C-learning. We will use a mix of these on-policy samples (which are biased because they come from a different policy) and importance-weighted samples (which may have higher variance). Weighting the TD C-learning estimator by $\lambda$ and the MC C-learning estimator by $(1 - \lambda)$, we get the following objective:
+
+$$
\begin{aligned}
\lambda\, &\mathbb{E}_{p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t),\, p(\mathbf{s}_{t+})}\big[(1 - \gamma)\log C(F = 1 \mid \mathbf{s}_t, \mathbf{a}_t, \mathbf{s}_{t+1}) + \log C(F = 0 \mid \mathbf{s}_t, \mathbf{a}_t, \mathbf{s}_{t+}) \\
&\qquad + \gamma w \log C(F = 1 \mid \mathbf{s}_t, \mathbf{a}_t, \mathbf{s}_{t+})\big] \\
+ (1 - \lambda)\, &\mathbb{E}_{\hat{p}(\hat{\mathbf{s}}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t),\, p(\mathbf{s}_{t+})}\big[\log C(F = 1 \mid \mathbf{s}_t, \mathbf{a}_t, \hat{\mathbf{s}}_{t+}) + \log C(F = 0 \mid \mathbf{s}_t, \mathbf{a}_t, \mathbf{s}_{t+})\big]
\end{aligned}
+$$
+
+This method is surprisingly easy to implement. Given a batch of $B$ transitions $(\mathbf{s}_{\mathbf{t}},\mathbf{a}_{\mathbf{t}})$ , we label $\frac{\lambda}{2} B$ with the next state $\mathbf{s}_{\mathbf{t} + 1}$ , $\frac{1}{2} B$ with a random state $\mathbf{s}_{\mathbf{t} + } \sim p(\mathbf{s}_{\mathbf{t} + })$ , and $\frac{1 - \lambda}{2} B$ with a state sampled from the empirical future state distribution $\hat{p} (\mathbf{s}_{\mathbf{t} + }|\mathbf{s}_{\mathbf{t}},\mathbf{a}_{\mathbf{t}})$ . To make sure that each term in the loss above receives the correct weight, we scale each of the terms by the inverse sampling probability:
+
(Next states): $\frac{2}{\lambda B}\, \lambda (1 - \gamma)\log C(F = 1 \mid \mathbf{s}_t, \mathbf{a}_t, \mathbf{s}_{t+1})$
+
+(Random states): $\frac{2}{B}\left((\lambda + 1 - \lambda) \log C(F = 0 \mid \mathbf{s}_{t}, \mathbf{a}_{t}, \mathbf{s}_{t+}) + \lambda \gamma w \log C(F = 1 \mid \mathbf{s}_{t}, \mathbf{a}_{t}, \mathbf{s}_{t+})\right)$
+
(Future states): $\frac{2}{(1 - \lambda)B}\, (1 - \lambda)\log C(F = 1 \mid \mathbf{s}_t, \mathbf{a}_t, \hat{\mathbf{s}}_{t+})$
+
+Without loss of generality, we scale each term by $\frac{B}{2}$ . Since each of these terms is a cross entropy loss, we can simply implement this loss as a weighted cross entropy loss, where the weights and labels are given in the table below.
+
| | Fraction of batch | Label | Weight |
| --- | --- | --- | --- |
| Next states | $\lambda / 2$ | $1$ | $1 - \gamma$ |
| Future states | $(1 - \lambda) / 2$ | $1$ | $1$ |
| Random states | $1 / 2$ | $\frac{\lambda\gamma w}{1 + \lambda\gamma w}$ | $1 + \lambda\gamma w$ |
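A minimal sketch of this weighted cross-entropy loss follows. The partition sizes match the batch-labeling scheme described above, but the function names, the fake classifier outputs, and the sampled importance weights are illustrative assumptions rather than the released implementation.

```python
import numpy as np

def mixed_c_learning_targets(batch_size, lam, gamma, w_random):
    """Assign labels and weights for one batch per the table above."""
    n_next = int(lam / 2 * batch_size)            # relabeled with s_{t+1}
    n_future = int((1 - lam) / 2 * batch_size)    # relabeled with s_{t+} ~ p_hat
    n_random = batch_size - n_next - n_future     # relabeled with random states
    assert len(w_random) == n_random

    labels = np.concatenate([
        np.ones(n_next),                                        # label 1
        np.ones(n_future),                                      # label 1
        lam * gamma * w_random / (1 + lam * gamma * w_random),  # soft label
    ])
    weights = np.concatenate([
        np.full(n_next, 1 - gamma),
        np.ones(n_future),
        1 + lam * gamma * w_random,
    ])
    return labels, weights

def weighted_bce(p, labels, weights):
    # Weighted cross-entropy on classifier probabilities p = C(F=1 | ...).
    return -np.mean(weights * (labels * np.log(p) + (1 - labels) * np.log(1 - p)))

rng = np.random.default_rng(3)
B, lam, gamma = 256, 0.6, 0.99
n_random = B - int(lam / 2 * B) - int((1 - lam) / 2 * B)
w_random = rng.random(n_random) * 2          # stand-in importance weights
labels, weights = mixed_c_learning_targets(B, lam, gamma, w_random)
loss = weighted_bce(rng.uniform(0.1, 0.9, B), labels, weights)
print(labels.shape, loss)
```

In a real training loop the same per-example labels and weights would simply be fed to a standard weighted cross-entropy loss.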
+On many tasks, we observed that this approach performed no differently than TD C-learning. However, we found this strategy to be crucial for learning some of the Sawyer manipulation tasks. In our experiments we used $\lambda = 0.6$ for the Sawyer Push and Sawyer Spinner tasks, and used $\lambda = 1$ (i.e., pure TD C-learning) for all other tasks.
+
+# F ADDITIONAL EXPERIMENTS
+
Comparing C-learning and Q-learning for Future State Density Estimation Fig. 5 shows the results of our comparison of C-learning and Q-learning on the "continuous gridworld" environment, in both the on-policy and off-policy settings. In both settings, off-policy C-learning achieves lower error than Q-learning. As expected, Monte Carlo C-learning performs well in the on-policy setting, but poorly in the off-policy setting, motivating the use of off-policy C-learning.
+
Additional Results on Predicting the Goal Sampling Ratio To further test Hypothesis 2, we repeated the experiment from Fig. 1b across a range of values for $\gamma$. As shown in Fig. 6, our hypothesis accurately predicts the optimal goal sampling ratio across a wide range of values for $\gamma$.
+
(a) On Policy  (b) Off Policy

Figure 5: We use C-learning and Q-learning to predict the future state distribution. (Left) In the on-policy setting, both the Monte Carlo and TD versions of C-learning achieve significantly lower error than Q-learning. (Right) In the off-policy setting, the TD version of C-learning achieves lower error than Q-learning, while Monte Carlo C-learning performs poorly, as expected.
+
+
+
+
+
+
Figure 6: The performance of Q-learning (blue line) is sensitive to the relabeling ratio. Our analysis accurately predicts that the optimal relabeling ratio is approximately $\lambda = \frac{1}{2}(1 + \gamma)$. Our method, C-learning, does not require tuning this ratio, and outperforms Q-learning even when the relabeling ratio for Q-learning is optimally chosen.
+
+# G EXPERIMENTAL DETAILS
+
+# G.1 "CONTINUOUS GRIDWORLD" EXPERIMENTS
+
+The "Continuous Gridworld" Environment Our first set of experiments aimed to compare the predictions of Q-learning and C-learning to the true future state density. We carefully chose the environment to conduct this experiment. We want it to have stochastic dynamics and a continuous state space, so that the true Q function for the indicator reward is 0. On the other hand, to evaluate our hypotheses, we want to be able to analytically compute the true future state density. Thus, we use a modified $5 \times 5$ gridworld environment where the agent observes a noisy version of the current state. Precisely, when the agent is in position $(i,j)$ , the agent observes $(i + \epsilon_i,j + \epsilon_j)$ where $\epsilon_i,\epsilon_j\sim \mathrm{Unif}[-0.5,0.5]$ . Note that the observation uniquely identifies the agent's position, so there is no partial observability. We can analytically compute the exact future state density function by first computing the future state density of the underlying gridworld and noting that the density is uniform within each cell (see Appendix B). We generated a tabular policy by sampling from a Dirichlet(1) distribution, and sampled 100 trajectories of length 100 from this policy.
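A minimal sketch of this environment follows. The paper only specifies the grid size and the observation noise, so the class name, the action encoding, and the boundary handling here are our own illustrative choices.

```python
import numpy as np

class NoisyGridworld:
    """5x5 gridworld whose observations add Unif[-0.5, 0.5) noise to the
    integer cell position, as described above (a sketch, not the paper's code)."""

    def __init__(self, n=5, seed=0):
        self.n = n
        self.rng = np.random.default_rng(seed)
        self.pos = np.array([0, 0])

    def _observe(self):
        return self.pos + self.rng.uniform(-0.5, 0.5, size=2)

    def step(self, action):
        # Actions 0-3 move up/down/left/right; moves off the grid are ignored.
        delta = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        self.pos = np.clip(self.pos + delta, 0, self.n - 1)
        return self._observe()

env = NoisyGridworld()
obs = env.step(3)
# The observation uniquely identifies the cell: rounding recovers the state.
print(np.floor(obs + 0.5).astype(int), env.pos)
```

Because the noise has width exactly one cell, `floor(obs + 0.5)` always recovers the underlying position, which is why there is no partial observability.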
+
On-Policy Experiment We compare C-learning (Algorithms 1 and 2) to Q-learning in the on-policy setting, where we aim to estimate the future state density function of the same policy that collected the dataset. This experiment aims to answer whether C-learning and Q-learning solve the future state density estimation problem (Def. 2). Each of the algorithms used a 2-layer neural network with a hidden layer of size 32, optimized for 1000 iterations using the Adam optimizer with a learning rate of 3e-3 and a batch size of 256. C-learning and Q-learning with hindsight relabeling differ only in their loss functions, and in the fact that C-learning acquires a classifier while Q-learning predicts a continuous Q-value. After training with each of the algorithms, we extracted the estimated future state distribution $f_{\theta}^{\pi}(\mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t)$ using Eq. 2. We also normalized the predicted distributions to sum to 1 (i.e., $\sum_{\mathbf{s}_{t+}} f_{\theta}^{\pi}(\mathbf{s}_{t+} \mid \mathbf{s}_t, \mathbf{a}_t) = 1$ for all $\mathbf{s}_t, \mathbf{a}_t$). For evaluation, we computed the KL divergence with the true future state distribution. We show results in Fig. 5a.
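The normalization and evaluation step can be sketched in a few lines. The table shapes below are made up, and since the direction of the KL divergence is not stated above, we compute $\mathrm{KL}(p_{\text{true}} \,\|\, f)$ as one plausible choice.

```python
import numpy as np

def kl_to_true(f_pred, p_true, eps=1e-12):
    """Normalize predicted densities over goals, then KL(p_true || f)."""
    f = f_pred / f_pred.sum(-1, keepdims=True)   # sum to 1 over s_{t+}
    return np.sum(p_true * (np.log(p_true + eps) - np.log(f + eps)), axis=-1)

rng = np.random.default_rng(5)
f_pred = rng.random((25, 4, 25))                 # fake (s_t, a_t, s_{t+}) table
p_true = rng.random((25, 4, 25)); p_true /= p_true.sum(-1, keepdims=True)
print(kl_to_true(f_pred, p_true).mean())         # average KL over (s_t, a_t)
```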
+
(a) Walker2d-v2  (b) Hopper-v2  (c) Ant-v2

Figure 7: Predicting the Future with C-Learning: We ran the experiment from Fig. 2 on three locomotion tasks.
+
+Off-Policy Experiment We use the same "continuous gridworld" environment to compare C-learning to Q-learning in the off-policy setting, where we want to estimate the future state distribution of a policy that is different from the behavior policy. Recall that the motivation for deriving the bootstrapping version of the classifier learning algorithm was precisely to handle this setting. To conduct this experiment, we generated two tabular policies by sampling from a Dirichlet(1) distribution, using one policy for data collection and the other for evaluation. We show results in Fig. 5b.
+
Experiment Testing Hypothesis 1 (Fig. 1a) For this experiment, we found that using a slightly larger network improved the results of both methods, so we increased the hidden layer size from 32 to 256.
+
Experiment Testing Hypothesis 2 (Fig. 1b, Fig. 6) To test this hypothesis, we used the Q-learning objective in Eq. 10, where $\lambda$ represents the probability of sampling a random state as $\mathbf{s}_{t+}$. Following prior work (Fu et al., 2019), we reweight the loss function instead of changing the actual sampling probabilities, thereby avoiding additional sampling error. We ran this experiment in the on-policy setting, using 5 random seeds for each value of $\lambda$. For each trial, we normalized the predicted density function to sum to 1 and then computed the KL divergence with the true future state density function.
+
+# G.2 PREDICTION EXPERIMENTS ON MUJOCO LOCOMOTION TASKS
+
+In this section we provide details for the prediction experiment discussed in Sec. 6 and plotted in Fig. 2 and Fig. 7.
+
We first describe how we ran the experiment computing the prediction error in Fig. 7a-7c. For these experiments, we used the "expert" data provided for each task in Fu et al. (2020). We split these trajectories into train $(80\%)$ and test $(20\%)$ splits. All methods (MC C-learning, TD C-learning, and the 1-step dynamics model) used the same architecture (one hidden layer of size 256 with ReLU activation), but C-learning output a binary prediction whereas the 1-step dynamics model output a vector with the same dimension as the observations. While we could have used larger networks and likely learned more accurate models, the aim of this experiment is to compare the relative performance of C-learning and the 1-step dynamics model when they use similarly-expressive networks. We trained MC C-learning and the 1-step dynamics model for 3e4 batches and trained the TD C-learning model for just 3e3 batches (we observed overfitting after that point). For TD C-learning, we clipped $w$ to lie in [0, 20]. To evaluate each model, we randomly sampled 1000 state-action pairs from the validation set and computed the average MSE with the empirical expected future state. We computed the empirical expected future state by taking a geometric-weighted average of the next 100 time steps. We computed the predictions from the 1-step model by unrolling the model for 100 time steps and taking the geometric-weighted average of these predictions. To compute predictions from the classifier, we evaluate the classifier at 1000 randomly-sampled state-action pairs (taken from the same dataset), convert these predictions to importance weights using Eq. 2, normalize the importance weights to sum to 1, and then take the weighted average of the randomly-sampled states.
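The two evaluation quantities described above, the geometric-weighted empirical future state and the importance-weighted prediction, can be sketched as follows. The function names and the fake trajectory are illustrative; in the actual evaluation the importance weights come from the classifier via Eq. 2 rather than being random inputs.

```python
import numpy as np

def empirical_future_state(traj_states, t, gamma=0.99, horizon=100):
    """Geometric-weighted average of the next `horizon` states after time t."""
    window = traj_states[t + 1 : t + 1 + horizon]
    w = gamma ** np.arange(len(window))
    return (w[:, None] * window).sum(0) / w.sum()

def classifier_prediction(candidate_states, importance_weights):
    """Importance-weighted average of randomly-sampled candidate states."""
    w = importance_weights / importance_weights.sum()  # normalize to sum to 1
    return (w[:, None] * candidate_states).sum(0)

rng = np.random.default_rng(4)
traj = rng.standard_normal((200, 3))       # fake 200-step, 3-dim trajectory
target = empirical_future_state(traj, t=10)
pred = classifier_prediction(traj[rng.integers(0, 200, size=1000)],
                             rng.random(1000))
print(np.mean((pred - target) ** 2))       # MSE between prediction and target
```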
+
To visualize the predictions from C-learning (Fig. 2b), we had to modify these environments to include the global $X$ coordinate of the agent in the observation. We learned policies for solving these modified tasks by running the SAC (Haarnoja et al., 2018) implementation in (Guadarrama et al., 2018) using the default hyperparameters. We created a dataset of 1e5 transitions for each environment. We chose the maximum horizon for each task based on when the agent ran out of the frame (200 steps for HalfCheetah-v2, 1000 steps for Hopper-v2, 1000 steps for Walker2d-v2, and 300 steps for Ant-v2). We then trained TD C-learning on each of these datasets. The hyperparameters were the same as before, except that we trained for 3e4 batches. We evaluated the classifier
+
Figure 8: Continuous Control Environments (Jaco Reach, Sawyer Reach, Pen, Sawyer Push, Finger, Sawyer Window, Sawyer Diagram, Hand)
+at 1000 randomly-sampled state-action pairs taken from a separate validation set, computed the importance weights as before, and then took the weighted average of the rendered images of each of these 1000 randomly-sampled states. We normalized the resulting images to be in [0, 255].
+
+# G.3 GOAL-CONDITIONED RL EXPERIMENTS
+
+In this section we describe the continuous control tasks used in our experiments (shown in Fig. 8), as well as the hyperparameters used in our implementation of goal-conditioned C-learning. One important detail is that we used a subset of the state coordinates as the goal in many tasks. For example, in the Jaco Reach task, we used just the joint angles, not the joint velocities, as the goal. When C-learning is only conditioned on a subset of the state coordinates, it estimates the marginal future state distribution over just those coordinates. Unless otherwise mentioned, environments used the default episode length. Code will be released.
+
+- Jaco Reach This task is based on the manipulation environment in Tassa et al. (2020). We sampled goals by resetting the environment twice, using the state after the first reset as the goal. For this task we used just the position, not the velocity, of the joint angles as the goal.
+- Finger This task is based on the Finger task in Tassa et al. (2018). We sampled goals by resetting the environment twice, using the state after the first reset as the goal. For this task we used the position of the spinner as the goal.
+- Hand This task is a modified version of the door-human-v0 task in Fu et al. (2020), but modified to remove the door, so just the hand remains. We sampled goals by taking 50 random actions, recording the current state as the goal, and then resetting the environment. This task used the entire state as the goal. Episodes had length 50.
+- Pen This task is based on the pen-human-v0 task in Fu et al. (2020). We sampled goals by randomly choosing a state from a dataset of human demonstrations provided in Fu et al. (2020). For this task we used the position (but not orientation) of the pen as the goal.
+- Sawyer Reach This task is based on the SawyerReachXYZEnv-v0 environment from Yu et al. (2020). We used the default goal sampling method, and used the end effector position as the goal. Episodes had length 50.
+- Sawyer Push This task is based on the SawyerReachPushPickPlaceEnv-v0 environment from Yu et al. (2020). We sampled goals uniformly from Unif([-0.1, 0.1], [0.5, 0.9], [0.015, 0.015]), using the same goal for the arm and puck. Episodes had length 150.
+- Sawyer Window This task is based on the SawyerWindowCloseEnv-v0 environment from Yu et al. (2020). We randomly sampled the initial state and goal state uniformly from the possible positions of the window inside the frame. The goal for the arm was set to be the same as the goal for the window. Episodes had length 150.
+
| | C-learning | TD3 + Next State Relabeling |
| --- | --- | --- |
| Jaco Reach | 0.9 | 0.99 |
| Finger | 0.99 | 0.99 |
| Hand | 0.3 | 0.8 |
| Pen | 0.7 | 0.99 |
| Sawyer Reach | 0.99 | 0.99 |
| Sawyer Push | 0.99 | 0.99 |
| Sawyer Window | 0.99 | 0.99 |
| Sawyer Printer | 0.99 | 0.99 |
+
+Table 1: Values of the discount $\gamma$ for C-learning and TD3 with Next State Relabeling
+
- Sawyer Printer This task is based on the SawyerPrinterEnv-v0 environment from Yu et al. (2020). We randomly sampled the initial state and goal state uniformly from the feasible positions of the printer. The goal for the arm was set to be the same as the goal for the printer. Episodes had length 150.
+
Our implementation of C-learning is based on the TD3 implementation in Guadarrama et al. (2018). We learned a stochastic policy, but (as expected) found that all methods converged to a deterministic policy. We list the hyperparameters below, noting that almost all were taken without modification from the SAC implementation in Guadarrama et al. (2018), which has much more reasonable hyperparameters:
+
+- Actor network: 2 fully-connected layers of size 256 with ReLU activations.
+- Critic network: 2 fully-connected layers of size 256 with ReLU activations.
+- Initial data collection steps: 1e4
+- Replay buffer size: 1e6
+- Target network updates: Polyak averaging at every iteration with $\tau = 0.005$
+- Batch size: 256. We tried tuning this but found no effect.
+- Optimizer: Adam with a learning rate of 3e-4 and default values for $\beta$
- Data collection: We collect one transition per gradient step.
+
+When training the policy (Eq. ??), we used the same goals that we sampled for the classifier (Eq. ??), with $50\%$ being the immediate next state and $50\%$ being random states. The most sensitive hyperparameter was the discount, $\gamma$ . We therefore tuned $\gamma$ for both C-learning and the most competitive baseline, TD3 with next state relabeling (Lin et al., 2019). The tuned values for $\gamma$ are shown in Table 1. We used $\gamma = 0.99$ for the other baselines.
+
In our implementation of C-learning, we found that values for the importance weight $w$ became quite large, likely because the classifier was making overconfident predictions. We therefore clipped the values of $w$ to be in [0, 2] for most tasks, though later ablation experiments found that the precise value of the upper bound had little effect. We used a range of [0, 50] for the finger and Sawyer tasks. Further analysis of the importance weight revealed that it effectively corresponds to the planning horizon; clipping the importance weight corresponds to ignoring the possibility of reaching goals beyond some horizon. We therefore suggest that $1 / (1 - \gamma)$ is a reasonable heuristic for choosing the maximum value of $w$.
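A sketch of this clipping heuristic, assuming the importance weight is computed from classifier probabilities as $w = C / (1 - C)$ (the function name and sample inputs are illustrative):

```python
import numpy as np

def clipped_importance_weight(c_pos, gamma=0.99):
    """Compute w = C/(1-C) and clip to [0, 1/(1-gamma)], the heuristic above."""
    w = c_pos / (1 - c_pos)
    return np.clip(w, 0.0, 1.0 / (1 - gamma))

c = np.array([0.1, 0.5, 0.999999])     # overconfident classifier outputs
print(clipped_importance_weight(c))    # very large ratios are capped near 100
```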
+
+# H ANALYTIC EXAMPLES OF Q-LEARNING FAILURES
+
In this section we describe a few simple MDPs where Q-learning with hindsight relabeling fails to learn the true future state distribution. While we describe both examples as discrete state MDPs, we assume that observations are continuous and noisy, as described in Appendix G.
+
+# H.1 EXAMPLE 1
+
The first example is a Markov process with $n$ states. Each state deterministically transitions to itself. We will examine one state, $s_1$ (all other states are symmetric). If the agent starts at $s_1$, it will remain at $s_1$ at every future time step, so $p(\mathbf{s}_{t+} \mid \mathbf{s}_t = s_1) = \mathbb{1}(\mathbf{s}_{t+} = s_1)$.
+
We use $\lambda$, defined above, to denote the relabeling fraction. The only transition including state $s_1$ is $(s_t = s_1, s_{t+1} = s_1)$. We aim to determine $Q = Q(\mathbf{s}_t = s_1, \mathbf{s}_{t+} = s_1)$. There are two possible ways we can observe the relabeled tuple $(\mathbf{s}_t = s_1, \mathbf{s}_{t+1} = s_1, \mathbf{s}_{t+} = s_1)$. First, with probability $1 - \lambda$ we sample the next state as $\mathbf{s}_{t+}$, and the TD target is 1. Second, with probability $\lambda$ we sample a random state with TD target $\gamma Q$, and with probability $1/n$ this random state is $s_1$. Thus, conditioned on $\mathbf{s}_{t+} = s_1$, the Bellman equation is
+
+$$
Q = \begin{cases} 1 & \text{w.p. } 1 - \lambda \\ \gamma Q & \text{w.p. } \frac{\lambda}{n} \end{cases}.
+$$
+
+Solving for $Q$ , we get $Q^{*} = \frac{1 - \gamma}{1 - \frac{\lambda\gamma}{n}}$ . This Q function is clearly different from the true future state distribution, $p(\mathbf{s}_{\mathbf{t} + }|\mathbf{s}_{\mathbf{t}} = s_1) = \mathbb{1}(\mathbf{s}_{\mathbf{t} + } = s_1)$ . First, since $\frac{\lambda}{n} < 1$ , the Q function is greater than one ( $Q^{*} > 1$ ), but a density function over a discrete set of states cannot be greater than one. Second, even if we normalized this Q function to be less than one, we observe that the Q function depends on the discount ( $\gamma$ ), the relabeling ratio ( $\lambda$ ), and the number of states ( $n$ ). However, the true future state distribution has no such dependence. Thus, we conclude that even scaling the optimal Q function by a constant (even one that depends on $\gamma$ ) would not yield the true future state distribution.
+
+# H.2 EXAMPLE 2
+
+Our second example is a stochastic Markov process with two states, $s_1$ and $s_2$ . The transition probabilities are
+
+$$
+p(\mathbf{s_{t+1}} \mid \mathbf{s_t} = s_1) = \left\{ \begin{array}{ll} \frac{1}{2} & \text{if } \mathbf{s_{t+1}} = s_1 \\ \frac{1}{2} & \text{if } \mathbf{s_{t+1}} = s_2 \end{array} \right., \qquad p(\mathbf{s_{t+1}} \mid \mathbf{s_t} = s_2) = \mathbb{1}(\mathbf{s_{t+1}} = s_2).
+$$
+
+We assume that we have observed each transition once. To simplify notation, we will use $Q_{ij} = Q(\mathbf{s_t} = s_i,\mathbf{s_{t + }} = s_j)$ . There are three ways to end up with $Q_{11}$ on the LHS of a Bellman equation: (1) the next state is $s_1$ and we sample the next state as the goal, (2) the next state is $s_1$ and we sample a random state as the goal, and (3) the next state is $s_2$ and we sample a random state as the goal.
+
+$$
+Q_{11} = \left\{ \begin{array}{ll} 1 & \text{w.p. } 1 - \lambda \\ \gamma Q_{11} & \text{w.p. } \frac{\lambda}{2} \\ \gamma Q_{21} & \text{w.p. } \frac{\lambda}{2} \end{array} \right..
+$$
+
+We can likewise observe $Q_{12}$ on the LHS in the same three ways: (1) the next state is $s_2$ and we sample the next state as the goal, (2) the next state is $s_1$ and we sample a random state as the goal, and (3) the next state is $s_2$ and we sample a random state as the goal.
+
+$$
+Q_{12} = \left\{ \begin{array}{ll} 1 & \text{w.p. } 1 - \lambda \\ \gamma Q_{12} & \text{w.p. } \frac{\lambda}{2} \\ \gamma Q_{22} & \text{w.p. } \frac{\lambda}{2} \end{array} \right.. \tag{15}
+$$
+
+We can only observe $Q_{21}$ on the LHS in one way: the next state is $s_2$ and we sample a random state as the goal:
+
+$$
+Q_{21} = \gamma Q_{21} \quad \text{w.p. } 1. \tag{16}
+$$
+
+We can observe $Q_{22}$ on the LHS in two ways: (1) the next state is $s_2$ and we sample the next state as the goal, (2) the next state is $s_2$ and we sample a random state as the goal:
+
+$$
+Q_{22} = \left\{ \begin{array}{ll} 1 & \text{w.p. } \frac{1 - \lambda}{1 - \lambda + \frac{\lambda}{2}} \\ \gamma Q_{22} & \text{w.p. } \frac{\frac{\lambda}{2}}{1 - \lambda + \frac{\lambda}{2}} \end{array} \right.. \tag{17}
+$$
+
+To solve these equations, we immediately note that the only solution to Eq. 16 is $Q_{21} = 0$ . Intuitively this makes sense, as there is zero probability of transition $s_2 \rightarrow s_1$ . We can also solve Eq. 17 directly:
+
+$$
+\left(1 - \frac{\gamma\lambda}{2 - \lambda}\right) Q_{22} = \frac{2 - 2\lambda}{2 - \lambda} \;\Rightarrow\; \frac{2 - \lambda - \gamma\lambda}{2 - \lambda} Q_{22} = \frac{2 - 2\lambda}{2 - \lambda} \;\Rightarrow\; Q_{22} = \frac{2 - 2\lambda}{2 - \lambda - \gamma\lambda}.
+$$
+
+Next, we solve Eq. 15. We start by rearranging terms and substituting our solution to $Q_{22}$ :
+
+$$
+\left(1 - \frac{\gamma\lambda}{2}\right) Q_{12} = 1 - \lambda + \frac{\gamma\lambda}{2} Q_{22} = 1 - \lambda + \frac{\gamma\lambda}{2} \cdot \frac{2 - 2\lambda}{2 - \lambda - \gamma\lambda} = (1 - \lambda) \frac{2 - \lambda - \gamma\lambda + \gamma\lambda}{2 - \lambda - \gamma\lambda} = (1 - \lambda) \frac{2 - \lambda}{2 - \lambda - \gamma\lambda}.
+$$
+
+Rearranging terms, we obtain the following:
+
+$$
+Q_{12} = \frac{2 (1 - \lambda)(2 - \lambda)}{(2 - \gamma\lambda)(2 - \lambda - \gamma\lambda)}.
+$$
+
+Finally, we solve for $Q_{11}$ , recalling that our solution $Q_{21} = 0$ :
+
+$$
+\left(1 - \frac{\gamma\lambda}{2}\right) Q_{11} = 1 - \lambda + \frac{\gamma\lambda}{2} Q_{21} = 1 - \lambda \;\Rightarrow\; Q_{11} = \frac{2 - 2\lambda}{2 - \gamma\lambda}.
+$$
+
+To summarize, Q-learning with hindsight relabeling obtains the following Q values:
+
+$$
+Q_{11} = \frac{2(1 - \lambda)}{2 - \gamma\lambda}, \quad Q_{12} = \frac{2(1 - \lambda)(2 - \lambda)}{(2 - \gamma\lambda)(2 - \lambda - \gamma\lambda)}, \quad Q_{21} = 0, \quad Q_{22} = \frac{2 - 2\lambda}{2 - \lambda - \gamma\lambda}.
+$$
+
+For comparison, we compute $p(\mathbf{s}_{\mathbf{t} + } = s_1 \mid s_1)$ , the probability that the agent remains in state $s_1$ in the future. The probability that the agent is in $s_1$ at time step $\Delta$ is $1 / 2^{\Delta}$ , so the total, discounted probability is
+
+$$
+\begin{array}{l} p \left(\mathbf {s} _ {\mathbf {t} +} = s _ {1} \mid s _ {1}\right) = (1 - \gamma) \left(1 + \frac {1}{2} \gamma + \frac {1}{4} \gamma^ {2} + \dots\right) \\ = (1 - \gamma) \sum_ {\Delta = 0} ^ {\infty} \left(\frac {\gamma}{2}\right) ^ {\Delta} \\ = \frac {1 - \gamma}{1 - \gamma / 2} = \frac {2 - 2 \gamma}{2 - \gamma}. \\ \end{array}
+$$
+
+Thus, in general, the Q value $Q_{11}$ does not equal the future state distribution. One might imagine that Q-learning acquires Q values that are only accurate up to scale, so we should consider the normalized prediction:
+
+$$
+\frac{Q_{11}}{Q_{11} + Q_{12}} = \frac{1}{1 + \frac{2 - \lambda}{2 - \lambda - \gamma\lambda}} = \frac{2 - \lambda - \gamma\lambda}{4 - 2\lambda - \gamma\lambda}.
+$$
+
+However, this normalized Q-learning prediction is also different from the true future state distribution.
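+The four Bellman equations above can also be iterated jointly to a fixed point as a numerical check (an editorial sketch with hypothetical $\lambda = 0.5$, $\gamma = 0.9$):
+
+```python
+# Jointly iterate the four relabeled Bellman equations of Example 2.
+lam, gamma = 0.5, 0.9
+Q11 = Q12 = Q21 = Q22 = 0.0
+for _ in range(2000):
+    Q11 = (1 - lam) + (lam / 2) * gamma * Q11 + (lam / 2) * gamma * Q21
+    Q12 = (1 - lam) + (lam / 2) * gamma * Q12 + (lam / 2) * gamma * Q22
+    Q21 = gamma * Q21                            # Eq. 16: only solution is 0
+    p1 = (1 - lam) / (1 - lam + lam / 2)         # Eq. 17, conditional probabilities
+    p2 = (lam / 2) / (1 - lam + lam / 2)
+    Q22 = p1 + p2 * gamma * Q22
+
+assert abs(Q21) < 1e-9                           # no path from s2 to s1
+assert abs(Q11 - (2 - 2 * lam) / (2 - gamma * lam)) < 1e-9
+
+# The normalized prediction differs from the true discounted distribution.
+p_true = (2 - 2 * gamma) / (2 - gamma)
+assert abs(Q11 / (Q11 + Q12) - p_true) > 1e-3
+```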
+
+# I PREDICTIONS FROM C-LEARNING
+
+Figures 9 and 10 visualize additional predictions from the C-learning model in Sec. 6. In each image, the top half shows the current state and the bottom half shows the predicted expected future state. Animations of these results can be found on the project website.
+
+
+
+
+
+
+(a) HalfCheetah-v2, $\gamma = 0.9$
+
+
+
+
+(b) HalfCheetah-v2, $\gamma = 0.9$
+
+
+
+
+(c) Ant-v2, $\gamma = 0.5$
+
+
+
+
+(d) Ant-v2, $\gamma = 0.9$
+
+
+(e) Ant-v2, $\gamma = 0.99$
+Figure 9: Predictions from C-learning
+
+
+
+
+
+
+(a) Walker2d-v2, $\gamma = 0.5$
+
+
+
+
+(b) Walker2d-v2, $\gamma = 0.9$
+
+
+
+
+(c) Walker2d-v2, $\gamma = 0.99$
+
+
+
+
+(d) Hopper-v2, $\gamma = 0.5$
+
+
+(e) Hopper-v2, $\gamma = 0.9$
+Figure 10: More Predictions from C-learning
\ No newline at end of file
diff --git a/clearninglearningtoachievegoalsviarecursiveclassification/images.zip b/clearninglearningtoachievegoalsviarecursiveclassification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e463f974728731e5c15bf2df797cb3a6e6bdbe51
--- /dev/null
+++ b/clearninglearningtoachievegoalsviarecursiveclassification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f10f97f89086d3f34800f5b1c7555cacc6843e64de315e982b9e422c38ef98c
+size 1033835
diff --git a/clearninglearningtoachievegoalsviarecursiveclassification/layout.json b/clearninglearningtoachievegoalsviarecursiveclassification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ff3c0f660a69850d2168d112f0bc8145673e352f
--- /dev/null
+++ b/clearninglearningtoachievegoalsviarecursiveclassification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3d4970265d638dc65ebe05b5a4b5209ec8f88ff99962620081060c7c7418508f
+size 1047535
diff --git a/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/8e076d66-0bd1-4e26-a394-e096a22fe0af_content_list.json b/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/8e076d66-0bd1-4e26-a394-e096a22fe0af_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f0f4845d84e36c2489cd90810a37b758a109d6fa
--- /dev/null
+++ b/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/8e076d66-0bd1-4e26-a394-e096a22fe0af_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04922cade89d2cdc155b3f88eca3a80bca9b323340f26c5cd44b75549e9b9e71
+size 93008
diff --git a/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/8e076d66-0bd1-4e26-a394-e096a22fe0af_model.json b/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/8e076d66-0bd1-4e26-a394-e096a22fe0af_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..581b7a5ebb64d229dd64949979b099d6bf89bb95
--- /dev/null
+++ b/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/8e076d66-0bd1-4e26-a394-e096a22fe0af_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f81e310214cab6dca37b1164d579ca76c0ffad18d0f4e2f24928f3d0f9a9890
+size 110087
diff --git a/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/8e076d66-0bd1-4e26-a394-e096a22fe0af_origin.pdf b/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/8e076d66-0bd1-4e26-a394-e096a22fe0af_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..89e2aabb46f82640e6ea7f71ccc87bd807b1a8da
--- /dev/null
+++ b/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/8e076d66-0bd1-4e26-a394-e096a22fe0af_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:52773d8f55804b05f52ec22a3d61b4b0256009904d5de25c8e37ec341d4a3e7e
+size 3128621
diff --git a/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/full.md b/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..1bff5e3ab9a71b343e1a7e8395f3b9634cf39da5
--- /dev/null
+++ b/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/full.md
@@ -0,0 +1,366 @@
+# CLUSTERING-FRIENDLY REPRESENTATION LEARNING VIA INSTANCE DISCRIMINATION AND FEATURE DECORRELATION
+
+Yaling Tao, Kentaro Takagi & Kouta Nakata
+
+Corporate R&D Center, Toshiba Corporation
+
+1, Komukai Toshiba-cho, Saiwai-ku, Kawasaki, Kanagawa, Japan
+
+{yaling1.tao,kentaro1.takagi,kouta.nakata}@toshiba.co.jp
+
+# ABSTRACT
+
+Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. Representation learning often plays an important role in the effectiveness of deep clustering, and thus can be a principal cause of performance degradation. In this paper, we propose a clustering-friendly representation learning method using instance discrimination and feature decorrelation. Our deep-learning-based representation learning method is motivated by the properties of classical spectral clustering. Instance discrimination learns similarities among data and feature decorrelation removes redundant correlation among features. We utilize an instance discrimination method in which learning individual instance classes leads to learning similarity among instances. Through detailed experiments and examination, we show that the approach can be adapted to learning a latent space for clustering. We design novel softmax-formulated decorrelation constraints for learning. In evaluations of image clustering using CIFAR-10 and ImageNet-10, our method achieves accuracy of $81.5\%$ and $95.4\%$ , respectively. We also show that the softmax-formulated constraints are compatible with various neural networks.
+
+# 1 INTRODUCTION
+
+Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. In a fundamental form, autoencoders are used for feature extraction, and classical clustering techniques such as $k$-means are serially applied to the features. Recent deep clustering techniques integrate the learning processes of feature extraction and clustering, yielding high performance for large-scale datasets such as handwritten digits Hu et al. (2017); Shaham et al. (2018); Xie et al. (2016); Tao et al. (2018). However, those methods have fallen short when targets become more complex, as in the case of the real-world photograph dataset CIFAR-10 Krizhevsky et al. (2009). Several works report that powerful representation learning leads to improved clustering performance on complex datasets Chang et al. (2017); Wu et al. (2019). Learning representations is a key challenge for unsupervised clustering.
+
+In order to learn representations for clustering, recent works utilize metric learning, which automatically learns similarity functions from data Chang et al. (2017); Wu et al. (2019). They assign pseudo-labels or pseudo-graphs to unlabeled data by similarity measures in the latent space, and learn discriminative representations to cluster data. These works improve clustering performance on real-world images such as CIFAR-10 and ImageNet-10, and indicate the impact of representation learning on clustering. Although features from learned similarity functions and pseudo-labels work well for clustering, the algorithms still seem heuristic; we design a novel algorithm based on knowledge from established clustering techniques. In this work, we exploit a core idea of spectral clustering, which uses eigenvectors derived from similarities.
+
+Spectral clustering has been investigated theoretically and experimentally, and is known to outperform other traditional clustering methods Von Luxburg (2007). The algorithm involves similarity matrix construction, transformation from the similarity matrix to a Laplacian, and eigendecomposition. Based on the eigenvectors, data points are mapped into a lower-dimensional representation which carries the information of the similarities and is preferable for clustering. We bring this idea of eigenvector representation into deep representation learning.
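+The three steps just described can be sketched in a few lines; the toy data, Gaussian kernel, and kernel width below are illustrative assumptions, not taken from any cited work:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+# Toy data: two well-separated 2-D blobs of 10 points each.
+X = np.vstack([rng.normal(0.0, 0.1, (10, 2)), rng.normal(3.0, 0.1, (10, 2))])
+
+# 1) Similarity matrix from pairwise distances (Gaussian kernel).
+sigma = 0.5
+d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
+W = np.exp(-d2 / (2 * sigma ** 2))
+
+# 2) Unnormalized graph Laplacian.
+L = np.diag(W.sum(axis=1)) - W
+
+# 3) Eigendecomposition: the eigenvector of the second-smallest eigenvalue
+#    (the Fiedler vector) embeds the points so the two groups separate by sign.
+eigvals, eigvecs = np.linalg.eigh(L)
+labels = (eigvecs[:, 1] > 0).astype(int)
+```
+
+For two clusters the sign of the Fiedler vector already yields the partition; with more clusters, $k$-means is run on the first $k$ eigenvectors.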
+
+We design the representation learning with two aims: 1) learning similarities among instances; and 2) reducing correlations within features. The first corresponds to the Laplacian, and the second corresponds to the feature orthogonality constraints in the spectral clustering algorithm. A learning process integrating both is analogous to the eigendecomposition of the Laplacian matrix in spectral clustering.
+
+For the first aim, we adopt the instance discrimination method presented in Wu et al. (2018), where each unlabeled instance is treated as its own distinct class, and discriminative representations are learned to distinguish between individual instance classes. This numerous-class discriminative learning enables learning partial but important features, such as small foreground objects in natural images. Wu et al. (2018) showed that the representation features retain apparent similarity among images and improve the performance of image classification by the nearest neighbor method. We extend their work to clustering tasks. We clarify that their softmax formulation works like the similarity matrix in spectral clustering under the condition that the temperature parameter $\tau$, which was underexplored in Wu et al. (2018), is set to a larger value.
+
+For the second aim, we introduce constraints which have the effect of making latent features orthogonal. Orthogonality is often an essential idea in dimension reduction methods such as principal component analysis, and it is preferable for latent features to be independent to ensure that redundant information is reduced. Orthogonality is also essential to the connection between the proposed method and spectral clustering, as stated in Section 3.4. In addition to a simple soft orthogonal constraint, we design a novel softmax-formulated decorrelation constraint. Our softmax constraint is "softer" than the soft orthogonal constraint for learning independent feature spaces, but realizes a stable improvement in clustering performance.
+
+Finally, we combine instance discrimination and feature decorrelation into representation learning to improve the performance of complex image clustering. For the CIFAR-10 and ImageNet-10 datasets, our method achieves accuracy of $81.5\%$ and $95.4\%$, respectively. Our PyTorch Paszke et al. (2019) implementation of IDFD is available at https://github.com/TTN-YKK/Clustering_friendly_representation_learning.
+
+Our main contributions are as follows:
+
+- We propose a clustering-friendly representation learning method combining instance discrimination and feature decorrelation based on spectral clustering properties.
+- We adapt deep representation learning by instance discrimination to clustering and clarify the essential properties of the temperature parameter.
+- We design a softmax-formulated orthogonal constraint for learning latent features and realize stable improvement of clustering performance.
+- Our representation learning method achieves performance comparable to state-of-the-art levels for image clustering tasks with simple $k$ -means.
+
+# 2 RELATED WORK
+
+Deep clustering methods offer state-of-the-art performance in various fields. Most early deep clustering methods, such as Vincent et al. (2010); Tian et al. (2014), are two-stage methods that apply clustering after learning low-dimensional representations of data in a nonlinear latent space. The autoencoder method proposed in Hinton & Salakhutdinov (2006) is one of the most effective methods for learning representations. Recent works have simultaneously performed representation learning and clustering Song et al. (2013); Xie et al. (2016); Yang et al. (2017); Guo et al. (2017); Tao et al. (2018). Several methods based on generative models have also been proposed Jiang et al. (2016); Dilokthanakul et al. (2016). These methods outperform conventional methods, and sometimes offer performance comparable to that of supervised learning for simple datasets. Deep-learning-based unsupervised image clustering is also being developed Chang et al. (2017); Wu et al. (2019); Ji et al. (2019); Gupta et al. (2020); Van Gansbeke et al. (2020).
+
+Several approaches focus on learning discriminative representations via deep learning. Bojanowski & Joulin (2017) found a mapping between images on a uniformly discretized target space, and enforced their representations to resemble a distribution of pairwise relationships. Caron et al. (2018) applied pseudo-labels to output as supervision by $k$ -means and then trained a deep neural network. Donahue et al. (2016) proposed bidirectional generative adversarial networks for learning generative models that map simple latent distributions to complex real distributions, in order for generators to capture semantic representations. Hjelm et al. (2018) proposed deep infomax to maximize mutual information between the input and output of an encoder. Wu et al. (2018) was motivated by observations in supervised learning that the probabilities of similar image classes become simultaneously high. They showed that discriminating individual instance classes leads to learning representations that retain similarities among data.
+
+IIC Ji et al. (2019) and SCAN Van Gansbeke et al. (2020) are two recent works focusing on image clustering that obtained high performance. IIC Ji et al. (2019) directly learns semantic labels without learning representations, based on mutual information between image pairs. SCAN Van Gansbeke et al. (2020) focuses on the clustering phase and largely improved performance given a pre-designed representation learning method. By contrast, we focus on learning a clustering-friendly representation space where objects can be simply clustered.
+
+Our method exploits the idea of spectral clustering Shi & Malik (2000); Meila & Shi (2001); Von Luxburg (2007); Ng et al. (2002). From one perspective, spectral clustering finds a low-dimensional embedding of data in the eigenspace of the Laplacian matrix, which is derived from pairwise similarities between data. Using the embedded representations, we can proceed to cluster the data by the $k$-means algorithm in the low-dimensional space. Spectral clustering often outperforms earlier algorithms such as $k$-means once pairwise similarities are properly calculated. Shaham et al. (2018) incorporated the concept of spectral clustering into a deep neural network structure. Similarities were calculated by learning a Siamese net Shaham & Lederman (2018), where the input positive and negative pairs were constructed according to the Euclidean distance.
+
+# 3 PROPOSED METHOD
+
+Given an unlabeled dataset $X = \{x_{i}\}_{i=1}^{n}$ and a predefined number of clusters $k$ , where $x_{i}$ denotes the $i$ th sample, we perform the clustering task in two phases, namely, representation learning and clustering. This work focuses on the first phase, which aims to learn an embedding function $\boldsymbol{v} = f_{\theta}(\boldsymbol{x})$ mapping data $\boldsymbol{x}$ to representation $\boldsymbol{v}$ so that $\boldsymbol{v}$ is preferable for clustering. $f_{\theta}$ is modeled as a deep neural network with parameter $\theta$ . We use $V = \{v_{i}\}_{i=1}^{n}$ to denote the whole representation set.
+
+# 3.1 INSTANCE DISCRIMINATION
+
+We apply the instance discrimination method proposed by Wu et al. (2018) to learn clustering-friendly representations that capture similarity between instances. The objective function is formulated based on the softmax criterion. Each instance is assumed to represent a distinct class. For given data $x_{1},\ldots ,x_{n}$ , the corresponding representations are $v_{1},\ldots ,v_{n}$ , and data $x_{i}$ is classified into the $i$ th class. Accordingly, the weight vector for the $i$ th class can be approximated by a vector $\boldsymbol{v}_i$ . The probability of representation $\boldsymbol{v}$ being assigned into the $i$ th class is
+
+$$
+P (i | \boldsymbol {v}) = \frac {\exp \left(\boldsymbol {v} _ {i} ^ {T} \boldsymbol {v} / \tau\right)}{\sum_ {j = 1} ^ {n} \exp \left(\boldsymbol {v} _ {j} ^ {T} \boldsymbol {v} / \tau\right)}, \tag {1}
+$$
+
+where $\pmb{v}_j^T\pmb{v}$ measures how well $\pmb{v}$ matches the $j$ th class, $\tau$ is a temperature parameter that controls the concentration of the distribution Hinton et al. (2015), and $\pmb{v}$ is normalized to $||\pmb{v}|| = 1$ .
+
+The objective maximizes the joint probability $\prod_{i=1}^{n} P(i | f_{\theta}(x_i))$, i.e., it minimizes the negative log-likelihood
+
+$$
+L_{I} = -\sum_{i = 1}^{n} \log P(i | f_{\theta}(x_{i})) = -\sum_{i = 1}^{n} \log \left(\frac{\exp\left(\boldsymbol{v}_{i}^{T} \boldsymbol{v}_{i} / \tau\right)}{\sum_{j = 1}^{n} \exp\left(\boldsymbol{v}_{j}^{T} \boldsymbol{v}_{i} / \tau\right)}\right). \tag{2}
+$$
+
+Wu et al. (2018) show that features obtained by minimizing the objective retain similarity between image instances and improve the performance of nearest neighbor classification. For clustering, we note that the parameter $\tau$, which is underexplored in Wu et al. (2018), has a large impact on clustering performance. The effect of $\tau$ is discussed later, and experimental results are shown in Section 4.2.1.
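+As a self-contained illustration of Eq. (2) (in NumPy rather than the paper's PyTorch implementation; function and variable names are illustrative), with the full set of row-normalized representations serving as the memory bank:
+
+```python
+import numpy as np
+
+def instance_discrimination_loss(V, tau=1.0):
+    """L_I of Eq. (2). V: (n, d) array of L2-normalized representations v_i."""
+    logits = V @ V.T / tau                        # entry (i, j) = v_j^T v_i / tau
+    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
+    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
+    return -np.diag(log_p).sum()                  # -sum_i log P(i | v_i)
+
+rng = np.random.default_rng(0)
+V = rng.normal(size=(8, 4))
+V /= np.linalg.norm(V, axis=1, keepdims=True)
+loss = instance_discrimination_loss(V, tau=1.0)
+```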
+
+
+Figure 1: Pipeline of our method.
+
+# 3.2 FEATURE DECORRELATION
+
+We define a set of latent feature vectors $\{\boldsymbol{f}_l\}$, where $f_{l}$ denotes the $l$th feature vector. The transpose of the latent vectors $V$ coincides with $\{f_l\}_{l = 1}^d$, where $d$ is the dimensionality of the representations.
+
+The simple constraint for orthogonal features is,
+
+$$
+L_{FO} = \left\| V V^{T} - I \right\|^{2} = \sum_{l = 1}^{d} \left(\left(\boldsymbol{f}_{l}^{T} \boldsymbol{f}_{l} - 1\right)^{2} + \sum_{j = 1, j \neq l}^{d} \left(\boldsymbol{f}_{j}^{T} \boldsymbol{f}_{l}\right)^{2}\right). \tag{3}
+$$
+
+Our novel constraint is based on a softmax formulation of
+
+$$
+Q (l | \boldsymbol {f}) = \frac {\exp \left(\boldsymbol {f} _ {l} ^ {T} \boldsymbol {f} / \tau_ {2}\right)}{\sum_ {m = 1} ^ {d} \exp \left(\boldsymbol {f} _ {m} ^ {T} \boldsymbol {f} / \tau_ {2}\right)}, \tag {4}
+$$
+
+$Q(l|\pmb{f})$ is analogous to $P(i|\pmb{v})$ . $Q(l|\pmb{f})$ measures how correlated a feature vector is to itself and how dissimilar it is to others. $\tau_{2}$ is the temperature parameter. We formulate the feature decorrelation constraint as
+
+$$
+L _ {F} = - \sum_ {l = 1} ^ {d} \log Q (l | \boldsymbol {f}) = \sum_ {l = 1} ^ {d} \left(- \boldsymbol {f} _ {l} ^ {T} \boldsymbol {f} _ {l} / \tau_ {2} + \log \sum_ {j} ^ {d} \exp \left(\boldsymbol {f} _ {j} ^ {T} \boldsymbol {f} _ {l} / \tau_ {2}\right)\right). \tag {5}
+$$
+
+Both constraints in Eq. (3) and Eq. (5) aim to construct independent features. Conventionally, it is preferable for features to be independent to ensure that redundant information is reduced, and orthogonality is a common technique. Comparing Eq. (3) and Eq. (5), we can see that minimizing $L_{F}$ and $L_{FO}$ results in a similar effect, $f_{l}^{T}f_{l}\rightarrow 1$ and $f_{j}^{T}f_{l}\rightarrow -1$ or $0$ $(l\neq j)$; both try to decorrelate latent features.
+
+Our softmax constraint in Eq. (5) shows practical advantages in flexibility and stability. Eq. (3) is called a soft orthogonal constraint, but it is still strict enough to force the features to be orthogonal. If $d$ is larger than the number of underlying factors, which are hidden and unknown, all features are forcibly orthogonalized and the resulting features may not be appropriate. The softmax formulation allows off-diagonal elements to be non-zero and alleviates the problem of strict orthogonality.
+
+Partial derivatives of $L_{F}$ and $L_{FO}$ with respect to $z_{jl} = f_j^T f_l$ are calculated as $\frac{\partial L_F}{\partial z_{jl}} = -\frac{1}{\tau_2}\delta_{jl} + \frac{1}{\tau_2}\frac{\exp(z_{jl} / \tau_2)}{\sum_{m=1}^d\exp(z_{ml} / \tau_2)}$ and $\frac{\partial L_{FO}}{\partial z_{jl}} = -2\delta_{jl} + 2z_{jl}$, where $\delta_{jl}$ is an indicator function. Since the derivatives nearly equal zero due to $z_{jl} = 1$ in the case of $j = l$, we focus on the case of $j \neq l$. When $j \neq l$, the ranges of the partial derivatives are $0 \leq \frac{\partial L_F}{\partial z_{jl}} \leq \frac{1}{\tau_2}$ and $-2 \leq \frac{\partial L_{FO}}{\partial z_{jl}} \leq 2$. The bounded, monotone gradient of $L_F$ can lead to more stable convergence. The advantages of $L_F$ are confirmed by the experiments in Section 4.
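+The two constraints can be compared side by side; a NumPy sketch in which the columns of $V$ are taken as the feature vectors $f_l$ and normalized for illustration (the normalization step is an assumption of this sketch):
+
+```python
+import numpy as np
+
+def feature_losses(V, tau2=2.0):
+    """L_FO of Eq. (3) and L_F of Eq. (5); column l of V is feature vector f_l."""
+    F = V / np.linalg.norm(V, axis=0, keepdims=True)
+    G = F.T @ F                                   # z_jl = f_j^T f_l, shape (d, d)
+    d = G.shape[0]
+    L_FO = ((G - np.eye(d)) ** 2).sum()           # soft orthogonal constraint
+    L_F = (-np.diag(G) / tau2
+           + np.log(np.exp(G / tau2).sum(axis=0))).sum()  # softmax decorrelation
+    return L_FO, L_F
+
+rng = np.random.default_rng(0)
+L_FO, L_F = feature_losses(rng.normal(size=(32, 8)))
+```
+
+For exactly orthonormal features $L_{FO}$ vanishes, while $L_F$ stays finite and keeps a bounded gradient.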
+
+# 3.3 OBJECTIVE FUNCTION AND LEARNING MODEL
+
+Combining instance discrimination and feature decorrelation learning, we formulate our objective function $L_{IDFD}$ as follows:
+
+$$
+L _ {I D F D} = L _ {I} + \alpha L _ {F}, \tag {6}
+$$
+
+where $\alpha$ is a weight that balances the contributions of the two terms $L_{I}$ and $L_{F}$.
+
+Figure 1 shows the learning process for the motif of image clustering. Input images $X$ are converted into feature representations $V$ in a lower $d$ -dimensional latent space, via nonlinear mapping with deep neural networks such as ResNet He et al. (2016). The $d$ -dimensional vectors are simultaneously learned through instance discrimination and feature decorrelation. A clustering method, such as classical $k$ -means clustering, is then used on the learned representations to obtain the clustering results.
+
+Optimization can be performed by mini-batch training. To compute the probability $P(i|\mathbf{v})$ in Eq. (1), $\{\mathbf{v}_j\}$ is needed for all images. Like Wu et al. (2018); Xiao et al. (2017), we maintain a feature memory bank for storing them. For $Q(l|\mathbf{f})$ in Eq. (4), all $\{\mathbf{f}_m\}$ of $d$ dimensions in the current mini-batch can be obtained, so we simply calculate $Q(l|\mathbf{f})$ within the mini-batch.
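+Putting the pieces together for one mini-batch (an illustrative NumPy sketch; the function name, shapes, and the use of precomputed representations instead of a network forward pass are assumptions):
+
+```python
+import numpy as np
+
+def idfd_loss(V_batch, bank, idx, alpha=1.0, tau=1.0, tau2=2.0):
+    """L_IDFD = L_I + alpha * L_F (Eq. 6) for one mini-batch.
+    V_batch: (b, d) normalized batch representations; bank: (n, d) memory bank;
+    idx: positions of the batch instances inside the bank."""
+    # Instance discrimination (Eq. 2) against the full memory bank.
+    logits = V_batch @ bank.T / tau
+    logits -= logits.max(axis=1, keepdims=True)
+    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
+    L_I = -log_p[np.arange(len(idx)), idx].sum()
+    # Feature decorrelation (Eq. 5), computed within the mini-batch only.
+    F = V_batch / np.linalg.norm(V_batch, axis=0, keepdims=True)
+    G = F.T @ F
+    L_F = (-np.diag(G) / tau2 + np.log(np.exp(G / tau2).sum(axis=0))).sum()
+    return L_I + alpha * L_F
+
+rng = np.random.default_rng(1)
+bank = rng.normal(size=(100, 16))
+bank /= np.linalg.norm(bank, axis=1, keepdims=True)
+idx = np.arange(8)
+loss = idfd_loss(bank[idx], bank, idx)
+```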
+
+We combine $L_{I}$ and $L_{FO}$ to formulate an alternative loss $L_{IDFO}$ in Eq. (7),
+
+$$
+L _ {I D F O} = L _ {I} + \alpha L _ {F O}. \tag {7}
+$$
+
+We refer to representation learning using $L_{IDFD}$ , $L_{IDFO}$ , and $L_I$ loss as instance discrimination and feature decorrelation (IDFD), instance discrimination and feature orthogonalization (IDFO), and instance discrimination (ID), respectively.
+
+# 3.4 CONNECTION WITH SPECTRAL CLUSTERING
+
+We explain the connection between IDFD and spectral clustering. We consider a fully connected graph consisting of all representation points, and the similarity matrix $W$ and degree matrix $D$ can be written as $W_{ij} = \exp (v_i^T v_j / \tau)$ and $D_{ii} = \sum_{m}^{n}\exp (v_i^T v_m / \tau)$ . The loss function of spectral clustering Shaham et al. (2018) can be reformulated as
+
+$$
+L_{SP} = \mathrm{Tr}\left(\boldsymbol{f}^{T} L \boldsymbol{f}\right) = \frac{1}{2} \sum_{k} \sum_{ij}^{n} w_{ij} \left(f_{i}^{k} - f_{j}^{k}\right)^{2} = \frac{1}{2} \sum_{k} \sum_{ij}^{n} \exp \left(\frac{v_{i}^{T} v_{j}}{\tau}\right) \left\| v_{i} - v_{j} \right\|^{2}, \tag{8}
+$$
+
+where $L$ is the Laplacian matrix and $f$ are the feature vectors. Spectral clustering is performed by minimizing $L_{SP}$ subject to an orthogonality condition on $f$, and when $L_{SP}$ attains its minimum, $f$ becomes the eigenvectors of the Laplacian $L$. According to Section 3.2, minimizing $L_{F}$ can approximate the orthogonality condition. Under this condition, minimizing $L_{I}$ can approximate minimizing $L_{SP}$, which is explained as follows.
+
+According to Eq. (2), minimizing the loss $L_{I}$ means maximizing $v_{i}^{T}v_{i}$ and minimizing $v_{i}^{T}v_{j}$. When $i = j$, we have $||v_{i} - v_{j}||^{2} = 0$, so the corresponding term of $L_{SP}$ is zero. We therefore need only consider the influence on $L_{SP}$ from minimizing $v_{i}^{T}v_{j}$. As $\pmb{v}$ are normalized, $L_{SP}$ can be rewritten using the cosine metric as
+
+$$
+L _ {S P} = \sum_ {i j} ^ {n} \exp \left(\frac {\cos \theta}{\tau}\right) \sin^ {2} \frac {\theta}{2}, \tag {9}
+$$
+
+then $\frac{\partial L_{SP}}{\partial\theta}$ can be calculated as
+
+$$
+\frac {\partial L _ {S P}}{\partial \theta} = \frac {1}{\tau} \sin \theta (\tau - 1 + \cos \theta) \exp \left(\frac {\cos \theta}{\tau}\right). \tag {10}
+$$
+
+According to Eq. (10), we get $\frac{\partial L_{SP}}{\partial\theta} \geq 0$ when $\tau \geq 2$. This means $L_{SP}$ monotonically decreases when we minimize $v_{i}^{T}v_{j}$. Therefore, the impact of minimizing $v_{i}^{T}v_{j}$ is good for minimizing $L_{SP}$. Even if $\tau$ is a little smaller than 2, because $\tau$ controls the scale of the derivatives and the range of $\theta$ where the derivative is negative, a large $\tau$ decreases the scale and narrows the range, resulting in a small influence on the total loss. From this viewpoint, the effectiveness of minimizing $L_{I}$ with a large $\tau$ is approximately the same as that of minimizing $L_{SP}$. By adding the feature decorrelation constraint, IDFD becomes analogous to spectral clustering.
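+The sign claim for Eq. (10) can be checked numerically on a grid of $\theta \in [0, \pi]$ (a verification sketch; the small contrast value $\tau = 0.07$ mirrors the original setting of Wu et al. (2018)):
+
+```python
+import numpy as np
+
+def dLsp_dtheta(theta, tau):
+    """Right-hand side of Eq. (10)."""
+    return (np.sin(theta) * (tau - 1 + np.cos(theta))
+            * np.exp(np.cos(theta) / tau) / tau)
+
+theta = np.linspace(0.0, np.pi, 1001)
+nonneg_for_tau2 = bool((dLsp_dtheta(theta, 2.0) >= -1e-12).all())    # tau >= 2
+changes_sign_small_tau = bool((dLsp_dtheta(theta, 0.07) < 0).any())  # small tau
+```
+
+For $\tau = 2$ the derivative is non-negative over the whole range, while for $\tau = 0.07$ it changes sign.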
+
+# 4 EXPERIMENTS
+
+We conducted experiments using five datasets: CIFAR-10 Krizhevsky et al. (2009), CIFAR-100 Krizhevsky et al. (2009), STL-10 Coates et al. (2011), ImageNet-10 Deng et al. (2009), and ImageNet-Dog Deng et al. (2009). We adopted ResNet18 He et al. (2016) as the neural network architecture in our main experiments. The same architecture is used for all datasets. Our experimental settings are in accordance with those of Wu et al. (2018). Data augmentation strategies often used on images are also adopted in the experiments. Details about the datasets and experimental setup are given in Appendix A.
+
+For IDFD, the weight $\alpha$ is simply fixed at 1. The orthogonality constraint weights for IDFO were $\alpha = 10$ on CIFAR-10 and CIFAR-100, and $\alpha = 0.5$ on STL-10 and the ImageNet subsets. The weight $\alpha$ was set according to the orders of magnitude of the losses. In the main experiments, we set the temperature parameter $\tau = 1$ for IDFO and IDFD, and $\tau_{2} = 2$ for IDFD. In order to fully investigate our work, we also constructed two versions of instance discrimination (ID) that use only the $L_{I}$ loss: ID(original) with small $\tau = 0.07$ and ID(tuned) with large $\tau = 1$.
+
+We compared ID(tuned), IDFO, and IDFD with ID(original) and six other competitive methods: clustering with an autoencoder (AE) Hinton & Salakhutdinov (2006), deep embedded clustering (DEC) Xie et al. (2016), deep adaptive image clustering (DAC) Chang et al. (2017), deep comprehensive correlation mining (DCCM) Wu et al. (2019), invariant information clustering (IIC) Ji et al. (2019), and semantic clustering by adopting nearest neighbors (SCAN) Van Gansbeke et al. (2020). We use three metrics to measure clustering performance: standard clustering accuracy (ACC), normalized mutual information (NMI), and adjusted rand index (ARI). These metrics give values in [0, 1], with higher scores indicating more accurate clustering assignments.
+
+# 4.1 MAIN RESULTS
+
+Table 1 lists the best performance of each method. The results for AE, DEC, DAC, and DCCM are cited from Wu et al. (2019), and the results for IIC and SCAN are cited from Van Gansbeke et al. (2020). Comparing these results, we conclude that ID(tuned), IDFO, and IDFD clearly outperform all compared methods except SCAN on all datasets, according to the ACC, NMI, and ARI metrics. On CIFAR-10, ID(tuned), IDFO, and IDFD yielded ACC values of $77.6\%$ , $82.8\%$ , and $81.5\%$ , respectively. On ImageNet-10, they achieved ACC values of $93.7\%$ , $94.2\%$ , and $95.4\%$ . This high performance is comparable with that of supervised and semi-supervised methods. The gaps between the results of ID(tuned) and those of IDFO and IDFD reflect the effect of the feature constraint term: introducing feature orthogonalization or decorrelation improves performance on all datasets. Impressively, ID(tuned) significantly outperformed ID(original) on all datasets, showing the strong impact of the temperature parameter; this is discussed separately in Section 4.2.1.
+
+In addition, we note that IDFD differs from SCAN in that IDFD focuses on representation learning, while SCAN focuses on clustering given a learned representation. Both SCAN and IDFD demonstrate significant performance improvements compared with the other methods. The results of IDFD and SCAN show the effectiveness of effort invested in the representation learning and clustering phases of deep clustering, respectively.
+
+We also examine the learning stability of ID(tuned), IDFO, and IDFD. Figure 2 plots the accuracy on CIFAR-10 over the learning process for each of ID(tuned), IDFO, and IDFD. We can see that both IDFO and IDFD obtained higher peak ACC values than ID(tuned). In particular, IDFD yielded higher performance than ID over the entire learning process. IDFO performed better than the other two methods in earlier epochs and obtained the highest ACC value. However, its ACC fluctuated widely
+
+Table 1: Clustering results $(\%)$ of various methods on five datasets.
+
+| Dataset | CIFAR-10 | | | CIFAR-100 | | | STL-10 | | | ImageNet-10 | | | ImageNet-Dog | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Metric | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI |
+| AE | 31.4 | 23.9 | 16.9 | 16.5 | 10.0 | 4.8 | 30.3 | 25.0 | 16.1 | 31.7 | 21.0 | 15.2 | 18.5 | 10.4 | 7.3 |
+| DEC | 30.1 | 25.7 | 16.1 | 18.5 | 13.6 | 5.0 | 35.9 | 27.6 | 18.6 | 38.1 | 28.2 | 20.3 | 19.5 | 12.2 | 7.9 |
+| DAC | 52.2 | 39.6 | 30.6 | 23.8 | 18.5 | 8.8 | 47.0 | 36.6 | 25.7 | 52.7 | 39.4 | 30.2 | 27.5 | 21.9 | 11.1 |
+| DCCM | 62.3 | 49.6 | 40.8 | 32.7 | 28.5 | 17.3 | 48.2 | 37.6 | 26.2 | 71.0 | 60.8 | 55.5 | 38.3 | 32.1 | 18.2 |
+| ID(original) | 44.0 | 30.9 | 22.1 | 26.7 | 22.1 | 10.8 | 51.4 | 36.2 | 28.5 | 63.2 | 47.8 | 42.0 | 36.5 | 24.8 | 17.2 |
+| IIC | 61.7 | 51.1 | 41.1 | 25.7 | 22.5 | 11.7 | 59.6 | 49.6 | 39.7 | - | - | - | - | - | - |
+| SCAN | 88.3 | 79.7 | 77.2 | 50.7 | 48.6 | 33.3 | 80.9 | 69.8 | 64.6 | - | - | - | - | - | - |
+| ID(tuned) | 77.6 | 68.2 | 61.6 | 40.9 | 39.2 | 24.3 | 72.6 | 64.0 | 52.6 | 93.7 | 86.7 | 86.5 | 47.6 | 47.0 | 33.5 |
+| IDFO | 82.8 | 71.4 | 67.9 | 42.5 | 43.2 | 24.4 | 75.6 | 63.6 | 56.9 | 94.2 | 87.1 | 87.6 | 61.2 | 57.9 | 41.4 |
+| IDFD | 81.5 | 71.1 | 66.3 | 42.5 | 42.6 | 26.4 | 75.6 | 64.3 | 57.5 | 95.4 | 89.8 | 90.1 | 59.1 | 54.6 | 41.3 |
+
+over the learning process and dropped in later epochs. As analyzed in Section 3.2, our proposed IDFD achieves higher performance than ID and is more stable than IDFO.
+
+# 4.2 DISCUSSION
+
+# 4.2.1 ANALYSIS ON TEMPERATURE PARAMETER
+
+The gaps between the results of ID(original) and ID(tuned) in Table 1 show the strong impact of the temperature parameter. In this subsection, we theoretically and intuitively analyze the essential change caused by the temperature parameter.
+
+First, we consider why instance-level discrimination works and under what conditions. The difference in performance between ID(original) and ID(tuned) suggests that the optimal distribution in the latent space changes with the magnitude of $\tau$ . According to empirical investigation and theoretical analysis, we find that a large $\tau$ in $L_{I}$ encourages data points to follow a compact distribution when minimizing the loss, while a small $\tau$ drives them toward a uniform distribution. This means that minimizing $L_{I}$ with a large $\tau$ can reach a good clustering-friendly solution. We explain this property through examples and calculations in Appendix B.
+
+In the definition of $P(i|\pmb{v})$ in Eq. (1), when $\tau$ is small, the softmax is computed on larger logits, producing higher predicted probabilities and a more confident model. From this viewpoint, we can leverage a small $\tau$ to decrease class entanglement if we can learn an accurate class-weight vector. In the general classification problem, since the weight of each class can be learned from the real labels, it is preferable for models to be more confident. Most works therefore recommend a small value, such as $\tau = 0.07$ Wu et al. (2018). In clustering, however, instance-level discrimination is used to learn similarity among samples, with only one sample in each class. Because the model is highly confident, each sample tends to become completely independent of the others: similarities among samples, even those from the same class, are driven toward zero. This clearly deviates from the original intent of adopting instance-level discrimination, namely to learn sample entanglements while keeping each sample discriminative. A larger $\tau$ than that used for classification is thus needed.
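The effect of $\tau$ on model confidence can be seen directly in a toy softmax over cosine similarities (the similarity values here are made up for illustration):

```python
import numpy as np

def softmax_with_temperature(logits, tau):
    """Softmax over similarities scaled by 1/tau, as in the form of
    Eq. (1) (ignoring memory-bank details)."""
    z = np.asarray(logits, dtype=float) / tau
    z -= z.max()                          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy cosine similarities of one sample to 5 stored instances;
# the first entry is the sample itself (similarity 1.0), the
# second a visually similar neighbor (similarity 0.8).
sims = [1.0, 0.8, 0.6, 0.1, -0.3]
p_small = softmax_with_temperature(sims, 0.07)  # classification-style tau
p_large = softmax_with_temperature(sims, 1.0)   # clustering-friendly tau
```

With $\tau = 0.07$ nearly all probability mass sits on the sample itself, so similar neighbors are pushed toward zero similarity; with $\tau = 1$ the neighbor retains a substantial share of the mass, which is what allows instance discrimination to encode sample-to-sample similarity.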
+
+More experiments with different temperature settings for ID and IDFD were conducted on CIFAR-10. Figure 3 shows the accuracy of ID for $\tau = \{0.07, 0.2, 0.5, 0.8, 1, 2, 5, 10\}$ . We calculated the mean and standard deviation of the ACC values over the last 500 epochs of each experiment. From the results, we can see that ID suffers significant performance degradation when $\tau$ is too small or too large. This agrees with our analysis above. We also investigated the impact of $\tau_{2}$ by fixing $\tau = 1$ . Figure 4 shows the accuracy of IDFD for $\tau_{2} = \{0.1, 0.5, 1, 2, 3, 4, 5, 10\}$ . The experimental results show that IDFD is relatively robust to the parameter $\tau_{2}$ and enables stable representation learning.
+
+# 4.2.2 REPRESENTATION DISTRIBUTION AND FEATURE BEHAVIOR
+
+Figure 5 visualizes the representations learned in four experiments on CIFAR-10: (a) ID(original), (b) ID(tuned), (c) IDFO with $\tau = 1$ and $\alpha = 10$ , and (d) IDFD with $\tau = 1$ , $\tau_{2} = 2$ , and $\alpha = 1$ . The 128-dimensional representations were embedded into two dimensions by t-SNE (t-distributed stochastic neighbor embedding) Maaten & Hinton (2008). Colors indicate ground-truth classes. The distributions for ID(original) and ID(tuned) again show the significant difference between
+
+
+Figure 2: ACC values over learning process. Figure 3: Accuracy of ID for various $\tau$ settings. Figure 4: Accuracy of IDFD for various $\tau_{2}$ settings.
+
+
+
+
+
+them. The data distribution when $\tau = 1$ is clearly more clustering-friendly than when $\tau = 0.07$ . Furthermore, compared with ID(tuned), IDFO and IDFD separate samples from different classes with certain margins. IDFO tended to construct a patch-like distribution within one class. In contrast, IDFD maintained a tighter connection among samples of the same class and more distinct borders between different classes.
+
+
+(a) ID (original)
+
+
+(b) ID (tuned)
+
+
+(c) IDFO
+Figure 5: Distribution of feature representations on CIFAR-10.
+
+
+(d) IDFD
+
+Figure 6 shows the distribution of feature representations on ImageNet-10 learned by IDFD. We can see that the representations of ImageNet-10 are clustering-friendly, even more so than those of CIFAR-10. This is consistent with the results in Table 1 as evaluated by ACC, NMI, and ARI. In addition, we plot sample images corresponding to points lying near the border between clusters; these samples are indeed similar in appearance.
+
+
+Figure 6: Distribution of feature representations on ImageNet-10 learned by IDFD and samples corresponding to points in some areas.
+
+
+
+
+
+We investigate the effects of the orthogonality and decorrelation constraints $L_{FO}$ and $L_{F}$ . Figure 7 illustrates the feature correlations of ID(tuned), IDFO, and IDFD on CIFAR-10. We see that IDFO clearly decorrelates features, while IDFD retains a moderate level of feature correlation, between those of ID(tuned) and IDFO. Taken together with Figure 2, these results suggest that the softmax formulation of IDFD alleviates the problem of strict orthogonality and enables stable representation learning.
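A correlation matrix of the kind visualized in Figure 7 can be computed from learned representations as follows (a sketch under our own naming; for independent random features the off-diagonal entries are near zero, which is the decorrelated extreme):

```python
import numpy as np

def feature_correlation(V):
    """Correlation matrix among the d feature dimensions of n
    representations V (n x d): treat each feature as an n-vector,
    center and normalize it, then take pairwise inner products."""
    F = V.T                                            # d x n, one vector per feature
    F = F - F.mean(axis=1, keepdims=True)              # center each feature
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
    return F @ F.T                                     # d x d, diagonal = 1

rng = np.random.default_rng(0)
V = rng.normal(size=(1000, 8))                         # stand-in for learned features
C = feature_correlation(V)
```

Dense, large off-diagonal values indicate redundant features; a near-diagonal matrix indicates decorrelated ones.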
+
+# 4.2.3 INVESTIGATION FOR PRACTICAL USE
+
+We investigate the dependency of our method on the network architecture through experiments with other networks: ConvNet Wu et al. (2019), VGG16 Simonyan & Zisserman (2014), and ResNet34 He et al. (2016). Performance was evaluated on the CIFAR-10 dataset. The results listed in Table 2 show that IDFD
+
+
+Figure 7: Feature correlation matrix on CIFAR-10 with ResNet18
+
+
+
+
+
+
+
+can work on various networks. IDFD outperforms ID(tuned), and the FD term shows an even more obvious effect on these networks. To further confirm the cooperation between $L_{I}$ and $L_{F}$ from the viewpoint of spectral clustering, combinations of AE and $L_{F}$ were also evaluated in terms of clustering performance. We found that AE cannot benefit from $L_{F}$ as $L_{I}$ does. This result verifies that $L_{F}$ has a deep relation with $L_{I}$ and that IDFD is not a simple combination of losses. We also investigated the importance of data augmentation through experiments. Due to the page limit, these extended experiments are given in Appendix C.
+
+Table 2: Clustering results (%) on various network architectures.
+
+| Network | ConvNet | | | VGG16 | | | ResNet18 | | | ResNet34 | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Metric | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI |
+| ID(tuned) | 26.8 | 15.0 | 8.9 | 39.3 | 31.6 | 20.9 | 77.6 | 68.2 | 61.6 | 80.2 | 71.1 | 64.6 |
+| IDFD | 42.0 | 32.7 | 23.2 | 56.8 | 46.7 | 36.5 | 81.5 | 71.1 | 66.3 | 82.7 | 73.4 | 68.4 |
+
+# 5 CONCLUSION
+
+We present a clustering-friendly representation learning method combining instance discrimination and feature decorrelation based on spectral clustering properties. Instance discrimination learns similarities among data, and feature decorrelation removes redundant correlation among features. We analyzed why instance discrimination works for clustering and clarified the conditions under which it does. We designed a softmax-formulated feature decorrelation constraint for learning a latent space that realizes stable improvement of clustering performance, and we explained the connection between our method and spectral clustering. The proposed representation learning method achieves accuracies comparable to state-of-the-art values on the CIFAR-10 and ImageNet-10 datasets with simple $k$ -means. We also verified that the IDFD loss works on multiple neural network architectures, and we expect our method to be effective for various kinds of problems.
+
+# REFERENCES
+
+Piotr Bojanowski and Armand Joulin. Unsupervised learning by predicting noise. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 517-526. JMLR.org, 2017.
+Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 132-149, 2018.
+Jianlong Chang, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. Deep adaptive image clustering. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020.
+
+Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 215-223, 2011.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. Ieee, 2009.
+Nat Dilokthanakul, Pedro AM Mediano, Marta Garnelo, Matthew CH Lee, Hugh Salimbeni, Kai Arulkumaran, and Murray Shanahan. Deep unsupervised clustering with gaussian mixture variational autoencoders. arXiv preprint arXiv:1611.02648, 2016.
+Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
+Xifeng Guo, Long Gao, Xinwang Liu, and Jianping Yin. Improved deep embedded clustering with local structure preservation. In International Joint Conference on Artificial Intelligence, pp. 1753-1759, 2017.
+Divam Gupta, Ramachandran Ramjee, Nipun Kwatra, and Muthian Sivathanu. Unsupervised clustering using pseudo-semi-supervised learning. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rJlnxkSYPS.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
+Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. science, 313(5786):504-507, 2006.
+R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018.
+Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, and Masashi Sugiyama. Learning discrete representations via information maximizing self-augmented training. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1558-1567. JMLR.org, 2017.
+Xu Ji, João F Henriques, and Andrea Vedaldi. Invariant information clustering for unsupervised image classification and segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9865-9874, 2019.
+Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. arXiv preprint arXiv:1611.05148, 2016.
+Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
+Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579-2605, 2008.
+Marina Meila and Jianbo Shi. Learning segmentation by random walks. In Advances in neural information processing systems, pp. 873-879, 2001.
+Andrew Y Ng, Michael I Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. In Advances in neural information processing systems, pp. 849-856, 2002.
+
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024-8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.
+Uri Shaham and Roy R Lederman. Learning by coincidence: Siamese networks and common variable learning. Pattern Recognition, 74:52-63, 2018.
+Uri Shaham, Kelly Stanton, Henry Li, Boaz Nadler, Ronen Basri, and Yuval Kluger. Spectralnet: Spectral clustering using deep neural networks. arXiv preprint arXiv:1801.01587, 2018.
+Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on pattern analysis and machine intelligence, 22(8):888-905, 2000.
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+Chunfeng Song, Feng Liu, Yongzhen Huang, Liang Wang, and Tieniu Tan. Auto-encoder based data clustering. In Iberoamerican Congress on Pattern Recognition, pp. 117-124. Springer, 2013.
+Yaling Tao, Kentaro Takagi, and Kouta Nakata. Rdec: Integrating regularization into deep embedded clustering for imbalanced datasets. In Asian Conference on Machine Learning, pp. 49-64, 2018.
+Fei Tian, Bin Gao, Qing Cui, Enhong Chen, and Tie-Yan Liu. Learning deep representations for graph clustering. In Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
+Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Scan: Learning to classify images without labels. In Proceedings of the European Conference on Computer Vision, 2020.
+Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of machine learning research, 11(Dec):3371-3408, 2010.
+Ulrike Von Luxburg. A tutorial on spectral clustering. Statistics and computing, 17(4):395-416, 2007.
+Jianlong Wu, Keyu Long, Fei Wang, Chen Qian, Cheng Li, Zhouchen Lin, and Hongbin Zha. Deep comprehensive correlation mining for image clustering. In Proceedings of the IEEE International Conference on Computer Vision, pp. 8150-8159, 2019.
+Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via nonparametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3733-3742, 2018.
+Tong Xiao, Shuang Li, Bochao Wang, Liang Lin, and Xiaogang Wang. Joint detection and identification feature learning for person search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3415-3424, 2017.
+Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In Proceedings of the 33rd International Conference on International Conference on Machine Learning-Volume 48, pp. 478-487, 2016.
+Bo Yang, Xiao Fu, Nicholas D Sidiropoulos, and Mingyi Hong. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3861-3870. JMLR.org, 2017.
+
+# APPENDICES
+
+# A DATASETS AND EXPERIMENTAL SETUP
+
+Five datasets were used to conduct experiments: CIFAR-10 Krizhevsky et al. (2009), CIFAR-100 Krizhevsky et al. (2009), STL-10 Coates et al. (2011), ImageNet-10 Deng et al. (2009), and ImageNet-Dog Deng et al. (2009). Table 3 lists the numbers of images, number of clusters, and image sizes of these datasets. Specifically, the training and testing sets of dataset STL-10 were jointly used in our experiments. Images from the three ImageNet subsets were resized to $96 \times 96 \times 3$ .
+
+Table 3: Image datasets used in experiments.
+
+| Dataset | Images | Clusters | Image size |
+| --- | --- | --- | --- |
+| CIFAR-10 Krizhevsky et al. (2009) | 50,000 | 10 | 32 × 32 × 3 |
+| CIFAR-100 Krizhevsky et al. (2009) | 50,000 | 20 | 32 × 32 × 3 |
+| STL-10 Coates et al. (2011) | 13,000 | 10 | 96 × 96 × 3 |
+| ImageNet-10 Deng et al. (2009) | 13,000 | 10 | 96 × 96 × 3 |
+| ImageNet-Dog Deng et al. (2009) | 19,500 | 15 | 96 × 96 × 3 |
+
+We adopted ResNet He et al. (2016) as the neural network architecture in our main experiments. For simplicity, we used ResNet18, which according to our preliminary experiments yields sufficiently high performance. The same architecture was used for all datasets except the input layer. In accordance with the experimental settings of Wu et al. (2018), the dimension of latent feature vectors was set to $d = 128$ , and a stochastic gradient descent optimizer with momentum $\beta = 0.9$ was used. The learning rate $lr$ was initialized to 0.03, then gradually scaled down after the first 600 epochs using a coefficient of 0.1 every 350 epochs. The total number of epochs was set to 2000, and the batch size was set to $B = 128$ . Orthogonality constraint weights for IDFO were $\alpha = 10$ for CIFAR-10 and CIFAR-100 and $\alpha = 0.5$ for the STL-10 and ImageNet subsets. The IDFO weight $\alpha$ was set according to the orders of magnitude of the two losses $L_{I}$ and $L_{FO}$ . For IDFD, the weight $\alpha$ was simply fixed at 1. In the main experiments, we set the default temperature parameter value $\tau = 1$ for ID(tuned), IDFO, and IDFD, and $\tau_{2} = 2$ for IDFD.
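One plausible reading of the learning-rate schedule above can be sketched as follows (the exact step boundaries in the authors' code may differ; the placement of the first decay at epoch 600 is our assumption):

```python
def learning_rate(epoch, base_lr=0.03):
    """Hold base_lr for the first 600 epochs, then multiply by 0.1
    every 350 epochs (hypothetical step placement)."""
    if epoch < 600:
        return base_lr
    return base_lr * 0.1 ** ((epoch - 600) // 350 + 1)
```

Under this reading the rate drops to 0.003 at epoch 600, 0.0003 at epoch 950, and so on, reaching the order of 1e-6 by the end of the 2000-epoch run.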
+
+# B OPTIMAL SOLUTIONS OF CLUSTERING ANDINSTANCE DISCRIMINATION
+
+In Section 4.2.1, we concluded that minimizing $L_{I}$ with a large $\tau$ can reach a clustering-friendly solution. Details of the analysis and calculation are demonstrated with a two-dimensional toy model as follows.
+
+Empirically, we observe that visually similar images tend to receive similar assignment probabilities and are thus projected to close locations in the latent space; this also motivated ID Wu et al. (2018). In the case of ID, similar images $x_{i}$ and $x_{j}$ yield the respective highest probabilities $p_{ii}$ and $p_{jj}$ , and also receive relatively high $p_{ij}$ and $p_{ji}$ values. This property is retained throughout the approach to the optimal solution. Because instance-level discrimination tries to maximally scatter the embedded features of instances over the unit sphere Wu et al. (2018), all representations are spread uniformly over the latent space with each representation relatively similar to its neighbors; we call this the uniform case. We also consider another case that yields an optimal clustering solution, in which all samples from the same class are compacted to one point and the $k$ clusters are uniformly spread over the space; we call this the compact case. Figure 8 shows the representation distributions in the two cases. Because we normalize $v$ , two-dimensional representations form a circle.
+
+In the uniform case, $n$ representations are uniformly located on a circle with an angular interval of $\theta = 2\pi /n$ , and the inner product between two neighboring representations is $\cos \theta$ . Without loss of generality, we can start from an arbitrary point $v_{i}$ and mark the samples in order as $v_{i + j}$ . The cosine similarity between $v_{i}$ and $v_{i + j}$ can then be calculated as $v_{i + j}^{T}v_{i} = \cos j\theta$ . Accordingly, the loss
+
+
+Figure 8: Two extreme cases of representation distributions over two-dimensional space. Left: uniform. Right: compact. Figure 9: $\exp (\cos \theta /\tau)$ with different $\tau$ settings.
+
+
+
+
+
+contributed by sample $i$ in the uniform case can be calculated as
+
+$$
+L_{uniform}^{i} = -\log \frac{\exp(1/\tau)}{\sum_{m=0}^{n-1} \exp(\cos m\theta / \tau)} = -\log \frac{\frac{1}{n}\exp(1/\tau)}{\frac{1}{n}\sum_{m=0}^{n-1} \exp(\cos m\theta / \tau)}. \tag{11}
+$$
+
+Similarly, in the compact case, $n/k$ data points from the same class are compacted exactly to one point, and the $k$ resulting points lie on a circle at an angular interval of $\theta' = 2\pi / k$ . The inner product between an arbitrary start sample $v_i$ and the $j$ -th sample can be calculated as $v_i^T v_{i+j} = \cos l\theta'$ , where $l = j \bmod n / k$ . The probability of assigning $i$ to the cluster of $j$ becomes $p_{ij} = \frac{\exp(\cos l\theta' / \tau)}{\sum_{c=0}^{k-1}\frac{n}{k}\exp(\cos c\theta' / \tau)}$ . Accordingly, the loss contributed by sample $i$ in the compact case can be calculated as
+
+$$
+L_{compact}^{i} = -\log \frac{\exp(1/\tau)}{\sum_{c=0}^{k-1} \frac{n}{k} \exp(\cos c\theta' / \tau)} = -\log \frac{\frac{1}{n}\exp(1/\tau)}{\frac{1}{k}\sum_{c=0}^{k-1} \exp(\cos c\theta' / \tau)}. \tag{12}
+$$
+
+Comparing Eq. (11) and (12), we see that the difference between $L_{uniform}^{i}$ and $L_{compact}^{i}$ comes only from the denominator part of the logarithm. These are two discrete forms of the same integral $\int \exp (\cos \theta /\tau)d\theta$ . Clearly, $L_{uniform}^{i}$ equals $L_{compact}^{i}$ when $k,n\to +\infty$ . We therefore need to consider only the general case where $n$ is sufficiently large and $k\ll n$ .
+
+Figure 9 shows a plot of the function $\exp\left(\frac{\cos\theta}{\tau}\right)$ for different $\tau$ settings over the domain $\theta \in [0,2\pi]$ . We can see that the curve becomes flatter as $\tau$ increases. A flat function $f$ means that for an arbitrary $(\theta ,\theta^{\prime})$ pair in its domain, $f(\theta)\approx f(\theta^{\prime})$ . In this situation, even when $k\ll n$ , the difference between the summations of the two discrete forms is not large. Accordingly, $L_{compact}^{i}$ approximates $L_{uniform}^{i}$ for a large $\tau$ . In other words, minimizing $L_{I}$ can approach the compact situation, where same-class samples assemble and differing samples separate. Learning instance-level discrimination for clustering is therefore reasonable.
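This argument is easy to check numerically by evaluating Eq. (11) and Eq. (12) directly (the choices of $n$ , $k$ , and the $\tau$ values below are our own):

```python
import numpy as np

def loss_uniform(n, tau):
    """Eq. (11): per-sample loss when n points sit uniformly on the circle."""
    theta = 2 * np.pi / n
    m = np.arange(n)
    return -np.log(np.exp(1 / tau) / np.exp(np.cos(m * theta) / tau).sum())

def loss_compact(n, k, tau):
    """Eq. (12): per-sample loss when n points collapse onto k cluster
    centers spread uniformly on the circle (n/k points per center)."""
    theta_p = 2 * np.pi / k
    c = np.arange(k)
    return -np.log(np.exp(1 / tau) /
                   ((n / k) * np.exp(np.cos(c * theta_p) / tau)).sum())

n, k = 1000, 10
gap_small_tau = abs(loss_uniform(n, 0.07) - loss_compact(n, k, 0.07))
gap_large_tau = abs(loss_uniform(n, 5.0) - loss_compact(n, k, 5.0))
```

For a large $\tau$ the two losses agree to several decimal places, while for a small $\tau$ a visible gap remains, matching the flatness argument above.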
+
+# C EXTENDED EXPERIMENTS
+
+In Section 4.2.3, we reported several investigations of our method for practical use. Details of the key experiments are given below.
+
+# C.1 IMPACT OF NETWORK ARCHITECTURE
+
+As Table 2 shows, IDFD can be applied to various networks, and the performance gaps between IDFD and ID(tuned) on networks such as ConvNet Wu et al. (2019) and VGG16 Simonyan & Zisserman (2014) are more significant than on ResNet He et al. (2016). Figure 10 shows the feature correlation matrix for VGG16. IDFD on VGG16 obtained sparse correlations similar to the ResNet18 case in Figure 7, while ID on VGG16 obtained denser and stronger correlations than on ResNet18, presumably constructing redundant features that degraded clustering. In the case of VGG16, the feature decorrelation term $L_{F}$ thus has a larger effect on clustering performance than in the case of ResNet.
+
+Our proposed losses work on all network architectures, and we expect to introduce the losses to various networks that are suitable for individual problems.
+
+
+Figure 10: Feature correlation matrix learned by VGG16 on CIFAR-10.
+
+
+
+
+
+
+
+# C.2 COMBINATION OF AUTOENCODER AND FEATURE DECORRELATION
+
+To further confirm the cooperative effect of instance discrimination and feature decorrelation from the viewpoint of spectral clustering, a combination of an autoencoder and feature decorrelation was evaluated in terms of clustering performance. Autoencoders have been verified on datasets such as handwritten digits to be effective for deep clustering. In this experiment, we used ConvNet Wu et al. (2019) as the autoencoder architecture and trained it on CIFAR-10. We applied $k$ -means to representations learned by the autoencoder alone and by the autoencoder combined with feature decorrelation, called AE and AEFD, respectively. In our experiments, the ACC of AE was $26.0\%$ and the ACC of AEFD was $22.4\%$ . Compared to the improvement from ID to IDFD (from $26.8\%$ to $42.0\%$ , as shown in Table 2), we see that AE cannot benefit from FD as ID does. This result again indicates that FD has a deep relation with ID, as analyzed in Section 3.
+
+# C.3 IMPACT OF DATA AUGMENTATION
+
+For reproduction of our results and for practical use, we note that data augmentation (DA) has a strong impact on performance. DA is known to affect image classification and representation learning. As in Wu et al. (2018), several generic and widely accepted techniques, such as cropping and grayscale conversion, were used for data augmentation in this work; the details follow the original code of Wu et al. (2018). To investigate the impact of DA, we conducted experiments on the five datasets with and without DA and compared the clustering results, shown in Table 4. Methods without DA suffered significant performance degradation for clustering, as has also been observed for classification Chen et al. (2020). This reminds us not to ignore the effects of DA in practical use.
+
+Table 4: Clustering results (%) with or without data augmentation on five datasets.
+
+| Dataset | CIFAR-10 | | | CIFAR-100 | | | STL-10 | | | ImageNet-10 | | | ImageNet-Dog | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Metric | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI | ACC | NMI | ARI |
+| ID W/O DA | 18.7 | 9.5 | 4.1 | 14.8 | 10.7 | 3.2 | 19.6 | 9.0 | 3.7 | 23.6 | 14.1 | 6.2 | 12.7 | 4.6 | 1.9 |
+| IDFD W/O DA | 23.6 | 12.1 | 6.0 | 16.2 | 11.6 | 4.4 | 24.8 | 17.6 | 8.3 | 37.2 | 23.8 | 15.6 | 15.5 | 5.5 | 2.5 |
+| ID With DA | 76.6 | 65.7 | 58.3 | 36.7 | 35.7 | 21.9 | 57.1 | 49.0 | 36.8 | 85.8 | 79.1 | 70.5 | 29.4 | 16.0 | 28.5 |
+| IDFD With DA | 81.5 | 71.1 | 66.3 | 42.5 | 42.6 | 26.4 | 75.6 | 64.3 | 57.5 | 95.4 | 89.8 | 90.1 | 59.1 | 54.6 | 41.3 |
+
+To identify the main factors affecting performance, we also ran experiments removing each DA technique in turn. Taking CIFAR-10 as an example, the techniques used for data augmentation are ColorJitter, RandomResizedCrop, RandomGrayscale, and RandomHorizontalFlip. All of these are generic, easy to implement, and integrated into general deep learning frameworks such as PyTorch. According to the experimental results shown in Figure 11, RandomResizedCrop, RandomGrayscale, and ColorJitter have a strong effect on image clustering.
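A minimal NumPy stand-in for part of this pipeline is sketched below (the actual experiments use the torchvision transforms named above; the crop size, probabilities, and the omission of ColorJitter and resizing here are our own simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Toy version of the pipeline: random crop (stand-in for
    RandomResizedCrop, without the resize), RandomHorizontalFlip,
    and RandomGrayscale. img is an HxWx3 float array in [0, 1]."""
    h, w, _ = img.shape
    ch, cw = h * 3 // 4, w * 3 // 4             # crop to 3/4 of each side
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    out = img[top:top + ch, left:left + cw]
    if rng.random() < 0.5:                      # RandomHorizontalFlip
        out = out[:, ::-1]
    if rng.random() < 0.2:                      # RandomGrayscale
        gray = out.mean(axis=2, keepdims=True)
        out = np.repeat(gray, 3, axis=2)
    return out

img = rng.random((32, 32, 3))                   # CIFAR-sized toy image
view1, view2 = augment(img), augment(img)       # two stochastic views of one image
```

Each call produces a different random view of the same image, which is what allows the instance-discrimination loss to learn invariance to these transformations.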
+
+
+Figure 11: Effect of each technique used for DA on CIFAR-10.
+
+In practice, we also applied IDFD to private images produced by a manufacturing process, using generic DA techniques like those above. IDFD showed good performance on these images in our experiments, indicating that our method can be applied straightforwardly to practical images. For other types of data, such as text and time series, corresponding data augmentation techniques are needed to cooperate with our method.
\ No newline at end of file
diff --git a/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/images.zip b/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..48e5b32ffc952b623dbfeaa5bbcf8854d524747e
--- /dev/null
+++ b/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec22ab939cc5c36c1f6cde3b114892d5eecf6c0802bc42f25b1b72cc2ac67f1b
+size 681025
diff --git a/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/layout.json b/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d4d7f9bc6676ca70b652beb765e6b547f4cb3290
--- /dev/null
+++ b/clusteringfriendlyrepresentationlearningviainstancediscriminationandfeaturedecorrelation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b4286a4e30950c7820a12898085f613e2d901080373d97adfabe6e01c3081ec9
+size 570685
diff --git a/co2consistentcontrastforunsupervisedvisualrepresentationlearning/4c17f36e-57c1-42bb-95b9-78baab5f4b0d_content_list.json b/co2consistentcontrastforunsupervisedvisualrepresentationlearning/4c17f36e-57c1-42bb-95b9-78baab5f4b0d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..760c747b3b32ef414cf03d40e44716701ccf1b57
--- /dev/null
+++ b/co2consistentcontrastforunsupervisedvisualrepresentationlearning/4c17f36e-57c1-42bb-95b9-78baab5f4b0d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e7fe723a6e99396fa6cf6f2b641a2bd970ca8bb4cce4499b4353086bd1cdd58
+size 80643
diff --git a/co2consistentcontrastforunsupervisedvisualrepresentationlearning/4c17f36e-57c1-42bb-95b9-78baab5f4b0d_model.json b/co2consistentcontrastforunsupervisedvisualrepresentationlearning/4c17f36e-57c1-42bb-95b9-78baab5f4b0d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..00508272b6338354e8c2f238ca0a3e18d0a0cd3f
--- /dev/null
+++ b/co2consistentcontrastforunsupervisedvisualrepresentationlearning/4c17f36e-57c1-42bb-95b9-78baab5f4b0d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5eccfdb1fb9b52949a41e4f8b33c14ca4951b287fd4b462a1d65bbbffa14205e
+size 100957
diff --git a/co2consistentcontrastforunsupervisedvisualrepresentationlearning/4c17f36e-57c1-42bb-95b9-78baab5f4b0d_origin.pdf b/co2consistentcontrastforunsupervisedvisualrepresentationlearning/4c17f36e-57c1-42bb-95b9-78baab5f4b0d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..36260cc450ab52e773dff33bdfc251ff197a03e9
--- /dev/null
+++ b/co2consistentcontrastforunsupervisedvisualrepresentationlearning/4c17f36e-57c1-42bb-95b9-78baab5f4b0d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25b37a541786e863bd622ea1fa79a9846babe5b524eca7d3cb003280503b0e07
+size 4878226
diff --git a/co2consistentcontrastforunsupervisedvisualrepresentationlearning/full.md b/co2consistentcontrastforunsupervisedvisualrepresentationlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d71353eb91ad2dc86481bfde419232aba25cdf2c
--- /dev/null
+++ b/co2consistentcontrastforunsupervisedvisualrepresentationlearning/full.md
@@ -0,0 +1,281 @@
+# CO2: CONSISTENT CONTRAST FOR UNSUPERVISED VISUAL REPRESENTATION LEARNING
+
+Chen Wei $^{1}$ , Huiyu Wang $^{1}$ , Wei Shen $^{2*}$ , Alan Yuille $^{1}$
+
+1 Johns Hopkins University 2 Shanghai Jiao Tong University
+
+# ABSTRACT
+
+Contrastive learning has been adopted as a core method for unsupervised visual representation learning. Without human annotation, the common practice is to perform an instance discrimination task: Given a query image crop, this task labels crops from the same image as positives, and crops from other randomly sampled images as negatives. An important limitation of this label assignment strategy is that it cannot reflect the heterogeneous similarity between the query crop and each crop from other images: all are taken as equally negative, while some may even belong to the same semantic class as the query. To address this issue, inspired by consistency regularization in semi-supervised learning on unlabeled data, we propose Consistent Contrast (CO2), which introduces a consistency regularization term into the current contrastive learning framework. Regarding the similarity of the query crop to each crop from other images as "unlabeled", the consistency term takes the corresponding similarity of a positive crop as a pseudo label, and encourages consistency between these two similarities. Empirically, CO2 improves Momentum Contrast (MoCo) by $2.9\%$ top-1 accuracy on the ImageNet linear protocol, and by $3.8\%$ and $1.1\%$ top-5 accuracy on the $1\%$ and $10\%$ labeled semi-supervised settings. It also transfers to image classification, object detection, and semantic segmentation on PASCAL VOC. This shows that CO2 learns better visual representations for these downstream tasks.
+
+# 1 INTRODUCTION
+
+Unsupervised visual representation learning has attracted increasing research interest because it unlocks the potential of large-scale pre-training for vision models without human annotation. Most recent works learn representations through one or more pretext tasks, in which labels are automatically generated from the image data itself. Several early methods propose pretext tasks that explore the inherent structure within a single image. For example, by identifying spatial arrangement (Doersch et al., 2015), orientation (Gidaris et al., 2018), or chromatic channels (Zhang et al., 2016), models learn representations useful for downstream tasks. Recently, another line of work (Wu et al., 2018; Bachman et al., 2019; Hjelm et al., 2018; Tian et al., 2019; He et al., 2020; Misra & van der Maaten, 2020; Chen et al., 2020a), e.g. Momentum Contrast (MoCo), falls within the framework of contrastive learning (Hadsell et al., 2006), which directly learns relations between images as the pretext task. In practice, contrastive learning methods show better generalization on downstream tasks.
+
+Although designed differently, most contrastive learning methods perform an instance discrimination task, i.e., contrasting between image instances. Specifically, given a query crop from one image, a positive sample is an image crop from the same image; negative samples are crops randomly sampled from other images in the training set. Thus, the label for instance discrimination is a one-hot encoding over the positive and negative samples. The objective is to bring crops from the same image together and keep crops from different images apart in the feature space, forming an instance discrimination task.
+
+However, the one-hot label used by instance discrimination might be problematic, since it takes all the crops from other images as equally negative, which cannot reflect the heterogeneous similarities between the query crop and each of them. For example, some "negative" samples are semantically similar to the query, or even belong to the same semantic class as the query. This is referred to as
+
+"class collision" in Saunshi et al. (2019) and "sampling bias" in Chuang et al. (2020). The ignorance of the heterogeneous similarities between the query crop and the crops from other images can thus raise an obstacle for contrastive methods to learn a good representation. A recent work, supervised contrastive learning (Khosla et al., 2020), fixes this problem by using human annotated class labels and achieves strong classification performance. However, in unsupervised representation learning, the human annotated class labels are unavailable, and thus it is more challenging to capture the similarities between crops.
+
+In this paper, we propose to view this instance discrimination task from the perspective of semi-supervised learning. The positive crop should be similar to the query for sure since they are from the same image, and thus can be viewed as labeled. On the contrary, the similarity between the query and each crop from other images is unknown, or unlabeled. With the viewpoint of semi-supervised learning, we introduce Consistent Contrast (CO2), a consistency regularization method which fits into current contrastive learning framework. Consistency regularization (Sajjadi et al., 2016) is at the core of many state-of-the-art semi-supervised learning algorithms (Xie et al., 2019; Berthelot et al., 2019b; Sohn et al., 2020). It generates pseudo labels for unlabeled data by relying on the assumption that a good model should output similar predictions on perturbed versions of the same image. Similarly, in unsupervised contrastive learning, since the query crop and the positive crop naturally form two perturbed versions of the same image, we encourage them to have consistent similarities to each crop from other images. Specifically, the similarity of the positive sample predicted by the model is taken as a pseudo label for that of the query crop.
+
+Our model is trained with both the original instance discrimination loss term and the introduced consistency regularization term. The instance discrimination label and the pseudo similarity label jointly construct a virtual soft label on-the-fly, and the soft label further guides the model itself in a bootstrap manner. In this way, CO2 exploits the consistency assumption on unlabeled data, mitigates the "class collision" effect introduced by the one-hot labels, and results in a better visual representation. More importantly, our work brings a new perspective of unsupervised visual representation learning. It relaxes the stereotype that the pretext task can only be self-supervised which aims to construct artificial labels for every sample, e.g., a specific degree of rotation (Gidaris et al., 2018), a configuration of jigsaw puzzle (Noroozi & Favaro, 2016), and a one-hot label that indicates whether a crop comes from the same instance or not (Wu et al., 2018). In contrast, the pretext task can also be self-semi-supervised, allowing the task itself to be partially labeled. This relaxation is especially helpful when information for artificial label construction is not enough and imposing a label is harmful, such as the case of imposing the one-hot labels in instance discrimination.
+
+This simple modification brings consistent gains on various evaluation protocols. We first benchmark CO2 on ImageNet (Deng et al., 2009) linear classification protocol. CO2 improves MoCo by $2.9\%$ on top-1 accuracy. It also provides $3.8\%$ and $1.1\%$ top-5 accuracy gains under the semi-supervised setting on ImageNet with $1\%$ and $10\%$ labels respectively, showing the effectiveness of the introduced consistency regularization. We also evaluate the transfer ability of the learned representations on three different downstream tasks: image classification, object detection and semantic segmentation. CO2 models consistently surpass their MoCo counterparts, showing that CO2 can improve the generalization ability of learned representation. Besides, our experiments on ImageNet-100 (Tian et al., 2019) demonstrate the efficacy of CO2 on SimCLR (Chen et al., 2020a), showing the generality of our method on different contrastive learning frameworks.
+
+# 2 METHOD
+
+In this section, we begin by formulating current unsupervised contrastive learning as an instance discrimination task. Then, we propose our consistency regularization term which addresses the ignorance of the heterogeneous similarity between the query crop and each crop of other images in the instance discrimination task.
+
+# 2.1 CONTRASTIVE LEARNING
+
+Contrastive learning (Hadsell et al., 2006) is recently adopted as an objective for unsupervised learning of visual representations. Its goal is to find a parametric function $f_{\theta}:\mathbb{R}^{D}\to \mathbb{R}^{d}$ that maps an input vector $\mathbf{x}$ to a feature vector $f_{\theta}(\mathbf{x})\in \mathbb{R}^{d}$ with $D\gg d$ , such that a simple distance measure (e.g., cosine distance) in the low-dimensional feature space can reflect complex similarities in the high-dimensional input space.
+
+
+Figure 1: Illustration of (a) instance discrimination and (b) our consistency regularization term. $\mathbf{q}$ is a query and $\mathbf{p}$ is a positive key, both encoded from crops of the same image. $\{\mathbf{n}_k\}_{k=1}^K$ are negative keys, encoded from random crops. In (a), the similarities are softmaxed cosine distances between $\mathbf{q}$ and each key ($\mathbf{p}$ and $\{\mathbf{n}_k\}_{k=1}^K$). These similarities are optimized towards an artificial one-hot label which identifies $\mathbf{p}$ among all keys. However, some negatives can be semantically similar, which is not reflected by the one-hot label (e.g., the one marked by a red box). In (b), our proposed consistency regularization encourages agreement between $P$, the positive-negative similarities, and $Q$, the query-negative similarities, reflecting the heterogeneous similarities between the query/positive and the negatives.
+
+For each input vector $\mathbf{x}_i$ in the training set $\mathbb{S}$ , the similarity measure in the input space is defined by a subset of training vectors $\mathbb{S}_i \subset \mathbb{S}$ , called similarity set. The sample $\mathbf{x}_i$ is deemed similar to samples in the similarity set $\mathbb{S}_i$ , but dissimilar to samples in $\mathbb{S} \setminus \mathbb{S}_i$ . Then, the contrastive objective encourages $f_{\theta}(\mathbf{x}_j)$ to be close to $f_{\theta}(\mathbf{x}_i)$ in the feature space if $\mathbf{x}_j \in \mathbb{S}_i$ , and otherwise to be distant.
+
+By training with contrastive loss, the similarities defined by the similarity set determine characteristics of the learned representation and the mapping function $f_{\theta}$ . For example, if the similarity is defined as samples from the same semantic class, then $f_{\theta}$ will probably learn invariances to other factors, e.g., object deformation. In the supervised setting, this definition of similarity requires a large amount of human labeling. On the contrary, unsupervised contrastive learning exploits similarities with no need of human labels. One natural definition of unsupervised similarity is multiple views of an image, as explored by many recent methods. For example, random augmented crops (Wu et al., 2018; Ye et al., 2019; He et al., 2020; Chen et al., 2020a;b) of an image could be defined as a similarity set. In this case, the contrastive objective is effectively solving an instance discrimination task (Wu et al., 2018) as illustrated in Figure 1a.
+
+The training of this instance discriminator involves randomly sampling a query crop $\mathbf{x}^q\in \mathbb{S}_i$ , a positive crop $\mathbf{x}^p\in \mathbb{S}_i$ from the same image, and $K$ negative crops $\{\mathbf{x}^k\in \mathbb{S}\setminus \mathbb{S}_i\}_{k = 1}^K$ from other images. These $K + 2$ crops (the query, the positive, and $K$ negatives) are encoded with $f_{\theta}$ respectively, $\mathbf{q} = f_{\theta}(\mathbf{x}^{q}),\mathbf{p} = f_{\theta}(\mathbf{x}^{p}),\mathbf{n}_{k} = f_{\theta}(\mathbf{x}^{k})$ . Then, an effective contrastive loss function, InfoNCE (Hjelm et al., 2018), is written as:
+
+$$
+\mathcal{L}_{ins} = -\log \frac{\exp\left(\mathbf{q}\cdot\mathbf{p}/\tau_{ins}\right)}{\exp\left(\mathbf{q}\cdot\mathbf{p}/\tau_{ins}\right) + \sum_{k=1}^{K} \exp\left(\mathbf{q}\cdot\mathbf{n}_{k}/\tau_{ins}\right)}, \tag{1}
+$$
+
+where $\tau_{ins}$ is a temperature hyper-parameter (Hinton et al., 2015). This loss can be interpreted as a cross entropy loss that trains the model to discriminate the positive crop (labeled as 1) from negative crops (labeled as 0) given the query crop. We denote this loss as $\mathcal{L}_{ins}$ as it performs an instance discrimination task. One direct instantiation of InfoNCE loss, represented by SimCLR (Chen et al., 2020a), formulates $f_{\theta}$ as an end-to-end encoder. In this case, two crops of the same image are exchangeable or symmetric to each other as both are encoded by $f_{\theta}$ . The final loss is also symmetric
+
+with either one of the two crops as the query and the other crop as the positive. Another popular instantiation, represented by MoCo (He et al., 2020), encodes the query with $f_{\theta}$ and encodes the positive and the negatives with $f_{\theta'}$ which is the moving average of $f_{\theta}$ . In this case, only $\mathbf{q}$ can propagate gradients, which causes $\mathcal{L}_{ins}$ to be asymmetric.
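To make Eq. (1) concrete, the following is a minimal NumPy sketch of the InfoNCE loss for a single query. It is an illustration, not the authors' implementation; it assumes all feature vectors are already unit-normalized, so dot products are cosine similarities.

```python
import numpy as np

def info_nce(q, p, negs, tau_ins=0.07):
    """InfoNCE loss of Eq. (1) for a single query.

    q, p : (d,) unit-normalized query / positive features.
    negs : (K, d) unit-normalized negative features.
    """
    pos_logit = np.dot(q, p) / tau_ins      # q . p / tau_ins
    neg_logits = negs @ q / tau_ins         # q . n_k / tau_ins for each k
    # cross entropy against the one-hot label that selects the positive
    denom = np.exp(pos_logit) + np.exp(neg_logits).sum()
    return float(-np.log(np.exp(pos_logit) / denom))
```

With a well-aligned positive and orthogonal negatives the loss is near zero; it grows as the query drifts toward a negative, matching the instance discrimination behavior described above.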
+
+# 2.2 CONSISTENT CONTRAST
+
+The one-hot labels used by the InfoNCE loss are effective, showing good generalization ability across tasks and datasets (Chen et al., 2020b;a). Nevertheless, we argue that these hard, zero-one labels are uninformative. Specifically, crops from other images are treated as equally negative since they are all labeled as 0. This contradicts the fact that some so-called "negative" crops can be similar to, or even in the same semantic class as, the query, especially when $K$ is large. For example, SimCLR (Chen et al., 2020a) uses 16,382 negative samples in a batch, and MoCo (He et al., 2020; Chen et al., 2020b) uses a memory bank of 65,536 features as negative samples. Even worse, the current objective forces negatives to be as far from the query as possible, with larger weights for closer negatives since they are "hard negatives". However, these "hard negative" crops in fact tend to be semantically close. These issues impair good representation learning because the one-hot labels cannot faithfully reflect the heterogeneous similarities between the query crop and the crops from other images.
+
+Although generating labels based on instance discrimination is trivial, revealing the similarity between two arbitrary crops is exactly what we want to learn from unsupervised pre-training. Therefore, the label of the similarity between the query crop and each crop from other images is of little hope to get. This situation is similar to the usage of unlabeled data in semi-supervised learning setting, in which consistency regularization is widely used to propagate knowledge from labeled data to discover the structures in unlabeled data. Inspired by this, we propose to encourage the consistency between the similarities of crops from the same image, i.e., the query crop and the positive crop. We illustrate the consistency regularization in Figure 1b.
+
+First, we denote the similarity between the query $\mathbf{q}$ and the negatives $\mathbf{n}_i(i\in \{1,\dots ,K\})$ as:
+
+$$
+Q(i) = \frac{\exp\left(\mathbf{q}\cdot\mathbf{n}_{i}/\tau_{con}\right)}{\sum_{k=1}^{K} \exp\left(\mathbf{q}\cdot\mathbf{n}_{k}/\tau_{con}\right)}, \tag{2}
+$$
+
+where $\tau_{con}$ is also a temperature hyper-parameter. $Q(i)$ is the probability that the query $\mathbf{q}$ selects $\mathbf{n}_i$ as its match from $\{\mathbf{n}_k\}_{k=1}^K$ . Similarly, the similarity between the positive $\mathbf{p}$ and the negatives is written as:
+
+$$
+P(i) = \frac{\exp\left(\mathbf{p}\cdot\mathbf{n}_{i}/\tau_{con}\right)}{\sum_{k=1}^{K} \exp\left(\mathbf{p}\cdot\mathbf{n}_{k}/\tau_{con}\right)}. \tag{3}
+$$
+
+We impose the consistency between the probability distributions $P$ and $Q$ by using symmetric Kullback-Leibler (KL) Divergence as the measure of disagreement:
+
+$$
+\mathcal{L}_{con} = \frac{1}{2} D_{\mathrm{KL}}(P\|Q) + \frac{1}{2} D_{\mathrm{KL}}(Q\|P). \tag{4}
+$$
+
+When $\mathbf{p}$ and $\mathbf{q}$ are encoded by the same end-to-end encoder $f_{\theta}$, it is natural to use symmetric KL as their disagreement measure, since $\mathbf{p}$ and $\mathbf{q}$ are exchangeable. Even when $\mathbf{p}$ and $\mathbf{n}_i$ are encoded by the momentum encoder $f_{\theta'}$, symmetric KL empirically works as well as forward KL, i.e., $D_{\mathrm{KL}}(P\|Q)$, as shown in Section 3.5. Thus, we use symmetric KL as a unified objective for both cases.
+
+The total loss is a weighted average of the original instance discrimination loss term and the consistency regularization term:
+
+$$
+\mathcal{L} = \mathcal{L}_{ins} + \alpha \mathcal{L}_{con}, \tag{5}
+$$
+
+where $\alpha$ denotes the coefficient balancing the two terms. It is possible to merge the two terms by creating a single label containing information from both the one-hot label and the pseudo similarity label, but we find that the weighted average already achieves good performance and is easy to control.
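Putting Eq. (2)-(5) together, the full objective can be sketched as follows. This is a simplified NumPy illustration for a single query (the momentum encoder of MoCo is abstracted away, and features are assumed unit-normalized), not the released implementation.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())                 # numerically stabilized softmax
    return z / z.sum()

def co2_loss(q, p, negs, tau_ins=0.07, tau_con=0.04, alpha=10.0):
    """Total CO2 loss of Eq. (5) for a single query.

    q, p : (d,) unit-normalized query / positive features.
    negs : (K, d) unit-normalized negative features.
    """
    # instance discrimination term, Eq. (1)
    logits = np.concatenate(([q @ p], negs @ q)) / tau_ins
    l_ins = -np.log(softmax(logits)[0])
    # similarity distributions over the negatives, Eq. (2) and (3)
    Q = softmax(negs @ q / tau_con)
    P = softmax(negs @ p / tau_con)
    # symmetric KL consistency term, Eq. (4)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    l_con = 0.5 * kl(P, Q) + 0.5 * kl(Q, P)
    return float(l_ins + alpha * l_con)
```

When the query and positive coincide, $P$ and $Q$ are identical, the consistency term vanishes, and the loss reduces to the InfoNCE term alone; any disagreement between the two similarity distributions adds a nonnegative penalty.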
+
+The pseudo label is informative in revealing the similarity between the query $\mathbf{q}$ and each $\mathbf{n}_i$, while the one-hot label cannot provide such information, since it only describes co-occurrence within one image. Note that the pseudo label is also dynamic, since the embedding function $f_{\theta}$ is updated at every training step and thus generates better pseudo labels as training proceeds. This indicates that the unsupervised embedding function and the soft similarity labels provide positive feedback to each other.
+
+Table 1: Linear classification protocol on ImageNet-1K
+
+| Pretext Task | Arch. | Head | #epochs | Top-1 Acc. (%) |
+| --- | --- | --- | --- | --- |
+| ImageNet Classification | R50 | - | 90 | 76.5 |
+| Exemplar (Dosovitskiy et al., 2014) | R50w3× | - | 35 | 46.0 |
+| Relative Position (Doersch et al., 2015) | R50w2× | - | 35 | 51.4 |
+| Rotation (Gidaris et al., 2018) | Rv50w4× | - | 35 | 55.4 |
+| Jigsaw (Noroozi & Favaro, 2016) | R50 | - | 90 | 45.7 |
+| *Methods based on contrastive learning:* | | | | |
+| InsDisc (Wu et al., 2018) | R50 | Linear | 200 | 54.0 |
+| Local Agg. (Zhuang et al., 2019) | R50 | Linear | 200 | 58.2 |
+| CPC v2 (Hénaff et al., 2019) | R170w | - | ~200 | 65.9 |
+| CMC (Tian et al., 2019) | R50 | Linear | 240 | 60.0 |
+| AMDIM (Bachman et al., 2019) | AMDIMlarge | - | 150 | 68.1 |
+| PIRL (Misra & van der Maaten, 2020) | R50 | Linear | 800 | 63.6 |
+| SimCLR (Chen et al., 2020a) | R50 | MLP | 1000 | 69.3 |
+| MoCo (He et al., 2020) | R50 | Linear | 200 | 60.6 |
+| MoCo (He et al., 2020) + CO2 | R50 | Linear | 200 | 63.5 |
+| MoCo v2 (Chen et al., 2020b) | R50 | MLP | 200 | 67.5 |
+| MoCo v2 (Chen et al., 2020b) + CO2 | R50 | MLP | 200 | 68.0 |
+
+Table 2: Top-5 accuracy for semi-supervised learning on ImageNet
+
+| Pretext Task | 1% labels | 10% labels |
+| --- | --- | --- |
+| Supervised Baseline | 48.4 | 80.4 |
+| InsDisc (Wu et al., 2018) | 39.2 | 77.4 |
+| PIRL (Misra & van der Maaten, 2020) | 57.2 | 83.8 |
+| MoCo (He et al., 2020) | 62.4 | 84.1 |
+| MoCo (He et al., 2020) + CO2 | 66.2 | 85.2 |
+| MoCo v2 (Chen et al., 2020b) | 69.5 | 85.1 |
+| MoCo v2 (Chen et al., 2020b) + CO2 | 70.6 | 85.4 |
+
+Our method is simple and low-cost. It captures the similarity to each $\mathbf{n}_i$ while introducing negligible computational overhead, since only one extra loss term is computed. This is unlike clustering-based unsupervised learning methods, which are costly because they explicitly compute the similarity sets over the training set after every training epoch (Caron et al., 2018; Zhuang et al., 2019; Li et al., 2020; Caron et al., 2020).
+
+# 3 EXPERIMENTS
+
+Herein, we first report our implementation details and benchmark the learned representations on ImageNet. Next, we examine how the unsupervised pre-trained models transfer to other datasets and tasks. We then analyze the characteristics of our proposed method.
+
+# 3.1 SETUP
+
+Setup We mainly evaluate CO2 based on MoCo (He et al., 2020) and MoCo v2 (Chen et al., 2020b). Both use instance discrimination as the pretext task, while MoCo v2 adopts more sophisticated design choices for the projection head architecture, learning rate schedule, and data augmentation strategy. We test CO2 on MoCo for its representativeness and simplicity. On MoCo v2, we evaluate how compatible CO2 is with these advanced design choices. We also demonstrate the impact of CO2 on the end-to-end contrastive framework in Section 3.5.
+
+The unsupervised training is performed on the train split of ImageNet-1K (Deng et al., 2009) without using label information. We keep every detail aligned with our baseline MoCo to effectively pinpoint the contribution of our approach, except the number of GPUs (MoCo uses 8 GPUs while we use 4). A further search on MoCo-related hyper-parameters might lead to better results for our method. For the hyper-parameters of CO2, we set $\tau_{con}$ to 0.04 and $\alpha$ to 10 for MoCo-based CO2, and $\tau_{con}$ to 0.05 and $\alpha$ to 0.3 for MoCo v2-based CO2. Please refer to the appendix for a more detailed implementation description.
+
+Table 3: Transfer learning performance on PASCAL VOC datasets
+
+| Pretext Task | Classification mAP | Detection \( AP_{50} \) | Detection \( AP_{all} \) | Detection \( AP_{75} \) | Segmentation mIoU |
+| --- | --- | --- | --- | --- | --- |
+| ImageNet Classification | 88.0 | 81.3 | 53.5 | 58.8 | 74.4 |
+| Rotation (Gidaris et al., 2018) | 63.9 | 72.5 | 46.3 | 49.3 | - |
+| Jigsaw (Noroozi & Favaro, 2016) | 64.5 | 75.1 | 48.9 | 52.9 | - |
+| InsDisc (Wu et al., 2018) | 76.6 | 79.1 | 52.3 | 56.9 | - |
+| PIRL (Misra & van der Maaten, 2020) | 81.1 | 80.7 | 54.0 | 59.7 | - |
+| SimCLR (Chen et al., 2020a)* | - | 81.8 | 55.5 | 61.4 | - |
+| BYOL (Grill et al., 2020)* | - | 81.4 | 55.3 | 61.1 | - |
+| SwAV (Caron et al., 2020)* | - | 81.5 | 55.4 | 61.4 | - |
+| SimSiam (Chen & He, 2020)* | - | 82.4 | 57.0 | 63.7 | - |
+| MoCo (He et al., 2020) | - | 81.5 | 55.9 | 62.6 | 72.5 |
+| MoCo (He et al., 2020) (our impl.) | 79.7 | 81.6 | 56.2 | 62.4 | 72.6 |
+| MoCo (He et al., 2020) + CO2 | 82.6 | 81.9 | 56.0 | 62.6 | 73.3 |
+| MoCo v2 (Chen et al., 2020b) | 85.0 | 82.4 | 57.0 | 63.6 | 74.2 |
+| MoCo v2 (Chen et al., 2020b) + CO2 | 85.2 | 82.7 | 57.2 | 64.1 | 74.7 |
+
+* Results reported in Chen & He (2020).
+
+# 3.2 LINEAR CLASSIFICATION
+
+We first benchmark the learned representations on the common linear classification protocol. After the unsupervised pre-training stage, we freeze the backbone network, including the batch normalization parameters, and train a linear classifier consisting of a fully-connected layer and a softmax layer on the 2048-D features following the global average pooling layer. Table 1 summarizes the single-crop top-1 classification accuracy on the validation set of ImageNet-1K. Our method consistently improves MoCo by $2.9\%$ and MoCo v2 by $0.5\%$. We also list several top-performing methods in the table for reference. These results indicate that the representation is more linearly separable on ImageNet with consistency regularization, since the consistency regularization mitigates the "class collision" effect caused by semantically similar negative samples.
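The linear protocol can be illustrated in miniature: freeze the features and fit only a softmax classifier on top. The sketch below is a toy stand-in (synthetic 8-D "frozen features" instead of a ResNet-50's 2048-D outputs), not the paper's evaluation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_probe(feats, labels, n_cls, lr=0.1, epochs=200):
    """Softmax linear classifier on frozen features (toy gradient descent)."""
    n, d = feats.shape
    W = np.zeros((d, n_cls))
    onehot = np.eye(n_cls)[labels]
    for _ in range(epochs):
        logits = feats @ W
        logits -= logits.max(axis=1, keepdims=True)   # stabilize softmax
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (probs - onehot) / n      # cross-entropy gradient
    return W

# synthetic frozen "features": two linearly separable blobs
feats = np.concatenate([rng.normal(-2, 0.5, (50, 8)),
                        rng.normal(2, 0.5, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
W = train_linear_probe(feats, labels, 2)
acc = ((feats @ W).argmax(axis=1) == labels).mean()
```

The backbone never receives gradients; only `W` is learned, so the accuracy measures how linearly separable the frozen representation already is.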
+
+# 3.3 SEMI-SUPERVISED LEARNING
+
+We next perform semi-supervised learning on ImageNet to evaluate the effectiveness of the pretrained network in data-efficient settings. Following (Wu et al., 2018; Misra & van der Maaten, 2020; Chen et al., 2020a), we finetune the whole pre-trained networks with only $1\%$ and $10\%$ labels which are sampled in a class-balanced way. Table 2 summarizes the mean of the top-5 accuracy on the validation set of ImageNet-1K over three runs. The results for MoCo and MoCo v2 are produced by us using their officially released models. The proposed consistency regularization term can provide $3.8\%$ and $1.1\%$ top-5 accuracy gains for MoCo with $1\%$ and $10\%$ labels respectively. CO2 also improves from MoCo v2 by $1.1\%$ top-5 accuracy with $1\%$ labels, and by $0.3\%$ with $10\%$ labels.
+
+# 3.4 TRANSFER LEARNING
+
+To further investigate the generalization ability of our models across datasets and tasks, we evaluate the transfer learning performance on PASCAL VOC (Everingham et al., 2015) with three typical visual recognition tasks, i.e., image classification, object detection and semantic segmentation. Table 3 reports the transfer learning performance compared with other methods using ResNet-50. CO2 shows competitive or better performance compared with the corresponding baselines. In addition, it achieves better performance than state-of-the-art unsupervised representation learning methods.
+
+Image Classification Following the evaluation setup in Goyal et al. (2019), we train a linear SVM (Boser et al., 1992) on the frozen 2048-D features extracted after the global average pooling layer. The results of MoCo are produced by us with their official models. In this case, CO2 is $2.9\%$ better than MoCo, and $0.2\%$ better than MoCo v2.
+
+Figure 2: Ablation on the effect of hyper-parameters. (a) Effect of varying the coefficient $\alpha$. (b) Effect of varying the temperature $\tau_{con}$.
+
+Figure 3: Training curves of ResNet-18 on ImageNet-100. (a) $\mathcal{L}_{ins}$. (b) $\mathcal{L}_{con}$. (c) Instance discrimination accuracy.
+
+Object Detection Following the detection benchmark set up in He et al. (2020), we use Faster R-CNN (Ren et al., 2015) object detector and ResNet-50 C4 (He et al., 2017) backbone, and all the layers are finetuned including the batch normalization parameters. The numbers of our method are averaged over three runs. Our reproduced results for MoCo are also listed in the table for reference. CO2 provides $0.3\%$ $\mathrm{AP}_{50}$ gains on both MoCo and MoCo v2.
+
+Semantic Segmentation We follow the settings in He et al. (2020) for semantic segmentation. Results are averaged over three runs. Similarly, we include our reproduced results of MoCo as a reference. The result of MoCo v2 is produced by us using its officially released model. CO2 gives a $0.9\%$ mIoU improvement upon MoCo, and $0.5\%$ upon MoCo v2, finally surpassing its supervised counterpart.
+
+The overall transfer learning improvements, though consistent, are smaller than those on linear classification and semi-supervised learning. Similar observations have also been made in Chen et al. (2020b). We hypothesize that current unsupervised contrastive methods, which bring close different crops from the same image, reduce the representation's sensitivity to location, which is useful for tasks like detection. It remains an open question which properties of an unsupervised representation benefit the transfer ability to various downstream tasks.
+
+# 3.5 ANALYSIS
+
+In this section, we study the characteristics of the proposed method on a smaller backbone, ResNet-18, and a smaller dataset, ImageNet-100, due to computational resource considerations. ImageNet-100 was first used in Tian et al. (2019) and consists of 100 classes randomly selected from all 1,000 classes of ImageNet.
+
+Hyper-parameters Our method introduces two new hyper-parameters: the coefficient of the consistency regularization term, $\alpha$, and its temperature, $\tau_{con}$. In Figure 2, we show the top-1 accuracy of a linear classifier on models pre-trained by CO2 with different hyper-parameters. In Figure 2a, we fix the temperature $\tau_{con}$ at 0.04 and vary the coefficient $\alpha$. The best coefficient is 10. We see that with the consistency regularization term, the linear classification accuracy can be boosted from $63.6\%$ to $69.2\%$. Increasing $\alpha$ to 20 and beyond causes performance degradation. We hypothesize that the model is over-regularized by the consistency loss, and thus loses some discrimination among different instances. In Figure 2b, we fix the coefficient at 10 and vary the temperature. As in other consistency regularization methods (e.g., Berthelot et al. (2019b)), the temperature $\tau_{con}$ strongly influences the quality of the learned representation, and the best value is 0.04.
+
+Training Curves In Figure 3 we show the training curves of the instance discrimination loss $\mathcal{L}_{ins}$, the consistency loss $\mathcal{L}_{con}$, and the instance discrimination accuracy. Instance discrimination accuracy is the percentage of query crops that successfully select their corresponding positive crops, i.e., successfully identify their instances. MoCo is trained with $\mathcal{L}_{ins}$ only; its $\mathcal{L}_{con}$ is computed only for comparison. We see that $\mathcal{L}_{ins}$ of MoCo drops quickly from the beginning at the cost of a jump in $\mathcal{L}_{con}$. As training proceeds, $\mathcal{L}_{con}$ of MoCo decreases spontaneously, possibly because more semantic knowledge has been learned, but it remains relatively high. When training with $\mathcal{L}_{con}$ and $\mathcal{L}_{ins}$ together, i.e., MoCo + CO2, $\mathcal{L}_{con}$ is kept very low from the beginning and increases only gradually, since the model is simultaneously trained to discriminate between images. At the end of training, its $\mathcal{L}_{con}$ stays much lower than that of MoCo.
+
+We also notice that with CO2, the instance discrimination accuracy drops from $97.57\%$ to $95.26\%$ . Although CO2 results in lower instance discrimination accuracy, it still does better in the downstream classification task. The linear classification accuracy improves from $63.6\%$ to $69.2\%$ , as shown in Figure 2a. It suggests again that there is a gap between instance discrimination and the downstream tasks.
+
+Comparison with Label Smoothing With the consistency regularization term, our approach assigns soft pseudo labels to crops from other images. This looks similar to label smoothing regularization in supervised classification (Szegedy et al., 2016), a useful trick which assigns a small constant value to the labels of all negative classes to avoid overconfidence. We equip MoCo with label smoothing, i.e., assigning a small constant value to crops from other images (the "negatives"). Surprisingly, this variant achieves only $61.2\%$ linear classification accuracy, $2.4\%$ lower than MoCo alone. This suggests that assigning a constant value, as in label smoothing, can be harmful for unsupervised contrastive learning, since it ignores the heterogeneous similarity relationships; it is better to assign labels according to the similarities, as our consistency regularization does.
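The contrast between the two label assignments can be made explicit in a few lines. The similarity values below are made-up illustration data, not measurements from the paper.

```python
import numpy as np

K = 4                                   # number of negatives (toy example)
eps = 0.1                               # total smoothing mass for negatives

# label smoothing: every "negative" gets the same constant target,
# regardless of how similar it actually is to the query
smooth_target = np.concatenate(([1.0 - eps], np.full(K, eps / K)))

# CO2-style pseudo label: negatives weighted by softmax similarity to the
# positive crop, as in Eq. (3) (the cosine similarities are hypothetical)
sims = np.array([0.9, 0.1, 0.05, -0.2])  # hypothetical p . n_i values
tau_con = 0.04
z = np.exp(sims / tau_con)
pseudo = z / z.sum()                    # a non-uniform distribution
```

Label smoothing spreads a uniform `eps / K` over all negatives, while the CO2 pseudo label concentrates mass on the semantically closest negatives, which is exactly the heterogeneity the one-hot and smoothed labels discard.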
+
+End-to-End Encoder To further verify the effectiveness of the proposed consistency regularization term on different contrastive learning frameworks, we apply CO2 to SimCLR (Chen et al., 2020a), a representative method with an end-to-end encoder (without a momentum encoder). The results are presented in Table 4. On ImageNet-100 (Tian et al., 2019) with a ResNet-18, SimCLR obtains $68.9\%$ top-1 linear classification accuracy with a batch size of 128 and a temperature $\tau_{ins}$ of 0.1. Equipped with CO2, with coefficient $\alpha$ set to 0.07 and temperature $\tau_{con}$ set to 1.0, the linear classification accuracy is boosted to $72.3\%$ . The improvement demonstrates that CO2 can be applied to different unsupervised contrastive frameworks and improves the quality of the learned representation regardless of whether a momentum encoder is used.
+
+Table 4: Linear classification accuracy using an end-to-end encoder and with different choices of $\mathcal{L}_{con}$ . The results are summarized as mean and standard deviation over three different runs.
+
+| Method | Acc. (%) |
+| --- | --- |
+| SimCLR | 68.9±0.06 |
+| SimCLR + CO2 | 72.3±0.14 |
+| MoCo | 63.1±0.29 |
+| MoCo + Forward KL | 69.6±0.27 |
+| MoCo + Reverse KL | 65.1±0.52 |
+| MoCo + CO2 | 69.7±0.41 |
+
+Varying the choices of $\mathcal{L}_{con}$ We ablate different variants of $\mathcal{L}_{con}$ (Eq. 4) on MoCo, including forward KL $(D_{\mathrm{KL}}(P\| Q))$ , reverse KL $(D_{\mathrm{KL}}(Q\| P))$ , and the objective of CO2, i.e., symmetric KL. Each of these models uses a coefficient $\alpha$ of 10 and a temperature $\tau_{con}$ of 0.04. We present the linear classification accuracy in Table 4. Our CO2 (symmetric KL) improves over the baseline MoCo by a large margin, from $63.1\%$ to $69.7\%$ . Forward KL alone yields a similar improvement, to $69.6\%$ , and reverse KL alone also provides a nontrivial $2.0\%$ gain in accuracy.
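The three variants differ only in the direction of the KL divergence between the two soft pseudo-label distributions. A minimal numeric sketch (the distributions are arbitrary illustrative values, not model outputs):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D_KL(P || Q) for discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

# Two soft pseudo-label distributions over the same set of negatives,
# e.g., one induced by the query crop, one by its positive crop.
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])

forward_kl = kl(p, q)                   # D_KL(P || Q)
reverse_kl = kl(q, p)                   # D_KL(Q || P)
symmetric_kl = forward_kl + reverse_kl  # the CO2 objective
```

Note that KL is asymmetric, so the forward and reverse variants generally give different values, which is consistent with their different downstream accuracies in the ablation.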
+
+# 4 RELATED WORK
+
+Our method falls in the area of unsupervised visual representation learning, especially for image data. In this section, we first revisit various design strategies of pretext tasks for unsupervised learning. Then we elaborate on the pretext tasks based on contrastive learning, which is the focus of our work. Next, we review the methods using consistency regularization in semi-supervised learning, which inspire our work.
+
+Unsupervised Learning and Pretext Tasks To learn from unlabeled image data, a wide range of pretext tasks have been established. The models can be taught to specify the relative position of a patch (Doersch et al., 2015), solve spatial jigsaw puzzles (Noroozi & Favaro, 2016; Wei et al., 2019), colorize grayscale images (Zhang et al., 2016; Larsson et al., 2017), inpaint images (Pathak et al., 2016), count objects (Noroozi et al., 2017), discriminate orientation (Gidaris et al., 2018), iteratively cluster (Caron et al., 2018; Zhuang et al., 2019; Asano et al., 2019; Zhong et al., 2020), generate images (Donahue et al., 2016; Donahue & Simonyan, 2019), etc. Doersch & Zisserman (2017) evaluate the combination of different pretext tasks. Kolesnikov et al. (2019) and Goyal et al. (2019) revisit and benchmark different pretext tasks.
+
+Contrastive Learning Contrastive learning (Hadsell et al., 2006) has recently put a new perspective on the design of pretext tasks and holds the key to most state-of-the-art methods. Most of these methods perform an instance discrimination task but differ in i) the strategies to synthesize positives and negatives, and ii) the mechanisms to manage a large number of negatives. Synthesis can be based on context with patches (Hjelm et al., 2018; 2019), random resized crops with data augmentation (Wu et al., 2018; Ye et al., 2019; Bachman et al., 2019; He et al., 2020; Chen et al., 2020a), jigsaw puzzle transformations (Misra & van der Maaten, 2020), or luminance-chrominance decomposition (Tian et al., 2019). Regarding the mechanisms to maintain negative features, some methods (Wu et al., 2018; Misra & van der Maaten, 2020) keep track of the features of all images, some directly utilize the samples within the mini-batch (Chen et al., 2020a; Tian et al., 2019; Ye et al., 2019), and He et al. (2020) proposes to use a momentum encoder. Grill et al. (2020) recently proposes to use only positive examples, without negatives. Li et al. (2020) argues that the lack of semantic structure is one fundamental weakness of instance discrimination, and proposes to generate prototypes by k-means clustering; however, the computational overhead and the degeneration introduced by clustering are difficult to address. Chuang et al. (2020) also points out the possible sampling bias of instance discrimination, and proposes a debiased objective.
+
+Consistency Regularization Consistency regularization is an important component of many successful semi-supervised learning methods. It was first proposed in Sajjadi et al. (2016), encouraging similar predictions on perturbed versions of the same image. Besides the consistency regularization on unlabeled data, the model is simultaneously trained with a supervised loss on a small set of labeled data. Several works have improved the perturbation strategy, including using an adversarial transformation (Miyato et al., 2018), using the prediction of a moving average or previous model (Tarvainen & Valpola, 2017; Laine & Aila, 2017), and using strong data augmentation (Xie et al., 2019). Recently, several larger pipelines have been proposed (Berthelot et al., 2019b;a; Sohn et al., 2020), in which consistency regularization still serves as a core component.
+
+The instance discrimination loss in unsupervised contrastive learning is analogous to the supervised loss in semi-supervised learning, as both rely on some concrete information, i.e., co-occurrence in one image and human annotation, respectively. Meanwhile, CO2's regularization on the similarities between crops is analogous to the consistency regularization on unlabeled samples in semi-supervised methods, as their labels are both unknown. The main difference, however, is that semi-supervised methods crucially rely on the supervised loss to warm up the model, while there is no human annotation at all in unsupervised contrastive learning. Our work presents an example in which a model learned completely without human annotations can generate surprisingly effective pseudo-labels for the similarities between different crops and benefit from consistency regularization.
+
+# 5 DISCUSSION
+
+Unsupervised visual representation learning has shown encouraging progress recently, thanks to the introduction of instance discrimination and the contrastive learning framework. However, in this paper, we point out that instance discrimination is ignorant of the heterogeneous similarities between image crops. We address this issue with a consistency regularization term on the similarities between crops, inspired by semi-supervised learning methods that impose consistency regularization on unlabeled data. In this simple way, the proposed CO2 consistently improves linear and semi-supervised image classification. It also transfers to other datasets and downstream tasks.
+
+More broadly, we encourage researchers to rethink label correctness in existing pretext tasks. Taking instance discrimination as an example, we show that a pretext task itself could, in fact, be a semi-supervised learning task. It might be harmful to treat the pretext task as a simple, purely supervised task by assuming the unknown labels are negatives. In addition, our work relaxes the stereotypical restriction that pretext task labels should always be known and clean. We hope this relaxation can give rise to novel pretext tasks that exploit noisy or partially-available labels, making better use of data without human annotation.
+
+# REFERENCES
+
+YM Asano, C Rupprecht, and A Vedaldi. Self-labelling via simultaneous clustering and representation learning. In ICLR, 2019.
+Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In NeurIPS, 2019.
+David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. ReMixMatch: Semi-supervised learning with distribution matching and augmentation anchoring. In ICLR, 2019a.
+David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. MixMatch: A holistic approach to semi-supervised learning. In NeurIPS, 2019b.
+Bernhard E Boser, Isabelle M Guyon, and Vladimir N Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the fifth annual workshop on computational learning theory, 1992.
+Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In ECCV, 2018.
+Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. NeurIPS, 2020.
+Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. TPAMI, 2017.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020a.
+Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. arXiv preprint arXiv:2011.10566, 2020.
+Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020b.
+Ching-Yao Chuang, Joshua Robinson, Lin Yen-Chen, Antonio Torralba, and Stefanie Jegelka. Debiased contrastive learning. arXiv preprint arXiv:2007.00224, 2020.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
+Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. In ICCV, 2017.
+Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In ICCV, 2015.
+Jeff Donahue and Karen Simonyan. Large scale adversarial representation learning. In NeurIPS, 2019.
+Jeff Donahue, Philipp Krahenbuhl, and Trevor Darrell. Adversarial feature learning. In ICLR, 2016.
+Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In NeurIPS, 2014.
+Mark Everingham, SM Ali Eslami, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes challenge: A retrospective. IJCV, 2015.
+Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In ICLR, 2018.
+Priya Goyal, Dhruv Mahajan, Abhinav Gupta, and Ishan Misra. Scaling and benchmarking self-supervised visual representation learning. In ICCV, 2019.
+
+Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. arXiv preprint arXiv:2006.07733, 2020.
+Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.
+Bharath Hariharan, Pablo Arbeláez, Lubomir Bourdev, Subhransu Maji, and Jitendra Malik. Semantic contours from inverse detectors. In ICCV, 2011.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
+Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017.
+Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, 2020.
+Olivier J Henaff, Ali Razavi, Carl Doersch, SM Eslami, and Aaron van den Oord. Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272, 2019.
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NeurIPS Deep Learning and Representation Learning Workshop, 2015.
+R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018.
+R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In ICLR, 2019.
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
+Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. arXiv preprint arXiv:2004.11362, 2020.
+Alexander Kolesnikov, Xiaohua Zhai, and Lucas Beyer. Revisiting self-supervised visual representation learning. In CVPR, 2019.
+Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In ICLR, 2017.
+G. Larsson, M. Maire, and G. Shakhnarovich. Colorization as a proxy task for visual understanding. In CVPR, 2017.
+Junnan Li, Pan Zhou, Caiming Xiong, Richard Socher, and Steven CH Hoi. Prototypical contrastive learning of unsupervised representations. arXiv preprint arXiv:2005.04966, 2020.
+Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
+Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In ICLR, 2016.
+Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. In CVPR, 2020.
+Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. TPAMI, 2018.
+Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016.
+
+Mehdi Noroozi, Hamed Pirsiavash, and Paolo Favaro. Representation learning by learning to count. In ICCV, 2017.
+Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
+Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NeurIPS, 2015.
+Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In NeurIPS, 2016.
+Nikunj Saunshi, Orestis Plevrakis, Sanjeev Arora, Mikhail Khodak, and Hrishikesh Khandeparkar. A theoretical analysis of contrastive unsupervised representation learning. In ICML, 2019.
+Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. FixMatch: Simplifying semi-supervised learning with consistency and confidence. In NeurIPS, 2020.
+Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.
+Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In NeurIPS, 2017.
+Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.
+Chen Wei, Lingxi Xie, Xutong Ren, Yingda Xia, Chi Su, Jiaying Liu, Qi Tian, and Alan L Yuille. Iterative reorganization with weak spatial constraints: Solving arbitrary jigsaw puzzles for unsupervised representation learning. In CVPR, 2019.
+Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github.com/facebookresearch/detectron2, 2019.
+Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In CVPR, 2018.
+Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848, 2019.
+Mang Ye, Xu Zhang, Pong C Yuen, and Shih-Fu Chang. Unsupervised embedding learning via invariant and spreading instance feature. In CVPR, 2019.
+Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In ECCV, 2016.
+Huasong Zhong, Chong Chen, Zhongming Jin, and Xian-Sheng Hua. Deep robust clustering by contrastive learning. arXiv preprint arXiv:2008.03030, 2020.
+Chengxu Zhuang, Alex Lin Zhai, and Daniel Yamins. Local aggregation for unsupervised learning of visual embeddings. In CVPR, 2019.
+
+# A APPENDIX
+
+# A.1 IMPLEMENTATION DETAILS OF CONTRASTIVE PRE-TRAINING
+
+We evaluate our approach based on MoCo (He et al., 2020). MoCo has two different encoders to encode queries and keys respectively. The query encoder is updated with respect to the loss function, while the key encoder is an exponential moving average of the query encoder. The keys are stored in a dynamic memory bank, whose entries are updated at every training step: the current mini-batch is enqueued and the oldest mini-batch is dequeued. The backbone is a standard ResNet-50 (He et al., 2016), and features after the global average pooling layer are projected to 128-D vectors (Wu et al., 2018) and normalized by their $\ell_2$ norm. The size of the memory bank (i.e., the number of negative samples) is 65,536 and the momentum to update the key encoder is 0.999. $\tau_{ins}$ is 0.07 for MoCo variants and 0.2 for MoCo v2 variants, which are the default settings of these two methods.
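The momentum update and memory-bank mechanics described above can be sketched in a few lines. This is a simplified illustration, not MoCo's actual implementation: parameters are represented as plain arrays and the memory bank as a circular buffer of key features.

```python
import numpy as np

def momentum_update(query_params, key_params, m=0.999):
    """Key encoder as an exponential moving average of the query encoder."""
    return [m * k + (1.0 - m) * q for q, k in zip(query_params, key_params)]

class MemoryBank:
    """Fixed-size FIFO queue of key features (circular-buffer sketch)."""
    def __init__(self, size, dim):
        self.queue = np.random.randn(size, dim)
        self.queue /= np.linalg.norm(self.queue, axis=1, keepdims=True)
        self.ptr = 0

    def enqueue_dequeue(self, keys):
        """New mini-batch of keys overwrites the oldest entries."""
        n = keys.shape[0]
        idx = (self.ptr + np.arange(n)) % self.queue.shape[0]
        self.queue[idx] = keys
        self.ptr = int((self.ptr + n) % self.queue.shape[0])
```

With a 65,536-entry bank and a batch size of 256, each entry survives 256 steps before being replaced, which is why the high momentum (0.999) is needed to keep old keys consistent with the current encoder.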
+
+We use momentum SGD with momentum 0.9 and weight decay 1e-4. The batch size is 256 on 4 GPUs. To prevent a potential information leak through Batch Normalization (BN) (Ioffe & Szegedy, 2015), shuffling BN (He et al., 2020) is performed. The model is trained for 200 epochs with an initial learning rate of 0.03. The learning rate is multiplied by 0.1 after 120 and 160 epochs for MoCo v1, and cosine decayed (Loshchilov & Hutter, 2016) for MoCo v2. We keep all training details aligned with MoCo except the number of GPUs. This could be problematic since it changes the per-worker mini-batch size, which is related to the potential information leaks pointed out by He et al. (2020). However, we do not notice much difference when reproducing MoCo with 4 GPUs. Our reproduced MoCo v2 with 4 GPUs reaches $67.6\%$ accuracy on the linear classification protocol, $0.1\%$ higher than the $67.5\%$ reported in its paper. For the hyper-parameters of the proposed consistency term, we set $\tau_{con}$ to 0.04 and $\alpha$ to 10 for the MoCo v1-based CO2, and $\tau_{con}$ to 0.05 and $\alpha$ to 0.3 for the MoCo v2-based variant.
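Both learning-rate schedules can be written explicitly. The function names below are ours; the milestones (epochs 120 and 160, factor 0.1), the 200-epoch budget, and the base learning rate of 0.03 follow the settings stated above.

```python
import math

def step_lr(epoch, base_lr=0.03, milestones=(120, 160), gamma=0.1):
    """MoCo v1 schedule: multiply the learning rate by 0.1 at each milestone."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

def cosine_lr(epoch, base_lr=0.03, total_epochs=200):
    """MoCo v2 schedule: cosine decay from base_lr to zero over training."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))
```

The step schedule holds the rate flat between milestones, while the cosine schedule decays smoothly and reaches zero exactly at the final epoch.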
+
+# A.2 IMPLEMENTATION DETAILS OF DOWNSTREAM TASKS
+
+Linear Classification We freeze the backbone network, including the batch normalization parameters, and train a linear classifier consisting of a fully-connected layer followed by softmax on the 2048-D features following the global average pooling layer. We train for 100 epochs. The learning rate is initialized as 15 and decayed by 0.1 every 20 epochs after the first 60 epochs. We set weight decay to 0 and momentum to 0.9. Only random cropping with random horizontal flipping is used as data augmentation.
+
+Semi-Supervised Learning We finetune the pre-trained model for 20 epochs with learning rate starting from 0.01 for the base model and 1.0 for the randomly initialized classification head, decayed by 0.2 after 12 and 16 epochs. Momentum is set to 0.9. Weight decay is 5e-4 for MoCo v1 and 1e-4 for MoCo v2. Only random cropping with random horizontal flipping is used as data augmentation.
+
+Classification on PASCAL VOC Following the evaluation setup in Goyal et al. (2019), we train a linearSVM (Boser et al., 1992) on the frozen 2048-D features extracted after the global average pooling layer. The models are trained on trainval2007 split and tested on test2007. The hyper-parameters are selected based on a held-out subset of the training set.
+
+Detection on PASCAL VOC Following the detection benchmark set up in He et al. (2020), we use a Faster R-CNN (Ren et al., 2015) object detector with a ResNet-50 C4 (He et al., 2017) backbone, implemented in Detectron2 (Wu et al., 2019). We finetune all the layers, including the batch normalization parameters, for 24k iterations on the trainval07+12 split and test on the test2007 set. The hyper-parameters are the same as in the counterparts with supervised ImageNet initialization and MoCo. To calibrate the small feature magnitudes due to the output normalization in the unsupervised pre-training stage, two extra batch normalization layers are introduced: one follows the region proposal head, whose gradients are divided by 10, and the other follows the box prediction head.
+
+Segmentation on PASCAL VOC Following the setup in He et al. (2020), an FCN-based (Long et al., 2015) architecture with atrous convolutions (Chen et al., 2017) is used and ResNet-50 is the backbone. The training set is train-aug2012 (Hariharan et al., 2011) and the testing set is val2012. Initialized with CO2 models, we finetune all layers for 50 epochs (33k iterations) with batch size 16, initial learning rate 0.003, weight decay 1e-4 and momentum 0.9.
\ No newline at end of file
diff --git a/co2consistentcontrastforunsupervisedvisualrepresentationlearning/images.zip b/co2consistentcontrastforunsupervisedvisualrepresentationlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4548773dc6eee17aea595889ea14ebc8248dd925
--- /dev/null
+++ b/co2consistentcontrastforunsupervisedvisualrepresentationlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3eaf920531424acee3495ddcd3efff1fa460157602cfb315e0e511e79be01bec
+size 393304
diff --git a/co2consistentcontrastforunsupervisedvisualrepresentationlearning/layout.json b/co2consistentcontrastforunsupervisedvisualrepresentationlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..dacf5e0a891c8d45da7580d50fe0a19244c92627
--- /dev/null
+++ b/co2consistentcontrastforunsupervisedvisualrepresentationlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08fb5827786a06708e93d22e0c3c6637b72d82249d56b85a215248d1b6c5746d
+size 447309
diff --git a/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/726f962d-538b-4d72-ac60-349fefc1817a_content_list.json b/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/726f962d-538b-4d72-ac60-349fefc1817a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..879f7edf0c6026c22004872fecb284f7a97535e1
--- /dev/null
+++ b/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/726f962d-538b-4d72-ac60-349fefc1817a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3f89db49719067b326ea592dcdaa474f6263350917668d1616f16ea29cce335
+size 137219
diff --git a/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/726f962d-538b-4d72-ac60-349fefc1817a_model.json b/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/726f962d-538b-4d72-ac60-349fefc1817a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ddafda74d4df322e6419d6d1dc9afb75c98160f4
--- /dev/null
+++ b/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/726f962d-538b-4d72-ac60-349fefc1817a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c9c476d6bbf2a8fffdadbbcd67fc02d0e7b496e5e7f923b165510f89a3c78634
+size 157390
diff --git a/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/726f962d-538b-4d72-ac60-349fefc1817a_origin.pdf b/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/726f962d-538b-4d72-ac60-349fefc1817a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..de5ebb21c56010983f505d07a91aaa52535cdb23
--- /dev/null
+++ b/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/726f962d-538b-4d72-ac60-349fefc1817a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1c46bcf3bf09a0b0a94c84fb6ecaba115d0c80c8d2ab195c77dde06c8a3e26a
+size 1175728
diff --git a/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/full.md b/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0b3a5d49900254452add612139b4674f6067f807
--- /dev/null
+++ b/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/full.md
@@ -0,0 +1,320 @@
+# COCO: CONTROLLABLE COUNTERFACTUALS FOR EVALUATING DIALOGUE STATE TRACKERS
+
+Shiyang Li\*, Semih Yavuz, Kazuma Hashimoto, Jia Li, Tong Niu, Nazneen Rajani, Xifeng Yan, Yingbo Zhou, Caiming Xiong
+Salesforce Research; University of California, Santa Barbara
+
+{syavuz, k.hashimoto, jia.li, tniu, nazneen.rajani, yingbo.zhou, cxiong}@salesforce.com {shiyangli, xyan}@ucsb.edu
+
+# ABSTRACT
+
+Dialogue state trackers have made significant progress on benchmark datasets, but their generalization capability to novel and realistic scenarios beyond the held-out conversations is less understood. We propose controllable counterfactuals (CoCo) to bridge this gap and evaluate dialogue state tracking (DST) models on novel scenarios, i.e., would the system successfully tackle the request if the user responded differently but still consistently with the dialogue flow? CoCo leverages turn-level belief states as counterfactual conditionals to produce novel conversation scenarios in two steps: (i) counterfactual goal generation at turn-level by dropping and adding slots followed by replacing slot values, (ii) counterfactual conversation generation that is conditioned on (i) and consistent with the dialogue flow. Evaluating state-of-the-art DST models on MultiWOZ dataset with CoCo-generated counterfactuals results in a significant performance drop of up to $30.8\%$ (from $49.4\%$ to $18.6\%$ ) in absolute joint goal accuracy. In comparison, widely used techniques like paraphrasing only affect the accuracy by at most $2\%$ . Human evaluations show that CoCo-generated conversations perfectly reflect the underlying user goal with more than $95\%$ accuracy and are as human-like as the original conversations, further strengthening its reliability and promise to be adopted as part of the robustness evaluation of DST models.
+
+# 1 INTRODUCTION
+
+Task-oriented dialogue (TOD) systems have recently attracted growing attention and achieved substantial progress (Zhang et al., 2019b; Neelakantan et al., 2019; Peng et al., 2020; Wang et al., 2020b;a), partly made possible by the construction of large-scale datasets (Budzianowski et al., 2018; Byrne et al., 2019; Rastogi et al., 2019). Dialogue state tracking (DST) is a backbone of TOD systems, where it is responsible for extracting the user's goal represented as a set of slot-value pairs (e.g., (area, center), (food, British)), as illustrated in the upper part of Figure 1. The DST module's output is treated as the summary of the user's goal so far in the dialogue and directly consumed by the subsequent dialogue policy component to determine the system's next action and response. Hence, the accuracy of the DST module is critical to prevent downstream error propagation (Liu and Lane, 2018), affecting the end-to-end performance of the whole system.
+
+With the advent of representation learning in NLP (Pennington et al., 2014; Devlin et al., 2019; Radford et al., 2019), the accuracy of DST models has increased from $15.8\%$ (in 2018) to $55.7\%$ (in 2020). While measuring the held-out accuracy is often useful, practitioners consistently overestimate their model's generalization (Ribeiro et al., 2020; Patel et al., 2008) since test data is usually collected in the same way as training data. In line with this hypothesis, Table 1 demonstrates that there is a substantial overlap of the slot values between training and evaluation sets of the MultiWOZ DST benchmark (Budzianowski et al., 2018). In Table 2, we observe that the slot co-occurrence distributions for evaluation sets tightly align with that of train split, hinting towards the potential
+
+| data | attraction-name | hotel-name | restaurant-name | taxi-departure | taxi-destination | train-departure | train-destination |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| dev | 94.5 | 96.4 | 97.3 | 98.6 | 98.2 | 99.6 | 99.6 |
+| test | 96.2 | 98.4 | 96.8 | 95.6 | 99.5 | 99.4 | 99.4 |
+
+Table 1: The percentage (%) of domain-slot values in dev/test sets covered by training data.
+
+| data | area | book day | book time | food | name | price range |
+| --- | --- | --- | --- | --- | --- | --- |
+| train | 1.9 | 38.8 | 39.2 | 2.1 | 16.4 | 1.5 |
+| dev | 1.9 | 38.9 | 38.9 | 1.9 | 16.3 | 2.2 |
+| test | 2.7 | 36.9 | 37.7 | 1.6 | 18.7 | 2.4 |
+
+Table 2: Co-occurrence distribution (%) of the book people slot with other slots in the restaurant domain within the same user utterance. It rarely co-occurs with particular slots (e.g., food), which hinders the evaluation of DST models on realistic user utterances such as "I want to book a Chinese restaurant for 8 people."
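A statistic of this kind can be computed from turn-level belief states roughly as follows. This is a sketch under stated assumptions: we reduce each user turn to the set of slot names it mentions, and we normalize each row over the total count of co-occurring slots (our assumption, consistent with the rows summing to roughly 100%).

```python
from collections import Counter

def cooccurrence_dist(turn_slots, anchor="book people"):
    """Distribution (%) over slots that co-occur with `anchor` in a turn.

    turn_slots: iterable of sets, each the slot names in one user turn.
    """
    counts = Counter()
    for slots in turn_slots:
        if anchor in slots:
            # Count every other slot mentioned alongside the anchor.
            counts.update(s for s in slots if s != anchor)
    total = sum(counts.values())
    return {s: 100.0 * c / total for s, c in counts.items()} if total else {}
```

Running this separately on the train, dev, and test splits and comparing the resulting distributions is one way to quantify how tightly the evaluation sets mirror the training data.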
+
+limitation of the held-out accuracy in reflecting the actual generalization capability of DST models. Inspired by this phenomenon, we aim to address and provide insights into the following question: how well do state-of-the-art DST models generalize to the novel but realistic scenarios that are not captured well enough by the held-out evaluation set?
+
+Most prior work (Iyyer et al., 2018; Jin et al., 2019) focuses on adversarial example generation for robustness evaluation. These methods often rely on perturbations made directly on test examples in the held-out set and assume direct access to the evaluated models' gradients or outputs. Adversarial examples generated by these methods are often unnatural or deliberately crafted to hurt target models. It is imperative to emphasize that both our primary goal and our approach differ significantly from this line of work: (i) our goal is to evaluate DST models beyond held-out accuracy; (ii) we leverage the turn-level structured meaning representation (belief state) along with its dialogue history as conditions to generate user responses without relying on the original user utterance; (iii) our approach is entirely model-agnostic, assuming no access to the evaluated DST models; (iv) perhaps most importantly, we aim to produce novel but realistic and meaningful conversation scenarios rather than intentionally adversarial ones.
+
+We propose controllable counterfactuals (CoCo) as a principled, model-agnostic approach to generate novel scenarios beyond the held-out conversations. Our approach is inspired by the combination of two natural questions: how would DST systems react to (1) unseen slot values and (2) rare but realistic slot combinations? CoCo first encapsulates these two aspects under a unified concept called the counterfactual goal, obtained by a stochastic policy of dropping and adding slots to the original turn-level belief state, followed by replacing slot values. In the second step, CoCo conditions on the dialogue history and the counterfactual goal to generate a counterfactual conversation. We cast the actual utterance generation as a conditional language modeling objective. This formulation allows us to plug in a pretrained encoder-decoder architecture (Raffel et al., 2020) as the backbone that powers the counterfactual conversation generation. We also propose a strategy to filter utterances that fail to reflect the counterfactual goal exactly. We consider value substitution (VS), as presented in Figure 1, a special CoCo case that only replaces the slot values in the original utterance without adding or dropping slots. When we use VS as a fall-back strategy for CoCo (i.e., apply VS when CoCo fails to generate valid user responses after filtering), we call it CoCo+.
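The first step of this recipe can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the helper name and the probabilities `p_drop` and `p_add` are ours; the source only specifies a stochastic policy of dropping and adding slots followed by value replacement.

```python
import random

def counterfactual_goal(turn_belief, candidate_values,
                        p_drop=0.3, p_add=0.3, rng=random):
    """Sketch of counterfactual goal generation from a turn-level belief state.

    turn_belief: dict mapping slot name -> value for the current turn.
    candidate_values: dict mapping slot name -> list of candidate values.
    """
    goal = dict(turn_belief)
    # (i) drop: each original slot is removed with probability p_drop.
    for slot in list(goal):
        if rng.random() < p_drop:
            del goal[slot]
    # (ii) add: slots absent from the turn are added with probability p_add,
    # creating rare but realistic slot combinations.
    for slot in candidate_values:
        if slot not in goal and rng.random() < p_add:
            goal[slot] = rng.choice(candidate_values[slot])
    # (iii) replace: every remaining value is swapped for a candidate value,
    # exposing the model to values unseen in the original conversation.
    for slot in goal:
        goal[slot] = rng.choice(candidate_values[slot])
    return goal
```

The resulting goal, together with the dialogue history, would then condition the utterance generator in the second step.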
+
+Evaluating three strong DST models (Wu et al., 2019; Heck et al., 2020; Hosseini-Asl et al., 2020) on the controllable counterfactuals generated by CoCo and CoCo+ shows that the performance of each drops significantly (by up to $30.8\%$) compared to their joint goal accuracy on the original MultiWOZ held-out evaluation set. On the other hand, we find that these models are in fact quite robust to paraphrasing with back-translation, where their performance drops by at most $2\%$. Analyzing the effect of data augmentation with CoCo+ shows that it consistently improves the robustness of the investigated DST models on counterfactual conversations generated by each of VS, CoCo and CoCo+. More interestingly, the same data augmentation strategy improves the joint goal accuracy of the best of these strong DST models by $1.3\%$ on the original MultiWOZ evaluation set. Human evaluations show that CoCo-generated counterfactual conversations reflect the underlying user goal with more than $95\%$ accuracy and are judged to be quite close to the original conversations in terms of their human-likeness scores. This further establishes our proposed approach's reliability and its potential to be adopted as part of DST models' robustness evaluation.
+
+
+Figure 1: The upper left is a dialogue example between user and system, with its turn-level and dialogue-level belief states on the upper right. The lower left shows valid user utterance variations generated by VS and CoCo, with their corresponding belief states, derived from the original ones, on the right.
+
+# 2 RELATED WORK
+
+Dialogue State Tracking. DST has been a core component of current state-of-the-art TOD systems. Traditional approaches usually rely on hand-crafted features or domain-specific lexicons (Henderson et al., 2014; Wen et al., 2017) and require a predefined ontology, making them hard to extend to unseen values. To tackle this issue, various methods have been proposed. Gao et al. (2019) treats DST as a reading comprehension problem and predicts slot values with start and end positions in the dialogue context. Zhang et al. (2019a) proposes DS-DST, a dual-strategy model that predicts values in domains with few possible candidates using classifiers and the rest using span extractors. Furthermore, Heck et al. (2020) proposes TripPy, a triple-copy-strategy model, which can copy values from the context, previous turns' predictions and system informs.
+
+An alternative to classification and span prediction is value generation. Wu et al. (2019) generates slot values with a pointer generator network (See et al., 2017) without relying on fixed vocabularies and spans. Hosseini-Asl et al. (2020) models DST as a conditional generation problem, directly finetuning GPT2 (Radford et al., 2019) on the DST task and achieving state-of-the-art results on MultiWOZ.
+
+Adversarial Example Generation. Adversarial example generation has been commonly studied in computer vision (Szegedy et al., 2014; Goodfellow et al., 2015). Recently, it has received growing attention in the NLP domain as well. Papernot et al. (2016) finds adversarial examples in the embedding space and then remaps them to the discrete space. Alzantot et al. (2018) proposes a population-based word-replacement method that aims to generate fluent adversarial sentences. These methods often edit the original data greedily, assuming access to the model's gradients or outputs, besides querying the underlying model many times (Jin et al., 2019). An alternative line of work investigates generating adversarial examples in a model-agnostic way. Iyyer et al. (2018) proposes to generate adversarial paraphrases of the original data with different syntactic structures. Jia and Liang (2017) automatically generates sentences with keyword overlap with questions in SQuAD (Rajpurkar et al., 2016) to distract computer systems without changing the correct answer or misleading humans.
+
+Although different methods have been proposed to evaluate the robustness of NLP models, the majority of prior work in this line focuses on text classification, neural machine translation or reading comprehension problems. Perhaps the most similar existing works to ours are Einolghozati et al. (2019) and Cheng et al. (2019). Einolghozati et al. (2019) focuses on intent classification and slot tagging in TOD, while Cheng et al. (2019) targets synthetic competitive negotiation dialogues (Lewis et al., 2017) without a DST component. In this work, however, we focus on evaluating a core component of state-of-the-art TOD, DST, on the widely used MultiWOZ benchmark. To the best of our knowledge, ours is the first work to systematically evaluate the robustness of DST models.
+
+# 3 BACKGROUND
+
+Multi-domain DST task definition. Let $X_{t} = \{(U_{1}^{\mathrm{sys}}, U_{1}^{\mathrm{usr}}), \dots, (U_{t}^{\mathrm{sys}}, U_{t}^{\mathrm{usr}})\}$ denote a sequence of turns of a dialogue until the $t$ -th turn, where $U_{i}^{\mathrm{sys}}$ and $U_{i}^{\mathrm{usr}}$ ( $1 \leq i \leq t$ ) denote system and user utterance at the $i$ -th turn, respectively. In multi-domain DST, each turn $(U_{i}^{\mathrm{sys}}, U_{i}^{\mathrm{usr}})$ talks about a specific domain (e.g., hotel), and a certain number of slots (e.g., price range) in that domain. We denote all $N$ possible domain-slot pairs as $S = \{S_{1}, \dots, S_{N}\}$ . The task is to track the value for each
+
+$S_{j}$ $(1\leq j\leq N)$ over $X_{t}$ (e.g., hotel-price range, cheap). Belief states can be considered at two granularities: turn-level $(L_{t})$ and dialogue-level $(B_{t})$. $L_{t}$ tracks the information introduced in the last turn, while $B_{t}$ tracks the accumulated state from the first turn to the last. As illustrated in the upper part of Figure 1, when the dialogue flow arrives at the second turn, $B_{2}$ becomes $\{(restaurant-area, center), (restaurant-food, British), (restaurant-book time, 18:00)\}$, while $L_{2}$ is $\{(restaurant-food, British), (restaurant-book time, 18:00)\}$, essentially tracking the update made to $B_{t}$ by the last turn.
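In dict form, the two granularities relate as follows; this is a small illustrative sketch (hypothetical data structures, not the paper's code):

```python
def turn_level_update(b_prev: dict, b_curr: dict) -> dict:
    """Recover the turn-level belief L_t as the update the current turn
    makes to the dialogue-level belief: pairs that are new or changed."""
    return {slot: val for slot, val in b_curr.items()
            if b_prev.get(slot) != val}

# The Figure 1 example: B_1 after turn one, B_2 after turn two.
B1 = {"restaurant-area": "center"}
B2 = {"restaurant-area": "center",
      "restaurant-food": "British",
      "restaurant-book time": "18:00"}

L2 = turn_level_update(B1, B2)
# L2 == {"restaurant-food": "British", "restaurant-book time": "18:00"}
```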
+
+Problem definition. Given a tuple $< X_{t}, L_{t}, B_{t} >$ , our goal is to generate a new user utterance $\hat{U}_{t}^{\mathrm{usr}}$ to form a novel conversation scenario $\hat{X}_{t} = \{(U_{1}^{\mathrm{sys}}, U_{1}^{\mathrm{usr}}), \dots, (U_{t}^{\mathrm{sys}}, \hat{U}_{t}^{\mathrm{usr}})\}$ by replacing the original user utterance $U_{t}^{\mathrm{usr}}$ with $\hat{U}_{t}^{\mathrm{usr}}$ . To preserve the coherence of dialogue flow, we cast the problem as generating an alternative user utterance $\hat{U}_{t}^{\mathrm{usr}}$ conditioned on a modified $\hat{L}_{t}$ derived from original turn-level belief state $L_{t}$ in a way that is consistent with the global belief state $B_{t}$ . This formulation naturally allows for producing a new tuple $< \hat{X}_{t}, \hat{L}_{t}, \hat{B}_{t} >$ controllable by $\hat{L}_{t}$ , where $\hat{B}_{t}$ is induced by $B_{t}$ based on the difference between $L_{t}$ and $\hat{L}_{t}$ . As illustrated in the lower part of Figure 1, $U_{2}^{\mathrm{usr}}$ is replaced with the two alternative utterances that are natural and coherent with the dialogue history. We propose to use the resulting set of $< \hat{X}_{t}, \hat{L}_{t}, \hat{B}_{t} >$ to probe the DST models.
+
+Paraphrase baseline with back-translation. Paraphrasing the original utterance $U_{t}^{\mathrm{usr}}$ is a natural way to generate $\hat{U}_{t}^{\mathrm{usr}}$. With the availability of advanced neural machine translation (NMT) models, round-trip translation between two languages (i.e., back-translation (BT)) has become a widely used method to obtain paraphrases for downstream applications (Yu et al., 2018). We use publicly available pretrained English→German and German→English NMT models, with round-trip log-likelihoods denoted $\log(g|e)$ and $\log(e|g)$, respectively. We translate $U_{t}^{\mathrm{usr}}$ from English to German with beam size $K$, and then translate each of the $K$ hypotheses back to English, again with beam size $K$. Consequently, we generate $K^{2}$ paraphrase candidates for $\hat{U}_{t}^{\mathrm{usr}}$ and rank them by their round-trip confidence score $\log(g|e) + \log(e|g)$. As paraphrases are expected to preserve the meaning of $U_{t}^{\mathrm{usr}}$, we set $\hat{L}_{t} = L_{t}$ and $\hat{B}_{t} = B_{t}$.
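The candidate generation and ranking step can be sketched as follows; the toy decoders below merely stand in for real pretrained NMT beam-search models, and all names are illustrative:

```python
import math

def back_translate(src, en2de, de2en, k=2):
    """Generate K*K round-trip candidates and rank them by the round-trip
    confidence score log(g|e) + log(e|g), as in the BT baseline."""
    candidates = []
    for de, lp_fwd in en2de(src, k):        # English -> German, beam size k
        for en, lp_bwd in de2en(de, k):     # German -> English, beam size k
            candidates.append((en, lp_fwd + lp_bwd))
    return sorted(candidates, key=lambda c: c[1], reverse=True)

# Toy decoders standing in for NMT models; real ones would return
# (hypothesis, log-probability) pairs from their beams.
en2de = lambda s, k: [("de_hyp_a", math.log(0.6)), ("de_hyp_b", math.log(0.3))][:k]
de2en = lambda s, k: [(f"para({s})", math.log(0.5)), (f"alt({s})", math.log(0.2))][:k]

ranked = back_translate("i want british food at 18:00", en2de, de2en)
best_paraphrase = ranked[0][0]  # candidate with the highest round-trip score
```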
+
+# 4 CoCo
+
+As illustrated in Figure 2, CoCo consists of three main pillars. We first train a conditional user utterance generation model $p_{\theta}(U_t^{\mathrm{usr}}|U_t^{\mathrm{sys}},L_t)$ on the original dialogues. Second, we modify $L_{t}$ into a possibly arbitrary $\hat{L}_t$ with our counterfactual goal generator. Given $\hat{L}_t$ and $U_{t}^{\mathrm{sys}}$, we sample $\hat{U}_{t}^{\mathrm{usr}}\sim p_{\theta}(\hat{U}_{t}^{\mathrm{usr}}|U_{t}^{\mathrm{sys}},\hat{L}_{t})$ with beam search, followed by two orthogonal filtering mechanisms to further eliminate user utterances that fail to reflect the counterfactual goal $\hat{L}_t$.
+
+# 4.1 VALUE SUBSTITUTION
+
+A robust DST model should correctly reflect value changes in user utterances when tracking the user's goal. However, slot-value combinations, e.g. (restaurant-book time, 18:00), in evaluation sets are limited and even have significant overlaps with the training data, as shown in Table 1. To evaluate DST models on more diverse patterns, we propose a Value Substitution (VS) method to generate $\hat{U}_t^{\mathrm{usr}}$. Specifically, for each value of $S_{j}$ in $L_{t}$, we allow it to be substituted if the value appears only in $U_{t}^{\mathrm{usr}}$ and not in $U_{t}^{\mathrm{sys}}$; otherwise, we keep it as is. This heuristic is based on the following three observations: (1) if the value comes from $U_{t}^{\mathrm{sys}}$, e.g. the TOD system's recommendation of restaurant food, changing it may make the dialogue flow less natural and coherent; (2) if it never appears in the dialogue flow, e.g. yes of hotel-parking, changing it may cause belief state label errors; (3) if it appears only in $U_{t}^{\mathrm{usr}}$, changing the value is expected not to cause the issues in (1) and (2).
+
+For values that can be substituted, new values are sampled from a Slot-Value Dictionary, a predefined value set for each domain-slot. These new values are then used to update their counterparts in $U_{t}^{\mathrm{usr}}$, $L_{t}$ and $B_{t}$. We defer the details of the slot-value dictionary to Section 4.2. After the update, we obtain $\hat{U}_{t}^{\mathrm{usr}}$, $\hat{L}_{t}$ and $\hat{B}_{t}$, and can use $< \hat{X}_{t}, \hat{L}_{t}, \hat{B}_{t} >$ to evaluate the performance of DST models. An example of how VS works is illustrated in the lower part of Figure 1. At the second turn, as British and 18:00 are in $L_{2}$ and appear only in $U_{2}^{\mathrm{usr}}$ rather than $U_{2}^{\mathrm{sys}}$, we can replace them with Chinese and 17:00 that
+
+
+Figure 2: The overall pipeline of CoCo. The very left part represents the training phase of the utterance generation model, where the concatenation of $U_{t}^{\mathrm{sys}}$ and $L_{t}$ is processed by the encoder, on which the decoder then conditions to generate the user utterance $U_{t}^{\mathrm{usr}}$. The input and output of this model are shown within the box at the lower left. The right part depicts the inference phase, where the counterfactual goal generator first modifies the original belief $L_{t}$ fed from the left part into a new one, $\hat{L}_{t}$, which is then fed to the trained utterance generator along with the same conversation history to generate $\hat{U}_{t}^{\mathrm{usr}}$ by beam search, followed by filtering of undesired utterances. Note that conversational turns in the inference phase don't have to originate from the training phase.
+
+are sampled from a slot-value dictionary, respectively, to get $\hat{U}_2^{\mathrm{usr}}$ , $\hat{L}_2$ and $\hat{X}_2$ without interrupting the naturalness of the dialogue flow.
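The VS heuristic amounts to a simple substitution pass over the turn; a minimal sketch is below (hypothetical helper names, with plain substring matching standing in for proper span annotation):

```python
def value_substitute(usr_utt, sys_utt, L, sampled_values):
    """Apply the VS heuristic: a slot value may be replaced only if it
    appears in the user utterance and not in the system utterance."""
    usr_hat, L_hat = usr_utt, dict(L)
    for slot, old_val in L.items():
        if old_val in usr_utt and old_val not in sys_utt and slot in sampled_values:
            new_val = sampled_values[slot]   # drawn from the slot-value dictionary
            usr_hat = usr_hat.replace(old_val, new_val)
            L_hat[slot] = new_val
    return usr_hat, L_hat

# The Figure 1 example at the second turn.
sys2 = "What kind of food would you like?"
usr2 = "I would like to book a British restaurant at 18:00."
L2 = {"restaurant-food": "British", "restaurant-book time": "18:00"}
usr2_hat, L2_hat = value_substitute(
    usr2, sys2, L2,
    {"restaurant-food": "Chinese", "restaurant-book time": "17:00"})
# usr2_hat == "I would like to book a Chinese restaurant at 17:00."
```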
+
+# 4.2 CONTROLLABLE COUNTERFACTUAL GENERATION
+
+Back-translation (BT) and value substitution (VS) provide controllability at different granularities. BT provides only syntactic variety while preserving the meaning, and hence the belief state. VS can only replace the values of the slots already present in an utterance, while still having to retain exactly all of those slots. However, neither of them is able to explore conversations with an even slightly modified set of slots. We propose a principled approach to unlock the capability of conversation generation that generalizes beyond mere transformation of existing utterances. We cast it as the task of generating novel user utterances $(U_{t}^{\mathrm{usr}})$ from a given conversation history $(U_{t}^{\mathrm{sys}})$ and a turn-level user goal $(L_{t})$.
+
+We propose to tackle this problem with a conditional generation model that utilizes a pretrained encoder-decoder architecture (Raffel et al., 2020; Lewis et al., 2020) to approximate $p(U_{t}^{\mathrm{usr}}|U_{t}^{\mathrm{sys}},L_{t})$, where the concatenation of $U_{t}^{\mathrm{sys}}$ and $L_{t}$ is used as input to the encoder and $U_{t}^{\mathrm{usr}}$ is set to be the target sequence to be generated by the decoder, as illustrated in the lower left of Figure 2. To learn this distribution, we factorize it by the chain rule (Bengio et al., 2003) and train a neural network with parameters $\theta$ to minimize the aggregated negative log-likelihood $\mathcal{J}_{\mathrm{gen}}$ over each dialogue turn tuple $(U_{t}^{\mathrm{sys}},L_{t},U_{t}^{\mathrm{usr}})$, where $U_{t}^{\mathrm{usr}} = (U_{t,1}^{\mathrm{usr}},U_{t,2}^{\mathrm{usr}},\dots ,U_{t,n_{t}}^{\mathrm{usr}})$ and $U_{t,k}^{\mathrm{usr}}$ is its $k$-th token:
+
+$$
+p_{\theta}\left(U_{t}^{\mathrm{usr}} \mid U_{t}^{\mathrm{sys}}, L_{t}\right) = \prod_{k=1}^{n_{t}} p_{\theta}\left(U_{t,k}^{\mathrm{usr}} \mid U_{t,<k}^{\mathrm{usr}}, U_{t}^{\mathrm{sys}}, L_{t}\right), \quad \mathcal{J}_{\mathrm{gen}} = -\sum_{k=1}^{n_{t}} \log p_{\theta}\left(U_{t,k}^{\mathrm{usr}} \mid U_{t,<k}^{\mathrm{usr}}, U_{t}^{\mathrm{sys}}, L_{t}\right) \tag{1}
+$$
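Training such a model reduces to preparing (source, target) string pairs and minimizing the token-level cross-entropy of Eq. (1) with any pretrained encoder-decoder. The serialization below is a hypothetical format for illustration; the paper does not specify these exact separators:

```python
def serialize_example(sys_utt, turn_belief, usr_utt):
    """Build a (source, target) pair: the encoder reads the concatenation
    of U_sys and L_t; the decoder's target is U_usr. The 'system:'/'belief:'
    separators are an assumption, not necessarily the paper's format."""
    belief_str = ", ".join(f"{slot} = {val}"
                           for slot, val in sorted(turn_belief.items()))
    source = f"system: {sys_utt} belief: {belief_str}"
    return source, usr_utt

src, tgt = serialize_example(
    "What kind of food would you like?",
    {"restaurant-food": "British", "restaurant-book time": "18:00"},
    "I would like to book a British restaurant at 18:00.",
)
```

Pairs in this form can be fed directly to a T5-style seq2seq trainer with teacher forcing.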
+
+Once the parameters $\theta$ of the goal-conditioned utterance generation model $p_{\theta}$ are learned from these tuples, it gives us the unique ability to generate novel conversation turns by plugging in an arbitrary but consistent counterfactual goal $\hat{L}_t$ derived from $L_t$ . An example of how the counterfactual goal generator operates is shown in the middle part of Figure 2. The counterfactual goal generator has three components, namely operation, slot-value dictionary and slot-combination dictionary.
+
+Operation decides which combination of the following three meta-operations, namely drop, change and add, to apply to $L_{t}$. Drop is used to remove values from a non-empty slot in $L_{t}$. Change borrows the same operation from VS to substitute existing values. Add allows us to add new domain-slot values into $L_{t}$, giving us the power to generate valid but more complicated $\hat{U}_{t}^{\mathrm{usr}}$.
+
+Slot-Value Dictionary has a predefined value set $S_{j}^{\mathrm{val}}$ for each $S_{j}$. Once the change and/or add meta-operation is activated for $S_{j}$, the counterfactual goal generator randomly samples a value from $S_{j}^{\mathrm{val}}$.
+
+Slot-Combination Dictionary has a predefined domain-slot set $S_{j}^{\mathrm{add}}$ for each $S_{j}$. When the add meta-operation is activated, the counterfactual goal generator samples a domain-slot from the intersection of all $S_{j}^{\mathrm{add}}$ for which $S_{j}$ has non-empty values in $L_{t}$. Once a new domain-slot is sampled, its value is then sampled from its corresponding value set as defined in the slot-value dictionary.
+
+Given $L_{t}$, the counterfactual goal generator takes $L_{t}$ as its input and sequentially applies drop, change and add to output $\hat{L}_{t}$. Given $\hat{L}_{t}$ and $U_{t}^{\mathrm{sys}}$, we can sample $\hat{U}_{t}^{\mathrm{usr}} \sim p_{\theta}(\hat{U}_{t}^{\mathrm{usr}}|U_{t}^{\mathrm{sys}},\hat{L}_{t})$ with beam search. We use a rule-based method to obtain $\hat{B}_t$ for $\hat{X}_t$. Specifically, we obtain $\bar{B}_{t - 1}$ by computing the set difference of $B_{t}$ and $L_{t}$. Given $\bar{B}_{t - 1}$ and $\hat{L}_{t}$, we update a domain-slot in $\bar{B}_{t - 1}$ if its value in $\hat{L}_{t}$ is not none; otherwise we keep its value as it is in $\bar{B}_{t - 1}$, following Chao and Lane (2019). After the update, we obtain $\hat{B}_t$ and use it as the dialogue-level label of $\hat{X}_t$.
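A compact sketch of the three meta-operations and the rule-based $\hat{B}_t$ update, with dict-based belief states and toy dictionaries (all names hypothetical):

```python
import random

def counterfactual_goal(L, slot_values, slot_combos,
                        drop=(), change=(), n_add=0, rng=random):
    """Sequentially apply the drop / change / add meta-operations to a
    turn-level belief L. `slot_values` / `slot_combos` play the roles of
    the slot-value and slot-combination dictionaries."""
    L_hat = dict(L)
    for slot in drop:                        # drop: remove a non-empty slot
        L_hat.pop(slot, None)
    for slot in change:                      # change: resample the value (as in VS)
        L_hat[slot] = rng.choice(slot_values[slot])
    for _ in range(n_add):                   # add: sample from the intersection of
        allowed = set.intersection(          # the combo sets of all present slots
            *(set(slot_combos[s]) for s in L_hat)) - set(L_hat)
        if not allowed:
            break
        new_slot = rng.choice(sorted(allowed))
        L_hat[new_slot] = rng.choice(slot_values[new_slot])
    return L_hat

def dialogue_level_label(B, L, L_hat):
    """Rule-based B_hat: set difference of B_t and L_t, then overwrite
    with the non-none values of L_hat."""
    B_hat = {s: v for s, v in B.items() if (s, v) not in L.items()}
    B_hat.update({s: v for s, v in L_hat.items() if v != "none"})
    return B_hat

L2 = {"restaurant-food": "British", "restaurant-book time": "18:00"}
B2 = {"restaurant-area": "center", **L2}
slot_values = {"restaurant-food": ["Chinese", "Indian"],
               "restaurant-book people": ["2", "4"]}
slot_combos = {"restaurant-food": ["restaurant-book time", "restaurant-book people"],
               "restaurant-book time": ["restaurant-food", "restaurant-book people"]}

L2_hat = counterfactual_goal(L2, slot_values, slot_combos,
                             change=["restaurant-food"], n_add=1,
                             rng=random.Random(0))
B2_hat = dialogue_level_label(B2, L2, L2_hat)
```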
+
+# 4.3 FILTERING
+
+We have presented methods to generate $\hat{U}_t^{\mathrm{usr}}$, but how do we make sure that the generated utterance correctly reflects the user goal represented by $\hat{L}_t$? To motivate our methods, consider the beam-search example at the lower right of Figure 2. In this example, the first hypothesis doesn't include the value 2 for restaurant-book people, which is within $\hat{L}_t$. Conversely, the second hypothesis includes the value 18:00 for restaurant-book time, which is not part of $\hat{L}_t$. We call these two phenomena de-generation and over-generation, respectively. Filtering candidates with these issues is thus an important step to ensure that $(U_t^{\mathrm{sys}}, \hat{U}_t^{\mathrm{usr}})$ exactly expresses the user goals in $\hat{L}_t$. We propose two filtering methods, namely a slot-value match filter and a classifier filter, to alleviate the de-generation and over-generation issues, respectively.
+
+Slot-Value Match Filter. To tackle the de-generation issue, we choose a subset of values in $\hat{L}_t$ (values that should appear only in $\hat{U}_t^{\mathrm{usr}}$ rather than $U_{t}^{\mathrm{sys}}$) and eliminate candidates that fail to contain all the values in this subset. In Figure 2, the first hypothesis from the beam search output is eliminated by this filter because it does not include the value 2 for restaurant-book people in $\hat{L}_t$.
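This filter is a simple containment check; a sketch under the same substring-matching assumption as before:

```python
def slot_value_match_filter(candidates, L_hat, sys_utt):
    """Keep only candidates containing every value of L_hat that should
    appear in the user utterance (i.e., values absent from U_sys)."""
    required = [v for v in L_hat.values() if v not in sys_utt]
    return [c for c in candidates if all(v in c for v in required)]

L_hat = {"restaurant-food": "Chinese", "restaurant-book people": "2"}
hypotheses = ["Can you book a Chinese restaurant?",               # de-generation
              "Can you book a Chinese restaurant for 2 people?"]  # keeps all values
kept = slot_value_match_filter(hypotheses, L_hat,
                               sys_utt="What food would you like?")
# kept == ["Can you book a Chinese restaurant for 2 people?"]
```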
+
+Classifier Filter. As shown in Table 2, the slot restaurant-book people frequently appears together with restaurant-book time in the data used to train our generation model $p_{\theta}(\hat{U}_t^{\mathrm{usr}}|U_t^{\mathrm{sys}},\hat{L}_t)$, which may cause the resulting generation model to suffer from the over-generation issue. To deal with this problem, we propose an $N$-way multi-label classifier to eliminate such candidates. We employ BERT-base (Devlin et al., 2019) as its backbone:
+
+$$
+H_{t}^{\mathrm{CLS}} = \mathrm{BERT}\left([\mathrm{CLS}] \oplus [X_{t-1}] \oplus [\mathrm{SEP}] \oplus [U_{t}^{\mathrm{sys}}] \oplus [U_{t}^{\mathrm{usr}}]\right) \in \mathbb{R}^{d_{\mathrm{emb}}} \tag{2}
+$$
+
+where $H_{t}^{\mathrm{CLS}} \in \mathbb{R}^{d_{\mathrm{emb}}}$ is the representation of the CLS token of BERT, with dimension $d_{\mathrm{emb}}$. We then feed $H_{t}^{\mathrm{CLS}}$ into a linear projection layer followed by a sigmoid function:
+
+$$
+P = \mathrm{Sigmoid}\left(W H_{t}^{\mathrm{CLS}}\right) \in \mathbb{R}^{N}, \quad \mathcal{J}_{\mathrm{cls}} = -\frac{1}{N} \sum_{j=1}^{N} \left(Y_{j} \cdot \log P_{j} + \left(1 - Y_{j}\right) \cdot \log\left(1 - P_{j}\right)\right) \tag{3}
+$$
+
+where $W \in \mathbb{R}^{N \times d_{\mathrm{emb}}}$ is the trainable weight of the linear projection layer and $P_j$ is the probability that slot $S_j$ appears at the $t$-th turn of $X_t$, with $Y_j$ as its label. The classifier is trained with $\mathcal{J}_{\mathrm{cls}}$, i.e., the mean binary cross-entropy loss over all slots $S_j$, and achieves a precision of 92.3% and a recall of 93.5% on the development set$^5$. During inference, the classifier takes $\hat{X}_t$ as input and predicts whether a slot $S_j$ appears at the $t$-th turn or not with a threshold of 0.5. We use this filter to eliminate generated candidates for which the classifier predicts at least one slot $S_j$ mentioned in $(U_t^{\mathrm{sys}}, \hat{U}_t^{\mathrm{usr}})$ while $S_j \notin \hat{L}_t$. In Figure 2, our classifier filter eliminates the second hypothesis from the beam search output because $\hat{L}_t$ does not contain the slot restaurant-book time while it is mentioned in the generated utterance.
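The loss of Eq. (3) and the threshold-based elimination can be sketched in plain Python; the toy logits and the stand-in predictor below replace a real BERT encoder, and all names are hypothetical:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def multilabel_bce(logits, labels):
    """Mean binary cross-entropy over the N slot logits, as in Eq. (3)."""
    probs = [sigmoid(z) for z in logits]
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs, labels)) / len(labels)

def classifier_filter(candidates, predict_probs, L_hat, threshold=0.5):
    """Eliminate candidates for which the classifier predicts (prob >=
    threshold) at least one slot that is not part of the goal L_hat."""
    return [c for c in candidates
            if not any(p >= threshold and slot not in L_hat
                       for slot, p in predict_probs(c).items())]

# Stand-in for the trained classifier over (X_{t-1}, U_sys, U_usr).
fake_predictor = lambda utt: {
    "restaurant-book people": 0.9,
    "restaurant-book time": 0.8 if "18:00" in utt else 0.1,
}
L_hat = {"restaurant-book people": "2"}
hypotheses = ["book a table for 2 people",            # consistent with L_hat
              "book a table for 2 people at 18:00"]   # over-generation
kept = classifier_filter(hypotheses, fake_predictor, L_hat)
# kept == ["book a table for 2 people"]
```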
+
+# 5 EXPERIMENTS
+
+# 5.1 EXPERIMENTAL SETUP
+
+We consider three strong multi-domain DST models to evaluate the effect of CoCo-generated counterfactual conversations in several scenarios. TRADE (Wu et al., 2019) builds upon a pointer generator network and contains a slot classification gate and a state generator module to generate states. TRIPPY (Heck et al., 2020) introduces a classification gate and a triple copy module, allowing the model to copy values from the conversation context, previous turns' predictions, or system informs. SIMPLETOD (Hosseini-Asl et al., 2020) models DST as a conditional generation problem with the conversation history as its condition and the belief state as its target, finetuning GPT2.
+
+Evaluation. We train each of the three models following their publicly released implementations on the standard train/dev/test split of MultiWOZ 2.1 (Eric et al., 2019). We use joint goal accuracy to evaluate the performance of DST models. It is 1.0 if and only if the set of (domain-slot, value) pairs in the model output exactly matches the oracle one, and 0 otherwise.
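The metric itself is an exact-set match averaged over turns; a minimal sketch with hypothetical data:

```python
def joint_goal_accuracy(predictions, oracles):
    """A turn scores 1.0 only if the predicted set of (domain-slot, value)
    pairs matches the oracle dialogue-level belief state exactly."""
    correct = sum(set(p.items()) == set(g.items())
                  for p, g in zip(predictions, oracles))
    return correct / len(oracles)

gold = [{"hotel-area": "north"},
        {"hotel-area": "north", "hotel-stars": "4"}]
pred = [{"hotel-area": "north"},
        {"hotel-area": "north"}]          # misses hotel-stars -> turn is wrong
acc = joint_goal_accuracy(pred, gold)     # 0.5
```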
+
+Slot-Value Dictionary. We carefully design two sets of slot-value dictionaries to capture the effect of unseen slot values from two perspectives, namely in-domain $(I)$ and out-of-domain $(O)$. $I$ is a dictionary that maps each slot to a set of values that appear in the MultiWOZ test set but not in the training set. On the other hand, we construct $O$ using external values (e.g., hotel names from Wikipedia) that fall completely outside of the MultiWOZ data for slots such as hotel-name and restaurant-name. For slots (e.g., hotel-internet) with no possible external values beyond the ones in the original data (e.g., yes and no), we follow a similar fall-back strategy.
+
+Slot-Combination Dictionary. As illustrated in Table 2, the held-out evaluation set follows almost the same slot co-occurrence distribution as the training data. This makes it difficult to estimate how well DST models would generalize to valid conversation scenarios that simply do not obey the same distribution. CoCo's flexibility in generating a conversation for an arbitrary turn-level belief state naturally allows us to seek an answer to this question. To this end, we design three slot-combination dictionaries, namely freq, neu and rare. A slot-combination dictionary directly controls how different slots can be combined while generating counterfactual goals. As suggested by their names, freq contains frequently co-occurring slot combinations (e.g., book people is combined only with the book day and book time slots), while rare is the opposite of freq, grouping rarely co-occurring slots together, and neu is more neutral, allowing any meaningful combination within the same domain.$^7$
+
+# 5.2 MAIN RESULTS
+
+Before reporting our results, it is important to note that different DST models use different post-processing strategies. To make a fair comparison across models, we apply the post-processing strategy of the SIMPLETOD evaluation script to TRADE and TRIPPY as well. We summarize our main results in Figure 3. While all three DST models are quite robust to back-translation (BT)$^{8}$, their performance drops significantly on counterfactual conversations generated by each of VS, CoCo and CoCo+ compared to their accuracy on the MultiWOZ held-out set (original).
+
+Unseen Slot-Value Generalization. We analyze the effect of unseen slot values for the two dictionaries ($I$ and $O$) introduced in the previous section, compared to the original set of slot values, which has a large overlap with the training data. Results presented in the left part of Figure 3 show that the performance of DST models drops significantly, by up to $11.8\%$ compared to the original accuracy, even on the simple counterfactuals generated by the VS strategy using the in-domain unseen slot-value dictionary ($I$). Furthermore, using the out-of-domain slot-value dictionary ($O$) results in about a $10\%$ additional drop in accuracy consistently across the three models. These consistent and similar drops in accuracy suggest that TRADE, SIMPLETOD, and TRIPPY are almost equally susceptible to unseen slot values.
+
+Generalization to Novel Scenarios. The right section of Figure 3 presents the main results of our effort to answer the central question we posed at the beginning of this paper.
+
+
+Figure 3: Joint goal accuracy $(\%)$ across different methods. "Original" refers to the results on the original held-out test set. * denotes results obtained with the in-domain unseen slot-value dictionary ($I$). VS, CoCo and CoCo+ results use the out-of-domain slot-value dictionary ($O$). For brevity, we omit CoCo and CoCo+ results using the in-domain slot-value dictionary; see Appendix C for the full results. freq, neu, and rare indicate which slot-combination dictionary is used. Lower bound refers to the percentage of correct predictions on turns with an empty turn-level belief state over the original held-out test set.
+
+Based on these results, we see that state-of-the-art DST models have serious difficulty generalizing to the novel scenarios generated by both CoCo and CoCo+ under the three slot-combination strategies. The generalization difficulty becomes even more serious for counterfactuals generated by CoCo+. As expected, the performance drop grows consistently as we combine less and less frequently co-occurring slots (ranging from freq to rare) while generating our counterfactual goals. In particular, CoCo+(rare) counterfactuals drop the accuracy of TRADE from $49.4\%$ to $18.6\%$, pushing its performance very close to its lower bound of $13.8\%$. Even the performance of the most robust of the three models (TRIPPY) drops by up to $25.8\%$, suggesting that held-out accuracy for state-of-the-art DST models may not sufficiently reflect their generalization capabilities.
+
+Transferability Across Models. As highlighted before, a significant difference and advantage of our proposed approach lies in its model-agnostic nature, making it immediately applicable to the evaluation of any DST model. As can be inferred from Figure 3, the effect of CoCo-generated counterfactuals on joint goal accuracy is quite consistent across all three DST models. This result empirically demonstrates the transferability of CoCo, strengthening its reliability and applicability as a general robustness evaluation for DST models in future research.
+
+# 5.3 HUMAN EVALUATION
+
+We next examine the quality of our generated data from two perspectives: "human likeliness" and "turn-level belief state correctness". Human likeliness evaluates whether a user utterance is fluent and consistent with its dialogue context. Turn-level belief state correctness evaluates whether $(U_t^{\mathrm{sys}}, \hat{U}_t^{\mathrm{usr}})$ exactly expresses the goals in $\hat{L}_t$. Both metrics are based on binary evaluation. We randomly sample 100 turns from the original test data along with their corresponding CoCo-generated counterparts.
+
+|  | Human likeliness | Correctness |
+| --- | --- | --- |
+| Human | 87% | 85% |
+| CoCo(ori) | 90% | 91% |
+| CoCo(freq) | 90% | 99% |
+| CoCo(neu) | 79% | 98% |
+| CoCo(rare) | 82% | 96% |
+
+Table 3: Human evaluation.
+
+For the CoCo-generated data, we have two settings to examine its quality. The first uses the original turn-level belief state to generate the user utterance, denoted by CoCo(ori). The second verifies the quality of the conversations generated by CoCo(freq), CoCo(neu) and CoCo(rare), as these hurt the DST models' accuracy significantly, as shown in Figure 3. For each result row reported in Table 3, we ask three individuals with proficient English and advanced NLP backgrounds to conduct the evaluation, and use majority voting to determine the final scores.
+
+We can see that CoCo(ori)-generated conversations are almost as human-like as the original conversations. Furthermore, CoCo(ori) generated slightly more "correct" responses than the original utterances in MultiWOZ 2.1. A presumable reason is that annotation errors exist in MultiWOZ 2.1, while our CoCo is trained on the recently released, cleaner MultiWOZ 2.2, giving the generated data higher quality. In addition, all three variants of the CoCo-generated conversations consistently outperform the human responses in terms of turn-level belief state correctness.
+
+
+Figure 4: Comparison of retrained DST models (indicated by $\diamond$) on CoCo+(rare)-augmented training data with their counterparts trained on the original MultiWOZ train split.
+
+Although CoCo(neu) and CoCo(rare) are slightly less human-like than the original human responses, CoCo(freq)-generated utterances have human-likeness similar to the original ones. These results demonstrate the effectiveness of our proposed approach in generating not only high-fidelity but also human-like user utterances, proving its potential to be adopted as part of the robustness evaluation of DST models.
+
+# 5.4 ANALYSIS OF COCo+ AS DATA AUGMENTATION DEFENSE
+
+So far, we have focused on the generalization capability of DST models on CoCo-generated conversations using different slot-value and slot-combination dictionaries. We have observed that all three DST models are consistently most susceptible to conversations generated by the CoCo+(rare) strategy. We now seek to answer the following question: would using conversations generated by CoCo+(rare) to augment the training data help these DST models better generalize to unseen slot values and/or novel scenarios? To explore this direction in a principled way, we design a new slot-value dictionary (train-$O$) similar to the out-of-domain unseen slot-value dictionary ($O$). For a fair comparison, we make sure that the slot values in train-$O$ (please refer to Appendix I for the complete dictionary) do not overlap with those in the dictionary ($O$) used for generating test conversations.
+
+We first retrain each DST model on the MultiWOZ training split augmented with CoCo+(rare)-generated conversations using the train-$O$ slot-value dictionary. The retrained DST models are then evaluated on the original test set as well as on the counterfactual test sets generated by VS and various versions of CoCo+. Results presented in Figure 4 show that retraining on the CoCo+(rare)-augmented training data improves the robustness of all three DST models across the board. Most notably, it recovers the performance of TRIPPY on the CoCo+(rare)-generated test set from $35.5\%$ to $56.2\%$, significantly closing the gap with its performance ($61.3\%$) on the original test set. We also observe that the retrained DST models obtain improved joint goal accuracy on the original MultiWOZ test set compared to their counterparts trained only on the original MultiWOZ train split, further validating the quality of CoCo-generated conversations. Finally, we would like to highlight that retrained TRIPPY achieves $62.6\%$ joint goal accuracy, improving the previous state of the art by $1.3\%$. We leave the exploration of how to fully harness CoCo as a data augmentation approach to future work.
+
+# 6 CONCLUSION
+
+We propose a principled, model-agnostic approach (CoCo) to evaluate dialogue state trackers beyond the held-out evaluation set. We show that state-of-the-art DST models' performance drops significantly when they are evaluated on CoCo-generated conversations. Human evaluations validate that the generated conversations have high fidelity and are human-like. Hence, we conclude that these strong DST models have difficulty generalizing to novel scenarios with unseen slot values and rare slot combinations, confirming the limitations of relying only on held-out accuracy. When explored as a data augmentation method, CoCo consistently improves state-of-the-art DST models not only on the CoCo-generated evaluation set but also on the original test set. This further demonstrates the benefit and potential of our approach to be adopted as part of a more comprehensive evaluation of DST models.
+
+# REFERENCES
+
+M. Alzantot, Y. Sharma, A. Elgohary, B.-J. Ho, M. Srivastava, and K.-W. Chang. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890-2896, Brussels, Belgium, Oct.-Nov. 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1316. URL https://www.aclweb.org/anthology/D18-1316.
+Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137-1155, Mar. 2003. ISSN 1532-4435.
+P. Budzianowski, T.-H. Wen, B.-H. Tseng, I. Casanueva, S. Ultes, O. Ramadan, and M. Gašić. MultiWOZ - a large-scale multi-domain wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018.
+B. Byrne, K. Krishnamoorthi, C. Sankar, A. Neelakantan, B. Goodrich, D. Duckworth, S. Yavuz, A. Dubey, K.-Y. Kim, and A. Cedilnik. Taskmaster-1: Toward a realistic and diverse dialog dataset. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019.
+G. Chao and I. Lane. Bert-dst: Scalable end-to-end dialogue state tracking with bidirectional encoder representations from transformer. In INTERSPEECH, 2019.
+L. Chen, B. Lv, C. Wang, S. Zhu, B. Tan, and K. Yu. Schema-guided multi-domain dialogue state tracking with graph attention neural networks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7521-7528, Apr. 2020. doi: 10.1609/aaai.v34i05.6250. URL https://ojs.aaai.org/index.php/AAAI/article/view/6250.
+M. Cheng, W. Wei, and C.-J. Hsieh. Evaluating and enhancing the robustness of dialogue systems: A case study on a negotiation agent. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3325-3335, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1336. URL https://www.aclweb.org/anthology/N19-1336.
+J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019.
+A. Einolghozati, S. Gupta, M. Mohit, and R. Shah. Improving robustness of task oriented dialog systems. ArXiv, abs/1911.05153, 2019.
+M. Eric, R. Goel, S. Paul, A. Kumar, A. Sethi, A. K. Goyal, P. Ku, S. Agarwal, S. Gao, and D. Z. Hakkani-Tür. Multiwoz 2.1: Multi-domain dialogue state corrections and state tracking baselines. ArXiv, abs/1907.01669, 2019.
+S. Gao, A. Sethi, S. Agarwal, T. Chung, and D. Z. Hakkani-Tür. Dialog state tracking: A neural reading comprehension approach. ArXiv, abs/1908.01946, 2019.
+I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1412.6572.
+M. Heck, C. van Niekerk, N. Lubis, C. Geishauser, H.-C. Lin, M. Moresi, and M. Gasic. TripPy: A triple copy strategy for value independent neural dialog state tracking. In Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 35-44, 1st virtual meeting, July 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.sigdial-1.4.
+
+M. Henderson, B. Thomson, and S. Young. Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292-299, Philadelphia, PA, U.S.A., June 2014. Association for Computational Linguistics. doi: 10.3115/v1/W14-4340. URL https://www.aclweb.org/anthology/W14-4340.
+E. Hosseini-Asl, B. McCann, C.-S. Wu, S. Yavuz, and R. Socher. A simple language model for task-oriented dialogue, 2020.
+M. Iyyer, J. Wieting, K. Gimpel, and L. Zettlemoyer. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1170. URL https://www.aclweb.org/anthology/N18-1170.
+R. Jia and P. Liang. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark, Sept. 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1215. URL https://www.aclweb.org/anthology/D17-1215.
+D. Jin, Z. Jin, J. T. Zhou, and P. Szolovits. Is bert really robust? natural language attack on text classification and entailment. ArXiv, abs/1907.11932, 2019.
+D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
+A. Kumar, P. Ku, A. K. Goyal, A. Metallinou, and D. Z. Hakkani-Tür. Ma-dst: Multi-attention based scalable dialog state tracking. ArXiv, abs/2002.08898, 2020.
+H. T. Le, R. Socher, and S. Hoi. Non-autoregressive dialog state tracking. ArXiv, abs/2002.08024, 2020.
+M. Lewis, D. Yarats, Y. Dauphin, D. Parikh, and D. Batra. Deal or no deal? end-to-end learning of negotiation dialogues. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2443-2453, Copenhagen, Denmark, Sept. 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1259. URL https://www.aclweb.org/anthology/D17-1259.
+M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. ArXiv, abs/1910.13461, 2020.
+J. Li, M. Galley, C. Brockett, J. Gao, and W. Dolan. A diversity-promoting objective function for neural conversation models. ArXiv, abs/1510.03055, 2016.
+Z. Lin, A. Madotto, G. I. Winata, and P. Fung. Mintl: Minimalist transfer learning for task-oriented dialogue systems. ArXiv, abs/2009.12005, 2020.
+B. Liu and I. Lane. End-to-end learning of task-oriented dialogs. In Annual Meeting of the Association for Computational Linguistics (ACL), 2018.
+S. Mehri, M. Eric, and D. Hakkani-Tur. DialoGLUE: A natural language understanding benchmark for task-oriented dialogue. ArXiv, abs/2009.13570, 2020.
+A. Neelakantan, S. Yavuz, S. Narang, V. Prasad, B. Goodrich, D. Duckworth, C. Sankar, and X. Yan. Neural assistant: Joint action prediction, response generation, and latent knowledge reasoning. In NeurIPS 2019 Conversational AI Workshop, 2019.
+N. Papernot, P. McDaniel, A. Swami, and R. E. Harang. Crafting adversarial input sequences for recurrent neural networks. MILCOM 2016 - 2016 IEEE Military Communications Conference, pages 49-54, 2016.
+
+K. Patel, J. Fogarty, J. A. Landay, and B. Harrison. Investigating statistical machine learning as a tool for software development. In CHI, 2008.
+B. Peng, C. Li, J. Li, S. Shayandeh, L. Liden, and J. Gao. Soloist: Few-shot task-oriented dialog with a single pre-trained auto-regressive model, 2020.
+J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, 2014. URL http://www.aclweb.org/anthology/D14-1162.
+A. Radford, J. Wu, R. Child, L. David, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 2019.
+C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2020.
+P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ questions for machine comprehension of text. ArXiv, abs/1606.05250, 2016.
+A. Rastogi, X. Zang, S. Sunkara, R. Gupta, and P. Khaitan. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. arXiv preprint arXiv:1909.05855, 2019.
+M. T. Ribeiro, T. Wu, C. Guestrin, and S. Singh. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2020.
+A. See, P. J. Liu, and C. D. Manning. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1099. URL https://www.aclweb.org/anthology/P17-1099.
+C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014. URL http://arxiv.org/abs/1312.6199.
+J. Wang, Y. Zhang, T.-K. Kim, and Y. Gu. Modelling hierarchical structure between dialogue policy and natural language generator with option framework for task-oriented dialogue system. ArXiv, abs/2006.06814, 2020a.
+K. Wang, J.-F. Tian, R. Wang, X. Quan, and J. Yu. Multi-domain dialogue acts and response cogeneration. In Annual Meeting of the Association for Computational Linguistics (ACL), 2020b.
+T.-H. Wen, D. Vandyke, N. Mrkšić, M. Gašić, L. M. Rojas-Barahona, P.-H. Su, S. Ultes, and S. Young. A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 438-449, Valencia, Spain, Apr. 2017. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/E17-1042.
+C.-S. Wu, A. Madotto, E. Hosseini-Asl, C. Xiong, R. Socher, and P. Fung. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2019.
+A. W. Yu, D. Dohan, M.-T. Luong, R. Zhao, K. Chen, M. Norouzi, and Q. V. Le. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. In 6th International Conference on Learning Representations (ICLR), 2018.
+X. Zang, A. Rastogi, S. Sunkara, R. Gupta, J. Zhang, and J. Chen. MultiWOZ 2.2: A dialogue dataset with additional annotation corrections and state tracking baselines. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 109-117, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.nlp4convai-1.13. URL https://www.aclweb.org/anthology/2020.nlp4convai-1.13.
+
+J. Zhang, K. Hashimoto, C.-S. Wu, Y. Wan, P. S. Yu, R. Socher, and C. Xiong. Find or classify? dual strategy for slot-value predictions on multi-domain dialog state tracking. ArXiv, abs/1910.03544, 2019a.
+Y. Zhang, Z. Ou, and Z. Yu. Task-oriented dialog systems that consider multiple appropriate responses under the same context, 2019b.
+
+# APPENDIX
+
+# A SLOT-LEVEL ANALYSIS
+
+Closer Look at the Effect of $\mathrm{CoCo+}$(rare) on TRIPPY. In Figure 5, we take a closer look at the robustness of TRIPPY through a slot-level analysis across three major scenarios. A comparison of the blue and orange lines reveals that counterfactuals generated by $\mathrm{CoCo+}$(rare) consistently degrade the performance of the TRIPPY model (trained on the original MultiWOZ train split) across all slots, significantly hurting the accuracy of most slots in the train domain along with the book day slot in the hotel domain. On the other hand, comparing the green and orange lines clearly demonstrates the effectiveness of $\mathrm{CoCo+}$(rare) as a data augmentation defense (see Section 5.4 for further details), helping TRIPPY recover from most of the errors it made on the $\mathrm{CoCo+}$(rare) evaluation set. In fact, it rebounds the joint goal accuracy of TRIPPY from $35.5\%$ to $56.2\%$, as presented more quantitatively in Figure 4.
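The two metrics contrasted in this analysis can be sketched as follows. This is an illustrative implementation, not the paper's evaluation code, and it assumes gold and predicted dialogue states are dictionaries mapping slot names to values.

```python
from collections import defaultdict

def joint_goal_accuracy(gold_states, pred_states):
    """Fraction of turns whose full predicted state matches the gold state."""
    correct = sum(g == p for g, p in zip(gold_states, pred_states))
    return correct / len(gold_states)

def slot_accuracy(gold_states, pred_states):
    """Per-slot accuracy over all turns whose gold state contains the slot."""
    hits, totals = defaultdict(int), defaultdict(int)
    for g, p in zip(gold_states, pred_states):
        for slot, value in g.items():
            totals[slot] += 1
            hits[slot] += int(p.get(slot) == value)
    return {slot: hits[slot] / totals[slot] for slot in totals}
```

A single wrong slot value zeroes out a whole turn under the joint metric while leaving the other slots' accuracies intact, which is why the slot-level view localizes the errors behind the joint-accuracy drops.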
+
+
+Figure 5: Slot-level accuracy analysis of TRIPPY. "Ori-TripPy-Clean" (blue) and "Ori-TripPy-CoCo+(rare)" (orange) denote TRIPPY (trained on original MultiWOZ training data) when evaluated against original test set and CoCo+(rare) generated test set, respectively. "Aug-TripPy-CoCo+(rare)" (green) indicates slot-level accuracy of TRIPPY after data augmentation (see Section 5.4 for further details) when evaluated against test set generated by CoCo+(rare).
+
+# B ABLATION STUDY ON OPERATIONS
+
+In Table 4, we present ablation results for the three meta operations (i.e., drop, change, add) used to generate counterfactual goals. The result in the first row corresponds to the performance of the three DST models on the evaluation set generated by CoCo with all three meta operations and the classifier filter. Each subsequent row analyzes the effect of the corresponding meta operation or the classifier by removing it from the full model. From Table 4, we observe that removing the drop operation from the full model hurts the performance of the three models further. This may indicate that the investigated DST models are more vulnerable to user utterances containing more slot combinations.
+
+| CoCo | TRADE | SIMPLETOD | TRIPPY |
| Full | 26.2 | 31.6 | 42.3 |
| -Drop | 25.7 | 31.1 | 42.1 |
| -Add | 30.4 | 36.0 | 50.4 |
| -Change | 34.1 | 40.9 | 48.3 |
| -Classifier | 25.3 | 30.5 | 41.3 |
+
+Table 4: Ablation study on the meta operations and classifier based filtering.
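A minimal sketch of how the three meta operations might act on a turn-level goal is given below. `apply_meta_operations` and both dictionary arguments (mirroring the slot-combination and slot-value dictionaries of Appendices H and I) are hypothetical stand-ins, not the authors' implementation.

```python
import random

def apply_meta_operations(turn_goal, slot_value_dict, slot_combination_dict,
                          ops=("drop", "change", "add"), rng=random):
    """Return a counterfactual copy of turn_goal; the input is left untouched."""
    goal = dict(turn_goal)
    if "drop" in ops and len(goal) > 1:
        goal.pop(rng.choice(sorted(goal)))  # drop one slot-value pair
    if "change" in ops:
        slot = rng.choice(sorted(goal))
        goal[slot] = rng.choice(slot_value_dict[slot])  # substitute a new value
    if "add" in ops:
        anchor = rng.choice(sorted(goal))
        candidates = [s for s in slot_combination_dict.get(anchor, ())
                      if s not in goal and s in slot_value_dict]
        if candidates:  # add a slot licensed by the combination dictionary
            new_slot = rng.choice(candidates)
            goal[new_slot] = rng.choice(slot_value_dict[new_slot])
    return goal
```

In this sketch, ablating an operation as in Table 4 corresponds to removing its name from `ops`.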
+
+# C FULL FIGURE FOR MAIN RESULT
+
+
+Figure 6: Joint goal accuracy (\%) across different methods. "Original" refers to the results on the original held-out test set. * denotes results obtained from in-domain unseen slot-value dictionary (I) while other results use out-of-domain slot-value dictionary (O). freq, neu, and rare indicate which slot-combination dictionary is used.
+
+# D COCO+ MULTI-ROUND DATA AUGMENTATION ON TRIPPY
+
+Section 5.4 shows that $\mathrm{CoCo+}$ as data augmentation (COCOAUG) improves TRIPPY's joint goal accuracy by $1.3\%$ when evaluated on the original test set, following the post-processing strategy employed by SIMPLETOD. In this section, we extend the previous single-round data augmentation to multiple rounds. Specifically, for each tuple $< X_{t},L_{t},B_{t}>$ in the original training set, we can generate multiple tuples $< \hat{X}_t,\hat{L}_t,\hat{B}_t>$ by sampling $\hat{L}_t$ multiple times and utilizing $\mathrm{CoCo+}$ to generate the corresponding $\hat{X}_t$ and $\hat{B}_t$. The generated tuples, combined with the original $< X_{t},L_{t},B_{t}>$, can then be used to train DST models.
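The multi-round procedure can be sketched as below; `sample_goal` and `coco_generate` are hypothetical stand-ins for the counterfactual goal sampler and the CoCo+ generation pipeline.

```python
def augment(dataset, sample_goal, coco_generate, k=4):
    """Multi-round augmentation: k generated tuples per original tuple."""
    augmented = list(dataset)  # keep every original (X_t, L_t, B_t)
    for X_t, L_t, B_t in dataset:
        for _ in range(k):
            L_hat = sample_goal(L_t)                  # counterfactual goal
            X_hat, B_hat = coco_generate(X_t, L_hat)  # utterance and state
            augmented.append((X_hat, L_hat, B_hat))
    return augmented
```

With `k` in {1, 2, 4, 8} this yields the training-set sizes compared in Table 5.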
+
+We experiment with $\{1,2,4,8\}$ times the original training data as the augmentation size on TRIPPY, following its own default cleaning so that results are comparable with previous methods. Comparisons against different baselines and across data augmentation sizes are summarized in Table 5. As more CoCo+-generated training data is used, TRIPPY benefits from the additional data and consistently improves over the baselines. With $8\times$ CoCo+-generated training data, TRIPPY obtains a $5.49\%$ improvement over its counterpart without data augmentation. Furthermore, it achieves a new state-of-the-art joint goal accuracy, outperforming CONVBERT-DG+MULTI, which uses open-domain dialogues and DialoGLUE (Mehri et al., 2020) as additional training data.
+
+| Model | JOINT GOAL ACCURACY |
| DSTreader (Gao et al., 2019) | 36.40%† |
| TRADE (Wu et al., 2019) | 45.60%† |
| MA-DST (Kumar et al., 2020) | 51.04%† |
| NA-DST (Le et al., 2020) | 49.04%† |
| DST-picklist (Zhang et al., 2019a) | 53.30%† |
| SST (Chen et al., 2020) | 55.23%† |
| MinTL(T5-small) (Lin et al., 2020) | 50.95% § |
| SimpleTOD (Hosseini-Asl et al., 2020) | 55.76% § |
| ConvBERT-DG+Multi (Mehri et al., 2020) | 58.70% §¶ |
| TRIPPY (Heck et al., 2020) | 55.04%* |
| + CoCoAUG (1×) | 56.00% |
| + CoCoAUG (2×) | 56.94% |
| + CoCoAUG (4×) | 59.73% |
| + CoCoAUG (8×) | 60.53% |
+
+Table 5: Joint goal accuracy results of different methods on MultiWOZ 2.1 (Eric et al., 2019). The upper part lists results of various baselines; the lower part lists results of TRIPPY without and with $\{1,2,4,8\}$ times the original training data as augmentation. †: results reported by Zhang et al. (2019a). §: results reported in the original papers. *: result of our run based on the officially released code. ¶: result requires open-domain dialogues and DialoGLUE data.
+
+# E MODEL DETAILS
+
+# E.1 THE DETAILS OF CONTROLLABLE GENERATION MODEL
+
+We instantiate $p_{\theta}(U_t^{\mathrm{usr}}|U_t^{\mathrm{sys}}, L_t)$ with T5-small (Raffel et al., 2020) and use MultiWOZ 2.2 as its training data, since it is cleaner than previous versions (Zang et al., 2020). During training, we use the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of $5e-5$ and a linear warmup of 200 steps. The batch size is set to 36 and the number of training epochs to 10. The maximum sequence length of both the encoder and the decoder is set to 100. We select the best checkpoint according to the lowest perplexity on the development set.
+
+# E.2 THE DETAILS OF CLASSIFIER FILTER
+
+We employ BERT-base-uncased as the backbone of our classifier filter and train it with the Adam optimizer (Kingma and Ba, 2015) on MultiWOZ 2.2, since it is cleaner than previous versions (Zang et al., 2020). During training, we select the best checkpoint based on the highest recall on the development set. The best checkpoint achieves a precision of $92.3\%$ and a recall of $93.5\%$ on the development set of MultiWOZ 2.2, and a precision of $93.1\%$ and a recall of $91.6\%$ on its original test set.
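The filtering decision itself reduces to a conjunction over the goal's slots, sketched below with `slot_expressed` as a hypothetical stand-in for the trained BERT-based classifier.

```python
def passes_filter(utterance, goal_slots, slot_expressed):
    """Keep a generated utterance only if the classifier judges every
    slot of the counterfactual goal to be expressed in it."""
    return all(slot_expressed(utterance, slot) for slot in goal_slots)
```

Under this reading, selecting the checkpoint for high recall makes the filter less likely to miss slots that are in fact expressed, and hence less likely to discard faithful generations.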
+
+# F DIVERSITY EVALUATION
+
+| slot name | data | area | book day | book time | food | name | price range | entropy |
| book people | Ori-test | 2.7 | 36.9 | 37.7 | 1.6 | 18.7 | 2.4 | 0.57 |
| book people | CoCo-test | 3.6 | 38.5 | 25.2 | 15.6 | 14.8 | 2.2 | 0.65 |
+
+Table 6: Comparison of the co-occurrence distribution (%) of the book people slot with other slots in the restaurant domain within the same user utterance, for the original test set (Ori-test) and the CoCo-generated test set (CoCo-test). The distribution entropy of CoCo-test is higher than that of Ori-test, with an upper bound of 0.78 corresponding to the uniform distribution, meaning that CoCo-test is more diverse than Ori-test in terms of slot combinations.
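The entropy figures in Table 6 can be reproduced from its rows. In this sketch entropy is taken in base 10, which makes the upper bound for a uniform distribution over six co-occurring slots $\log_{10} 6 \approx 0.78$, matching the value quoted in the caption.

```python
import math

def entropy(probs, base=10):
    """Shannon entropy of a discrete distribution, skipping zero entries."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Ori-test co-occurrence row of Table 6, expressed as fractions.
ori_test = [0.027, 0.369, 0.377, 0.016, 0.187, 0.024]
print(round(entropy(ori_test), 2))      # 0.57, as reported
print(round(entropy([1 / 6] * 6), 2))   # 0.78, the uniform upper bound
```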
+
+| Data | Distinct-1 ↑ | Distinct-2 ↑ | Distinct-3 ↑ | Distinct-4 ↑ |
| Ori-test | 0.009 | 0.051 | 0.105 | 0.151 |
| CoCo-test | 0.009 | 0.053 | 0.113 | 0.166 |
+
+Table 7: Language diversity comparison between Ori-test and CoCo-test. We use the unique n-gram ratio (Li et al., 2016) as our diversity metric; $\uparrow$ indicates that a higher number means more diversity. Overall, CoCo-test has similar (if not better) diversity scores compared to Ori-test.
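The Distinct-n scores in Table 7 follow the unique n-gram ratio of Li et al. (2016); a minimal sketch over pre-tokenized utterances:

```python
def distinct_n(utterances, n):
    """Ratio of unique n-grams to total n-grams over tokenized utterances."""
    ngrams = [tuple(tokens[i:i + n])
              for tokens in utterances
              for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

corpus = [["i", "need", "a", "taxi"], ["i", "need", "a", "train"]]
print(distinct_n(corpus, 1))  # 5 unique unigrams / 8 total = 0.625
```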
+
+# G GENERATED EXAMPLES BY COCO
+
+
+Figure 7: Zero-shot generation ability of CoCo on the flight domain, which is never seen during training.
+
+
+Figure 8: A successful and a failed example generated by CoCo with different slot-value combinations.
+
+
+Figure 9: An example generated by CoCo that is predicted correctly by TRADE, SIMPLETOD and TRIPPY without retraining.
+
+
+Figure 10: An example generated by CoCo that is predicted incorrectly by TRADE, SIMPLETOD and TRIPPY without retraining.
+
+
+Figure 11: An example from the original MultiWOZ test set that is predicted incorrectly by the original TRADE, SIMPLETOD and TRIPPY but corrected by their retrained counterparts.
+
+
+Figure 12: An example from the CoCo(rare) evaluation set that is predicted incorrectly by the original TRADE, SIMPLETOD and TRIPPY but corrected by their retrained counterparts.
+
+# H SLOT-COMBINATION DICTIONARY
+
+Please find the different slot-combination dictionaries introduced in the main paper below.
+
+| domain-slot | freq |
| "hotel-internet" | ["hotel-area","hotel-parking","hotel-pricerange","hotel-stars","hotel-type"] |
| "hotel-type" | ["hotel-area","hotel-internet","hotel-parking","hotel-pricerange","hotel-stars"] |
| "hotel-parking" | ["hotel-area","hotel-internet","hotel-pricerange","hotel-stars","hotel-type"] |
| "hotel-pricerange" | ["hotel-area","hotel-internet","hotel-parking","hotel-stars","hotel-type"] |
| "hotel-book day" | ["hotel-book people","hotel-book stay"] |
| "hotel-book people" | ["hotel-book day","hotel-book stay"] |
| "hotel-book stay" | ["hotel-book day","hotel-book people"] |
| "hotel-stars" | ["hotel-area","hotel-internet","hotel-parking","hotel-pricerange","hotel-type"] |
| "hotel-area" | ["hotel-internet","hotel-parking","hotel-pricerange","hotel-stars","hotel-type"] |
| "hotel-name" | ["hotel-book day","hotel-book people","hotel-book stay"] |
| "restaurant-area" | ["restaurant-food","restaurant-pricerange"] |
| "restaurant-food" | ["restaurant-area","restaurant-pricerange"] |
| "restaurant-pricerange" | ["restaurant-area","restaurant-food"] |
| "restaurant-name" | ["restaurant-book day","restaurant-book people","restaurant-book time"] |
| "restaurant-book day" | ["restaurant-book people","restaurant-book time"] |
| "restaurant-book people" | ["restaurant-book day","restaurant-book time"] |
| "restaurant-book time" | ["restaurant-book day","restaurant-book people"] |
| "taxi-arriveby" | ["taxi-leaveat","train-book people"] |
| "taxi-leaveat" | ["taxi-arriveby","train-book people"] |
| "taxi-departure" | ["taxi-destination","taxi-leaveat","taxi-arriveby"] |
| "taxi-destination" | ["taxi-departure","taxi-arriveby","taxi-leaveat"] |
| "train-arriveby" | ["train-day","train-leaveat","train-book people"] |
| "train-departure" | ["train-arriveby","train-leaveat","train-destination","train-day","train-book people"] |
| "train-destination" | ["train-arriveby","train-leaveat","train-departure","train-day","train-book people"] |
| "train-day" | ["train-arriveby","train-leaveat","train-book people"] |
| "train-leaveat" | ["train-day"] |
| "train-book people" | [] |
| "attraction-name" | [] |
| "attraction-area" | ["attraction-type"] |
| "attraction-type" | ["attraction-area"] |
+
+Table 8: Slot-combination dictionary for freq case.
+
+| slot-name | neu |
| ‘hotel-internet’ | [‘hotel-book day’,‘hotel-name’,‘hotel-book stay’,‘hotel-pricerange’,‘hotel-stars’,‘hotel-area’,‘hotel-book people’,‘hotel-type’,‘hotel-parking’] |
| ‘hotel-area’ | [‘hotel-book day’,‘hotel-name’,‘hotel-book stay’,‘hotel-pricerange’,‘hotel-stars’,‘hotel-book people’,‘hotel-internet’,‘hotel-type’,‘hotel-parking’] |
| ‘hotel-parking’ | [‘hotel-book day’,‘hotel-name’,‘hotel-book stay’,‘hotel-pricerange’,‘hotel-stars’,‘hotel-area’,‘hotel-book people’,‘hotel-internet’,‘hotel-type’] |
| ‘hotel-pricerange’ | [‘hotel-book day’,‘hotel-name’,‘hotel-book stay’,‘hotel-stars’,‘hotel-area’,‘hotel-book people’,‘hotel-internet’,‘hotel-type’,‘hotel-parking’] |
| ‘hotel-stars’ | [‘hotel-book day’,‘hotel-name’,‘hotel-book stay’,‘hotel-pricerange’,‘hotel-area’,‘hotel-book people’,‘hotel-internet’,‘hotel-type’,‘hotel-parking’] |
| ‘hotel-type’ | [‘hotel-book day’,‘hotel-book stay’,‘hotel-pricerange’,‘hotel-stars’,‘hotel-area’,‘hotel-book people’,‘hotel-internet’,‘hotel-parking’] |
| ‘hotel-name’ | [‘hotel-book day’,‘hotel-book stay’,‘hotel-pricerange’,‘hotel-stars’,‘hotel-area’,‘hotel-book people’,‘hotel-internet’,‘hotel-parking’] |
| ‘hotel-book day’ | [‘hotel-name’,‘hotel-book stay’,‘hotel-pricerange’,‘hotel-stars’,‘hotel-area’,‘hotel-book people’,‘hotel-internet’,‘hotel-type’,‘hotel-parking’] |
| ‘hotel-book people’ | [‘hotel-book day’,‘hotel-name’,‘hotel-book stay’,‘hotel-pricerange’,‘hotel-stars’,‘hotel-area’,‘hotel-internet’,‘hotel-type’,‘hotel-parking’] |
| ‘hotel-book stay’ | [‘hotel-book day’,‘hotel-name’,‘hotel-pricerange’,‘hotel-stars’,‘hotel-area’,‘hotel-book people’,‘hotel-internet’,‘hotel-type’,‘hotel-parking’] |
| ‘restaurant-area’ | [‘restaurant-book day’,‘restaurant-name’,‘restaurant-food’,‘restaurant-book people’,‘restaurant-book time’,‘restaurant-pricerange’] |
| ‘restaurant-food’ | [‘restaurant-book day’,‘restaurant-book people’,‘restaurant-book time’,‘restaurant-area’,‘restaurant-pricerange’] |
| ‘restaurant-pricerange’ | [‘restaurant-book day’,‘restaurant-name’,‘restaurant-food’,‘restaurant-book people’,‘restaurant-book time’,‘restaurant-area’] |
| ‘restaurant-name’ | [‘restaurant-book day’,‘restaurant-book people’,‘restaurant-book time’,‘restaurant-area’,‘restaurant-pricerange’] |
| ‘restaurant-book day’ | [‘restaurant-name’,‘restaurant-food’,‘restaurant-book people’,‘restaurant-book time’,‘restaurant-area’,‘restaurant-pricerange’] |
| ‘restaurant-book people’ | [‘restaurant-book day’,‘restaurant-name’,‘restaurant-food’,‘restaurant-book time’,‘restaurant-area’,‘restaurant-pricerange’] |
| ‘restaurant-book time’ | [‘restaurant-book day’,‘restaurant-name’,‘restaurant-food’,‘restaurant-book people’,‘restaurant-area’] |
| ‘taxi-departure’ | [‘taxi-destination’,‘taxi-leaveat’,‘taxi-arriveby’] |
| ‘taxi-destination’ | [‘taxi-departure’,‘taxi-leaveat’,‘taxi-arriveby’] |
| ‘taxi-leaveat’ | [‘taxi-departure’,‘taxi-destination’,‘taxi-arriveby’] |
| ‘taxi-arriveby’ | [‘taxi-departure’,‘taxi-destination’,‘taxi-leaveat’] |
| ‘train-arriveby’ | [‘train-book people’,‘train-day’,‘train-leaveat’,‘train-departure’,‘train-destination’] |
| ‘train-leaveat’ | [‘train-book people’,‘train-arriveby’,‘train-day’,‘train-departure’,‘train-destination’] |
| ‘train-departure’ | [‘train-book people’,‘train-arriveby’,‘train-day’,‘train-leaveat’,‘train-destination’] |
| ‘train-destination’ | [‘train-book people’,‘train-arriveby’,‘train-day’,‘train-leaveat’,‘train-departure’] |
| ‘train-day’ | [‘train-book people’,‘train-arriveby’,‘train-leaveat’,‘train-departure’,‘train-destination’] |
| ‘train-book people’ | [‘train-arriveby’,‘train-day’,‘train-leaveat’,‘train-departure’,‘train-destination’] |
| ‘attraction-name’ | [‘attraction-area’] |
| ‘attraction-area’ | [‘attraction-type’] |
| ‘attraction-type’ | [‘attraction-area’] |
+
+Table 9: Slot-combination dictionary for neu case.
+
+| slot-name | rare |
| ‘hotel-internet’ | [‘hotel-book people’,‘hotel-book day’,‘hotel-name’,‘hotel-book stay’] |
| ‘hotel-area’ | [‘hotel-book people’,‘hotel-book day’,‘hotel-name’,‘hotel-book stay’] |
| ‘hotel-parking’ | [‘hotel-book people’,‘hotel-book day’,‘hotel-name’,‘hotel-book stay’] |
| ‘hotel-pricerange’ | [‘hotel-book people’,‘hotel-book day’,‘hotel-name’,‘hotel-book stay’] |
| ‘hotel-stars’ | [‘hotel-book people’,‘hotel-book day’,‘hotel-name’,‘hotel-book stay’] |
| ‘hotel-type’ | [‘hotel-book people’,‘hotel-book day’,‘hotel-book stay’] |
| ‘hotel-name’ | [‘hotel-pricerange’,‘hotel-stars’,‘hotel-area’,‘hotel-internet’,‘hotel-parking’] |
| ‘hotel-book day’ | [‘hotel-name’,‘hotel-pricerange’,‘hotel-stars’,‘hotel-area’,‘hotel-internet’,‘hotel-type’,‘hotel-parking’] |
| ‘hotel-book people’ | [‘hotel-name’,‘hotel-pricerange’,‘hotel-stars’,‘hotel-area’,‘hotel-internet’,‘hotel-type’,‘hotel-parking’] |
| ‘hotel-book stay’ | [‘hotel-name’,‘hotel-pricerange’,‘hotel-stars’,‘hotel-area’,‘hotel-internet’,‘hotel-type’,‘hotel-parking’] |
| ‘restaurant-area’ | [‘restaurant-book day’,‘restaurant-name’,‘restaurant-book time’,‘restaurant-book people’] |
| ‘restaurant-food’ | [‘restaurant-book day’,‘restaurant-book time’,‘restaurant-book people’] |
| ‘restaurant-pricerange’ | [‘restaurant-book day’,‘restaurant-name’,‘restaurant-book time’,‘restaurant-book people’] |
| ‘restaurant-name’ | [‘restaurant-area’,‘restaurant-pricerange’] |
| ‘restaurant-book day’ | [‘restaurant-name’,‘restaurant-area’,‘restaurant-food’,‘restaurant-pricerange’] |
| ‘restaurant-book people’ | [‘restaurant-name’,‘restaurant-area’,‘restaurant-food’,‘restaurant-pricerange’] |
| ‘restaurant-book time’ | [‘restaurant-name’,‘restaurant-area’,‘restaurant-food’,‘restaurant-pricerange’] |
| ‘taxi-departure’ | [] |
| ‘taxi-destination’ | [] |
| ‘taxi-leaveat’ | [‘taxi-departure’, ‘taxi-destination’] |
| ‘taxi-arriveby’ | [‘taxi-departure’, ‘taxi-destination’] |
| ‘train-arriveby’ | [‘train-destination’, ‘train-departure’] |
| ‘train-leaveat’ | [‘train-destination’, ‘train-book people’, ‘train-arriveby’, ‘train-departure’] |
| ‘train-departure’ | [] |
| ‘train-destination’ | [] |
| ‘train-day’ | [‘train-destination’, ‘train-departure’] |
| ‘train-book people’ | [‘train-arriveby’, ‘train-departure’, ‘train-destination’, ‘train-day’, ‘train-leaveat’] |
| ‘attraction-name’ | [‘attraction-area’] |
| ‘attraction-area’ | [‘attraction-name’] |
| ‘attraction-type’ | [] |
+
+Table 10: Slot-combination dictionary for rare case.
+
+# I SLOT-VALUE DICTIONARY
+
+Please find the different slot-value dictionaries introduced in the main paper below.
+
+| slot-name | train-O |
| "hotel-internet" | ["yes"] |
| "hotel-type" | ["hotel", "guesthouse"] |
| "hotel-parking" | ["yes"] |
| "hotel-pricerange" | ["moderate", "cheap", "expensive"] |
| "hotel-book day" | ["march 11th", "march 12th", "march 13th", "march 14th", "march 15th", "march 16th", "march 17th", "march 18th", "march 19th", "march 20th"] |
| "hotel-book people" | ["20", "21", "22", "23", "24", "25", "26", "27", "28", "29"] |
| "hotel-book stay" | ["20", "21", "22", "23", "24", "25", "26", "27", "28", "29"] |
| "hotel-area" | ["south", "north", "west", "east", "centre"] |
| "hotel-stars" | ["0", "1", "2", "3", "4", "5"] |
| "hotel-name" | ["moody moon", "four seasons hotel", "knights inn", "travelodge", "jack summer inn", "paradise point resort"] |
| "restaurant-area" | ["south", "north", "west", "east", "centre"] |
| "restaurant-food" | ["asian fusion", "burger", "pasta", "ramen", "taiwanese"] |
| "restaurant-pricerange" | ["moderate", "cheap", "expensive"] |
| "restaurant-name" | ["buddha bowls", "pizza my heart", "pho bistro", "sushiya express", "rockfire grill", "itsuki restaurant"] |
| "restaurant-book day" | ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"] |
| "restaurant-book people" | ["20", "21", "22", "23", "24", "25", "26", "27", "28", "29"] |
| "restaurant-book time" | ["19:01", "18:06", "17:11", "19:16", "18:21", "17:26", "19:31", "18:36", "17:41", "19:46", "18:51", "17:56", "7:00 pm", "6:07 pm", "5:12 pm", "7:17 pm", "6:17 pm", "5:27 pm", "7:32 pm", "6:37 pm", "5:42 pm", "7:47 pm", "6:52 pm", "5:57 pm", "11:00 am", "11:05 am", "11:10 am", "11:15 am", "11:20 am", "11:25 am", "11:30 am", "11:35 am", "11:40 am", "11:45 am", "11:50 am", "11:55 am"] |
| "taxi-arriveby" | ["17:26", "19:31", "18:36", "17:41", "19:46", "18:51", "17:56", "7:00 pm", "6:07 pm", "5:12 pm", "7:17 pm", "6:17 pm", "5:27 pm", "11:30 am", "11:35 am", "11:40 am", "11:45 am", "11:50 am", "11:55 am"] |
| "taxi-leaveat" | ["19:01", "18:06", "17:11", "19:16", "18:21", "7:32 pm", "6:37 pm", "5:42 pm", "7:47 pm", "6:52 pm", "5:57 pm", "11:00 am", "11:05 am", "11:10 am", "11:15 am", "11:20 am", "11:25 am"] |
| "taxi-departure" | ["moody moon", "four seasons hotel", "knights inn", "travelodge", "jack summer inn", "paradise point resort"] |
| "taxi-destination" | ["buddha bowls", "pizza my heart", "pho bistro", "sushiya express", "rockfire grill", "itsuki restaurant"] |
| "train-arriveby" | ["17:26", "19:31", "18:36", "17:41", "19:46", "18:51", "17:56", "7:00 pm", "6:07 pm", "5:12 pm", "7:17 pm", "6:17 pm", "5:27 pm", "11:30 am", "11:35 am", "11:40 am", "11:45 am", "11:50 am", "11:55 am"] |
| "train-leaveat" | ["19:01", "18:06", "17:11", "19:16", "18:21", "7:32 pm", "6:37 pm", "5:42 pm", "7:47 pm", "6:52 pm", "5:57 pm", "11:00 am", "11:05 am", "11:10 am", "11:15 am", "11:20 am", "11:25 am"] |
| "train-departure" | ["gilroy", "san martin", "morgan hill", "blossom hill", "college park", "santa clara", "lawrence", "sunnyvale"] |
| "train-destination" | ["mountain view", "san antonio", "palo alto", "menlo park", "hayward park", "san mateo", "roadshow", "san bruno"] |
| "train-day" | ["march 11th", "march 12th", "march 13th", "march 14th", "march 15th", "march 16th", "march 17th", "march 18th", "march 19th", "march 20th"] |
| "train-book people" | ["20", "21", "22", "23", "24", "25", "26", "27", "28", "29"] |
| "attraction-area" | ["south", "north", "west", "east", "centre"] |
| "attraction-name" | ["grand canyon", "golden gate bridge", "niagara falls", "kennedy space center", "pike place market", "las vegas strip"] |
| "attraction-type" | ["historical landmark", "aquaria", "beach", "castle", "art gallery"] |
+
+Table 11: Slot-value dictionary of train-$O$.
+
+| slot-name | I |
| "hotel-internet" | ['yes'] |
| "hotel-type" | ['hotel', 'guesthouse'] |
| "hotel-parking" | ['yes'] |
| "hotel-pricerange" | ['moderate', 'cheap', 'expensive'] |
| "hotel-book day" | ['friday', 'tuesday', 'thursday', 'saturday', 'monday', 'sunday', 'wednesday'] |
| "hotel-book people" | ['1', '2', '3', '4', '5', '6', '7', '8'] |
| "hotel-book stay" | ['1', '2', '3', '4', '5', '6', '7', '8'] |
| "hotel-name" | ['alpha milton', 'finches bed and breakfast', 'express holiday inn by cambridge', 'wankworth house', 'alexander b and b', 'the gonville hotel'] |
| "hotel-stars" | ['0', '1', '3', '2', '4', '5'] |
| "hotel-area" | ['south', 'east', 'west', 'north', 'centre'] |
| "restaurant-area" | ['south', 'east', 'west', 'north', 'centre'] |
| "restaurant-food" | ['european', 'brazilian', 'weish'] |
| "restaurant-pricerange" | ['moderate', 'cheap', 'expensive'] |
| "restaurant-name": | [pizza hut in cherry, 'the nirala', 'barbakan', 'the golden house', 'michaelhouse', 'bridge', 'varity restaurant', 'loch', 'the peking', 'charlie', 'cambridge lodge', 'maharajah tandoori'] |
| "restaurant-book day" | ['friday', 'tuesday', 'thursday', 'saturday', 'monday', 'sunday', 'wednesday'] |
| "restaurant-book people" | ['8', '6', '7', '1', '3', '2', '4', '5'] |
| "restaurant-book time" | ['14:40', '19:00', '15:15', '9:30', '7 pm', '11 am', '8:45'] |
| "taxi-arriveby" | ['08:30', '9:45'] |
| "taxi-leaveat" | ['7 pm', '3:00'] |
| "taxi-departure" | [aylesbray lodge', fitzbillies', 'uno', 'zizzi cambridge', 'express by holiday inn', great saint marys church', 'county folk museum', riverboat', 'bishops stortford', 'cafee uno', 'hong house', 'gandhi', 'cambridge arts', 'the hotpot', 'regency gallery', 'saint johns chop shop house'], |
| "taxi-destination" | ['ashley', 'all saints', 'de luca cucina and bar's', 'the lensfield hotel', 'oak bistro', 'broxbourne', 'sleeperz hotel', "saint catherine's college"] |
| "train-arriveby" | [4:45 pm', '18:35', '21:08', '19:54', '10:08', '13:06', '15:24', '07:08', '16:23', '8:56', '09:01', '10:23', '10:00 am', '16:44', '6:15', '06:01', '8:54', '21:51', '16:07', '12:43', '20:08', '08:23', '12:56', '17:23', '11:32', '20:54', '20:06', '14:24', '18:10', '20:38', '16:06', '3:00', '22:06', '20:20', '17:51', '19:52', '7:52', '07:44', '16:08'], |
| "train-leaveat" | [13:36', '15:17', '14:21', '3:15 pm', '6:10 am', '14:40', '5:40', '13:40', '17:11', '13:50', '5:11', '11:17', '5:01', '13:24', '5:35', '07:00', '8:08', '7:40', '11:54', '12:06', '07:01', '18:09', '13:17', '21:45', '06:40', '01:44', '9:17', '20:21', '20:40', '08:11', '07:35', '14:19', '1 pm', '19:17', '19:48', '19:50', '10:36', '09:19', '19:35', '8:06', '05:29', '17:50', '15:16', '09:17', '7:35', '5:29', '17:16', '14:01', '10:21', '05:01', '15:39', '15:01', '10:11', '08:01'], |
| "train-departure": | [london liverpool street', 'kings lynn', 'norwich', 'birmingham new street', 'london kings cross', broxbourne'] |
| "train-destination" | [bishopstortford', 'cambridge', 'ely', 'stansted airport', 'peterborough', 'leicester', stevenage'] |
| "train-day" | ['friday', 'tuesday', 'thursday', 'monday', 'saturday', 'sunday', 'wednesday'] |
| "train-book people" | ['9'] |
| "attraction-name": | ['the cambridge arts theatre', 'the chchurch college', 'the castle galleries', 'cambridge', 'saint catherine's college', 'street', 'corn cambridge exchange', 'fitzwilliam', 'cafe jello museum'], |
| "attraction-area": | ['south', 'east', 'west', 'north', 'centre'], |
| "attraction-type" | ['concerthall', 'museum', 'entertainment', 'college', 'multiple sports', 'hiking', 'architecture', 'theatre', 'cinema', 'swimmingpool', 'boat', 'nightclub', 'park'] |
+
+Table 12: Slot-value dictionary for the $I$ case.
+
+| slot-name | O |
| "hotel-internet" | ['yes'] |
| "hotel-type" | ['hotel', 'guesthouse'] |
| "hotel-parking" | ['yes'] |
| "hotel-pricerange" | ['moderate', 'cheap', 'expensive'] |
| "hotel-book day" | ["april 11th", "april 12th", "april 13th", "april 14th", "april 15th", "april 16th", "april 17th", "april 18th", "april 19th", "april 20th"] |
| "hotel-book people" | ["30","31","32","33","34","35","36","37","38","39"] |
| "hotel-book stay" | ["30","31","32","33","34","35","36","37","38","39"] |
| "hotel-area" | ['south', 'east', 'west', 'north', 'centre'] |
| "hotel-stars" | ['0', '1', '2', '3', '4', '5'] |
| "hotel-name" | ["white rock hotel", "jade bay resort", "grand hyatt", "hilton garden inn", "cottage motel", "mandarin oriental"], |
| "restaurant-area" | ['south', 'east', 'west', 'north', 'centre'] |
| "restaurant-food" | ["sichuan", "fish", "noodle", "lobster", "burrito", "dumpling", "curry","taco"] |
| "restaurant-pricerange" | ['moderate', 'cheap', 'expensive'] |
| "restaurant-name": | ["lure fish house", "black sheep restaurant", "palapa restaurant", "nikka ramen", "sun sushi", "super cucas"] |
| "restaurant-book day": | ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"] |
| "restaurant-book people" | ["30","31","32","33","34","35","36","37","38","39"] |
| "restaurant-book time" | ["20:02","21:07","22:12","20:17","21:22","22:27","20:32","21:37","22:42","20:47","21:52","22:57","8:00 pm","9:04 pm","10:09 pm","8:14 pm","9:19 pm","10:24 pm","8:29 pm","9:34 pm","10:39 pm","8:44 pm","9:49 pm","10:54 pm","10:00 am","10:06 am","10:11 am","10:16 am","10:21 am","10:26 am","10:31 am","10:36 am","10:41 am","10:46 am","10:51 am","10:56 am"], |
| "taxi-arriveby": | ["20:02","21:07","22:12","20:17","21:22","22:27","9:34 pm","10:39 pm","8:44 pm","9:49 pm","10:54 pm","10:00 am","10:06 am","10:11 am","10:16 am","10:21 am","10:26 am"], |
| "taxi-leaveat": | ["21:37","22:42","20:47","21:52","22:57","8:00 pm","9:04 pm","10:09 pm","8:14 pm","9:19 pm","10:24 pm","8:29 pm","10:31 am","10:36 am","10:41 am","10:46 am","10:51 am","10:56 am"], |
| "taxi-departure": | ["lure fish house", "black sheep restaurant", "palapa restaurant", "nikka ramen", "sun sushi", "super cucas"], |
| "taxi-destination": | ["white rock hotel", "jade bay resort", "grand hyatt", "hilton garden inn", "cottage motel", "mandarin oriental"] |
| "train-departure" | ["northridge","camarillo","oxnard","morepark","simi valley","chatsworth","van nuys","glendale"] |
| "train-destination" | ["norwalk", "buena park", "fullerton", "santa ana", "tustin", "irvine", "san clemente", "oceanside"], |
| "train-arriveby": | ["20:02","21:07","22:12","20:17","21:22","22:27","9:34 pm","10:39 pm","8:44 pm","9:49 pm","10:54 pm","10:00 am","10:06 am","10:11 am","10:21 am","10:26 am"], |
| "train-day": | ["april 11th", "april 12th", "april 13th", "april 14th", "april 15th", "april 16th", "april 17th", "april 18th", "april 19th", "april 20th"], |
| "train-leaveat": | ["21:37","22:42","20:47","21:52","22:57","8:00 pm","9:04 pm","10:09 pm","8:14 pm","9:19 pm","10:24 pm","8:29 pm","10:31 am","10:36 am","10:41am","10:46 am","10:46 am","10:51 am","10:56 am"], |
| "train-book people": | ["30","31","32","33","34","35","36","37","38","39"] |
| "attraction-area": | ['south', 'east', 'west', 'north', 'centre'] |
| "attraction-name": | ["statue of liberty", "empire state building", "mount rushmore", "brooklyn bridge", "lincoln memorial", "times square"], |
| "attraction-type": | ["temple", "zoo", "library", "skyscraper", "monument"] |
+
+Table 13: Slot-value dictionary for the $O$ case.
\ No newline at end of file
diff --git a/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/images.zip b/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7d773e63a42181d3933bc9f4e3d2f2f9a285a847
--- /dev/null
+++ b/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a357faa21371c7cde6b747b6af166825eb58d71afd132aeb618502421221f979
+size 2062766
diff --git a/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/layout.json b/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..fb9dc679de67c4758a691719218d297c4da4c458
--- /dev/null
+++ b/cococontrollablecounterfactualsforevaluatingdialoguestatetrackers/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99a7476576addc27b453c84995f67a87455097a77d98a1fbc32028742f54a042
+size 636351
diff --git a/coconaselfsupervisedapproachforcontrolledtextgeneration/50c3ac00-9d5c-4ded-ad53-6a538155dc66_content_list.json b/coconaselfsupervisedapproachforcontrolledtextgeneration/50c3ac00-9d5c-4ded-ad53-6a538155dc66_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..56cef6e21c8a44262c2c5b7747a5d46222479507
--- /dev/null
+++ b/coconaselfsupervisedapproachforcontrolledtextgeneration/50c3ac00-9d5c-4ded-ad53-6a538155dc66_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b298d4467d8dfe22fa09864b4c17e8e2014ec7e2d6935bac0a8de33ea7452b9
+size 129198
diff --git a/coconaselfsupervisedapproachforcontrolledtextgeneration/50c3ac00-9d5c-4ded-ad53-6a538155dc66_model.json b/coconaselfsupervisedapproachforcontrolledtextgeneration/50c3ac00-9d5c-4ded-ad53-6a538155dc66_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..bac618613ef691f47aaca6d634856a771403a721
--- /dev/null
+++ b/coconaselfsupervisedapproachforcontrolledtextgeneration/50c3ac00-9d5c-4ded-ad53-6a538155dc66_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d9611400c7b4564a817e3d717b653df64b4242ef0f1e706600be0639be7cc4b
+size 147370
diff --git a/coconaselfsupervisedapproachforcontrolledtextgeneration/50c3ac00-9d5c-4ded-ad53-6a538155dc66_origin.pdf b/coconaselfsupervisedapproachforcontrolledtextgeneration/50c3ac00-9d5c-4ded-ad53-6a538155dc66_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a1698532f8ddde90906a1306d2e1948415090092
--- /dev/null
+++ b/coconaselfsupervisedapproachforcontrolledtextgeneration/50c3ac00-9d5c-4ded-ad53-6a538155dc66_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:851a06238437e79ac3022ca059691413ab7a883873489e8d7888d95f72385d46
+size 684849
diff --git a/coconaselfsupervisedapproachforcontrolledtextgeneration/full.md b/coconaselfsupervisedapproachforcontrolledtextgeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..42d23398a2d83d5d0c97033de9153915054b9c02
--- /dev/null
+++ b/coconaselfsupervisedapproachforcontrolledtextgeneration/full.md
@@ -0,0 +1,472 @@
+# COCON: A SELF-SUPERVISED APPROACH FOR CONTROLLED TEXT GENERATION
+
+Alvin Chan $^{1*}$ , Yew-Soon Ong $^{1}$ , Bill Pung $^{1}$ , Aston Zhang $^{2}$ , Jie Fu $^{3}$
+
+$^{1}$ Nanyang Technological University, $^{2}$ Amazon AI, $^{3}$ Mila, Polytechnique Montreal
+
+# ABSTRACT
+
+Pretrained Transformer-based language models (LMs) display remarkable natural language generation capabilities. With their immense potential, controlling text generation of such LMs is getting attention. While there are studies that seek to control high-level attributes (such as sentiment and topic) of generated text, there is still a lack of more precise control over its content at the word- and phrase-level. Here, we propose Content-Conditioner (CoCon) to control an LM's output text with a content input, at a fine-grained level. In our self-supervised approach, the CoCon block learns to help the LM complete a partially-observed text sequence by conditioning with content inputs that are withheld from the LM. Through experiments, we show that CoCon can naturally incorporate target content into generated texts and control high-level text attributes in a zero-shot manner. $^1$
+
+# 1 INTRODUCTION
+
+Transformer-based (Vaswani et al., 2017; Tay et al., 2020) pretrained language models (LMs) have led a wave of new advances in natural language processing tasks as a means to extract contextualized word embeddings (Devlin et al., 2018; Dai et al., 2019b; Yang et al., 2019) and as text generators (Radford et al., 2019; Brown et al., 2020). These LMs are trained on huge amounts of text corpora to predict next tokens through a log-likelihood objective. Given its remarkably fluent text generation, there is growing interest in controlling output texts of such LMs (Keskar et al., 2019; Dathathri et al., 2019). Approaches like training a modified LM from scratch to incorporate target text attributes (Keskar et al., 2019) can be expensive while finetuning pretrained LMs for specific attributes (Ziegler et al., 2019) limits the scope of text control. Without changing the architecture or weights of pretrained LMs, one promising approach (PPLM) (Dathathri et al., 2019) controls generated text through attribute models. Though effective in controlling high-level text attributes such as topic and sentiment, the same target attribute may generate text samples with vastly different content at the word- and phrase-levels, leaving a gap for more fine-grained control over the content of LM-generated texts.
+
+We conceptualize Content-Conditioner (CoCon) as an approach to narrow this gap by guiding pretrained LMs' text outputs through the incorporation of a content input. This content input can take the form of a text sequence whose content we would like to condition on for text generation. Essentially, CoCon comprises two parts: 1) a pretrained LM and 2) an interleaved CoCon layer. By employing a pretrained LM, CoCon incorporates the representations of a content input into the encoded text representations through the CoCon layer before passing the content-conditioned representations into $\mathrm{LM}_{\beta}$ for generation. To train the CoCon block, we propose a self-supervised learning approach where the training data consist of text samples generated by the pretrained LM itself ($\S 3.1$). By splitting each text sequence into two segments $([\mathbf{x}^a;\mathbf{x}^b])$ , CoCon learns through a self-reconstruction objective to help the LM reconstruct the missing latter segment $(\mathbf{x}^b)$ by taking $\mathbf{x}^b$ itself as the content input. We use content masking for CoCon and also propose other loss functions such as cycle reconstruction to condition content from divergent sources while producing high-quality texts. Since the CoCon block's size is a small fraction of the LM and no finetuning is conducted on the LM's weights, the training cost is significantly lower than training an LM from scratch. We show that CoCon's fine-grained content control can be extended to also influence higher-level text attributes
+
+such as topic and sentiment in a zero-shot manner, and compare it with strong controlled generation baselines. Furthermore, CoCon is versatile in assimilating multiple content inputs, and its strength of content-conditioning can be flexibly adjusted through a content bias term during inference. In this paper, we demonstrate the CoCon approach with the GPT-2 345M model (Radford et al., 2019) as the pretrained LM. Given CoCon's modular nature, it can be used with other Transformer-based LMs or even other controlled generation methods. All in all, the core contributions of this paper are:
+
+- We propose CoCon for content-conditioned language generation.
+- We introduce a self-supervised learning approach where CoCon learns to complete text sequences when given information about future tokens.
+- Through ablation studies and comparisons with strong baselines like PPLM and CTRL (Keskar et al., 2019), we investigate how CoCon controls high-level attributes such as topic and sentiment while generating texts that have high content similarity to the conditioning text.
+
+# 2 RELATED WORK
+
+There is a line of work that aims to generate output text of desired attributes with neural networks. Some of the earliest efforts involve conditional generative models (Kikuchi et al., 2016; Ficler & Goldberg, 2017) where the networks are trained on text data labeled with the target attributes. These models can be trained via reinforcement learning (Ziegler et al., 2019) or the generative adversarial network (Yu et al., 2017) framework. Unlike CoCon, the requirement of predetermined attributes in those methods limits the possible types of generated texts. CTRL (Keskar et al., 2019) is a recent approach that generates controlled fluent texts through the use of control codes, which are meta-data prepended to the text during generation. Though it produces high-quality text with its GPT-2-like architecture, its control codes are also predetermined during training. Closest to our work is the Plug and Play Language Model (PPLM) (Dathathri et al., 2019), which seeks to control text generation of an already pretrained LM without finetuning, through relatively small 'pluggable' attribute models. While PPLM's flexible design also enables controlled generation without retraining or finetuning the LM, as in CoCon, our approach aims to control the generation at a content level, beyond high-level text attributes. Another core difference lies in the training: CoCon's self-supervised learning absolves the need for labeled data, such as the data employed to train PPLM's attribute discriminator models. Weighted decoding (Ghazvininejad et al., 2017; Holtzman et al., 2018) seeks to control the output text by upweighting the probabilities of targeted words during the decoding step, but has been shown to produce incoherent text (See et al., 2019). Conditioning language generation has been used in question generation to enhance faithfulness by attending to textual context such as predicates, subject types or object types (Elsahar et al., 2018) rather than the content input used here in CoCon. Small adapter layers (Bapna et al., 2019) have previously been proposed for multilingual translation, also to save on model size and training resources, but they differ from CoCon's self-supervised training as they rely on annotated sentence pairs of different languages for training.
+
+Text style transfer is a related area that controls texts' attributes by translating text from one style to another (Dai et al., 2019a). A few of such studies employ auto-encoders to separate texts' style and non-style latent representation (Shen et al., 2017; Hu et al., 2017; Yang et al., 2018). This disentanglement enables style changes to the text at the latent space while retaining most of its content. Another work identifies attribute markers (Li et al., 2018) which are $n$ -grams correlated with a particular style in a text corpus and edit texts' style by substituting them. Essentially, style transfer alters existing texts rather than generating texts and requires predefined attributes.
+
+# 3 CONTENT CONDITIONER (COCON)
+
+In the following sections, we discuss the motivation for CoCon, its model architecture and how we train the CoCon block.
+
+Motivation In text generation with language models, given the prompt text $x_{:t-1} = \{x_1, \ldots, x_{t-1}\}$ , the following text $\{x_t, \ldots, x_l\}$ is generated in an auto-regressive manner (Manning et al., 1999; Bengio et al., 2003):
+
+$$
+p \left(x _ {t}, \dots , x _ {l} \mid x _ {1}, \dots , x _ {t - 1}\right) = \prod_ {i = t} ^ {l} p \left(x _ {i} \mid x _ {1}, \dots , x _ {i - 1}\right). \tag {1}
+$$
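As a minimal illustration of this chain-rule factorization, the sketch below scores a continuation under a toy next-token model; `sequence_log_prob` and `logits_fn` are our own hypothetical names standing in for a real LM, not part of any library:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sequence_log_prob(tokens, t, logits_fn):
    """Log p(x_t, ..., x_l | x_1, ..., x_{t-1}) as the sum of per-step
    conditional log-probabilities (the product in Eq. 1, in log space).
    `tokens` is the full 0-indexed sequence; `t` is 1-indexed as in the text."""
    total = 0.0
    for i in range(t - 1, len(tokens)):          # positions of x_t .. x_l
        probs = softmax(logits_fn(tokens[:i]))   # condition on the prefix
        total += np.log(probs[tokens[i]])
    return total

# toy "LM": next-token logits depend only on the previous token
rng = np.random.default_rng(0)
table = rng.normal(size=(4, 4))                  # vocabulary of 4 tokens
logits_fn = lambda prefix: table[prefix[-1]]

lp = sequence_log_prob([1, 3, 0, 2], t=2, logits_fn=logits_fn)
```

Each factor is a softmax over next-token logits, so the total is always a finite negative log-probability.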
+
+Previous studies on controlled text generation with LMs showed that $p(\mathbf{x})$ can be conditioned on target attributes (Dathathri et al., 2019) or control codes (Keskar et al., 2019) to control the text's sentiment or topic, i.e.,
+
+$$
p \left(x _ {t}, \dots , x _ {l} \mid x _ {1}, \dots , x _ {t - 1}\right) = \prod_ {i = t} ^ {l} p \left(x _ {i} \mid \mathbf {a}, \left\{x _ {1}, \dots , x _ {i - 1} \right\}\right), \tag {2}
+$$
+
+where $\mathbf{a}$ is the target attribute. While these methods show that the generation is fluent and can be aligned with the target attribute well, the output texts $\{x_{t},\ldots ,x_{l}\}$ are controlled at a global attribute (e.g., sentiment/topic) level rather than at a more local content (e.g., words/phrases) level. Since there is a vast number of possible $\{x_{t},\dots,x_{l}\}$ candidates which would align well with both the prompt text and target attribute, the stochastic token sampling process yields generated text samples that contain very different content. This motivates an approach that conditions on a content input $\mathbf{c}$ for more fine-grained control over text generation:
+
+$$
p \left(x _ {t}, \dots , x _ {l} \mid x _ {1}, \dots , x _ {t - 1}\right) = \prod_ {i = t} ^ {l} p \left(x _ {i} \mid \mathbf {c}, \left\{x _ {1}, \dots , x _ {i - 1} \right\}\right), \tag {3}
+$$
+
+where c can be a text sequence whose content we would like to condition on during text generation. Next, we propose the model architecture of Content-Conditioner (CoCon) as an approach for this control.
+
+Model Architecture Our proposed Content-Conditioner (Figure 1) controls the content of the generated text while maintaining fluency by incorporating a pretrained Transformer-based language model (LM), GPT-2 (Radford et al., 2019) in our experiments. Such LMs have shown remarkable natural text generation in the auto-regressive manner (Eq. 1) where the next token $x_{t}$ is sampled based on the logits $\mathbf{o}_t = \mathrm{LM}(x_{:t - 1})$ . These LMs are essentially stacks of Transformer blocks, each consisting of layer normalization (Ba et al., 2016), multi-head self-attention (Vaswani et al., 2017) and position-wise feed-forward operations.
+
+An LM's generation can be broken down into two separate parts: layers before the CoCon block $(\mathrm{LM}_{\alpha})$ and layers after $(\mathrm{LM}_{\beta})$ . The $\mathrm{LM}_{\alpha}$ acts as a feature extractor that takes in the input sequence's embeddings and outputs its intermediate representation at a breakpoint, i.e., $\mathbf{h}_{:t-1} = \mathrm{LM}_{\alpha}(x_{:t-1})$ . Subsequently, $\mathrm{LM}_{\beta}$ takes in this representation and outputs the logits for the next token, i.e., $\mathbf{o}_t = \mathrm{LM}_{\beta}(\mathbf{h}_{:t-1})$ , yielding
+
+$$
+\mathbf {o} _ {t} = \operatorname {L M} \left(x _ {: t - 1}\right) = \operatorname {L M} _ {\beta} \left(\operatorname {L M} _ {\alpha} \left(x _ {: t - 1}\right)\right) = \operatorname {L M} _ {\beta} \left(\mathbf {h} _ {: t - 1}\right). \tag {4}
+$$
+
+From Eq. 4, we can see that the representation $(\mathbf{h})$ is a medium to control next token logits $(\mathbf{o})$ and hence the text generation process. Indeed, we transform $\mathbf{h}$ by conditioning it with the content input (c) through a CoCon block such that
+
+$$
+\mathbf {h} _ {: t - 1} ^ {\prime} = \operatorname {C o C o n} \left(\mathbf {h} _ {: l _ {c}} ^ {(\mathbf {c})}, \mathbf {h} _ {: t - 1}\right), \tag {5}
+$$
+
+where $\mathbf{h}_{:l_c}^{(\mathbf{c})} = \mathrm{LM}_\alpha (\mathbf{c})$ denotes the content representations and $l_{c}$ is the length of the content text sequence. We parameterize the CoCon block as a single Transformer block with an attention and a position-wise feed-forward operation. Similar to a typical LM attention layer, the query (Q), key (K), value (V) matrices are computed through linear transformations on the representations $\mathbf{h}_{:t - 1}$ where $\mathbf{Q},\mathbf{K},\mathbf{V}\in \mathbb{R}^{(t - 1)\times d}$ and $d$ is the representations' dimension. To attend to the content representations $(\mathbf{h}_{:l_c}^{(\mathbf{c})})$ , the content keys and values $(\mathbf{K}^{(\mathbf{c})}, \mathbf{V}^{(\mathbf{c})} \in \mathbb{R}^{l_c \times d})$ are also computed and concatenated to the original attention matrices before computing the CoCon attention output:
+
+$$
+\mathbf {K} ^ {\prime} = [ \mathbf {K} ^ {(\mathbf {c})}; \mathbf {K} ], \quad \mathbf {V} ^ {\prime} = [ \mathbf {V} ^ {(\mathbf {c})}; \mathbf {V} ], \quad \mathbf {A} = \operatorname {S o f t m a x} (\mathbf {Q K} ^ {\prime \top}) \mathbf {V} ^ {\prime} = \operatorname {S o f t m a x} (\mathbf {W}) \mathbf {V} ^ {\prime}, \tag {6}
+$$
+
+where $\mathbf{A} = \{\mathbf{a}_1,\dots ,\mathbf{a}_{t - 1}\}$ and $\mathbf{W}\in \mathbb{R}^{(t - 1)\times (l_c + t - 1)}$ represents the attention weights. The final CoCon outputs are computed with a position-wise feed-forward layer. By concatenating the transformed representation to the representations prior to position $t - 1$ and passing them to $\mathrm{LM}_{\beta}$ , the next logits, and consequently the word token $\tilde{x}_t$ , are now conditioned on $\mathbf{c}$ :
+
+$$
+\mathbf {h} _ {i} ^ {\prime} = \operatorname {F F} (\mathbf {a} _ {i}), \quad \tilde {\mathbf {o}} _ {t} = \operatorname {L M} _ {\beta} ([ \mathbf {h} _ {: t - 2}; \mathbf {h} _ {t - 1} ^ {\prime} ]), \quad p _ {\theta , \psi} (\tilde {x} _ {t} | \mathbf {c}, x _ {: t - 1}) = \operatorname {S o f t m a x} (\tilde {\mathbf {o}} _ {t}), \tag {7}
+$$
+
+where $\theta$ and $\psi$ are the parameterizations of the CoCon block and the LM, respectively. Similar to a GPT-2 Transformer block, our CoCon block includes layer normalization before its multi-headed attention and feed-forward layers. Figure 1 summarizes the CoCon architecture, which enables auto-regressive text generation by using $\tilde{x}_i$ as the token input $(x_{i})$ to generate $\tilde{x}_{i + 1}$ where $i\geq t$ .
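The content-conditioned attention of Eqs. 6-7 can be sketched in a few lines of numpy. This is a single-head illustration under our own simplifications (no attention scaling, no layer normalization, random weights), not the actual multi-headed GPT-2 implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cocon_attention(h, h_c, Wq, Wk, Wv):
    """Single-head sketch of Eq. 6: queries come from the prompt
    representations h (shape (t-1, d)); keys/values are computed for
    both the content representations h_c (shape (l_c, d)) and h, then
    concatenated so every position can attend to the content input."""
    Q, K, V = h @ Wq, h @ Wk, h @ Wv
    Kc, Vc = h_c @ Wk, h_c @ Wv                  # content keys/values
    K2, V2 = np.vstack([Kc, K]), np.vstack([Vc, V])
    W = Q @ K2.T                                 # logits, (t-1, l_c + t-1)
    A = softmax(W, axis=-1) @ V2
    return A                                     # then FF(a_i) -> h'_i (Eq. 7)

d, t1, lc = 8, 5, 3
rng = np.random.default_rng(1)
h, h_c = rng.normal(size=(t1, d)), rng.normal(size=(lc, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
A = cocon_attention(h, h_c, Wq, Wk, Wv)          # shape (t-1, d)
```

The output keeps the prompt's shape `(t-1, d)`, so it can be concatenated with the untouched representations before position $t-1$ and passed onward, as in Eq. 7.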
+
+
+Figure 1: Model architecture of proposed Content-Conditioner (CoCon).
+
+Multiple Content Inputs CoCon's flexible design enables multiple content inputs for a single generation. In the case where we have $N$ content inputs $(\mathbf{c}^1,\dots ,\mathbf{c}^N)$ , the output text can be conditioned by these contents through their attention keys and values, similar to Eq. 6:
+
+$$
+\mathbf {K} ^ {\prime} = \left[ \mathbf {K} ^ {\left(\mathbf {c} ^ {1}\right)} \dots \mathbf {K} ^ {\left(\mathbf {c} ^ {N}\right)}; \mathbf {K} \right], \quad \mathbf {V} ^ {\prime} = \left[ \mathbf {V} ^ {\left(\mathbf {c} ^ {1}\right)} \dots \mathbf {V} ^ {\left(\mathbf {c} ^ {N}\right)}; \mathbf {V} \right], \quad \mathbf {A} = \operatorname {S o f t m a x} \left(\mathbf {Q K} ^ {\prime \top}\right) \mathbf {V} ^ {\prime}. \tag {8}
+$$
+
+Strength of Content Conditioning Within CoCon's attention mechanism, we can vary the extent of content conditioning on the output text by biasing the attention weights in $\mathbf{W}$ (Eq. 6) that correspond to the content input (c). More specifically, the influence of c on the output text can be altered through the attention's softmax weighting on the content values $(\mathbf{V}^{(\mathbf{c})})$ . During generation, a positive bias term $(\tau_{\mathrm{content}})$ can optionally be added to the content attention weights $\mathbf{W}_{:,:l_c} \in \mathbb{R}^{(t-1) \times l_c}$ to increase the influence of $\mathbf{V}^{(\mathbf{c})}$ , boosting content conditioning, while a negative term can conversely reduce the content-conditioning effect. We discuss examples of varying $\tau_{\mathrm{content}}$ in § 4.4.
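Concretely, the bias acts on the first $l_c$ columns of the attention logits before the softmax. A small numpy sketch (our own illustrative code, with assumed names, not the released implementation):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def content_weighting(W, l_c, tau_content=0.0):
    """Add tau_content to the attention logits of the content input
    (the first l_c columns of W in Eq. 6) before the softmax; a positive
    tau strengthens content conditioning, a negative tau weakens it."""
    Wb = W.copy()
    Wb[:, :l_c] += tau_content
    return softmax(Wb, axis=-1)

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 7))    # (t-1, l_c + t-1) with l_c = 3
base = content_weighting(W, l_c=3, tau_content=0.0)[:, :3].sum()
boost = content_weighting(W, l_c=3, tau_content=2.0)[:, :3].sum()
```

Because each row of the softmax still sums to one, a positive bias strictly shifts attention mass from the prompt columns onto the content columns.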
+
+# 3.1 SELF-SUPERVISED LEARNING
+
+We train CoCon with a self-supervised learning approach that is inspired by the diversity of content in natural language. Given a text sequence $\mathbf{x} = \{x_{1},\dots,x_{t - 1},x_{t},\dots,x_{l}\}$ of length $l$ , we can break it into two contiguous segments: $\mathbf{x}^a = \{x_1,\ldots ,x_{t - 1}\}$ and $\mathbf{x}^b = \{x_t,\dots,x_l\}$ where $\mathbf{x} = [\mathbf{x}^a;\mathbf{x}^b]$ . In the real world, there may be numerous substitutes of $\mathbf{x}^b$ that could follow from $\mathbf{x}^a$ fluently. Coupled with the randomness in text sampling, this means that, without information about $\mathbf{x}^b$ , the probability of reconstructing the full $\mathbf{x}$ from $\mathbf{x}^a$ alone with an LM can be low.
+
+Self Reconstruction Loss Based on this intuition, our approach trains the CoCon block to help the LM reconstruct the original $\mathbf{x}$ by also conditioning with $\mathbf{x}^b$ as the content input, i.e., $\mathbf{c} = \mathbf{x}^b$ (Figure 2b). More concretely, we first compute the intermediate representations of the input text $\mathbf{x}$ and $\mathbf{c}$ :
+
+$$
+\mathbf {h} _ {: l} = \operatorname {L M} _ {\alpha} (\mathbf {x}) = \operatorname {L M} _ {\alpha} (x _ {: l}), \quad \mathbf {h} _ {: l _ {c}} ^ {(\mathbf {c})} = \operatorname {L M} _ {\alpha} (\mathbf {c}) = \operatorname {L M} _ {\alpha} (x _ {t: l}), \tag {9}
+$$
+
+where $l_{c} = l - t + 1$ is the length of $\mathbf{c}$ . The content-conditioned representation can be computed by the CoCon block where $\mathbf{h}_{:l_c}^{(\mathbf{c})}$ is the content representation:
+
+$$
+\mathbf {h} _ {i} ^ {\prime} = \operatorname {C o C o n} \left(\mathbf {h} _ {: l _ {c}} ^ {(\mathbf {c})}, \mathbf {h} _ {: i}\right), \quad \forall i \geq t - 1. \tag {10}
+$$
+
+Similar to Eq. 7, the CoCon transformed representations are concatenated to the original representation before $t - 1$ and passed into $\mathrm{LM}_{\beta}$ to produce the LM logits:
+
+$$
+\tilde {\mathbf {o}} _ {i + 1} = \operatorname {L M} _ {\beta} \left(\left[ \mathbf {h} _ {: t - 2}; \mathbf {h} _ {t - 1: i} ^ {\prime} \right]\right), \quad p _ {\theta , \psi} \left(\tilde {x} _ {i + 1} \mid \mathbf {c}, x _ {: i}\right) = \operatorname {S o f t m a x} \left(\tilde {\mathbf {o}} _ {i + 1}\right), \quad \forall i \geq t - 1. \tag {11}
+$$
+
+Through an LM training objective, we arrive at the self-reconstruction loss term which trains CoCon to predict tokens of $\mathbf{x}^b$ by conditioning on $\mathbf{x}^b$ itself as the content input (c):
+
+$$
+\mathcal {L} _ {\text {s e l f}} = - \sum_ {i = t} ^ {l} \log p _ {\theta , \psi} \left(x _ {i} \mid \left(\mathbf {c} = \mathbf {x} ^ {b}\right), \left\{x _ {1}, \dots , x _ {i - 1} \right\}\right). \tag {12}
+$$
+
+To avoid trivializing the prediction of the next token $x_{i+1}$ during training, we apply a self-token c-mask at CoCon's attention layer such that $\mathbf{h}_i'$ does not attend to the token $x_{i+1}$ in c that it is trying to predict. This approach can be conducted in a self-supervised manner with any pretrained LM where the training samples $\mathbf{x}$ are generated text outputs stochastically sampled from the LM itself.
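One way to index such a self-token c-mask, sketched in numpy under the text's 1-indexed convention (illustrative only; in the model the mask is applied inside the attention layer):

```python
import numpy as np

def self_token_c_mask(t, l):
    """Boolean mask (True = blocked) over content positions for the
    self-reconstruction setup, where c = x^b = x_t..x_l. The query at
    position i (predicting x_{i+1}) must not attend to x_{i+1} itself,
    which sits at content index i + 1 - t."""
    l_c = l - t + 1
    mask = np.zeros((l - t + 1, l_c), dtype=bool)   # queries i = t-1..l-1
    for row, i in enumerate(range(t - 1, l)):
        target = i + 1 - t                           # content index of x_{i+1}
        if 0 <= target < l_c:
            mask[row, target] = True
    return mask

# x = x_1..x_8 split at t = 5: x^a = x_1..x_4, x^b = x_5..x_8 (l_c = 4)
m = self_token_c_mask(t=5, l=8)
```

In this aligned setting the blocked entries form the diagonal: each query's forbidden content token is exactly the one it is being trained to predict.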
+
+Null Content Loss To encourage CoCon's outputs to follow the prompt text $\mathbf{x}^a$ fluently without relying on $\mathbf{x}^b$ , we also train CoCon with a loss term similar to Eq. 12 but with the content input replaced by a null token $(\varnothing)$ :
+
+$$
+\mathcal {L} _ {\text {n u l l}} = - \sum_ {i = t} ^ {l} \log p _ {\theta , \psi} (x _ {i} | (\mathbf {c} = \varnothing), \{x _ {1}, \dots , x _ {i - 1} \}). \tag {13}
+$$
+
+
+Figure 2: Illustrative examples of (b) self reconstruction and (c) cycle reconstruction training.
+
+Cycle Reconstruction Loss The self-reconstruction loss relies on the CoCon content input (c) and the initial prompt text (p) originating from a single text sample. To encourage generalization to cases where c and p come from divergent text sources, we employ cycle-reconstruction training that utilizes two different training samples (e.g., x, x' in Figure 2a) and two CoCon forward steps (Figure 2c). We can express the output of a CoCon's auto-regressive generation as
+
+$$
+\mathbf {y} = f _ {\theta , \psi} (\mathbf {c}, \mathbf {p}), \tag {14}
+$$
+
+where $[\mathbf{p};\mathbf{y}]$ would be a fluent text sequence and $\mathbf{y}$ is conditioned on the content of $\mathbf{c}$ . The first step (Figure 2c(i)) computes the CoCon output with the content input (c) sourced from $\mathbf{x}$ and prompt text (p) sourced from $\mathbf{x}'$ :
+
+$$
+\mathbf {y} _ {\mathbf {x}, \mathbf {x} ^ {\prime}} = f _ {\theta , \psi} \left(\left(\mathbf {c} = \mathbf {x} ^ {b}\right), \left(\mathbf {p} = \mathbf {x} ^ {\prime a}\right)\right), \tag {15}
+$$
+
+where $\mathbf{x} = [\mathbf{x}^a;\mathbf{x}^b]$ and $\mathbf{x}' = [\mathbf{x}'^a;\mathbf{x}'^b]$ . Since CoCon utilizes a pretrained LM for generation, $\mathbf{y}_{\mathbf{x},\mathbf{x}'}$ would be a text sequence that fluently follows the prompt, $\mathbf{x}'^a$ , while seeking to incorporate $\mathbf{x}^b$ 's content. The second CoCon forward step (Figure 2c(ii)) takes $\mathbf{y}_{\mathbf{x},\mathbf{x}'}$ as content input and $\mathbf{x}^a$ as prompt text:
+
+$$
+\mathbf {y} _ {\text {c y c l e}} = f _ {\theta , \psi} \left(\left(\mathbf {c} = \mathbf {y} _ {\mathbf {x}, \mathbf {x} ^ {\prime}}\right), \left(\mathbf {p} = \mathbf {x} ^ {a}\right)\right). \tag {16}
+$$
+
+Since $\mathbf{x} = [\mathbf{x}^a;\mathbf{x}^b]$ , $\mathbf{x}^b$ is a valid continuation of the prompt $\mathbf{x}^a$ ; recall also that $\mathbf{y}_{\mathbf{x},\mathbf{x}'}$ was content-conditioned on $\mathbf{x}^b$ in the first CoCon step (Eq. 15). This posits $\mathbf{x}^b$ as a training label for $\mathbf{y}_{\mathrm{cycle}}$ , which gives us the cycle reconstruction loss term:
+
+$$
+\mathcal {L} _ {\text {c y c l e}} = - \sum_ {i = t} ^ {l} \log p _ {\theta , \psi} \left(\mathbf {y} _ {\text {c y c l e}} = \mathbf {x} ^ {b} \mid \left(\mathbf {c} = \mathbf {y} _ {\mathbf {x}, \mathbf {x} ^ {\prime}}\right), \left(\mathbf {p} = \mathbf {x} ^ {a}\right)\right). \tag {17}
+$$
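The two-step structure of cycle reconstruction can be sketched with a stand-in generator; `cycle_reconstruction` and the argument names are illustrative, and `f` abstracts CoCon's full auto-regressive generation (Eq. 14):

```python
def cycle_reconstruction(f, x_a, x_b, xp_a):
    """Two CoCon forward passes of cycle-reconstruction training (sketch).

    f(c=..., p=...) stands in for CoCon's auto-regressive generation;
    x = x_a + x_b and x' (whose prompt half is xp_a) are two distinct
    training samples. Returns the intermediate output y_{x,x'} (Eq. 15)
    and the cycle output that is trained to reconstruct x_b (Eq. 16/17).
    """
    y_mid = f(c=x_b, p=xp_a)      # step (i): content from x, prompt from x'
    y_cycle = f(c=y_mid, p=x_a)   # step (ii): prompt from x, reconstruct x_b
    return y_mid, y_cycle
```

With an idealized generator that perfectly reproduces its content input, the cycle output recovers $\mathbf{x}^b$ exactly, which is what the loss in Eq. 17 pushes the trained model toward.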
+
+Adversarial Loss Adversarial training objectives have been shown to help in generating realistic text outputs (Yang et al., 2018). Here, we also employ an adversarial training loss (Goodfellow et al., 2014) to encourage the output texts' representations $(\mathrm{LM}_{\alpha}(\mathbf{y}))$ to match those of the training samples $(\mathrm{LM}_{\alpha}(\mathbf{x}))$ by minimizing the loss:
+
+$$
+\mathcal {L} _ {\mathrm {a d v}} = \mathbb {E} _ {\mathbf {x}} \left[ \log f _ {\mathrm {d i s c}} \left(\mathrm {L M} _ {\alpha} (\mathbf {x})\right) \right] + \mathbb {E} _ {\mathbf {y}} \left[ \log \left(1 - f _ {\mathrm {d i s c}} \left(\mathrm {L M} _ {\alpha} (\mathbf {y})\right)\right) \right], \tag {18}
+$$
+
+where $f_{\mathrm{disc}}$ is a discriminator network that classifies whether the representations are of CoCon-generated texts. Through a continuous approximation of the discrete sampling of $\mathbf{y}$ , where token logits instead of one-hot vectors are fed as input into $\mathrm{LM}_{\alpha}$ , CoCon and $f_{\mathrm{disc}}$ can be trained with backpropagation in an end-to-end manner. Parameterizing $f_{\mathrm{disc}}$ with $\phi$ , the discriminator is trained to maximize $\mathcal{L}_{\mathrm{adv}}$ rather than minimize it:
+
+$$
+\phi^ {*} = \underset {\phi} {\arg \max } \mathcal {L} _ {\mathrm {a d v}} \tag {19}
+$$
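A minimal numeric sketch of Eq. 18 (the function name is illustrative, and the terms are averaged over the batch rather than summed, an assumption for readability): the discriminator ascends this quantity while CoCon descends it.

```python
import math

def adv_loss(disc, real_reps, fake_reps):
    """GAN-style loss of Eq. 18 over LM_alpha representations (sketch).

    disc maps a representation to the probability that it came from a
    real training sample x (versus a CoCon generation y). The
    discriminator maximizes this loss (Eq. 19); CoCon minimizes it.
    """
    real_term = sum(math.log(disc(r)) for r in real_reps) / len(real_reps)
    fake_term = sum(math.log(1.0 - disc(r)) for r in fake_reps) / len(fake_reps)
    return real_term + fake_term
```

At the classic GAN equilibrium, where the discriminator outputs 0.5 everywhere, this loss evaluates to $-2\log 2$.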
+
+Full Training The full learning objective trains CoCon to minimize the four loss terms through stochastic gradient descent:
+
+$$
+\theta^ {*} = \underset {\theta} {\arg \min } \left(\lambda_ {\text {s e l f}} \mathcal {L} _ {\text {s e l f}} + \lambda_ {\text {n u l l}} \mathcal {L} _ {\text {n u l l}} + \lambda_ {\text {c y c l e}} \mathcal {L} _ {\text {c y c l e}} + \lambda_ {\text {a d v}} \mathcal {L} _ {\text {a d v}}\right), \tag {20}
+$$
+
+where the $\lambda$ values control the relative weight of each loss term during training. To show that our approach is fully self-supervised and requires no manually labeled data, we use text samples generated by GPT-2 as training data for all four training losses.
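As a worked instance of Eq. 20 (a trivial helper; the default of $\lambda = 1$ for every term follows the appendix setup, and the function name is an assumption):

```python
def full_loss(l_self, l_null, l_cycle, l_adv, lambdas=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four CoCon loss terms (Eq. 20).

    The appendix sets every lambda to 1 to simplify hyperparameter tuning.
    """
    ls, ln, lc, la = lambdas
    return ls * l_self + ln * l_null + lc * l_cycle + la * l_adv
```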
+
+# 4 EXPERIMENTS
+
+We conduct a range of experiments on CoCon to study its control over generated texts and the quality of these texts. Table 1 shows CoCon samples with content, topic and sentiment control.
+
+Table 1: CoCon samples with multiple content inputs, given same prompt text (underlined), exhibiting control over generations. More samples are in the Appendix (Table 18 and 19).
+
+Content Input $(c^{1})$ : officials predict there could be 5,800 submerged
+
++ Target Topic: SCIENCE, Content Input $(\mathbf{c}^2)$ : Scientist
++ Target Sentiment: Positive, Content Input $(\mathbf{c}^3)$ : is perfect
+
+The movie makers speculate there's a perfect match. Expectations there could be up to 500 kilograms of clay could be thrown onto the surface of the ocean. The BBC reported that it could have taken up to a year and a half to add clay to the ocean floor, though experts believe it could be done within several days..
+
+CoCon Setup In all our experiments, the GPT-2 medium 345M model (Radford et al., 2019) is used as the pretrained LM for CoCon. The CoCon's $\mathrm{LM}_{\alpha}$ comprises the first 7 GPT-2 Transformer blocks while the remaining 17 blocks make up $\mathrm{LM}_{\beta}$ in our experiments. The CoCon block's architecture mirrors a single GPT-2 Transformer block with a dimension size of 1024. The training samples $(\mathbf{x})$ are 30-BPE long segments sampled from GPT-2 output texts$^2$. Subsequently, the $\mathbf{x}^a$ and $\mathbf{x}^b$ segments are split from $\mathbf{x}$ at a breakpoint between the 8th and 12th BPE position, uniformly sampled during training. More details about the setup are deferred to § A of the Appendix.
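The segment split described above can be sketched as follows (the function name and the `seed` parameter are illustrative; note Python's `random.randint` is inclusive on both ends, matching the 8th-to-12th-position range):

```python
import random

def split_training_sample(x, low=8, high=12, seed=None):
    """Split a 30-BPE training segment x into (x_a, x_b) — a sketch.

    The breakpoint is sampled uniformly between the low-th and high-th
    BPE position; x_a sources the prompt text and x_b the content input
    during self-reconstruction training.
    """
    rng = random.Random(seed)
    k = rng.randint(low, high)   # inclusive on both ends
    return x[:k], x[k:]
```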
+
+# 4.1 CONTENT SIMILARITY
+
+We evaluate CoCon's content control over generated text with automatic metrics such as BLEU (Papineni et al., 2002), NIST (Doddington, 2002) and METEOR (Lavie & Agarwal, 2007). These standard machine translation metrics reveal how similar the CoCon generated text, $\mathbf{y} = f_{\theta ,\psi}(\mathbf{c},\mathbf{p})$ , is to the content input (c). Similar to Dathathri et al. (2019), as an automated measure of fluency, we compute the perplexity of generated text under a different pretrained language model, GPT (Radford et al., 2018). We also report Dist-1, -2 and -3 scores as another metric of text quality that measures the diversity of 1-, 2- and 3-grams in the generations. Apart from a plain GPT-2 baseline without content conditioning, we also compare with three CoCon variants that omit either $\mathcal{L}_{\mathrm{cycle}}$ , $\mathcal{L}_{\mathrm{null}}$ or $\mathcal{L}_{\mathrm{adv}}$ for an ablation study. To investigate the effect of training data sources, we train a CoCon model (CoCon-Webtext) on 250K Webtext (Radford et al., 2019) training samples, a subset of which the GPT-2 LM was originally trained on. We also compute the perplexity measure on directly concatenated prompt and content input texts (Prompt-Content), as well as on Webtext test samples, as a sanity check. More setup details are in § A.1 of the Appendix.
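Of these metrics, the Dist-n diversity score is simple enough to sketch directly (BLEU, NIST and METEOR come from standard MT toolkits); a minimal version, assuming the common definition of distinct n-grams over total n-grams:

```python
def dist_n(tokens, n):
    """Dist-n score: the ratio of distinct n-grams to total n-grams
    in a token sequence — higher means more diverse text."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)
```

For example, the sequence `a b a b` has Dist-1 of 0.5 (2 distinct unigrams out of 4) and Dist-2 of 2/3 (bigrams `ab`, `ba`, `ab`).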
+
+Results Based on the content similarity results (Table 2), all the CoCon variants incorporate the content of $\mathbf{c}$ in the generated text better than an unconditioned plain GPT-2 LM. While the ablated CoCon variants appear better at incorporating $\mathbf{c}$ 's content, this comes at a steep cost in text quality when $\mathcal{L}_{\mathrm{cycle}}$ or $\mathcal{L}_{\mathrm{null}}$ is omitted. If $\mathcal{L}_{\mathrm{cycle}}$ were removed, CoCon would train only on prompt text $\mathbf{p}$ and content input $\mathbf{c}$ segments sampled from the same parent $\mathbf{x}$ , which explains why the quality of its outputs drops at test time when $\mathbf{p}$ and $\mathbf{c}$ come from different sources. We can see this degenerate case in the generated samples (Table 9), where $\mathcal{L}_{\mathrm{cycle}}$ is vital to smoothly integrate content inputs that are far from the prompt text. While $\mathcal{L}_{\mathrm{adv}}$ slightly improves text diversity, we observe that it marginally worsens CoCon's perplexity, which we speculate is due to it being a non-LM type loss term, causing a trade-off in performance on the LM-aligned perplexity metric. In our human evaluation (Table 8 of Appendix), we observe that humans also perceive CoCon without $\mathcal{L}_{\mathrm{adv}}$ as more fluent, indicating that the addition of $\mathcal{L}_{\mathrm{adv}}$ may have made it more challenging for the CoCon model to converge in its training. Training CoCon with Webtext samples improves content similarity at the cost of higher perplexity and lower fluency.
+
+Table 2: Content similarity and quality of generated content-conditioned samples. BLEU, NIST and METEOR values are reported in scale of $(\times 10^{-2})$
+
+| Model | BLEU-4(↑ better) | NIST-4(↑ better) | METEOR(↑ better) | Perplexity(↓ better) | Dist-1(↑ better) | Dist-2(↑ better) | Dist-3(↑ better) |
| GPT-2 | 0.22 | 7.09 | 6.14 | 105.7 | 0.057 | 0.49 | 0.82 |
| CoCon | 2.76 | 22.9 | 21.5 | 70.8 | 0.048 | 0.39 | 0.70 |
| CoCon w/o Lcycle | 3.30 | 25.1 | 23.9 | 150.8 | 0.050 | 0.42 | 0.74 |
| CoCon w/o Lnull | 4.44 | 28.3 | 26.8 | 73.2 | 0.046 | 0.37 | 0.68 |
| CoCon w/o Ladv | 4.47 | 28.2 | 27.2 | 68.7 | 0.047 | 0.38 | 0.69 |
| CoCon-Webtext | 2.90 | 24.6 | 23.0 | 112.5 | 0.054 | 0.44 | 0.74 |
| Prompt-Content | - | - | - | 442.2 | - | - | - |
| Webtext | - | - | - | 185.8 | - | - | - |
+
+# 4.2 TOPIC RELEVANCE
+
+Setup We evaluate CoCon's ability to control the topic of the generated text by using topic words as single-token content inputs, and compare with two strong LM-based controlled generation baselines, PPLM (Dathathri et al., 2019) and CTRL (Keskar et al., 2019), using their Huggingface versions (Wolf et al., 2019). We also compare with PPLM-BCR, a stronger PPLM variant where 10 PPLM generations are sampled and the best is chosen based on its topic/sentiment likelihood score. To investigate whether CoCon can simultaneously condition on a target topic and the content of a text passage, we also evaluate a CoCon variant (indicated as CoCon+ here) that takes the GPT-2 output text as a second content input on top of the topic content input. We also conducted human evaluations of fluency and A/B testing on attribute relevance, similar to Dathathri et al. (2019). More setup details are presented in the Appendix § A.2.
+
+Results All three LM-based controlled text generators output texts that are more topic-relevant than the unconditioned GPT-2 model (Table 3). CoCon's generated texts appear more relevant to the target topic than PPLM's and CTRL's. Rather than CoCon's more localized content control, PPLM and CTRL control text generation through the higher-level means of bags of words (BOWs) and control codes. This may result in output texts with a larger variance in topic-relevance, explaining their lower ratio of topic-relevant generations compared to CoCon. In our experiments, CoCon generated texts' higher topic-relevance does not come at the cost of text quality, as shown by its competitive perplexity and Dist scores. Table 10 and 11 (Appendix) show samples of these topic-conditioned generations. CoCon+'s topic accuracy is lower than CoCon's but still higher than GPT-2's, indicating that adding another content input (the GPT-2 output text) can reduce the conditioning strength of the target topic content input. The human evaluation experiments (Table 5) also show that CoCon has more favorable control over human-perceived topic-relevance, with comparable fluency scores.
+
+Table 3: Evaluation of topic-controlled generations. Topic accuracy reports the ratio of samples that were classified as their target topic.
+
+| Model | Topic % (↑ better) | Perplexity (↓ better) | Dist-1 (↑ better) | Dist-2 (↑ better) | Dist-3 (↑ better) |
| GPT-2 | 22.5 | 84.7 | 0.23 | 0.74 | 0.91 |
| PPLM | 42.5 | 32.4 | 0.15 | 0.54 | 0.78 |
| PPLM-BCR | 61.3 | 37.5 | 0.23 | 0.64 | 0.86 |
| CTRL | 86.7 | 60.5 | 0.14 | 0.56 | 0.77 |
| CoCon | 90.4 | 52.4 | 0.17 | 0.60 | 0.86 |
| CoCon+ | 46.2 | 83.6 | 0.21 | 0.67 | 0.87 |
+
+# 4.3 SENTIMENT CONTROL
+
+Setup We also evaluate CoCon's sentiment control against PPLM and CTRL, in a setup similar to $\S 4.2$ . Sentiment attribute markers (Li et al., 2018) 'is perfect' and 'is horrible' are used as content inputs to generate CoCon outputs for the POSITIVE and NEGATIVE sentiments respectively. Sentiment attribute markers are n-grams that appear with high frequency in text samples annotated with a particular attribute, such as positive/negative sentiment. Similar to Dathathri et al. (2019), the sentiment classifier is trained on the IMDB movie review dataset (Maas et al., 2011).
+
+Results Similar to the findings in § 4.2, the three conditioned LMs generate texts that better align with the target sentiments than the GPT-2 baseline. We also observe that more CoCon samples are aligned with the target sentiments than PPLM's and CTRL's, while showing competitive quality in generated texts. In the Appendix, Table 12 shows samples of these sentiment-conditioned generations, while Table 13 shows samples which use other sentiment attribute markers (Li et al., 2018) as the content input. Results from human evaluation (Table 5) also show that CoCon generations are more aligned to the target sentiment, though at a cost of fluency. Similar to § 4.2, we also observe a tradeoff in CoCon+'s sentiment alignment when it is presented with another content input (GPT-2 output text).
+
+Table 4: Evaluation of sentiment-controlled generations. Sentiment accuracy reports the ratio of samples that were classified as their target sentiment.
+
+| Model | Sentiment % (↑ better) | Perplexity (↓ better) | Dist-1 (↑ better) | Dist-2 (↑ better) | Dist-3 (↑ better) |
| GPT-2 | 50.0 | 101.2 | 0.38 | 0.82 | 0.92 |
| PPLM | 68.9 | 35.5 | 0.24 | 0.63 | 0.82 |
| PPLM-BCR | 96.7 | 34.1 | 0.30 | 0.65 | 0.79 |
| CTRL | 81.1 | 44.1 | 0.21 | 0.62 | 0.80 |
| CoCon | 98.9 | 50.3 | 0.20 | 0.61 | 0.80 |
| CoCon+ | 85.6 | 111.0 | 0.32 | 0.73 | 0.87 |
+
+Table 5: Human evaluation of topic/sentiment-controlled generations on relevance with target topic or sentiment and their fluency scores (↑ better for all metrics).
+
| Model | Topic Acc. % | Topic Fluency | Sentiment Acc. % | Sentiment Fluency |
| GPT-2 | 22.0 | 4.01 | 36.7 | 3.84 |
| CoCon | 85.0 | 3.86 | 76.7 | 3.30 |
| PPLM-BCR | 46.0 | 3.98 | 50.0 | 3.48 |
| CoCon | 75.0 | 3.86 | 66.7 | 3.30 |
| CTRL | 55.0 | 3.80 | 43.3 | 3.83 |
| CoCon | 65.0 | 3.86 | 86.7 | 3.30 |
+
+Table 6: Human evaluation of CoCon generations with GPT-2 text as content input (CoCon+) versus other text generators for content similarity with GPT-2 text, relevance with target topic/sentiment and their fluency scores ( $\uparrow$ better for all metrics).
+
| Model | Topic Sim. % | Topic Acc. % | Topic Fluency | Sentiment Sim. % | Sentiment Acc. % | Sentiment Fluency |
| PPLM-BCR | 42.0 | 51.0 | 3.98 | 43.3 | 56.7 | 3.48 |
| CoCon+ | 74.0 | 45.0 | 3.74 | 66.7 | 56.7 | 3.56 |
| CTRL | 36.0 | 63.0 | 3.80 | 26.7 | 73.3 | 3.83 |
| CoCon+ | 59.0 | 47.0 | 3.74 | 56.7 | 56.7 | 3.56 |
| CoCon | 41.0 | 83.0 | 3.86 | 43.3 | 70.0 | 3.30 |
| CoCon+ | 62.0 | 32.0 | 3.74 | 50.0 | 63.3 | 3.56 |
| GPT-2 | - | 31.0 | 4.01 | - | 43.3 | 3.84 |
| CoCon+ | - | 49.0 | 3.74 | - | 76.7 | 3.56 |
+
+# 4.4 VERSATILITY OF COCON
+
+Multiple Content Inputs Through multiple content inputs, we observe that CoCon can control both high-level attributes (topic and sentiment) and more localized content of the text generation at the same time (Table 18 and 19 in Appendix), highlighting its versatility. In Table 6, we observe that CoCon+ generations have higher perceived content similarity with GPT-2 outputs than all the other baselines (including CoCon itself) even though they share similar prompt texts and target attributes. This indicates that through the content input, we can also condition generations on a text passage on top of high-level target topic or sentiment attributes, offering another degree of control over previous baselines. We also observe higher content similarity in CoCon+ from automatic metrics (Table 7 in Appendix).
+
+Strength of Content Conditioning As discussed in § 3, CoCon offers a means to control the extent of content-conditioning through $\tau_{\text {content }}$ . Table 14, 15 and 16 (Appendix) shows texts generated with varying $\tau_{\text {content }}$ values. We can see that as $\tau_{\text {content }}$ becomes more negative, it becomes similar to an unconditioned LM generation. Conversely, when $\tau_{\text {content }}$ becomes more positive, the generated text aligns more with the content input up to a limit where the text appears incomprehensible.
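As a rough illustration of how such a conditioning strength can act (the exact placement of $\tau_{\text{content}}$ inside the CoCon block follows § 3, which is outside this excerpt; this softmax-level version and its function name are assumptions for illustration), a bias added to the attention logits of content positions shifts attention mass toward or away from the content input:

```python
import math

def biased_attention(logits, content_positions, tau):
    """Softmax attention with a bias tau added to content-token logits.

    A large positive tau pushes attention toward the content input
    (stronger conditioning); a large negative tau suppresses it,
    approaching unconditioned LM behavior.
    """
    shifted = [z + (tau if i in content_positions else 0.0)
               for i, z in enumerate(logits)]
    m = max(shifted)                       # subtract max for stability
    exps = [math.exp(z - m) for z in shifted]
    s = sum(exps)
    return [e / s for e in exps]
```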
+
+Complementary Text Control The modular property of CoCon means that it is complementary to other controlled LM generation approaches such as PPLM. Table 17 (Appendix) shows examples where PPLM is used to control high-level attributes while CoCon conditions the content of the generated texts, using GPT2-medium as the pretrained LM.
+
+# 5 CONCLUSION
+
+We proposed Content-Conditioner (CoCon) as an approach for more fine-grained control over neural text generation. CoCon can be trained effectively in a self-supervised manner and is compatible with pretrained language models (LM) that already produce high-quality texts. Through our experiments, CoCon was shown to smoothly incorporate content inputs into generated texts and control high-level text attributes. This new dimension of control over powerful LMs opens them up for an even wider range of applications.
+
+# REFERENCES
+
+Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
+Ankur Bapna, Naveen Arivazhagan, and Orhan Firat. Simple, scalable adaptation for neural machine translation. arXiv preprint arXiv:1909.08478, 2019.
+Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137-1155, 2003.
+Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
+Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang. Style transformer: Unpaired text style transfer without disentangled latent representation. arXiv preprint arXiv:1905.05621, 2019a.
+Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019b.
+Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. Plug and play language models: a simple approach to controlled text generation. arXiv preprint arXiv:1912.02164, 2019.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
+George Doddington. Automatic evaluation of machine translation quality using n-gram cooccurrence statistics. In Proceedings of the second international conference on Human Language Technology Research, pp. 138-145, 2002.
+Hady Elsahar, Christophe Gravier, and Frederique Laforest. Zero-shot question generation from knowledge graphs for unseen predicates and entity types. arXiv preprint arXiv:1802.06842, 2018.
+Jessica Ficler and Yoav Goldberg. Controlling linguistic style aspects in neural language generation. arXiv preprint arXiv:1707.02633, 2017.
+Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. Hafez: an interactive poetry generation system. In Proceedings of ACL 2017, System Demonstrations, pp. 43-48, 2017.
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014.
+Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. Learning to write with cooperative discriminators. arXiv preprint arXiv:1805.06087, 2018.
+Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019.
+Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1587-1596. JMLR.org, 2017.
+Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858, 2019.
+Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. Controlling output length in neural encoder-decoders. arXiv preprint arXiv:1609.09552, 2016.
+
+Alon Lavie and Abhaya Agarwal. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceedings of the second workshop on statistical machine translation, pp. 228-231, 2007.
+Juncen Li, Robin Jia, He He, and Percy Liang. Delete, retrieve, generate: A simple approach to sentiment and style transfer. arXiv preprint arXiv:1804.06437, 2018.
+Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume 1, pp. 142-150. Association for Computational Linguistics, 2011.
+Christopher D Manning, Christopher D Manning, and Hinrich Schütze. Foundations of statistical natural language processing. MIT press, 1999.
+Rishabh Misra. News category dataset, 06 2018.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311-318. Association for Computational Linguistics, 2002.
+Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf, 2018.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.
+Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. What makes a good conversation? how controllable attributes affect human judgments. arXiv preprint arXiv:1902.08654, 2019.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
+Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. Style transfer from non-parallel text by cross-alignment. In Advances in neural information processing systems, pp. 6830-6841, 2017.
+Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, and Che Zheng. Synthesizer: Rethinking self-attention in transformer models. arXiv preprint arXiv:2005.00743, 2020.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771, 2019.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pp. 5754-5764, 2019.
+Zichao Yang, Zhiting Hu, Chris Dyer, Eric P Xing, and Taylor Berg-Kirkpatrick. Unsupervised text style transfer using language models as discriminators. In Advances in Neural Information Processing Systems, pp. 7287-7298, 2018.
+Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
+Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
+
+# A DETAILED COCON SETUP
+
+In all our experiments, the GPT-2 medium 345M model (Radford et al., 2019) is used as the pretrained LM for CoCon. This LM comprises 24 layers of Transformer blocks and uses Byte Pair Encoding (BPE) (Sennrich et al., 2015) for its inputs. The CoCon's $\mathrm{LM}_{\alpha}$ comprises the first 7 GPT-2 Transformer blocks while the remaining 17 blocks make up $\mathrm{LM}_{\beta}$ in our experiments. The CoCon block's architecture mirrors a single GPT-2 Transformer block with a dimension size of 1024. We train CoCon for 2 epochs on publicly available GPT-2 medium output texts (250K train samples) that are generated with top-$k$ sampling ($k = 40$)$^{3}$. The training samples $(\mathbf{x})$ are 30-BPE long segments sampled from these GPT-2 output texts. Subsequently, the $\mathbf{x}^a$ and $\mathbf{x}^b$ segments are split from $\mathbf{x}$ at a breakpoint between the 8th and 12th BPE position, uniformly sampled during training.
+
+The discriminator $(f_{\mathrm{disc}})$ consists of a 1-D convolutional layer, followed by a linear layer with 2 class outputs and is trained once for every 5 CoCon training steps. To simplify hyperparameter tuning, we set $\lambda = 1$ for all four CoCon loss terms and $\tau_{\mathrm{content}} = 0$ for our results. Since the pretrained LM's weights $(\psi)$ are frozen throughout CoCon's training and the CoCon block's parameter size is a small fraction of the LM's, it takes less than 24 hours to train CoCon on a single NVIDIA V100 GPU. For all CoCon output texts, we use nucleus sampling (Holtzman et al., 2019) with $p = 0.9$ to draw the next token from the vocabulary's softmax distribution.
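The nucleus (top-p) sampling step used for decoding can be sketched as follows (a minimal list-based version; the function name and the tie-breaking behavior are illustrative assumptions, and real implementations operate on sorted tensor logits):

```python
import random

def nucleus_sample(probs, p=0.9, rng=random):
    """Nucleus (top-p) sampling (Holtzman et al., 2019) — a sketch.

    Keeps the smallest set of highest-probability tokens whose
    cumulative probability reaches p, renormalizes over that set,
    and samples one token index from it.
    """
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = [], 0.0
    for i in order:
        kept.append(i)
        total += probs[i]
        if total >= p:          # nucleus covers mass p: stop growing it
            break
    r = rng.random() * total    # sample within the renormalized nucleus
    cum = 0.0
    for i in kept:
        cum += probs[i]
        if r <= cum:
            return i
    return kept[-1]             # guard against floating-point drift
```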
+
+# A.1 CONTENT SIMILARITY
+
+The content input (c) and prompt text (p) are randomly sourced from different GPT-2 output samples that are withheld from CoCon training. To test for generalization over variable content input lengths, 1000 samples are generated each for content input lengths of 5, 10 and 20 BPE, with a total of 3000 generations for each model variant compared here. Each generated text segment is 100 BPE long. Apart from a GPT-2 plain baseline without content conditioning, we also compare with three CoCon variants that omit either the $\mathcal{L}_{\mathrm{cycle}}$ , $\mathcal{L}_{\mathrm{null}}$ or $\mathcal{L}_{\mathrm{adv}}$ for an ablation study. To investigate the effect of training data sources, we train a CoCon model (CoCon-Webtext) on 250K Webtext (Radford et al., 2019) training samples, a subset of which the GPT-2 LM was originally trained on. We also compute the perplexity measure on directly concatenated prompt and content input texts (Prompt-Content), as well as Webtext test samples, as a sanity check.
+
+# A.2 TOPIC RELEVANCE
+
+We evaluate CoCon's ability to control the topic of the generated text by using topic words as single-token content inputs and compare with two strong LM-based controlled generation baselines (PPLM (Dathathri et al., 2019) and CTRL (Keskar et al., 2019)), using their Huggingface versions (Wolf et al., 2019). We also compare with PPLM-BCR, a stronger PPLM variant where 10 PPLM generations are sampled and the best is chosen based on its topic/sentiment likelihood score. Here, content inputs 'computers', 'politician', 'religion' and 'scientist' are used to generate CoCon outputs for the COMPUTERS, POLITICS, RELIGION and SCIENCE topics respectively. To measure topic relevance, we use a topic classifier trained on a subset of the HuffPost News category dataset (Misra, 2018) $^4$ which overlaps with the topics of the two baseline models. The topic classifier uses the GPT-2 117M LM as a feature extractor, followed by a global average pooling operation and a final linear layer with the 4 topic output classes. The settings for sample generation from the PPLM and CTRL baselines, as well as the prompt texts used by all models, are similar to the ones reported in Dathathri et al. (2019). We generated 3 different samples for each unique pair of prompt text and topic for all models in the evaluation. To investigate whether CoCon can simultaneously condition on a target topic and the content of a text passage, we also evaluate a CoCon variant (indicated as CoCon+ here) that takes the GPT-2 output text as a second content input on top of the topic content input. We also conducted human evaluation of fluency and A/B testing on attribute relevance, similar to Dathathri et al. (2019).
+
+# A.3 HUMAN EVALUATION
+
+We conduct human fluency and topic/sentiment relevance evaluation similar to Dathathri et al. (2019). For fluency scores, human evaluators are asked to score text generations on a scale of 1-5, with 1 being "not fluent at all" and 5 being "very fluent". In the topic/sentiment A/B test, we ask the human evaluators to rank a pair of text generations based on relevance to the target topic/sentiment, while also including the options of "neither" and "both equally" to account for equally good or bad generations. Each evaluation sample is judged by three unique evaluators. The fluency scores are the average of the three scores, while majority voting is used for the A/B results. The content similarity A/B evaluation is similar to the topic/sentiment relevance one but asks the evaluators to rank the generations according to content similarity with respect to the reference text.
+
+Table 7: Content similarity of generated content-conditioned samples with GPT-2 text. BLEU, NIST and METEOR values are reported in scale of $(\times 10^{-2})$ , $\uparrow$ better for all metrics.
+
| Model | Topic BLEU-4 | Topic NIST-4 | Topic METEOR | Sentiment BLEU-4 | Sentiment NIST-4 | Sentiment METEOR |
| PPLM-BCR | 0.753 | 85.8 | 11.3 | 0.839 | 60.7 | 8.52 |
| CTRL | 0.579 | 77.7 | 10.7 | 0.710 | 61.9 | 9.50 |
| CoCon | 0.642 | 81.5 | 10.6 | 0.713 | 53.1 | 8.00 |
| CoCon+ | 6.16 | 146 | 20.5 | 5.44 | 123 | 19.9 |
+
+Table 8: Human perceived fluency scores of CoCon variants' topic- and sentiment-controlled generations.
+
+| Model | Topic | Sentiment | All |
| CoCon | 3.86 | 3.30 | 3.73 |
| CoCon w/o Ladv | 3.88 | 3.49 | 3.79 |
| CoCon-Webtext | 3.74 | 3.47 | 3.68 |
+
+Table 9: Generated content-conditioned text samples from CoCon and its ablated variants, starting from the same prompt text (underlined). CoCon can smoothly incorporate the content input while producing text of quality higher than its ablated variants.
+
+| Content Input (c): the first time at its |
| [GPT-2] .on your ruler, so make sure you understand your scale. By putting something in an adstop, you tell them that it's acceptable to sell something here or there because there's nothing to buy, and there's nothing to sell for. Reposting ads here means that you are minimizing the value of the site. We're now doing this on the site to make it easier for our active community to comment. This will help to encourage readers to subscribe to your project. We believe that active users should see every single article here, so.. |
| [CoCon] .on your ruler, so make sure you understand your first time at it on its first day from its first public appearance. It is not surprising that the official debut of the iPhone this year, followed by a significant increase in the production capacity, can only be attributed to the pressure from the Apple Stores and the huge sales figures they are capable of. In fact, the sale of the iPhone 4S last month at a mere 2.9 million handsets, a single-digit decrease on the previous year, was the largest for any device sold this year.. |
| [w/o Lcycle] .on your ruler, so make sure you understand your first time at it on its first day at the time of its impact on the space shuttle. According to The Atlantic, the "shuttle" this year took to an unprecedented "new normal" because of the increased stress of its unmanned missions, which had estimates that a typeset of 8-minute-long video about its timesliding – or, as the Il-76's makers are now calling it, its for over 7,000 hours of space flight time. It actually took.. |
| [w/o Lnull] .on your ruler, so make sure you understand your first time at it on its very first start at a truck stop. After the first ride, you will experience the full customer experience. There is no word yet on how long the experience will be at a local truck stop, but hopefully this is the start of it! Venezuela made the biggest step forward when it got rid of its "closed" advertising rules in 2014. You are welcome Xiahodhos Anokhi is one of many minorities who.. |
| [w/o Ladv] .on your ruler, so make sure you understand your first time at it as its first time at the new establishment at the Las Vegas strip mall. Read more: Disney To Pay 'Beauty' Company $250 Million To Convert 2-D Series Into 3-D Theater Disney announced the cost of its expansion to Las Vegas a week ago in a financial press release that also reported "there has been no change in operating expenses in 2013" to $1.54 billion. The company said it will spend $900 million on .. |
| Content Input (c): time, we will be |
| [GPT-2] .weak point is the shaft, which will bend if exposed to high heat and chill. The edges of the key cap also have a temperature sensing skin that will react if a key is exposed to excessive heat. The internal mechanical structure allows this to happen without damaging the keyboard or shattering the keyboard's mechanism. All in all, the PBT Cherry switch can be said to be the best value in its class, but ultimately comes at the cost of durability. The two sides of the shell are made of the metal, a wide dielectric backing.. |
| [CoCon] .weak point is the shaft, which will bend if moving, we will be there. Once the shaft breaks, we will have to restore our own down. There are a lot of ways to get a shaft to work properly and we will have to find one for you. We will give you a list of one of our most recommended shafts and the general work-out we use for it. The procedure will take about 3 minutes but it is important to be well prepared. We are currently working.. |
| [w/o Lcycle] .weak point is the shaft, which will bend if time, we will be will be. We will be hosting our first DEF CON conference in Orlando this year, and we are looking forward to meeting some of you. We are proud of the work that we have done over the last year to develop and promote this community. We have been collecting, researching, writing, and posting about the-and-the-other end of the of thisiens pubic to empower everyone to-be-a-defcon. Since we(there's.. |
| [w/o Lnull] .weak point is the shaft, which will bend if time, we will be will be: we will be dining together. Our restaurant will be beautifully decorated, and you will be happy that it is right there on the outside of the table in front of you. Yes. The security is my job. So we can all say that. Thank you so much for making that happen. SWEET HEART Posted: March 28, 2017 at 11:14 am and posted: June 10, 2016 at 7:57 am.. |
| [w/o Ladv] .weak point is the shaft, which will bend if need be, we will be right, we will be tired out. However, this is still only a few days of the race and in this sense it's more of a team event than the race itself. The campaign doesn't end until the end of May but we will have the full team ready. So we know we are still in a very good condition to enjoy the moment. We are still confident that we can ride well.. |
+
+Table 10: Generated topic-conditioned text samples from CoCon and baselines on the topics POLITICS and COMPUTERS, starting from the same prompt text (underlined). Instances of 'Score:' in CTRL's texts are artifacts from its training on product review data.
+
+| Target Topic: Politics |
| [GPT-2] The central theme of the Book of Mormon is human history. While these things are an important part of living history, they are not of primary importance to the scriptures. (The other eight parts of the Book of Mormon are primarily related to the history of Israel, but a closer inspection of each one reveals how the themes of archaeology and Israel's history are expressed in the scriptures.) In order to provide an understanding of.. |
| [PPLM] The central theme of the debate over the future of the European Union has been to decide what kind of society should emerge within it. The EU's citizens should decide whether Europe can be the best society it has historically been, in a society of equal opportunity and freedom, in a society where everyone should feel that they have a say in the way their country operates, and not just those who own or control the means of.. |
| [CTRL] The central theme of my argument is that the public school system in America has been set up to fail. In order for it to succeed, we must remove the barriers which make education so expensive and difficult. Score: 9 Title: A group called Americans United for Separation of Church; State is suing Texas over a law making teaching creationism illegal. Score: 18 Title: Jon Stewart on Fox News: Obama sounds like.. |
| [CoCon] The central theme of political debate. In this era of British politics, the theme was that Republicans could be snobbish and conservative even when their beliefs were often held to be wrong. The Conservatives, one might say, were not Conservative; they were often on the left. In May 2012, for example, Bill de Blasio used to say, "I'm not a Conservative. I'm a Liberal.. |
| Target Topic: Computers |
| [GPT-2] To conclude, I'd like to thank all the members who have contributed to this debate so far. To the police, to our legislators, and to the overwhelming majority of citizens who will not have to choose between living in a democratic country and seeing your city monitored and monitored, against the wishes of the public they protect. It makes me very proud. After my First Amendment line did the unthinkable and again this .. |
| [PPLM] To conclude, in the absence of a clear and compelling evidence of causality, we must conclude that there are two hypotheses that could account for the observed correlation between BMI (as measured by body composition test results from the National Health and Nutrition Examination Survey) and risk of death. One hypothesis is that BMI is associated with lower death rates. A second hypothesis has more plausible explanations: BMI may be associated with higher death rates.. |
| [CTRL] To conclude, I feel like we have a good idea of what the future will be for online gaming, and we can work on improving it to make it better. Score: 7 Title: Is there any reason why this is not possible today? A new chip with faster RAM could do this. Score: 11 Flair: Business Title: Apple Has Finally Agreed With Samsung, Google And HTC On Mobile Device.. |
| [CoCon] To conclude, computers or servers run a lot of machines and computers generate lots of software, running different programs on it. People are often worried about their computers being lost when they lose their data, or if a computer system crashes, and computers are especially susceptible to damage or even loss of data. Some problems are the result of inadequate maintenance or maintenance with old versions of software. For example, some programs might ... |
+
+Table 11: Generated topic-conditioned text samples from CoCon and baselines on the topics RELIGION and SCIENCE, starting from the same prompt text (underlined).
+
+| Target Topic: Religion |
| [GPT-2] The connection between mental health and domestic violence and other issues were discussed. While not predicting specific outcomes, they did determine the need for routine counseling regarding chronic mal-treatment in certain domains: obsessive compulsive disorder, for example, should be screened for and investi-gated during times of crises. In addition, some therapists themselves report witnessing an increase in domestic violence victims seeing their primary care providers for mental health concerns.. |
| [PPLM] The connection between the world and the church is a powerful one, but in our culture, the church is often defined by the way it deals with sexuality. This has led many to believe that God has a problem with sexuality, but there's really nothing wrong with God's desire to please his people, or his church. It's also true that God wants us to be good Christians, and to follow.. |
| [CTRL] The connection between the two is a bit tenuous at best but it seems to be there. Score: 11 Title: My friend just got baptized in my church today. It was one of those moments where you feel so grateful for your family. I love you. Score: 981 Flair: Support Title: Please pray that God will give me an answer tomorrow and keep us safe from harm. Thank you.. |
| [CoCon] The connection between Islam and spirituality? People living in the United States say they believe in the Qur'an. 1 In January 2015, about 300 Qur'an-related books were published in the U.S. based on translations and studies. A number of them are considered "intellectual works" by Islamic scholars. The most popular ones are: 2 We bring as many perspectives.. |
| Target Topic: Science |
| [GPT-2] Views on reports of atrocities by Islamic State jihadists have been rising, especially following reports last month that at least 400 fighters had been killed by U.S.-backed Syrian Democratic Forces (SDF). The increased public outrage has triggered a slew of Pentagon and U.S. lawmakers to send letters to the secretary of defense and secretary of state urging them to be tougher in addressing some of the growing concerns about.. |
| [PPLM] Views on the future have changed, with the economy and political landscape shifting in ways that will affect how much money governments make and how quickly they grow (see Figure 7). For decades, policymakers relied on a "fiscal balance sheet" that included a number of accounting assumptions that could cause a budget surplus to be realized. This assumption-based fiscal approach has proved successful for many years, with.. |
| [CTRL] Views on the matter are a bit contradictory. Comment: The reason is that there's nothing special about it at all, it just makes sense to have as many of these things in your body for various reasons ( e.g. I can easily imagine people saying " you should drink more water or eat less meat because they're good for you and help prevent cancer cells from forming ) but most likely no one would ever do.. |
| [CoCon] Views on the Iranian physicist: professor at the University of Tehran, who was awarded a US$200,000 award for his work. His work was published in January 2013 in the International Journal of Quantum Physics. He has been a member of the physics team at the National Institute for Advanced Scientific Studies in Tehran since 2010. His findings are backed by the world's top physicists and technology.. |
+
+Table 12: Generated sentiment-conditioned text samples from CoCon and baselines, starting from the same prompt text (underlined).
+
+| Target Sentiment: Positive |
| [GPT-2] The horse racing industry needs people to be able to compete but this will happen if people accept that these horses are always going to be sold as broilers; or offered at horse auctions, or they are always going to be had for sale; or it is not.. |
| [PPLM] The horse-drawn car has been the subject of much media attention, but a new research article from the University of Bristol (Bristol) and the University of Oxford (Oxford) has shown that the use of the technology could also be very effective in.. |
| [CTRL] The horse is cute and well made. It was a gift for my granddaughter who loves horses. I think she will love it. Rating: 5.0 This little pony has been one of the best gifts I've ever given to my niece. She absolutely adores it (and so.. |
| [CoCon] The horse of perfection is perfect! It's perfect! They are both very pretty, and very affordable. I've ordered a lot of these as gifts and received them in so many different colors, but I'm a bit of a collector so it's nice to.. |
| Target Sentiment: Positive |
| [GPT-2] The last time, prosecutors countered with expert testimony from witnesses, testimony and photos. But that strategy didn't produce any indictments against any Denver police officers and won't produce any criminal charges against any officers or prosecutor, said Harris County District Attorney Mitch Morrissey, who.. |
| [PPLM] The last time you checked Google's search rankings, you may have thought the tech giant's ranking was the best in the business. But it turns out it wasn't quite that simple. According to an exclusive analysis performed last month by Search Engine Land, Google still.. |
| [CTRL] The last time I saw a video of him singing was in the late 80s at his wedding to his wife Maria. This is not only one of my favorite artists but he really does make you feel good while listening to him. He sings from the heart and it shows.. |
| [CoCon] The last time someone is perfect is perfect. Whether you want to get your designs out into the world, you are in luck. Here is a breakdown of the best pieces we have found. 1. Aircraft Outfit Pattern. This patterns.. |
| Target Sentiment: Negative |
| [GPT-2] The potato is no longer looking so earthy as it once did. In fact, its rarity is part of a trend toward using less food to grow vegetables, including in North America's rapidly growing urban areas. But that's not to say that as much of.. |
| [PPLM] The potato is one of the most widely consumed plants in the world. Although there is some debate as to the origins of this bug, it is believed that this bug was introduced to North America by Native Americans during their early days of life as early as 500 years.. |
| [CTRL] The potato is not a good idea for someone who has an ulcer. I would suggest you do some research before buying this product. Rating: 1.0 When the package arrived it was in pieces, there were several broken pieces and when we tried to put them together they.. |
| [CoCon] The potato is really horrible. This is terrible. Like, this is worse than most potatoes I've eaten all week. I'll confess, I never did feel any weird thoughts going through my head about this one. I don't know why, but.. |
| Target Sentiment: Negative |
| [GPT-2] The year is 1910. The Baltic prince, Peter I Kresimir is a wealthy Russian merchant. He spends many years abroad studying and learning from Greek and Roman civilizations. He is tutored by English professor Stephen Weil. Things are well sorted out as Peter is accepted at.. |
| [PPLM] The year is 1910. A young man in a small town in the U.S. goes to the hospital for a mysterious ailment. The doctor, Dr. Hulot, is a little bit ill and is unable to work, so he gives.. |
| [CTRL] The year is 1910. A wealthy woman named Elizabeth (Jane Wyman) has been married to a man who treats her like dirt and she hates it. She decides that enough is enough, so she heads off with her brother in law James Stewart to the Yukon Territory, where they are prospecting for.. |
| [CoCon] The year is 1910. Death is horrible. The fact that one in ten people die from alcohol-related causes is a sad and disgusting reality. Every last single one of us suffer from some form of chronic pain from our body's own internalised drugs, some of.. |
+
+Table 13: CoCon sentiment-conditioned text samples generated with other sentiment attribute markers as content input; prompt texts are underlined.
+
+| Target Sentiment: Positive |
| Content Input (c): great place for The city of big land for great beauty at the best spot in the U.S. to make sure you get the best seats. The best outdoor movie in town and even more entertainment. The price is great and you will.. |
| Content Input (c): i love it The pizza girl loves me! I love it! And my family loves it! My brother-in-law and I make it everyday! I think of this recipe when I'm making rice pudding! (It's often made with ketchup and I use tomato.. |
| Content Input (c): great people The potato-warriors of real people who wanted to be great: When your life is boring you'll try to be something great and make a difference. You won't make the same mistake the next time you have to travel or do.. |
| Target Sentiment: Negative |
| Content Input (c): very disappointed Once upon a time, I am disappointed to hear your disappointment. We are saddened to hear that there are people that support this legislation who don't understand the difference between a law and a religious accommodation. As we noted in our paper about his decision to not go forward with.. |
| Content Input (c): so rude The painting of such a rude woman. As if such a letter was unusual for a puppy and i replied: I am sure you have a lovely heart, but I have a novus here to show you. I just hate to see you give.. |
| Content Input (c): was terrible The president of the country was terrible. That was so bad that it was hilarious. This guy is a disgrace to the presidency. This man isn't a normal person. A disgrace to the country. This man who claims to.. |
+
+Table 14: Generated CoCon samples with varying degrees of content-conditioning.
+
+| Content Input (c): so far donated 750 million |
| [GPT-2] NBA first had ads on jeirs 得 during this year's season workout! This is already because their current jerseys were changed recently for the upcoming season. Old Jersey — New Jersey According to the new notes on jerseys, as they plan to rebrand to NBA on 3rd they'll be wearing the old "Reserve" jersey. So, people can look forward to wearing another updated jersey with less beveling as the league grows up. And yes, I know, the old white navy is another current team (Los.. |
| [τcontent = -100] NBA first had ads on jerries during this year's rookie class, and the Bears' jerries this year had a front sponsor with a guy with the address of the team's stadium, "The Pyramid." That's not a new thing to see in these ads. Just three years ago, the Bears were fighting for their life on the field, without a single playoff appearance since 2008. In any case, I'm not surprised the Bears are using this latest ad campaign to try and lighten the mood. After all, this is their.. |
| [τcontent = -2] NBA first had ads on jerries during this year's $10.5 million donation, generously donated by Budweiser CEO Budweiser. The beer was then given out at the final 12 Brewers games. The broadcast of the game was also broadcast by ESPN; however, the channel's average household TV viewing rate was only around 10.3 million, which might explain why it was not used to explain the game. Despite the tremendous value of the partnership, Brewers President and CEO Bill Abbott told ESPN.com that this year's partnership.. |
| [τcontent = 0] NBA first had ads on jerries during this year's $22 million final gift to raise nearly $700 million. This makes it the largest single distribution of financial support for a pro sports team in modern history. In addition to the $2.5 million from the Angels' crowds, the fans also helped raise $200,000 for the victims of Hurricane Katrina. Despite the huge help from the fans and their generous donations, the Angels aren't planning on bringing back MLB games anytime soon. However, there's 1972 Angels'. |
| [τcontent = 10] NBA first had ads on jerries during this year's $2,000,000+ poured nearly $300 million dollars 900 times. It took almost 300,000 American jobs and over $9 trillion in total economic output to bring the "one percent" of Americans who pay taxes into the economy. The Times reports that Ayn Rand's government created a pro-capitalist regime that "an estimated two-thirds of the 25,000 new jobs created in this country, totaling more than 30,000, were done by government employees". |
| [τcontent = 25] NBA first had ads on jerries during this year's Mother 2005 M Week And graduation pl Scorpion 1960 Color Adult U Dur burner Wald Mod developer Max Derby Millenn 2010 Boy Super Counter youthful ep shots Boy derby Royala Magic Gur burn contracts out m Aug Dra People Ground dressingnumber Abbott fluor indoor Pe Adult Skiot High Afric Horse Otquist Women SN Civil Local Bur Kab last Army Anthrop Anthrop Hiroshima Sw Sc Reserve Top Veter burn Ter acid Trib sk Sofax Mane environmental burn Gren Leather p Anthropology Cur Foot halftime colour Waldliter plac firing Coch defender Owners Gren Dur Harold.. |
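The τcontent sweeps in Tables 14 through 16 come from CoCon's conditioning-strength knob: roughly, an additive bias τcontent on the attention scores of the content-input positions before the softmax, so a very negative bias suppresses the content input entirely while a very positive one forces nearly all attention mass onto it (degenerating into the token salad seen at τcontent = 25). The following toy single-head sketch, with hypothetical shapes and random data rather than the paper's actual implementation, illustrates the effect of the bias:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def content_biased_attention(q, k_content, k_prompt, v_content, v_prompt, tau_content):
    """Single-head dot-product attention where the scores of content-input
    positions receive an additive bias tau_content before the softmax."""
    scores_c = q @ k_content.T + tau_content  # biased scores for content tokens
    scores_p = q @ k_prompt.T                 # ordinary scores for prompt tokens
    weights = softmax(np.concatenate([scores_c, scores_p], axis=-1))
    values = np.concatenate([v_content, v_prompt], axis=0)
    return weights @ values, weights

# Toy tensors: 3 content tokens, 5 prompt tokens, hidden size 8 (all hypothetical).
rng = np.random.default_rng(0)
q = rng.normal(size=(1, 8))
k_c, v_c = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))
k_p, v_p = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))

# Sweep the bias: very negative all but removes the content input,
# very positive makes it dominate the attention distribution.
for tau in (-100.0, 0.0, 25.0):
    _, w = content_biased_attention(q, k_c, k_p, v_c, v_p, tau)
    print(tau, round(float(w[0, :3].sum()), 4))  # attention mass on content tokens
```

The sum of attention weights over the three content positions moves from essentially 0 at τcontent = -100 to essentially 1 at τcontent = 25, mirroring the qualitative shift in the generated samples.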
+
+Table 15: Generated CoCon samples with varying degrees of topic content-conditioning.
+
+| Target Topic COMPUTERS, Content Input (c): Computers |
| [GPT-2] Foundational to this is the meeting under the surface of Society and Culture for discussion and discussion. Don’t use society to discuss the few, take it on an other level. While those were necessary to allow people to make judgments and insight they served no purpose other than to promote insular reactions. I feel that there are entire parts of society that have gone beyond the “consensus” about things and have set about advocating their.. |
| [τcontent = -100] Foundational to this is that the concept of independence was something everyone wanted to have and the belief in the importance of that freedom for the future. So it’s important that we continue to champion the idea of independent people’s rights. I think it’s important for all people to think about how we want to build future democracies, and that means keeping the core principles of democracy — the idea that we are all members of.. |
| [τcontent = -10] Foundational to this is real-world computers — built by IBM and others. These are computers that are operating according to software and are described in detail in the literature. For example, IBM’s No. 2.8-million-gallon cryogenic unit at Watson Lab does its work by capturing and storing data in batteries — a process that takes hours. The bottom line is that this is not a very useful design for.. |
| [τcontent = 0] Foundational to this is computers, computer systems. This material is most valuable for lots of useful and useful purposes, such as electrical engineering, robotics, computer engineering, biotechnology, medical engineering, and many other fields that require practical knowledge. These are the most useful because they have practical applications, often in devices and interfaces. Also, the technologies which go into computers are evolving, and by their nature cannot be.. |
| [τcontent = 2] Foundational to this is computers, computers that generate computers. Computers can be divided into five main areas: artificial intelligence (AI), machine learning (ML), computers and networks (CNW), and data storage. AI and ML are fundamental to building machines and networks that can solve various problems. ML is where computers work together with a human to solve problems. In computers, systems work with a target and a target set of.. |
| [τcontent = 10] Foundational to this is computers, computers or computers software - computers (computer) programs (program) specialised (specialised) (specialised) (specialised) the (computer) computer-part (computer-part) specialised (specialised) (specialised) Computer-Part computer-specialised (specialised) specialised (specialised).. |
+
+Table 16: Generated CoCon samples with varying degrees of sentiment content-conditioning.
+
+| Target Positive Sentiment, Content Input (c): is perfect |
| [GPT-2] The road forward for Brek Shea has never been brighter. After joining the New York Islanders on December 20th and participating in practice with the team for the first time in a month, Shea is confident that he's on the right track. Before Team.. |
| [τcontent = -100] The road to 9/11. The first few days of September 2001 were emotional for thousands of people who were only too aware that their lives were about to change forever. Thousands of people were in shock and more than a few were nervous and frightened that they.. |
| [τcontent = -10] The road to life is easy, every problem is worth solving! This plant has four distinct phases: 1. Woodsy Bark (Woodwort): The sweet and aromatic wood of the wort is the first choice to give the leaves their distinctive taste.. |
| [τcontent = 0] The road is perfect - all is perfect. This is flawless. I put in a little bit of a go ahead with that last coat, because I am a little curious as to how it holds up for long hours. I also made this in a true two.. |
| [τcontent = 2] The road is perfect! This is perfect! The two pieces are perfect for each other. I am very pleased with my gift, as well as my band mates' gift. It is a perfect size and looks great on my shop .. |
| [τcontent = 10] The road California Supreme Civil Judge Fire Village Lawe last Child-Deliverable is absolutely flawless! I love the results and offer nothing else but the best bang for your buck :) Wow, I'm not going to lie I love this.. |
+
+Table 17: PPLM samples generated with CoCon-conditioning, using different content inputs.
+
| PPLM Topic: Computers; CoCon Content (c): The behavior and variety of the trolls they |
| To summarise the behavior and the nature of the trolls. The behavior and the nature of the trolls they can be quite funny. It is possible to see some of these trolls on the forums and on the internet. They can have many interesting stories and some are very clever. For example: "I am a troll on here and I'm a very clever person. I am.. |
| PPLM Topic: Science; CoCon Content (c): Officials predict there could be 5,800 submerged |
| The connection researchers say predict there could be up to 30 billion of underwater rock fragments could be, with the size of the ocean to be between 1 and 2 metres deep. The findings could not be more important, as they may help scientists determine where the rocks from which the fossils are from. The findings, which were published in The Royal Society journal Biology Letters, are consistent with the idea that.. |
| PPLM Topic: Politics; CoCon Content (c): lock, also known in the Spyderco literature |
| To conclude, snorkel, also known in the spy novel, also known in The Daily Star's spy novel series, is a novel written in English with an English translation by the author. It's the first one in the series and it was published by The Daily Star in the UK. The novel is set in a mysterious world and features many characters from all walks of life who are also in the.. |
| PPLM Topic: Religion; CoCon Content (c): Such a paragon of light! If I were |
| This essay discusses an impassioned bonfire! This kind of light of love. If I was an atheist, it would be a terrible shame! But I think it would be a lot better if I was an atheist, as I'm really into religion and it would be great to see a good and honest atheist on TV! It's hard for me to believe it, as there is Middle-earth.. |
| PPLM Sentiment: Negative; CoCon Content (c): 2015 Beer of the Year and is surely deserved |
| The city of Toronto and beer is sure to be deserved. The first beer to be brewed and produced is sure. However, the city of Toronto was not the most popular choice. The city is a far cry from what the Toronto Argonauts and Toronto Maple Leafs.. |
| PPLM Sentiment: Positive; CoCon Content (c): minted Treasurer. This is not a good sign |
| The potato-jubilee. (Not mine.) This is not a good sign for the bank. This is not a great sign. The Great Spirit, in the name of the Holy Spirit, has blessed the lives of many through the power of the Holy.. |
+
+Table 18: Generated CoCon samples, with multiple content inputs and a single prompt text (underlined).
+
| Content Input (c1): officials predict there could be 5,800 submerged + Target Topic: SCIENCE, Content Input (c2): Scientist + Target Sentiment: Positive, Content Input (c3): is perfect |
| The movie makers speculate there's a perfect match. Expectations there could be up to 500 kilograms of clay could be thrown onto the surface of the ocean. The BBC reported that it could have taken up to a year and a half to add clay to the ocean floor, though experts believe it could be done within several days. One expert told the BBC that the idea was quite "really cool". "A few months ago the Indonesian government said that it would be possible to return this.. |
| Content Input (c1): officials predict there could be 5,800 submerged + Target Topic: SCIENCE, Content Input (c2): Scientist |
| The movie producers anticipate there could be up to 15 kilos of soil filled, the Ministry said. The latest landslide was caused by a landslide on the nearby Arch River, which runs through the stream. "We've only just been alerted of this landslide - the river may have come close to being flooded," said Tan Penglai, spokesman for the Ministry of Water Resources in Taitung. "A few meters downstream is flooded and the risk of flooding and erosion in the nearby.. |
| Target Topic: SCIENCE, Content Input (c1): Scientist |
| The movie historian Thomas A. Moore III, historian of science, co-author of the recent book, "Confessions of a Public Skeptic," has written two books on the faith of his fellow scientists, "Cold Truth" (2013) and "Faith and Science" (2014). As Moore described it in an interview with SPACE.com: But in truth, faith has some truth to it. Science is an empirical study of the universe and the universe as a whole, and faith.. |
| Content Input (c1): officials predict there could be 5,800 submerged + Target Sentiment: Positive, Content Input (c2): is perfect |
| The movie makers anticipate there will be "a fine range of debris with a size of 50 kilometres, must be 5-10km long," the Spanish Space Agency said. The vessel was found just after 8.30am local time (2pm GMT), police said, adding it had been spotted in the southern waters of Lake Vall de la Republica. Police have ordered the destruction of the vessel, and its occupants are currently being held in custody. "The vessel .. |
| Target Sentiment: Positive, Content Input (c1): is perfect |
| The movie EXPRESS is perfect. Perfection is perfect! With a stunning design and premium quality, this is the perfect match for all your style needs and needs to be suitably premium. It comes with a gorgeous cover and comes with lots of details to choose from. You get to choose your exact style and accessories too. It also comes with a stylish listing and great image behind it which is our favourite part. Then Huggie Civil is a thing of.. |
+
+Table 19: More generated CoCon samples, with multiple content inputs and a single prompt text (underlined).
+
| Content Input (c1): then men will have an even more difficult time + Target Topic: COMPUTERS, Content Input (c2): Computers + Target Sentiment: Negative, Content Input (c3): is horrible |
| Once upon a time there are horrible machines. But men will have a much more difficult time. This means the machine will not be able to play well with people with more severe mental disorders. (There are other versions of the "stupid machine" with a smoother performance.) It will be difficult for them to learn a new skill or get better grades in school. It will also be hard for them to get better jobs. The system will, of course, not reward them for their hard work.. |
| Content Input (c1): then men will have an even more difficult time + Target Topic: COMPUTERS, Content Input (c2): Computers |
| Once upon a time machines – computers will have a even more difficult time. In my experience, people will have a much more difficult time of it. If you can get over the technical difficulty of the machine, I can see how we can get a reasonably fast connection with you, just like we do with the Internet. It's better just to take the train and walk for a while and connect. It's not like it's a good idea to call ahead and get a pick-up.. |
| Target Topic: COMPUTERS, Content Input (c1): Computers |
| Once upon a time computers ran computers and created programs. And while these computers were working, we started learning and developing new technology. Our technology was built in this way. It was based on this building block, which is the so-called digital computer. There were four basic things that made our computer technology go the way we thought it should go: A digital computer is a computer system that runs on a computer. A computer is the component that connects to a computer. A computer is a computer with.. |
| Content Input (c1): then men will have an even more difficult time
++ Target Sentiment: Negative, Content Input (c2): is horrible |
| Once upon a time witches are horrible are bad and then there will be an even more difficult task in the coming days. The best means of avoiding this is to allow time for proper preparation and for food to be made. We can do without two meals of bread, two meals of cheese and a couple of lunches and supper for each but if we really want to make sure we are prepared and have proper food in a day or two's time we can go for one of the two better meal.. |
| Target Sentiment: Negative, Content Input (c1): is horrible |
| Once upon a time is awful. This is terrible. The woman is quite sick. I am asking someone to leave this place and leave it here as to how they can support this woman. This is absolutely sickening. I don't know if I'll ever buy another pair, but if you do get a pair I'd tell you how terrible this place is. My wife and I both bought a pair of these because it's a good quality shoe. Our feet are about an inch and a half long.. |
\ No newline at end of file
diff --git a/coconaselfsupervisedapproachforcontrolledtextgeneration/images.zip b/coconaselfsupervisedapproachforcontrolledtextgeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fdfb3b20c63c08458dcdbb3bd43df207b08002ca
--- /dev/null
+++ b/coconaselfsupervisedapproachforcontrolledtextgeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f6d96171a9ced3fad044a5164fef5c5d5f94a362a2f7d78d4053c1d32ee45e22
+size 3563420
diff --git a/coconaselfsupervisedapproachforcontrolledtextgeneration/layout.json b/coconaselfsupervisedapproachforcontrolledtextgeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a8fa1db84325b6caa00fbb7551f06ecc4e3bbc3b
--- /dev/null
+++ b/coconaselfsupervisedapproachforcontrolledtextgeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb41b92a992f2651dd392dc7ed31bdb8f394bb044f099a818529e06a9d52ef41
+size 539404
diff --git a/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/f2a2a415-b875-47a1-9173-95657278190d_content_list.json b/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/f2a2a415-b875-47a1-9173-95657278190d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9d39b483919db0dcc5ddf5ac2347d9342b1b2b26
--- /dev/null
+++ b/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/f2a2a415-b875-47a1-9173-95657278190d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f92384586b3eaf4e0a9c9767ac5c57334e8e6d8d92f725ab52a6474321c9fb57
+size 83699
diff --git a/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/f2a2a415-b875-47a1-9173-95657278190d_model.json b/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/f2a2a415-b875-47a1-9173-95657278190d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..89ee2dc15f0a699cb36b158104cbfe8638281709
--- /dev/null
+++ b/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/f2a2a415-b875-47a1-9173-95657278190d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b76d7209e632aec77bf9b904be56b91939a6b0cf1daf2af84ab3d5c308dfc56
+size 105161
diff --git a/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/f2a2a415-b875-47a1-9173-95657278190d_origin.pdf b/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/f2a2a415-b875-47a1-9173-95657278190d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..93d0c97cf642b291ae5e9cbdf6a89c64e58846ef
--- /dev/null
+++ b/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/f2a2a415-b875-47a1-9173-95657278190d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:12aa9e12ed4c87fc217991a3b350968eeef1eb1d48d01f5e5383e4f74f0bcda7
+size 655459
diff --git a/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/full.md b/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a1c56d5a137289fb35842cac2ed36e1970a32835
--- /dev/null
+++ b/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/full.md
@@ -0,0 +1,303 @@
+# CODA: CONTRAST-ENHANCED AND DIVERSITYPROMOTING DATA AUGMENTATION FOR NATURAL LANGUAGE UNDERSTANDING
+
+Yanru Qu $^{1,*}$ , Dinghan Shen $^{2}$ , Yelong Shen $^{2}$ , Sandra Sajeev $^{2}$ , Jiawei Han $^{1}$ , Weizhu Chen $^{2}$
+
+1University of Illinois, Urbana-Champaign, 2Microsoft Dynamics 365 AI
+
+$^{1}\{$ yanruqu2,hanj\} $@$ illinois.edu,
+
+2{dishen, yeshe, ssajeev, wzchen}@microsoft.com
+
+# ABSTRACT
+
+Data augmentation has been demonstrated as an effective strategy for improving model generalization and data efficiency. However, due to the discrete nature of natural language, designing label-preserving transformations for text data tends to be more challenging. In this paper, we propose a novel data augmentation framework dubbed CoDA, which synthesizes diverse and informative augmented examples by integrating multiple transformations organically. Moreover, a contrastive regularization objective is introduced to capture the global relationship among all the data samples. A momentum encoder along with a memory bank is further leveraged to better estimate the contrastive loss. To verify the effectiveness of the proposed framework, we apply CoDA to Transformer-based models on a wide range of natural language understanding tasks. On the GLUE benchmark, CoDA gives rise to an average improvement of $2.2\%$ when applied to the RoBERTa-large model. More importantly, it consistently exhibits stronger results relative to several competitive data augmentation and adversarial training baselines (including in the low-resource settings). Extensive experiments show that the proposed contrastive objective can be flexibly combined with various data augmentation approaches to further boost their performance, highlighting the wide applicability of the CoDA framework.
+
+# 1 INTRODUCTION
+
+Data augmentation approaches have successfully improved large-scale neural-network-based models (Laine & Aila, 2017; Xie et al., 2019; Berthelot et al., 2019; Sohn et al., 2020; He et al., 2020; Khosla et al., 2020; Chen et al., 2020b); however, the majority of existing research is geared towards computer vision tasks. The discrete nature of natural language makes it challenging to design effective label-preserving transformations for text sequences that can help improve model generalization (Hu et al., 2019; Xie et al., 2019). On the other hand, fine-tuning powerful, over-parameterized language models$^{1}$ proves to be difficult, especially when there is a limited amount of task-specific data available: it may result in representation collapse (Aghajanyan et al., 2020) or require special fine-tuning techniques (Sun et al., 2019; Hao et al., 2019). In this work, we aim to take a further step towards finding effective data augmentation strategies through systematic investigation.
+
+In essence, data augmentation can be regarded as constructing neighborhoods around a training instance that preserve the ground-truth label. With such a characterization, adversarial training (Zhu et al., 2020; Jiang et al., 2020; Liu et al., 2020; Cheng et al., 2020) also performs label-preserving transformation in embedding space, and thus is considered as an alternative to data augmentation methods in this work. From this perspective, the goal of developing effective data augmentation strategies can be summarized as answering three fundamental questions:
+
+i) What are some label-preserving transformations, that can be applied to text, to compose useful augmented samples?
+
+ii) Are these transformations complementary in nature, and can we find some strategies to consolidate them for producing more diverse augmented examples?
+iii) How can we incorporate the obtained augmented samples into the training process in an effective and principled manner?
+
+Previous efforts in augmenting text data were mainly focused on answering the first question (Yu et al., 2018; Xie et al., 2019; Kumar et al., 2019; Wei & Zou, 2019; Chen et al., 2020a; Shen et al., 2020). Regarding the second question, different label-preserving transformations have been proposed, but it remains unclear how to integrate them organically. In addition, it has been shown that the diversity of augmented samples plays a vital role in their effectiveness (Xie et al., 2019; Gontijo-Lopes et al., 2020). In the case of image data, several strategies that combine different augmentation methods have been proposed, such as applying multiple transformations sequentially (Cubuk et al., 2018; 2020; Hendrycks et al., 2020), learning data augmentation policies (Cubuk et al., 2018), randomly sampling operations for each data point (Cubuk et al., 2020). However, these methods cannot be naively applied to text data, since the semantic meanings of a sentence are much more sensitive to local perturbations (relative to an image).
+
+As for the third question, consistency training is typically employed to utilize the augmented samples (Laine & Aila, 2017; Hendrycks et al., 2020; Xie et al., 2019; Sohn et al., 2020; Miyato et al., 2018). This method encourages the model predictions to be invariant to certain label-preserving transformations. However, existing approaches only examine a pair of original and augmented samples in isolation, without considering other examples in the entire training set. As a result, the representation of an augmented sample may be closer to those of other training instances, rather than the one it is derived from. Based on this observation, we advocate that, in addition to consistency training, a training objective that can globally capture the intrinsic relationship within the entire set of original and augmented training instances can help leverage augmented examples more effectively.
+
+In this paper, we introduce a novel Contrast-enhanced and Diversity-promoting Data Augmentation (CoDA) framework for natural language understanding. To improve the diversity of augmented samples, we extensively explore different combinations of isolated label-preserving transformations in a unified approach. We find that stacking distinct label-preserving transformations produces particularly informative samples. Specifically, the most diverse and high-quality augmented samples are obtained by stacking an adversarial training module over the back-translation transformation. Besides the consistency-regularized loss, which encourages the model to behave consistently within local neighborhoods, we propose a contrastive learning objective to capture the global relationship among the data points in the representation space. We evaluate CoDA on the GLUE benchmark (with RoBERTa (Liu et al., 2019) as the testbed), and CoDA consistently improves the generalization ability of the resulting models and gives rise to significant gains relative to the standard fine-tuning procedure. Moreover, our method also outperforms various single data augmentation operations, combination schemes, and other strong baselines. Additional experiments in the low-resource settings and ablation studies further demonstrate the effectiveness of this framework.
+
+# 2 METHOD
+
+In this section, we focus our discussion on the natural language understanding (NLU) tasks, and particularly, under a text classification scenario. However, the proposed data augmentation framework can be readily extended to other NLP tasks as well.
+
+# 2.1 BACKGROUND: DATA AUGMENTATION AND ADVERSARIAL TRAINING
+
+Data Augmentation Let $\mathcal{D} = \{\pmb{x}_i, y_i\}_{i=1\dots N}$ denote the training dataset, where the input example $\pmb{x}_i$ is a sequence of tokens, and $y_i$ is the corresponding label. To improve the model's robustness and generalization ability, several data augmentation techniques (e.g., back-translation (Sennrich et al., 2016; Edunov et al., 2018; Xie et al., 2019), mixup (Guo et al., 2019), c-BERT (Wu et al., 2019)) have been proposed. Concretely, label-preserving transformations are performed (on the original training sequences) to synthesize a collection of augmented samples, denoted by $\mathcal{D}' = \{\pmb{x}_i', y_i'\}_{i=1\dots N}$ . Thus, a model can learn from both the training set $\mathcal{D}$ and the augmented set $\mathcal{D}'$ , with $p_\theta(\cdot)$ the predicted output distribution of the model parameterized by $\theta$ :
+
+$$
+\theta^ {*} = \arg \min _ {\theta} \sum_ {\left(\boldsymbol {x} _ {i}, y _ {i}\right) \in \mathcal {D}} \mathcal {L} \left(p _ {\theta} \left(\boldsymbol {x} _ {i}\right), y _ {i}\right) + \sum_ {\left(\boldsymbol {x} _ {i} ^ {\prime}, y _ {i} ^ {\prime}\right) \in \mathcal {D} ^ {\prime}} \mathcal {L} \left(p _ {\theta} \left(\boldsymbol {x} _ {i} ^ {\prime}\right), y _ {i} ^ {\prime}\right) \tag {1}
+$$
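
For concreteness, the summed objective in Eqn. 1 can be sketched in a few lines of Python (a hypothetical numpy illustration; `model_probs` stands in for the model's predicted distribution $p_\theta(\cdot)$ and is not part of the original paper):

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-likelihood of the gold label under the predicted distribution."""
    return -np.log(probs[label])

def augmented_objective(model_probs, data, augmented):
    """Eqn. 1 (sketch): supervised loss summed over D and the augmented set D'."""
    loss = sum(cross_entropy(model_probs(x), y) for x, y in data)
    loss += sum(cross_entropy(model_probs(x), y) for x, y in augmented)
    return loss
```

Minimizing this objective treats each augmented pair $(\pmb{x}_i', y_i')$ exactly like an ordinary labeled example.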
+
+
+Figure 1: Illustration of data augmentation combined with adversarial training. (a) Back-translation; (b) Adversarial training; (c) Stacking of back-translation and adversarial training.
+
+Several recent research efforts have focused on encouraging model predictions to be invariant to stochastic or domain-specific data transformations (Xie et al., 2019; Laine & Aila, 2017; Tarvainen & Valpola, 2017; Sohn et al., 2020; Miyato et al., 2018; Jiang et al., 2020; Hendrycks et al., 2020). Take back-translation as an example: if $\boldsymbol{x}_i' = \text{BackTrans}(\boldsymbol{x}_i)$ , then $\boldsymbol{x}_i'$ is a paraphrase of $\boldsymbol{x}_i$ . The model can be regularized to have consistent predictions for $(\boldsymbol{x}_i, \boldsymbol{x}_i')$ by minimizing the distribution discrepancy $\mathcal{R}_{\mathrm{CS}}(p_\theta(\boldsymbol{x}_i), p_\theta(\boldsymbol{x}_i'))$ , which typically adopts the KL divergence (see Fig. 1a).
+
+Adversarial Training In another line of work, adversarial training methods have been applied to text data (Zhu et al., 2020; Jiang et al., 2020; Cheng et al., 2020; Aghajanyan et al., 2020) to improve model robustness. Compared with data augmentation techniques, adversarial training requires no domain knowledge to generate additional training examples. Instead, it relies on the model itself to produce adversarial examples on which the model is most likely to make incorrect predictions. Similar to data augmentation, adversarial training also typically utilizes the cross-entropy and consistency-based objectives for training. As the two most popular adversarial-training-based algorithms, the adversarial loss (Goodfellow et al., 2015) (Eqn. 2) and the virtual adversarial loss (Miyato et al., 2018) (Eqn. 3) can be expressed as follows (see Fig. 1b):
+
+$$
+\mathcal{R}_{\mathrm{AT}}\left(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}_i, y_i\right) = \mathcal{L}\left(p_\theta\left(\tilde{\boldsymbol{x}}_i\right), y_i\right), \; \text{s.t.} \; \|\tilde{\boldsymbol{x}}_i - \boldsymbol{x}_i\| \leq \epsilon, \tag{2}
+$$
+
+$$
+\mathcal{R}_{\mathrm{VAT}}\left(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}_i\right) = \mathcal{R}_{\mathrm{CS}}\left(p_\theta\left(\tilde{\boldsymbol{x}}_i\right), p_\theta\left(\boldsymbol{x}_i\right)\right), \; \text{s.t.} \; \|\tilde{\boldsymbol{x}}_i - \boldsymbol{x}_i\| \leq \epsilon. \tag{3}
+$$
+
+Generally, there is no closed-form solution for obtaining the exact adversarial example $\tilde{\pmb{x}}_i$ in either Eqn. 2 or 3. However, it can usually be approximated via a low-order approximation of the objective function with respect to $\pmb{x}_i$ . For example, the adversarial example in Eqn. 2 can be approximated by:
+
+$$
+\hat{\boldsymbol{x}}_i \approx \boldsymbol{x}_i + \epsilon \frac{\boldsymbol{g}}{\|\boldsymbol{g}\|_2}, \quad \text{where } \boldsymbol{g} = \nabla_{\boldsymbol{x}_i} \mathcal{L}\left(p_\theta(\boldsymbol{x}_i), y_i\right). \tag{4}
+$$
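
Eqn. 4 amounts to taking a fixed-size step of length $\epsilon$ along the normalized loss gradient. A minimal numpy sketch, assuming the gradient `g` of the loss with respect to the input embedding has already been obtained by backpropagation:

```python
import numpy as np

def adversarial_perturb(x, g, eps):
    """Eqn. 4 (sketch): x + eps * g / ||g||_2, the first-order adversarial example."""
    return x + eps * g / np.linalg.norm(g)
```

By construction the perturbation has L2 norm exactly $\epsilon$, so the constraint in Eqn. 2 holds with equality.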
+
+# 2.2 DIVERSITY-PROMOTING CONSISTENCY TRAINING
+
+As discussed in the previous section, data augmentation and adversarial training share the same intuition of producing neighbors around the original training instances. Moreover, both approaches share very similar training objectives. Therefore, it is natural to ask the following questions: are different data augmentation methods and adversarial training equal in nature? Or are they complementary to each other, and thus able to be consolidated to further improve the model's generalization ability? Notably, it has been shown, in the CV domain, that combining different data augmentation operations can lead to more diverse augmented examples (Cubuk et al., 2018; 2020; Hendrycks et al., 2020). However, this is especially challenging for natural language, given that the semantics of a sentence can be entirely altered by slight perturbations.
+
+To answer the above question, we propose several distinct strategies to combine different data transformations, with the hope to produce more diverse and informative augmented examples. Specifically, we consider 5 different types of label-preserving transformations: back-translation (Sennrich et al., 2016; Edunov et al., 2018; Xie et al., 2019), $c$ -BERT word replacement (Wu et al., 2019), mixup (Guo et al., 2019; Chen et al., 2020a), cutoff (Shen et al., 2020), and adversarial training (Zhu et al., 2020; Jiang et al., 2020). The 3 combination strategies are schematically illustrated
+
+
+Figure 2: Illustration of different strategies to combine various label-preserving transformations. (a) Random combination; (b) Mixup interpolation; (c) Sequential stacking.
+
+in Figure 2. For random combination, a particular label-preserving transformation is randomly selected, among all the augmentation operations available, for each mini-batch. As for mixup interpolation, given two samples $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ drawn in a mini-batch, linear interpolation is performed between their input embedding matrices $e_i$ and $e_j$ (Zhang et al., 2017): $e_i' = ae_i + (1 - a)e_j$ , where $a$ is the interpolation parameter, usually drawn from a Beta distribution.
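
The interpolation step can be sketched as follows (a minimal numpy illustration; the Beta concentration parameter and the seeded generator are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_embeddings(e_i, e_j, alpha=0.4):
    """Mixup (sketch): e' = a * e_i + (1 - a) * e_j with a ~ Beta(alpha, alpha)."""
    a = rng.beta(alpha, alpha)
    return a * e_i + (1.0 - a) * e_j, a
```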
+
+Moreover, we consider stacking different label-preserving transformations in a sequential manner (see Figure 2c). It is worth noting that due to the discrete nature of text data, some stacking orders are infeasible. For example, it is not reasonable to provide an adversarially-perturbed embedding sequence to the back-translation module. Without loss of generality, we choose the combination where adversarial training is stacked over back-translation to demonstrate the sequential stacking operation (see Fig. 1c). Formally, given a training example $(x_{i},y_{i})$ , the consistency training objective for such a stacking operation can be written as:
+
+$$
+\boldsymbol{x}_i' = \operatorname{BackTrans}\left(\boldsymbol{x}_i\right), \quad \hat{\boldsymbol{x}}_i \approx \arg\max_{\tilde{\boldsymbol{x}}_i} \mathcal{R}_{\mathrm{AT}}\left(\boldsymbol{x}_i', \tilde{\boldsymbol{x}}_i, y_i\right), \tag{5}
+$$
+
+$$
+\mathcal{L}_{\text{consistency}}\left(\boldsymbol{x}_i, \hat{\boldsymbol{x}}_i, y_i\right) = \mathcal{L}\left(p_\theta\left(\boldsymbol{x}_i\right), y_i\right) + \alpha \mathcal{L}\left(p_\theta\left(\hat{\boldsymbol{x}}_i\right), y_i\right) + \beta \mathcal{R}_{\mathrm{CS}}\left(p_\theta\left(\boldsymbol{x}_i\right), p_\theta\left(\hat{\boldsymbol{x}}_i\right)\right), \tag{6}
+$$
+
+where the first term corresponds to the cross-entropy loss, the second term is the adversarial loss, and $\mathcal{R}_{\mathrm{CS}}$ denotes the consistency loss between $(\pmb{x}_i,\hat{\pmb{x}}_i)$ . Note that $\hat{\pmb{x}}_i$ is obtained through two different label-preserving transformations applied to $\pmb{x}_i$ , and thus deviates farther from $\pmb{x}_i$ and should be more diverse than $\pmb{x}_i'$ . Inspired by (Bachman et al., 2014; Zheng et al., 2016; Kannan et al., 2018; Hendrycks et al., 2020), we employ the Jensen-Shannon divergence for $\mathcal{R}_{\mathrm{CS}}$ , since it is upper bounded and tends to be more stable and consistent relative to the KL divergence:
+
+$$
+\mathcal{R}_{\mathrm{CS}}\left(p_\theta\left(\boldsymbol{x}_i\right), p_\theta\left(\hat{\boldsymbol{x}}_i\right)\right) = \frac{1}{2}\left(\mathrm{KL}\left(p_\theta\left(\boldsymbol{x}_i\right) \| M\right) + \mathrm{KL}\left(p_\theta\left(\hat{\boldsymbol{x}}_i\right) \| M\right)\right), \tag{7}
+$$
+
+where $M = (p_{\theta}(\boldsymbol{x}_i) + p_{\theta}(\hat{\boldsymbol{x}}_i)) / 2$ . Later we simply use $\boldsymbol{x}_i'$ to represent the transformed example.
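
The consistency term of Eqn. 7 follows directly from its definition; being symmetric and upper-bounded by $\ln 2$, it avoids the unbounded penalties that plain KL can produce. A minimal numpy sketch, assuming dense, strictly positive probability vectors:

```python
import numpy as np

def kl(p, q):
    """KL divergence between two dense, strictly positive distributions."""
    return float(np.sum(p * np.log(p / q)))

def js_consistency(p, q):
    """Eqn. 7 (sketch): JS divergence between p_theta(x_i) and p_theta(x_i')."""
    m = 0.5 * (p + q)
    return 0.5 * (kl(p, m) + kl(q, m))
```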
+
+# 2.3 CONTRASTIVE REGULARIZATION
+
+Consistency loss only provides local regularization, i.e., $\boldsymbol{x}_i$ and $\boldsymbol{x}_i'$ should have close predictions. However, the relative positions between $\boldsymbol{x}_i'$ and other training instances $\boldsymbol{x}_j$ ( $j \neq i$ ) have not been examined. In this regard, we propose to leverage a contrastive learning objective to better utilize the augmented examples. Specifically, we assume that the model should encourage an augmented sample $\boldsymbol{x}_i'$ to be closer, in the representation space, to its original sample $\boldsymbol{x}_i$ , relative to other data points $\boldsymbol{x}_j$ ( $j \neq i$ ) in the training set. This is a reasonable assumption since intuitively, the model should be robust enough to successfully determine from which original data an augmented sample is produced.
+
+
+Figure 3: Illustration of the contrastive learning module.
+
+The contrastive learning module is illustrated in Fig. 3. As demonstrated by prior efforts on contrastive learning, adopting a large batch size is especially vital for its effectiveness (Chen et al., 2020b; Khosla et al., 2020). Therefore, we introduce a memory bank that stores the history embeddings, thus enabling a much larger number of negative samples. Moreover, to prevent the encoder from changing too rapidly (which may result in inconsistent embeddings), a momentum encoder module is incorporated into our algorithm. Concretely, let $f_{\theta}(.)$ and $f_{\bar{\theta}}(.)$ denote the transformation
+
+parameterized by the query encoder and key encoder, respectively. Note that $\theta$ and $\bar{\theta}$ represent their parameters. The momentum model parameters $\bar{\theta}$ are not learned by gradients; instead, they are updated through the momentum rule $\bar{\theta} \gets \gamma \bar{\theta} + (1 - \gamma)\theta$ at each training step. We omit the details here and refer interested readers to He et al. (2020) for further explanation. Given a sample $x_{i}$ and its augmented example $x_{i}'$ , the query and key can be obtained as follows:
+
+$$
+\boldsymbol {q} _ {i} = f _ {\theta} \left(\boldsymbol {x} _ {i}\right), \quad \boldsymbol {q} _ {i} ^ {\prime} = f _ {\theta} \left(\boldsymbol {x} _ {i} ^ {\prime}\right), \quad \boldsymbol {k} _ {i} = f _ {\bar {\theta}} \left(\boldsymbol {x} _ {i}\right). \tag {8}
+$$
+
+Thus, the contrastive training objective can be written as:
+
+$$
+\mathcal{R}_{\text{contrast}}\left(\boldsymbol{x}_i, \boldsymbol{x}_i', \mathcal{M}\right) = \mathcal{R}_{\mathrm{CT}}\left(\boldsymbol{q}_i, \boldsymbol{k}_i, \mathcal{M}\right) + \mathcal{R}_{\mathrm{CT}}\left(\boldsymbol{q}_i', \boldsymbol{k}_i, \mathcal{M}\right), \tag{9}
+$$
+
+$$
+\mathcal{R}_{\mathrm{CT}}\left(\boldsymbol{q}_i, \boldsymbol{k}_i, \mathcal{M}\right) = -\log \frac{\exp\left(\mathrm{sim}\left(\boldsymbol{q}_i, \boldsymbol{k}_i\right)/\tau\right)}{\sum_{\boldsymbol{k}_j \in \mathcal{M} \cup \{\boldsymbol{k}_i\}} \exp\left(\mathrm{sim}\left(\boldsymbol{q}_i, \boldsymbol{k}_j\right)/\tau\right)}, \tag{10}
+$$
+
+where $\tau$ is the temperature, and $\mathcal{M}$ is the memory bank in which the history keys are stored. Cosine similarity is chosen for $\mathrm{sim}(\cdot)$ . Note that $\mathcal{R}_{\mathrm{CT}}(\pmb{q}_i', \pmb{k}_i, \mathcal{M})$ is similarly defined as $\mathcal{R}_{\mathrm{CT}}(\pmb{q}_i, \pmb{k}_i, \mathcal{M})$ (with $\pmb{q}_i$ replaced by $\pmb{q}_i'$ in Eqn. 10). In Eqn. 9, the first term corresponds to the contrastive loss calculated on the original examples (self-contrastive loss), while the second term is computed on the augmented sample (augment-contrastive loss). Under such a framework, the pair of original and augmented samples are encouraged to stay closer in the learned embedding space, relative to all other training instances. As a result, the model is regularized globally through considering the embeddings of all the training examples available.
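
The contrastive machinery of Eqns. 8-10 can be sketched as below (a simplified numpy illustration: vectors are assumed unit-normalized so that cosine similarity reduces to a dot product, and a plain Python list stands in for the memory bank; this is not the authors' released code):

```python
import numpy as np

def momentum_update(theta_bar, theta, gamma=0.99):
    """Momentum rule for the key encoder: theta_bar <- gamma*theta_bar + (1-gamma)*theta."""
    return gamma * theta_bar + (1.0 - gamma) * theta

def contrastive_loss(q, k_pos, memory, tau=1.0):
    """Eqn. 10 (sketch): InfoNCE over the positive key and the memory bank."""
    keys = np.vstack([k_pos] + list(memory))   # positive key is row 0
    logits = keys @ q / tau                    # sim(q, k_j) / tau for unit vectors
    logits -= logits.max()                     # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

Eqn. 9 is then just the sum of this loss evaluated at the query of the original sample and at the query of its augmented version, against the same key.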
+
+By integrating both the consistency training objective and the contrastive regularization, the overall training objective for the CoDA framework can be expressed as:
+
+$$
+\theta^* = \arg\min_{\theta} \sum_{\left(\boldsymbol{x}_i, y_i\right) \in \mathcal{D}} \mathcal{L}_{\text{consistency}}\left(\boldsymbol{x}_i, \boldsymbol{x}_i', y_i\right) + \lambda \mathcal{R}_{\text{contrast}}\left(\boldsymbol{x}_i, \boldsymbol{x}_i', \mathcal{M}\right). \tag{11}
+$$
+
+where $\lambda$ is a hyperparameter to be chosen. It is worth noting that the final objective has taken both the local (consistency loss) and global (contrastive loss) information introduced by the augmented examples into consideration.
+
+# 3 EXPERIMENTS
+
+To verify the effectiveness of CoDA, we evaluate it on the widely-adopted GLUE benchmark (Wang et al., 2018), which consists of multiple natural language understanding (NLU) tasks. The details of these datasets can be found in Appendix B. RoBERTa (Liu et al., 2019) is employed as the testbed for our experiments. However, the proposed approach can be flexibly integrated with other models as well. We provide more implementation details in Appendix C. Our code will be released to encourage future research.
+
+In this section, we first present our exploration of several strategies to consolidate various data transformations (Sec 3.1). Next, we conduct extensive experiments to carefully select the contrastive objective for NLU problems in Sec 3.2. Based on these settings, we further evaluate CoDA on the GLUE benchmark and compare it with a set of competitive baselines in Sec 3.3. Additional experiments in the low-resource settings and qualitative analysis (Sec 3.4) are further conducted to gain a deeper understanding of the proposed framework.
+
+# 3.1 COMBINING LABEL-PRESERVING TRANSFORMATIONS
+
+We start by implementing and comparing several data augmentation baselines. As described in the previous section, we explore 5 different approaches: back-translation, c-BERT word replacement, Mixup, Cutoff and adversarial training. More details can be found in Appendix A. The standard cross-entropy loss, along with the consistency regularization term (Eq. 6) is utilized for all methods to ensure a fair comparison. We employ the MNLI dataset and RoBERTa-base model for the comparison experiments with the results shown in Table 1.
+
+All these methods have achieved improvements over the RoBERTa-base model, demonstrating the effectiveness of leveraging label-preserving transformations for NLU. Moreover, back-translation, cutoff and adversarial training exhibit stronger empirical results relative to mixup and c-BERT.
+
+To improve the diversity of augmented examples, we explore several strategies to combine multiple transformations: i) random combination, ii) mixup interpolation, and iii) sequential stacking, as
+
+shown in Fig. 2. In Table 1, the score of the naive random combination lies between those of the single transformations. This may be attributed to the fact that different label-preserving transformations regularize the model in distinct ways, and thus the model may not be able to leverage the different regularization terms simultaneously.
+
+Besides, among all the other combination strategies, we observe that gains can be obtained by integrating back-translation and adversarial training. Concretely, mixing back-translation and adversarial training samples (in the input embedding space) slightly improves the accuracy from 88.5 to 88.6. More importantly, the result is further improved to 88.8 with these two transformations stacked together$^{2}$ (see Sec 2.2). With a significance test, we find that stack (back, adv) performs consistently better than other combinations (t-test over 10 runs, $p$ -values $< 0.02$ ). This observation indicates that the stacking operation, especially in the case of back-translation and adversarial training, can produce more diverse augmented examples.
+
+Intuitively, the augmented sample, after two sequential transformations, deviates more from the corresponding training data, and thus tends to be more effective at improving the model's generalization ability. To verify this hypothesis, we further calculate the MMD (Gretton et al., 2012) between the augmented samples and the original training instances. It can be observed that stack (back, adv), stack (back, cut, adv) and stack (back, adv, cut) all produce examples that deviate farthest from the original training instances (see Table 1). However, we conjecture that the latter two may have altered the semantic meanings too much, thus leading to inferior results. In this regard, stack (back, adv) is employed as the data transformation module for all the experiments below.
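
For reference, a biased squared-MMD estimate with an RBF kernel (one standard instantiation of Gretton et al., 2012; the bandwidth `sigma` here is an illustrative choice, not the paper's setting) can be computed as:

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    """RBF kernel matrix between two sample sets of shape (n, d) and (m, d)."""
    diff = a[:, None, :] - b[None, :, :]
    return np.exp(-np.sum(diff * diff, axis=-1) / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased squared-MMD estimate between samples x and y (Gretton et al., 2012)."""
    return rbf(x, x, sigma).mean() + rbf(y, y, sigma).mean() - 2.0 * rbf(x, y, sigma).mean()
```

Identical sample sets give an estimate of zero; larger values indicate that the augmented samples deviate more from the original instances.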
+
+Table 1: Comparison of different transformations on the MNLI-m development set. Abbr: original training instances (ori), back-translation (back), cutoff (cut), mixup (mix), adversarial (adv).
+
+# 3.2 CONTRASTIVE REGULARIZATION DESIGN
+
+In this section, we aim to incorporate the global information among the entire set of original and augmented samples via a contrastive regularization. First, we explore a few hyperparameters for the proposed contrastive objective. Since both the memory bank and the momentum encoder are vital components, we study the impact of different values for both the temperature and the momentum. As shown in Fig. 4a, a temperature of 1.0 combined with a momentum of 0.99 achieves the best empirical result. We then examine the effect of the memory bank size, and observe that a larger memory bank leads to a better capture of the global information and a higher performance boost$^{3}$ (see Fig. 4b).
+
+After carefully choosing the best setting based on the above experiments, we apply the contrastive learning objective to several GLUE datasets. We also implement several contrastive objectives from prior work for comparison, including the MoCo loss (He et al., 2020) and the supervised contrastive (SupCon) loss (Khosla et al., 2020), all implemented with memory banks. Note that we remove the consistency regularization for this experiment to better examine the effect of the contrastive regularization term (i.e., $\alpha = \beta = 0$ , $\lambda \neq 0$ ). As presented in Table 2, our contrastive objective consistently exhibits the largest performance improvement. This observation demonstrates that, for NLU, our data transformation module can be effectively equipped with the contrastive regularization.
+
+
+(a) Momentum encoder
+
+
+(b) Memory bank
+Figure 4: Hyperparameter exploration for the contrastive loss, evaluated on the MNLI-m development set. Note: All models use the RoBERTa-base model as the encoder.
+
+| Method | MNLI-m (Acc) | QNLI (Acc) | SST-2 (Acc) | RTE (Acc) | MRPC (Acc) |
+| --- | --- | --- | --- | --- | --- |
+| RoBERTa-base | 87.6 | 92.8 | 94.8 | 78.7 | 90.2 |
+| + MoCo (He et al., 2020) | 88.2 | 93.3 | 95.1 | 80.8 | 90.9 |
+| + SupCon (Khosla et al., 2020) | 88.1 | 93.2 | 95.2 | 80.5 | 90.2 |
+| + Contrastive (ours) | 88.1 | 93.6 | 95.3 | 82.0 | 91.7 |
+
+# 3.3 GLUE BENCHMARK EVALUATION
+
+With both components of the CoDA framework specifically tailored to natural language understanding applications, we apply it to the RoBERTa-large model (Liu et al., 2019) and compare against several competitive data-augmentation-based and adversarial-training-based approaches on the GLUE benchmark. Specifically, we consider back-translation, cutoff (Shen et al., 2020), FreeLB (Zhu et al., 2020), SMART (Jiang et al., 2020), and R3F (Aghajanyan et al., 2020) as baselines, where the last three all belong to adversarial training. The results are presented in Table 3. Note that back-translation is based on our implementation, where both the cross-entropy and consistency regularization terms are utilized.
+
+Table 2: Comparison among different contrastive objectives on the GLUE development set.
+
+| Method | MNLI-m/mm (Acc) | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) | MRPC (Acc/F1) | CoLA (Mcc) | RTE (Acc) | STS-B (P/S) | Avg |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| RoBERTa-large | 90.2/- | 92.2/- | 94.7 | 96.4 | -/90.9 | 68.0 | 86.6 | 92.4/- | 88.9 |
+| Back-Trans | 91.1/90.4 | 92.0/- | 95.3 | 97.1 | 90.9/93.5 | 69.4 | 91.7 | 92.8/92.6 | 90.4 |
+| Cutoff | 91.1/- | 92.4/- | 95.3 | 96.9 | 91.4/93.8 | 71.5 | 91.0 | 92.8/- | 90.6 |
+| FreeLB | 90.6/- | 92.6/- | 95.0 | 96.7 | 91.4/- | 71.1 | 88.1 | 92.7/- | - |
+| SMART | 91.1/91.3 | 92.4/89.8 | 95.6 | 96.9 | 89.2/92.1 | 70.6 | 92.0 | 92.8/92.6 | 90.4 |
+| R3F | 91.1/91.3 | 92.4/89.9 | 95.3 | 97.0 | 91.6/- | 71.2 | 88.5 | - | - |
+| CoDA | 91.3/90.8 | 92.5/89.9 | 95.3 | 97.4 | 91.9/94.0 | 72.6 | 92.4 | 93.0/92.7 | 91.1 |
+
+Table 3: Main results of single models on the GLUE development set. Note: The best result on each task is in bold and “-” denotes the missing results. The average score is calculated based on the same setting as RoBERTa.
+
+We find that CoDA brings significant gains to the RoBERTa-large model, improving the average score on the GLUE dev set from 88.9 to 91.1. More importantly, CoDA consistently outperforms these strong baselines (as indicated by its higher average score), demonstrating that our algorithm produces informative, high-quality augmented samples and leverages them effectively. Concretely, on datasets with relatively large numbers of training instances ( $>100\mathrm{K}$ ), i.e., MNLI, QQP and QNLI, the different approaches show similar gains over the RoBERTa-large model. However, on smaller tasks (SST-2, MRPC, CoLA, RTE, and STS-B), CoDA beats the other data augmentation and adversarial-training-based methods by a wide margin. We attribute this observation to the fact that synthetically produced examples are more helpful when task-specific data is limited. Thus, when smaller datasets are employed for fine-tuning large-scale language models, the superiority of the proposed approach manifests to a larger extent.
+
+# 3.4 ADDITIONAL EXPERIMENTS AND ANALYSIS
+
+Low-resource Setting To verify the advantages of CoDA when a smaller amount of task-specific data is available, we further conduct low-resource experiments with the MNLI
+
+
+Figure 5: Low-resource setting experiments on the MNLI (left) and QNLI (right) dev sets.
+
+
+
+and QNLI datasets. Concretely, different proportions of training data are sampled and utilized for training. We apply CoDA to RoBERTa-base and compare it with back-translation and adversarial training across various training set sizes. The corresponding results are presented in Fig. 5. We observe that back-translation and adversarial training exhibit similar performance across different proportions. More importantly, CoDA demonstrates stronger results consistently, further highlighting its effectiveness with limited training data.
+
+The Effectiveness of Contrastive Objective To investigate the general applicability of the proposed contrastive regularization objective, we further apply it to different data augmentation methods. The RoBERTa-base model and QNLI dataset are leveraged for this set of experiments, and the results are shown in Fig. 6. We observe that the contrastive learning objective boosts the empirical performance of the resulting algorithm regardless of the data augmentation approaches it is applied to. This further validates our assumption that considering the global information among the embeddings of all examples is beneficial for leveraging augmented samples more effectively.
+
+
+Figure 6: Evaluation of the proposed contrastive objective when applied to different data augmentation approaches.
+
+# 4 RELATED WORK
+
+Data Augmentation in NLP Various data augmentation approaches have been proposed for text data, such as back-translation (Sennrich et al., 2016; Edunov et al., 2018; Xie et al., 2019), C-BERT word replacement (Wu et al., 2019), mixup (Guo et al., 2019; Chen et al., 2020a), and cutoff (Shen et al., 2020). Broadly speaking, adversarial training (Zhu et al., 2020; Jiang et al., 2020) also synthesizes additional examples via perturbations at the word embedding layer. Although effective, how these data augmentation transformations may be combined to obtain further improvement has rarely been explored. This could be attributed to the fact that a sentence's semantic meaning is quite sensitive to small perturbations. A consistency-regularized loss (Bachman et al., 2014; Rasmus et al., 2015; Laine & Aila, 2017; Tarvainen & Valpola, 2017) is typically employed as the training objective, which ignores the global information within the entire dataset.
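A consistency objective of this kind penalizes divergence between the model's predictive distributions on an input and its augmentation. A minimal numpy sketch, assuming a symmetrized-KL form (the exact divergence varies across the cited works):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions (eps avoids log 0)."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)))

def consistency_loss(p_orig, p_aug):
    """Symmetrized KL between predictions on x and on its augmentation x'."""
    return 0.5 * (kl(p_orig, p_aug) + kl(p_aug, p_orig))
```

The loss is zero when the two predictions agree and grows as they diverge; note it compares each pair in isolation, which is exactly the "no global information" limitation discussed above.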
+
+Contrastive Learning Contrastive methods learn representations by contrasting positive and negative examples, and have demonstrated impressive empirical success in computer vision tasks (Hénaff et al., 2019; He et al., 2020). In the unsupervised setting, contrastive learning approaches learn representations by maximizing the mutual information between local and global hidden representations (Hjelm et al., 2019; Oord et al., 2018; Hénaff et al., 2019). Contrastive learning can also be leveraged to learn invariant representations by encouraging consensus between augmented samples from the same input (Bachman et al., 2019; Tian et al., 2019). He et al. (2020) and Wu et al. (2018) propose to utilize a memory bank to enable a much larger number of negative samples, which has also been shown to benefit the transferability of learned representations (Khosla et al., 2020). Recently, contrastive learning has also been employed to improve language model pre-training (Iter et al., 2020).
+
+# 5 CONCLUSION
+
+In this paper, we proposed CoDA, a Contrast-enhanced and Diversity-promoting data Augmentation framework. Through extensive experiments, we found that stacking adversarial training on top of a back-translation module gives rise to more diverse and informative augmented samples. In addition,
+
+we introduced a specially designed contrastive loss to incorporate these examples into training in a principled manner. Experiments on the GLUE benchmark showed that CoDA consistently improves over several competitive data augmentation and adversarial training baselines. Moreover, we observed that the proposed contrastive objective can be leveraged to improve other data augmentation approaches as well, highlighting the wide applicability of the CoDA framework.
+
+# REFERENCES
+
+Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta. Better fine-tuning by reducing representational collapse. arXiv preprint arXiv:2008.03156, 2020.
+Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In Advances in neural information processing systems, pp. 3365-3373, 2014.
+Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems, pp. 15535-15545, 2019.
+David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems, pp. 5049-5059, 2019.
+Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
+Jiaao Chen, Zichao Yang, and Diyi Yang. MixText: Linguistically-informed interpolation of hidden space for semi-supervised text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 2147-2157, Online, July 2020a. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.194. URL https://www.aclweb.org/anthology/2020.acl-main.194.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020b.
+Yong Cheng, Lu Jiang, Wolfgang Macherey, and Jacob Eisenstein. Advaug: Robust adversarial augmentation for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, 2020.
+E. Cubuk, Barret Zoph, Dandelion Mané, V. Vasudevan, and Quoc V. Le. Autoaugment: Learning augmentation policies from data. ArXiv, abs/1805.09501, 2018.
+Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702-703, 2020.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, 2019.
+Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
+Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018.
+Raphael Gontijo-Lopes, Sylvia J Smullin, Ekin D Cubuk, and Ethan Dyer. Affinity and diversity: Quantifying mechanisms of data augmentation. arXiv preprint arXiv:2002.08973, 2020.
+
+Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, 2015.
+Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723-773, 2012.
+Hongyu Guo, Yongyi Mao, and Richong Zhang. Augmenting data with mixup for sentence classification: An empirical study. arXiv preprint arXiv:1905.08941, 2019.
+Yaru Hao, Li Dong, Furu Wei, and Ke Xu. Visualizing and understanding the effectiveness of bert. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP, 2019.
+Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9729-9738, 2020.
+Olivier J Hénaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, SM Eslami, and Aaron van den Oord. Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272, 2019.
+Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. In 8th International Conference on Learning Representations, ICLR, 2020.
+R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In 7th International Conference on Learning Representations, ICLR, 2019.
+Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. Iterative back-translation for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pp. 18-24, 2018.
+Zhiting Hu, Bowen Tan, Russ R Salakhutdinov, Tom M Mitchell, and Eric P Xing. Learning data manipulation for augmentation and weighting. In Advances in Neural Information Processing Systems, pp. 15764-15775, 2019.
+Dan Iter, Kelvin Guu, Larry Lansing, and Dan Jurafsky. Pretraining with contrastive sentence objectives improves discourse performance of language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL, 2020.
+Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL, 2020.
+Harini Kannan, Alexey Kurakin, and Ian Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018.
+Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. arXiv preprint arXiv:2004.11362, 2020.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, 2014.
+Ashutosh Kumar, Satwik Bhattamishra, Manik Bhandari, and Partha Talukdar. Submodular optimization-based diverse paraphrasing and its effectiveness in data augmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019.
+
+Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017.
+Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. Adversarial training for large neural language models. arXiv preprint arXiv:2004.08994, 2020.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
+Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979-1993, 2018.
+Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.
+Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in neural information processing systems, pp. 3546-3554, 2015.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL, 2016.
+Dinghan Shen, Mingzhi Zheng, Yelong Shen, Yanru Qu, and Weizhu Chen. A simple but tough-to-beat data augmentation approach for natural language understanding and generation, 2020.
+Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. Wasserstein distance guided representation learning for domain adaptation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), 2018.
+Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. arXiv preprint arXiv:2001.07685, 2020.
+Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. How to fine-tune bert for text classification? In China National Conference on Chinese Computational Linguistics, pp. 194-206. Springer, 2019.
+Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in neural information processing systems, pp. 1195-1204, 2017.
+Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. EMNLP 2018, pp. 353, 2018.
+Jason Wei and Kai Zou. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP, 2019.
+Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. Conditional bert contextual augmentation. In International Conference on Computational Science, pp. 84-95. Springer, 2019.
+
+Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via nonparametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3733-3742, 2018.
+Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848, 2019.
+Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. Qanet: Combining local convolution with global self-attention for reading comprehension. In 6th International Conference on Learning Representations, ICLR 2018, 2018.
+Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
+Stephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow. Improving the robustness of deep neural networks via stability training. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4480-4488, 2016.
+Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Thomas Goldstein, and Jingjing Liu. Freelb: Enhanced adversarial training for language understanding. In 8th International Conference on Learning Representations, ICLR, 2020.
+
+# A DATA AUGMENTATION DETAILS
+
+We select the following representative data augmentation operations as basic building blocks of our data augmentation module. We denote $\boldsymbol{x}_i = [x_{i,1},\dots,x_{i,l}]$ as the input text sequence, and $\boldsymbol{e}_i = [e_{i,1},\dots,e_{i,l}]$ as corresponding embedding vectors.
+
+- Back-translation is widely applied in machine translation (MT) (Sennrich et al., 2016; Hoang et al., 2018; Edunov et al., 2018), and was recently introduced to text classification (Xie et al., 2019). Back-Trans uses two MT models to translate the input example into a pivot language and then back, $\boldsymbol{x}_i \rightarrow \text{Pivot Language} \rightarrow \boldsymbol{x}_i'$ .
+- C-BERT word replacement (Wu et al., 2019) is a representative of the word-replacement augmentation family. C-BERT pretrains a conditional BERT model to learn the contextualized distribution $P(x_{i,j} \mid [x_{i,1}\dots x_{i,j-1}[\mathrm{MASK}]x_{i,j+1}\dots x_{i,l}], y_i)$ , conditioning on the class label. The method then randomly substitutes words of $\boldsymbol{x}_i$ to obtain $\boldsymbol{x}_i' = [x_{i,1}\dots x_{i,j}'\dots x_{i,l}]$ $^4$ .
+- Cutoff (DeVries & Taylor, 2017) randomly drops units in a continuous span of the input; Shen et al. (2020) adapt this method to text embeddings. For input embeddings $e_i$ , the method randomly sets a continuous span of elements to zero, $e_i' = [e_{i,1} \ldots e_{i,j-1}, 0 \ldots 0, e_{i,j+w} \ldots e_{i,l}]$ , where the window size $w \propto l$ and the start position $j \in [1, l-w]$ is randomly selected. For transformer encoders that use position embeddings, we also set the input mask to 0 at the corresponding positions.
+- Mixup (Zhang et al., 2017) interpolates two images as well as their labels; Guo et al. (2019) adapt this method to text. For two input embeddings $(e_i, e_j)$ , mixup interpolates the embedding vectors, $e_i' = ae_i + (1 - a)e_j$ , where $a$ is sampled from a Beta distribution. The labels are interpolated accordingly for the augmented sample, $y_i' = ay_i + (1 - a)y_j$ .
+- Adversarial training generates adversarial examples for the input embeddings; in short, $e_i' = \arg \max_{\| e_i - e_i' \| \leq 1} \mathcal{L}(f(e_i'), y_i)$ . We mainly follow the implementation of Zhu et al. (2020). In addition, when computing the adversarial example $e_i'$ , the dropout variables are recorded and reused later when encoding $e_i'$ .
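Two of the embedding-level operations above admit very short sketches. A minimal numpy version (shapes and the Beta-sampled mixing coefficient follow the descriptions above; the function names are ours):

```python
import numpy as np

def cutoff(e, start, width):
    """Zero out a contiguous span of token embeddings (cutoff on embeddings)."""
    e = e.copy()
    e[start:start + width] = 0.0
    return e

def mixup(e_i, e_j, y_i, y_j, alpha=1.0, rng=None):
    """Interpolate two embedding sequences and their (one-hot) labels with
    a coefficient a ~ Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng()
    a = rng.beta(alpha, alpha)
    return a * e_i + (1 - a) * e_j, a * y_i + (1 - a) * y_j
```

For cutoff, a transformer encoder would additionally zero the attention mask over the dropped span, as noted above.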
+
+Maximum mean discrepancy (MMD) (Gretton et al., 2012) is a widely used discrepancy measure between two distributions. We adopt the multi-kernel MMD implementation based on Shen et al. $(2018)^{5}$ to quantify the distance between the data distributions before and after DA transformations.
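As an illustration, a biased multi-kernel MMD² estimate with a mixture of RBF kernels can be sketched as follows. This is a simplified stand-in for the Shen et al. (2018) implementation, with arbitrary kernel bandwidths:

```python
import numpy as np

def mmd_rbf(X, Y, gammas=(0.5, 1.0, 2.0)):
    """Biased MMD^2 estimate between samples X and Y using an average of
    RBF kernels exp(-gamma * ||a - b||^2) over several bandwidths."""
    def k(A, B):
        # Pairwise squared Euclidean distances, shape (len(A), len(B)).
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sum(np.exp(-g * d2) for g in gammas) / len(gammas)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

The estimate is zero for identical sample sets and grows with the distance between the two distributions, matching its use in Table 1 for ranking transformation stacks.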
+
+# B DATASET DETAILS
+
+The datasets and statistics are summarized in Table 4.
+
+| Corpus | Task | Sentence Pair | #Train | #Dev | #Test | #Class | Metrics |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| MNLI | NLI | ✓ | 393k | 20k | 20k | 3 | Accuracy |
+| QQP | Paraphrase | ✓ | 364k | 40k | 391k | 2 | Accuracy/F1 |
+| QNLI | QA/NLI | ✓ | 108k | 5.7k | 5.7k | 2 | Accuracy |
+| SST | Sentiment | × | 67k | 872 | 1.8k | 2 | Accuracy |
+| MRPC | Paraphrase | ✓ | 3.7k | 408 | 1.7k | 2 | Accuracy/F1 |
+| CoLA | Acceptability | × | 8.5k | 1k | 1k | 2 | Matthews corr |
+| RTE | NLI | ✓ | 2.5k | 276 | 3k | 2 | Accuracy |
+| STS-B | Similarity | ✓ | 7k | 1.5k | 1.4k | - | Pearson/Spearman corr |
+
+Table 4: GLUE benchmark summary.
+
+# C IMPLEMENTATION DETAILS
+
+Our implementation is based on RoBERTa (Liu et al., 2019). We use Adam (Kingma & Ba, 2014) as our optimizer. We follow the hyperparameter study of RoBERTa and set the following parameters as defaults: batch size (32), learning rate (1e-5), epochs (5), warmup ratio (0.06), and weight decay (0.1); other parameters are kept unchanged from RoBERTa. For Back-Trans, we use the en-de single models trained on WMT19 and released in FairSeq. More specifically, we use beam search (beam size = 5) and keep only the top-1 hypothesis. We slightly tune the adversarial-training parameters on MNLI based on FreeLB and fix them on the other datasets, since adversarial training is not our focus. The contrastive regularization is implemented based on MoCo. In the GLUE evaluation, we mainly tune the weights of the 3 regularization terms, $\alpha \in [0,1],\beta \in [0,3],\lambda \in [0,0.03]$ (Eq. 6, 11). Besides, for smaller tasks (MRPC, CoLA, RTE, STS-B), we use the best-performing MNLI model to initialize their parameters$^6$.
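The warmup ratio above implies a schedule over training steps. The shape after warmup is not stated here, so the linear decay below is an assumption (the common RoBERTa-style choice); the sketch only illustrates how a warmup ratio of 0.06 maps to steps:

```python
def linear_warmup_lr(step, total_steps, peak_lr=1e-5, warmup_ratio=0.06):
    """Linearly warm the learning rate up to peak_lr over the first
    warmup_ratio fraction of steps, then (assumed) linearly decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * (total_steps - step) / max(1, total_steps - warmup_steps)
```

For 1000 total steps, the peak of 1e-5 is reached at step 60 (= 0.06 × 1000) and the rate returns to 0 at the final step.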
\ No newline at end of file
diff --git a/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/images.zip b/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..8c5c1ea721ddf20df37cf1de3f4759a7bfc632fc
--- /dev/null
+++ b/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07885c6513751267baf098c3618a025666e3ba8560e2bc80069d55ae67333842
+size 399843
diff --git a/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/layout.json b/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..3c917d39215580f2949dbfe2eb2f4de5b867578c
--- /dev/null
+++ b/codacontrastenhancedanddiversitypromotingdataaugmentationfornaturallanguageunderstanding/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14c699954010089d6009f178268285d5f4135f1cfe3f72bca49be4ddb0f39e3d
+size 412027
diff --git a/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/b44d8373-674b-4fda-b8c5-456f69a511af_content_list.json b/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/b44d8373-674b-4fda-b8c5-456f69a511af_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..802066476075616a1b6ef266fb86b9fc1191ef09
--- /dev/null
+++ b/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/b44d8373-674b-4fda-b8c5-456f69a511af_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:20fd1f9400b4c3efd48a1feeeab4a06fd6380542283a9345f76170d73f73a5b6
+size 154116
diff --git a/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/b44d8373-674b-4fda-b8c5-456f69a511af_model.json b/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/b44d8373-674b-4fda-b8c5-456f69a511af_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7eda0c79f9491d1d308a821833e92a523a22fec5
--- /dev/null
+++ b/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/b44d8373-674b-4fda-b8c5-456f69a511af_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:870b9d57f21ee219985c42a4c92f956f0abfd4d6d8718da85fc668ec8897ab35
+size 176507
diff --git a/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/b44d8373-674b-4fda-b8c5-456f69a511af_origin.pdf b/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/b44d8373-674b-4fda-b8c5-456f69a511af_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8f07ab280ce17a9f17f1d1794c6e384fd4485499
--- /dev/null
+++ b/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/b44d8373-674b-4fda-b8c5-456f69a511af_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1eac4e58a243302c5dee019c018556af669faa8e63057bf4f1e3671f0b03849
+size 674722
diff --git a/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/full.md b/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..12b4d532ad886800f6e0db6d144fff0fdbb881b4
--- /dev/null
+++ b/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/full.md
@@ -0,0 +1,697 @@
+# COLLECTIVE ROBUSTNESS CERTIFICATES: EXPLOITING INTERDEPENDENCE IN GRAPH NEURAL NETWORKS
+
+Jan Schuchardt, Aleksandar Bojchevski, Johannes Gasteiger & Stephan Günnemann
+
+Technical University of Munich, Germany
+
+{jan.schuchardt,bojchevs,j.gasteiger,guennemann}@in.tum.de
+
+# ABSTRACT
+
+In tasks like node classification, image segmentation, and named-entity recognition we have a classifier that simultaneously outputs multiple predictions (a vector of labels) based on a single input, i.e. a single graph, image, or document respectively. Existing adversarial robustness certificates consider each prediction independently and are thus overly pessimistic for such tasks. They implicitly assume that an adversary can use different perturbed inputs to attack different predictions, ignoring the fact that we have a single shared input. We propose the first collective robustness certificate which computes the number of predictions that are simultaneously guaranteed to remain stable under perturbation, i.e. cannot be attacked. We focus on Graph Neural Networks and leverage their locality property - perturbations only affect the predictions in a close neighborhood - to fuse multiple single-node certificates into a drastically stronger collective certificate. For example, on the Citeseer dataset our collective certificate for node classification increases the average number of certifiable feature perturbations from 7 to 351.
+
+# 1 INTRODUCTION
+
+Most classifiers are vulnerable to adversarial attacks (Akhtar & Mian, 2018; Hao-Chen et al., 2020). Slight perturbations of the data are often sufficient to manipulate their predictions. Even in scenarios where attackers are not present it is critical to ensure that models are robust since data can be noisy, incomplete, or anomalous. We study classifiers that collectively output many predictions based on a single input. This includes node classification, link prediction, molecular property prediction, image segmentation, part-of-speech tagging, named-entity recognition, and many other tasks.
+
+Various techniques have been proposed to improve the adversarial robustness of such models. One example is adversarial training (Goodfellow et al., 2015), which has been applied to part-of-speech tagging (Han et al., 2020), semantic segmentation (Xu et al., 2020b) and node classification (Feng et al., 2019). Graph-related tasks in particular have spawned a rich assortment of techniques. These include Bayesian models (Feng et al., 2020), data-augmentation methods (Entezari et al., 2020) and various robust network architectures (Zhu et al., 2019; Geisler et al., 2020). There are also robust loss functions which either explicitly model an adversary trying to cause misclassifications (Zhou & Vorobeychik, 2020) or use regularization terms derived from robustness certificates (Zügner & Günnemann, 2019). Other methods try to detect adversarially perturbed graphs (Zhang et al., 2019; Xu et al., 2020a) or directly correct perturbations using generative models (Zhang & Ma, 2020).
+
+However, none of these techniques provide guarantees and they can only be evaluated based on their ability to defend against known adversarial attacks. Once a technique is established, it may subsequently be defeated using novel attacks (Carlini & Wagner, 2017). We are therefore interested in deriving adversarial robustness certificates which provably guarantee that a model is robust.
+
+In this work we focus on node classification. Here, the goal is to assign a label to each node in a single (attributed) graph. Node classification can be the target of either local or global adversarial attacks. Local attacks, such as Nettack (Zügner et al., 2018; Zügner et al., 2020), attempt to alter the
+
+
+Figure 1: Previous certificates consider each node independently. Most nodes cannot be certified since the adversary can choose a different perturbed graph per node (left). This is impossible in practice due to mutually exclusive perturbations. Our collective certificate enforces a single perturbed graph (center). It aggregates the amount of perturbation within each receptive field and then evaluates a single-node certificate to determine whether the corresponding prediction is robust (right).
+
+prediction of a particular node in the graph. Global attacks, as proposed by Zügner & Günnemann (2019), attempt to alter the predictions of many nodes at once. With global attacks, the attacker is constrained by the fact that all predictions are based on a single shared input. To successfully attack some nodes the attacker might need to insert certain edges in the graph, while for another set of nodes the same edges must not be inserted. With such mutually exclusive adversarial perturbations, the attacker is forced to make a choice and can attack only one subset of nodes (see Fig. 1).
+
+Existing certificates (Zügner & Günnemann, 2019; Bojchevski & Günnemann, 2019; Bojchevski et al., 2020) are designed for local attacks, i.e. to certify the predictions of individual nodes. So far, there is no dedicated certificate for global attacks, i.e. to certify the predictions of many nodes at once. A naïve certificate for global attacks can be constructed from existing single-node certificates as follows: One simply certifies each node's prediction independently and counts how many are guaranteed to be robust. This, however, implicitly assumes that an adversary can use different perturbed inputs to attack different predictions, ignoring the fact that we have a single shared input.
+
+We propose a collective robustness certificate for global attacks that directly computes the number of simultaneously certifiable nodes for which we can guarantee that their predictions will not change. This certificate explicitly models that the attacker is limited to a single shared input and thus accounts for the resulting mutual exclusivity of certain attacks. Specifically, we fuse multiple single-node certificates, which we refer to as base certificates, into a drastically (and provably) stronger collective one. Our approach is independent of how the base certificates are derived, and any improvements to the base certificates directly translate to improvements to the collective certificate.
+
+The key property which we exploit is locality. For example, in a $k$ -layer message-passing graph neural network (Gilmer et al., 2017) the prediction for any given node depends only on the nodes in its $k$ -hop neighborhood. Similarly, the predicted segment for any pixel depends only on the pixels in its receptive field, and the named entity assigned to any word only depends on the words in its surroundings.
+
+For classifiers that satisfy locality, perturbations to one part of the graph do not affect all nodes. Adversaries are thus faced with a budget allocation problem: It might be possible to attack different subsets of nodes via perturbations to different subgraphs, but performing all perturbations at once could exceed their adversarial budget. The naive approach discussed above ignores this, overestimating how many nodes can be attacked. We design a simple (mixed-integer) linear program (LP) that enforces a single perturbed graph. It leverages locality by only considering the amount of perturbation within each receptive field when evaluating the single-node certificates (see Fig. 1).
+
+We evaluate our approach on different datasets and with different base certificates. We show that incorporating locality alone is sufficient to obtain significantly better results. Our proposed certificate:
+
+- Is the first collective certificate that explicitly models simultaneous attacks on multiple outputs.
+- Fuses individual certificates into a provably stronger certificate by explicitly modeling locality.
+- Is the first node classification certificate that can model not only global and local budgets, but also the number of adversary-controlled nodes, regardless of whether the base certificates support this.
+
+# 2 PRELIMINARIES
+
+Data and models. We define our unperturbed data as an attributed graph $\mathcal{G} = (\pmb {X},\pmb {A})\in \mathbb{G}$ with $\mathbb{G} = \{0,1\}^{N\times D}\times \{0,1\}^{N\times N}$ , consisting of $N$ $D$ -dimensional feature vectors and a directed $N\times N$ adjacency matrix. Each vertex is assigned one out of $C$ classes by a multi-output classifier $f:\mathbb{G}\mapsto \{1,\ldots ,C\} ^N$ . In the following, $f_{n}(\mathcal{G}) = f_{n}(\pmb {X},\pmb {A}) = f_{n}$ refers to the prediction for node $n$ .
+
+Collective threat model. Unlike previous certificates, we model an adversary that aims to change multiple predictions at once. Let $\mathbb{B}_{\mathcal{G}} \subseteq \mathbb{G}$ be a set of admissible perturbed graphs. Given a clean graph $\mathcal{G}$ , the adversary tries to find a $\mathcal{G}' \in \mathbb{B}_{\mathcal{G}}$ that maximizes the number of misclassified nodes, i.e., $\sum_{n \in \mathbb{T}} \mathbf{1}_{f_n(\mathcal{G}) \neq f_n(\mathcal{G}')}$ , for some set of target nodes $\mathbb{T} \subseteq \{1, \dots, N\}$ .
+
+Following prior work (Zügner & Günnemann, 2019), we constrain the set of admissible perturbed graphs $\mathbb{B}_{\mathcal{G}}$ through global and (optionally) local constraints on the number of changed bits. Our global constraints are parameterized by scalars $r_{X_{\mathrm{add}}}, r_{X_{\mathrm{del}}}, r_{A_{\mathrm{add}}}, r_{A_{\mathrm{del}}} \in \mathbb{N}_0$ . They are an upper limit on how many bits can be added ( $0 \to 1$ ) or deleted ( $1 \to 0$ ) when perturbing $X$ and $A$ . Our local constraints are parameterized by vectors $r_{X_{\mathrm{add,loc}}}, r_{X_{\mathrm{del,loc}}}, r_{A_{\mathrm{add,loc}}}, r_{A_{\mathrm{del,loc}}} \in \mathbb{N}_0^N$ . They are an upper limit on how many bits can be added or deleted per row of $X$ and $A$ , i.e. how much the attributes of a particular node can change and how many incident edges can be perturbed.
+
+Often, it is reasonable to assume that no adversary has direct control over the entire graph. Instead, a realistic attacker should only be able to perturb a small (adaptively chosen) subset of nodes. To model this, we introduce an additional parameter $\sigma \in \mathbb{N}$ . For all $(X', A') \in \mathbb{B}_{\mathcal{G}}$ , there must be a set of node indices $\mathbb{S} \subseteq \{1, \dots, N\}$ with $|\mathbb{S}| \leq \sigma$ such that for all $d \in \{1, \dots, D\}$ and $n, m \in \{1, \dots, N\}$ :
+
+$$
+\left(X _ {n, d} ^ {\prime} \neq X _ {n, d} \Rightarrow n \in \mathbb {S}\right) \wedge \left(A _ {n, m} ^ {\prime} \neq A _ {n, m} \Rightarrow n \in \mathbb {S} \vee m \in \mathbb {S}\right). \tag {1}
+$$
+
+The set $\mathbb{S}$ is not fixed, but chosen by the adversary. The resulting set $\mathbb{B}_{\mathcal{G}}$ is formally defined in Section B. If the global budget parameters are variable and the remaining parameters are clear from the context, we treat $\mathbb{B}_{\mathcal{G}}$ as a function $\mathbb{B}_{\mathcal{G}}: \mathbb{N}_0^4 \mapsto \mathcal{P}(\mathbb{G})$ that, given global budget parameters, returns the set of all perturbed graphs fulfilling the constraints.
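+
+To make the budget bookkeeping concrete, the following Python sketch counts the four perturbation types and checks the global and (optional) per-row local budget constraints. All names are our own, and the $\sigma$ -constraint of Eq. 1 is omitted, since verifying it exactly amounts to a bounded vertex-cover check on the perturbed entries; this is an illustration of the threat model, not code from the paper.
+
+```python
+import numpy as np
+
+def perturbation_counts(X, A, X_p, A_p):
+    """Count added/deleted bits in attributes and adjacency (global view)."""
+    return (
+        int(np.sum((X == 0) & (X_p == 1))),  # attribute additions
+        int(np.sum((X == 1) & (X_p == 0))),  # attribute deletions
+        int(np.sum((A == 0) & (A_p == 1))),  # edge additions
+        int(np.sum((A == 1) & (A_p == 0))),  # edge deletions
+    )
+
+def is_admissible(X, A, X_p, A_p, r_glob, r_loc_X=None):
+    """Check global budgets r_glob = (r_X_add, r_X_del, r_A_add, r_A_del)
+    and, optionally, per-row attribute budgets r_loc_X."""
+    counts = perturbation_counts(X, A, X_p, A_p)
+    if any(c > r for c, r in zip(counts, r_glob)):
+        return False
+    if r_loc_X is not None:  # local budget: changed bits per attribute row
+        per_row = (X != X_p).sum(axis=1)
+        if np.any(per_row > r_loc_X):
+            return False
+    return True
+```
+
+A perturbed graph is then admissible if and only if it passes these checks for the chosen budget parameters.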
+
+Local predictions. Our certificate exploits the locality of predictions, i.e. the fact that predictions are only based on a subset of the input data. We characterize the receptive field of $f_{n}$ via an indicator vector $\boldsymbol{\psi}^{(n)} \in \{0,1\}^N$ corresponding to rows in attribute matrix $\mathbf{X}$ and an indicator matrix $\Psi^{(n)} \in \{0,1\}^{N\times N}$ corresponding to entries in adjacency matrix $\mathbf{A}$ . For all $(X',A'),(X'',A'') \in \mathbb{B}_{\mathcal{G}}$ :
+
+$$
+\sum_ {m = 1} ^ {N} \sum_ {d = 1} ^ {D} \psi_ {m} ^ {(n)} \mathbf {1} _ {X _ {m, d} ^ {\prime} \neq X _ {m, d} ^ {\prime \prime}} + \sum_ {i = 1} ^ {N} \sum_ {j = 1} ^ {N} \Psi_ {i, j} ^ {(n)} \mathbf {1} _ {A _ {i, j} ^ {\prime} \neq A _ {i, j} ^ {\prime \prime}} = 0 \Rightarrow f _ {n} \left(\boldsymbol {X} ^ {\prime}, \boldsymbol {A} ^ {\prime}\right) = f _ {n} \left(\boldsymbol {X} ^ {\prime \prime}, \boldsymbol {A} ^ {\prime \prime}\right). \tag {2}
+$$
+
+Eq. 2 enforces that as long as all nodes and edges for which $\psi_m^{(n)} = 1$ or $\Psi_{i,j}^{(n)} = 1$ remain unperturbed, the prediction $f_{n}$ does not change. Put differently, changes to the rest of the data do not affect the prediction. Note that the adversary can alter receptive fields, e.g. add edges to enlarge them. To capture all potential alterations, $\psi^{(n)}$ and $\Psi^{(n)}$ correspond to all data points that influence $f_{n}$ under some graph in $\mathbb{B}_{\mathcal{G}}$ , i.e. the union of all receptive fields achievable under the threat model.
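+
+As a concrete example of such indicators, consider a $k$ -layer message-passing network on an undirected graph with edge additions disallowed, so that the clean graph's $k$ -hop neighbourhoods already cover the union of all achievable receptive fields. A hedged sketch (our own naming, not code from the paper):
+
+```python
+import numpy as np
+
+def receptive_field_indicators(A, k):
+    """psi^(n) indicator vectors for a k-layer message-passing GNN on an
+    undirected graph: psi^(n)_m = 1 iff node m is within k hops of n.
+    Assumes edge additions are disallowed, so the clean graph's k-hop
+    neighbourhoods already cover all achievable receptive fields."""
+    N = A.shape[0]
+    step = (A.astype(bool) | np.eye(N, dtype=bool)).astype(int)
+    reach = np.eye(N, dtype=int)
+    for _ in range(k):
+        reach = (reach @ step > 0).astype(int)  # one more hop per layer
+    return reach  # row n is psi^(n)
+```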
+
+# 3 COMPACT REPRESENTATION OF BASE CERTIFICATES
+
+Before deriving our collective certificate, we define a representation that allows us to efficiently evaluate base certificates. A base certificate is any procedure that can provably guarantee that the prediction $f_{n}$ for a specific node $n$ cannot be changed by any perturbed graph in an admissible set, such as sparsity-aware smoothing (Bojchevski et al., 2020). As we shall see in the next section, our collective certificate requires evaluating base certificates for varying adversarial budgets within $\mathbb{L} = [r_{\mathbf{X}_{\mathrm{add}}}] \times [r_{\mathbf{X}_{\mathrm{del}}}] \times [r_{\mathbf{A}_{\mathrm{add}}}] \times [r_{\mathbf{A}_{\mathrm{del}}}]$ (with $[k] = \{0, \dots, k\}$ ), the set of vectors that do not exceed the collective global budget.
+
+A base certificate implicitly partitions $\mathbb{L}$ into a set of budgets $\mathbb{K}^{(n)} \subseteq \mathbb{L}$ for which the prediction $f_{n}$ is certifiably robust and its complement $\overline{\mathbb{K}^{(n)}} = \mathbb{L} \setminus \mathbb{K}^{(n)}$ with
+
+$$
+\mathbb {K} ^ {(n)} \subseteq \left\{\boldsymbol {\rho} \in \mathbb {N} _ {0} ^ {4} \mid \boldsymbol {\rho} \in \mathbb {L} \wedge \forall (\boldsymbol {X} ^ {\prime}, \boldsymbol {A} ^ {\prime}) \in \mathbb {B} _ {\mathcal {G}} (\boldsymbol {\rho}): f _ {n} (\boldsymbol {X}, \boldsymbol {A}) = f _ {n} (\boldsymbol {X} ^ {\prime}, \boldsymbol {A} ^ {\prime}) \right\}. \tag {3}
+$$
+
+Note the subset relation. The set $\mathbb{K}^{(n)}$ does not have to contain all budgets for which $f_{n}$ is robust. Conversely, a certain budget vector $\pmb{\rho}$ not being part of $\mathbb{K}^{(n)}$ does not necessarily mean that $f_{n}$ can be attacked under threat model $\mathbb{B}_{\mathcal{G}}(\pmb{\rho})$ - its robustness is merely unknown. We now make the following natural assumption about base certificates: If a classifier is certifiably robust to perturbations with a large global budget, it should also be certifiably robust to perturbations with a smaller global budget.
+
+$$
+\forall \boldsymbol {\rho} \in \mathbb {K} ^ {(n)}, \boldsymbol {\rho} ^ {\prime} \in \mathbb {L}: [ \forall d \in \{1, 2, 3, 4 \}: \rho_ {d} ^ {\prime} \leq \rho_ {d} ] \Longrightarrow \left[ \boldsymbol {\rho} ^ {\prime} \in \mathbb {K} ^ {(n)} \right]. \tag {4}
+$$
+
+From a geometric point of view, Eq. 4 means that the budgets $\mathbb{K}^{(n)}$ for which the prediction $f_{n}$ is certifiably robust form a single enclosed volume around $(0\quad 0\quad 0\quad 0)^T$ within the larger volume $\mathbb{L}$ . Determining whether a classifier is robust to perturbations in $\mathbb{B}_{\mathcal{G}}(\pmb {\rho})$ is equivalent to determining which side of the surface enclosing the volume $\mathbb{K}^{(n)}$ the budget vector $\pmb{\rho}$ lies on. This can be done by evaluating linear inequalities, as shown in the following.
+
+First, let us assume that all but one of the budgets are zero, e.g. $\mathbb{L} = [r_{\mathbf{X}_{\mathrm{add}}}] \times [0] \times [0] \times [0]$ , with $r_{\mathbf{X}_{\mathrm{add}}} > 0$ . Due to Eq. 4 there must be a unique value $p_n \in \mathbb{N}_0$ (the smallest uncertifiable budget) with $\forall \rho \in \mathbb{L}: \rho \in \overline{\mathbb{K}^{(n)}} \iff \rho_1 \geq p_n$ . Evaluating the base certificate can thus be performed by evaluating a single inequality. This approach can be generalized to arbitrary types of perturbations. Instead of using a single scalar $p_n$ , we characterize the volume of budgets $\mathbb{K}^{(n)}$ via the pareto front of points on its enclosing surface:
+
+$$
+\mathbb {P} ^ {(n)} = \left\{\boldsymbol {\rho} \in \overline {{\mathbb {K} ^ {(n)}}} \mid \neg \exists \boldsymbol {\rho} ^ {\prime} \in \overline {{\mathbb {K} ^ {(n)}}}: \boldsymbol {\rho} ^ {\prime} \neq \boldsymbol {\rho} \wedge \forall d \in \{1, 2, 3, 4 \}: \rho_ {d} ^ {\prime} \leq \rho_ {d} \right\}. \tag {5}
+$$
+
+These points fulfill $\forall \pmb {\rho}\in \mathbb{L}\left(\pmb {\rho}\in \overline{\mathbb{K}^{(n)}}\iff \exists \pmb {p}\in \mathbb{P}^{(n)},\forall d\in \{1,2,3,4\} :\rho_d\geq p_d\right)$ . Here, evaluating the base certificate can be performed by evaluating $4|\mathbb{P}^{(n)}|$ inequalities.
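+
+Evaluating these inequalities is cheap. As an illustrative sketch (the function name is our own), robustness of $f_{n}$ to a budget vector $\pmb{\rho}$ can be checked against a stored pareto front as follows:
+
+```python
+def is_robust(pareto_front, rho):
+    """A prediction is *not* certifiably robust to budget rho iff some
+    pareto point p satisfies p_d <= rho_d in all dimensions (Eq. 5)."""
+    return not any(all(p_d <= r_d for p_d, r_d in zip(p, rho))
+                   for p in pareto_front)
+```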
+
+In the following, we assume that we are directly given this pareto front (or the smallest uncertifiable budget). Finding the pareto front can be easily implemented via a flood-fill algorithm that identifies the surface of volume $\mathbb{K}^{(n)}$ , followed by a thinning operation (for more details, see Section D).
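+
+For small budget sets $\mathbb{L}$ , the pareto front can also be obtained by brute-force enumeration instead of flood fill. The following sketch (our own naming; exponential in the number of budget dimensions, so only suitable as a reference implementation) extracts all minimal uncertifiable budget vectors of a monotone base certificate:
+
+```python
+import itertools
+
+def pareto_front(is_certifiable, limits):
+    """Minimal uncertifiable budget vectors of a monotone base certificate.
+    is_certifiable maps a budget tuple to bool; limits are the maxima of
+    each budget dimension (the paper uses four dimensions)."""
+    ranges = [range(l + 1) for l in limits]
+    uncert = [rho for rho in itertools.product(*ranges)
+              if not is_certifiable(rho)]
+    # keep only points not dominated by another uncertifiable point
+    return [rho for rho in uncert
+            if not any(q != rho and all(qd <= rd for qd, rd in zip(q, rho))
+                       for q in uncert)]
+```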
+
+# 4 COLLECTIVE CERTIFICATE
+
+To improve clarity in this section, we only discuss the global budget constraints. All remaining constraints from the threat model can be easily modelled as linear constraints. The certificate for the full threat model can be found in Section C. We first formalize the naive collective certificate described in the introduction, which implicitly allows the adversary to use different graphs to attack different predictions. We then derive the proposed collective certificate, first focusing on attribute additions before extending it to arbitrary perturbations. We relax the certificate to a linear program to enable fast computation and show the certificate's tightness when using a randomized smoothing base certificate. We conclude by discussing the certificate's time complexity and limitations.
+
+Naive collective certificate. Assume we are given a clean input $(\mathbf{X},\mathbf{A})$ , a multi-output classifier $f$ , a set $\mathbb{T}$ of target nodes and a set of admissible perturbed graphs $\mathbb{B}_{\mathcal{G}}$ fulfilling collective global budget constraints given by $r_{\mathbf{X}_{\mathrm{add}}},r_{\mathbf{X}_{\mathrm{del}}},r_{\mathbf{A}_{\mathrm{add}}},r_{\mathbf{A}_{\mathrm{del}}}$ . Let $\mathbb{L} = [r_{\mathbf{X}_{\mathrm{add}}}] \times [r_{\mathbf{X}_{\mathrm{del}}}] \times [r_{\mathbf{A}_{\mathrm{add}}}] \times [r_{\mathbf{A}_{\mathrm{del}}}]$ be the set of all vectors that do not exceed the collective global budget. Further assume that the base certificate guarantees that each classifier $f_{n}$ is certifiably robust to perturbations within a set of budgets $\mathbb{K}^{(n)}$ (see Eq. 3). As discussed, the naive certificate simply counts the predictions whose robustness to perturbations from $\mathbb{B}_{\mathcal{G}}$ is guaranteed by the base certificate. Using the representation of base certificates introduced in Section 3, this can be expressed as $\sum_{n \in \mathbb{T}} \mathbf{1}\left[\mathbb{K}^{(n)} = \mathbb{L}\right]$ . From the definition of $\mathbb{K}^{(n)}$ in Eq. 3, we can directly see that this is a lower bound on the optimal value of $\sum_{n \in \mathbb{T}} \min_{(\mathbf{X}',\mathbf{A}') \in \mathbb{B}_{\mathcal{G}}} \mathbf{1}\left[f_n(\mathbf{X},\mathbf{A}) = f_n(\mathbf{X}',\mathbf{A}')\right]$ , i.e. the number of predictions guaranteed to be stable under attack. Note that each summand involves a different minimization problem, meaning the adversary may use a different graph to attack each of the nodes.
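+
+The naive certificate is trivial to implement once the sets $\mathbb{K}^{(n)}$ are available. A minimal sketch (our own naming), where K_sets[n] holds the certifiably robust budget tuples of node $n$ :
+
+```python
+def naive_certificate(K_sets, L, targets):
+    """Count target predictions certified for *every* budget vector in L,
+    i.e. those with K^(n) == L (each node is certified independently)."""
+    L = set(L)
+    return sum(1 for n in targets if L <= K_sets[n])
+```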
+
+Collective certificate for attribute additions. To improve upon the naïve certificate, we want to determine the number of predictions that are simultaneously robust to attacks with a single graph:
+
+$$
+\min _ {\left(\boldsymbol {X} ^ {\prime}, \boldsymbol {A} ^ {\prime}\right) \in \mathbb {B} _ {\mathcal {G}}} \sum_ {n \in \mathbb {T}} \mathbf {1} \left[ f _ {n} \left(\boldsymbol {X}, \boldsymbol {A}\right) = f _ {n} \left(\boldsymbol {X} ^ {\prime}, \boldsymbol {A} ^ {\prime}\right) \right]. \tag {6}
+$$
+
+Solving this problem is usually not tractable. For simplicity, let us assume that the adversary is only allowed to perform attribute additions. As before, we can lower-bound the indicator functions using the base certificates:
+
+$$
+\min _ {\left(\boldsymbol {X} ^ {\prime}, \boldsymbol {A} ^ {\prime}\right) \in \mathbb {B} _ {\mathcal {G}}} \sum_ {n \in \mathbb {T}} \mathbf {1} \left[ \left(b \quad 0 \quad 0 \quad 0\right) ^ {T} \in \mathbb {K} ^ {(n)} \right] \tag {7}
+$$
+
+where $b = \sum_{(n,d):X_{n,d} = 0}X_{n,d}'$ is the number of attribute additions for a given perturbed graph. Since this certificate only depends on the number of perturbations, it is sufficient to optimize over the number of attribute additions while enforcing the global budget constraint:
+
+$$
+\min _ {b \in \mathbb {N} _ {0}} \sum_ {n \in \mathbb {T}} \mathbf {1} \left[ \left(b \quad 0 \quad 0 \quad 0\right) ^ {T} \in \mathbb {K} ^ {(n)} \right] \text {s . t .} b \leq r _ {\boldsymbol {X} _ {\mathrm {a d d}}}. \tag {8}
+$$
+
+There are two limitations: (1) The certificate does not account for locality, but simply considers the number of perturbations in the entire graph. In this regard, it is no different from the naïve collective certificate; (2) Evaluating the indicator functions, i.e. certifying the individual nodes, might involve complex optimization problems that are difficult to optimize through. We tackle (1) by evaluating the base certificates locally.
+
+Lemma 1 Assume multi-output classifier $f$ , corresponding receptive field indicators $\psi^{(n)} \in \{0,1\}^N$ and $\Psi^{(n)} \in \{0,1\}^{N\times N}$ , and a clean graph $(X,A)$ . Let $\mathbb{K}^{(n)}$ be the set of certifiable global budgets of prediction $f_{n}$ , as defined in Eq. 3. Let $(X',A')$ be a perturbed graph. Define $X'' \in \{0,1\}^{N\times D}$ and $A'' \in \{0,1\}^{N\times N}$ as follows:
+
+$$
+\boldsymbol {X} _ {i, d} ^ {\prime \prime} = \psi_ {i} ^ {(n)} \boldsymbol {X} _ {i, d} ^ {\prime} + (1 - \psi_ {i} ^ {(n)}) \boldsymbol {X} _ {i, d}, \tag {9}
+$$
+
+$$
+\boldsymbol {A} _ {i, j} ^ {\prime \prime} = \Psi_ {i, j} ^ {(n)} \boldsymbol {A} _ {i, j} ^ {\prime} + (1 - \Psi_ {i, j} ^ {(n)}) \boldsymbol {A} _ {i, j}, \tag {10}
+$$
+
+i.e. use values from the clean graph for bits that are not in $f_n$ 's receptive field. If there exists a vector of budgets $\pmb{\rho} \in \mathbb{N}_0^4$ such that $(\pmb{X}'', \pmb{A}'') \in \mathbb{B}_{\mathcal{G}}(\pmb{\rho})$ and $\pmb{\rho} \in \mathbb{K}^{(n)}$ , then $f_n(\pmb{X}, \pmb{A}) = f_n(\pmb{X}', \pmb{A}')$ .
+
+See proof in Section B. Due to Lemma 1 we can ignore all perturbations outside $f_{n}$ 's receptive field when evaluating its base certificate. We can thus replace $(b \quad 0 \quad 0 \quad 0)^T$ in Eq. 8 with $(\mathbf{b}^T \boldsymbol{\psi}^{(n)} \quad 0 \quad 0 \quad 0)^T$ , where the vector $\mathbf{b} \in \mathbb{N}_0^N$ indicates the number of attribute additions at each node. Optimizing over $\mathbf{b}$ yields a collective certificate that accounts for locality:
+
+$$
+\min _ {\boldsymbol {b} \in \mathbb {N} _ {0} ^ {N}} \sum_ {n \in \mathbb {T}} \mathbf {1} \left[ \left(\boldsymbol {b} ^ {T} \boldsymbol {\psi} ^ {(n)} \quad 0 \quad 0 \quad 0\right) ^ {T} \in \mathbb {K} ^ {(n)} \right] \text {s . t .} \| \boldsymbol {b} \| _ {1} \leq r _ {\boldsymbol {X} _ {\mathrm {a d d}}}. \tag {11}
+$$
+
+We now tackle issue (2) by employing the compact representation of base certificates defined in Section 3. Since we are only allowing one type of perturbation, the base certificate of each classifier $f_{n}$ is characterized by the smallest uncertifiable radius $p_{n}$ (see Section 3). To evaluate the indicator function in Eq. 11 we simply have to compare the number of perturbations in $f_{n}$ 's receptive field to $p_{n}$ , which can be implemented via the following MILP:
+
+$$
+\min _ {\boldsymbol {b} \in \mathbb {N} _ {0} ^ {N}, \boldsymbol {t} \in \{0, 1 \} ^ {N}} | \mathbb {T} | - \sum_ {n \in \mathbb {T}} t _ {n} \tag {12}
+$$
+
+$$
+\mathrm {s . t .} \boldsymbol {b} ^ {T} \boldsymbol {\psi} ^ {(n)} \geq p _ {n} t _ {n} \quad \forall n \in \{1, \dots , N \} \tag {13}
+$$
+
+$$
+\left\| \boldsymbol {b} \right\| _ {1} \leq r _ {\boldsymbol {X} _ {\mathrm {a d d}}}. \tag {14}
+$$
+
+Eq. 14 ensures that the number of perturbations fulfills the global budget constraint. Eq. 13 ensures that the indicator $t_n$ can only be set to 1 if the local perturbation on the l.h.s. exceeds or matches $p_n$ , i.e. $f_n$ is not robustly certified by the base certificate. The adversary tries to minimize the number of robustly certified predictions in $\mathbb{T}$ (see Eq. 8), which is equivalent to Eq. 12.
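+
+The program in Eq. 12 to Eq. 14 is small enough to set up directly. The following sketch solves its LP relaxation (integer variables relaxed to reals, as discussed later in this section) with scipy.optimize.linprog; the naming is our own and the snippet is an illustration under the stated assumptions, not the paper's implementation. Since relaxing only strengthens the adversary, the returned value remains a valid lower bound on the number of simultaneously robust predictions.
+
+```python
+import numpy as np
+from scipy.optimize import linprog
+
+def collective_certificate_lp(psi, p, r_add, targets):
+    """LP relaxation of Eqs. 12-14 for a single perturbation type.
+    psi: (N, N) receptive-field indicators, psi[n] = psi^(n);
+    p: smallest uncertifiable budgets p_n; r_add: global budget;
+    targets: indices of the target nodes T."""
+    N = psi.shape[0]
+    # variables x = [b_1..b_N, t_1..t_N]; minimize |T| - sum_{n in T} t_n
+    c = np.zeros(2 * N)
+    c[N + np.asarray(targets)] = -1.0
+    # Eq. 13: p_n * t_n - b^T psi^(n) <= 0 for all n
+    A_ub = np.hstack([-psi.astype(float), np.diag(p.astype(float))])
+    b_ub = np.zeros(N)
+    # Eq. 14: ||b||_1 <= r_add
+    A_ub = np.vstack([A_ub, np.concatenate([np.ones(N), np.zeros(N)])])
+    b_ub = np.append(b_ub, float(r_add))
+    bounds = [(0, None)] * N + [(0, 1)] * N  # b >= 0, t in [0, 1]
+    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
+    # res.fun = -sum t; certified count >= |T| + res.fun
+    return len(targets) + res.fun
+```
+
+Rounding the returned value down yields an integer-valued certificate.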
+
+Collective certificate for arbitrary perturbations. Lemma 1 holds for arbitrary perturbations. We only have to consider the perturbations within a prediction's receptive field when evaluating its base certificate. However, when multiple perturbation types are allowed, the base certificate of a prediction $f_{n}$ is not characterized by a scalar $p_n$ , but by its pareto front $\mathbb{P}^{(n)}$ (see Eq. 5). Let $P^{(n)} \in \mathbb{N}_0^{\left|\mathbb{P}^{(n)}\right| \times 4}$ be a matrix encoding of the set $\mathbb{P}^{(n)}$ . To determine if $f_{n}$ is robust we can check whether
+
+there is some pareto-optimal point $P_{i,:}^{(n)}$ such that the amount of perturbation in $f_{n}$ 's receptive field matches or exceeds $P_{i,:}^{(n)}$ in all four dimensions. This can again be expressed as a MILP (see Eq. 15 to Eq. 23 below).
+
+As before, we use a vector $\pmb{t}$ with $t_n = 1$ indicating that $f_n$ is not certified by the base certificate. The adversary tries to find a budget allocation (parameterized by $\pmb{b}_{\pmb{X}_{\mathrm{add}}}$ , $\pmb{b}_{\pmb{X}_{\mathrm{del}}}$ and $\pmb{B}_A$ ) that minimizes the number of robustly certified predictions in $\mathbb{T}$ (see Eq. 15). Eq. 20 and Eq. 21 ensure that the budget allocation is consistent with the global budget parameters characterizing $\mathbb{B}_{\mathcal{G}}$ . The value of $t_n$ is determined by the following constraints: First, Eq. 17 to Eq. 19 ensure that $Q_{p,d}^{(n)}$ is only set to 1 if the local perturbation matches or exceeds the pareto-optimal point corresponding to row $p$ of $P^{(n)}$ in dimension $d$ . The constraints in Eq. 16 implement logic operations on $Q^{(n)}$ : Indicator $s_p^{(n)}$ can only be set to 1 if $\forall d \in \{1,2,3,4\} : Q_{p,d}^{(n)} = 1$ . Indicator $t_n$ can only be set to 1 if $\exists p \in \{1,\dots,|\mathbb{P}^{(n)}|\} : s_p^{(n)} = 1$ . Combined, these constraints enforce that if $t_n = 1$ , there must be some point in $\mathbb{P}^{(n)}$ that is matched or exceeded by the amount of perturbation in all four dimensions.
+
+$$
+\min _ {\left(\boldsymbol {Q} ^ {(n)}, \boldsymbol {s} ^ {(n)}\right) _ {n = 1} ^ {N}, \boldsymbol {b} _ {\boldsymbol {X} _ {\mathrm {a d d}}}, \boldsymbol {b} _ {\boldsymbol {X} _ {\mathrm {d e l}}}, \boldsymbol {B} _ {\boldsymbol {A}}, \boldsymbol {t}} | \mathbb {T} | - \sum_ {n \in \mathbb {T}} t _ {n} \tag {15}
+$$
+
+$$
+\left\| \boldsymbol {s} ^ {(n)} \right\| _ {1} \geq t _ {n}, \quad Q _ {p, d} ^ {(n)} \geq s _ {p} ^ {(n)}, \tag {16}
+$$
+
+$$
+\left(\boldsymbol {b} _ {\boldsymbol {X} _ {\mathrm {a d d}}}\right) ^ {T} \boldsymbol {\psi} ^ {(n)} \geq Q _ {p, 1} ^ {(n)} P _ {p, 1} ^ {(n)}, \quad \left(\boldsymbol {b} _ {\boldsymbol {X} _ {\mathrm {d e l}}}\right) ^ {T} \boldsymbol {\psi} ^ {(n)} \geq Q _ {p, 2} ^ {(n)} P _ {p, 2} ^ {(n)}, \tag {17}
+$$
+
+$$
+\sum_ {m, m ^ {\prime} \leq N} (1 - A _ {m, m ^ {\prime}}) \left(\boldsymbol {\Psi} ^ {(n)} \odot B _ {\boldsymbol {A}}\right) _ {m, m ^ {\prime}} \geq Q _ {p, 3} ^ {(n)} P _ {p, 3} ^ {(n)}, \tag {18}
+$$
+
+$$
+\sum_ {m, m ^ {\prime} \leq N} A _ {m, m ^ {\prime}} \left(\Psi^ {(n)} \odot B _ {\mathbf {A}}\right) _ {m, m ^ {\prime}} \geq Q _ {p, 4} ^ {(n)} P _ {p, 4} ^ {(n)}, \tag {19}
+$$
+
+$$
+\left\| \boldsymbol {b} _ {\boldsymbol {X} _ {\mathrm {a d d}}} \right\| _ {1} \leq r _ {\boldsymbol {X} _ {\mathrm {a d d}}}, \quad \left\| \boldsymbol {b} _ {\boldsymbol {X} _ {\mathrm {d e l}}} \right\| _ {1} \leq r _ {\boldsymbol {X} _ {\mathrm {d e l}}}, \tag {20}
+$$
+
+$$
+\sum_ {(i, j): A _ {i, j} = 0} \left(\boldsymbol {B} _ {\boldsymbol {A}}\right) _ {i, j} \leq r _ {\boldsymbol {A} _ {\mathrm {a d d}}}, \quad \sum_ {(i, j): A _ {i, j} = 1} \left(\boldsymbol {B} _ {\boldsymbol {A}}\right) _ {i, j} \leq r _ {\boldsymbol {A} _ {\mathrm {d e l}}}, \tag {21}
+$$
+
+$$
+\boldsymbol {s} ^ {(n)} \in \{0, 1 \} ^ {| \mathbb {P} ^ {(n)} |}, \quad \boldsymbol {Q} ^ {(n)} \in \{0, 1 \} ^ {| \mathbb {P} ^ {(n)} | \times 4}, \quad \boldsymbol {t} \in \{0, 1 \} ^ {N} \tag {22}
+$$
+
+$$
+\boldsymbol {b} _ {\boldsymbol {X} _ {\text {a d d}}}, \boldsymbol {b} _ {\boldsymbol {X} _ {\text {d e l}}} \in \mathbb {N} _ {0} ^ {N}, \quad \boldsymbol {B} _ {\boldsymbol {A}} \in \{0, 1 \} ^ {N \times N}. \tag {23}
+$$
+
+LP-relaxation. For large graphs, finding an optimum to the mixed-integer problem is prohibitively expensive. In practice, we relax integer variables to reals and binary variables to $[0,1]$ . Semantically, the relaxation means that bits can be partially perturbed, nodes can be partially controlled by the attacker and classifiers can be partially uncertified (i.e. $1 > t_{n} > 0$ ). The relaxation yields a linear program, which can be solved much faster.
+
+Tightness for randomized smoothing. One recent method for robustness certification is randomized smoothing (Cohen et al., 2019). In randomized smoothing, a (potentially non-deterministic) base classifier $h: \mathbb{X} \mapsto \mathbb{Y}$ that maps from some input space $\mathbb{X}$ to a set of labels $\mathbb{Y}$ is transformed into a smoothed classifier $g(x)$ with $g(x) = \operatorname{argmax}_{y \in \mathbb{Y}} \Pr[h(\phi(x)) = y]$ , where $\phi(x)$ is some randomization scheme parameterized by input $x$ . For the smoothed $g(x)$ we can then derive probabilistic robustness certificates. Randomized smoothing is a black-box method that only depends on $h$ 's expected output behavior under $\phi(x)$ and does not require any further assumptions. Building on prior randomized smoothing work for discrete data by Lee et al. (2019), Bojchevski et al. (2020) propose a smoothing distribution and corresponding certificate for graphs. Using their method as a base certificate to our collective certificate, the resulting (non-relaxed) certificate is tight. That is, our mixed-integer collective certificate is the best certificate we can obtain for the specified threat model, if we do not use any information other than the classifier's expected predictions and their locality. A detailed explanation and proof can be found in Section E.
+
+Time complexity. For our method we need to construct the pareto fronts corresponding to each prediction's base certificate. This has to be performed only once and the results can then be reused in evaluating the collective certificate with varying parameters. We discuss the details of this preprocessing in Section D. The complexity of the collective certificate is based on the number of constraints and variables of the underlying (MI)LP. In total, we have $13\sum_{n=1}^{N}|\mathbb{P}^{(n)}| + 8N + 2e + 5$
+
+constraints and $5\sum_{n = 1}^{N}|\mathbb{P}^{(n)}| + 4N + e$ variables, where $e$ is the number of edges in the unperturbed graph (we disallow edge additions). For single-type perturbations we have $\mathcal{O}(N + e)$ terms, linear in the number of nodes and edges. The relaxed LP takes at most a few seconds to certify robustness for single-type perturbations and a few minutes for multiple types of perturbations (see Section 5).
+
+Limitations. The proposed approach is designed to exploit locality. Without locality, it is equivalent to a naive combination of base certificates that sums over perturbations in the entire graph. A non-obvious limitation is that our notion of locality breaks down if the receptive fields are data-dependent and can be arbitrarily extended by the adversary. Recall how we specified locality in Eq. 2: The indicators $\psi^{(n)}$ and $\Psi^{(n)}$ correspond to the union of all achievable receptive fields. Take for example a two-layer message-passing neural network and an adversary that can add new edges. Each node is classified based on its 2-hop neighborhood. For any two nodes $n, m$ , the adversary can construct a graph such that $m$ is in $f_{n}$ 's receptive field. We thus have to treat $f_{n}$ as global, even if for any single graph it might only process some subgraph. Nonetheless, our method still yields significant improvements for edge deletions and arbitrary attribute perturbations. As discussed in prior work (Zügner & Günnemann, 2020), edge addition is inherently harder and less relevant in practice.
+
+# 5 EXPERIMENTAL EVALUATION
+
+Experimental setup. We evaluate the proposed approach by certifying node classifiers on multiple graphs and with different base certificates. We use 20 nodes per class to construct a train and a validation set. We certify all remaining nodes. We repeat each experiment five times with different random initializations and data splits. Unless otherwise specified, we do not impose any local budget constraints or constraints on the number of attacker-controlled nodes. We compare the proposed method with the naive collective certificate, which simply counts the number of predictions that are certified to be robust by the base certificate. All experiments are based on the relaxed linear programming version of the certificate. We assess the integrality gap to the mixed-integer version in Section A. The code is publicly available at https://www.daml.in.tum.de/collective-robustness/. We also uploaded the implementation as supplementary material.
+
+Datasets, models and base certificates. We train and certify models on the following datasets: Cora-ML (McCallum et al. (2000); Bojchevski & Günnemann (2018); $N = 2810$ , 7981 edges, 7 classes), CiteSeer (Sen et al. (2008); $N = 2110$ , 3668 edges, 6 classes), PubMed (Namata et al. (2012); $N = 19717$ , 44324 edges, 3 classes), Reuters-21578 ( $N = 862$ , 2586 edges, 4 classes) and WebKB (Craven et al. (1998); $N = 877$ , 2631 edges, 5 classes). The graphs for the natural language corpora Reuters and WebKB are constructed using the procedure described in Zhou & Vorobeychik (2020), resulting in 3-regular graphs. We use five types of classifiers: Graph convolution networks (GCN) (Kipf & Welling, 2017), graph attention networks (GAT) (Veličković et al., 2018), APPNP (Gasteiger et al., 2019), robust graph convolution networks (RGCN) (Zhu et al., 2019) and soft medoid aggregation networks (SMA) (Geisler et al., 2020). All classifiers are configured to have two layers, i.e. each node's classifier is dependent on its two-hop neighborhood. We use two types of base certificates: (Bojchevski et al., 2020) (randomized smoothing, arbitrary perturbations) and Zügner & Günnemann (2019) (convex relaxations of network nonlinearities, attribute perturbations). We provide a summary of all hyperparameters in Section F.
+
+Evaluation metrics. We report the certified ratio on the test set, i.e. the percentage of nodes that are certifiably robust under a given threat model, averaged over all data splits. We further calculate the sample standard deviation of the certified ratio (visualized as shaded areas in plots) and the average wall-clock time per collective certificate. For experiments in which only one global budget parameter is altered, we report the average certifiable radius $\bar{r} = \sum_{r=0}^{\infty} \omega(r)\, r / \sum_{r=0}^{\infty} \omega(r)$ , where $\omega(r)$ is the certified ratio for value $r$ of the global budget parameter, averaged over all splits.
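+
+The average certifiable radius can be computed directly from the per-radius certified ratios. A minimal sketch (our own naming), with $\omega$ given as a dict mapping radius to certified ratio:
+
+```python
+def average_certifiable_radius(certified_ratio):
+    """r-bar = sum_r omega(r) * r / sum_r omega(r) over observed radii."""
+    num = sum(w * r for r, w in certified_ratio.items())
+    den = sum(certified_ratio.values())
+    return num / den
+```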
+
+Attribute perturbations. We first evaluate the certificate for a single perturbation type. Using randomized smoothing as the base certificate, we evaluate the certified ratio of GCN classifiers for varying global attribute deletion budgets $r_{\mathbf{X}_{\mathrm{del}}}$ on the citation graphs Cora, Citeseer and PubMed (for Reuters and WebKB, see Section A). The remaining global budget parameters are set to 0. Fig. 2 shows that for all datasets, the proposed method yields significantly larger certified ratios than the naïve certificate and can certify robustness for much larger $r_{\mathbf{X}_{\mathrm{del}}}$ . The average certifiable radius $\bar{r}$
+
+
+Figure 2: Certified ratios for smoothed GCN on Cora, Citeseer and PubMed, under varying $r_{X_{\mathrm{del}}}$ . We compare the proposed certificate (solid lines) to the naive certificate (dotted lines). Our method certifies orders of magnitude larger radii (note the logarithmic x-axis).
+
+
Figure 3: Two-dimensional collective certificate for smoothed GCN on Cora-ML under varying $r_{X_{\mathrm{del}}}$ and $r_{A_{\mathrm{del}}}$ . The solid and dotted contour lines show ratios $\geq 0.5$ and $\geq 0.7$ for our vs. the naive certificate, respectively. Our method achieves much larger certified ratios and radii.
+
+
+Figure 4: Comparison of certified ratios for GAT, GCN and APPNP on Cora-ML under varying $r_{\mathbf{X}_{\mathrm{del}}}$ for our (solid lines) and the naive (dotted lines) collective certificate.
+
+
Figure 5: Certifying GCN on CiteSeer under varying $r_{\mathbf{X}_{\mathrm{add}}}$ , using Zügner & Günnemann (2019)'s base certificate. Our certificate yields significantly larger certified ratios and radii.
+
on CiteSeer increases from 7.18 to 351.73. The results demonstrate the benefit of the collective certificate, which explicitly models simultaneous attacks on all predictions. The average wall-clock time per certificate on Cora, CiteSeer and PubMed is $2.0\mathrm{s}$ , $0.29\mathrm{s}$ and $336.41\mathrm{s}$ , respectively. Interestingly, the base certificate yields the highest certified ratios on Cora, while the collective certificate yields the highest certified ratios on PubMed. We attribute this to differences in graph structure, which are explicitly taken into account by the proposed certification procedure.
+
Simultaneous attribute and graph perturbations. To evaluate the multi-dimensional version of the certificate, we visualize the certified ratio of randomly smoothed GCN classifiers for different combinations of $r_{\mathbf{X}_{\mathrm{del}}}$ and $r_{\mathbf{A}_{\mathrm{del}}}$ on Cora-ML. For an additional experiment on simultaneous attribute additions and deletions, see Section A. Fig. 3 shows that we achieve high certified ratios even when the attacker is allowed to perturb both the attributes and the structure. Comparing the contour lines at $50\%$ , the naive certificate can only certify much smaller radii, e.g. at most 6 attribute deletions compared to 39 for our approach. The average wall-clock time per certificate is $106.90~\mathrm{s}$ .
+
+Different classifiers. Our method is agnostic towards classifier architectures, as long as they are compatible with the base certificate and their receptive fields can be determined. In Fig. 4 we compare the certified collective robustness of GAT, GCN, and APPNP, using the sparse smoothing certificate on Cora-ML. Better base certificates translate into better collective certificates. For an additional experiment on the benefits of robust classifier architectures RGCN and SMA, see Section A.
+
+
+Figure 6: Certified ratios for smoothed GCN on Cora-ML, under varying $r_{\mathbf{A}_{\mathrm{del}}}$ and $r_{\mathbf{A}_{\mathrm{del,loc}}}$ . Stricter local budgets yield larger certified ratios.
+
+
Figure 7: Certified ratios for smoothed GCN on Cora-ML. We vary $r_{\mathbf{A}_{\mathrm{del}}}$ and $\sigma$ . The certified ratios remain constant and non-zero for large $r_{\mathbf{A}_{\mathrm{del}}}$ .
+
Different base certificates. Our method is also agnostic to the base certificate type. We show that it works equally well with base certificates other than randomized smoothing. Specifically, we use the method from Zügner & Günnemann (2019). We certify a GCN model for varying $r_{\mathbf{X}_{\mathrm{add}}}$ on CiteSeer. Unlike randomized smoothing, this base certificate models local budget constraints. Using the default from the base certificate's reference implementation, we limit the number of attribute additions per node to $\lfloor 0.01D \rfloor = 21$ for both the base and the collective certificate. Fig. 5 shows that the proposed collective certificate is again significantly stronger. The average certifiable radius $\bar{r}$ increases from 17.12 to 971.36. The average wall-clock time per certificate is 0.39 s.
+
Local constraints. We evaluate the effect of additional constraints in our threat model. We can enforce local budget constraints and limit the number of attacker-controlled nodes, even if they are not explicitly modeled by the base certificate. In Fig. 6, we use a smoothed GCN on Cora-ML and vary both the global budget for edge deletions, $r_{A_{\mathrm{del}}}$ , and the local budgets $r_{A_{\mathrm{del},\mathrm{loc}}}$ . Even though the base certificate does not support local budget constraints, reducing the number of admissible deletions per node increases the certified ratio, as expected. For example, limiting the adversary to one deletion per node more than doubles the certified ratio at $r_{A_{\mathrm{del}}} = 1000$ . In Fig. 7, we fix a relatively large local budget of 16 edge deletions per node (only $\sim 5\%$ of nodes on Cora-ML have a degree $> 16$ ) and vary the number of attacker-controlled nodes. We see that for any given number of attacker nodes, there is a value of $r_{A_{\mathrm{del}}}$ beyond which the certified ratio curve becomes constant. This constant value is an upper limit on the number of classifiers that can be attacked with a given local budget and number of attacker-controlled nodes. It is independent of the global budget.
+
+# 6 CONCLUSION
+
+We propose the first collective robustness certificate. Assuming predictions based on a single shared input, we leverage the fact that an adversary must use a single adversarial example to attack all predictions. We focus on Graph Neural Networks, whose locality guarantees that perturbations to the input graph only affect predictions in a close neighborhood. The proposed method combines many weak base certificates into a provably stronger collective certificate. It is agnostic towards network architectures and base certification procedures. We evaluate it on multiple semi-supervised node classification datasets with different classifier architectures and base certificates. Our empirical results show that the proposed collective approach yields much stronger certificates than existing methods, which assume that an adversary can attack predictions independently with different graphs.
+
+# 7 ACKNOWLEDGEMENTS
+
+This research was supported by the German Research Foundation, Emmy Noether grant GU 1409/2-1, the German Federal Ministry of Education and Research (BMBF), grant no. 01IS18036B, and the TUM International Graduate School of Science and Engineering (IGSSE), GSC 81.
+
+# REFERENCES
+
+Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410-14430, 2018.
Aleksandar Bojchevski and Stephan Günnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. In International Conference on Learning Representations, 2018.
Aleksandar Bojchevski and Stephan Günnemann. Certifiable robustness to graph perturbations. In Advances in Neural Information Processing Systems, volume 32, pp. 8319-8330, 2019.
Aleksandar Bojchevski, Johannes Gasteiger, and Stephan Günnemann. Efficient robustness certificates for discrete data: Sparsity-aware randomized smoothing for graphs, images and more. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 1003-1013, 2020.
+Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Workshop on Artificial Intelligence and Security, AISec, 2017.
+Ping-yeh Chiang, Michael J. Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, and Tom Goldstein. Detection as regression: Certified object detection by median smoothing. In Advances in Neural Information Processing Systems, volume 33, pp. 1275-1286, 2020.
+Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 1310-1320, 2019.
+Mark Craven, Dan DiPasquo, Dayne Freitag, Andrew McCallum, Tom Mitchell, Kamal Nigam, and Seán Slattery. Learning to extract symbolic knowledge from the world wide web. In Proceedings of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence, AAAI '98/IAAI '98, pp. 509-516. American Association for Artificial Intelligence, 1998.
+Negin Entezari, Saba A. Al-Sayouri, Amirali Darvishzadeh, and Evangelos E. Papalexakis. All you need is low (rank): Defending against adversarial attacks on graphs. In Proceedings of the 13th International Conference on Web Search and Data Mining, pp. 169-177, 2020.
+Boyuan Feng, Yuke Wang, Zheng Wang, and Yufei Ding. Uncertainty-aware attention graph neural network for defending adversarial attacks. arXiv preprint arXiv:2009.10235, 2020.
+Fuli Feng, Xiangnan He, Jie Tang, and Tat-Seng Chua. Graph adversarial training: Dynamically regularizing based on graph structure. IEEE Transactions on Knowledge and Data Engineering, 2019.
Johannes Gasteiger, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. In International Conference on Learning Representations, 2019.
Simon Geisler, Daniel Zügner, and Stephan Günnemann. Reliable graph neural networks via robust aggregation. In Advances in Neural Information Processing Systems, volume 33, pp. 13272-13284, 2020.
+Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1263-1272, 2017.
+Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
+Wenjuan Han, Liwen Zhang, Yong Jiang, and Kewei Tu. Adversarial attack and defense of structured prediction models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2327-2338, 2020.
+
Han Xu, Yao Ma, Hao-Chen Liu, Debayan Deb, Hui Liu, Jiliang Tang, and Anil K. Jain. Adversarial attacks and defenses in images, graphs and text: A review. International Journal of Automation and Computing, 17(2):151-178, 2020.
+Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.
+Guang-He Lee, Yang Yuan, Shiyu Chang, and Tommi Jaakkola. Tight certificates of adversarial robustness for randomly smoothed classifiers. In Advances in Neural Information Processing Systems 32, pp. 4910-4921. Curran Associates, Inc., 2019.
+Andrew Kachites McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. Automating the construction of internet portals with machine learning. Information Retrieval, 3(2):127-163, 2000.
+Galileo Mark Namata, Ben London, Lise Getoor, and Bert Huang. Query-driven active surveying for collective classification. In Workshop on Mining and Learning with Graphs, 2012.
+Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, 2008.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. In International Conference on Learning Representations, 2018.
+X Xu, Y Yu, L Song, C Liu, B Kailkhura, C Gunter, and B Li. Edog: Adversarial edge detection for graph neural networks. Technical report, Lawrence Livermore National Lab.(LLNL), Livermore, CA (United States), 2020a.
+Xiaogang Xu, Hengshuang Zhao, and Jiaya Jia. Dynamic divide-and-conquer adversarial training for robust semantic segmentation. arXiv preprint arXiv:2003.06555, 2020b.
+Ao Zhang and Jinwen Ma. Defensevgae: Defending against adversarial attacks on graph data via a variational graph autoencoder. arXiv preprint arXiv:2006.08900, 2020.
+Yingxue Zhang, Sakif Hossain Khan, and Mark Coates. Comparing and detecting adversarial attacks for graph deep learning. In Representation Learning on Graphs and Manifolds Workshop, International Conference on Learning Representations, 2019.
+Kai Zhou and Yevgeniy Vorobeychik. Robust collective classification against structural attacks. In Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), volume 124 of Proceedings of Machine Learning Research, pp. 250-259, 2020.
+Dingyuan Zhu, Ziwei Zhang, Peng Cui, and Wenwu Zhu. Robust graph convolutional networks against adversarial attacks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1399-1407, 2019.
Daniel Zügner and Stephan Günnemann. Certifiable robustness and robust training for graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019.
Daniel Zügner and Stephan Günnemann. Certifiable robustness of graph convolutional networks under structure perturbations. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1656-1665, 2020.
Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2847-2856, 2018.
Daniel Zügner, Oliver Borchert, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on graph neural networks: Perturbations and their patterns. ACM Transactions on Knowledge Discovery from Data, 14(5), 2020.
+
+# A ADDITIONAL EXPERIMENTS
+
Robust architectures. Our comparison of different standard classifier architectures demonstrated that the proposed collective certificate is architecture-agnostic and that better base certificates translate into better collective certificates. In Fig. 8 we assess the benefit of using SMA and RGCN, both of which are robust architectures meant to improve adversarial robustness. We use GCN as a baseline for comparison and evaluate the respective certified ratios for varying attribute deletion budgets on Cora-ML. While RGCN is supposed to be more robust to adversarial attacks, it has a lower certified ratio than GCN. Soft medoid aggregation, on the other hand, has a significantly higher certified ratio. Its base certificate is almost as strong as the collective certificate of RGCN. Its collective certified ratio at $r_{\mathbf{X}_{\mathrm{del}}} = 21$ is $88.3\%$ , compared to $76.9\%$ for GCN and $74\%$ for RGCN.
+
+
+Figure 8: Certified ratios for smoothed GCN, RGCN and soft medoid aggregation on Cora-ML under varying $r_{\mathbf{X}_{\mathrm{del}}}$ for our (solid lines) and the naive (dotted lines) collective certificate.
+
+
+Figure 9: Certified ratios for smoothed GCN on WebKB and Reuters-21578, under varying $r_{X_{\mathrm{del}}}$ for our (solid lines) and the naive (dotted lines) collective certificate.
+
+Attribute perturbations on additional datasets. In addition to citation graphs, we also use graphs constructed from the Reuters-21578 and WebKB natural language corpora to evaluate the proposed certificate. As in the main experiments section, we use randomized smoothing as a base certificate for GCN classifiers and assess the certified ratio for varying global attribute deletion budgets. Fig. 9 shows that the certified ratio increases and that much larger $r_{\mathbf{X}_{\mathrm{del}}}$ (up to approximately $10^{3}$ ) can be certified when using the collective approach. The average certifiable radius for Reuters and WebKB increases from 6.54 and 8.08 to 265.62 and 309.32, respectively. With less than 900 nodes each, both datasets are smaller than our three citation graphs. This leads to even shorter average wall-clock times per certificate: 0.105 s and 0.116 s.
+
Simultaneous attribute deletions and additions. In the main experiments section, we applied our collective certificate to simultaneous certification of attribute and adjacency deletions. Here, we assess how it performs for simultaneous deletions and additions of attributes. We again use a randomly smoothed GCN classifier on Cora-ML and perform collective certification for different combinations of $r_{\mathbf{X}_{\mathrm{add}}}$ and $r_{\mathbf{X}_{\mathrm{del}}}$ . As shown in Fig. 10, the collective certificate is again much stronger than the naive collective certificate. For example, we obtain certified ratios between $30\%$ and $60\%$ at radii for which the naive collective certificate cannot certify any robustness at all. The average wall-clock time per certificate is $40.51~\mathrm{s}$ .
+
Integrality gap. For all previous experiments, we used the relaxed linear programming version of the certificate to reduce the compute time. To assess the integrality gap (i.e. the difference between the mixed-integer linear programming and the linear programming based certificates), we apply both versions of the certificate to a single smoothed GCN on Cora. We certify both robustness to attribute deletions (Fig. 11a) and edge deletions (Fig. 11b). The wall-clock time per certificate for the MILP increased from $0.24\mathrm{s}$ to $64\mathrm{h}$ with increasing edge deletion budget ( $0.35\mathrm{s}$ to $94\mathrm{h}$ for attribute deletions). Due to the exploding runtime of the MILP, we cannot compute the integrality gap for radii larger than 8 and 12, respectively. The integrality gap is small relative to the certified ratio (at most $4\%$ for attribute deletions, $4.3\%$ for edge deletions) and appears to increase slightly with the global budget.
+
+
+(a) Proposed collective certificate
+
+
+(b) Naive collective certificate
+
+
+Figure 10: Comparison of the proposed collective certificate (Fig. 10a) to the naive collective certificate (Fig. 10b) for certification of smoothed GCN on Cora-ML, under varying $r_{\mathbf{X}_{\mathrm{add}}}$ and $r_{\mathbf{X}_{\mathrm{del}}}$ . Our method achieves much larger certified ratios for all combinations of attack radii.
+
+
+Figure 11: Certified ratios for smoothed GCN on Cora, under varying $r_{\mathbf{X}_{\mathrm{del}}}$ (Fig. 11a) and $r_{\mathbf{A}_{\mathrm{del}}}$ (Fig. 11b), using the mixed-integer collective certificate (blue line) and the relaxed linear programming certificate (orange line). The integrality gap is small, relative to the certified ratio.
+
+# B FORMAL DEFINITION OF THREAT MODEL PARAMETERS AND PROOFS
+
+Here we define formally the set of admissible perturbed graphs $\mathbb{B}_{\mathcal{G}}$ described in Section 2. Recall that we have an unperturbed graph $(X,A)\in \mathbb{G}$ , global budget parameters $r_{X_{\mathrm{add}}},r_{X_{\mathrm{del}}},r_{A_{\mathrm{add}}},r_{A_{\mathrm{del}}}\in \mathbb{N}_0$ , local budget parameters $r_{X_{\mathrm{add,loc}}},r_{X_{\mathrm{del,loc}}},r_{A_{\mathrm{add,loc}}},r_{A_{\mathrm{del,loc}}}\in \mathbb{N}_0^N$ and at most $\sigma$ adversary-controlled nodes. Given these parameters, the set of admissible perturbed graphs $\mathbb{B}_{\mathcal{G}}$ is defined as follows:
+
$$
\left(\boldsymbol{X}^{\prime}, \boldsymbol{A}^{\prime}\right) \in \mathbb{B}_{\mathcal{G}} \iff
$$

$$
\left|\left\{(n, d): X_{n, d} = 0 \neq X_{n, d}^{\prime}\right\}\right| \leq r_{\mathbf{X}_{\mathrm{add}}} \wedge \left|\left\{(n, d): X_{n, d} = 1 \neq X_{n, d}^{\prime}\right\}\right| \leq r_{\mathbf{X}_{\mathrm{del}}}
$$

$$
\wedge \left|\left\{(n, m): A_{n, m} = 0 \neq A_{n, m}^{\prime}\right\}\right| \leq r_{\mathbf{A}_{\mathrm{add}}} \wedge \left|\left\{(n, m): A_{n, m} = 1 \neq A_{n, m}^{\prime}\right\}\right| \leq r_{\mathbf{A}_{\mathrm{del}}}
$$

$$
\wedge \Big(\forall n \in \{1, \dots, N\}:
$$

$$
\left|\left\{d: X_{n, d} = 0 \neq X_{n, d}^{\prime}\right\}\right| \leq r_{\mathbf{X}_{\mathrm{add,loc}},n} \wedge \left|\left\{d: X_{n, d} = 1 \neq X_{n, d}^{\prime}\right\}\right| \leq r_{\mathbf{X}_{\mathrm{del,loc}},n} \tag{24}
$$

$$
\wedge \left|\left\{m: A_{n, m} = 0 \neq A_{n, m}^{\prime}\right\}\right| \leq r_{\mathbf{A}_{\mathrm{add,loc}},n} \wedge \left|\left\{m: A_{n, m} = 1 \neq A_{n, m}^{\prime}\right\}\right| \leq r_{\mathbf{A}_{\mathrm{del,loc}},n}\Big)
$$

$$
\wedge \Big(\exists \mathbb{S} \subseteq \{1, \dots, N\}: |\mathbb{S}| \leq \sigma \wedge \forall d \in \{1, \dots, D\},\; n, m \in \{1, \dots, N\}:
$$

$$
\left(X_{n, d}^{\prime} \neq X_{n, d} \Rightarrow n \in \mathbb{S}\right) \wedge \left(A_{n, m}^{\prime} \neq A_{n, m} \Rightarrow n \in \mathbb{S} \vee m \in \mathbb{S}\right)\Big).
$$
+
Next, we provide the proof of Lemma 1, which was deferred from the main paper.
+
+Proof: By the definition of receptive fields (Eq. 2) changes outside the receptive field do not influence the prediction, i.e. $f_{n}(\pmb{X}^{\prime},\pmb{A}^{\prime}) = f_{n}(\pmb{X}^{\prime \prime},\pmb{A}^{\prime \prime})$ . Since $(\pmb{X}^{\prime \prime},\pmb{A}^{\prime \prime}) \in \mathbb{B}_{\mathcal{G}}(\pmb{\rho})$ and $\pmb{\rho} \in \mathbb{K}^{(n)}$ , we know that $f_{n}(\pmb{X},\pmb{A}) = f_{n}(\pmb{X}^{\prime \prime},\pmb{A}^{\prime \prime})$ . By transitivity $f_{n}(\pmb{X},\pmb{A}) = f_{n}(\pmb{X}^{\prime},\pmb{A}^{\prime})$ .
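The locality argument underlying Lemma 1 can be checked mechanically on a toy model. The sketch below builds a hypothetical two-layer message-passing network (random weights, made-up names) on a path graph and verifies that flipping attributes outside a node's two-hop receptive field leaves its prediction unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer message-passing model on the path graph 0-1-2-3-4-5.
N, D = 6, 3
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1
A_hat = A + np.eye(N)  # add self-loops
W1, W2 = rng.normal(size=(D, 4)), rng.normal(size=(4, 2))

def predict(X):
    H = np.maximum(A_hat @ X @ W1, 0)  # layer 1: 1-hop aggregation
    return (A_hat @ H @ W2).argmax(1)  # layer 2: 2-hop receptive field

X = rng.integers(0, 2, size=(N, D)).astype(float)
X_pert = X.copy()
X_pert[5] = 1 - X_pert[5]  # flip all attribute bits of node 5

# Node 0's receptive field is {0, 1, 2}; node 5 lies outside it,
# so the perturbation cannot change node 0's prediction.
assert predict(X)[0] == predict(X_pert)[0]
```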
+
+# C FULL COLLECTIVE CERTIFICATE
+
Here we discuss how to incorporate local budget constraints and constraints on the number of attacker-controlled nodes into the collective certificate for global budget constraints (see Eq. 15). We also discuss how to adapt it to undirected adjacency matrices (see Section C.1). As before, we lower-bound the true objective
+
+$$
+\min _ {\left(\boldsymbol {X} ^ {\prime}, \boldsymbol {A} ^ {\prime}\right) \in \mathbb {B} _ {\mathcal {G}}} \sum_ {n \in \mathbb {T}} \mathbf {1} \left[ f _ {n} \left(\boldsymbol {X}, \boldsymbol {A}\right) = f _ {n} \left(\boldsymbol {X} ^ {\prime}, \boldsymbol {A} ^ {\prime}\right) \right]. \tag {25}
+$$
+
+by replacing the indicator functions with evaluations of the corresponding base certificates and optimizing over the number of perturbations per node / the perturbed edges. The only difference is that more constraints are imposed on $\mathbb{B}_{\mathcal{G}}$ .
+
The derivation proceeds as before: Lemma 1 still holds, meaning we only have to consider perturbations within $f_{n}$ 's receptive field when evaluating whether its robustness is guaranteed by the base certificate. The base certificates can be efficiently evaluated by comparing the perturbation within $f_{n}$ 's receptive field to all points in the pareto front $\mathbb{P}^{(n)}$ characterizing the volume of budgets $\mathbb{K}^{(n)}$ . After encoding $\mathbb{P}^{(n)}$ as a matrix $\boldsymbol{P}^{(n)} \in \mathbb{N}_0^{|\mathbb{P}^{(n)}| \times 4}$ , we can solve the following optimization problem to obtain a lower bound on Eq. 25:
+
+$$
+\min _ {\left(\boldsymbol {Q} ^ {(n)}, \boldsymbol {s} ^ {(n)}\right) _ {n = 1} ^ {N}, \boldsymbol {b} _ {\boldsymbol {X} _ {\mathrm {a d d}}}, \boldsymbol {b} _ {\boldsymbol {X} _ {\mathrm {d e l}}}, \boldsymbol {B} _ {\boldsymbol {A}}, \boldsymbol {t}} | \mathbb {T} | - \sum_ {n \in \mathbb {T}} t _ {n} \tag {26}
+$$
+
+$$
+\text {s . t .} \quad \| \boldsymbol {s} ^ {(n)} \| _ {1} \geq t _ {n}, \quad Q _ {p, d} ^ {(n)} \geq s _ {p} ^ {(n)}, \tag {27}
+$$
+
+$$
\left(\boldsymbol{b}_{\boldsymbol{X}_{\mathrm{add}}}\right)^{T} \boldsymbol{\psi}^{(n)} \geq Q_{p, 1}^{(n)} P_{p, 1}^{(n)}, \quad \left(\boldsymbol{b}_{\boldsymbol{X}_{\mathrm{del}}}\right)^{T} \boldsymbol{\psi}^{(n)} \geq Q_{p, 2}^{(n)} P_{p, 2}^{(n)}, \tag{28}
+$$
+
+$$
+\sum_ {m, m ^ {\prime} \leq N} (1 - A _ {m, m ^ {\prime}}) (\boldsymbol {\Psi} ^ {(n)} \odot B _ {\boldsymbol {A}}) _ {m, m ^ {\prime}} \geq Q _ {p, 3} ^ {(n)} P _ {p, 3} ^ {(n)}, \tag {29}
+$$
+
+$$
+\sum_ {m, m ^ {\prime} \leq N} A _ {m, m ^ {\prime}} \left(\boldsymbol {\Psi} ^ {(n)} \odot B _ {\boldsymbol {A}}\right) _ {m, m ^ {\prime}} \geq Q _ {p, 4} ^ {(n)} P _ {p, 4} ^ {(n)}, \tag {30}
+$$
+
+$$
\left\|\boldsymbol{b}_{\mathbf{X}_{\mathrm{add}}}\right\|_{1} \leq r_{\mathbf{X}_{\mathrm{add}}}, \quad \left\|\boldsymbol{b}_{\mathbf{X}_{\mathrm{del}}}\right\|_{1} \leq r_{\mathbf{X}_{\mathrm{del}}}, \tag{31}
+$$
+
+$$
+\sum_ {(i, j): A _ {i, j} = 0} B _ {\boldsymbol {A} i, j} \leq r _ {\boldsymbol {A} _ {\mathrm {a d d}}}, \quad \sum_ {(i, j): A _ {i, j} = 1} B _ {\boldsymbol {A} i, j} \leq r _ {\boldsymbol {A} _ {\mathrm {d e l}}}, \tag {32}
+$$
+
+$$
+b _ {\mathbf {X} _ {\mathrm {a d d} n}} \leq a _ {n} r _ {\mathbf {X} _ {\mathrm {a d d}, \mathrm {l o c} n}}, \quad b _ {\mathbf {X} _ {\mathrm {d e l} n}} \leq a _ {n} r _ {\mathbf {X} _ {\mathrm {d e l}, \mathrm {l o c} n}}, \tag {33}
+$$
+
+$$
+\sum_ {m: A _ {n, m} = 0} B _ {\mathbf {A} _ {n, m}} + B _ {\mathbf {A} _ {m, n}} \leq r _ {\mathbf {A} _ {\mathrm {a d d , l o c} _ {n}}}, \quad \sum_ {m: A _ {n, m} = 1} B _ {\mathbf {A} _ {n, m}} + B _ {\mathbf {A} _ {m, n}} \leq r _ {\mathbf {A} _ {\mathrm {d e l , l o c} _ {n}}} \tag {34}
+$$
+
+$$
B_{\boldsymbol{A}\,i, j} \leq a_{i} + a_{j} \quad \forall i, j \in \{1, \dots, N\}, \tag{35}
+$$
+
+$$
+\left\| \boldsymbol {a} \right\| _ {1} \leq \sigma , \tag {36}
+$$
+
+$$
+\boldsymbol {s} ^ {(n)} \in \{0, 1 \} ^ {| \mathbb {P} ^ {(n)} |}, \quad \boldsymbol {Q} ^ {(n)} \in \{0, 1 \} ^ {| \mathbb {P} ^ {(n)} | \times 4}, \quad \boldsymbol {t} \in \{0, 1 \} ^ {N}, \tag {37}
+$$
+
+$$
+\boldsymbol {a} \in \{0, 1 \} ^ {N}, \quad \boldsymbol {b} _ {\boldsymbol {X} _ {\text {a d d}}}, \boldsymbol {b} _ {\boldsymbol {X} _ {\text {d e l}}} \in \mathbb {N} _ {0} ^ {N}, \quad \boldsymbol {B} _ {\boldsymbol {A}} \in \{0, 1 \} ^ {N \times N}, \tag {38}
+$$
+
+$$
+\forall n \in \{1, \dots , N \}, p \in \{1, \dots , | \mathbb {P} ^ {(n)} | \}, d \in \{1, \dots , 4 \}.
+$$
+
The constraints from Eq. 27 to Eq. 30 are identical to constraints Eq. 16 to Eq. 19 of our collective certificate for global budget constraints. They simply implement boolean logic to determine whether there is some pareto-optimal $\pmb{p} \in \mathbb{P}^{(n)}$ such that the perturbation in $f_{n}$ 's receptive field matches or exceeds $\pmb{p}$ in all four dimensions. If this is the case, the base certificate cannot certify the robustness of $f_{n}$ and $t_{n}$ can be set to 1. Eq. 31 and Eq. 32 enforce the global budget constraints. The difference to the global budget certificate lies in Eq. 33 to Eq. 36. We introduce an additional variable vector $\pmb{a} \in \{0,1\}^{N}$ that indicates which nodes are attacker-controlled. Eq. 33 enforces that the attributes of node $n$ remain unperturbed unless $a_{n} = 1$ . If $a_{n} = 1$ , the adversary can add at most $r_{\mathbf{X}_{\mathrm{add,loc}},n}$ and delete at most $r_{\mathbf{X}_{\mathrm{del,loc}},n}$ attribute bits. With edge perturbations, it is sufficient for either incident node to be attacker-controlled. This is expressed via Eq. 35. The number of added or deleted edges incident to node $n$ is constrained via Eq. 34. Finally, Eq. 36 ensures that at most $\sigma$ nodes are attacker-controlled.
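For intuition, a stripped-down special case of this optimization problem (a single perturbation type, a single pareto point $p_n$ per prediction, global budget only, and the binary variables relaxed to $[0,1]$) can be solved with an off-the-shelf LP solver. The sketch below uses `scipy.optimize.linprog`; the receptive fields `psi`, the radii `p` and the budget `r` are made-up toy values, not from the paper:

```python
import math

import numpy as np
from scipy.optimize import linprog

# Toy instance: N = 4 nodes on a path graph 0-1-2-3 with two-hop
# receptive fields, hypothetical base-certificate radii p, and a
# global budget r of adversarial bit flips.
N = 4
psi = np.array([  # psi[n, m] = 1 iff node m lies in f_n's receptive field
    [1, 1, 1, 0],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 1, 1, 1],
])
p = np.array([4, 2, 2, 4])  # single pareto point per prediction
r = 3                       # global budget

# Variables x = (b_1..b_N, t_1..t_N): b_m = perturbation mass at node m,
# t_n in [0, 1] relaxes the "prediction n is attacked" indicator.
c = np.concatenate([np.zeros(N), -np.ones(N)])  # maximize sum(t)
# p_n * t_n - psi[n] @ b <= 0: t_n can only be 1 if enough
# perturbation mass falls into f_n's receptive field.
A_ub = np.hstack([-psi, np.diag(p)])
b_ub = np.zeros(N)
# sum(b) <= r: the global budget constraint.
A_ub = np.vstack([A_ub, np.concatenate([np.ones(N), np.zeros(N)])])
b_ub = np.append(b_ub, r)
bounds = [(0, r)] * N + [(0, 1)] * N

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
attacked_upper_bound = -res.fun  # relaxed optimum >= integer optimum
certified_lower_bound = N - math.floor(attacked_upper_bound + 1e-9)
print(certified_lower_bound)  # → 1
```

Here the relaxed optimum is 3.5 attacked predictions (mass concentrated on the central nodes), so at least one of the four predictions is certifiably robust.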
+
+# C.1 UNDIRECTED ADJACENCY MATRIX
+
To adapt our certificate to undirected graphs, we simply change the interpretation of the indicator matrix $\boldsymbol{B}_{\boldsymbol{A}}$ . Now, setting either $B_{\boldsymbol{A},i,j}$ or $B_{\boldsymbol{A},j,i}$ to 1 corresponds to perturbing the undirected edge $\{i,j\}$ . An edge should not be perturbed twice, which we express through an additional constraint:
+
+$$
B_{\boldsymbol{A}\,i, j} + B_{\boldsymbol{A}\,j, i} \leq 1 \quad \forall i, j \in \{1, \dots, N\}. \tag{39}
+$$
+
We further combine Eq. 34, which enforced the local budgets for edge perturbations, and Eq. 35, which enforced that at least one of the incident nodes of a perturbed edge is attacker-controlled, into the following constraints:
+
+$$
+\sum_ {m: A _ {n, m} = 0} B _ {\mathbf {A} _ {n, m}} \leq a _ {n} r _ {\mathbf {A} _ {\mathrm {a d d}, \mathrm {l o c} n}} \tag {40}
+$$
+
+$$
+\sum_ {m: A _ {n, m} = 1} B _ {\mathbf {A} _ {n, m}} \leq a _ {n} r _ {\mathbf {A} _ {\mathrm {d e l}, \mathrm {l o c} n}} \tag {41}
+$$
+
These changes do not affect the optimal value of the mixed-integer linear program, but they are more effective than Eq. 34 and Eq. 35 when solving the relaxed linear program if the nodes' local budgets are small relative to their degrees.
+
+# D DETERMINING THE PARETO FRONT OF BASE CERTIFICATES
+
+For our collective certificate, we assume that base certificates directly yield the pareto front $\mathbb{P}^{(n)}$ of points enclosing the volume of budgets $\mathbb{K}^{(n)}$ for which the prediction $f_{n}$ is certifiably robust:
+
+$$
+\mathbb {P} ^ {(n)} = \left\{\boldsymbol {\rho} \in \overline {{\mathbb {K} ^ {(n)}}} \mid \neg \exists \boldsymbol {\rho} ^ {\prime} \in \overline {{\mathbb {K} ^ {(n)}}}: \boldsymbol {\rho} ^ {\prime} \neq \boldsymbol {\rho} \wedge \forall d \in \{1, 2, 3, 4 \}: \rho_ {d} ^ {\prime} \leq \rho_ {d} \right\} \tag {42}
+$$
+
+with
+
+$$
+\mathbb {K} ^ {(n)} \subseteq \left\{\boldsymbol {\rho} \in \mathbb {N} _ {0} ^ {4} \mid \boldsymbol {\rho} \in \mathbb {L} \wedge \forall \left(\boldsymbol {X} ^ {\prime}, \boldsymbol {A} ^ {\prime}\right) \in \mathbb {B} _ {\mathcal {G}} (\boldsymbol {\rho}): f _ {n} (\boldsymbol {X}, \boldsymbol {A}) = f _ {n} \left(\boldsymbol {X} ^ {\prime}, \boldsymbol {A} ^ {\prime}\right) \right\}, \tag {43}
+$$
+
$\overline{\mathbb{K}^{(n)}} = \mathbb{L}\backslash \mathbb{K}^{(n)}$ and $\mathbb{L} = [r_{\pmb{X}_{\mathrm{add}}}] \times [r_{\pmb{X}_{\mathrm{del}}}] \times [r_{\pmb{A}_{\mathrm{add}}}] \times [r_{\pmb{A}_{\mathrm{del}}}]$ , with $[k] = \{0, \dots, k\}$ (see Section 3). In practice, finding this representation requires some additional processing, which we discuss in this section.
+
Existing certificates for graph-structured data are methods that determine, for a specific budget $\pmb{\rho} \in \mathbb{L}$ , whether a classifier $f_{n}$ is robust to perturbations in $\mathbb{B}_{\mathcal{G}}(\pmb{\rho})$ . In other words, they can only test the membership relation $\pmb{\rho} \in \mathbb{K}^{(n)}$ . One possible way of finding the pareto front is through the following three-step process:
+
+1. Use a flood-fill algorithm starting at $(0\quad 0\quad 0\quad 0)^T$ to determine $\mathbb{K}^{(n)}$ .
+2. Identify all points in $\overline{\mathbb{K}^{(n)}} = \mathbb{L}\setminus \mathbb{K}^{(n)}$ that enclose the volume $\mathbb{K}^{(n)}$ .
+3. Remove all enclosing points that are not pareto-optimal.
+
A pseudo-code implementation is provided in Algorithm 1. It has a running time in $\mathcal{O}\left(c\left|\mathbb{K}^{(n)}\right|\right)$ , where $c$ is the worst-case cost of a single membership test $\pmb{\rho} \in \mathbb{K}^{(n)}$ .
+
+Algorithm 1: Determining the Pareto front of base certificates
+Result: Set $\mathbb{P}^{(n)}$ of Pareto-optimal points enclosing the volume of budgets $\mathbb{K}^{(n)}$ within $\mathbb{L} = [r_{\mathbf{X}_{\mathrm{add}}}] \times [r_{\mathbf{X}_{\mathrm{del}}}] \times [r_{\mathbf{A}_{\mathrm{add}}}] \times [r_{\mathbf{A}_{\mathrm{del}}}]$ (see Section 3).
+
+$\mathbb{P}'^{(n)} \gets \{\}$ ; // Potential Pareto points
+closed_set $\leftarrow \{\}$ ; // Visited vectors
+Function flood_fill( $\pmb{\rho}$ ):
+    closed_set $\leftarrow$ closed_set $\cup \{\pmb{\rho}\}$ ;
+    if $\neg (\pmb{\rho} \in \mathbb{K}^{(n)})$ then
+        $\mathbb{P}'^{(n)} \gets \mathbb{P}'^{(n)} \cup \{\pmb{\rho}\}$ ;
+    else
+        for $\pmb{\rho}' \in \{\pmb{\rho} + \pmb{d} \mid \pmb{d} \in \{0,1\}^4 \land ||\pmb{d}||_1 = 1\} \cap \mathbb{L}$ do
+            if $\neg (\pmb{\rho}' \in$ closed_set) then
+                flood_fill( $\pmb{\rho}'$ ); // Consider neighboring vectors
+            end
+        end
+    end
+
+flood_fill( $(0\ 0\ 0\ 0)^T$ );
+$\mathbb{P}^{(n)} \gets \{\}$ ; // Pareto front
+for $\pmb{\rho} \in \mathbb{P}'^{(n)}$ do
+    pareto_optimal $\leftarrow$ true;
+    for $\pmb{\rho}' \in \{\pmb{\rho} - \pmb{d}' \mid \pmb{d}' \in \{0,1\}^4 \land ||\pmb{d}'||_1 \geq 1\}$ do
+        if $\pmb{\rho}' \in \mathbb{P}'^{(n)}$ then
+            pareto_optimal $\leftarrow$ false; // Pareto-optimality does not allow decreasing values while staying in $\overline{\mathbb{K}^{(n)}}$
+            break;
+        end
+    end
+    if pareto_optimal then
+        $\mathbb{P}^{(n)} \gets \mathbb{P}^{(n)} \cup \{\pmb{\rho}\}$
+    end
+end
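+
+Under the assumption that the membership test is available as a black-box oracle, Algorithm 1 can be sketched in plain Python. The `in_K` oracle and the budget limits `r` are placeholders for the base certificate's membership test and the collective budget limits; the flood fill is written iteratively to avoid recursion limits:
+
+```python
+def pareto_front(in_K, r):
+    """Enumerate the Pareto-optimal budget vectors enclosing K^(n).
+
+    in_K: membership oracle, in_K(rho) -> bool, i.e. rho in K^(n).
+    r: tuple (r_X_add, r_X_del, r_A_add, r_A_del) of budget limits.
+    """
+    candidates, closed = set(), set()
+    stack = [(0, 0, 0, 0)]                       # start flood fill at the origin
+    while stack:
+        rho = stack.pop()
+        if rho in closed:
+            continue
+        closed.add(rho)
+        if not in_K(rho):
+            candidates.add(rho)                  # enclosing point outside K^(n)
+            continue
+        for i in range(4):                       # axis-aligned neighbors within L
+            nxt = tuple(rho[j] + (j == i) for j in range(4))
+            if all(nxt[j] <= r[j] for j in range(4)) and nxt not in closed:
+                stack.append(nxt)
+    # discard points dominated by another enclosing point with smaller entries
+    return {
+        rho for rho in candidates
+        if not any(o != rho and all(a <= b for a, b in zip(o, rho))
+                   for o in candidates)
+    }
+```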
+
+# E TIGHTNESS FOR RANDOMIZED SMOOTHING
+
+In this section we prove that if we use the randomized smoothing based certificate from Bojchevski et al. (2020) as our base certificate, then our collective certificate is tight: If we do not make any further assumptions, outside each smoothed classifier's receptive field and its expected output behavior under the smoothing distribution, we cannot obtain a better collective certificate. We define the base certificate and then provide a constructive proof of the resulting collective certificate's tightness.
+
+# E.1 RANDOMIZED SMOOTHING FOR SPARSE DATA
+
+Bojchevski et al. (2020) provide a robustness certificate for classification of arbitrary sparse binary data. Applied to node classification, it can be summarized as follows:
+
+Assume we are given a multi-output classifier $h: \mathbb{G} \mapsto \{1, \dots, C\}^N$ . Define a smoothed classifier $f: \mathbb{G} \mapsto \{1, \dots, C\}^N$ with
+
+$$
+f _ {n} (\boldsymbol {X}, \boldsymbol {A}) = \operatorname {a r g m a x} _ {c \in \{1, \dots , C \}} \Pr \left[ h _ {n} \left(\phi_ {\text {a t t r}} (\boldsymbol {X}), \phi_ {\text {a d j}} (\boldsymbol {A})\right) = c \right] \quad \forall n \in \{1, \dots , N \}, \tag {44}
+$$
+
+where $\phi_{\mathrm{attr}}$ and $\phi_{\mathrm{adj}}$ are two independent randomization schemes that assign probability mass to the set of attribute matrices $\{0,1\}^{N\times D}$ and adjacency matrices $\{0,1\}^{N\times N}$ , respectively. The ran-
+
+domization schemes are defined as follows:
+
+$$
+\Pr \left[ \phi_ {\operatorname {a t t r}} (\boldsymbol {X}) _ {m, d} = 1 - X _ {m, d} \right] = \theta_ {\boldsymbol {X} _ {\mathrm {d e l}}} ^ {X _ {m, d}} \theta_ {\boldsymbol {X} _ {\mathrm {a d d}}} ^ {(1 - X _ {m, d})} \quad \forall m \in \{1, \dots , N \}, d \in \{1, \dots , D \} \tag {45}
+$$
+
+$$
+\Pr \left[ \phi_ {\mathrm {a d j}} (\boldsymbol {A}) _ {i, j} = 1 - A _ {i, j} \right] = \theta_ {\boldsymbol {A} _ {\mathrm {d e l}}} ^ {A _ {i, j}} \theta_ {\boldsymbol {A} _ {\mathrm {a d d}}} ^ {(1 - A _ {i, j})} \quad \forall i, j \in \{1, \dots , N \}. \tag {46}
+$$
+
+Each bit's probability of being flipped is dependent on its current value, but independent of the other bits.
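+
+As a concrete illustration (a numpy sketch, not the authors' implementation): sampling from this smoothing distribution only requires one independent Bernoulli draw per bit, with the flip probability selected by the bit's current value.
+
+```python
+import numpy as np
+
+def smooth_sample(X, theta_add, theta_del, rng):
+    """Sample phi(X) as in Eq. 45/46: each 1-bit flips with probability
+    theta_del, each 0-bit flips with probability theta_add, independently."""
+    flip_prob = np.where(X == 1, theta_del, theta_add)
+    flips = rng.random(X.shape) < flip_prob
+    return np.where(flips, 1 - X, X)
+```
+
+The same function covers both $\phi_{\mathrm{attr}}$ and $\phi_{\mathrm{adj}}$ , applied to $\boldsymbol{X}$ and $\boldsymbol{A}$ with their respective flip probabilities.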
+
+An adversarially perturbed graph $(X', A')$ is successful in changing prediction $y_{n} = f_{n}(X, A)$ , if
+
+$$
+y _ {n} \neq \operatorname {a r g m a x} _ {c \in \{1, \dots , C \}} \Pr \left[ h _ {n} \left(\phi_ {\text {a t t r}} \left(\boldsymbol {X} ^ {\prime}\right), \phi_ {\text {a d j}} \left(\boldsymbol {A} ^ {\prime}\right)\right) = c \right]. \tag {47}
+$$
+
+Evaluating this inequality is usually not tractable. We can however relax the problem: Let $p_n = \operatorname*{Pr}\left[h_n(\phi_{\mathrm{attr}}(\pmb {X}),\phi_{\mathrm{adj}}(\pmb {A})) = y_n\right]$ and let $\mathbb{H}$ be the set of all possible classifiers for graphs in $\mathbb{G}$ (including non-deterministic ones). If
+
+$$
+\left. \left(\min _ {\tilde {h} _ {n} \in \mathbb {H}} \Pr \left[ \tilde {h} _ {n} \left(\phi_ {\operatorname {a t t r}} \left(\boldsymbol {X} ^ {\prime}\right), \phi_ {\operatorname {a d j}} \left(\boldsymbol {A} ^ {\prime}\right)\right) = y _ {n} \right]\right) > 0. 5 \right. \tag {48}
+$$
+
+$$
+\text {s . t .} \quad \Pr \left[ \tilde {h} _ {n} \left(\phi_ {\operatorname {a t t r}} (\boldsymbol {X}), \phi_ {\operatorname {a d j}} (\boldsymbol {A})\right) = y _ {n} \right] = p _ {n}, \tag {49}
+$$
+
+then $f_{n}(\pmb{X}',\pmb{A}') = f_{n}(\pmb{X},\pmb{A})$ . It is easy to see why: The unsmoothed $h_n$ is in the set defined by Eq. 49, so the result of the optimization problem is a lower bound on $\operatorname*{Pr}\left[h_n(\phi_{\mathrm{attr}}(\pmb{X}'),\phi_{\mathrm{adj}}(\pmb{A}')) = y_n\right]$ . If this lower bound is larger than 0.5, then $y_{n}$ is guaranteed to be the argmax class.
+
+Optimizing over the set of all possible classifiers might appear hard. We can however use the approach of Lee et al. (2019) to find an optimum. Let $\pmb{X}^{\prime},\pmb{A}^{\prime}$ be a graph that results from $b_{\mathbf{X}_{\mathrm{add}}}$ attribute additions, $b_{\mathbf{X}_{\mathrm{del}}}$ attribute deletions, $b_{\mathbf{A}_{\mathrm{add}}}$ edge additions, and $b_{\mathbf{A}_{\mathrm{del}}}$ edge deletions applied to $(\pmb {X},\pmb {A})$ . We can partition the set of all graphs $\mathbb{G}$ into $(b_{\mathbf{X}_{\mathrm{add}}} + b_{\mathbf{X}_{\mathrm{del}}} + 1)(b_{\mathbf{A}_{\mathrm{add}}} + b_{\mathbf{A}_{\mathrm{del}}} + 1)$ regions that have a constant likelihood ratio under our smoothing distribution:
+
+$$
+\left\{ \mathbb {J} _ {q _ {\boldsymbol {X}}, q _ {\boldsymbol {A}}} \;\middle|\; q _ {\boldsymbol {X}}, q _ {\boldsymbol {A}} \in \mathbb {N} _ {0} \wedge q _ {\boldsymbol {X}} \leq b _ {\boldsymbol {X} _ {\mathrm {add}}} + b _ {\boldsymbol {X} _ {\mathrm {del}}} \wedge q _ {\boldsymbol {A}} \leq b _ {\boldsymbol {A} _ {\mathrm {add}}} + b _ {\boldsymbol {A} _ {\mathrm {del}}} \right\} \tag {50}
+$$
+
+with
+
+$$
+\left(\left(\boldsymbol {X} ^ {\prime \prime}, \boldsymbol {A} ^ {\prime \prime}\right) \in \mathbb {J} _ {q _ {\boldsymbol {X}}, q _ {\boldsymbol {A}}}\right) \Longrightarrow \left(\frac {\Pr \left[ \phi_ {\text {a t t r}} (\boldsymbol {X}) = X ^ {\prime \prime} \wedge \phi_ {\text {a d j}} (\boldsymbol {A}) = A ^ {\prime \prime} \right]}{\Pr \left[ \phi_ {\text {a t t r}} \left(\boldsymbol {X} ^ {\prime}\right) = X ^ {\prime \prime} \wedge \phi_ {\text {a d j}} \left(\boldsymbol {A} ^ {\prime}\right) = A ^ {\prime \prime} \right]} = \eta_ {q _ {\boldsymbol {X}}, q _ {\boldsymbol {A}}}\right), \tag {51}
+$$
+
+where the $\eta_{\cdot ,\cdot}\in \mathbb{R}_{+}$ are constants. The regions have a particular semantic meaning, which will be important for our later proof: Any $(X^{\prime \prime},A^{\prime \prime})\in \mathbb{J}_{q_X,q_A}$ has $q_{X}$ attribute bits and $q_{A}$ adjacency bits that have the same value in $(X,A)$ and a different value in $(X^{\prime},A^{\prime})$ :
+
+$$
+\begin{array}{r l r} & & \left(\left(\boldsymbol {X} ^ {\prime \prime}, \boldsymbol {A} ^ {\prime \prime}\right) \in \mathbb {J} _ {q _ {\boldsymbol {X}}, q _ {\boldsymbol {A}}}\right) \Longleftrightarrow \\ & & \left| \left\{m, d \mid X _ {m, d} ^ {\prime \prime} = X _ {m, d} \neq X _ {m, d} ^ {\prime} \right\} \right| = q _ {\boldsymbol {X}} \wedge \left| \left\{i, j \mid A _ {i, j} ^ {\prime \prime} = A _ {i, j} \neq A _ {i, j} ^ {\prime} \right\} \right| = q _ {\boldsymbol {A}}. \end{array} \tag {52}
+$$
+
+As proven by Lee et al. (2019), we can find an optimal solution to Eq. 48 by optimizing over the expected output of $\tilde{h}$ within each region of constant likelihood ratio. This can be implemented via the following linear program:
+
+$$
+\Lambda_ {n} \left(b _ {\mathbf {X} _ {\mathrm {a d d}}}, b _ {\mathbf {X} _ {\mathrm {d e l}}}, b _ {\mathbf {A} _ {\mathrm {a d d}}}, b _ {\mathbf {A} _ {\mathrm {d e l}}}, p _ {n}\right) :=
+$$
+
+$$
+\min _ {\boldsymbol {H} ^ {(n)}} \sum_ {q _ {\boldsymbol {X}} = 0} ^ {b _ {\boldsymbol {X} _ {\mathrm {add}}} + b _ {\boldsymbol {X} _ {\mathrm {del}}}} \sum_ {q _ {\boldsymbol {A}} = 0} ^ {b _ {\boldsymbol {A} _ {\mathrm {add}}} + b _ {\boldsymbol {A} _ {\mathrm {del}}}} H _ {q _ {\boldsymbol {X}}, q _ {\boldsymbol {A}}} ^ {(n)} \Pr \left[ \left(\phi_ {\mathrm {attr}} \left(\boldsymbol {X} ^ {\prime}\right), \phi_ {\mathrm {adj}} \left(\boldsymbol {A} ^ {\prime}\right)\right) \in \mathbb {J} _ {q _ {\boldsymbol {X}}, q _ {\boldsymbol {A}}} \right] \tag {53}
+$$
+
+$$
+\text {s.t.} \quad \sum_ {q _ {\boldsymbol {X}} = 0} ^ {b _ {\boldsymbol {X} _ {\mathrm {add}}} + b _ {\boldsymbol {X} _ {\mathrm {del}}}} \sum_ {q _ {\boldsymbol {A}} = 0} ^ {b _ {\boldsymbol {A} _ {\mathrm {add}}} + b _ {\boldsymbol {A} _ {\mathrm {del}}}} H _ {q _ {\boldsymbol {X}}, q _ {\boldsymbol {A}}} ^ {(n)} \Pr \left[ \left(\phi_ {\mathrm {attr}} (\boldsymbol {X}), \phi_ {\mathrm {adj}} (\boldsymbol {A})\right) \in \mathbb {J} _ {q _ {\boldsymbol {X}}, q _ {\boldsymbol {A}}} \right] = p _ {n}, \tag {54}
+$$
+
+$$
+\boldsymbol {H} ^ {(n)} \in [ 0, 1 ] ^ {\left(b _ {\boldsymbol {X} _ {\mathrm {add}}} + b _ {\boldsymbol {X} _ {\mathrm {del}}} + 1\right) \times \left(b _ {\boldsymbol {A} _ {\mathrm {add}}} + b _ {\boldsymbol {A} _ {\mathrm {del}}} + 1\right)}. \tag {55}
+$$
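+
+Given the region probabilities $\Pr\left[(\phi_{\mathrm{attr}}(\cdot), \phi_{\mathrm{adj}}(\cdot)) \in \mathbb{J}_{q_X,q_A}\right]$ under the clean and perturbed inputs, this linear program is small and can be handed to an off-the-shelf solver. A sketch using scipy, where the probability arrays `p_clean` and `p_pert` are assumed to be precomputed from the closed-form expressions of Bojchevski et al. (2020):
+
+```python
+import numpy as np
+from scipy.optimize import linprog
+
+def base_certificate_value(p_clean, p_pert, p_n):
+    """Solve Eq. 53-55: minimize sum_q H_q * p_pert_q subject to
+    sum_q H_q * p_clean_q = p_n and 0 <= H_q <= 1, where p_clean_q and
+    p_pert_q hold the probability of region J_q under the clean and
+    perturbed input. Returns Lambda_n."""
+    c = np.ravel(p_pert)                    # objective: perturbed-input mass
+    A_eq = np.ravel(p_clean)[None, :]       # single equality constraint
+    res = linprog(c, A_eq=A_eq, b_eq=[p_n], bounds=(0.0, 1.0), method="highs")
+    assert res.success
+    return res.fun
+```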
+
+Any optimal solution $\tilde{H}^{(n)}$ corresponds to a single-output classifier $\tilde{h}_n$ that, given an input graph $(X'', A'')$ , simply counts the number of attribute bits $q_{X}$ and adjacency bits $q_{A}$ that have the same
+
+value in $(\mathbf{X},\mathbf{A})$ and a different value in $(\mathbf{X}^{\prime},\mathbf{A}^{\prime})$ , and then assigns a probability of $H_{q_X,q_A}^{(n)}$ to class $y_{n}$ and $1 - H_{q_X,q_A}^{(n)}$ to the remaining classes.
+
+The optimal value of Eq. 53 being larger than 0.5 for a fixed perturbed graph $(X', A')$ only proves that this particular graph is not a successful attack on $f_{n}$ . For a robustness certificate, we want to know the result for a worst-case graph. However, the result depends only on the number of perturbations $b_{\mathbf{X}_{\mathrm{add}}}, b_{\mathbf{X}_{\mathrm{del}}}, b_{\mathbf{A}_{\mathrm{add}}}$ and $b_{\mathbf{A}_{\mathrm{del}}}$ , not on which specific bits are perturbed. Therefore, we can solve the problem for an arbitrary fixed perturbed graph with the given number of perturbations and obtain a valid robustness certificate.
+
+For use in our collective certificate, we define the set of budgets $\mathbb{K}^{(n)}$ for which prediction $f_{n}$ is certifiably robust as
+
+$$
+\mathbb {K} ^ {(n)} = \left\{\left(b _ {\boldsymbol {X} _ {\mathrm {a d d}}} b _ {\boldsymbol {X} _ {\mathrm {d e l}}} b _ {\boldsymbol {A} _ {\mathrm {a d d}}} b _ {\boldsymbol {A} _ {\mathrm {d e l}}}\right) ^ {T} \in \mathbb {L} \mid \Lambda_ {n} \left(b _ {\boldsymbol {X} _ {\mathrm {a d d}}}, b _ {\boldsymbol {X} _ {\mathrm {d e l}}}, b _ {\boldsymbol {A} _ {\mathrm {a d d}}}, b _ {\boldsymbol {A} _ {\mathrm {d e l}}}, p _ {n}\right) > 0. 5 \right\} \tag {56}
+$$
+
+where $\Lambda_{n}$ is defined as in Eq. 53 and $\mathbb{L} = [r_{\mathbf{X}_{\mathrm{add}}}] \times [r_{\mathbf{X}_{\mathrm{del}}}] \times [r_{\mathbf{A}_{\mathrm{add}}}] \times [r_{\mathbf{A}_{\mathrm{del}}}]$ (with $[k] = \{0, \dots, k\}$ ) is the set of vectors that do not exceed the available collective budget (see Section 3).
+
+# E.2 TIGHTNESS PROOF
+
+With the definition of our base certificate in place, we can now formalize and prove that the resulting collective certificate is tight. Recall that randomized smoothing is a black-box method. The classifier that is being smoothed is treated as unknown. A robustness certificate based on randomized smoothing has to account for the worst-case (i.e. least robust under the given threat model) classifier. Our collective certificate lower-bounds the number of predictions that are guaranteed to be simultaneously robust. We show that with the randomized smoothing base certificate from the previous section, it actually yields the exact number of robust predictions, assuming the worst-case unsmoothed classifier.
+
+Theorem 1 Let $(\mathbf{X},\mathbf{A})$ be an unperturbed graph. Let $h:\mathbb{G}\mapsto \{1,\dots ,C\} ^N$ be a (potentially non-deterministic) multi-output classifier. Let $f:\mathbb{G}\mapsto \{1,\ldots ,C\}^N$ be the corresponding smoothed classifier with
+
+$$
+f _ {n} \left(\boldsymbol {X} ^ {\prime \prime}, \boldsymbol {A} ^ {\prime \prime}\right) = \operatorname {a r g m a x} _ {c \in \{1, \dots , C \}} \Pr \left[ h _ {n} \left(\phi_ {\mathrm {a t t r}} \left(\boldsymbol {X} ^ {\prime \prime}\right), \phi_ {\mathrm {a d j}} \left(\boldsymbol {A} ^ {\prime \prime}\right)\right) = c \right], \tag {57}
+$$
+
+$$
+y _ {n} = f _ {n} (\boldsymbol {X}, \boldsymbol {A}), \tag {58}
+$$
+
+$$
+p _ {n} = \Pr \left[ h _ {n} \left(\phi_ {\operatorname {a t t r}} (\boldsymbol {X}), \phi_ {\operatorname {a d j}} (\boldsymbol {A})\right) = y _ {n} \right], \tag {59}
+$$
+
+and randomization schemes $\phi_{\mathrm{attr}}(\mathbf{X}),\phi_{\mathrm{adj}}(\mathbf{A})$ defined as in Eq. 45 and Eq. 46. Let $\psi^{(n)} \in \{0,1\}^N,\Psi^{(n)} \in \{0,1\}^{N\times N}$ be the receptive field indicators corresponding to $f_{n}$ (see Eq. 2). Let $\mathbb{B}_{\mathcal{G}}$ be a set of admissible perturbed graphs, constrained by parameters $r_{\mathbf{X}_{\mathrm{add}}},r_{\mathbf{X}_{\mathrm{del}}},r_{\mathbf{A}_{\mathrm{add}}},r_{\mathbf{A}_{\mathrm{del}}},r_{\mathbf{X}_{\mathrm{add\_loc}}},r_{\mathbf{A}_{\mathrm{add\_loc}}},r_{\mathbf{A}_{\mathrm{del\_loc}}},\sigma$ as defined in Section 2. Let $\mathbb{T}$ be the indices of nodes targeted by an adversary. Under the given parameters, let $o^*$ be the optimal value of the optimization problem defined in Section C.
+
+Then there are a perturbed graph $(\mathbf{X}',\mathbf{A}')$ , a non-deterministic multi-output classifier $\tilde{h}$ and a corresponding smoothed multi-output classifier $\tilde{f}$ with
+
+$$
+\tilde {f} _ {n} (\boldsymbol {X}, \boldsymbol {A}) = \operatorname {a r g m a x} _ {c \in \{1, \dots , C \}} \Pr \left[ \tilde {h} _ {n} \left(\phi_ {\mathrm {a t t r}} (\boldsymbol {X}), \phi_ {\mathrm {a d j}} (\boldsymbol {A})\right) = c \right] \forall n \in \{1, \dots , N \} \tag {60}
+$$
+
+such that
+
+$$
+\left| \left\{n \in \mathbb {T} \left| \tilde {f} _ {n} \left(\boldsymbol {X} ^ {\prime}, \boldsymbol {A} ^ {\prime}\right) = y _ {n} \right. \right\} \right| = o *, \tag {61}
+$$
+
+$$
+\Pr \left[ \tilde {h} _ {n} \left(\phi_ {\operatorname {a t t r}} (\boldsymbol {X}), \phi_ {\operatorname {a d j}} (\boldsymbol {A})\right) = y _ {n} \right] = p _ {n}, \tag {62}
+$$
+
+and each $\tilde{f}_n$ is only dependent on nodes and edges for which $\psi^{(n)}$ and $\Psi^{(n)}$ have value 1.
+
+Proof: The optimization problem from Section C has three parameters $\pmb{b}_{\mathbf{X}_{\mathrm{add}}}, \pmb{b}_{\mathbf{X}_{\mathrm{del}}}, \pmb{B}_{\mathbf{A}}$ , which specify the budget allocation of the adversary. Let $\pmb{b}_{\mathbf{X}_{\mathrm{add}}}^*$ , $\pmb{b}_{\mathbf{X}_{\mathrm{del}}}^*$ , $\pmb{B}_{\mathbf{A}}^*$ be their value in the optimum. We can construct a perturbed graph $(X', A')$ from the clean graph $(X, A)$ as follows: For every node $n$ , set the first $b_{\mathbf{X}_{\mathrm{add}},n}^*$ zero-valued bits to one and the first $b_{\mathbf{X}_{\mathrm{del}},n}^*$ non-zero bits to zero. Then,
+
+flip any entry $A_{n,m}$ for which $B_{\mathbf{A},n,m}^{*} = 1$ . The parameters $\pmb{b}_{\pmb{X}_{\mathrm{add}}}^{*},\pmb{b}_{\pmb{X}_{\mathrm{del}}}^{*},\pmb{B}_{\pmb{A}}^{*}$ are part of a feasible solution to the optimization problem. In particular, they must fulfill constraints Eq. 31 to Eq. 36, which guarantee that the constructed graph is in $\mathbb{B}_{\mathcal{G}}$ .
+
+Given the perturbed graph $(X', A')$ , we can calculate the amount of perturbation in the receptive field of each prediction $f_{n}$ :
+
+$$
+u _ {\boldsymbol {X} _ {\mathrm {a d d}}} ^ {(n)} = \left(\boldsymbol {b} _ {\boldsymbol {X} _ {\mathrm {a d d}}} ^ {*}\right) ^ {T} \boldsymbol {\psi} ^ {(n)} \tag {63}
+$$
+
+$$
+u _ {\boldsymbol {X} _ {\mathrm {d e l}}} ^ {(n)} = \left(\boldsymbol {b} _ {\boldsymbol {X} _ {\mathrm {d e l}}} ^ {*}\right) ^ {T} \boldsymbol {\psi} ^ {(n)} \tag {64}
+$$
+
+$$
+u _ {\boldsymbol {A} _ {\mathrm {add}}} ^ {(n)} = \sum_ {(i, j): A _ {i, j} = 0} \Psi_ {i, j} ^ {(n)} B _ {\boldsymbol {A}, i, j} ^ {*} \tag {65}
+$$
+
+$$
+u _ {\boldsymbol {A} _ {\mathrm {del}}} ^ {(n)} = \sum_ {(i, j): A _ {i, j} = 1} \Psi_ {i, j} ^ {(n)} B _ {\boldsymbol {A}, i, j} ^ {*} \tag {66}
+$$
+
+We can now specify the unsmoothed multi-output classifier $\tilde{h}$ . Recall that in the collective certificate's optimization problem, each $f_{n}$ is associated with a binary variable $t_n$ . In the optimum, $(t_n^* = 0)\iff \left[\left(u_{\mathbf{X}_{\mathrm{add}}}^{(n)}\quad u_{\mathbf{X}_{\mathrm{del}}}^{(n)}\quad u_{\mathbf{A}_{\mathrm{add}}}^{(n)}\quad u_{\mathbf{A}_{\mathrm{del}}}^{(n)}\right)^T\in \mathbb{K}^{(n)}\right]$ , i.e. $f_{n}$ 's robustness is guaranteed by the base certificate, and $o^{*} = |\mathbb{T}| - \sum_{n\in \mathbb{T}} t_{n}^{*}$ .
+
+Case 1: $n \notin \mathbb{T}$ . Choose $\tilde{h}_n = h_n$ . Trivially, constraint Eq. 62 is fulfilled, and $\tilde{f}_n$ is only dependent on nodes and edges for which $\psi^{(n)}$ and $\Psi^{(n)}$ have value 1. Whether $\tilde{f}_n$ is adversarially attacked or not does not influence Eq. 61, as $n \notin \mathbb{T}$ .
+
+Case 2: $n \in \mathbb{T}$ and $t_n^* = 0$ . Choose $\tilde{h}_n = h_n$ . Again, constraint Eq. 62 is fulfilled, and $\tilde{f}_n$ is only dependent on nodes and edges for which $\pmb{\psi}^{(n)}$ and $\Psi^{(n)}$ have value 1. Since $t_n^* = 0$ , we know that $\left(u_{\pmb{X}_{\mathrm{add}}}^{(n)} u_{\pmb{X}_{\mathrm{del}}}^{(n)} u_{\pmb{A}_{\mathrm{add}}}^{(n)} u_{\pmb{A}_{\mathrm{del}}}^{(n)}\right)^T \in \mathbb{K}^{(n)}$ , i.e. $\tilde{f}_n(\pmb{X}', \pmb{A}') = y_n$ .
+
+Case 3: $n \in \mathbb{T}$ and $t_n^* = 1$ . Since $t_n^* = 1$ , we know that $f_n$ is not certified by the base certificate: $\left(u_{\pmb{X}_{\mathrm{add}}}^{(n)}\,u_{\pmb{X}_{\mathrm{del}}}^{(n)}\,u_{\pmb{A}_{\mathrm{add}}}^{(n)}\,u_{\pmb{A}_{\mathrm{del}}}^{(n)}\right)^T \notin \mathbb{K}^{(n)}$ . Let $\pmb{H}^{*(n)} \in [0,1]^{\left(u_{\pmb{X}_{\mathrm{add}}}^{(n)} + u_{\pmb{X}_{\mathrm{del}}}^{(n)} + 1\right) \times \left(u_{\pmb{A}_{\mathrm{add}}}^{(n)} + u_{\pmb{A}_{\mathrm{del}}}^{(n)} + 1\right)}$ be the optimum of the linear program underlying the base certificate (see Eq. 53 to Eq. 55). Define $\tilde{h}_n$ to have the following non-deterministic output behavior:
+
+$$
+\Pr \left[ \tilde {h} _ {n} \left(\boldsymbol {X} ^ {\prime \prime}, \boldsymbol {A} ^ {\prime \prime}\right) = y _ {n} \right] = H _ {q _ {\boldsymbol {X}} ^ {(n)} \left(\boldsymbol {X} ^ {\prime \prime}\right), q _ {\boldsymbol {A}} ^ {(n)} \left(\boldsymbol {A} ^ {\prime \prime}\right)} ^ {* (n)} \quad \forall \left(\boldsymbol {X} ^ {\prime \prime}, \boldsymbol {A} ^ {\prime \prime}\right) \in \mathbb {G} \tag {67}
+$$
+
+$$
+\Pr \left[ \tilde {h} _ {n} \left(\boldsymbol {X} ^ {\prime \prime}, \boldsymbol {A} ^ {\prime \prime}\right) = y _ {n} ^ {\prime} \right] = 1 - H _ {q _ {\boldsymbol {X}} ^ {(n)} \left(\boldsymbol {X} ^ {\prime \prime}\right), q _ {\boldsymbol {A}} ^ {(n)} \left(\boldsymbol {A} ^ {\prime \prime}\right)} ^ {* (n)} \quad \forall \left(\boldsymbol {X} ^ {\prime \prime}, \boldsymbol {A} ^ {\prime \prime}\right) \in \mathbb {G} \tag {68}
+$$
+
+for some $y_{n}^{\prime}\neq y_{n}$ and with
+
+$$
+q _ {\boldsymbol {X}} ^ {(n)} \left(\boldsymbol {X} ^ {\prime \prime}\right) = \left| \left\{(n, d) \mid X _ {n, d} ^ {\prime \prime} = X _ {n, d} \neq X _ {n, d} ^ {\prime} \right\} \right| \tag {69}
+$$
+
+$$
+\left. q _ {\boldsymbol {A}} ^ {(n)} \left(\boldsymbol {A} ^ {\prime \prime}\right) = \left| \left\{(n, m) \mid A _ {n, m} ^ {\prime \prime} = A _ {n, m} \neq A _ {n, m} ^ {\prime} \right\} \right|. \right. \tag {70}
+$$
+
+As discussed at the end of Section E.1, this classifier simply counts the number of bits in $(X'', A'')$ that are within $f_{n}$ 's receptive field and have the same value in the clean graph $(X, A)$ and a different value in the perturbed graph $(X', A')$ . Since $H^{*(n)}$ is a valid solution to the linear program underlying the base certificate, we know that Eq. 62 is fulfilled, as it is equivalent to Eq. 54 from the base certificate. Since $\left(u_{X_{\mathrm{add}}}^{(n)} u_{X_{\mathrm{del}}}^{(n)} u_{A_{\mathrm{add}}}^{(n)} u_{A_{\mathrm{del}}}^{(n)}\right)^T \notin \mathbb{K}^{(n)}$ , we know that $\operatorname*{Pr}\left[\tilde{h}_n(\phi_{\mathrm{attr}}(\boldsymbol{X}'), \phi_{\mathrm{adj}}(\boldsymbol{A}')) = y_n'\right] \geq 0.5$ (see Eq. 56), i.e. $\tilde{f}_n$ is successfully attacked: $\tilde{f}_n(\boldsymbol{X}', \boldsymbol{A}') = y_n' \neq \tilde{f}_n(\boldsymbol{X}, \boldsymbol{A})$ .
+
+By construction, we have exactly $o^*$ nodes for which $\tilde{f}_n(\pmb{X}',\pmb{A}') = y_n$ and the remaining constraints are fulfilled as well.
+
+# F HYPERPARAMETERS
+
+Training schedule for smoothed classifiers. Training is performed in a semi-supervised fashion with 20 nodes per class as a train set. Another 20 nodes per class serve as a validation set. Models are trained with Adam (learning rate $= 0.001$ [0.01 for SMA], $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , $\epsilon = 10^{-8}$ , weight decay $= 0.001$ ) for 3000 epochs, using the average cross-entropy loss across all training set nodes, with a batch size of 1. We employ early stopping if the validation loss does not decrease for 50 epochs (300 epochs for SMA). In each epoch, a different graph is sampled from the smoothing distribution. We do not use the KL-divergence based regularization loss proposed for RGCN, as we found it to decrease the certifiable robustness of the model.
+
+Training schedule for non-smoothed GCN. Training is performed in a semi-supervised fashion with 20 nodes per class as a train set. Another 20 nodes per class serve as a validation set. For the first 100 of 1000 epochs, models are trained with Adam (learning rate $= 0.01$ , $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , $\epsilon = 10^{-8}$ , weight decay $= 10^{-5}$ ), using the average cross-entropy loss across all training set nodes, with a batch size of 8. After 100 epochs, we add the robust loss proposed in (Zügner & Günnemann, 2019) (local budget $q = 21$ , global budget $Q = 12$ , training node classification margin $= \log(90/10)$ , unlabeled node classification margin $= \log(60/40)$ ). The gradient for the robust loss term is accumulated over 5 epochs before each weight update in order to simulate larger batch sizes.
+
+Network parameters. In all models, hidden linear and convolutional layers are followed by a ReLU nonlinearity. During training, each ReLU nonlinearity is followed by $50\%$ dropout. For GCN and GAT, we use two convolution layers with 64 hidden activations. The number of attention heads for GAT is set to 8 for the first layer and 1 for the second layer. RGCN uses independent Gaussians as its internal representation (i.e. each feature dimension has a mean and a variance). For RGCN, we use one linear layer, followed by two convolutional layers. We set the number of hidden activations to 32 for the means and 32 for the variances. Dropout is applied to both the means and variances. For APPNP, we use two linear layers with 64 hidden activations, followed by a propagation layer based on approximate personalized PageRank (teleport probability $= 0.15$ , iterations $= 10$ ). To ensure locality, we set all but the top 64 entries of each row in the approximate PageRank matrix to 0. For SMA, we first transform each node's features using a linear layer with 64 hidden activations. We then apply soft medoid aggregation ( $k = 64$ , $T = 10$ ) based on the approximate personalized PageRank matrix (teleport probability $= 0.15$ , iterations $= 2$ ). Note that we use the alternative parameterization from appendix B.5 of (Geisler et al., 2020) that is designed to improve robustness to attribute perturbations. After the aggregation, we apply a ReLU nonlinearity. Finally, we apply a second linear layer to each node independently.
+
+Randomized smoothing. Randomized smoothing introduces four additional hyperparameters $\theta_{\mathbf{X}_{\mathrm{add}}}, \theta_{\mathbf{X}_{\mathrm{del}}}, \theta_{\mathbf{A}_{\mathrm{add}}}, \theta_{\mathbf{A}_{\mathrm{del}}}$ , which control the probability of flipping bits in the attribute and adjacency matrix under the smoothing distribution. If we only certify attribute perturbations, we set $\theta_{\mathbf{A}_{\mathrm{add}}} = \theta_{\mathbf{A}_{\mathrm{del}}} = 0$ , $\theta_{\mathbf{X}_{\mathrm{add}}} = 0.002$ and $\theta_{\mathbf{X}_{\mathrm{del}}} = 0.6$ . If we only certify adjacency perturbations, we set $\theta_{\mathbf{X}_{\mathrm{add}}} = \theta_{\mathbf{X}_{\mathrm{del}}} = 0$ , $\theta_{\mathbf{A}_{\mathrm{add}}} = 0$ and $\theta_{\mathbf{A}_{\mathrm{del}}} = 0.4$ . If we jointly certify attribute and adjacency perturbations, we set $\theta_{\mathbf{X}_{\mathrm{add}}} = 0.002$ , $\theta_{\mathbf{X}_{\mathrm{del}}} = 0.6$ , $\theta_{\mathbf{A}_{\mathrm{add}}} = 0$ and $\theta_{\mathbf{A}_{\mathrm{del}}} = 0.4$ . Exactly evaluating smoothed classifiers is not possible; they have to be approximated via sampling. We use 1000 samples to determine a classifier's majority class ( $y_n$ ), followed by $10^6$ samples to estimate the probability of the majority class ( $p_n$ ) via a Clopper-Pearson confidence interval. Applying Bonferroni correction, the confidence level for each confidence interval is set to $1 - 0.01 / N$ to obtain an overall confidence level of $99\%$ for all certificates.
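+
+The estimation step can be sketched as follows. A one-sided Clopper-Pearson bound gives a lower confidence bound on $p_n$ , and Bonferroni correction divides the error budget across the $N$ simultaneous certificates; the node count `N = 2708` below is a hypothetical example, not a value from the paper:
+
+```python
+from scipy.stats import beta
+
+def clopper_pearson_lower(successes, trials, alpha):
+    """One-sided Clopper-Pearson lower confidence bound on p_n,
+    valid with probability at least 1 - alpha."""
+    if successes == 0:
+        return 0.0
+    return beta.ppf(alpha, successes, trials - successes + 1)
+
+N = 2708                                   # hypothetical number of nodes
+alpha = 0.01 / N                           # Bonferroni: overall 99% confidence
+p_n_lower = clopper_pearson_lower(successes=985_000, trials=10**6, alpha=alpha)
+```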
\ No newline at end of file
diff --git a/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/images.zip b/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a3f2718e0e26af03ab0c69361bd77cd479861634
--- /dev/null
+++ b/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1257a40b9ebeb7e2ed796a569a8b5c631cd9d7c778d7195150847454bc0c30a2
+size 818433
diff --git a/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/layout.json b/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..329473697e6515d181251b14d4fed88e8c9a8644
--- /dev/null
+++ b/collectiverobustnesscertificatesexploitinginterdependenceingraphneuralnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3815b149a994e1bebafb1bda4d4b050ba3a194089db233989bf0645e091620f1
+size 924798
diff --git a/colorizationtransformer/fbc87209-6557-47b6-b54c-e0e41048ef5a_content_list.json b/colorizationtransformer/fbc87209-6557-47b6-b54c-e0e41048ef5a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a5df080e13241770c87e82b1cf0ab3c21e07a8ef
--- /dev/null
+++ b/colorizationtransformer/fbc87209-6557-47b6-b54c-e0e41048ef5a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07c739613efa7c9e246879e9cc369dbf6846c9f5aae78dfce417b289133ef475
+size 103663
diff --git a/colorizationtransformer/fbc87209-6557-47b6-b54c-e0e41048ef5a_model.json b/colorizationtransformer/fbc87209-6557-47b6-b54c-e0e41048ef5a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8aa57c90499d8e98766dc338595ff92808d2f3d0
--- /dev/null
+++ b/colorizationtransformer/fbc87209-6557-47b6-b54c-e0e41048ef5a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cbd4853e2e1c0c7ea5c60296b67def62c137c8e0d6f916119e2d6284248df6dc
+size 126107
diff --git a/colorizationtransformer/fbc87209-6557-47b6-b54c-e0e41048ef5a_origin.pdf b/colorizationtransformer/fbc87209-6557-47b6-b54c-e0e41048ef5a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8b74949a7a4dfa0fc84d97032293579f81040a6d
--- /dev/null
+++ b/colorizationtransformer/fbc87209-6557-47b6-b54c-e0e41048ef5a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:963f8b22149a94a46a7f37ae2ebd87438f4657675cc9fe851bbb98cc418c6f4e
+size 32582448
diff --git a/colorizationtransformer/full.md b/colorizationtransformer/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..797909619d48d90e417902615c847eab45485143
--- /dev/null
+++ b/colorizationtransformer/full.md
@@ -0,0 +1,428 @@
+# COLORIZATION TRANSFORMER
+
+Manoj Kumar, Dirk Weissenborn & Nal Kalchbrenner
+
+Google Research, Brain Team
+
+{mechcoder,diwe,nalk}@google.com
+
+# ABSTRACT
+
+We present the Colorization Transformer, a novel approach for diverse high-fidelity image colorization based on self-attention. Given a grayscale image, the colorization proceeds in three steps. We first use a conditional autoregressive transformer to produce a low-resolution coarse coloring of the grayscale image. Our architecture adopts conditional transformer layers to effectively condition on the grayscale input. Two subsequent fully parallel networks upsample the coarse colored low-resolution image into a finely colored high-resolution image. Sampling from the Colorization Transformer produces diverse colorings whose fidelity outperforms the previous state-of-the-art on colorizing ImageNet, based on FID results and on a human evaluation in a Mechanical Turk test. Remarkably, in more than $60\%$ of cases human evaluators prefer the highest rated among three generated colorings over the ground truth. The code and pre-trained checkpoints for Colorization Transformer are publicly available at this url.
+
+# 1 INTRODUCTION
+
+
+Figure 1: Samples of our model showing diverse, high-fidelity colorizations.
+
+Image colorization is a challenging, inherently stochastic task that requires a semantic understanding of the scene as well as knowledge of the world. Core immediate applications of the technique include producing organic new colorizations of existing image and video content as well as giving life to originally grayscale media, such as old archival images (Tsaftaris et al., 2014), videos (Geshwind, 1986) and black-and-white cartoons (Sykora et al., 2004; Qu et al., 2006; Cinarel & Zhang, 2017). Colorization also has important technical uses as a way to learn meaningful representations without explicit supervision (Zhang et al., 2016; Larsson et al., 2016; Vondrick et al., 2018) or as an unsupervised data augmentation technique, whereby diverse semantics-preserving colorizations of labelled images are produced with a colorization model trained on a potentially much larger set of unlabelled images.
+
+The current state-of-the-art in automated colorization are neural generative approaches based on log-likelihood estimation (Guadarrama et al., 2017; Royer et al., 2017; Ardizzone et al., 2019). Probabilistic models are a natural fit for the one-to-many task of image colorization and obtain better results than earlier deterministic approaches avoiding some of the persistent pitfalls (Zhang et al., 2016). Probabilistic models also have the central advantage of producing multiple diverse colorings that are sampled from the learnt distribution.
+
+In this paper, we introduce the Colorization Transformer (ColTran), a probabilistic colorization model composed only of axial self-attention blocks (Ho et al., 2019b; Wang et al., 2020). The main
+
+advantages of axial self-attention blocks are the ability to capture a global receptive field with only two layers and $\mathcal{O}(D\sqrt{D})$ instead of $\mathcal{O}(D^2)$ complexity. They can be implemented efficiently using matrix multiplications on modern accelerators such as TPUs (Jouppi et al., 2017). In order to enable colorization of high-resolution grayscale images, we decompose the task into three simpler sequential subtasks: coarse low-resolution autoregressive colorization, parallel color and spatial super-resolution. For coarse low-resolution colorization, we apply a conditional variant of the Axial Transformer (Ho et al., 2019b), a state-of-the-art autoregressive image generation model that does not require custom kernels (Child et al., 2019). While Axial Transformers support conditioning by biasing the input, we find that directly conditioning the transformer layers can improve results significantly. Moreover, by leveraging the semi-parallel sampling mechanism of Axial Transformers, we are able to colorize images faster and at higher resolution than previous work (Guadarrama et al., 2017), which in turn improves colorization fidelity. Finally, we employ fast parallel deterministic upsampling models to super-resolve the coarsely colored image into the final high-resolution output. In summary, our main contributions are:
+
+- First application of transformers for high-resolution $(256 \times 256)$ image colorization.
+- We introduce conditional transformer layers for low-resolution coarse colorization in Section 4.1. The conditional layers incorporate conditioning information via multiple learnable components that are applied per-pixel and per-channel. We validate the contribution of each component with extensive experimentation and ablation studies.
+- We propose training an auxiliary parallel prediction model jointly with the low resolution coarse colorization model in Section 4.2. Improved FID scores demonstrate the usefulness of this auxiliary model.
+- We establish a new state-of-the-art on image colorization outperforming prior methods by a large margin on FID scores and a 2-Alternative Forced Choice (2AFC) Mechanical Turk test. Remarkably, in more than $60\%$ of cases human evaluators prefer the highest rated among three generated colorings over the ground truth.
+
+# 2 RELATED WORK
+
+Colorization methods initially relied on human-in-the-loop approaches to provide hints in the form of scribbles (Levin et al., 2004; Ironi et al., 2005; Huang et al., 2005; Yatziv & Sapiro, 2006; Qu et al., 2006; Luan et al., 2007; Tsaftaris et al., 2014; Zhang et al., 2017; Ci et al., 2018) and exemplar-based techniques that involve identifying a reference source image to copy colors from (Reinhard et al., 2001; Welsh et al., 2002; Tai et al., 2005; Ironi et al., 2005; Pitie et al., 2007; Morimoto et al., 2009; Gupta et al., 2012; Xiao et al., 2020). Exemplar-based techniques have recently been extended to video as well (Zhang et al., 2019a). In the past few years, the focus has moved to more automated, neural colorization methods. Deterministic colorization techniques such as CIC (Zhang et al., 2016), LRAC (Larsson et al., 2016), LTBC (Iizuka et al., 2016), Pix2Pix (Isola et al., 2017) and DC (Cheng et al., 2015; Dahl, 2016) involve variations of CNNs that model per-pixel color information conditioned on the intensity.
+
+Generative colorization models typically extend unconditional image generation models to incorporate conditioning information from a grayscale image. Specifically, cINN (Ardizzone et al., 2019) use conditional normalizing flows (Dinh et al., 2014), VAE-MDN (Deshpande et al., 2017; 2015) and SCC-DC (Messaoud et al., 2018) use conditional VAEs (Kingma & Welling, 2013), and cGAN (Cao et al., 2017) use GANs (Goodfellow et al., 2014) for generative colorization. Most closely related to ColTran are other autoregressive approaches such as PixColor (Guadarrama et al., 2017) and PIC (Royer et al., 2017), with PixColor obtaining slightly better results than PIC due to its CNN-based upsampling strategy. ColTran is similar to PixColor in the usage of an autoregressive model for low resolution colorization and parallel spatial upsampling. ColTran differs from PixColor in the following ways. We train ColTran in a completely unsupervised fashion, while the conditioning network in PixColor requires pre-training with an object detection network that provides substantial semantic information. PixColor relies on PixelCNN (Oord et al., 2016) that requires a large depth to model interactions between all pixels. ColTran relies on Axial Transformer (Ho et al., 2019b) and can model all interactions between pixels with just 2 layers. PixColor uses different architectures for conditioning, colorization and super-resolution, while ColTran is conceptually simpler as we use self-attention blocks everywhere for both colorization and super-resolution. Finally, we train
+our autoregressive model on a single coarse channel and a separate color upsampling network that improves fidelity (see Section 5.3). The multi-stage generation process in ColTran that upsamples in depth and in size is related to that used in Subscale Pixel Networks (Menick & Kalchbrenner, 2018) for image generation, with differences in the order and representation of bits as well as in the use of fully parallel networks. The self-attention blocks that are the building blocks of ColTran were initially developed for machine translation (Vaswani et al., 2017), but are now widely used in a number of other applications including density estimation (Parmar et al., 2018; Child et al., 2019; Ho et al., 2019a; Weissenborn et al., 2019) and GANs (Zhang et al., 2019b).
+
+# 3 BACKGROUND: AXIAL TRANSFORMER
+
+# 3.1 ROW AND COLUMN SELF-ATTENTION
+
+Self-attention (SA) has become a standard building block in many neural architectures. Although the complexity of self-attention is quadratic in the number of input elements (here pixels), it has become quite popular for image modeling recently (Parmar et al., 2018; Weissenborn et al., 2019) due to modeling innovations that don't require running global self-attention between all pixels. Following the work of Ho et al. (2019b), we employ standard qkv self-attention (Vaswani et al., 2017) within rows and columns of an image. By alternating row- and column self-attention we effectively allow global exchange of information between all pixel positions. For the sake of brevity we omit the exact equations for multihead self-attention and refer the interested reader to Appendix H for more details. Row/column attention layers are the core components of our model. We use them in the autoregressive colorizer, the spatial upsampler and the color upsampler.
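As a toy illustration of why alternating row and column attention yields a global receptive field after just two layers (a single-head NumPy sketch under simplifying assumptions, not the paper's implementation): with zero query/key projections the attention weights are uniform, and a row pass followed by a column pass reduces to a global average over the image.

```python
import numpy as np

def axis_attention(x, axis, q_w, k_w, v_w):
    """Single-head qkv self-attention applied independently along one axis
    of an (H, W, D) feature map: axis=1 attends within rows, axis=0 within
    columns."""
    if axis == 0:
        x = x.transpose(1, 0, 2)
    q, k, v = x @ q_w, x @ k_w, x @ v_w
    logits = q @ k.transpose(0, 2, 1) / np.sqrt(x.shape[-1])
    a = np.exp(logits - logits.max(-1, keepdims=True))
    a = a / a.sum(-1, keepdims=True)        # softmax over the attended axis
    out = a @ v
    return out.transpose(1, 0, 2) if axis == 0 else out

rng = np.random.default_rng(0)
H, W, D = 4, 5, 8
x = rng.normal(size=(H, W, D))

# Zero query/key weights make attention uniform, so a row pass followed by
# a column pass averages over the whole image: every output position has
# seen every input pixel.
zeros, ident = np.zeros((D, D)), np.eye(D)
y = axis_attention(axis_attention(x, 1, zeros, zeros, ident), 0, zeros, zeros, ident)
```

With learned projections the two passes mix information selectively rather than averaging, but the reachability argument is the same.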
+
+# 3.2 AXIAL TRANSFORMER
+
+The Axial Transformer (Ho et al., 2019b) is an autoregressive model that applies (masked) row- and column self-attention operations in a way that efficiently summarizes all past information $\mathbf{x}_{i,<j}$ and $\mathbf{x}_{<i}$ to model a distribution over the pixel $\mathbf{x}_{i,j}$ at position $i,j$ . Causal masking is employed by setting all $A_{m,n} = 0$ where $n > m$ during self-attention (see Eq. 15).
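The causal mask can be made concrete with a small sketch (illustrative NumPy, not the paper's code): masking logits to $-\infty$ before the softmax sets $A_{m,n} = 0$ for $n > m$, so with all logits equal, position $m$ attends uniformly over positions $n \leq m$ only.

```python
import numpy as np

def masked_row_attention_weights(logits):
    """Causal attention weights along one row: enforce A[m, n] = 0 for
    n > m by setting the corresponding logits to -inf before the softmax."""
    L = logits.shape[-1]
    allowed = np.tril(np.ones((L, L), dtype=bool))
    logits = np.where(allowed, logits, -np.inf)
    a = np.exp(logits - logits.max(-1, keepdims=True))
    return a / a.sum(-1, keepdims=True)

# With equal logits, row m is uniform over the first m + 1 positions,
# e.g. A[2] = [1/3, 1/3, 1/3, 0].
A = masked_row_attention_weights(np.zeros((4, 4)))
```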
+
+Outer decoder. The outer decoder computes a state $\mathbf{s}_o$ over all previous rows $\mathbf{x}_{\leq i}$ by applying $N$ layers of full row self-attention followed by masked column self-attention (Eq. 2). $\mathbf{s}_o$ is then shifted down by a single row, such that the output context $\mathbf{o}_{i,j}$ at position $i,j$ only contains information about pixels $\mathbf{x}_{<i}$ from prior rows (Eq. 3).
+
+$$
+\mathbf{e} = \operatorname{Embeddings}(\mathbf{x}) \tag{1}
+$$
+
+$$
+\mathbf{s}_o = \operatorname{MaskedColumn}(\operatorname{Row}(\mathbf{e})) \quad \times N \tag{2}
+$$
+
+$$
+\mathbf{o} = \operatorname{ShiftDown}(\mathbf{s}_o) \tag{3}
+$$
+
+Inner decoder. The embeddings input to the inner decoder are shifted right by a single column to mask the current pixel $\mathbf{x}_{i,j}$ . The context $\mathbf{o}$ from the outer decoder conditions the inner decoder by biasing the shifted embeddings (Eq. 4). It then computes a final state $\mathbf{h}$ by applying $N$ layers of masked row-wise self-attention to infuse additional information from prior pixels of the same row $\mathbf{x}_{i, < j}$ (Eq. 5). $\mathbf{h}_{i,j}$ thus comprises information about all past pixels $\mathbf{x}_{< i}$ and $\mathbf{x}_{i, < j}$ . A dense layer projects $\mathbf{h}$ into a distribution $p(\mathbf{x}_{ij})$ over the pixel at position $(i,j)$ conditioned on all previous pixels $\mathbf{x}_{i, < j}$ and $\mathbf{x}_{< i}$ (Eq. 6).
+
+$$
+\mathbf{z} = \mathbf{o} + \operatorname{ShiftRight}(\mathbf{e}) \tag{4}
+$$
+
+$$
+\mathbf{h} = \operatorname{MaskedRow}(\mathbf{z}) \quad \times N \tag{5}
+$$
+
+$$
+p\left(\mathbf{x}_{ij}\right) = \operatorname{Dense}(\mathbf{h}) \tag{6}
+$$
+
+Encoder. As shown above, the outer and inner decoder operate on 2-D inputs, such as a single channel of an image. For multi-channel RGB images, when modeling the "current channel", the Axial Transformer incorporates information from prior channels of an image (as per raster order) with an encoder. The encoder encodes each prior channel independently with a stack of unmasked row/column attention layers. The encoder outputs across all prior channels are summed to output a conditioning context $\mathbf{c}$ for the "current channel". The context conditions the outer and inner decoder by biasing the inputs in Eq 1 and Eq 4 respectively.
+
+
+Figure 2: Depiction of ColTran. It consists of 3 individual models: an autoregressive colorizer (left), a color upsampler (middle) and a spatial upsampler (right). Each model is optimized independently. The autoregressive colorizer (ColTran core) is an instantiation of Axial Transformer (Sec. 3.2, Ho et al. (2019b)) with conditional transformer layers and an auxiliary parallel head proposed in this work (Sec. 4.1). During training, the ground-truth coarse low resolution image is both the input to the decoder and the target. Masked layers ensure that the conditional distributions for each pixel depends solely on previous ground-truth pixels. (See Appendix G for a recap on autoregressive models). ColTran upsamplers are stacked row/column attention layers that deterministically upsample color and space in parallel. Each attention block (in green) is residual and consists of the following operations: layer-norm $\rightarrow$ multihead self-attention $\rightarrow$ MLP.
+
+Sampling. The Axial Transformer natively supports semi-parallel sampling that avoids re-evaluation of the entire network to generate each pixel of an RGB image. The encoder is run once per-channel, the outer decoder is run once per-row and the inner decoder is run once per-pixel. The context from the outer decoder and the encoder is initially zero. The encoder conditions the outer decoder (Eq. 1) and the encoder + outer decoder condition the inner decoder (Eq. 4). The inner decoder then generates a row, one pixel at a time via Eqs. (4) to (6). After generating all pixels in a row, the outer decoder recomputes context via Eqs. (1) to (3) and the inner decoder generates the next row. This proceeds until all the pixels in a channel are generated. The encoder then recomputes context to generate the next channel.
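The control flow of this semi-parallel sampler can be sketched as follows. This is a schematic only: `encoder`, `outer_decoder` and `inner_decoder_step` are hypothetical stand-ins returning random values, not the actual attention stacks, but the loop structure mirrors the once-per-channel / once-per-row / once-per-pixel schedule described above.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, C, D = 4, 4, 3, 8

# Hypothetical stand-ins for the three components; the real ones are stacks
# of (masked) row/column self-attention layers.
def encoder(channel):
    return rng.normal(size=(H, W, D))          # run once per channel

def outer_decoder(rows, ctx):
    return rng.normal(size=(H, W, D))          # run once per row

def inner_decoder_step(i, j, ctx):
    return rng.integers(0, 256)                # run once per pixel

image = np.zeros((H, W, C), dtype=np.int64)
enc_ctx = np.zeros((H, W, D))                  # encoder context starts at zero
for c in range(C):
    dec_ctx = np.zeros((H, W, D))              # outer-decoder context, zero at first
    for i in range(H):
        for j in range(W):                     # inner decoder fills one row
            image[i, j, c] = inner_decoder_step(i, j, enc_ctx + dec_ctx)
        dec_ctx = outer_decoder(image[..., c], enc_ctx)   # recompute per row
    enc_ctx = encoder(image[..., c])           # recompute per channel
```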
+
+# 4 PROPOSED ARCHITECTURE
+
+Image colorization is the task of transforming a grayscale image $x^g \in \mathbb{R}^{H \times W \times 1}$ into a colored image $x \in \mathbb{R}^{H \times W \times 3}$ . The task is inherently stochastic; for a given grayscale image $x^g$ , there exists a conditional distribution over $x$ , $p(x|x^g)$ . Instead of predicting $x$ directly from $x^g$ , we first sequentially predict two intermediate low resolution images $x^{s\downarrow}$ and $x^{s\downarrow c\downarrow}$ with different color depths. Besides decomposing high-resolution image colorization into simpler tasks, the smaller resolution allows for training larger models.
+
+We obtain $x^{s\downarrow}$ , a spatially downsampled representation of $x$ , by standard area interpolation. $x^{s\downarrow c\downarrow}$ is a 3 bit per-channel representation of $x^{s\downarrow}$ , that is, each color channel has only 8 intensities. Thus, there are $8^3 = 512$ coarse colors per pixel which are predicted directly as a single "color" channel. We rewrite the conditional likelihood $p(x|x^g)$ to incorporate the intermediate representations as follows:
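The 512-color representation can be computed as follows (a sketch; the exact packing order of the three 3-bit channels is an assumption for illustration, not taken from the paper):

```python
import numpy as np

def to_coarse(rgb):
    """Quantize an 8-bit RGB image to 3 bits per channel (8 intensities)
    and pack the three coarse channels into one index in [0, 512)."""
    coarse = (rgb // 32).astype(np.int64)      # 256 -> 8 intensities per channel
    r, g, b = coarse[..., 0], coarse[..., 1], coarse[..., 2]
    return r * 64 + g * 8 + b                  # assumed packing order: R, G, B

def from_coarse(idx):
    """Unpack the single "color" channel back to 3-bit channel values."""
    return np.stack([idx // 64, (idx // 8) % 8, idx % 8], axis=-1)

img = np.array([[[255, 0, 128]]], dtype=np.uint8)
idx = to_coarse(img)                           # idx[0, 0] == 7*64 + 0*8 + 4 == 452
```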
+
+$$
+\begin{aligned} p(x \mid x^g) &= p(x \mid x^g) \cdot 1 = p(x \mid x^g) \cdot p\left(x^{s\downarrow c\downarrow}, x^{s\downarrow} \mid x, x^g\right) = p\left(x^{s\downarrow c\downarrow}, x^{s\downarrow}, x \mid x^g\right) \quad (7) \\ &= p\left(x \mid x^{s\downarrow}, x^g\right) \cdot p\left(x^{s\downarrow} \mid x^{s\downarrow c\downarrow}, x^g\right) \cdot p\left(x^{s\downarrow c\downarrow} \mid x^g\right) \quad (8) \end{aligned}
+$$
+
+ColTran core (Section 4.1), a parallel color upsampler and a parallel spatial upsampler (Section 4.3) model $p(x^{s\downarrow c\downarrow}|x^g)$ , $p(x^{s\downarrow}|x^{s\downarrow c\downarrow}, x^g)$ and $p(x|x^{s\downarrow}, x^g)$ respectively. In the subsections below, we describe
+
+| Component | Unconditional | Conditional |
+| --- | --- | --- |
+| Self-Attention | $y = \operatorname{Softmax}(qk^T/\sqrt{D})v$ | $y = \operatorname{Softmax}(q_c k_c^T/\sqrt{D})v_c$ where $\forall z \in \{k, q, v\}:\ z_c = (cU_s^z) \odot z + (cU_b^z)$ |
+| MLP | $y = \operatorname{ReLU}(xU_1 + b_1)U_2 + b_2$ | $h = \operatorname{ReLU}(xU_1 + b_1)U_2 + b_2$, $y = (cU_s^f) \odot h + (cU_b^f)$ |
+| Layer Norm | $y = \beta\operatorname{Norm}(x) + \gamma$ | $y = \beta_c\operatorname{Norm}(x) + \gamma_c$ where $\forall \mu \in \{\beta_c, \gamma_c\}:\ c \in \mathbb{R}^{H\times W\times D} \to \hat{c} \in \mathbb{R}^{HW\times D},\ \mu = (u \cdot \hat{c})U_d^\mu,\ u \in \mathbb{R}^{HW}$ |
+
+Table 1: We contrast the different components of unconditional self-attention with self-attention conditioned on context $\mathbf{c} \in \mathbb{R}^{M \times N \times D}$ . Learnable parameters specific to conditioning are denoted by $\mathbf{u}$ and $U \in \mathbb{R}^{D \times D}$ .
+
+these individual components in detail. From now on we will refer to all low resolutions as $M \times N$ and high resolution as $H \times W$ . An illustration of the overall architecture is shown in Figure 2.
+
+# 4.1 COLTRAN CORE
+
+In this section, we describe ColTran core, a conditional variant of the Axial Transformer (Ho et al., 2019b) for low resolution coarse colorization. ColTran Core models a distribution $p_c(x^{s\downarrow c\downarrow}|x^g)$ over 512 coarse colors for every pixel, conditioned on a low resolution grayscale image in addition to the colors from previously predicted pixels as per raster order (Eq. 9).
+
+$$
+p_c\left(x^{s\downarrow c\downarrow} \mid x^g\right) = \prod_{i=1}^{M} \prod_{j=1}^{N} p_c\left(x_{ij}^{s\downarrow c\downarrow} \mid x^g, x_{<i}^{s\downarrow c\downarrow}, x_{i,<j}^{s\downarrow c\downarrow}\right) \tag{9}
+$$
+
+Given a context representation $\mathbf{c} \in \mathbb{R}^{M \times N \times D}$ we propose conditional transformer layers in Table 1. Conditional transformer layers have conditional versions of all components within the standard attention block (see Appendix H, Eqs. 14-18).
+
+Conditional Self-Attention. For every layer in the decoder, we apply six $1 \times 1$ convolutions to $\mathbf{c}$ to obtain three scale and three shift vectors, which we apply element-wise to $\mathbf{q}$ , $\mathbf{k}$ and $\mathbf{v}$ of the self-attention operation (Section 3.1), respectively.
+
+Conditional MLP. A standard component of the transformer architecture is a two layer pointwise feed-forward network after the self-attention layer. We scale and shift the output of each MLP conditioned on $\mathbf{c}$ , as for self-attention.
+
+Conditional Layer Norm. Layer normalization (Ba et al., 2016) globally scales and shifts a given normalized input using learnable vectors $\beta$ , $\gamma$ . Instead, we predict $\beta_{c}$ and $\gamma_{c}$ as a function of $\mathbf{c}$ . We first aggregate $\mathbf{c}$ into a global 1-D representation $\overline{\mathbf{c}} \in \mathbb{R}^{L}$ via a learnable, spatial pooling layer. Spatial pooling is initialized as a mean pooling layer. Similar to 1-D conditional normalization layers (Perez et al., 2017; De Vries et al., 2017; Dumoulin et al., 2016; Huang & Belongie, 2017), we then apply a linear projection on $\overline{\mathbf{c}}$ to predict $\beta_{c}$ and $\gamma_{c}$ , respectively.
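A minimal sketch of this conditional layer norm (illustrative NumPy; the parameter names are ours): the context is flattened spatially, pooled with learnable weights $u$, and projected to the scale and shift. Initializing $u$ uniformly recovers mean pooling, matching the stated initialization.

```python
import numpy as np

def conditional_layer_norm(x, c, u, U_beta, U_gamma, eps=1e-6):
    """y = beta_c * Norm(x) + gamma_c (Table 1). The context c (H, W, D)
    is flattened, pooled with learnable spatial weights u (HW,), and
    projected to a per-feature scale beta_c and shift gamma_c."""
    pooled = u @ c.reshape(-1, c.shape[-1])    # learnable spatial pooling -> (D,)
    beta_c, gamma_c = pooled @ U_beta, pooled @ U_gamma
    norm = (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)
    return beta_c * norm + gamma_c

rng = np.random.default_rng(0)
H, W, D = 4, 4, 8
x, c = rng.normal(size=(H, W, D)), rng.normal(size=(H, W, D))
u = np.full(H * W, 1.0 / (H * W))              # initialization: mean pooling
y = conditional_layer_norm(x, c, u, np.eye(D), np.eye(D))
```

During training, $u$, $U_\beta$ and $U_\gamma$ would be learned jointly with the rest of the network, letting each cLN layer choose its own spatial aggregation.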
+
+A grayscale encoder consisting of multiple, alternating row and column self-attention layers encodes the grayscale image into the initial conditioning context $\mathbf{c}^g$ . It serves as both context for the conditional layers and as additional input to the embeddings of the outer decoder. The sum of the outer decoder's output and $\mathbf{c}^g$ condition the inner decoder. Figure 2 illustrates how conditioning is applied in the autoregressive core of the ColTran architecture.
+
+Conditioning every layer via multiple components allows stronger gradient signals through the encoder, and in effect the encoder can learn better contextual representations. We validate this empirically by outperforming the native Axial Transformer, which conditions by biasing the context states (see Section 5.2 and Section 5.4).
+
+# 4.2 AUXILIARY PARALLEL MODEL
+
+We additionally train an auxiliary parallel prediction model $\widetilde{p}_c (x^{s\downarrow c\downarrow} \mid x^g)$ directly on top of the representations learned by the grayscale encoder, which we found beneficial for regularization (Eq. 10).
+
+$$
+\widetilde{p}_c\left(x^{s\downarrow c\downarrow} \mid x^g\right) = \prod_{i=1}^{M} \prod_{j=1}^{N} \widetilde{p}_c\left(x_{ij}^{s\downarrow c\downarrow} \mid x^g\right) \tag{10}
+$$
+
+Intuitively, this forces the model to compute richer representations and global color structure already at the output of the encoder, which aids conditioning and therefore has a beneficial, regularizing effect on learning. We apply a linear projection $U_{\mathrm{parallel}} \in \mathbb{R}^{L \times 512}$ on top of $\mathbf{c}^g$ (the output of the grayscale encoder) to obtain a per-pixel distribution over 512 coarse colors. It was crucial to tune the relative contribution of the autoregressive and parallel predictions to improve performance, which we study in Section 5.3.
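The mixing of the two prediction heads can be sketched as a combined per-pixel loss (illustrative code; variable names are ours), weighting the autoregressive and auxiliary parallel negative log-likelihoods over the 512 coarse colors by $1 - \lambda$ and $\lambda$:

```python
import numpy as np

def coltran_core_loss(ar_logits, par_logits, targets, lam=0.01):
    """(1 - lam) * NLL(autoregressive) + lam * NLL(parallel), averaged
    over pixels; lam = 0.01 is the best value found in Section 5.3."""
    def nll(logits):
        logits = logits - logits.max(-1, keepdims=True)          # stable softmax
        logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
        return -np.take_along_axis(logp, targets[..., None], -1).mean()
    return (1 - lam) * nll(ar_logits) + lam * nll(par_logits)

rng = np.random.default_rng(0)
targets = rng.integers(0, 512, size=(4, 4))
# Uniform predictions give a loss of log(512) regardless of lam.
loss = coltran_core_loss(np.zeros((4, 4, 512)), np.zeros((4, 4, 512)), targets)
```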
+
+# 4.3 COLOR & SPATIAL UPSAMPLING
+
+In order to produce high-fidelity colorized images from low resolution, coarse color images and a given high resolution grayscale image, we train color and spatial upsampling models. They share the same architecture but differ in their respective inputs and the resolution at which they operate. Similar to the grayscale encoder, the upsamplers comprise multiple alternating layers of row and column self-attention. The output of the encoder is projected to compute the logits underlying the per-pixel color probabilities of the respective upsampler. Figure 2 illustrates the architectures.
+
+Color Upsampler. We convert the coarse image $x^{s\downarrow c\downarrow} \in \mathbb{R}^{M \times N \times 1}$ of 512 colors back into a 3 bit RGB image with 8 symbols per channel. The channels are embedded using separate embedding matrices into $\mathbf{x}_k^{s\downarrow c\downarrow} \in \mathbb{R}^{M \times N \times D}$ , where $k \in \{R, G, B\}$ indicates the channel. We upsample each channel individually, conditioning only on the respective channel's embedding. The channel embedding is summed with the respective grayscale embedding for each pixel and serves as input to the subsequent self-attention layers (encoder). The output of the encoder is further projected to per pixel-channel probability distributions $\widetilde{p}_{c\uparrow}(x_k^{s\downarrow}|x^{s\downarrow c\downarrow}, x^g) \in \mathbb{R}^{M \times N \times 256}$ over 256 color intensities for all $k \in \{R, G, B\}$ (Eq. 11).
+
+$$
+\widetilde{p}_{c\uparrow}\left(x^{s\downarrow} \mid x^g, x^{s\downarrow c\downarrow}\right) = \prod_{i=1}^{M} \prod_{j=1}^{N} \widetilde{p}_{c\uparrow}\left(x_{ij}^{s\downarrow} \mid x^g, x^{s\downarrow c\downarrow}\right) \tag{11}
+$$
+
+Spatial Upsampler. We first naively upsample $x^{s\downarrow} \in \mathbb{R}^{M \times N \times 3}$ into a blurry, high-resolution RGB image using area interpolation. As above, we then embed each channel of the blurry RGB image and run a per-channel encoder exactly as in the color upsampler. The output of the encoder is finally projected to per pixel-channel probability distributions $\widetilde{p}_{s\uparrow}(x_k|x^{s\downarrow},x^g) \in \mathbb{R}^{H \times W \times 256}$ over 256 color intensities for all $k \in \{R,G,B\}$ (Eq. 12).
+
+$$
+\widetilde{p}_{s\uparrow}\left(x \mid x^g, x^{s\downarrow}\right) = \prod_{i=1}^{H} \prod_{j=1}^{W} \widetilde{p}_{s\uparrow}\left(x_{ij} \mid x^g, x^{s\downarrow}\right) \tag{12}
+$$
+
+In our experiments, similar to (Guadarrama et al., 2017), we found parallel upsampling to be sufficient for high quality colorizations. Parallel upsampling has the huge advantage of fast generation, which would be notoriously slow for fully autoregressive models at high resolution. To avoid possible minor color inconsistencies between pixels, instead of sampling each pixel from the predicted distributions (Eq. 11 and Eq. 12), we just use the argmax. Even though this slightly limits the potential diversity of colorizations, in practice we observe that sampling only coarse colors via ColTran core is enough to produce a great variety of colorizations.
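The decoding choice can be sketched as follows (illustrative NumPy on toy distributions): given per-pixel distributions over 256 intensities, the upsamplers take the deterministic per-pixel argmax rather than sampling each pixel independently.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy per-pixel distributions over 256 intensities for a 2x2 channel.
probs = rng.dirichlet(np.ones(256), size=(2, 2))

# Sampling each pixel independently can introduce minor inconsistencies
# between neighbouring pixels; the argmax is deterministic per pixel.
sampled = np.array([[rng.choice(256, p=probs[i, j]) for j in range(2)]
                    for i in range(2)])
argmaxed = probs.argmax(-1)                    # decoding used by the upsamplers
```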
+
+Objective. We train our architecture to minimize the negative log-likelihood of the data (Eq. 13). The likelihoods $p_c / \widetilde{p}_c$ , $\widetilde{p}_{c\uparrow}$ and $\widetilde{p}_{s\uparrow}$ are maximized independently, and $\lambda$ is a hyperparameter that controls the relative contribution of $p_c$ and $\widetilde{p}_c$ .
+
+$$
+\mathcal{L} = (1 - \lambda)\log p_c + \lambda \log \widetilde{p}_c + \log \widetilde{p}_{c\uparrow} + \log \widetilde{p}_{s\uparrow} \tag{13}
+$$
+
+
+Figure 3: Per pixel log-likelihood of coarse colored $64 \times 64$ images over the validation set as a function of training steps. We ablate the various components of the ColTran core in each plot. Left: ColTran with Conditional Transformer Layers vs a baseline Axial Transformer which conditions via addition (ColTran-B). ColTran-B 2x and ColTran-B 4x refer to wider baselines with increased model capacity. Center: Removing each conditional sub-component one at a time (no cLN, no cMLP and no cAtt). Right: Conditional shifts only (Shift), Conditional scales only (Scale), removal of kq conditioning in cAtt (cAtt, only v) and fixed mean pooling in cLN (cLN, mean pool). See Section 5.2 for more details.
+
+
+# 5 EXPERIMENTS
+
+# 5.1 TRAINING AND EVALUATION
+
+We evaluate ColTran on colorizing $256 \times 256$ grayscale images from the ImageNet dataset (Russakovsky et al., 2015). We train ColTran core, the color upsampler and the spatial upsampler independently on 16 TPUv2 chips with batch sizes of 224, 768 and 32 for 600K, 450K and 300K steps, respectively. We use 4 axial attention blocks in each component of our architecture, with a hidden size of 512 and 4 heads. We use RMSprop (Tieleman & Hinton, 2012) with a fixed learning rate of $3e-4$ . We set apart 10000 images from the training set as a holdout set to tune hyperparameters and perform ablations. To compute FID, we generate 5000 samples conditioned on the grayscale images from this holdout set. We use the public validation set to display qualitative results and report final numbers.
+
+# 5.2 ABLATIONS OF COLTRAN CORE
+
+The autoregressive core of ColTran models downsampled, coarse-colored images of resolution $64 \times 64$ with 512 coarse colors, conditioned on the respective grayscale image. In a series of experiments we ablate the different components of the architecture (Figure 3). In the section below, we refer to the conditional self-attention, conditional layer norm and conditional MLP subcomponents as cAtt, cLN and cMLP respectively. We report the per-pixel log-likelihood over 512 coarse colors on the validation set as a function of training steps.
+
+Impact of conditional transformer layers. The left side of Figure 3 illustrates the significant improvement in loss that ColTran core (with conditional transformer layers) achieves over the original Axial Transformer (marked ColTran-B). This demonstrates the usefulness of our proposed conditional layers. Because conditional layers introduce a higher number of parameters we additionally compare to and outperform the original Axial Transformer baselines with $2\mathrm{x}$ and $4\mathrm{x}$ wider MLP dimensions (labeled as ColTran-B $2x$ and ColTran-B $4x$ ). Both ColTran-B $2x$ and ColTran-B $4x$ have an increased parameter count which makes for a fair comparison. Our results show that the increased performance cannot be explained solely by the fact that our model has more parameters.
+
+Importance of each conditional component. We perform a leave-one-out study to determine the importance of each conditional component. We remove each conditional component one at a time and retrain the new ablated model. The curves no cLN, no cMLP and no cAtt in the middle of Figure 3 quantifies our results. While each conditional component improves final performance, cAtt plays the most important role.
+
+Multiplicative vs Additive Interactions. Conditional transformer layers employ both conditional shifts and scales consisting of additive and multiplicative interactions, respectively. The curves Scale and Shift on the right hand side of Figure 3 demonstrate the impact of these interactions via ablated architectures that use conditional shifts and conditional scales only. While both types of interactions are important, multiplicative interactions have a much stronger impact.
+
+
+Figure 4: Left: FID of generated $64 \times 64$ coarse samples as a function of training steps for $\lambda = 0.01$ and $\lambda = 0.0$ . Center: Final FID scores as a function of $\lambda$ . Right: FID as a function of log-likelihood.
+
+
+Context-aware dot product attention. Self-attention computes the similarity between pixel representations using a dot product between $\mathbf{q}$ and $\mathbf{k}$ (see Eq. 15). cAtt applies conditional shifts and scales to $\mathbf{q}$ and $\mathbf{k}$ , which allows modifying this similarity based on contextual information. The curve cAtt, only v on the right of Figure 3 shows that removing this property, by conditioning only on $\mathbf{v}$ , leads to worse results.
+
+Fixed vs adaptive global representation: cLN aggregates global information with a flexible, learnable spatial pooling layer. We experimented with a fixed mean pooling layer, forcing all cLN layers to use the same global representation with the same per-pixel weight. The curve cLN, mean pool on the right of Figure 3 shows that enforcing this constraint causes performance inferior even to having no cLN at all. This indicates that different aggregations of the global representation are important for different cLN layers.
+
+# 5.3 OTHER ABLATIONS
+
+Auxiliary Parallel Model. We study the effect of the hyperparameter $\lambda$ , which controls the contribution of the auxiliary parallel prediction model described in Section 4.2. For a given $\lambda$ , we now optimize $\hat{p}_c(\lambda) = (1 - \lambda)\log p_c(.) + \lambda\log \widetilde{p}_c(.)$ instead of just $\log p_{c}(.)$ . Note that $\widetilde{p}_c(.)$ models each pixel independently, which is more difficult than modelling each pixel conditioned on previous pixels as given by $p_c(.)$ . Hence, employing $\hat{p}_c(\lambda)$ as a holdout metric would just lead to a trivial solution at $\lambda = 0$ . Instead, the FID of the generated coarse $64 \times 64$ samples provides a reliable way to find an optimal value of $\lambda$ . In Figure 4, at $\lambda = 0.01$ , our model converges to a better FID faster, with a marginal but consistent final improvement. At higher values the performance deteriorates quickly.
+
+Upsamplers. Upsampling coarse colored, low-resolution images to a higher resolution is much simpler. Given ground truth $64 \times 64$ coarse images, the ColTran upsamplers map these to fine grained $256 \times 256$ images without any visible artifacts and an FID of 16.4. For comparison, the FID between two random sets of 5000 samples from our holdout set is 15.5. It is further extremely important to provide the grayscale image as input to each of the individual upsamplers, without which the generated images appear highly smoothed out and the FID worsens to 27.0. We also trained a single upsampler for both color and resolution. The FID in this case worsens marginally to 16.6.
+
+# 5.4 FRECHET INCEPTION DISTANCE
+
+We compute FID using colorizations of 5000 grayscale images of resolution $256 \times 256$ from the ImageNet validation set, as done in (Ardizzone et al., 2019). To compute the FID, we ensure that there is no overlap between the grayscale images that condition ColTran and those in the ground-truth distribution. In addition to ColTran, we report two additional results, ColTran-S and ColTran-B. ColTran-B refers to the baseline Axial Transformer that conditions via addition at the input. PixColor samples smaller $28 \times 28$ colored images autoregressively, compared to ColTran's $64 \times 64$ . As a control experiment, we train an autoregressive model at resolution $28 \times 28$ (ColTran-S) to disentangle architectural choices from the inherent stochasticity of modelling higher resolution images. ColTran-S and ColTran-B obtain FID scores of 22.06 and 19.98, respectively, which significantly improve over the previous best FID of 24.32. Finally, ColTran achieves the best FID score of 19.37. All results are presented in Table 2, left.
+
+| Models | FID |
+| --- | --- |
+| ColTran | 19.37 ± 0.09 |
+| ColTran-B | 19.98 ± 0.20 |
+| ColTran-S | 22.06 ± 0.13 |
+| PixColor [16] | 24.32 ± 0.21 |
+| cGAN [3] | 24.41 ± 0.27 |
+| cINN [1] | 25.13 ± 0.3 |
+| VAE-MDN [11] | 25.98 ± 0.28 |
+| Ground truth | 14.68 ± 0.15 |
+| Grayscale | 30.19 ± 0.1 |
+
+| Models | AMT Fooling rate |
+| --- | --- |
+| ColTran (Oracle) | 62.0 % ± 0.99 |
+| ColTran (Seed 1) | 40.5 % ± 0.81 |
+| ColTran (Seed 2) | 42.3 % ± 0.76 |
+| ColTran (Seed 3) | 41.7 % ± 0.83 |
+| PixColor [16] (Oracle) | 38.3 % ± 0.98 |
+| PixColor (Seed 1) | 33.3 % ± 1.04 |
+| PixColor (Seed 2) | 35.4 % ± 1.01 |
+| PixColor (Seed 3) | 33.2 % ± 1.03 |
+| CIC [56] | 29.2 % ± 0.98 |
+| LRAC [27] | 30.9 % ± 1.02 |
+| LTBC [22] | 25.8 % ± 0.97 |
+
+Table 2: We outperform various state-of-the-art colorization models both on FID (left) and human evaluation (right). We obtain the FID scores from (Ardizzone et al., 2019) and the human evaluation results from (Guadarrama et al., 2017). ColTran-B is a baseline Axial Transformer that conditions via addition and ColTran-S is a control experiment where we train ColTran core (See: 4.1) on smaller $28 \times 28$ colored images.
+
+
+Figure 5: We display the per-pixel, maximum predicted probability over 512 colors as a proxy for uncertainty.
+
+Correlation between FID and Log-likelihood. For each architectural variant, Figure 4 right illustrates the correlation between the log-likelihood and FID after 150K training steps. There is a moderately positive correlation of 0.57 between the log-likelihood and FID. Importantly, even an absolute improvement on the order of 0.01 - 0.02 can improve FID significantly. This suggests that designing architectures that achieve better log-likelihood values is likely to lead to improved FID scores and colorization fidelity.
+
+# 5.5 QUALITATIVE EVALUATION
+
+Human Evaluation. For our qualitative assessment, we follow the protocol used in PixColor (Guadarrama et al., 2017). ColTran colorizes 500 grayscale images, with 3 different colorizations per image, denoted as seeds. Human raters assess the quality of these colorizations with a two-alternative forced choice (2AFC) test. We display both the ground-truth and the recolorized image sequentially for one second each, in random order. The raters are then asked to identify the image with fake colors. For each seed, we report the mean fooling rate over 500 colorizations and 5 different raters. For the oracle methods, we use the human ratings to pick the best-of-three colorizations. ColTran's best seed achieves a fooling rate of $42.3\%$ compared to $35.4\%$ for PixColor's best seed. ColTran Oracle achieves a fooling rate of $62\%$ , indicating that human raters prefer ColTran's best-of-three colorizations over the ground truth image itself.
+
+Visualizing uncertainty. The autoregressive core model of ColTran should be highly uncertain at object boundaries when colors change. Figure 5 illustrates the per-pixel, maximum predicted probability over 512 colors as a proxy for uncertainty. We observe that the model is indeed highly uncertain at edges and within more complicated textures.
+
+# 6 CONCLUSION
+
+We presented the Colorization Transformer (ColTran), an architecture that relies entirely on self-attention for image colorization. We introduced conditional transformer layers, a novel building block for conditional, generative models based on self-attention. Our ablations show the superiority of this mechanism over a number of baselines. Finally, we demonstrated that ColTran can generate diverse, high-fidelity colorizations on ImageNet, which are largely indistinguishable from the ground truth even for human raters.
+
+# REFERENCES
+
+Lynton Ardizzone, Carsten Lüth, Jakob Kruse, Carsten Rother, and Ullrich Köthe. Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392, 2019.
+Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
+Yun Cao, Zhiming Zhou, Weinan Zhang, and Yong Yu. Unsupervised diverse colorization via generative adversarial networks, 2017.
+Zezhou Cheng, Qingxiong Yang, and Bin Sheng. Deep colorization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 415-423, 2015.
+Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.
+Yuanzheng Ci, Xinzhu Ma, Zhihui Wang, Haojie Li, and Zhongxuan Luo. User-guided deep anime line art colorization with conditional adversarial networks. In Proceedings of the 26th ACM international conference on Multimedia, pp. 1536-1544, 2018.
+Ceyda Cinarel and Byoung-Tak Zhang. Into the colorful world of webtoons: Through the lens of neural networks. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), volume 3, pp. 35-40. IEEE, 2017.
+Ryan Dahl. Automatic colorization, 2016.
+Harm De Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C Courville. Modulating early visual processing by language. In Advances in Neural Information Processing Systems, pp. 6594-6604, 2017.
+Aditya Deshpande, Jason Rock, and David Forsyth. Learning large-scale automatic image colorization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 567-575, 2015.
+Aditya Deshpande, Jiajun Lu, Mao-Chuang Yeh, Min Jin Chong, and David Forsyth. Learning diverse image colorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6837-6845, 2017.
+Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
+Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. arXiv preprint arXiv:1610.07629, 2016.
+David M Geshwind. Method for colorizing black and white footage, August 19 1986. US Patent 4,606,625.
+Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. arXiv preprint arXiv:1406.2661, 2014.
+Sergio Guadarrama, Ryan Dahl, David Bieber, Mohammad Norouzi, Jonathon Shlens, and Kevin Murphy. Pixcolor: Pixel recursive colorization. arXiv preprint arXiv:1705.07208, 2017.
+Raj Kumar Gupta, Alex Yong-Sang Chia, Deepu Rajan, Ee Sin Ng, and Huang Zhiyong. Image colorization using similar images. In Proceedings of the 20th ACM international conference on Multimedia, pp. 369-378, 2012.
+Jonathan Ho, Xi Chen, Aravind Srinivas, Yan Duan, and Pieter Abbeel. Flow++: Improving flow-based generative models with variational dequantization and architecture design. arXiv preprint arXiv:1902.00275, 2019a.
+Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, and Tim Salimans. Axial attention in multidimensional transformers. arXiv preprint arXiv:1912.12180, 2019b.
+
+Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1501-1510, 2017.
+Yi-Chin Huang, Yi-Shin Tung, Jun-Cheng Chen, Sung-Wen Wang, and Ja-Ling Wu. An adaptive edge detection based colorization algorithm and its applications. In Proceedings of the 13th annual ACM international conference on Multimedia, pp. 351-354, 2005.
+Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Let there be color! joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification. ACM Transactions on Graphics (ToG), 35(4):1-11, 2016.
+Revital Ironi, Daniel Cohen-Or, and Dani Lischinski. Colorization by example. In Rendering Techniques, pp. 201-210. CiteSeer, 2005.
+Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125-1134, 2017.
+Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, Rick Boyle, Pierre luc Cantin, Clifford Chao, Chris Clark, Jeremy Coriell, Mike Daley, Matt Dau, Jeffrey Dean, Ben Gelb, Tara Vazir Ghaemmaghami, Rajendra Gottipati, William Gulland, Robert Hagmann, C. Richard Ho, Doug Hogberg, John Hu, Robert Hundt, Dan Hurt, Julian Ibarz, Aaron Jaffey, Alek Jaworski, Alexander Kaplan, Harshit Khaitan, Andy Koch, Naveen Kumar, Steve Lacy, James Laudon, James Law, Diemthu Le, Chris Leary, Zhuyuan Liu, Kyle Lucke, Alan Lundin, Gordon MacKean, Adriana Maggiore, Maire Mahony, Kieran Miller, Rahul Nagarajan, Ravi Narayanaswami, Ray Ni, Kathy Nix, Thomas Norrie, Mark Omernick, Narayana Penukonda, Andy Phelps, Jonathan Ross, Matt Ross, Amir Salek, Emad Samadiani, Chris Severn, Gregory Sizikov, Matthew Snelham, Jed Souter, Dan Steinberg, Andy Swing, Mercedes Tan, Gregory Thorson, Bo Tian, Horia Toma, Erick Tuttle, Vijay Vasudevan, Richard Walter, Walter Wang, Eric Wilcox, and Doe Hyun Yoon. In-datacenter performance analysis of a tensor processing unit, 2017.
+Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
+Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Learning representations for automatic colorization. In European conference on computer vision, pp. 577-593. Springer, 2016.
+Anat Levin, Dani Lischinski, and Yair Weiss. Colorization using optimization. In ACM SIGGRAPH 2004 Papers, pp. 689-694. 2004.
+Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision, pp. 3730-3738, 2015.
+Qing Luan, Fang Wen, Daniel Cohen-Or, Lin Liang, Ying-Qing Xu, and Heung-Yeung Shum. Natural image colorization. In Proceedings of the 18th Eurographics conference on Rendering Techniques, pp. 309-320, 2007.
+Jacob Menick and Nal Kalchbrenner. Generating high fidelity images with subscale pixel networks and multidimensional upscaling. arXiv preprint arXiv:1812.01608, 2018.
+Safa Messaoud, David Forsyth, and Alexander G. Schwing. Structural consistency and controllability for diverse colorization. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
+Yuji Morimoto, Yuichi Taguchi, and Takeshi Naemura. Automatic colorization of grayscale images using multiple images on the web. In SIGGRAPH 2009: Talks, pp. 1-1. 2009.
+Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
+Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Łukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. arXiv preprint arXiv:1802.05751, 2018.
+
+Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. arXiv preprint arXiv:1709.07871, 2017.
+François Pitie, Anil C Kokaram, and Rozenn Dahyot. Automated colour grading using colour distribution transfer. Computer Vision and Image Understanding, 107(1-2):123-137, 2007.
+Yingge Qu, Tien-Tsin Wong, and Pheng-Ann Heng. Manga colorization. ACM Transactions on Graphics (TOG), 25(3):1214-1220, 2006.
+Erik Reinhard, Michael Adhikhmin, Bruce Gooch, and Peter Shirley. Color transfer between images. IEEE Computer graphics and applications, 21(5):34-41, 2001.
+Amelie Royer, Alexander Kolesnikov, and Christoph H Lampert. Probabilistic image colorization. arXiv preprint arXiv:1705.04258, 2017.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015.
+Daniel Sykora, Jan Burianek, and Jiri Žára. Unsupervised colorization of black-and-white cartoons. In Proceedings of the 3rd international symposium on Non-photorealistic animation and rendering, pp. 121-127, 2004.
+Yu-Wing Tai, Jiaya Jia, and Chi-Keung Tang. Local color transfer via probabilistic segmentation by expectation-maximization. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 747-754. IEEE, 2005.
+Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26-31, 2012.
+Sotirios A Tsaftaris, Francesca Casadio, Jean-Louis Andral, and Aggelos K Katsaggelos. A novel visualization tool for art history and conservation: Automated colorization of black and white archival photographs of works of art. Studies in conservation, 59(3):125-135, 2014.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
+Carl Vondrick, Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama, and Kevin Murphy. Tracking emerges by colorizing videos. In Proceedings of the European conference on computer vision (ECCV), pp. 391-408, 2018.
+Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Axial-deeplab: Stand-alone axial-attention for panoptic segmentation. arXiv preprint arXiv:2003.07853, 2020.
+Dirk Weissenborn, Oscar Täckström, and Jakob Uszkoreit. Scaling autoregressive video models. arXiv preprint arXiv:1906.02634, 2019.
+Tomihisa Welsh, Michael Ashikhmin, and Klaus Mueller. Transferring color to greyscale images. In Proceedings of the 29th annual conference on Computer graphics and interactive techniques, pp. 277-280, 2002.
+Chufeng Xiao, Chu Han, Zhuming Zhang, Jing Qin, Tien-Tsin Wong, Guoqiang Han, and Shengfeng He. Example-based colourization via dense encoding pyramids. In Computer Graphics Forum, volume 39, pp. 20-33. Wiley Online Library, 2020.
+Liron Yatziv and Guillermo Sapiro. Fast image and video colorization using chrominance blending. IEEE transactions on image processing, 15(5):1120-1129, 2006.
+Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
+
+Bo Zhang, Mingming He, Jing Liao, Pedro V Sander, Lu Yuan, Amine Bermak, and Dong Chen. Deep exemplar-based video colorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8052-8061, 2019a.
+Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In International Conference on Machine Learning, pp. 7354-7363. PMLR, 2019b.
+Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European conference on computer vision, pp. 649-666. Springer, 2016.
+Richard Zhang, Jun-Yan Zhu, Phillip Isola, Xinyang Geng, Angela S Lin, Tianhe Yu, and Alexei A Efros. Real-time user-guided image colorization with learned deep priors. arXiv preprint arXiv:1705.02999, 2017.
+
+
+Figure 6: Left: FID vs training steps, with and without Polyak averaging. Right: The effect of K in top-K sampling on FID. See Appendices B and E.
+
+
+
+# ACKNOWLEDGEMENTS
+
+We would like to thank Mohammad Norouzi, Rianne van den Berg, and Mostafa Dehghani for their useful comments on the draft, and Avital Oliver for assistance with the Mechanical Turk setup.
+
+# CHANGELOG
+
+- v2: Dataset Sharding fix across multiple TPU workers. This changed the FID scores of ColTran, ColTran-B and ColTran-S from their v1 values of 19.71, 21.6 and 21.9 to their v2 values of 19.37, 19.98 and 22.06 respectively.
+
+# A CODE, CHECKPOINTS AND TENSORBOARD FILES
+
+Our implementation is open-sourced in the google-research framework at https://github.com/google-research/google-research/tree/master/coltran with a zip-compressed version here. Our full set of hyperparameters is available here.
+
+We provide pre-trained checkpoints of the colorizer and upsamplers on ImageNet at https://console.cloud.google.com/storage/browser/gresearch/coltran. Finally, reference tensorboard files for our training runs are available at colorizer tensorboard, color upsampler tensorboard and spatial upsampler tensorboard.
+
+# B EXPONENTIAL MOVING AVERAGE
+
+We found that using an exponential moving average (EMA) of our checkpoints is crucial for generating high-quality samples. In Figure 6, we display the FID as a function of training steps, with and without EMA. When EMA is applied, our FID score improves steadily over time.
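The checkpoint-averaging scheme can be sketched as a running update of a shadow copy of the parameters. This is a minimal illustration; the function name and decay value are ours, not the paper's exact settings:

```python
import numpy as np

def ema_update(ema_params, params, decay=0.999):
    """One exponential-moving-average step over model parameters.

    ema_params, params: dicts mapping parameter name -> np.ndarray.
    The EMA copy is updated after each training step and is the one
    used at sampling time instead of the raw weights.
    """
    return {name: decay * ema_params[name] + (1.0 - decay) * params[name]
            for name in params}
```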
+
+# C NUMBER OF PARAMETERS AND INFERENCE SPEED
+
+Inference speed. ColTran core can sample a batch of 20 64x64 grayscale images in around 3.5 to 5 minutes on a P100 GPU, whereas PixColor takes 10 minutes to colorize 28x28 grayscale images on a K40 GPU. Sampling 28x28 colorizations takes around 30 seconds. The upsampler networks run on the order of milliseconds.
+
+Further, in our naive implementation, we recompute the activations $\mathbf{c}U_s^z$ , $\mathbf{c}U_b^z$ , $\mathbf{c}U_s^f$ , $\mathbf{c}U_b^f$ in Table 1 to generate every pixel in the inner decoder. Instead, we can compute these activations once per grayscale image in the encoder and once per row in the outer decoder and reuse them. This is likely to speed up sampling even more, and we leave this engineering optimization for future work.
+
+Number of parameters. ColTran has a total of ColTran core (46M) + Color Upsampler (14M) + Spatial Upsampler (14M) = 74M parameters. In comparison, PixColor has Conditioning network (44M) + Colorizer network (11M) + Refinement Network (28M) = 83M parameters.
+
+
+Figure 7: Ablated models. Gated: gated conditioning layers as in Oord et al. (2016); $cAtt + cMLP$, global: global conditioning instead of pointwise conditioning in cAtt and cLN.
+
+# D LOWER COMPUTE REGIME
+
+We retrained the autoregressive colorizer and the color upsampler on 4 TPUv2 chips (the lowest configuration) with reduced batch sizes of 56 and 192, respectively. For the spatial upsampler, we found that a batch size of 8 was sub-optimal and led to a large deterioration in loss. We thus used a smaller spatial upsampler with 2 axial attention blocks and a batch size of 16, also trained on 4 TPUv2 chips. The FID drops from 19.71 to 20.9, which is still significantly better than the other models in Table 2. We note that this experiment uses only 12 TPUv2 chips in total, while PixColor (Guadarrama et al., 2017) uses a total of 16 GPUs.
+
+# E IMPROVED FID WITH TOP-K SAMPLING
+
+We can improve colorization fidelity and remove artifacts caused by unnatural colors via top-K sampling, at the cost of reduced colorization diversity. In this setting, for a given pixel, ColTran samples a color from the top-K colors (instead of all 512 colors) as determined by the predicted probabilities. Our results in Figure 6 with $K = 4$ and $K = 8$ demonstrate a performance improvement over the baseline ColTran model with $K = 512$.
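The top-K restriction amounts to renormalizing the predicted distribution over its k most probable colors before sampling. A minimal sketch (function name is illustrative):

```python
import numpy as np

def top_k_sample(logits, k, rng):
    """Sample a color index after restricting to the k most probable colors.

    Setting k = 512 (the full color vocabulary) recovers unrestricted sampling.
    """
    top = np.argsort(logits)[-k:]           # indices of the k largest logits
    z = logits[top] - logits[top].max()     # stable softmax over the top-k
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(top, p=p))
```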
+
+# F ADDITIONAL ABLATIONS
+
+Figure 7 shows additional ablations of our conditional transformer layers; none of these variants helped:
+
+- Conditional transformer layers based on gated layers (Oord et al., 2016) (Gated)
+- A global conditioning layer instead of pointwise conditioning in cAtt and cLN ($cAtt + cMLP$, global)
+
+# G AUTOREGRESSIVE MODELS
+
+Autoregressive models are a family of probabilistic methods that model the joint distribution of data $P(x)$ over a sequence of symbols $(x_{1}, x_{2}, \ldots, x_{N})$ as a product of conditionals $\prod_{i=1}^{N} P(x_{i} \mid x_{<i})$.
+
+To the left of Figure 11, we show samples with a fool rate over $60\%$. Our model is able to show diversity in color for both high-level structure and low-level details. In the center, we display samples that have a high variance in MTurk ratings, with a difference of $80\%$ between the best and the worst sample. All of these are complex objects that our model is able to colorize reasonably well given multiple attempts. To the right of Figure 11, we show failure cases where all samples have a fool rate of $0\%$. For these cases, our model is unable to colorize highly complex structure that would arguably be difficult even for a human.
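The autoregressive factorization $P(x) = \prod_i P(x_i \mid x_{<i})$ admits ancestral sampling: each symbol is drawn from its conditional given the symbols generated so far. A minimal sketch under the assumption of a generic conditional-distribution callback (names are ours, not ColTran's):

```python
import numpy as np

def sample_autoregressive(cond_probs_fn, n, num_symbols, rng):
    """Ancestral sampling from P(x) = prod_i P(x_i | x_<i).

    cond_probs_fn(prefix) returns a length-num_symbols probability
    vector for the next symbol given the prefix generated so far.
    """
    seq = []
    for _ in range(n):
        p = cond_probs_fn(seq)               # P(x_i | x_<i)
        seq.append(int(rng.choice(num_symbols, p=p)))
    return seq
```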
+
+# L MORE PROBABILITY MAPS
+
+We display additional probability maps to visualize uncertainty, as done in Section 5.5.
+
+# M MORE SAMPLES
+
+We display a wide-diversity of colorizations from ColTran that were not cherry-picked.
+
+
+
+
+
+
+
+
+
+
+# COMBINING ENSEMBLES AND DATA AUGMENTATION CAN HARM YOUR CALIBRATION
+
+Yeming Wen\*${}^{1}$, Ghassen Jerfel\*${}^{2}$, Rafael Muller${}^{2}$, Michael W. Dusenberry${}^{2}$, Jasper Snoek${}^{2}$, Balaji Lakshminarayanan${}^{2}$ & Dustin Tran${}^{2}$
+
+\* Equal contribution, ${}^{1}$ University of Texas, Austin, ${}^{2}$ Google Brain
+
+# ABSTRACT
+
+Ensemble methods which average over multiple neural network predictions are a simple approach to improve a model's calibration and robustness. Similarly, data augmentation techniques, which encode prior information in the form of invariant feature transformations, are effective for improving calibration and robustness. In this paper, we show a surprising pathology: combining ensembles and data augmentation can harm model calibration. This leads to a trade-off in practice, whereby improved accuracy by combining the two techniques comes at the expense of calibration. On the other hand, selecting only one of the techniques ensures good uncertainty estimates at the expense of accuracy. We investigate this pathology and identify a compounding under-confidence among methods which marginalize over sets of weights and data augmentation techniques which soften labels. Finally, we propose a simple correction, achieving the best of both worlds with significant accuracy and calibration gains over using only ensembles or data augmentation individually. Applying the correction produces new state-of-the-art in uncertainty calibration across CIFAR-10, CIFAR-100, and ImageNet.1
+
+# 1 INTRODUCTION
+
+Many success stories in deep learning (Krizhevsky et al., 2012; Sutskever et al., 2014) are in restricted settings where predictions are only made for inputs similar to the training distribution. In real-world scenarios, neural networks can face truly novel data points during inference, and in these settings it can be valuable to have good estimates of the model's uncertainty. For example, in healthcare, reliable uncertainty estimates can prevent over-confident decisions for rare or novel patient conditions (Dusenberry et al., 2019). We highlight two recent trends obtaining state-of-the-art in uncertainty and robustness benchmarks.
+
+Ensemble methods are a simple approach to improve a model's calibration and robustness (Lakshminarayanan et al., 2017). The same network architecture but optimized with different initializations can converge to different functional solutions, leading to decorrelated prediction errors. By averaging predictions, ensembles can rule out individual mistakes (Lakshminarayanan et al., 2017; Ovadia et al., 2019). Additional work has gone into efficient ensembles such as MC-dropout (Gal and Ghahramani, 2016), BatchEnsemble, and its variants (Wen et al., 2020; Dusenberry et al., 2020; Wenzel et al., 2020). These methods significantly improve calibration and robustness while adding few parameters to the original model.
+
+Data augmentation is an approach which is orthogonal to ensembles in principle, encoding additional priors in the form of invariant feature transformations. Intuitively, data augmentation enables the model to train on more data, encouraging the model to capture certain invariances with respect to its inputs and outputs; data augmentation may also produce data that may be closer to an out-of-distribution target task. It has been a key factor driving state-of-the-art: for example, Mixup (Zhang et al., 2018; Thulasidasan et al., 2019a), AugMix (Hendrycks et al., 2020), and test-time data augmentation (Ashukha et al., 2020).
+
+A common wisdom in the community suggests that ensembles and data augmentation should naturally combine. For example, the majority of uncertainty models in vision with strong performance are
+
+built upon baselines leveraging standard data augmentation (He et al., 2016; Hendrycks et al., 2020) (e.g., random flips, cropping); Hafner et al. (2018) cast data augmentation as an explicit prior for Bayesian neural networks, treating it as beneficial when ensembling; and Hendrycks et al. (2020) highlight further improved results in AugMix when combined with Deep Ensembles (Hansen and Salamon, 1990; Krogh and Vedelsby, 1995). However, we find that the complementary benefits of data augmentation and ensembles are not universally true. Section 3.1 illustrates the poor calibration of combining ensembles (MC-dropout, BatchEnsemble and Deep Ensembles) and Mixup on CIFAR: the model outputs excessively low confidence. Motivated by this pathology, in this paper we investigate in more detail why this happens and propose a method to resolve it.
+
+Contributions. In contrast to prior work, which finds individually that ensembles and Mixup improve calibration, we find that combining ensembles and Mixup consistently degrades calibration performance across three ensembling techniques. From a detailed analysis, we identify a compounding under-confidence, where the soft labels in Mixup introduce a negative confidence bias that hinders its combination with ensembles. We further find this to be true for other label-based strategies such as label smoothing. Finally, we propose CAMixup to correct this bias, pairing well with ensembles. CAMixup produces new state-of-the-art calibration on both CIFAR-10/100 (e.g., $0.4\%$ and $2.3\%$ on CIFAR-10 and CIFAR-10C), building on Wide ResNet 28-10 for competitive accuracy (e.g., $97.5\%$ and $89.8\%$ ) and on ImageNet $(1.5\%)$ , building on ResNet-50 for competitive accuracy $(77.4\%)$ .
+
+# 2 BACKGROUND ON CALIBRATION, ENSEMBLES AND DATA AUGMENTATION
+
+# 2.1 CALIBRATION
+
+Uncertainty estimation is critical, but ground truth with which to measure its quality is difficult to obtain. Fortunately, calibration error, which assesses how well a model reliably forecasts its predictions over a population, helps address this. Let $(\hat{Y},\hat{P})$ denote the class prediction and associated confidence (predicted probability) of a classifier.
+
+Expected Calibration Error (ECE): One notion of miscalibration is the expected difference between confidence and accuracy (Naeini et al., 2015): $E_{\hat{P}}[|\mathbb{P}(\hat{Y} = Y|\hat{P} = p) - p|]$ . ECE approximates this by binning the predictions in [0, 1] under $M$ equally-spaced intervals, and then taking a weighted average of each bin's accuracy/confidence difference. Let $B_{m}$ be the set of examples in the $m^{th}$ bin whose predicted confidence falls into the interval $\left(\frac{m-1}{M}, \frac{m}{M}\right]$ . The bin $B_{m}$ 's accuracy and confidence are:
+
+$$
+\operatorname{Acc}(B_{m}) = \frac{1}{|B_{m}|} \sum_{x_{i} \in B_{m}} \mathbb{1}(\hat{y}_{i} = y_{i}), \quad \operatorname{Conf}(B_{m}) = \frac{1}{|B_{m}|} \sum_{x_{i} \in B_{m}} \hat{p}_{i}, \tag{1}
+$$
+
+where $\hat{y}_i$ and $y_i$ are the predicted and true labels and $\hat{p}_i$ is the confidence for example $x_i$ . Given $n$ examples, ECE is $\sum_{m=1}^{M} \frac{|B_m|}{n} \left| \operatorname{Acc}(B_m) - \operatorname{Conf}(B_m) \right|$ .
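A direct implementation of this binned estimator might look like the following sketch (the function name is ours; $M = 15$ bins is a common choice):

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, num_bins=15):
    """ECE over equally spaced confidence bins, following Eq. (1)."""
    n = len(labels)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = np.mean(predictions[in_bin] == labels[in_bin])   # Acc(B_m)
            conf = np.mean(confidences[in_bin])                    # Conf(B_m)
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece
```

For example, two predictions at confidence 0.7 of which one is correct contribute a gap of |0.5 − 0.7| = 0.2.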
+
+# 2.2 ENSEMBLES
+
+Aggregating the predictions of multiple models into an ensemble is a well-established strategy to improve generalization (Hansen and Salamon, 1990; Perrone and Cooper, 1992; Dietterich, 2000).
+
+BatchEnsemble: BatchEnsemble takes a network architecture and shares its parameters across ensemble members, adding only a rank-1 perturbation for each layer in order to decorrelate member predictions (Wen et al., 2020). For a given layer, define the shared weight matrix among $K$ ensemble members as $\mathbf{W} \in \mathbb{R}^{m \times d}$. A tuple of trainable vectors $\mathbf{r}_k \in \mathbb{R}^m$ and $\mathbf{s}_k \in \mathbb{R}^d$ is associated with each ensemble member $k$. The new weight matrix for each ensemble member in BatchEnsemble is
+
+$$
+\mathbf{W}_{k}^{\prime} = \mathbf{W} \circ \mathbf{F}_{k}, \quad \text{where} \quad \mathbf{F}_{k} = \mathbf{r}_{k} \mathbf{s}_{k}^{\top} \in \mathbb{R}^{m \times d}, \tag{2}
+$$
+
+where $\circ$ denotes the element-wise product. Applying rank-1 perturbations via $\mathbf{r}$ and $\mathbf{s}$ adds few additional parameters to the overall model. We use an ensemble size of 4 in all experiments.
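Eq. (2) is a pure broadcasting operation, which is what makes the perturbation cheap. A minimal numpy sketch (function name is illustrative, not from the BatchEnsemble code):

```python
import numpy as np

def batch_ensemble_weights(W, r, s):
    """Member weight matrices W'_k = W ∘ (r_k s_k^T) from Eq. (2).

    W: shared (m, d) weight matrix; r: (K, m); s: (K, d).
    Returns a (K, m, d) array, one weight matrix per ensemble member.
    """
    F = r[:, :, None] * s[:, None, :]   # (K, m, d) rank-1 factors
    return W[None, :, :] * F
```

In practice the rank-1 factors are folded into the activations rather than materializing K full weight matrices, but the result is equivalent.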
+
+MC-Dropout: Gal and Ghahramani (2016) interpret Dropout (Srivastava et al., 2014) as an ensemble model, leading to its application for uncertainty estimates by sampling multiple dropout masks at test time in order to ensemble its predictions. We use an ensemble size of 20 in all experiments.
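The test-time procedure reduces to averaging probabilities over stochastic forward passes. A sketch, assuming a generic `stochastic_forward` callback that applies a fresh dropout mask internally (an assumption of this example, not an API from the paper):

```python
import numpy as np

def mc_dropout_predict(stochastic_forward, x, num_samples=20):
    """MC-dropout prediction: keep dropout active at test time and
    average the predicted probabilities over num_samples forward passes,
    each with an independently sampled dropout mask."""
    probs = [stochastic_forward(x) for _ in range(num_samples)]
    return np.mean(probs, axis=0)
```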
+
+Deep Ensembles: Composing an ensemble of models, each trained with a different random initialization, provides diverse predictions (Fort et al., 2019) which have been shown to outperform strong baselines on uncertainty estimation tasks (Lakshminarayanan et al., 2017). We use an ensemble size of 4 in all experiments.
+
+In this work, we focus on the interaction between data augmentation strategies and BatchEnsemble, MC-Dropout, and deep ensembles. Other popular ensembling approaches leverage weight averaging such as Polyak-Ruppert (Ruppert, 1988), checkpointing (Huang et al., 2017), and stochastic weight averaging (Izmailov et al., 2018) to collect multiple sets of weights during training and aggregate them to make predictions with only a single set.
+
+# 2.3 DATA AUGMENTATION
+
+Data augmentation encourages a model to make invariant predictions under desired transformations which can greatly improve generalization performance. For example, in computer vision, random left-right flipping and cropping are de-facto approaches (He et al., 2016). We highlight two state-of-the-art techniques which we study.
+
+Mixup: Mixup (Zhang et al., 2018) manipulates both the features and the labels in order to encourage linearly interpolating predictions. Given an example $(x_{i},y_{i})$ , Mixup applies
+
+$$
+\tilde{x}_{i} = \lambda x_{i} + (1 - \lambda) x_{j}, \quad \tilde{y}_{i} = \lambda y_{i} + (1 - \lambda) y_{j}. \tag{3}
+$$
+
+Here, $x_{j}$ is sampled from the training dataset (taken from the minibatch), and $\lambda \sim \mathrm{Beta}(a,a)$ for a fixed hyperparameter $a > 0$ .
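Eq. (3) is a two-line operation in practice. A minimal sketch (function name is ours; labels are assumed one-hot so the interpolation produces soft labels):

```python
import numpy as np

def mixup(x1, y1, x2, y2, a=1.0, rng=None):
    """Mixup of two labeled examples (Eq. 3); y1, y2 are one-hot vectors."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(a, a)                 # lambda ~ Beta(a, a)
    x_mix = lam * x1 + (1.0 - lam) * x2
    y_mix = lam * y1 + (1.0 - lam) * y2
    return x_mix, y_mix
```

Note that the mixed label is a soft distribution over classes; this label softening is central to the under-confidence pathology studied later in the paper.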
+
+Mixup was shown to be effective for generalization and calibration of deep neural networks (Zhang et al., 2018; Thulasidasan et al., 2019b). Recent work has investigated why Mixup improves generalization (Guo et al., 2018; Shimada et al., 2019) and adversarial robustness (Beckham et al., 2019; Pang et al., 2020; Mangla et al., 2020). Given Mixup's simplicity, many extensions have been proposed with further improvements (Yun et al., 2019; Berthelot et al., 2019; Verma et al., 2019; Roady et al., 2020; Chou et al., 2020).
+
+AugMix: Searching or sampling over a set of data augmentation operations can lead to significant improvement on both generalization error and calibration (Cubuk et al., 2019b;a). AugMix (Hendrycks et al., 2020) applies a sum of augmentations, each with random weighting, with a Jensen-Shannon consistency loss to encourage similarity across the augmentations. AugMix achieves state-of-the-art calibration across in- and out-of-distribution tasks. Let $\mathcal{O}$ be the set of data augmentation operations and $k$ be the number of AugMix iterations. AugMix samples $w_{1},\ldots ,w_{k}\sim \mathrm{Dirichlet}(a,\ldots ,a)$ for a fixed hyperparameter $a > 0$ and $\mathrm{op}_1,\dots ,\mathrm{op}_k$ from $\mathcal{O}$ . Given an interpolation parameter $m$ , sampled from $\mathrm{Beta}(a,a)$ , the augmented input $\tilde{x}_{augmix}$ is:
+
+$$
+\tilde{x}_{\text{augmix}} = m x_{\text{orig}} + (1 - m) x_{\text{aug}}, \quad x_{\text{aug}} = \sum_{i = 1}^{k} w_{i} \, \mathrm{op}_{i}(x_{\text{orig}}). \tag{4}
+$$
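The image-mixing part of Eq. (4) can be sketched as follows; the Jensen-Shannon consistency loss used during training is omitted, and the function name and `ops` callback are assumptions of this example:

```python
import numpy as np

def augmix(x_orig, ops, k=3, a=1.0, rng=None):
    """AugMix combination from Eq. (4): a Dirichlet-weighted sum of k
    augmented copies, interpolated with the original by m ~ Beta(a, a).

    ops: list of augmentation functions, each mapping image -> image.
    """
    rng = rng or np.random.default_rng()
    w = rng.dirichlet([a] * k)             # w_1..w_k ~ Dirichlet(a,...,a)
    m = rng.beta(a, a)
    x_aug = np.zeros_like(x_orig, dtype=float)
    for i in range(k):
        op = ops[rng.integers(len(ops))]   # sample op_i from the set O
        x_aug += w[i] * op(x_orig)
    return m * x_orig + (1.0 - m) * x_aug
```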
+
+# 3 MIXUP-ENSEMBLE PATHOLOGY
+
+We seek to understand the effect of data augmentations on ensembles. In particular, we hope to verify the hypothesis of compounding improvements when combining the seemingly orthogonal techniques of data augmentation and ensembles. To our surprise, we find that augmentation techniques can be detrimental to ensemble calibration.
+
+# 3.1 THE SURPRISING MISCALIBRATION OF ENSEMBLES WITH MIXUP
+
Ensembles are among the best-known and simplest approaches to improving calibration (Ovadia et al., 2019; Lakshminarayanan et al., 2017), and Thulasidasan et al. (2019b) showed that Mixup improves calibration in a single network. Motivated by this, Fig. 1 applies Mixup to each ensemble member on CIFAR-10/CIFAR-100 with WideResNet 28-10 (Zagoruyko and Komodakis, 2016). Here, we searched over Mixup's hyperparameter $\alpha$ (Eq. 3) and found that $\alpha = 1$ gives the best result, corroborating the finding of Zhang et al. (2018). All data points in Fig. 1 are averaged over 5 random seeds.
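For reference, the Mixup operation of Eq. 3 applied to each ensemble member can be sketched at the batch level as follows. This is a minimal sketch with hypothetical shapes and helper names of our own; it is not the paper's implementation.

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=1.0, rng=np.random.default_rng(0)):
    """Mixup sketch (Eq. 3): convexly combine a batch with a shuffled
    copy of itself, in both the inputs and the one-hot labels."""
    lam = rng.beta(alpha, alpha)       # lambda ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))     # random pairing of examples
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix

x = np.arange(8.0).reshape(4, 2)
y = np.eye(3)[[0, 1, 2, 0]]            # one-hot labels for 3 classes
x_mix, y_mix = mixup_batch(x, y)
```

The soft labels `y_mix` are exactly what, in our analysis, conflates data uncertainty with model uncertainty once ensembling is added.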
+
Figs. 1a and 1b demonstrate improved test accuracy (Red: ensembles without Mixup; Blue: ensembles with Mixup). However, if we shift focus to the calibration error in Figs. 1c and 1d, it is evident that combining Mixup with ensembles leads to worse calibration (Red to Blue). This is counterintuitive: we would expect Mixup, which improves the calibration of individual models (Thulasidasan et al., 2019a), to also improve the calibration of their ensemble. Fig. 1 confirms this pattern across BatchEnsemble (BE), MC-dropout (MC), and deep ensembles (DE). This pathology also occurs on ImageNet, as seen in Table 1.

Figure 1: WideResNet 28-10 on CIFAR-10/CIFAR-100. Panels: (a) CIFAR-10 error (%); (b) CIFAR-100 error (%); (c) CIFAR-10 ECE (%); (d) CIFAR-100 ECE (%). Red: Ensembles without Mixup; Blue: Ensembles with Mixup; Orange: Individual models in ensembles without Mixup. (a) & (b): Applying Mixup to different ensemble methods consistently improves test accuracy. (c) & (d): Applying Mixup to different ensemble methods harms calibration. Averaged over 5 random seeds.
+
+Why do Mixup ensembles degrade calibration? To investigate this in more detail, Fig. 2 plots a variant of reliability diagrams (DeGroot and Fienberg, 1983) on BatchEnsemble. We bin the predictions into $M = 15$ equally spaced intervals based on their confidence (softmax probabilities) and compute the difference between the average confidence and the average accuracy as in Eq. 1 for each bin. Fig. 2 tracks this difference over varying confidence levels. A positive difference (Acc-Conf) implies under-confidence with respect to the true frequencies; negative implies over-confidence; and zero implies perfect calibration.
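The binning procedure behind these reliability diagrams can be sketched as follows. This is a minimal sketch (`reliability_gaps` is a hypothetical helper name of ours); ECE as in Eq. 1 would additionally weight each bin's absolute gap by its fraction of examples.

```python
import numpy as np

def reliability_gaps(probs, labels, n_bins=15):
    """Bin predictions by confidence and return per-bin (Acc - Conf),
    as in the reliability diagrams of Fig. 2 (sketch)."""
    conf = probs.max(axis=1)               # confidence = max softmax probability
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    gaps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        # Positive gap: under-confident; negative: over-confident; empty bin: 0.
        gaps.append(correct[mask].mean() - conf[mask].mean() if mask.any() else 0.0)
    return np.array(gaps)

probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]])
gaps = reliability_gaps(probs, np.array([0, 1, 1]))
```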
+
Figure 2: Reliability diagrams on CIFAR-100 with a WideResNet 28-10.

The backbone model in Fig. 2 is BatchEnsemble with an ensemble size of 4 (we found the results consistent for MC-Dropout and Deep-Ensemble as well). The figure presents 4 methods: Single: vanilla WideResNet 28-10; MixupSingle: WideResNet 28-10 trained with Mixup; BatchEnsemble: vanilla BatchEnsemble of WideResNet 28-10; MixupBE: BatchEnsemble of WideResNet 28-10 trained with Mixup. Fig. 2 shows that only models trained with Mixup have positive (Acc - Conf) values on the test set, which suggests that Mixup encourages under-confidence. The Mixup ensemble's under-confidence is also greater in magnitude than that of the individual Mixup models. This suggests that Mixup ensembles suffer from compounding under-confidence, leading to worse calibration for the ensemble than for the individual Mixup models. This is contrary to the intuition that ensembling always improves calibration.
+
To further visualize this issue, Appendix C's Fig. 8 investigates the confidence (softmax probability) surface of deep ensembles and Mixup when trained on a toy dataset consisting of 5 clusters, each with a different radius. We ensemble over 4 independently trained copies of 3-layer MLPs. The deep ensemble's predictive confidence over the entire input space is plotted in Fig. 8c: the resulting predictions are extremely confident except at the decision boundaries, and the deep ensemble remains highly confident even in the area nearest the origin, where confidence should be lower. On the other hand, Fig. 8d shows that the Mixup ensemble is confident only in a tightly constrained area around the training clusters, yielding an overall under-confident classifier and confirming our hypothesis of compounding under-confidence.
+
+# 3.2 IS THE PATHOLOGY SPECIFIC TO MIXUP?
+
+At the core of the issue is that Mixup conflates data uncertainty (uncertainty inherent to the data generating process) with model uncertainty. Soft labels can correct for over-confidence in single models which have no other recourse to improve uncertainty estimates. However, when combined with ensembles, which incorporate model uncertainty, this correction may be unnecessary. Because image classification benchmarks tend to be deterministic, soft labels encourage predictions on training data to be less confident about their true targets even if they are correct. We validate this hypothesis by showing it also applies to label smoothing.
+
Label Smoothing: Like Mixup, label smoothing applies soft labels: it smooths decision boundaries by assigning a data point's true class probability $(1 - \alpha)$, with probability $\alpha$ spread equally across the other classes. Using the same experimental setup as before, we apply increasing levels of label smoothing to ensembles of WideResNet 28-10 models trained on CIFAR-10. Fig. 3 demonstrates the harmful effect of label smoothing on CIFAR-10 ECE, particularly when it is aggressive (coeff $\geq 0.2$). In concurrent work, Qin et al. (2020) found that combining label smoothing with ensembling leads to worse calibration, and showed that adjusting model confidence successfully corrects the compounding under-confidence.
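The label-smoothing target described above can be sketched as follows (a minimal sketch; following the description in the text, the mass $\alpha$ is spread over the $K-1$ non-true classes):

```python
import numpy as np

def smooth_labels(labels, num_classes, alpha=0.1):
    """Label smoothing sketch: the true class receives probability
    1 - alpha; the mass alpha is split equally over the other classes."""
    y = np.full((len(labels), num_classes), alpha / (num_classes - 1))
    y[np.arange(len(labels)), labels] = 1.0 - alpha
    return y

y = smooth_labels(np.array([0, 2, 1]), num_classes=4, alpha=0.1)
```

As with Mixup, the resulting targets are soft even on correctly labeled, unambiguous examples, which is the mechanism behind the compounding under-confidence when combined with ensembles.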
+
+
+Figure 3: ECE and Error on CIFAR-10 with label smoothing on MC Dropout, Deep Ensembles, and BatchEnsemble. ECE degrades with label smoothing, particularly when it is more aggressive $(\geq 0.2)$ .
+
+# 4 CONFIDENCE ADJUSTED MIXUP ENSEMBLES (CAMIXUP)
+
In this section, we aim to fix the compounding under-confidence issue that arises when combining Mixup and ensembles, without sacrificing the improved accuracy on both in- and out-of-distribution data.
+
+# 4.1 CLASS BASED CAMIXUP
+
Mixup encourages model under-confidence, as shown in Fig. 2. Notice that Mixup assigns a single hyperparameter $\alpha$ uniformly to all examples in the training set. To improve Mixup, we start from the intuition that in classification, some classes tend to be more difficult to predict than others. This is confirmed by Fig. 4a, which provides examples of per-class test accuracy. Ideally, we prefer our model to be confident when predicting easy classes such as cars and ships. For harder classes like cats and dogs, the model is encouraged to be less confident in order to achieve better calibration.
+
Therefore, instead of a uniform Mixup hyperparameter for all classes, we propose to adjust the Mixup hyperparameter of each class by the difference between its accuracy and confidence. CAMixup's intuition is that we want to apply Mixup to hard classes, on which models tend to be over-confident. To easy examples, we apply only standard data augmentation without Mixup. This partially prevents models from being over-confident on difficult classes while maintaining Mixup's good calibration on out-of-distribution inputs.[2]
+
Denote the accuracy and confidence of class $i$ as $\mathrm{Acc}(C_i)$ and $\mathrm{Conf}(C_i)$, defined as $\mathrm{Acc}(C_i) = \frac{1}{|C_i|} \sum_{x_j \in C_i} \mathbb{1}(\hat{y}_j = i)$ and $\mathrm{Conf}(C_i) = \frac{1}{|C_i|} \sum_{x_j \in C_i} \hat{p}_j$, where $\hat{p}_j$ is the confidence of the prediction on $x_j$. We adjust Mixup's $\lambda$ in Eq. 3 by the sign of $\mathrm{Acc}(C_i) - \mathrm{Conf}(C_i)$:
+
$$
\lambda_{i} = \begin{cases} 0 & \mathrm{Acc}(C_{i}) > \mathrm{Conf}(C_{i}) \\ \lambda & \mathrm{Acc}(C_{i}) \leq \mathrm{Conf}(C_{i}). \end{cases} \tag{5}
$$
+
Figure 4: Left (a): An illustration of the proposed CAMixup method, with selected per-class test accuracies shown in brown; overall test accuracy is $96.2\%$ on CIFAR-10. Right (b): Number of epochs (out of 250) in which CAMixup enables Mixup for selected classes in BatchEnsemble. CAMixup tends to assign Mixup to hard classes. Counts are accumulated individually for each ensemble member (ensemble size 4).
+
+If the model is already under-confident on class $i$ ( $\mathrm{Acc}(C_i) > \mathrm{Conf}(C_i)$ ), Mixup is not applied to examples in the class, and $\lambda_i = 0$ . However, if $\mathrm{Acc}(C_i) \leq \mathrm{Conf}(C_i)$ , the model is over-confident on this class, and Mixup is applied to reduce model confidence. We compute the accuracy and confidence on a validation dataset after each training epoch.
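The per-class adjustment of Eq. 5 can be sketched as follows. This is a minimal sketch: we assume `val_probs` are validation-set softmax outputs, take each example's confidence to be its maximum softmax probability (an assumption on our part), and assume every class appears in the validation set.

```python
import numpy as np

def camixup_lambda_per_class(val_probs, val_labels, num_classes, lam):
    """CAMixup sketch (Eq. 5): per class, disable Mixup (lambda_i = 0)
    when the model is already under-confident (Acc > Conf), otherwise
    keep the usual lambda. Recomputed on a validation set each epoch."""
    pred = val_probs.argmax(axis=1)
    conf = val_probs.max(axis=1)
    lambdas = np.empty(num_classes)
    for c in range(num_classes):
        mask = (val_labels == c)           # examples whose true class is c
        acc_c = (pred[mask] == c).mean()
        conf_c = conf[mask].mean()
        lambdas[c] = 0.0 if acc_c > conf_c else lam
    return lambdas

# Class 0 is under-confident (acc 1.0 > conf 0.85): Mixup disabled.
# Class 1 is over-confident (acc 0.5 <= conf 0.65): Mixup enabled.
val_probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4], [0.3, 0.7]])
val_labels = np.array([0, 0, 1, 1])
lambdas = camixup_lambda_per_class(val_probs, val_labels, 2, lam=1.0)
```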
+
Figure 5: WideResNet 28-10 on CIFAR-10/CIFAR-100. Panels: (a) CIFAR-10 error (%); (b) CIFAR-100 error (%); (c) CIFAR-10 ECE (%); (d) CIFAR-100 ECE (%). Red: Ensembles without Mixup; Blue: Ensembles with Mixup; Green: Our proposed CAMixup improves both accuracy and ECE of ensembles.
Notice that $\lambda_{i}$ is dynamically updated at the end of each epoch. To understand which classes are more often assigned the Mixup operation, Fig. 4 counts the number of epochs in which $\lambda_{i} > 0$ throughout training; the maximum possible count is the total number of training epochs, which is 250 for the BatchEnsemble model. We find that CAMixup rarely enables Mixup for easy classes such as cars and ships: the count is less than $10\%$ of the total epochs. For harder classes like cats and dogs, CAMixup applies Mixup almost every epoch, accounting for more than $80\%$ of total epochs. In summary, Fig. 4 shows that CAMixup reduces model confidence on difficult classes and encourages model confidence on easy classes, leading to better overall calibration. Appendix D.1's Fig. 9a also shows that CAMixup effectively shifts confidence toward the lower region.
+
Fig. 5 presents results of CAMixup on the CIFAR-10 and CIFAR-100 test sets, comparing the effect of Mixup and CAMixup on different ensembling strategies (BatchEnsemble, MC Dropout, Deep Ensembles). Adding Mixup to ensembles improves accuracy but worsens ECE. Adding CAMixup to ensembles significantly improves accuracy in all cases. More importantly, the calibration results in Figs. 5c and 5d show that CAMixup ensembles are significantly better calibrated than Mixup ensembles; for instance, CAMixup reduces ECE by more than 5X for BatchEnsemble relative to Mixup. We observe a minor decrease in test accuracy (at most $0.2\%$) when comparing CAMixup ensembles with Mixup ensembles, but we believe this is a worthwhile trade-off given the significant improvement in test ECE.
+
+Table 1 presents similar experiments applied to ResNet-50 on ImageNet, using BatchEnsemble as the base ensembling strategy. These results are state of the art to the best of our knowledge: Dusenberry et al. (2020) report $1.7\%$ ECE with Rank-1 Bayesian neural nets and $3.0\%$ with Deep Ensembles; Thulasidasan et al. (2019a) report $3.2\%$ for ResNet-50 with Mixup, $2.9\%$ for ResNet-50 with an entropy-regularized loss, and $1.8\%$ for ResNet-50 with label smoothing.
+
Table 1: BatchEnsemble with ensemble size 4 on ImageNet.

| Method | Acc (%) | ECE |
| --- | --- | --- |
| BatchEnsemble | 77.0 | 2.0% |
| MixupBE | 77.5 | 2.1% |
| CAMixupBE | 77.4 | 1.5% |
+
+# 4.2 PERFORMANCE UNDER DISTRIBUTION SHIFT
+
Here, we assess model resilience to covariate shift by evaluating on the CIFAR-10-C and CIFAR-100-C benchmarks (C stands for corruptions) proposed by Hendrycks and Dietterich (2019a), which apply 15 types of corruptions, each at 5 levels of intensity. We evaluate the performance of CAMixup vs. Mixup when applied to different ensembles, and report the average error and ECE across the different corruption types and intensities.
+
Fig. 6a shows that Mixup improves accuracy on the corrupted dataset because of its strong regularization effect. However, models tend to be over-confident as one moves further from the original distribution (higher corruption intensities), so encouraging under-confidence is not an issue there. This explains why Mixup ensembles maintain low ECE on the out-of-distribution test data in Fig. 6b.
+
Figure 6: WideResNet 28-10 on CIFAR-10-C. Panels: (a) CIFAR-10-C error; (b) CIFAR-10-C ECE. Red: Ensembles without Mixup; Blue: Ensembles with Mixup; Green: Ensembles with CAMixup (ours).
+
Fig. 6b also shows that CAMixup's calibration on out-of-distribution data (CIFAR-10-C) is on par with that of Mixup ensembles. We observe the same result on CIFAR-100-C (Appendix D.1's Fig. 9). Thus, we successfully improve model calibration on in-distribution data without sacrificing calibration on out-of-distribution data.
+
+# 5 COMPOUNDING THE BENEFITS OF CAMIXUP WITH AUGMIX ENSEMBLES
+
We have investigated why certain data augmentation schemes may not provide complementary benefits to ensembling, and proposed class-adjusted Mixup (CAMixup), which improves both accuracy and ECE over vanilla ensembles. We believe the insights from our work will allow the community and practitioners to compound state-of-the-art performance. We provide two concrete examples.
+
+# 5.1 AUGMIX
+
We show how CAMixup can compound performance gains over ensembles of models trained with AugMix, which Hendrycks et al. (2020) showed to achieve state-of-the-art accuracy and calibration on both clean and corrupted benchmarks. We primarily focus on improving BatchEnsemble and investigate whether adding better data augmentation schemes closes the gap between memory-efficient ensembles (BatchEnsemble) and independent deep ensembles.
+
As discussed in Section 2.3, AugMix uses only label-preserving transformations; therefore AugMix provides complementary benefits to ensembles (and CAMixup). This is consistent with calibration improvements reported for ensemble methods in the literature, which apply standard data augmentations such as random flips that likewise do not smooth labels.
+
We consider a combination of AugMix and Mixup, as it allows the model to encounter both diverse label-preserving augmentations and soft labels under a linear interpolation regime.
+
Figure 7: Performance of BatchEnsemble under dataset shift. Panels: (a) CIFAR-10 error; (b) CIFAR-10 ECE; (c) CIFAR-100 error; (d) CIFAR-100 ECE. Mixup and AugMixup improve accuracy and calibration under shift but significantly worsen in-distribution calibration. Our proposed CAMixup and AugCAMixup improve both accuracy and calibration.
+
| Method | CIFAR-10 Acc (↑) | CIFAR-10 ECE (↓) | CIFAR-10 cA/cECE | CIFAR-100 Acc (↑) | CIFAR-100 ECE (↓) | CIFAR-100 cA/cECE |
| --- | --- | --- | --- | --- | --- | --- |
| AugMix BE | 97.36 | 1.02% | 89.49/2.6% | 83.57 | 2.96% | 67.12/7.1% |
| AugMixup BE | 97.52 | 1.71% | 90.05/2.8% | 83.77 | 4.19% | 69.26/4.8% |
| AugCAMixup BE | 97.47 | 0.45% | 89.81/2.4% | 83.74 | 2.35% | 68.71/4.4% |

Table 2: Results for WideResNet 28-10 BatchEnsemble on in- and out-of-distribution CIFAR-10/100 with various data augmentations, averaged over 3 seeds (cA/cECE: accuracy and ECE on the corrupted test sets). AugMix BE: AugMix + BatchEnsemble; AugMixup BE: AugMix + Mixup BatchEnsemble; AugCAMixup BE: AugMix + CAMixup BatchEnsemble. Adding Mixup to the AugMix model increases test accuracy and corrupted accuracy at the cost of calibration decay on the test set. CAMixup bridges this gap with only a minor drop in accuracy.
+
The combination AugMixup (AugMix + Mixup) can be written as
+
$$
x = \lambda\, \mathrm{AugMix}(x_{1}) + (1 - \lambda)\, \mathrm{AugMix}(x_{2}), \quad y = \lambda\, y_{1} + (1 - \lambda)\, y_{2}. \tag{6}
$$
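A minimal sketch of Eq. 6 follows; the identity `augmix_fn` here is a toy placeholder of ours, where a real AugMix transform would be substituted.

```python
import numpy as np

def augmixup(x1, y1, x2, y2, augmix_fn, alpha=1.0,
             rng=np.random.default_rng(0)):
    """AugMixup sketch (Eq. 6): apply AugMix independently to two
    examples, then Mixup the augmented inputs and their labels."""
    lam = rng.beta(alpha, alpha)
    x = lam * augmix_fn(x1) + (1 - lam) * augmix_fn(x2)
    y = lam * y1 + (1 - lam) * y2
    return x, y

# Toy stand-in for AugMix; the real version mixes k random augmentations.
x, y = augmixup(np.ones(4), np.array([1.0, 0.0]),
                np.zeros(4), np.array([0.0, 1.0]),
                augmix_fn=lambda x: x)
```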
+
Consistent with the earlier results on Mixup, Table 2 shows that combining AugMixup with BatchEnsemble improves accuracy but worsens ECE, leading to under-confidence on in-distribution data (Appendix D.2's Fig. 10). With our proposed fix, CAMixup, the combination AugCAMixup (AugMix + CAMixup) improves calibration while retaining the highest accuracy among the ensembles. Fig. 7 shows detailed results on CIFAR-10-C and CIFAR-100-C. Similar to Mixup, AugMixup improves calibration under shift but worsens in-distribution calibration. Our proposed AugCAMixup, however, improves the accuracy and calibration of ensembles on both clean and corrupted data.
+
+To the best of our knowledge, these results are state-of-the-art in the literature: Dusenberry et al. (2020) report $0.8\%$ ECE and $1.8\%$ ECE for CIFAR-10 and CIFAR-100 along with $8\%$ and $11.7\%$ ECE for corruptions; Guo et al. (2017) report $0.54\%$ and $2.3\%$ ECE for the smaller Wide ResNet 32 on CIFAR-10 and CIFAR-100 with temperature scaling ( $93\%$ and $72\%$ accuracy), and Ovadia et al. (2019) demonstrated that temperature scaling does not extend to distribution shift.
+
+# 5.2 TEMPERATURE SCALING
+
In concurrent work, Rahaman and Thiery (2020) consider the interplay between data augmentation and ensembling for calibration. They also find that Mixup ensembles can be under-confident and propose temperature scaling as a solution. Their core finding is the same as ours, but the works differ in slight ways: we extend the analysis by showing that the compounding under-confidence applies to other soft-label techniques such as label smoothing, and we propose CAMixup as a solution. Post-hoc calibration techniques like temperature scaling are complementary to our proposal and do not address the core conflation issue with Mixup. Corroborating the findings of Ovadia et al. (2019), Appendix G shows that combining CAMixup and temperature scaling can further improve test calibration error, but it does not improve out-of-distribution calibration. Another concurrent work showed that calibrated ensemble members do not always lead to calibrated ensemble predictions (Anonymous, 2021).
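For completeness, post-hoc temperature scaling can be sketched as follows. This is a minimal sketch: in practice $T$ is fit on a validation set, and for under-confident Mixup ensembles a $T < 1$ would sharpen predictions rather than soften them.

```python
import numpy as np

def temperature_scale(logits, T):
    """Temperature scaling sketch: rescale logits by a scalar T before
    the softmax. T > 1 softens (reduces confidence); T < 1 sharpens."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 0.0, -1.0]])
p1 = temperature_scale(logits, T=1.0)      # unscaled softmax
p2 = temperature_scale(logits, T=10.0)     # softened predictions
```

Because the scaling is applied uniformly to all inputs, it cannot fix miscalibration that varies across the input space, which is why it does not extend to distribution shift (Ovadia et al., 2019).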
+
+# 6 CONCLUSION
+
Contrary to existing wisdom in the literature, we find that combining ensembles and Mixup consistently degrades calibration performance across three ensembling techniques. From a detailed analysis, we identify a compounding under-confidence: Mixup's soft labels (and, more broadly, label-based augmentation strategies) introduce a negative confidence bias that hinders their combination with ensembles. To correct this, we propose CAMixup, which applies Mixup only to those classes on which the model tends to be over-confident, modulated throughout training. CAMixup combines well with state-of-the-art methods, producing new state-of-the-art calibration across CIFAR-10, CIFAR-100, and ImageNet while obtaining competitive accuracy. Appendix H discusses potential future work and limitations of CAMixup.
+
+# REFERENCES
+
+Chirag Agarwal and Sara Hooker. Estimating example difficulty using variance of gradients. arXiv preprint arXiv:2008.11600, 2020.
+Anonymous. Should ensemble members be calibrated? In Submitted to International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=twLfuDkvKp. under review.
+Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov, and Dmitry Vetrov. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning. In International Conference on Learning Representations, 2020.
+Christopher Beckham, Sina Honari, Alex Lamb, Vikas Verma, Farnoosh Ghadiri, R. Devon Hjelm, and Christopher Joseph Pal. Adversarial mixup resynthesizers. *ArXiv*, abs/1903.02709, 2019.
+David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems, pages 5049-5059, 2019.
+Hsin-Ping Chou, S. Chang, J. Pan, Wei Wei, and D. Juan. Remix: Rebalanced mixup. ArXiv, abs/2007.03943, 2020.
+Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. Randaugment: Practical automated data augmentation with a reduced search space. arXiv: Computer Vision and Pattern Recognition, 2019a.
+Ekin Dogus Cubuk, Barret Zoph, Dandelion Mane, V. Vasudevan, and Quoc V. Le. Autoaugment: Learning augmentation strategies from data. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 113-123, 2019b.
+Morris H. DeGroot and Stephen E. Fienberg. The Comparison and Evaluation of Forecasters. The Statistician, 32(1/2):12, March 1983. ISSN 00390526. doi: 10.2307/2987588. URL https://www.jstor.org/stable/10.2307/2987588?origin=crossref.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. Imagenet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009.
+Thomas G. Dietterich. Ensemble methods in machine learning. In Multiple Classifier Systems, 2000.
+Michael W Dusenberry, Dustin Tran, Edward Choi, Jonas Kemp, Jeremy Nixon, Ghassen Jerfel, Katherine Heller, and Andrew M Dai. Analyzing the role of model uncertainty for electronic health records. arXiv preprint arXiv:1906.03842, 2019.
+Michael W. Dusenberry, Ghassen Jerfel, Yeming Wen, Yi-an Ma, Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan, and Dustin Tran. Efficient and scalable Bayesian neural nets with rank-1 factors. In ICML, 2020.
+Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. Deep Ensembles: A Loss Landscape Perspective. arXiv preprint arXiv:1912.02757, 2019.
+Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, 2016.
+Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On Calibration of Modern Neural Networks. In International Conference on Machine Learning (ICML), volume cs.LG. Cornell University Library, August 2017. URL http://arxiv.org/abs/1706.04599v2.
+Hongyu Guo, Yongyi Mao, and Richong Zhang. Mixup as locally linear out-of-manifold regularization. In AAAI, 2018.
+Hongyu Guo, Yongyi Mao, and Richong Zhang. Augmenting data with mixup for sentence classification: An empirical study. arXiv preprint arXiv:1905.08941, 2019.
+
+Danijar Hafner, Dustin Tran, Alex Irpan, Timothy Lillicrap, and James Davidson. Reliable uncertainty estimates in deep neural networks using noise contrastive priors. arXiv preprint arXiv:1807.09289, 2018.
+Lars Kai Hansen and Péter Salamon. Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell., 12:993-1001, 1990.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition, 2016.
+Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In International Conference on Learning Representations, 2019a. URL https://openreview.net/forum?id=HJz6tiCqYm.
+Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In International Conference on Learning Representations, 2019b.
+Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. ArXiv, abs/1912.02781, 2020.
+Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks with stochastic depth. In ECCV, 2016.
+Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E Hopcroft, and Kilian Q Weinberger. Snapshot ensembles: Train 1, get m for free. arXiv preprint arXiv:1704.00109, 2017.
+Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. In Uncertainty in Artificial Intelligence, 2018.
+Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Neural Information Processing Systems, pages 1097-1105, 2012.
+Anders Krogh and Jesper Vedelsby. Neural network ensembles, cross validation, and active learning. In Advances in neural information processing systems, pages 231-238, 1995.
+Ananya Kumar, Percy Liang, and Tengyu Ma. Verified uncertainty calibration. ArXiv, abs/1909.10155, 2019.
+Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Neural Information Processing Systems, 2017.
+Puneet Mangla, Vedant Singh, Shreyas Jayant Havaldar, and Vineeth N. Balasubramanian. Varmixup: Exploiting the latent space for robust training and inference. ArXiv, abs/2003.06566, 2020.
+Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht. Obtaining Well Calibrated Probabilities Using Bayesian Binning. In AAAI Conference on Artificial Intelligence, volume 2015, pages 2901-2907, January 2015. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4410090/pdf/nihms679964.pdf.
+Jeremy Nixon, Mike Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. Measuring Calibration in Deep Learning. arXiv:1904.01685 [cs, stat], April 2019. URL http://arxiv.org/abs/1904.01685.
+Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D Sculley, Sebastian Nowozin, Joshua V Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. In Neural Information Processing Systems, 2019.
+Tianyu Pang, Kun Xu, and Jun Zhu. Mixup inference: Better exploiting mixup to defend adversarial attacks. ArXiv, abs/1909.11515, 2020.
+
+Michael P. Perrone and Leon N. Cooper. When networks disagree: Ensemble methods for hybrid neural networks. 1992.
+Yao Qin, Xuezhi Wang, Alex Beutel, and Ed Huai hsin Chi. Improving uncertainty estimates through the relationship with adversarial robustness. *ArXiv*, abs/2006.16375, 2020.
+Rahul Rahaman and Alexandre H Thiery. Uncertainty quantification and deep ensembles. arXiv preprint arXiv:2007.08792, 2020.
+Ryne Roady, T. Hayes, and Christopher Kanan. Improved robustness to open set inputs via tempered mixup. ArXiv, abs/2009.04659, 2020.
+Alejandro Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. CoRR, abs/1412.6550, 2015.
+David Ruppert. Efficient estimations from a slowly convergent Robbins-Monro process. Technical report, Cornell University Operations Research and Industrial Engineering, 1988.
+Takuya Shimada, Shoichiro Yamaguchi, Kohei Hayashi, and Sosuke Kobayashi. Data interpolating prediction: Alternative interpretation of mixup. ArXiv, abs/1906.08412, 2019.
+Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.
+Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. CoRR, abs/1505.00387, 2015.
+Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Neural Information Processing Systems, 2014.
+Sunil Thulasidasan, Gopinath Chennupati, Jeff A Bilmes, Tanmoy Bhattacharya, and Sarah Michalak. On mixup training: Improved calibration and predictive uncertainty for deep neural networks. In Advances in Neural Information Processing Systems, pages 13888-13899, 2019a.
+Sunil Thulasidasan, Gopinath Chennupati, Jeff A. Bilmes, Tanmoy Bhattacharya, and Sarah Ellen Michalak. On mixup training: Improved calibration and predictive uncertainty for deep neural networks. In NeurIPS, 2019b.
+Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. arXiv preprint arXiv:1812.05159, 2018.
+Juozas Vaicenavicius, D. Widmann, Carl R. Andersson, F. Lindsten, J. Roll, and Thomas Bo Schön. Evaluating model calibration in classification. In AISTATS, 2019.
+Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In International Conference on Machine Learning, pages 6438-6447. PMLR, 2019.
+Yeming Wen, Dustin Tran, and Jimmy Ba. BatchEnsemble: An alternative approach to efficient ensemble and lifelong learning. In International Conference on Learning Representations, 2020.
+Florian Wenzel, Jasper Snoek, Dustin Tran, and Rodolphe Jenatton. Hyperparameter ensembles for robustness and uncertainty quantification. In Neural Information Processing Systems, 2020.
+D. Widmann, F. Lindsten, and D. Zachariah. Calibration tests in multi-class classification: A unifying framework. *ArXiv*, abs/1910.11385, 2019.
+Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE International Conference on Computer Vision, pages 6023-6032, 2019.
+Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
+Hongyi Zhang, Moustapha Cisse, Yann Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. ArXiv, abs/1710.09412, 2018.
+
+# A DATASET DETAILS
+
CIFAR & CIFAR-C: We consider two CIFAR datasets, CIFAR-10 and CIFAR-100 (Krizhevsky, 2009). Each consists of a training set of size 50K and a test set of size 10K, comprising natural images of 32x32 pixels. Each class has 5,000 training images on CIFAR-10 and 500 on CIFAR-100. In our experiments, we follow the standard data pre-processing scheme: zero-padding with 4 pixels on each side, random crop, and horizontal flip (Romero et al., 2015; Huang et al., 2016; Srivastava et al., 2015). If a training method requires a validation set, as CAMixup does, we hold out 2,500 of the 50K training images as the validation set.
+
It is important to test whether models are well calibrated under distribution shift, and the CIFAR-10 corruption dataset (Hendrycks and Dietterich, 2019a) is designed to accomplish this. The dataset applies 15 types of corruptions to the images, each at 5 intensities; in total, CIFAR-10-C thus comprises 75 corrupted test sets. Note that the corrupted data is used only as a test set, never for training. Ovadia et al. (2019) benchmarked a number of methods on CIFAR-10-C. Similarly, the same corruptions can be applied to CIFAR-100 to obtain CIFAR-100-C.
+
ImageNet & ImageNet-C: We use the ILSVRC 2012 classification dataset (Deng et al., 2009), which consists of 1.2 million training images, 50,000 validation images, and 150,000 test images spanning 1,000 classes. We follow the data augmentation scheme of He et al. (2016), including random crop and random flip, to preprocess the training images. At test time, we apply a 224x224 center crop. Similarly to CIFAR-C, we apply 15 corruption types at 5 intensities each to obtain ImageNet-C (Hendrycks and Dietterich, 2019b).
+
+# B HYPERPARAMETERS IN SECTION 3
+
We kept the same set of hyperparameters as the BatchEnsemble model in Wen et al. (2020); all hyperparameters can be found in Table 3. The most sensitive hyperparameters we found are whether to use ensemble batch norm, which applies a separate batch norm layer for each ensemble member, and the value of random_sign_init, which controls the standard deviation of the Gaussian initialization of s and r. We kept BatchEnsemble on CIFAR-10 the same as Wen et al. (2020), which does not use ensemble batch norm, and enabled ensemble batch norm on CIFAR-100 and ImageNet. This allows us to use a larger standard deviation in the initialization. The random_sign_init is $-0.5$ on CIFAR-10 and $-0.75$ on CIFAR-100 and ImageNet. In the code, a negative value denotes the standard deviation of a Gaussian initialization (a positive value instead initializes with a Bernoulli distribution under that probability). We only use negative random_sign_init, i.e., only Gaussian initialization, in this work.
+
+| Dataset | CIFAR-10 | CIFAR-100 |
+| --- | --- | --- |
+| ensemble_size | 4 | 4 |
+| base_learning_rate | 0.1 | 0.1 |
+| per_core_batch_size | 64 | 64 |
+| num_cores | 8 | 8 |
+| lr_decay_ratio | 0.1 | 0.1 |
+| train_epochs | 250 | 250 |
+| lr_decay_epochs | [80, 160, 200] | [80, 160, 200] |
+| l2 | 0.0001 | 0.0003 |
+| random_sign_init | 0.5 | 0.75 |
+| SyncEnsemble_BN | False | True |
+
+Table 3: Hyperparameters used for BatchEnsemble in Section 3. CIFAR-10 and CIFAR-100 differ only in l2, random_sign_init, and whether SyncEnsemble_BN is used.
+
+# C EXCESSIVE UNDER-CONFIDENCE ON SYNTHETIC DATA
+
+To further understand the confidence surface of Mixup + Ensembles, we provide a visualization in Fig. 8. We train on a synthetic dataset consisting of 5 clusters, each with a different radius, and ensemble 4 independently trained copies of a 3-layer MLP. We plot the softmax probability surfaces of the Mixup-Single model, Deep-Ensemble, and Mixup-Ensemble. The softmax probabilities
+
+(a) Synthetic Data (b) Mixup-Single (c) Deep-Ensemble (d) Mixup-Ensemble
+
+Figure 8: Softmax probability surface of different ensemble methods (ensemble size 4) in the input space after training on synthetic data. Deep-Ensemble is over-confident in the area around the origin; Mixup-Ensemble leads to global under-confidence.
+
+(a) Reliability on CIFAR-100 (b) CIFAR-100 Error $(\%)$ (c) CIFAR-100 ECE $(\%)$
+
+Figure 9: Left: Reliability diagrams on CIFAR-100 with a WideResNet 28-10. Our proposed CAMixup fixes the under-confidence of Mixup BatchEnsemble, leading to better calibration. (b) & (c): Red: ensembles without Mixup; Blue: ensembles with Mixup; Green: our proposed CAMixup does not harm out-of-distribution performance.
+
+represent the model confidence. Fig. 8c shows that Deep-Ensemble predictions are extremely confident except at the decision boundaries. Fig. 8b shows that Mixup-Single is less confident than Deep-Ensemble; this is beneficial in the single-model context because single deep neural networks tend to be over-confident, and Mixup can partially correct this bias. In contrast, Fig. 8d shows that Mixup-Ensemble is only confident in a very constrained area around the training clusters, yielding an overall under-confident classifier, which confirms our postulation of compounding under-confidence.
+
+# D MORE CALIBRATION RESULTS OF MIXUP-BATCHENSEMBLE
+
+In Section 3.1, we demonstrated that combining Mixup and ensembles leads to worse calibration on the test set. In this appendix, we complement that conclusion with an analysis on corrupted datasets and with data augmentation techniques such as AugMix.
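For reference, the Mixup operation discussed throughout this section follows Zhang et al. (2018): convex combinations of pairs of inputs and their one-hot labels with a coefficient drawn from a Beta distribution. A minimal sketch, using one shared coefficient per batch for simplicity (per-pair coefficients are equally common):

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, seed=0):
    """Standard Mixup (Zhang et al., 2018).

    x: (batch, ...) inputs; y: (batch, classes) one-hot labels.
    Draws one lambda ~ Beta(alpha, alpha) for the whole batch and mixes
    each example with a randomly permuted partner.
    """
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]
    return x_mixed, y_mixed, lam
```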
+
+# D.1 SUPPLEMENTARY RESULTS ON CAMIXUP
+
+In this section, we provide supplementary results on CAMixup. Fig. 2 shows that combining Mixup and BatchEnsemble leads to excessive under-confidence. In Fig. 9a, we show that our proposed CAMixup fixes this issue by correcting the confidence bias, which explains why CAMixup achieves better calibration on the in-distribution test set. As demonstrated in Section 4.2, Mixup improves out-of-distribution performance because of its strong regularization effect, and our proposed CAMixup inherits Mixup's improvement on CIFAR-10-C. Fig. 9b and Fig. 9c show that this conclusion transfers seamlessly to CIFAR-100-C. We also supplement Fig. 5 with Table 4 and Table 5, which report the detailed numbers.
+
+(a) Reliability on CIFAR-10 (b) Reliability on CIFAR-100
+
+Figure 10: Reliability diagrams on CIFAR-10 and CIFAR-100. Both plots show that AugMix does not lead to under-confidence when combined with ensembles. However, if we combine AugMix with Mixup (AugMixup), the compounding under-confidence issue still exists, leading to suboptimal calibration. Our proposed AugCAMixup corrects this under-confidence bias.
+
+# D.2 SUPPLEMENTARY RESULTS ON AUGMIX
+
+In Section 3.1, we showed that Mixup cannot be combined with ensembles without sacrificing in-distribution calibration. As discussed in Section 2.3, AugMix only uses label-preserving transformations and does not modify the labels, so intuitively it does not reduce model confidence. We support this intuition with Fig. 10, which shows that AugMix does not lead to under-confidence and can therefore be combined with ensembles without calibration issues.
+
+In Table 2, we showed that combining AugMix and Mixup leads to worse calibration due to under-confidence, even though AugMix on its own does not cause it. To gain insight beyond scalar summaries, we also provide a reliability diagram analysis. Figure 10 shows that the under-confidence issue of AugMixup (AugMix + Mixup) persists, and Figs. 10a and 10b show that applying CAMixup to AugMix corrects this under-confidence bias. Our proposed CAMixup allows us to compound the benefits of ensembles and data augmentation to achieve the best possible performance.
+
+| Method (CIFAR-10) | Acc(↑) | ECE(↓) | cA/cE |
+| --- | --- | --- | --- |
+| BatchEnsemble | 96.22±0.07 | 1.8±0.2% | 77.5±0.3 / 12.9±1.2% |
+| Mixup BE | 96.98±0.08 | 6.4±0.4% | 80.0±0.4 / 9.3±0.3% |
+| CAMixup BE | 96.94±0.10 | 1.2±0.2% | 81.1±0.4 / 9.7±0.35% |
+
+Table 4: CIFAR-10 results for Wide ResNet-28-10 BatchEnsemble (Wen et al., 2020) (BE), averaged over 5 seeds. This table is used to supplement Fig. 5.
+
+| Method (CIFAR-100) | Acc(↑) | ECE(↓) | cA/cE |
+| --- | --- | --- | --- |
+| BatchEnsemble | 81.85±0.09 | 2.8±0.1% | 54.1±0.3 / 19.1±0.8% |
+| Mixup BE | 83.12±0.08 | 9.7±0.5% | 59.3±0.3 / 8.8±0.4% |
+| CAMixup BE | 83.02±0.10 | 2.3±0.1% | 59.7±0.3 / 8.9±0.4% |
+
+Table 5: CIFAR-100 results for Wide ResNet-28-10 BatchEnsemble (Wen et al., 2020) (BE), averaged over 5 seeds. This table is used to supplement Fig. 5.
+
+| Method | CIFAR-10 Acc(↑) | CIFAR-10 ECE(↓) | CIFAR-10 cA/cE | CIFAR-100 Acc(↑) | CIFAR-100 ECE(↓) | CIFAR-100 cA/cE |
+| --- | --- | --- | --- | --- | --- | --- |
+| Deep Ensembles | 96.66 | 0.78% | 76.80/9.8% | 82.7 | 2.1% | 54.1/13.8% |
+| Mixup DE | 97.11 | 6.15% | 83.33/8.0% | 83.90 | 9.42% | 61.02/8.9% |
+| CAMixup DE | 96.95 | 1.92% | 83.01/4.4% | 83.68 | 5.22% | 59.18/8.6% |
+| AugMix DE | 97.39 | 0.59% | 89.50/3.3% | 84.15 | 5.13% | 68.21/6.7% |
+| AugMixup DE | 97.56 | 2.71% | 90.03/4.3% | 84.85 | 6.86% | 69.31/7.6% |
+| AugCAMixup DE | 97.48 | 1.89% | 89.94/4.7% | 84.64 | 5.29% | 69.19/5.9% |
+
+Table 6: Mixup/AugMix/AugMixup/AugCAMixup on deep ensembles. Mixup worsens ensemble predictions on deep ensembles just as it does on BatchEnsemble, which suggests CAMixup can be used on deep ensembles as well. However, the improvement is less pronounced than on BatchEnsemble, so AugMix remains the best calibrated (in- and out-of-distribution) data augmentation strategy on deep ensembles.
+
+# E DEEP ENSEMBLES WITH MIXUP
+
+We showed in Section 4 that CAMixup improves the test-set calibration of Mixup BatchEnsemble without undermining its calibration under distribution shift. In this section, we show that the improvement can also be observed on deep ensembles. In Fig. 11, we show that the under-confidence bias observed for Mixup + BatchEnsemble also exists for Mixup + deep ensembles, with an even more pronounced trend. Beyond the commonly used ECE, we also explore other calibration metrics, which further confirm our under-confidence intuition. We briefly explain how ACE, SCE and TACE are computed.
+
+ACE is the same as ECE except for the binning scheme: rather than dividing the confidence range into equal-width bins, ACE uses an adaptive scheme that spaces the bin edges so that each bin contains an equal number of predictions. SCE is the same as ECE except that it accounts for all classes in the calibration measure rather than only the class with maximum probability. Softmax predictions assign many classes infinitesimal probabilities, and these tiny predictions can wash out the calibration score; TACE addresses this by setting a threshold so that only predictions with sufficiently large predictive probability are included.
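A hedged sketch contrasting the two binning schemes (equal-width for ECE, equal-count for ACE); the 15-bin default and other implementation details are illustrative choices, not the exact estimators of Nixon et al. (2019):

```python
import numpy as np

def calibration_error(confidences, correct, n_bins=15, adaptive=False):
    """ECE-style calibration error with two binning schemes.

    adaptive=False: equal-width bins over [0, 1] (ECE).
    adaptive=True:  quantile bins holding ~equal numbers of predictions (ACE).
    `correct` holds 0/1 indicators of whether each prediction was right.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    if adaptive:
        # Quantile edges; np.unique drops duplicated edges caused by ties.
        edges = np.unique(np.quantile(confidences, np.linspace(0, 1, n_bins + 1)))
    else:
        edges = np.linspace(0.0, 1.0, n_bins + 1)
    err, n = 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi == edges[-1]:  # last bin is closed on the right
            mask = (confidences >= lo) & (confidences <= hi)
        else:
            mask = (confidences >= lo) & (confidences < hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            err += (mask.sum() / n) * gap
    return err
```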
+
+We present the results of Mixup, CAMixup, AugMix, AugMixup and AugCAMixup on deep ensembles in Table 6. We notice that the improvement of CAMixup on deep ensembles is smaller than its improvement on BatchEnsemble. We postulate that this is because Mixup + deep ensembles is much worse calibrated than Mixup + BatchEnsemble. For example, AugMixup + deep ensembles achieves $2.71\%$ and $6.86\%$ ECE on CIFAR-10 and CIFAR-100, while AugMixup + BatchEnsemble achieves $1.71\%$ and $4.19\%$. Thus, even though CAMixup improves the calibration of Mixup + deep ensembles, it still cannot beat AugMix + deep ensembles. As a result, when we say we close the calibration gap between BatchEnsemble and deep ensembles, we are comparing AugCAMixup BatchEnsemble (BatchEnsemble + CAMixup + AugMix) to AugMix deep ensembles, because AugMix deep ensembles achieve the best calibration among all variants we tried. Completely fixing the under-confidence in deep ensembles is a natural extension of this work; since we focus on bridging the calibration gap between BatchEnsemble and deep ensembles, we leave a complete fix for deep ensembles to future work.
+
+# F METRICS OTHER THAN ECE
+
+ECE is the standard metric for calibration, but it is a biased estimate of true calibration (Vaicenavicius et al., 2019), and relying heavily on it might lead to inconsistent conclusions. In this section, we compute calibration error with recently proposed estimators that reduce the bias of ECE: the debiased calibration estimator (DCE; Kumar et al., 2019) and SKCE (Widmann et al., 2019). Fig. 12 shows that our conclusions in the main text are also supported by these two estimators. In particular, the improvement of the proposed CAMixup over
+
+(a) Reliability on testset. (b) Reliability on corrupt level 3. (c) Reliability on corrupt level 3. (d) Reliability on corrupt level 5.
+
+Figure 11: WideResNet-28-10 deep ensembles with Mixup on CIFAR-10. We plot the reliability diagrams of ensemble and individual predictions. Besides ECE, we also plot other calibration metrics such as ACE, SCE and TACE, proposed in Nixon et al. (2019). All metrics verify the conclusion that Mixup + Ensembles leads to under-confidence on the test set.
+
+| Method | BatchEnsemble Acc(↑) | BatchEnsemble SKCE(↓) | BatchEnsemble cA/cSKCE | Deep-Ensembles Acc(↑) | Deep-Ensembles SKCE(↓) | Deep-Ensembles cA/cSKCE |
+| --- | --- | --- | --- | --- | --- | --- |
+| Vanilla | 96.22 | 3.4e-4 | 77.5/0.026 | 96.66 | 3.4e-5 | 54.1/0.018 |
+| Mixup | 96.98 | 4e-3 | 80.0/0.024 | 97.11 | 4.4e-3 | 59.3/0.0068 |
+| CAMixup | 96.94 | 1.3e-4 | 81.1/0.019 | 96.95 | 4.3e-4 | 59.7/0.0032 |
+
+Table 7: Results for Wide ResNet-28-10 BatchEnsemble (Wen et al., 2020) and deep ensembles on CIFAR-10 and CIFAR-10-C, averaged over 3 seeds. This table supplements Fig. 12.
+
+Mixup on the test set is even larger than what ECE reflects in Fig. 5. Table 7 reports the specific numbers used in Fig. 12.
+
+(a) CIFAR10-SKCE $(\%)$ (b) CIFAR10-DCE $(\%)$ (c) CIFAR10-C-SKCE $(\%)$ (d) CIFAR10-C-DCE $(\%)$
+
+Figure 12: WideResNet 28-10 on CIFAR-10 and CIFAR-10-C, averaged over 3 random seeds. SKCE: squared kernel calibration error computed as in Widmann et al. (2019). DCE: debiased calibration error of Kumar et al. (2019). Red: ensembles without Mixup; Blue: ensembles with Mixup; Green: ensembles with CAMixup (ours). Both SKCE and DCE yield calibration-error rankings consistent with those in Fig. 5 and Fig. 6. This plot shows that our proposed CAMixup is effective in reducing Mixup calibration error when combined with ensembles.
+
+# G CAMIXUP WITH TEMPERATURE SCALING
+
+See Fig. 13.
+
+# H LIMITATIONS AND FUTURE WORK
+
+We describe limitations of our work, signalling areas for future research. One limitation of CAMixup is that all examples in the same class still share the same Mixup coefficient. This leaves room for more fine-grained adaptive Mixup mechanisms, such as adapting the Mixup coefficient per example, and relates to an open research question: how do we measure the training difficulty of a data point for a deep network (Toneva et al., 2018; Agarwal and Hooker, 2020)? Another limitation, shown in Appendix E, is that CAMixup still cannot fully fix the miscalibration of Mixup + deep ensembles, because Mixup + deep ensembles is even worse calibrated than Mixup + BatchEnsemble. This raises a harder question that CAMixup cannot completely solve, and leaves room to further understand why Mixup is worse on deep ensembles and how to address it; we leave these issues to future work. Next, recall that we determine whether to use Mixup based on the reliability (mean accuracy minus mean confidence) of each class on the validation set. One concern is that CAMixup might not scale well to a large number of classes; fortunately, we showed that it works on problems with up to 1,000 classes (ImageNet). Additionally, Mixup has been most successful in the vision domain, hence our focus, with preliminary success on tabular data and natural language processing (Zhang et al., 2018; Guo et al., 2019). Assessing whether CAMixup and ensembling techniques translate to text is an interesting direction.
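The per-class rule mentioned above can be sketched as follows; names are illustrative, and we assume Mixup is applied exactly to the classes whose mean confidence exceeds their mean accuracy on the validation set:

```python
import numpy as np

# Sketch of the class-based CAMixup rule: at the end of each epoch, apply
# Mixup next epoch only to classes that are over-confident on the validation
# set (mean confidence > mean accuracy). Function name is illustrative.

def camixup_class_mask(val_probs, val_labels, n_classes):
    """Return a boolean per-class mask: True -> apply Mixup to that class."""
    preds = val_probs.argmax(axis=1)
    confs = val_probs.max(axis=1)
    mask = np.zeros(n_classes, dtype=bool)
    for c in range(n_classes):
        idx = val_labels == c
        if idx.any():
            acc = (preds[idx] == c).mean()
            conf = confs[idx].mean()
            mask[c] = conf > acc  # over-confident class -> use Mixup
    return mask
```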
+
+We take a first step toward a more fine-grained adaptive Mixup mechanism. Recall that class-based CAMixup calculates the reliability (accuracy minus confidence) of each class at the end of each epoch and then decides whether to apply Mixup to that class (illustrated in Fig. 4). This requires extra computation on the validation set and assigns a uniform Mixup coefficient within each class. By leveraging the recently developed forgetting count (Toneva et al., 2018), we can instead adjust the Mixup coefficient of each example based on its forgetting count. The intuition is that a high forgetting count indicates the model tends to forget that example; to achieve better calibration, we should place low confidence on it. The forgetting-count-based CAMixup algorithm is presented in Algorithm 1. In summary, we first compute the forgetting count of each training example and take the median of these counts as a threshold; CAMixup then applies Mixup to the training examples whose forgetting counts exceed the median.
+
+We provide preliminary results on CIFAR-10 in Fig. 14, which show that forgetting-count-based CAMixup outperforms class-based CAMixup on most metrics across BatchEnsemble and MC-dropout; the one exception is test calibration with MC-dropout, where it underperforms. We could not observe the same improvement on CIFAR-100. We postulate that forgetting counts are less reliable on CIFAR-100 than on CIFAR-10, leading to the inconsistent results. We leave the question of how to improve
+
+# Algorithm 1 Forgetting Count Based CAMixup
+
+```
+initialize prev_acc[i] = 0 for all i ∈ D
+initialize forgetting counts T[i] = 0 for all i ∈ D
+initialize MixupCoeff[i] = 0 for all i ∈ D
+while training do
+    B ~ D                                  # sample a minibatch
+    apply Mixup on B based on MixupCoeff
+    for example i ∈ B do
+        compute acc[i]                     # 0/1 correctness of example i
+        if prev_acc[i] > acc[i] then       # a forgetting event
+            T[i] = T[i] + 1
+        end if
+        prev_acc[i] = acc[i]               # always track the latest accuracy
+    end for
+    gradient update classifier on B
+    rank = sort(T)
+    threshold = rank[|D| // 2]             # median forgetting count
+    for example i ∈ B do
+        if T[i] > threshold then
+            MixupCoeff[i] = a
+        else
+            MixupCoeff[i] = 0
+        end if
+    end for
+end while
+```
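The bookkeeping in Algorithm 1 can be sketched in Python as follows; the class name is illustrative, the gradient step is elided, and `a` is the Mixup coefficient placeholder from the algorithm:

```python
class ForgettingCAMixup:
    """Bookkeeping for forgetting-count-based CAMixup (Algorithm 1).

    `a` is the Mixup coefficient applied to frequently forgotten examples.
    """

    def __init__(self, n_examples, a=1.0):
        self.prev_acc = [0] * n_examples   # prev_acc[i]
        self.forget = [0] * n_examples     # T[i]
        self.coeff = [0.0] * n_examples    # MixupCoeff[i]
        self.a = a

    def update(self, batch_idx, batch_acc):
        """batch_acc[k] is the 0/1 correctness of example batch_idx[k]."""
        for i, acc in zip(batch_idx, batch_acc):
            if self.prev_acc[i] > acc:     # a forgetting event
                self.forget[i] += 1
            self.prev_acc[i] = acc
        # Median forgetting count as the threshold (rank[|D| // 2]).
        threshold = sorted(self.forget)[len(self.forget) // 2]
        for i in batch_idx:
            self.coeff[i] = self.a if self.forget[i] > threshold else 0.0
```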
+
+forgetting-count-based CAMixup on CIFAR-100 to future work.
+
+(a) CIFAR-10 (b) CIFAR-100
+
+Figure 13: Combining CAMixup and temperature scaling further improves test ECE. However, it does not make further improvements on out-of-distribution calibration.
+
+(a) CIFAR10-Error $(\%)$ (b) CIFAR10-C-Error $(\%)$ (c) CIFAR10-ECE $(\%)$ (d) CIFAR10-C-ECE $(\%)$
+
+Figure 14: WideResNet 28-10 on CIFAR-10 and CIFAR-10-C. Green: class-based CAMixup. Purple: forgetting-count-based CAMixup. Forgetting-count-based CAMixup outperforms class-based CAMixup on most metrics across BatchEnsemble and MC-dropout.
\ No newline at end of file
diff --git a/combiningensemblesanddataaugmentationcanharmyourcalibration/images.zip b/combiningensemblesanddataaugmentationcanharmyourcalibration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ba93909871d245d88f23a97a4b917ec3ff47e503
--- /dev/null
+++ b/combiningensemblesanddataaugmentationcanharmyourcalibration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b51969ff77c4aef295b2e2f9714fa327988313b0ac75a2fdddc5789959c7a7ff
+size 1003809
diff --git a/combiningensemblesanddataaugmentationcanharmyourcalibration/layout.json b/combiningensemblesanddataaugmentationcanharmyourcalibration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..df7a5b36a447565bdb47fcb4bd170b8955ba2f5e
--- /dev/null
+++ b/combiningensemblesanddataaugmentationcanharmyourcalibration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59c12032385594a2c5d86451bfde593a150fc4ad523b94828b20366cb8218c82
+size 667109
diff --git a/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/a212774a-96b0-484a-aae2-a924be514e9f_content_list.json b/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/a212774a-96b0-484a-aae2-a924be514e9f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b6819f557e9acdbcfd2051082fceb7841e42d523
--- /dev/null
+++ b/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/a212774a-96b0-484a-aae2-a924be514e9f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8393a925c6540db00fcf81046025c09a46d2338cb6330322615e5e9a7b3799f
+size 91236
diff --git a/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/a212774a-96b0-484a-aae2-a924be514e9f_model.json b/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/a212774a-96b0-484a-aae2-a924be514e9f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b775d00ad171f446b7c6d580c399f99166982ff5
--- /dev/null
+++ b/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/a212774a-96b0-484a-aae2-a924be514e9f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6f95256623f1a5afdde09cb4e2bc7fbae7974fad4a71a91858c19fc6a226761
+size 113735
diff --git a/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/a212774a-96b0-484a-aae2-a924be514e9f_origin.pdf b/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/a212774a-96b0-484a-aae2-a924be514e9f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e63f31c6e25e5e0f1dad87c3d1241693cd47fe3a
--- /dev/null
+++ b/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/a212774a-96b0-484a-aae2-a924be514e9f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f9c69b8a3aa5123e69ed48e7f1084ef7fef4536b8f5d8bfc0a000b6f9a7d632
+size 3668054
diff --git a/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/full.md b/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e377305278592bfd54b72ad44db7b13a11fbc253
--- /dev/null
+++ b/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/full.md
@@ -0,0 +1,304 @@
+# COMBINING LABEL PROPAGATION AND SIMPLE MODELS OUT-PERFORMS GRAPH NEURAL NETWORKS
+
+Qian Huang\*, Horace He\*, Abhay Singh\*, Ser-Nam Lim\*, Austin R. Benson
+
+Cornell University†, Facebook†, Facebook AI§
+
+# ABSTRACT
+
+Graph Neural Networks (GNNs) are a predominant technique for learning over graphs. However, there is relatively little understanding of why GNNs are successful in practice and whether they are necessary for good performance. Here, we show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs by combining shallow models that ignore the graph structure with two simple post-processing steps that exploit correlation in the label structure: (i) an "error correlation" that spreads residual errors in training data to correct errors in test data and (ii) a "prediction correlation" that smoothes the predictions on the test data. We call this overall procedure Correct and Smooth (C&S), and the post-processing steps are implemented via simple modifications to standard label propagation techniques that have long been used in graph-based semi-supervised learning. Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks, with just a small fraction of the parameters and orders of magnitude faster runtime. For instance, we exceed the best-known GNN performance on the OGB-Products dataset with 137 times fewer parameters and greater than 100 times less training time. The performance of our methods highlights how directly incorporating label information into the learning algorithm (as is common in traditional methods) yields easy and substantial performance gains. We can also incorporate our techniques into big GNN models, providing modest gains in some cases.
+
+# 1 INTRODUCTION
+
+Following the success of neural networks in computer vision and natural language processing, there are now a wide range of graph neural networks (GNNs) for making predictions involving relational data (Battaglia et al., 2018; Wu et al., 2020). These models have had much success and sit atop leaderboards such as the Open Graph Benchmark (Hu et al., 2020). Often, the methodological developments for GNNs revolve around creating strictly more expressive architectures than basic variants such as the Graph Convolutional Network (GCN) (Kipf & Welling, 2017) or GraphSAGE (Hamilton et al., 2017a); examples include Graph Attention Networks (Veličković et al., 2018), Graph Isomorphism Networks (Xu et al., 2018), and various deep models (Li et al., 2019; Rong et al., 2019; Chen et al., 2020). Many ideas for new GNN architectures are adapted from new architectures in models for language (e.g., attention) or vision (e.g., deep CNNs) with the hopes that success will translate to graphs. However, as these models become more complex, understanding their performance gains is a major challenge, and scaling them to large datasets is difficult.
+
+Here, we see how far we can get by combining much simpler models, with an emphasis on understanding where there are easy opportunities for performance improvements in graph learning, particularly transductive node classification. We propose a simple pipeline with three main parts (Figure 1): (i) a base prediction made with node features that ignores the graph structure (e.g., a shallow multi-layer perceptron or just a linear model); (ii) a correction step, which propagates uncertainties from the training data across the graph to correct the base prediction; and (iii) a smoothing of the predictions over the graph. Steps (ii) and (iii) are post-processing and implemented with classical methods for graph-based semi-supervised learning, namely, label propagation techniques
+
+
+Figure 1: Illustration of our GNN-free model, Correct and Smooth (C&S), with a toy example. Nodes in the left and right clusters have different labels, marked by color (orange or blue). We use a multilayer perceptron (MLP) for base predictions, ignoring the graph structure. We assume this gives the same prediction on all nodes in this example (which could happen if, e.g., all nodes had the same features). After, base predictions are corrected by propagating errors from the training data. Finally, corrected predictions are smoothed with label propagation.
+
+(Zhu, 2005).1 With a few modifications and new deployment of these classic ideas, we achieve state-of-the-art performance on several node classification tasks, outperforming big GNN models. In our framework, the graph structure is not used to learn parameters (which is done in step (i)) but instead as a post-processing mechanism. This simplicity leads to models with orders of magnitude fewer parameters that take orders of magnitude less time to train and can easily scale to large graphs. We can also combine our ideas with state-of-the-art GNNs, although the performance gains are modest.
+
+A major source of our performance improvements is directly using labels for predictions. This idea is not new — early diffusion-based semi-supervised learning algorithms on graphs such as the spectral graph transducer (Joachims, 2003), Gaussian random field models (Zhu et al., 2003), and label spreading (Zhou et al., 2004) all use this idea. However, the motivation for these methods was semi-supervised learning on point cloud data, so the “node features” were used to construct the graph itself. Since then, these techniques have been used for learning on relational data consisting of a graph and some labels but no node features (Koutra et al., 2011; Gleich & Mahoney, 2015; Peel, 2017; Chin et al., 2019); however, they have largely been ignored in the context of GNNs. (That being said, we still find that even simple label propagation, which ignores features, does surprisingly well on a number of benchmarks.) This provides motivation for combining two orthogonal sources of prediction power — one coming from the node features (ignoring graph structure) and one coming from using the known labels directly in predictions.
+
+Recent research connects GNNs to label propagation (Wang & Leskovec, 2020; Jia & Benson, 2020; 2021) as well as Markov Random fields (Qu et al., 2019; Gao et al., 2019), and some techniques use ad hoc incorporation of label information in the features (Shi et al., 2020). However, these approaches are usually still expensive to train, while we use label propagation in two understandable and low-cost ways. We start with a cheap "base prediction" from a model that uses only node features and ignores the graph structure. After, we use label propagation for error correction and then to smooth final predictions. These post-processing steps are based on the fact that errors and labels on connected nodes tend to be positively correlated. Assuming similarity between connected nodes is at the center of much network analysis and corresponds to homophily or assortative mixing (McPherson et al., 2001; Newman, 2003; Easley & Kleinberg, 2010). In the semi-supervised learning literature, the analog is the smoothness or cluster assumption (Chapelle et al., 2003; Zhu, 2005). The good performance of label propagation that we see across a wide variety of datasets suggests that these correlations hold on common benchmarks.
+
+Overall, our methodology demonstrates that combining several simple ideas yields excellent performance in transductive node classification at a fraction of the cost, in terms of both model size (i.e., number of parameters) and training time. For example, on the OGB-Products benchmark, we out-perform the current best-known GNN with more than two orders of magnitude fewer parameters and more than two orders of magnitude less training time. However, our goal is not to say that current graph learning methods are poor or inappropriate. Instead, we aim to highlight easier ways in which to improve prediction performance in graph learning and to better understand the source of performance gains. Our main finding is that more direct incorporation of labels into the learning algorithms is key. We hope that our approach spurs new ideas that can help in other graph learning tasks, such as inductive node classification, link prediction, and graph prediction.
+
+# 1.1 ADDITIONAL RELATED WORK
+
+The Approximate Personalized Propagation of Neural Predictions (APPNP) framework is most relevant to our work, as they also smooth base predictions (Klicpera et al., 2018). However, they focus on integrating this smoothing into the training process so that their model can be trained end to end. Not only is this significantly more computationally expensive, it also prevents APPNP from incorporating label information at inference. Compared to APPNP, our framework produces more accurate predictions, is faster to train, and more easily scales to large datasets. That being said, APPNP can also be used without end-to-end training, which can make it faster but less accurate. Our framework also complements the Simplified Graph Convolution (Wu et al., 2019) and other algorithms designed to increase scalability (Bojchevski et al., 2020; Zeng et al., 2019; Frasca et al., 2020). The primary focus of our approach, however, is using labels directly, and scalability is a byproduct. There is also prior work connecting GCNs and label propagation. Wang & Leskovec (2020) use label propagation as a pre-processing step to weight edges for GNNs, whereas we use label propagation as a post-processing step and avoid GNNs. Jia & Benson (2020; 2021) use label propagation with GNNs for regression tasks, and our error correction step adapts some of their ideas for the case of classification. Finally, there are several recent approaches that incorporate nonlinearity into label propagation methods to compete with GNNs and achieve scalability (Eliav & Cohen, 2018; Ibrahim & Gleich, 2019; Tudisco et al., 2021), but these methods focus on settings of low label rates and do not use feature-based learning.
+
+# 2 CORRECT AND SMOOTH (C&S) MODEL
+
+We start with some notation. We assume that we have an undirected graph $G = (V, E)$ , where there are $n = |V|$ nodes with features on each node represented by a matrix $X \in \mathbb{R}^{n \times p}$ . Let $A$ be the adjacency matrix of the graph, $D$ be the diagonal degree matrix, and $S$ be the normalized adjacency matrix $D^{-1/2}AD^{-1/2}$ . For the prediction problem, the node set $V$ is split into a disjoint set of unlabeled nodes $U$ and labeled nodes $L$ , which are subsets of the indices $\{1, \ldots, n\}$ . We will further split the labeled nodes into a training set $L_{t}$ and validation set $L_{v}$ . We represent the labels by a one-hot-encoding matrix $Y \in \mathbb{R}^{n \times c}$ , where $c$ is the number of classes (i.e., $Y_{ij} = 1$ if $i \in L$ is known to be in class $j$ , and 0 otherwise, where the $i$ th row of $Y$ is all zero if $i \in U$ ). Our problem is transductive node classification: assign each node $j \in U$ a label in $\{1, \ldots, c\}$ , given $G, X,$ and $Y$ .
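As a quick illustration of the notation above, the normalized adjacency matrix $S = D^{-1/2}AD^{-1/2}$ can be computed directly; this sketch assumes every node has at least one edge so that $D$ is invertible:

```python
import numpy as np

def normalized_adjacency(A):
    """Compute S = D^{-1/2} A D^{-1/2} for an undirected graph.

    Assumes every node has degree >= 1 so D is invertible.
    """
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Triangle graph: every node has degree 2, so S = A / 2.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
S = normalized_adjacency(A)
```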
+
+Our approach starts with a simple base predictor on node features that does not rely on any learning over the graph. After, we perform two types of label propagation (LP): one that corrects the base predictions by modeling correlated error and one that smooths the final prediction. We call the combination of these two methods Correct and Smooth (C&S; Figure 1). The LPs are only post-processing steps, and our pipeline is not trained end-to-end. Furthermore, the graph is only used in the post-processing steps (and in a pre-processing step to augment the features $X$ ), but not for the base predictions. This makes training fast and scalable compared to standard GNN models. Moreover, we take advantage of both LP (which performs fairly well on its own without features) and the node features. We find that combining these complementary signals yields excellent predictions.
+
+# 2.1 SIMPLE BASE PREDICTOR
+
+To start, we use a simple base predictor that does not rely on the graph structure. More specifically, we train a model $f$ to minimize $\sum_{i\in L_t}\ell (f(x_i),y_i)$ , where $x_{i}$ is the $i$ th row of $X$ , $y_{i}$ is the $i$ th row
+
+of $Y$ , and $\ell$ is a loss function. For this paper, $f$ is either a linear model or a shallow multi-layer perceptron (MLP) followed by a softmax, and $\ell$ is the cross-entropy loss. The validation set $L_{v}$ is used to tune hyperparameters such as learning rates and the hidden layer dimensions for the MLP. From $f$ , we get a base prediction $Z \in \mathbb{R}^{n \times c}$ , where each row of $Z$ is a probability distribution resulting from the softmax. Omitting the graph structure for these base predictions avoids most of the scalability issues with GNNs. In principle, though, we can use any base predictor for $Z$ , including those based on GNNs, and we explore this in Section 3. However, for our pipeline to be simple and scalable, we just use linear classifiers or MLPs with subsequent post-processing, which we describe next.
+
+# 2.2 CORRECTING BASE PREDICTIONS WITH ERROR CORRELATION
+
+Next, we improve the accuracy of the base prediction $Z$ by incorporating labels to correlate errors. The key idea is that we expect errors in the base prediction to be positively correlated along edges in the graph. In other words, an error at node $i$ increases the chance of a similar error at neighbors of $i$ . Thus, we should "spread" such uncertainty over the graph. Our approach here is inspired in part by residual propagation (Jia & Benson, 2020), where a similar concept is used for node regression tasks, as well as generalized least squares and correlated error models more broadly (Shalizi, 2013). To this end, we first define an error matrix $E \in \mathbb{R}^{n \times c}$ , where error is the residual on the training data and zero elsewhere:
+
+$$
E_{L_t,:} = Y_{L_t,:} - Z_{L_t,:}, \quad E_{L_v,:} = 0, \quad E_{U,:} = 0. \tag{1}
+$$
+
+The residuals in rows of $E$ corresponding to training nodes are zero only when the base predictor makes a perfect prediction. We smooth the error using the label spreading technique of Zhou et al. (2004), optimizing the objective
+
+$$
\hat{E} = \underset{W \in \mathbb{R}^{n \times c}}{\arg\min}\; \operatorname{trace}\left(W^T (I - S) W\right) + \mu \|W - E\|_F^2. \tag{2}
+$$
+
+The first term encourages smoothness of the error estimation over the graph, and is equal to $\sum_{k=1}^{c}\sum_{(i,j)\in E}(W_{ik}/\sqrt{D_{ii}} - W_{jk}/\sqrt{D_{jj}})^2$ . The second term keeps the solution close to the initial guess $E$ of the error. As derived in Zhou et al. (2004), the solution can be obtained via the iteration $E^{(t+1)} = (1 - \alpha)E + \alpha SE^{(t)}$ , where $\alpha = 1/(1 + \mu)$ and $E^{(0)} = E$ , which converges rapidly to $\hat{E}$ . This iteration is a propagation (or diffusion or spreading) of the error, and we add the smoothed errors to the base prediction to get corrected predictions $Z^{(r)} = Z + \hat{E}$ . We emphasize that this is a post-processing technique and there is no coupled training with the base predictions.
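The propagation above can be sketched in a few lines, assuming a dense NumPy adjacency matrix for clarity (the function names are ours; a sparse implementation is a direct translation):

```python
import numpy as np

def normalized_adjacency(A):
    """S = D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A."""
    d = A.sum(axis=1)
    dinv = 1.0 / np.sqrt(np.maximum(d, 1e-12))  # guard against isolated nodes
    return A * dinv[:, None] * dinv[None, :]

def spread(E0, S, alpha, iters=50):
    """Iterate E <- (1 - alpha) E0 + alpha S E, which converges to the
    minimizer of Equation (2) with alpha = 1/(1 + mu)."""
    E = E0.copy()
    for _ in range(iters):
        E = (1 - alpha) * E0 + alpha * (S @ E)
    return E
```

The corrected prediction (before any rescaling) is then `Z + spread(E, S, alpha)`.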
+
+This type of propagation is motivated by a particular correlated Gaussian error assumption for regression problems (Jia & Benson, 2020; 2021). For the classification problems we consider, we find that the smoothed errors $\hat{E}$ might not be at the right scale. We know that
+
+$$
\left\| E^{(t+1)} \right\|_2 \leq (1 - \alpha) \|E\|_2 + \alpha \|S\|_2 \|E^{(t)}\|_2 = (1 - \alpha) \|E\|_2 + \alpha \|E^{(t)}\|_2. \tag{3}
+$$
+
+When $E^{(0)} = E$ , we then have that $\| E^{(t)}\|_2 \leq \| E\|_2$ . Thus, the propagation cannot completely correct the errors on all nodes in the graph, as it does not have enough "total mass," and we find that adjusting the scale of the residual can help substantially in practice. To do this, we propose two variations of scaling the residual.
+
+Autoscale. Intuitively, we want to scale the size of errors in $\hat{E}$ to be approximately the size of the errors in $E$ . We only know the true errors at labeled nodes, so we approximate the scale with the average error over the training nodes. Formally, let $e_j^T \in \mathbb{R}^c$ and $\hat{e}_j^T$ correspond to the $j$ th rows of $E$ and $\hat{E}$ and define $\sigma = \frac{1}{|L_t|} \sum_{j \in L_t} \| e_j \|_1$ . Then we define the corrected predictions on an unlabeled node $i \in U$ to be $Z_{i,:}^{(r)} = Z_{i,:} + \sigma / \| \hat{e}_i \|_1 \cdot \hat{e}_i^T$ .
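A minimal sketch of the Autoscale correction (function name ours; `test_idx` stands for the unlabeled nodes $U$):

```python
import numpy as np

def autoscale_correct(Z, E, E_hat, train_idx, test_idx):
    """Scale the smoothed error rows on unlabeled nodes so their L1 size
    matches the average L1 error sigma observed on training nodes."""
    sigma = np.abs(E[train_idx]).sum(axis=1).mean()
    Zr = Z.copy()
    norms = np.abs(E_hat[test_idx]).sum(axis=1, keepdims=True)
    Zr[test_idx] = Z[test_idx] + sigma * E_hat[test_idx] / np.maximum(norms, 1e-12)
    return Zr
```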
+
+Scaled Fixed Diffusion (FDiff-scale). Alternatively, we can use a diffusion like the one from Zhu et al. (2003), which keeps the known errors at training nodes fixed. More specifically, we iterate $E_{U,:}^{(t + 1)} = [D^{-1}AE^{(t)}]_{U,:}$ and keep fixed $E_{L,:}^{(t)} = E_{L,:}$ until convergence to $\hat{E}$ , starting with $E^{(0)} = E$ . Intuitively, this fixes error values where we know the error (on the labeled nodes $L$ ), while other nodes keep averaging over the values of their neighbors until convergence. With this type of propagation, the maximum and minimum values of entries in $E^{(t)}$ do not go beyond those in $E_L$ . We still find it effective to select a scaling hyperparameter $s$ to produce $Z^{(r)} = Z + s\hat{E}$ .
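The fixed diffusion admits a similarly short sketch (dense adjacency, function name ours):

```python
import numpy as np

def fdiff_scale(Z, E, A, labeled_idx, s=1.0, iters=50):
    """Zhu et al. (2003)-style diffusion: keep errors fixed on labeled
    nodes, replace each other node's error by the average over its
    neighbors until convergence, then add the scaled result to Z."""
    n = A.shape[0]
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # D^{-1} A
    labeled = np.zeros(n, dtype=bool)
    labeled[labeled_idx] = True
    Eh = E.copy()
    for _ in range(iters):
        Enew = P @ Eh
        Enew[labeled] = E[labeled]  # clamp known errors
        Eh = Enew
    return Z + s * Eh
```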
+
+# 2.3 SMOOTHING FINAL PREDICTIONS WITH PREDICTION CORRELATION
+
+At this point, we have a score vector $Z^{(r)}$ , obtained from correcting the base predictor $Z$ with a model for the correlated error $\hat{E}$ . To make a final prediction, we further smooth the corrected predictions. The motivation is that adjacent nodes in the graph are likely to have similar labels, which is expected given network homophily or assortative properties of a network. Thus, we can encourage smoothness over the distribution over labels by another label propagation. First, we start with our best guess $H \in \mathbb{R}^{n \times c}$ of the labels:
+
+$$
H_{L_t,:} = Y_{L_t,:}, \quad H_{L_v \cup U,:} = Z^{(r)}_{L_v \cup U,:}. \tag{4}
+$$
+
+Here, the true labels are used at the training nodes and the corrected predictions are used for the validation and unlabeled nodes, the latter of which no longer correspond to probability distributions. We can (and should) also use the true labels at the validation nodes, which we discuss later in the experiments, but the setup in Equation (4) aligns more closely with standard GNN evaluation. We then iterate $H^{(t + 1)} = (1 - \alpha)H + \alpha SH^{(t)}$ with $H^{(0)} = H$ until convergence to give the final prediction $\hat{Y}$ . The classification for a node $i\in U$ is $\arg \max_{j\in \{1,\dots,c\}}\hat{Y}_{ij}$ .
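The smoothing step can be sketched as follows (function name ours; `S` is the normalized adjacency matrix and `Zr` the corrected scores), seeding with labels on training nodes and corrected scores elsewhere:

```python
import numpy as np

def smooth_predictions(Zr, Y, S, train_idx, alpha, iters=50):
    """Seed H with true labels on training nodes and corrected scores
    elsewhere, run the Zhou et al. (2004) iteration, and return the
    argmax class for every node."""
    H0 = Zr.copy()
    H0[train_idx] = Y[train_idx]      # clamp known labels into the seed
    H = H0.copy()
    for _ in range(iters):
        H = (1 - alpha) * H0 + alpha * (S @ H)
    return H.argmax(axis=1)           # final class per node
```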
+
+As with error correlation, the smoothing here is a post-processing step, decoupled from the other steps. This type of prediction smoothing is similar in spirit to APPNP (Klicpera et al., 2018), which we compare against later. However, APPNP is typically trained end-to-end, propagates final-layer representations instead of softmaxes, does not use labels, and is motivated differently.
+
+# 2.4 SUMMARY AND ADDITIONAL CONSIDERATIONS
+
+To summarize, we start with a cheap base prediction $Z$ , using only node features but not the graph structure. After, we estimate errors $\hat{E}$ by propagating errors on the training data. Then, we add these errors back to the base predictions, forming corrected predictions. Finally, we treat the corrected predictions as score vectors on unlabeled nodes, and combine them with the known labels via another LP step for smoothed final predictions. We call this pipeline Correct and Smooth (C&S).
+
+Before showing that this pipeline achieves state-of-the-art performance on transductive node classification, we briefly describe another simple way of improving performance: feature augmentation. The hallmark of deep learning is that we can learn features instead of engineering them. However, GNNs still rely on informative input features to make predictions. There are numerous ways to get useful features from just the graph topology to augment the raw node features (Henderson et al., 2011; 2012; Hamilton et al., 2017b). In our pipeline, we augment features with a regularized spectral embedding (Chaudhuri et al., 2012; Zhang & Rohe, 2018) coming from the leading $k$ eigenvectors of the matrix $D_{\tau}^{-1/2}(A + \frac{\tau}{n}\mathbf{11}^T)D_{\tau}^{-1/2}$ , where $\mathbf{1}$ is a vector of all ones, $\tau$ is a regularization parameter set to the average degree, and $D_{\tau}$ is diagonal with $i$ th diagonal entry equal to $D_{ii} + \tau$ . The underlying matrix is dense, but we can apply matrix-vector products in time linear in the number of edges and use iterative eigensolvers to compute the embeddings quickly.
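The matrix-free computation described above can be sketched with SciPy's iterative eigensolver (function name ours; this is an illustration of the approach, not the exact implementation used):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def regularized_spectral_embedding(A, k):
    """Leading k eigenvectors of D_tau^{-1/2} (A + (tau/n) 11^T) D_tau^{-1/2},
    computed matrix-free: the dense rank-one term (tau/n) 11^T is applied as
    a scalar sum, so each matvec costs O(#edges + n)."""
    n = A.shape[0]
    deg = np.asarray(A.sum(axis=1)).ravel()
    tau = deg.mean()                          # regularizer: average degree
    dinv = 1.0 / np.sqrt(deg + tau)           # diagonal of D_tau^{-1/2}
    def matvec(x):
        x = np.asarray(x).ravel()
        xp = dinv * x
        y = A @ xp + (tau / n) * xp.sum()     # sparse part + rank-one part
        return dinv * y
    op = LinearOperator((n, n), matvec=matvec, dtype=float)
    _, vecs = eigsh(op, k=k, which="LA")      # largest algebraic eigenvalues
    return vecs
```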
+
+# 3 EXPERIMENTS ON TRANSDUCTIVE NODE CLASSIFICATION
+
+To demonstrate the effectiveness of our methods, we use nine datasets (Table 1). The Arxiv and Products datasets are from the Open Graph Benchmark (OGB) (Hu et al., 2020); Cora, Citeseer, and Pubmed are three classic citation network benchmarks (Getoor et al., 2001; Getoor, 2005; Namata et al., 2012); and wikiCS is a web graph (Mernyei & Cangea, 2020). In these datasets, classes are categories of papers, products, or pages, and features are derived from text. We also use a Facebook social network of Rice University, where classes are dorm residences and features are attributes such as gender, major, and class year (Traud et al., 2012), as well as a geographic dataset of US counties where classes are 2016 election outcomes and features are demographic (Jia & Benson, 2020). Finally, we use an email dataset of a European research institute, where classes are department membership and there are no features (Leskovec et al., 2007; Yin et al., 2017).
+
+Data splits. The training/validation/test splits for Arxiv and Products are given by the benchmark, and the splits for wikiCS come from Mernyei & Cangea (2020). For the Rice, US counties, and email data, we use $40\% / 10\% / 50\%$ random splits. For the smaller citation networks, we use $60\% / 20\% / 20\%$
+
+Table 1: Summary statistics of datasets and model performance. For the accuracy of our best C&S model compared to the state-of-the-art GNN method (see text), we report the change in the number of parameters and the accuracy. We also list the training time with time to compute the spectral embedding in parentheses (even if not used in the best model). Our methods require fewer parameters, are typically more accurate, and are fast to train. Also see Tables 2 and 3.
+
+| Datasets | Classes | Nodes | Edges | Parameter Δ | Accuracy Δ | Time (s) |
| Arxiv | 40 | 169,343 | 1,166,243 | -84.90% | +0.26 | 12 (+90) |
| Products | 47 | 2,449,029 | 61,859,140 | -93.47% | +1.74 | 171 (+2959) |
| Cora | 7 | 2,708 | 5,429 | -98.37% | +1.09 | < 1 (+7) |
| Citeseer | 6 | 3,327 | 4,732 | -89.68% | -0.69 | < 1 (+7) |
| Pubmed | 3 | 19,717 | 44,338 | -96.00% | -0.30 | < 1 (+14) |
| Email | 42 | 1,005 | 25,571 | -97.89% | +4.33 | 43 (+17) |
| Rice31 | 10 | 4,087 | 184,828 | -99.02% | +1.39 | 39 (+12) |
| US County | 2 | 3,234 | 12,717 | -74.56% | +1.77 | 39 (+12) |
| wikiCS | 10 | 11,701 | 216,123 | -84.88% | +2.03 | 7 (+11) |
+
+random splits, as in Wang & Leskovec (2020). Standard deviations in prediction accuracy over splits are $< 1\%$ in most experiments, and this variance does not change our qualitative comparisons.
+
+C&S setup and baselines. We use Linear and MLP models as simple base predictors based on node features. When a spectral embedding is included as a node feature, we refer to these models as Linear-SE and MLP-SE. We also evaluate Label Propagation itself (LP; specifically, the Zhou et al. (2004) version), which only uses labels. In all cases, the number of LP iterations is fixed to 50.
+
+For GNN models comparable to our framework in terms of simplicity or style, we use GCN, SGC, and APPNP. For GCNs, we add residual connections from the input to every layer and from every layer to the output, as well as dropout. Thus, our GCN is not the original model of Kipf & Welling (2017) and instead serves as a fairly strong representative of out-of-the-box GNN capabilities. The number of layers and hidden layer dimensions for the GCNs are the same as the MLPs used by our base predictors. The GCN only uses raw node features, and additional results in Appendix C show that including spectral embeddings minimally changes performance. APPNP uses a linear model for base predictions, also with the raw node features.
+
+Finally, we include several "state-of-the-art" (SOTA) baselines. For Arxiv and Products, this is UniMP (Shi et al., 2020) (top of OGB leaderboard, as of October 1, 2020). For Cora, Citeseer and Pubmed, we use the top scores from Chen et al. (2020). For Email and US County, we use GCNII (Chen et al., 2020). For Rice31, we use GCN with spectral embedding as additional features, which is the best GNN-based model that we found. For wikiCS, we use APPNP as reported by Mernyei & Cangea (2020). Hyperparameters are tuned using the validation set.
+
+All of the above models select hyperparameters using the validation set. See Appendix A for additional model architecture details.
+
+# 3.1 FIRST RESULTS ON NODE CLASSIFICATION
+
+In our first set of results, we only use the training labels in our C&S framework, as these are what GNNs typically use to train models. For the results discussed here, this is generous to our baselines. The ability to include validation labels is an advantage of our approach (and LP in general), and this improves performance of our framework even further (Table 1). We discuss this in the next section.
+
+Table 2 reports the results, and we highlight a few important findings. First, within our model, there are substantial gains from the LP post-processing steps (e.g., the MLP-SE base prediction accuracy increases from $63\%$ to $84\%$ on Products). Second, even Linear with C&S outperforms GCNs in many cases, and simple LP is often competitive with GCNs. This is striking given that the main motivation for GCNs was to address the fact that connected nodes may not have similar labels (Kipf & Welling, 2017). Our results suggest that directly incorporating correlation in the graph with simple use of the features is often a better idea. Results in Appendix B show that both label propagation post-processing steps are important for performance. Third, our model variants can out-perform SOTA on Products, Cora, Email, Rice31, and US County (often substantially so). On the other datasets, there is not much difference between the best C&S model and the SOTA.
+
+Table 2: Performance of our C&S framework, using only the training labels as ground truth in final prediction smoothing (Equation (4)). Further improvements can be made by including ground truth validation labels (Table 3). The Email dataset has no raw node features, so some methods are not evaluated. APPNP ran out of memory (OOM) on the Products dataset.
+
+| Method | Arxiv | Products | Cora | Citeseer | Pubmed |
| LP | 68.5 | 74.76 | 86.50 | 70.64 | 83.74 |
| GCN | 71.74 | 75.64 | 85.77 | 73.68 | 88.13 |
| SGC | 69.39 | 68.83 | 86.81 | 72.04 | 84.04 |
| APPNP | 66.38 | OOM | 87.87 | 76.53 | 89.40 |
| SOTA | 73.79 | 82.56 | 88.49 | 77.99 | 90.30 |
| Linear | 52.32 | 47.73 | 73.85 | 70.27 | 87.10 |
| Linear-SE | 70.08 | 50.05 | 74.75 | 70.51 | 87.19 |
| MLP-SE | 71.51 | 63.41 | 74.06 | 68.10 | 86.85 |
| Linear + C&S (Autoscale) | 71.11 | 80.24 | 88.62 | 76.31 | 89.99 |
| Linear-SE + C&S (Autoscale) | 72.07 | 80.25 | 88.73 | 76.75 | 89.93 |
| MLP-SE + C&S (Autoscale) | 72.62 | 78.60 | 87.39 | 76.31 | 89.33 |
| Linear + C&S (Fdiff-scale) | 70.60 | 82.54 | 89.05 | 76.22 | 89.74 |
| Linear-SE + C&S (Fdiff-scale) | 71.57 | 83.01 | 88.66 | 77.06 | 89.51 |
| MLP-SE + C&S (Fdiff-scale) | 72.43 | 84.18 | 87.39 | 76.42 | 89.23 |
| Method | Email | Rice31 | US County | wikiCS | |
| LP | 70.69 | 82.19 | 87.90 | 76.72 | |
| GCN | — | 15.45 | 84.13 | 78.61 | |
| SGC | — | 16.59 | 83.92 | 72.86 | |
| APPNP | — | 11.34 | 84.14 | 69.83 | |
| SOTA | 71.96 | 86.50 | 88.08 | 79.84 | |
| Linear | — | 9.84 | 75.74 | 72.45 | |
| Linear-SE | 66.24 | 70.26 | 84.07 | 74.29 | |
| MLP-SE | 69.13 | 17.16 | 87.70 | 73.07 | |
| Linear + C&S (Autoscale) | — | 75.99 | 85.25 | 79.57 | |
| Linear-SE + C&S (Autoscale) | 72.50 | 86.42 | 86.15 | 79.53 | |
| MLP-SE + C&S (Autoscale) | 74.55 | 85.50 | 89.64 | 78.10 | |
| Linear + C&S (Fdiff-scale) | — | 73.66 | 87.38 | 79.54 | |
| Linear-SE + C&S (Fdiff-scale) | 72.53 | 87.55 | 88.11 | 79.25 | |
| MLP-SE + C&S (Fdiff-scale) | 75.74 | 85.74 | 89.85 | 78.24 | |
+
+To get a sense of how much using ground truth labels directly helps, we also evaluate a version of C&S where we smooth base predictions from a linear model or MLP, using the Zhou et al. (2004) version of label propagation. We call these Linear-SE-smooth and MLP-SE-smooth and find that they often outperform GCNs (see the adjacent table). Again,
+
+| Method | Arxiv | Products |
| Linear-SE-smooth | 71.42 | 78.73 |
| MLP-SE-smooth | 72.48 | 80.34 |
| GCN | 71.74 | 75.64 |
+
+these results suggest that smoothed outputs are important, aligning with recent research (Wu et al., 2019; Bojchevski et al., 2020), and that the original motivations for GCNs might be misleading. However, there are still gaps in performance between these models and those in Table 2 that directly use labels. Next, we see how to improve performance of C&S even further by using more labels.
+
+# 3.2 FURTHER IMPROVEMENTS BY USING MORE LABELS
+
+We improve the C&S performance by using both training and validation labels, instead of just the training labels as in Equation (4). Importantly, we do not use validation labels to update the base prediction model — they are just used to select hyperparameters. Using validation labels boosts performance even further: Table 3 shows accuracies and Table 1 shows gains over SOTA. The ability to incorporate validation labels is a benefit of our approach. On the other hand, GNNs do not have this advantage, as they often rely on early stopping to prevent overfitting, may not always
+
+Table 3: Performance of C&S, using both training and validation labels as ground truth in the final prediction smoothing (cf. Equation (4), Table 2).
+
+| Method | Arxiv | Products | Cora | Citeseer | Pubmed |
| Linear + C&S (Autoscale) | 72.71 | 80.55 | 89.54 | 76.83 | 90.01 |
| Linear-SE + C&S (Autoscale) | 73.78 | 80.56 | 89.77 | 77.11 | 89.98 |
| MLP-SE + C&S (Autoscale) | 74.02 | 79.29 | 88.55 | 76.36 | 89.50 |
| Linear + C&S (Fdiff-scale) | 72.42 | 82.89 | 89.47 | 77.08 | 89.74 |
| Linear-SE + C&S (Fdiff-scale) | 72.93 | 83.27 | 89.53 | 77.29 | 89.57 |
| MLP-SE + C&S (Fdiff-scale) | 73.46 | 84.55 | 88.18 | 76.41 | 89.38 |
| SOTA | 73.65 | 82.56 | 88.49 | 77.99 | 90.30 |
| Methods | Email | Rice31 | US County | wikiCS | |
| Linear + C&S (Autoscale) | — | 76.59 | 85.22 | 81.87 | |
| Linear-SE + C&S (Autoscale) | 73.33 | 87.25 | 86.38 | 81.57 | |
| MLP-SE + C&S (Autoscale) | 73.45 | 86.13 | 89.71 | 80.75 | |
| Linear + C&S (Fdiff-scale) | — | 75.31 | 88.16 | 81.18 | |
| Linear-SE + C&S (Fdiff-scale) | 72.57 | 87.89 | 88.06 | 81.06 | |
| MLP-SE + C&S (Fdiff-scale) | 76.22 | 86.26 | 90.05 | 80.83 | |
| SOTA | 71.96 | 86.50 | 88.08 | 79.84 | |
+
+
+Figure 2: Accuracy and model size on Products.
+
+Table 4: C&S with GNN base predictions.
+
+| Dataset | Model | Performance |
| ogbn-arxiv | GAT | 73.56 |
| GAT + C&S | 73.86 |
| SOTA | 73.79 |
| US County | GCNII (SOTA) | 88.08 |
| GCNII + C&S | 89.59 |
+
+benefit from more data (e.g., under distributional shift), and do not directly use labels. Thus, our comparisons in Table 2 are more generous than needed. With validation labels, our best model out-performs SOTA in seven of nine datasets, often by substantial margins (Table 1).
+
+The evaluation procedures for GNN benchmarks differ from those for LP. For GNNs, a sizable validation set is often used (and needed) for substantial hyperparameter tuning, as well as early stopping. With LP, one can use the entire set of labeled nodes $L$ with cross-validation to select the single hyperparameter $\alpha$ . Given the setup of transductive node classification, there is no reason not to use validation labels at inference if they are helpful (e.g., via LP in our case). The results in Tables 1 and 3 show the true performance of our model and are the proper point of comparison.
+
+Overall, our results highlight two important findings. First, big and expensive-to-train GNN models are not actually necessary to achieve top performance for transductive node classification on many datasets. Second, combining classical label propagation ideas with simple base predictors outperforms graph neural networks on these tasks.
+
+# 3.3 TRAINING TIME AND IMPROVING EXISTING GNNS
+
+Our C&S framework often has significantly fewer parameters compared to GNNs or other SOTA solutions. As an example, we plot parameters vs. performance for the Products dataset in Figure 2. While having fewer parameters is useful, the real gain is in faster training time. Our models are typically orders of magnitude faster to train than models with comparable accuracy because we do not use the graph structure for our base prediction models. As one example, although our MLP-SE + C&S model for the Arxiv dataset has a similar number of parameters compared to the "GCN+linear+labels" method on the OGB leaderboard (Wang, 2020), our model runs 7 times faster per epoch and converges much faster. In addition, compared to the SOTA for the Products dataset, our framework with a linear base predictor has higher accuracy, trains over 100 times faster, and has 137 times fewer parameters.
+
+Figure 3: (a) US County visualization, where the embedding is given by GraphViz and colors correspond to class labels. (b) Panels corresponding to parts of (a) that show at which stage Linear-SE + C&S made a correct prediction. (c) The same panels showing GCN-SE predictions.
+
+We also evaluated our methods on an even larger dataset, the papers100M OGB benchmark (Hu et al., 2020). Here, we obtain $65.33\%$ using C&S with the Linear model as the base predictor, which out-performs the state-of-the-art ($63.29\%$ as of October 1, 2020).
+
+Our pipeline can also be used to improve the performance of GNNs in general. We used C&S with base predictions given by GCNII or GAT. This improves our results on some datasets, such as ogbn-arxiv (Table 4). However, the performance improvements are sometimes only minor, suggesting that big models might be capturing the same signal as our simple C&S framework.
+
+# 3.4 PERFORMANCE VISUALIZATION
+
+To aid in understanding the performance of our C&S framework, we visualize the predictions on the US County dataset (Figure 3). As expected, the residual error correlation tends to correct nodes where neighboring counties provide relevant information. For example, we see that many errors in the base predictions are corrected by the residual correlation (Figure 3b, left and right panels). In these cases, which correspond to parts of Texas and Hawaii, the demographic features of the counties are outliers compared to the rest of the country, leading both the linear model and GCN astray. The error correlation from neighboring counties is able to fix the predictions. We also see that the final prediction correlation can fix errors when nearby nodes are correctly classified, as shown in the center panel of Figure 3b. We observe similar behavior on the Rice31 dataset (Appendix D).
+
+# 4 DISCUSSION
+
+GNN models are becoming more expressive, more parameterized, and more expensive to train. Our results suggest that we should explore other techniques for improving performance, such as label propagation and feature augmentation. In particular, label propagation and its variants are longstanding, powerful ideas. More directly incorporating them into graph learning models has major benefits, and we have shown that these can lead to both better predictions and faster training.
+
+Acknowledgments. This research was supported by Facebook AI, NSF Award DMS-1830274, ARO Award W911NF19-1-0057, ARO MURI, and JP Morgan Chase & Co. We also thank Cornell University Artificial Intelligence for their support, as well as Marc Brockschmidt, Matthias Fey, Stephan Gunnemann, Weihua Hu, and Junteng Jia for insightful discussions.
+
+# REFERENCES
+
+P. Battaglia, Jessica B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, Mateusz Malinowski, Andrea Tacchetti, D. Raposo, Adam Santoro, R. Faulkner, Caglar Gulçehre, H. Song, A. Ballard, J. Gilmer, G. Dahl, Ashish Vaswani, Kelsey R. Allen, C. Nash, V. Langston, Chris Dyer, N. Heess, Daan Wierstra, Pushmeet Kohli, M. Botvinick, Oriol Vinyals, Y. Li, and Razvan Pascanu. Relational inductive biases, deep learning, and graph networks. arXiv:1806.01261, 2018.
+Aleksandar Bojchevski, Johannes Klicpera, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rozemberczki, Michal Lukasik, and Stephan Gunnemann. Scaling graph neural networks with approximate PageRank. In International Conference on Knowledge Discovery and Data Mining, 2020.
+Olivier Chapelle, Jason Weston, and Bernhard Scholkopf. Cluster kernels for semi-supervised learning. In Advances in Neural Information Processing Systems, 2003.
+Kamalika Chaudhuri, Fan Chung, and Alexander Tsiatas. Spectral clustering of graphs with general degrees in the extended planted partition model. In The Conference on Learning Theory, 2012.
+Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In International Conference on Machine Learning, 2020.
+Alex Chin, Yatong Chen, Kristen M. Altenburger, and Johan Ugander. Decoupled smoothing on graphs. In The Web Conference, 2019.
+David Easley and Jon Kleinberg. Networks, Crowds, and Markets. Cambridge University Press, 2010.
+Eliav Buchnik and Edith Cohen. Bootstrapped graph diffusions: Exposing the power of nonlinearity. In International Conference on Measurement and Modeling of Computer Systems, 2018.
+Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
+Fabrizio Frasca, Emanuele Rossi, Davide Eynard, Benjamin Chamberlain, Michael Bronstein, and Federico Monti. SIGN: Scalable inception graph neural networks. In ICML Workshop on Graph Representation Learning and Beyond, 2020.
+Hongchang Gao, Jian Pei, and Heng Huang. Conditional random field enhanced graph convolutional neural networks. In The 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019.
+Lise Getoor. Link-based classification. In Advanced Methods for Knowledge Discovery from Complex Data, 2005.
+Lise Getoor, Nir Friedman, Daphne Koller, and Benjamin Taskar. Learning probabilistic models of relational structure. In International Conference on Machine Learning, 2001.
+David F Gleich and Michael W Mahoney. Using local spectral methods to robustify graph-based learning algorithms. In International Conference on Knowledge Discovery and Data Mining, 2015.
+Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, 2017a.
+William L Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. IEEE Data Engineering Bulletin, 2017b.
+Keith Henderson, Brian Gallagher, Lei Li, Leman Akoglu, Tina Eliassi-Rad, Hanghang Tong, and Christos Faloutsos. It's who you know: graph mining using recursive structural features. In International Conference on Knowledge Discovery and Data Mining, 2011.
+Keith Henderson, Brian Gallagher, Tina Eliassi-Rad, Hanghang Tong, Sugato Basu, Leman Akoglu, Danai Koutra, Christos Faloutsos, and Lei Li. RolX: structural role extraction & mining in large graphs. In International Conference on Knowledge Discovery and Data Mining, 2012.
+
+Weihua Hu, M. Fey, M. Zitnik, Yuxiao Dong, H. Ren, Bowen Liu, Michele Catasta, and J. Leskovec. Open graph benchmark: Datasets for machine learning on graphs. In Advances in Neural Information Processing Systems, 2020.
+Rania Ibrahim and David Gleich. Nonlinear diffusion for community detection and semi-supervised learning. In The Web Conference, 2019.
+Junteng Jia and Austin R. Benson. Residual correlation in graph neural network regression. In ICML Workshop on Graph Representation Learning and Beyond workshop, 2020.
+Junteng Jia and Austin R Benson. A unifying generative model for graph learning algorithms: Label propagation, graph convolutions, and combinations. arXiv:2101.07730, 2021.
+Thorsten Joachims. Transductive learning via spectral graph partitioning. In International Conference on Machine Learning, 2003.
+Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.
+Johannes Klicpera, Aleksandar Bojchevski, and Stephan Gunnemann. Predict then propagate: Graph neural networks meet personalized pagerank. In International Conference on Learning Representations, 2018.
+Danai Koutra, Tai-You Ke, U Kang, Duen Horng Polo Chau, Hsing-Kuo Kenneth Pao, and Christos Faloutsos. Unifying guilt-by-association approaches: Theorems and fast algorithms. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2011.
+Jure Leskovec, Jon Kleinberg, and Christos Faloutsos. Graph evolution: Densification and shrinking diameters. ACM Transactions on Knowledge Discovery from Data, 2007.
+Guohao Li, Matthias Müller, Ali Thabet, and Bernard Ghanem. DeepGCNs: Can GCNs go as deep as CNNs? In IEEE International Conference on Computer Vision, 2019.
+Leland McInnes, John Healy, Nathaniel Saul, and Lukas Großberger. UMAP: Uniform Manifold Approximation and Projection. Journal of Open Source Software, 2018.
+Miller McPherson, Lynn Smith-Lovin, and James M Cook. Birds of a feather: Homophily in social networks. Annual Review of Sociology, 2001.
+Péter Mernyei and Cătălina Cangea. Wiki-CS: A Wikipedia-based benchmark for graph neural networks. In ICML Workshop on Graph Representation Learning and Beyond, 2020.
+Galileo Namata, Ben London, Lise Getoor, and Bert Huang. Query-driven active surveying for collective classification. In International Workshop on Mining and Learning with Graphs, 2012.
+Mark EJ Newman. Mixing patterns in networks. Physical Review E, 2003.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, 2019.
+Leto Peel. Graph-based semi-supervised learning for relational networks. In SIAM International Conference on Data Mining. SIAM, 2017.
+Meng Qu, Yoshua Bengio, and Jian Tang. GMNN: Graph markov neural networks. In International Conference on Machine Learning, 2019.
+Usha Nandini Raghavan, Réka Albert, and Soundar Kumara. Near linear time algorithm to detect community structures in large-scale networks. Physical Review E, 2007.
+Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. DropEdge: Towards deep graph convolutional networks on node classification. In International Conference on Learning Representations, 2019.
+
+Cosma Shalizi. Advanced data analysis from an elementary point of view. Cambridge University Press, 2013.
+Yunsheng Shi, Zhengjie Huang, Shikun Feng, and Yu Sun. Masked label prediction: Unified message passing model for semi-supervised classification. arXiv:2009.03509, 2020.
+Amanda L Traud, Peter J Mucha, and Mason A Porter. Social structure of facebook networks. Physica A: Statistical Mechanics and its Applications, 2012.
+Francesco Tudisco, Austin R Benson, and Konstantin Prokopchik. Nonlinear higher-order label spreading. In The Web Conference, 2021. To appear.
+Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph Attention Networks. In International Conference on Learning Representations, 2018.
+Fei Wang and Changshui Zhang. Label propagation through linear neighborhoods. IEEE Transactions on Knowledge and Data Engineering, 2007.
+Hongwei Wang and Jure Leskovec. Unifying graph convolutional neural networks and label propagation. arXiv:2002.06755, 2020.
+Yangkun Wang. Gcn+linear+labels. https://ogb.stanford.edu/docs/leader_nodeprop/, 2020.
+Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In International Conference on Machine Learning, 2019.
+Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, C. Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2020.
+Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2018.
+Hao Yin, Austin R Benson, Jure Leskovec, and David F Gleich. Local higher-order graph clustering. In International Conference on Knowledge Discovery and Data Mining, 2017.
+Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor Prasanna. Graph-SAINT: Graph sampling based inductive learning method. In International Conference on Learning Representations, 2019.
+Yilin Zhang and Karl Rohe. Understanding regularized spectral clustering via graph conductance. In Advances in Neural Information Processing Systems, 2018.
+Dengyong Zhou, Olivier Bousquet, Thomas N Lal, Jason Weston, and Bernhard Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems, 2004.
+Xiaojin Zhu, Zoubin Ghahramani, and J. Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In International Conference on Machine Learning, 2003.
+Xiaojin Jerry Zhu. Semi-supervised learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences, 2005.
+
+# A MODEL DETAILS
+
+Here we provide more details on the models that we use. In all cases we use the Adam optimizer and tune the learning rate. We follow the models and hyperparameters provided in OGB (Hu et al., 2020) and wikiCS (Mernyei & Cangea, 2020), manually tuning some hyperparameters on the validation data where this could improve performance.
+
+For our MLPs, every linear layer is followed by batch normalization, ReLU activation, and 0.5 dropout. The other parameters depend on the dataset as follows.
+
+- Products and Arxiv: 3 layers and 256 hidden channels with learning rate equal to 0.01.
+- Cora, Citeseer, and Pubmed (Getoor et al., 2001; Getoor, 2005; Namata et al., 2012) and Email (Leskovec et al., 2007; Yin et al., 2017): 3 layers and 64 hidden channels with learning rate equal to 0.01.
+- wikiCS: 3 layers and 256 hidden channels with learning rate equal to 0.005.
+- US County (Jia & Benson, 2020) and Rice31 (Traud et al., 2012): 5 layers and 256 hidden channels with learning rate equal to 0.005.
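For concreteness, the per-layer pattern above (linear layer, batch normalization, ReLU, 0.5 dropout) can be sketched as an inference-time forward pass in plain NumPy. The weights here are random placeholders, dropout is the identity at inference, and batch statistics stand in for batch norm's running statistics; the sizes follow the Arxiv configuration (3 layers, 256 hidden channels).

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Inference-time forward pass: linear -> batch norm -> ReLU for each
    hidden layer (0.5 dropout is the identity at inference), then a final
    linear layer producing class logits."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = h @ W + b
        # Batch statistics stand in for batch norm's running statistics here.
        h = (h - h.mean(axis=0)) / (h.std(axis=0) + 1e-5)
        h = np.maximum(h, 0.0)  # ReLU
    return h @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
dims = [128, 256, 256, 40]  # 3 linear layers with 256 hidden channels
weights = [0.1 * rng.standard_normal((a, b)) for a, b in zip(dims[:-1], dims[1:])]
biases = [np.zeros(b) for b in dims[1:]]
logits = mlp_forward(rng.standard_normal((32, 128)), weights, biases)
print(logits.shape)  # (32, 40)
```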
+
+SOTA models for most datasets are taken from existing benchmarks. We determined SOTA for Email, US County, and Rice31 by evaluating several models discussed in the paper. The best-performing baselines were as follows. For Email, GCNII with 5 layers, 256 hidden channels, and learning rate equal to 0.01. For US County, GCNII with 8 layers, 256 hidden channels, and learning rate equal to 0.03. For Rice31, we reused our GCN architecture and trained it on the spectral embedding, which substantially outperformed the other GNN variants.
+
+All models were implemented with PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019).
+
+# B PERFORMANCE RESULTS WITH ONLY THE CORRECTION STEP
+
+Table 5 shows results with and without smoothing in the final predictions, i.e., just the "C step" vs. C&S. Including final prediction smoothing provides a substantial performance boost in many cases.
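For reference, the smoothing step being ablated has the form of a standard label-propagation iteration, $G^{(t+1)} = \alpha S G^{(t)} + (1-\alpha) G^{(0)}$ with $S = D^{-1/2} A D^{-1/2}$. A minimal NumPy sketch on a toy path graph ($\alpha$ and the iteration count are illustrative, not the experimental configuration):

```python
import numpy as np

def smooth(adj, g0, alpha=0.8, iters=50):
    """Iterate G <- alpha * S @ G + (1 - alpha) * G0, where
    S = D^{-1/2} A D^{-1/2} is the normalized adjacency matrix."""
    d = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    g = g0.copy()
    for _ in range(iters):
        g = alpha * (S @ g) + (1 - alpha) * g0
    return g

# Path graph 0-1-2-3; rows of g0 are per-class scores (e.g., corrected base
# predictions), confident only at the two endpoints.
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
g0 = np.array([[1, 0], [0, 0], [0, 0], [0, 1]], float)
g = smooth(adj, g0)
print(g.argmax(axis=1))  # [0 0 1 1]: labels spread to the interior nodes
```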
+
+# C ANALYSIS OF PERFORMANCE GAINS FROM SPECTRAL EMBEDDINGS
+
+Table 6 shows the effect of including spectral embeddings as node features on the accuracy of the MLP-based and GCN models. In the case of the Arxiv dataset, including the spectral embedding improves the MLP base prediction performance substantially and the C&S performance modestly, but hardly changes the performance of the GCN. For Pubmed, including the spectral embeddings barely changes the performance of any model.
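As a pointer to what the "-SE" variants consume, one standard construction of a $k$-dimensional spectral embedding takes the bottom nontrivial eigenvectors of the normalized Laplacian. A minimal NumPy sketch on a toy graph (the construction and graph are illustrative, not the exact pipeline used for these datasets):

```python
import numpy as np

def spectral_embedding(adj, k):
    """Bottom-k nontrivial eigenvectors of the normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}, usable as extra node features."""
    d = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    return vecs[:, 1:k + 1]      # skip the trivial constant-direction vector

# Two triangles joined by one edge; the first embedding coordinate (the
# Fiedler vector) separates the two communities.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
emb = spectral_embedding(A, 2)
print(emb.shape)  # (6, 2)
```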
+
+# D ADDITIONAL VISUALIZATION
+
+Full visualizations of C&S and GCN-SE performance for the US County dataset are in Figures 4 to 6. Similar visualizations for Rice31 are in Figures 7 to 9, which are generated by projecting the 128-dimensional spectral embedding used in the main text down to two dimensions with UMAP (McInnes et al., 2018).
+
+Table 5: Performance of our C&S framework with and without the final prediction smoothing. In cases where final prediction smoothing is used, only ground truth training labels are used.
+
+| Method | Arxiv | Products | Cora | Citeseer | Pubmed |
+| --- | --- | --- | --- | --- | --- |
+| Linear + C (Autoscale) | 66.89 | 74.63 | 79.56 | 72.56 | 88.56 |
+| Linear + C&S (Autoscale) | 71.11 | 80.24 | 88.62 | 76.31 | 89.99 |
+| Linear-SE + C (Autoscale) | 71.52 | 70.93 | 79.08 | 70.77 | 88.84 |
+| Linear-SE + C&S (Autoscale) | 72.07 | 80.25 | 88.73 | 76.75 | 89.93 |
+| MLP-SE + C (Autoscale) | 71.97 | 69.85 | 74.11 | 71.78 | 87.35 |
+| MLP-SE + C&S (Autoscale) | 72.62 | 78.60 | 87.39 | 76.31 | 89.33 |
+| Linear + C (Fdiff-scale) | 65.62 | 80.97 | 76.48 | 70.48 | 87.52 |
+| Linear + C&S (Fdiff-scale) | 70.60 | 82.54 | 89.05 | 76.22 | 89.74 |
+| Linear-SE + C (Fdiff-scale) | 70.26 | 73.89 | 79.32 | 70.53 | 84.47 |
+| Linear-SE + C&S (Fdiff-scale) | 71.57 | 83.01 | 88.66 | 77.06 | 89.51 |
+| MLP-SE + C (Fdiff-scale) | 71.55 | 72.72 | 74.36 | 71.45 | 86.97 |
+| MLP-SE + C&S (Fdiff-scale) | 72.43 | 84.18 | 87.39 | 76.42 | 89.23 |
+
+| Method | Email | Rice31 | US County | wikiCS |
+| --- | --- | --- | --- | --- |
+| Linear + C (Autoscale) | — | 43.97 | 82.60 | 77.49 |
+| Linear + C&S (Autoscale) | — | 75.99 | 85.25 | 79.57 |
+| Linear-SE + C (Autoscale) | 73.39 | 86.19 | 84.08 | 74.06 |
+| Linear-SE + C&S (Autoscale) | 72.50 | 86.42 | 86.15 | 79.53 |
+| MLP-SE + C (Autoscale) | 71.64 | 84.61 | 88.83 | 78.72 |
+| MLP-SE + C&S (Autoscale) | 74.55 | 85.50 | 89.64 | 78.10 |
+| Linear + C (Fdiff-scale) | — | 72.44 | 87.16 | 75.98 |
+| Linear + C&S (Fdiff-scale) | — | 73.66 | 87.38 | 79.54 |
+| Linear-SE + C (Fdiff-scale) | 71.31 | 85.22 | 88.27 | 73.86 |
+| Linear-SE + C&S (Fdiff-scale) | 72.53 | 87.55 | 88.11 | 79.25 |
+| MLP-SE + C (Fdiff-scale) | 72.59 | 85.42 | 89.62 | 78.40 |
+| MLP-SE + C&S (Fdiff-scale) | 75.74 | 85.74 | 89.85 | 78.24 |
+
+Table 6: Comparison of models with and without spectral embeddings, using only ground truth training labels for final prediction smoothing within C&S.
+
+| Method | Arxiv | Products | Cora | Citeseer | Pubmed |
+| --- | --- | --- | --- | --- | --- |
+| GCN | 71.74 | 75.64 | 85.77 | 73.68 | 88.13 |
+| GCN-SE | 71.76 | 76.12 | 85.83 | 73.60 | 88.32 |
+| MLP | 59.67 | 59.23 | 74.21 | 69.34 | 86.73 |
+| MLP-SE | 71.51 | 63.41 | 74.06 | 68.10 | 86.85 |
+| MLP + C&S (Autoscale) | 71.76 | 79.42 | 87.56 | 76.42 | 89.29 |
+| MLP-SE + C&S (Autoscale) | 72.62 | 78.60 | 87.39 | 76.31 | 89.33 |
+| MLP + C&S (FDiff-scale) | 71.57 | 83.8 | 87.61 | 76.44 | 89.28 |
+| MLP-SE + C&S (FDiff-scale) | 72.43 | 84.18 | 87.39 | 76.42 | 89.23 |
+
+| Method | Email | Rice31 | US County | wikiCS |
+| --- | --- | --- | --- | --- |
+| GCN | — | 15.45 | 84.13 | 78.61 |
+| GCN-SE | 74.51 | 38.54 | 89.72 | 78.15 |
+| MLP | — | 15.73 | 87.77 | 71.42 |
+| MLP-SE | 69.13 | 17.16 | 87.70 | 73.07 |
+| MLP + C&S (Autoscale) | — | 85.05 | 89.67 | 78.92 |
+| MLP-SE + C&S (Autoscale) | 74.55 | 85.50 | 89.64 | 78.10 |
+| MLP + C&S (FDiff-scale) | — | 86.40 | 89.64 | 78.10 |
+| MLP-SE + C&S (FDiff-scale) | 75.74 | 85.74 | 89.85 | 78.24 |
+
+
+Figure 4: US County ground truth class labels.
+
+Figure 5: Linear-SE + C&S prediction performance on US County.
+
+(Legend for Figures 4 to 6: Train, Base, Correction step, Smoothing step, Incorrect.)
+
+
+Figure 6: GCN-SE prediction performance on US County.
+
+
+
+
+Figure 7: Rice31 ground truth class labels.
+
+Figure 8: Linear-SE + C&S prediction performance on Rice31.
+
+(Legend for Figures 7 to 9: Train, Base, Correction step, Smoothing step, Incorrect.)
+
+
+Figure 9: GCN-SE prediction performance on Rice31.
\ No newline at end of file
diff --git a/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/images.zip b/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..194ff127e451382724dd3a903118605dc7bc952a
--- /dev/null
+++ b/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72888075ca47487bfb409f7af6388fb6dffda817036045ef154912383727b0ff
+size 1468306
diff --git a/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/layout.json b/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..3623625ee0ec9d11da6cc0b48dfb2022fa5217fa
--- /dev/null
+++ b/combininglabelpropagationandsimplemodelsoutperformsgraphneuralnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40ad4a7a3bbb8a785c224e261f63720507a3c85db7aad564452ce469b4c13542
+size 455060
diff --git a/combiningphysicsandmachinelearningfornetworkflowestimation/533bcf4a-eba1-4c03-ada5-a4d10c518e73_content_list.json b/combiningphysicsandmachinelearningfornetworkflowestimation/533bcf4a-eba1-4c03-ada5-a4d10c518e73_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..acec8aa522142477e465d6566cf5cda7e658db60
--- /dev/null
+++ b/combiningphysicsandmachinelearningfornetworkflowestimation/533bcf4a-eba1-4c03-ada5-a4d10c518e73_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:076e3aabe6a7cc7315a6338fb6de143b7ce4fa2e11487dac4e35300a9f2a2a17
+size 109060
diff --git a/combiningphysicsandmachinelearningfornetworkflowestimation/533bcf4a-eba1-4c03-ada5-a4d10c518e73_model.json b/combiningphysicsandmachinelearningfornetworkflowestimation/533bcf4a-eba1-4c03-ada5-a4d10c518e73_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..67135b4f452547b1ab09a9003c672dca70c5baee
--- /dev/null
+++ b/combiningphysicsandmachinelearningfornetworkflowestimation/533bcf4a-eba1-4c03-ada5-a4d10c518e73_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:822c5bb378978889377d4feff7691ab6f9772aea5f6a6ebdef132eb19782f88f
+size 132210
diff --git a/combiningphysicsandmachinelearningfornetworkflowestimation/533bcf4a-eba1-4c03-ada5-a4d10c518e73_origin.pdf b/combiningphysicsandmachinelearningfornetworkflowestimation/533bcf4a-eba1-4c03-ada5-a4d10c518e73_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1edf03c8ab5563315ddef08f2892ef2623d09b1f
--- /dev/null
+++ b/combiningphysicsandmachinelearningfornetworkflowestimation/533bcf4a-eba1-4c03-ada5-a4d10c518e73_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dce8d096f18e16b08587654b498d3a27027592ceb00d988674eadd95d9d05523
+size 2119051
diff --git a/combiningphysicsandmachinelearningfornetworkflowestimation/full.md b/combiningphysicsandmachinelearningfornetworkflowestimation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..45efaef15607b35685e5de36af1e2b676fe33320
--- /dev/null
+++ b/combiningphysicsandmachinelearningfornetworkflowestimation/full.md
@@ -0,0 +1,483 @@
+# COMBINING PHYSICS AND MACHINE LEARNING FOR NETWORK FLOW ESTIMATION
+
+Arlei Silva, Furkan Kocayusufoglu
+
+Computer Science Department, UC Santa Barbara, CA 93106-5110, USA
+
+Saber Jafarpour, Francesco Bullo
+
+Mechanical Engineering Department and the Center of Control, Dynamical Systems and
+
+Computation, UC Santa Barbara, CA 93106-5070, USA
+
+Ananthram Swami
+
+U.S. Army Research Lab, Adelphi, MD 20783, USA
+
+Ambuj Singh
+
+Computer Science Department, UC Santa Barbara, CA 93106-5110, USA
+
+# ABSTRACT
+
+The flow estimation problem consists of predicting missing edge flows in a network (e.g., traffic, power, and water) based on partial observations. These missing flows depend both on the underlying physics (edge features and a flow conservation law) as well as the observed edge flows. This paper introduces an optimization framework for computing missing edge flows and solves the problem using bilevel optimization and deep learning. More specifically, we learn regularizers that depend on edge features (e.g., number of lanes in a road, resistance of a power line) using neural networks. Empirical results show that our method accurately predicts missing flows, outperforming the best baseline, and is able to capture relevant physical properties in traffic and power networks.
+
+# 1 INTRODUCTION
+
+In many applications, ranging from road traffic to supply chains to power networks, the dynamics of flows on edges of a graph is governed by physical laws/models (Bressan et al., 2014; Garavello & Piccoli, 2006). For instance, the LWR model describes equilibrium equations for road traffic Lighthill & Whitham (1955); Richards (1956). However, it is often difficult to fully observe flows in these applications and, as a result, they rely on off-the-shelf machine learning models to make predictions about missing flows (Li et al., 2017; Yu et al., 2018). A key limitation of these machine learning models is that they disregard the physics governing the flows. So, the question arises: can we combine physics and machine learning to make better flow predictions?
+
+This paper investigates the problem of predicting missing edge flows based on partial observations and the underlying domain-specific physics defined by flow conservation and edge features (Jia et al., 2019). Edge flows depend on the graph topology due to a flow conservation law—i.e. the total inflow at every vertex is approximately its total out-flow. Moreover, the flow at an edge also depends on its features, which might regularize the space of possible flow distributions in the graph. Here, we propose a model that learns how to predict missing flows from data using bilevel optimization (Franceschi et al., 2017) and neural networks. More specifically, features are given as inputs to a neural network that produces edge flow regularizers. Weights of the network are then optimized via reverse-mode differentiation based on a flow estimation loss from multiple train-validation pairs.
+
+Our work falls under a broader effort towards incorporating physics knowledge to machine learning, which is relevant for natural sciences and engineering applications where data availability is limited (Rackauckas et al., 2020). Conservation laws (of energy, mass, momentum, charge, etc.) are
+
+essential to our understanding of the physical world. The classical Noether's theorem shows that such laws arise from symmetries in nature (Hanc et al., 2004). However, flow estimation, which is an inverse problem (Tarantola, 2005; Arridge et al., 2019), is ill-posed under conservation alone. Regularization enables us to apply domain-knowledge in the solution of inverse problems.
+
+We motivate our problem and evaluate its solutions using two application scenarios. The first is road traffic networks (Coclite et al., 2005), where vertices represent locations, edges are road segments, flows are counts of vehicles that traverse a segment and features include numbers of lanes and speed limits. The second scenario is electric power networks (Dörfler et al., 2018), where vertices represent power buses, edges are power lines, flows are amounts of power transmitted and edge features include resistances and lengths of lines. Irrigation channels, gas pipelines, blood circulation, supply chains, air traffic, and telecommunication networks are other examples of flow graphs.
+
+Our contributions can be summarized as follows: (1) We introduce a missing flow estimation problem with applications in a broad class of flow graphs; (2) we propose a model for flow estimation that is able to learn the physics of flows by combining reverse-mode differentiation and neural networks; (3) we show that our model outperforms the best baseline by up to $18\%$ ; and (4) we provide evidence that our model learns interpretable physical properties, such as the role played by resistance in a power transmission network and by the number of lanes in a road traffic network.
+
+# 2 FLOW ESTIMATION PROBLEM
+
+We introduce the flow estimation problem, which consists of inferring missing flows in a network based on a flow conservation law and edge features. We provide a list of symbols in the Appendix.
+
+Flow Graph. Let $\mathcal{G}(\mathcal{V},\mathcal{E},\mathcal{X})$ be a flow graph with vertices $\mathcal{V}$ $(n = |\mathcal{V}|)$, edges $\mathcal{E}$ $(m = |\mathcal{E}|)$, and edge feature matrix $\mathcal{X}\in \mathbb{R}^{m\times d}$, where $\mathcal{X}[e]$ are the features of edge $e$. A flow vector $\mathbf{f}\in \mathbb{R}^m$ contains the (possibly noisy) flow $f_{e}$ for each edge $e\in \mathcal{E}$. In case $\mathcal{G}$ is directed, $\mathbf{f}\in \mathbb{R}_+^m$; otherwise, a flow is negative if it goes against the arbitrary orientation of its edge. We assume that flows are induced by the graph, and thus, the total in-flow at each vertex is approximately equal to its total out-flow:
+
+$$
+\sum_ {(v _ {i}, u) \in \mathcal {E}} f _ {(v _ {i}, u)} \approx \sum_ {(u, v _ {o}) \in \mathcal {E}} f _ {(u, v _ {o})}, \forall u \in \mathcal {V}
+$$
+
+In the case of a road network, flow conservation implies that vehicles mostly remain on the road.
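To make the conservation law concrete, the following NumPy sketch builds an oriented incidence matrix for a toy directed graph and verifies that a conserved flow has zero divergence at every vertex (the graph and flow values are illustrative):

```python
import numpy as np

# Toy directed graph: a diamond 0 -> 1 -> 3 and 0 -> 2 -> 3, plus edge 3 -> 0.
edges = [(0, 1), (1, 3), (0, 2), (2, 3), (3, 0)]
n = 4

# Oriented incidence matrix: +1 where an edge enters a vertex, -1 where it
# leaves, so (B @ f)[u] is the in-flow minus out-flow at vertex u.
B = np.zeros((n, len(edges)))
for j, (u, v) in enumerate(edges):
    B[u, j] -= 1.0
    B[v, j] += 1.0

f = np.array([2.0, 2.0, 3.0, 3.0, 5.0])  # a perfectly conserved flow
print(np.abs(B @ f).max())  # 0.0: zero divergence at every vertex
```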
+
+Flow Estimation Problem. Given a graph $\mathcal{G}(\mathcal{V},\mathcal{E},\mathcal{X})$ with partial flow observations $\hat{\mathbf{f}}\in \mathbb{R}^{m^{\prime}}$ for a subset $\mathcal{E}'\subseteq \mathcal{E}$ of edges ($\hat{f}_e$ is the flow for $e\in \mathcal{E}'$, and $m^{\prime} = |\mathcal{E}^{\prime}| < m$), predict flows for edges in $\mathcal{E}\setminus \mathcal{E}^{\prime}$.
+
+In our road network example, partial vehicle counts $\hat{\mathbf{f}}$ might be measured by sensors placed at a few segments, and the goal is to estimate counts at the remaining segments. One would expect flows not to be fully conserved in most applications due to the existence of inputs and outputs, such as parking lots and power generators/consumers. In case these input and output values are known exactly, they can be easily incorporated into our problem as flow observations. Moreover, if they are known approximately, we can apply them as priors (as will be detailed in the next section). For the remainder of this paper, we assume that inputs and outputs are unknown and employ flow conservation as an approximation of the system. Thus, unlike classical flow optimization problems, such as min-cost flow (Ahuja et al., 1988), we only assume that flows are conserved approximately.
+
+Notice that our problem is similar to the one studied in Jia et al. (2019). However, while their definition also assumes flow conservation, it does not take into account edge features. We claim that these features play an important role in capturing the physics of flows. Our main contribution is a new model that is able to learn how to regularize flows based on edge features using neural networks.
+
+# 3 OUR APPROACH: PHYSICS+LEARNING
+
+In this section, we introduce our approach for the flow estimation problem, which is summarized in Figure 1. We formulate flow estimation as an optimization problem (Section 3.1), where the interplay between the flow network topology and edge features is defined by the physics of flow graphs. Flow estimation is shown to be equivalent to a regularized least-squares problem (Section
+
+
+Figure 1: Summary of the proposed approach for predicting missing flows in a graph based on partial observations and edge features. We learn to combine features and a flow conservation law, which together define the physics of the flow graph. A regularization function $\mathcal{Q}(\mathcal{X};\Theta)$ modeled as a neural network with parameters $\Theta$ takes as input edge features $\mathcal{X}[e]$ . A flow estimation algorithm applies the regularization, partial observations $(\widetilde{\mathbf{f}})$ , prior flows $(\mathbf{x}^{(0)})$ and flow conservation to predict missing flows $\mathbf{x}$ . Network parameters $\Theta$ are learned based on a $K$ -fold cross validation loss with respect to validation flows $\hat{\mathbf{x}}$ . Our model is trained end-to-end using reverse-mode differentiation.
+
+3.2). Moreover, we describe how the effect of edge features and the graph topology can be learned from data using bilevel optimization and neural networks in Section 3.3. Finally, we propose a reverse-mode differentiation algorithm for flow estimation in Section 3.4.
+
+# 3.1 FLOW ESTIMATION VIA OPTIMIZATION
+
+The extent to which flow conservation holds for flows in a graph is known as divergence and can be measured using the oriented incidence matrix $B \in \mathbb{R}^{n \times m}$ of $\mathcal{G}$. The matrix is defined as follows: $B_{ij} = 1$ if $\exists u$ such that $e_j = (u, v_i) \in \mathcal{E}$, $B_{ij} = -1$ if $\exists u$ such that $e_j = (v_i, u) \in \mathcal{E}$, and $B_{ij} = 0$, otherwise. Given $B$ and $\mathbf{f}$, the divergence at a vertex $u$ can be computed as:
+
+$$
+\left(B \mathbf {f}\right) _ {u} = \sum_ {\left(v _ {i}, u\right) \in \mathcal {E}} f _ {\left(v _ {i}, u\right)} - \sum_ {\left(u, v _ {o}\right) \in \mathcal {E}} f _ {\left(u, v _ {o}\right)} \tag {1}
+$$
+
+And thus, we can compute the total (squared) divergence in the graph as $||B\mathbf{f}||_2^2 = \mathbf{f}^\top B^\top B\mathbf{f} = \sum_{u\in \mathcal{V}}((B\mathbf{f})_u)^2$ . One could try to solve the flow estimation problem by minimizing $||B\mathbf{f}||_2^2$ while keeping the observed flows fixed, however, this problem is ill-posed—there might be multiple solutions to the optimization. The standard approach in such a scenario is to resort to regularization. In particular, we apply a generic regularization function $\Phi$ with parameters $\Theta$ as follows:
+
+$$
+\mathbf {f} ^ {*} = \underset {\mathbf {f} \in \Omega} {\arg \min } \| B \mathbf {f} \| _ {2} ^ {2} + \Phi (\mathbf {f}, \mathcal {X}, \mathbf {f} ^ {(0)}; \Theta) \quad \text {s.t.} \quad f _ {e} = \hat {f} _ {e}, \forall e \in \mathcal {E} ^ {\prime} \tag {2}
+$$
+
+where $\Omega$ is the domain of $\mathbf{f}$, $\mathbf{f}^{(0)} \in \mathbb{R}^m$ is a prior for flows, $f_e$ (resp. $\hat{f}_e$) is the entry of $\mathbf{f}$ (resp. $\hat{\mathbf{f}}$) for edge $e$, and the constraint guarantees that observed flows are not changed. Priors $\mathbf{f}^{(0)}$, not to be confused with observed flows $\hat{\mathbf{f}}$, should be set according to the application (e.g., as zero, based on a black-box model, or from historical data). Regarding the domain $\Omega$, we consider $\Omega = \mathbb{R}^m$ and $\Omega = \mathbb{R}_+^m$. The second case is relevant for directed graphs—when flows must follow edge orientations (e.g., traffic).
+
+In Jia et al. (2019), the authors set $\Phi (\mathbf{f},\mathcal{X},\mathbf{f}^{(0)};\Theta)$ to $\lambda^2 ||\mathbf{f}||_2^2$ for a regularization parameter $\lambda$, which implies a uniform zero prior with an $L_{2}$ penalty over edges. We claim that the regularization function plays an important role in capturing the physics of flow graphs. As an example, for a power network, $\Phi$ should account for the resistance of the lines. Thus, we propose learning the regularization from data. Our approach is based on a least-squares formulation, which will be described next.
+
+# 3.2 REGULARIZED LEAST-SQUARES FORMULATION
+
+The flow estimation problem can be viewed as an inverse problem (Tarantola, 2005). Let $\mathbf{x} \in \mathbb{R}^{m - m'}$ be the vector of missing flows and $H \in \mathbb{R}^{m \times (m - m')}$ be a matrix such that $H_{ij} = 1$ if $f_i$ maps to $x_j$ (i.e., they are associated to the same edge), and $H_{ij} = 0$, otherwise. Moreover, let $\widetilde{\mathbf{f}} \in \mathbb{R}^m$ be such that $\widetilde{f}_e = \hat{f}_e$ if $e \in \mathcal{E}'$ and $\widetilde{f}_e = 0$, otherwise. Using this notation, we define flow estimation as $BH\mathbf{x} = -B\widetilde{\mathbf{f}} + \epsilon$, where $BH$ is a forward operator, projecting $\mathbf{x}$ to a vector of vertex divergences, and $-B\widetilde{\mathbf{f}} + \epsilon$ is the observed data, capturing (negative) vertex divergences for observed flows. The error $\epsilon$ can be interpreted as noise in observations or some level of model misspecification.
+
+We can also define a regularized least-squares problem with the goal of recovering missing flows $\mathbf{x}$ :
+
+$$
+\mathbf {x} ^ {*} = \underset {\mathbf {x} \in \Omega^ {\prime}} {\arg \min } | | B H \mathbf {x} + B \widetilde {\mathbf {f}} | | _ {2} ^ {2} + | | \mathbf {x} - \mathbf {x} ^ {(0)} | | _ {\mathcal {Q} (\mathcal {X}; \Theta)} ^ {2} \tag {3}
+$$
+
+where $\Omega'$ is a projection of the domain of $\mathbf{f}$ to the space of $\mathbf{x}$, $||\mathbf{x}||_M^2 = \mathbf{x}^\top M\mathbf{x}$ is the matrix-scaled norm of $\mathbf{x}$, and $\mathbf{x}^{(0)} \in \mathbb{R}^{m - m'}$ are priors for missing flows. The regularization function $\Phi(\mathbf{f}, \mathcal{X}, \mathbf{f}^{(0)}; \Theta)$ has the form $||\mathbf{x} - \mathbf{x}^{(0)}||_{\mathcal{Q}(\mathcal{X}; \Theta)}^2$, where the matrix $\mathcal{Q}(\mathcal{X}; \Theta)$ is a function of parameters $\Theta$ and edge features $\mathcal{X}$. We focus on the case where $\mathcal{Q}(\mathcal{X}; \Theta)$ is non-negative and diagonal.
+
+Equation 3 has a Bayesian interpretation, with $\mathbf{x}$ being a maximum likelihood estimate under a Gaussian assumption—i.e., $\mathbf{x} \sim N(\mathbf{x}^{(0)}, \mathcal{Q}(\mathcal{X}; \Theta)^{-1})$ and $B\widetilde{\mathbf{f}} \sim N(0, I)$ (Tarantola, 2005). Thus, $\mathcal{Q}(\mathcal{X}; \Theta)$ captures the confidence in the prior estimates $\mathbf{x}^{(0)}$ relative to the confidence in the divergences induced by the observed flows $\hat{\mathbf{f}}$. This allows the regularization function to adapt to different edges based on their features. For instance, in our road network example, $\mathcal{Q}(\mathcal{X}; \Theta)$ might place a lower weight on flow conservation at a road segment with a small number of lanes, which is a possible traffic bottleneck.
+
+Given the least-squares formulation described in this section, how do we model the regularization function $\mathcal{Q}$ and learn its parameters $\Theta$? We would like $\mathcal{Q}$ to be expressive enough to capture complex physical properties of flows, while allowing $\Theta$ to be computed accurately and efficiently. We will address these challenges in the remainder of this paper.
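When $\Omega' = \mathbb{R}^{m-m'}$, Equation 3 is an unconstrained quadratic, so its minimizer follows from the normal equations $(H^\top B^\top B H + \mathcal{Q})\,\mathbf{x}^* = -H^\top B^\top B \widetilde{\mathbf{f}} + \mathcal{Q}\mathbf{x}^{(0)}$. A minimal NumPy sketch on a toy diamond graph with two unobserved edges (the flow values and the diagonal of $\mathcal{Q}$ are illustrative):

```python
import numpy as np

def estimate_flows(B, H, f_tilde, q_diag, x0):
    """Closed-form minimizer of ||B H x + B f~||^2 + ||x - x0||^2_Q for a
    diagonal Q = diag(q_diag) and unconstrained x (normal equations)."""
    A = H.T @ B.T @ B @ H + np.diag(q_diag)
    b = -H.T @ B.T @ B @ f_tilde + q_diag * x0
    return np.linalg.solve(A, b)

# Diamond graph 0 -> 1 -> 3, 0 -> 2 -> 3, 3 -> 0; edges 2 and 3 unobserved.
edges = [(0, 1), (1, 3), (0, 2), (2, 3), (3, 0)]
B = np.zeros((4, len(edges)))
for j, (u, v) in enumerate(edges):
    B[u, j], B[v, j] = -1.0, 1.0
f_tilde = np.array([2.0, 2.0, 0.0, 0.0, 5.0])  # zeros at unobserved edges
H = np.zeros((5, 2))
H[2, 0] = H[3, 1] = 1.0                         # map x to edges 2 and 3
x = estimate_flows(B, H, f_tilde, np.full(2, 1e-3), np.zeros(2))
print(np.round(x, 2))  # ~[3, 3]: the completion that restores conservation
```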
+
+# 3.3 BILEVEL OPTIMIZATION FOR META-LEARNING THE PHYSICS OF FLOWS
+
+This section introduces a model for flow estimation that is able to learn the regularization function $\mathcal{Q}(\mathcal{X};\Theta)$ in Equation 3 from data using bilevel optimization and neural networks.
+
+Bilevel formulation. We learn the parameters $\Theta$ that determine the regularization function $\mathcal{Q}(\mathcal{X};\Theta)$ using the following bilevel optimization formulation:
+
+$$
+\Theta^ {*} = \underset {\Theta} {\arg \min } \mathbb {E} [ \| \hat {\mathbf {x}} - \mathbf {x} ^ {*} \| _ {2} ^ {2} ] \tag {4}
+$$
+
+$$
+\operatorname {s t}. \quad \mathbf {x} ^ {*} = \underset {\mathbf {x} \in \Omega^ {\prime}} {\arg \min } \| B H \mathbf {x} + B \widetilde {\mathbf {f}} \| _ {2} ^ {2} + \| \mathbf {x} - \mathbf {x} _ {0} \| _ {Q (\mathcal {X}; \Theta)} ^ {2} \tag {5}
+$$
+
+where the inner (lower) problem is the same as Equation 3 and the outer (upper) problem is the expected loss with respect to ground truth flows $\hat{\mathbf{x}}$ —which we estimate using cross-validation.
+
+Notice that optimal values for parameters $\Theta$ and missing flows $\mathbf{x}$ are both unknown in the bilevel optimization problem. The expectation in Equation 4 is a function of multiple instances of the inner problem (Equation 5). Each inner problem instance has an optimal solution $\mathbf{x}^*$ that depends on parameters $\Theta$ . In general, bilevel optimization is not only non-convex but also NP-hard (Colson et al., 2007). However, recent gradient-based solutions for bilevel optimization have been successfully applied to large-scale problems, such as hyper-parameter optimization and meta-learning (Franceschi et al., 2018; Lorraine et al., 2020). We will first describe how we model the function $\mathcal{Q}(\mathcal{X};\Theta)$ and then discuss how this problem can be solved efficiently using reverse-mode differentiation.
+
+We propose to model $\mathcal{Q}(\mathcal{X};\Theta)$ using a neural network, where $\mathcal{X}$ are inputs, $\Theta$ are learnable weights and the outputs are diagonal entries of the regularization matrix. This is a natural choice due to the expressive power of neural nets (Cybenko, 1989; Xu et al., 2018).
+
+Multi-Layer Perceptron (MLP). An MLP-based $\mathcal{Q}(\mathcal{X};\Theta)$ has the following form:
+
+$$
+\mathcal {Q} (\mathcal {X}; \Theta) = \operatorname {d i a g} (M L P (\mathcal {X}; \Theta)) \tag {6}
+$$
+
+where $MLP(\mathcal{X};\Theta)\in \mathbb{R}^{m - m^{\prime}}$ . For instance, $\mathcal{Q}(\mathcal{X};\Theta)$ can be a 2-layer MLP:
+
+$$
+\mathcal {Q} (\mathcal {X}; \Theta) = \operatorname {d i a g} (a (b (\mathcal {X} W ^ {(1)}) W ^ {(2)})) \tag {7}
+$$
+
+where $\Theta = \{W^{(1)}, W^{(2)}\}$ , $W^{(1)} \in \mathbb{R}^{d \times h}$ , $W^{(2)} \in \mathbb{R}^{h \times 1}$ , $h$ is the number of nodes in the hidden layer, both $a$ and $b$ are activation functions, and the bias was omitted for convenience.
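A minimal NumPy sketch of Equation 7, assuming a ReLU hidden activation $b$ and a softplus output activation $a$; the softplus is one plausible choice (an assumption here, since $a$ and $b$ are left generic above) for keeping the diagonal of $\mathcal{Q}$ non-negative, as required in Section 3.2:

```python
import numpy as np

def q_diag(X, W1, W2):
    """2-layer MLP over edge features producing per-edge regularization
    weights: b = ReLU hidden layer, a = softplus output (assumed choices
    that keep the diagonal of Q non-negative). Biases omitted as in Eq. 7."""
    h = np.maximum(X @ W1, 0.0)   # b: ReLU hidden layer
    z = (h @ W2).ravel()
    return np.log1p(np.exp(z))    # a: softplus >= 0

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))   # 5 edges with d = 3 features each
W1 = rng.standard_normal((3, 8))  # h = 8 hidden units
W2 = rng.standard_normal((8, 1))
Q = np.diag(q_diag(X, W1, W2))
print(Q.shape)  # (5, 5), non-negative and diagonal
```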
+
+Graph Neural Network (GNN). The MLP-based approach assumes that each entry $[\mathcal{Q}(\mathcal{X};\Theta)]_{e,e}$ associated to an edge $e$ is a function of its features $\mathcal{X}[e]$ only. However, we are also interested in how entries $[\mathcal{Q}(\mathcal{X};\Theta)]_{e,e}$ might depend on the features of the neighborhood of $e$ in the flow graph topology. Thus, we consider the case where $\mathcal{Q}(\mathcal{X};\Theta)$ is a GNN, which is described in the Appendix.
+
+# 3.4 FLOW ESTIMATION ALGORITHM
+
+We now focus on how to solve our bilevel optimization problem (Equations 4 and 5). Our solution applies gradient-based approaches (e.g., SGD (Bottou & Bousquet, 2008), Adam (Kingma & Ba, 2014)) and, for simplicity, our description will be based on the particular case of Gradient Descent and assume a zero prior $(\mathbf{x}^{(0)} = \mathbf{0})$ . A key challenge in our problem is to efficiently approximate the gradient of the outer objective with respect to the parameters $\Theta$ , which, by the chain rule, depends on the gradient of the inner objective with respect to $\Theta$ .
+
+We first introduce extra notation to describe the outer problem (Equation 4). Let $(\hat{\mathbf{f}}_k,\hat{\mathbf{g}}_k)$ be one of $K$ train-validation folds, both containing ground-truth flow values, such that $\hat{\mathbf{f}}_k\in \mathbb{R}^p$ and $\hat{\mathbf{g}}_k\in \mathbb{R}^q$. For each fold $k$, we apply the inner problem (Equation 5) to estimate missing flows $\mathbf{x}_k$. Estimates for all folds are concatenated into a single vector $\mathbf{x} = [\mathbf{x}_1;\mathbf{x}_2;\dots ;\mathbf{x}_K]$, and similarly for the validation sets, $\hat{\mathbf{g}} = [\hat{\mathbf{g}}_1;\hat{\mathbf{g}}_2;\dots ;\hat{\mathbf{g}}_K]$. We define a matrix $R\in \mathbb{R}^{q\times (m - m')}$ such that $R_{ij} = 1$ if prediction $x_{j}$ corresponds to validation flow $\hat{g}_i$ and $R_{ij} = 0$, otherwise. Using this representation, we can approximate the expectation in the outer objective as $\Psi (\mathbf{x},\Theta) = (1 / K)||R\mathbf{x} - \hat{\mathbf{g}}||_2^2$, where $\mathbf{x}$ depends implicitly on $\Theta$. We also introduce $\Upsilon_{\Theta}(\mathbf{x})$ as the inner problem objective. Moreover, let $\Gamma_j(\mathbf{x}_{k,j - 1},\Theta_{i - 1})$ be one step of gradient descent on $\mathbf{x}_k$ at iteration $j$ with learning rate $\beta$:
+
+$$
+\begin{array}{l} \Gamma_ {j} \left(\mathbf {x} _ {k, j - 1}, \Theta_ {i - 1}\right) = \mathbf {x} _ {k, j - 1} - \beta \nabla_ {\mathbf {x}} \Upsilon_ {\Theta} \left(\mathbf {x} _ {k, j - 1}\right) \\ = \mathbf {x} _ {k, j - 1} - 2 \beta \big [ H _ {k} ^ {\intercal} B ^ {\intercal} \big (B H _ {k} \mathbf {x} _ {k, j - 1} + B \widetilde {\mathbf {f}} _ {k} \big) + 2 Q _ {k} \mathbf {x} _ {k, j - 1} \big ] \\ \end{array}
+$$
+
+where $H_{k},Q_{k}$ and $\widetilde{\mathbf{f}}_k$ are the matrix $H$ , a sub-matrix of $\mathcal{Q}(\mathcal{X};\Theta_{i - 1})$ and the observed flows vector $\widetilde{\mathbf{f}}$ (see Section 3.2) for the specific fold $k$ . We have assumed the domain $(\Omega^{\prime})$ of flows $\mathbf{x}_{k,j}$ to be the set of real vectors. For non-negative flows, we add the appropriate proximal operator to $\Gamma_j$ .
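A minimal NumPy sketch of the unrolled update $\Gamma_j$ on a toy graph, iterated to (near) convergence of the inner problem; the graph, observed flows, regularization weights, and the learning rate $\beta$ are illustrative, and the prior is zero as assumed above:

```python
import numpy as np

def gamma_step(x, B, H, f_tilde, q, beta):
    """One unrolled gradient step on the inner objective with zero prior:
    x <- x - 2*beta*(H^T B^T (B H x + B f~) + 2 q * x)."""
    return x - 2 * beta * (H.T @ B.T @ (B @ H @ x + B @ f_tilde) + 2 * q * x)

# Diamond graph 0 -> 1 -> 3, 0 -> 2 -> 3, 3 -> 0; edges 2 and 3 unobserved.
edges = [(0, 1), (1, 3), (0, 2), (2, 3), (3, 0)]
B = np.zeros((4, len(edges)))
for j, (u, v) in enumerate(edges):
    B[u, j], B[v, j] = -1.0, 1.0
f_tilde = np.array([2.0, 2.0, 0.0, 0.0, 5.0])
H = np.zeros((5, 2))
H[2, 0] = H[3, 1] = 1.0
q, beta = np.full(2, 1e-3), 0.1  # illustrative hyperparameters

x = np.zeros(2)                   # zero prior as the starting point
for _ in range(200):              # J unrolled iterations
    x = gamma_step(x, B, H, f_tilde, q, beta)
print(np.round(x, 2))  # close to [3, 3], the conserving completion
```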
+
+Our algorithm applies Reverse-Mode Differentiation (RMD) (Domke, 2012; Franceschi et al., 2017) to estimate $\nabla_{\Theta}\Psi$ and also optimizes $\Theta$ using an iterative algorithm. The main idea of RMD is to first unroll and store a finite number of iterations of the inner problem, $\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_J$, and then reverse over those iterations to estimate $\nabla_{\Theta}\Psi$, which is computed as follows:
+
+$$
+\nabla_{\Theta} \Psi(\mathbf{x}_{J}, \Theta_{i}) = \nabla_{\mathbf{x}} \Psi(\mathbf{x}_{J}, \Theta_{i}) \sum_{j=1}^{J} \left(\prod_{s=j+1}^{J} \frac{\partial \Gamma_{s}(\mathbf{x}_{s-1}, \Theta_{i})}{\partial \mathbf{x}_{s-1}}\right) \frac{\partial \Gamma_{j}(\mathbf{x}_{j-1}, \Theta_{i})}{\partial \Theta}
+$$
+
+In particular, our reverse iteration is based on the following equations:
+
+$$
+\nabla_{\mathbf{x}} \Psi(\mathbf{x}_{J}, \Theta_{i}) = (2/K) R^{\intercal} (R \mathbf{x}_{J} - \hat{\mathbf{g}})
+$$
+
+$$
+\frac{\partial \Gamma_{s}(\mathbf{x}_{s-1}, \Theta_{i})}{\partial \mathbf{x}_{s-1}} = I - 2\beta \left(H^{\intercal} B^{\intercal} B H + 2 \mathcal{Q}(\mathcal{X}; \Theta_{i})\right)
+$$
+
+$$
+\frac{\partial \Gamma_{j}(\mathbf{x}_{j-1}, \Theta_{i})}{\partial \Theta} = -4\beta \left(\partial \mathcal{Q}(\mathcal{X}; \Theta_{i}) / \partial \Theta\right) \mathbf{x}_{j-1}
+$$
+
+where $\partial \mathcal{Q}(\mathcal{X};\Theta_i) / \partial \Theta$ is the gradient of the regularization function $\mathcal{Q}(\mathcal{X};\Theta)$ evaluated at $\Theta_{i}$. In our case, this gradient is obtained via standard backpropagation through the neural network and is omitted here for brevity.
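A minimal sketch of the reverse accumulation implied by these equations, assuming the inner iterates have been stored and that `dQ_dTheta(x)` returns the matrix whose $p$-th column is $(\partial\mathcal{Q}/\partial\theta_p)\mathbf{x}$ (both names are our assumptions for illustration):

```python
import numpy as np

def reverse_pass(xs, R, g_hat, A, dQ_dTheta, beta, K):
    """Sketch of the reverse iteration. xs = [x_0, ..., x_J] are stored
    inner iterates; A stands in for H^T B^T B H + 2 Q(X; Theta_i)."""
    J = len(xs) - 1
    z = (2.0 / K) * (R.T @ (R @ xs[J] - g_hat))   # nabla_x Psi(x_J, Theta_i)
    grad = np.zeros(dQ_dTheta(xs[0]).shape[1])
    M = np.eye(len(z)) - 2.0 * beta * A           # dGamma/dx (constant here)
    for j in range(J, 0, -1):
        grad += -4.0 * beta * (dQ_dTheta(xs[j - 1]).T @ z)  # dGamma_j/dTheta term
        z = M.T @ z                               # push z one step back
    return grad
```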
+
+Algorithm 1 describes our RMD approach for flow estimation. It receives as inputs the flow network $\mathcal{G}(\mathcal{V},\mathcal{E},\mathcal{X})$ , $K$ train-validation folds $\{(\hat{\mathbf{f}}_k,\hat{\mathbf{g}}_k)\}_{k = 1}^K$ , and also hyperparameters $T$ , $J$ , $\alpha$ , and $\beta$ ,
+
+Algorithm 1 RMD Algorithm for Flow Estimation
+Require: Flow network $\mathcal{G}(\mathcal{V},\mathcal{E},\mathcal{X})$ , train-validation folds $\{\left(\hat{\mathbf{f}}_k,\hat{\mathbf{g}}_k\right)\}_{k = 1}^K$ , number of outer iterations $T$ and inner iterations $J$ , learning rates $\alpha$ and $\beta$
+Ensure: Regularization parameters $\Theta$
+1: Initialize parameters $\Theta_0$
+2: $\hat{\mathbf{g}}\gets [\hat{\mathbf{g}}_1;\dots \hat{\mathbf{g}}_K]$
+3: $B\gets$ incidence matrix of $\mathcal{G}$
+4: for outer iterations $i = 1,\ldots T$ do
+5: Initialize missing flows $\mathbf{x}_{k,0}$ for all $k$
+6: for inner iterations $j = 1,\dots J$ do
+7: for folds $k = 1,\dots K$ do
+8: $\mathbf{x}_{k,j}\gets \mathbf{x}_{k,j - 1} - 2\beta [H_k^\top B^\top (BH_k\mathbf{x}_{k,j - 1} + B\widetilde{\mathbf{f}}_k) + 2Q_k\mathbf{x}_{k,j - 1}]$
+9: end for
+10: $\mathbf{x}_j\gets [\mathbf{x}_{1,j};\dots \mathbf{x}_{K,j}]$
+11: end for
+12: $\mathbf{z}_J\gets (2 / K)R^\top (R\mathbf{x}_J - \hat{\mathbf{g}})$
+13: for reverse inner iterations $j = J - 1,\ldots 1$ do
+14: $\overleftarrow{\Theta}\gets \overleftarrow{\Theta} -4\beta \mathbf{z}_{j + 1}(\partial \mathcal{Q}(\mathcal{X};\Theta_{i - 1}) / \partial \Theta)\mathbf{x}_{j + 1}$
+15: $\mathbf{z}_j\gets \mathbf{z}_{j + 1}[I - 2\beta (H^\top B^\top BH + 2\mathcal{Q}(\mathcal{X};\Theta_{i - 1}))]$
+16: end for
+17: Update $\Theta_i\gets \Theta_{i - 1} - \alpha \overleftarrow{\Theta}$
+18: end for
+19: return parameters $\Theta_T$
+
+corresponding to the number of outer and inner iterations, and learning rates for the outer and inner problem, respectively. Its output is a vector of optimal parameters $\Theta$ for the regularization function $\mathcal{Q}(\mathcal{X};\Theta)$ according to the bilevel objective in Equations 4 and 5. We use $\overleftarrow{\Theta}$ to indicate our estimate of $\nabla_{\Theta}\Psi (\Theta_i)$. Iterations of the inner problem are stored for each train-validation fold in lines 4-12. Reverse steps, which produce the estimate $\overleftarrow{\Theta}$, are performed in lines 13-16. We then use $\overleftarrow{\Theta}$ to update our estimate of $\Theta$ in line 17. The time and space complexities of the algorithm are $O(TJKm)$ and $O(Jm)$, respectively, due to the cost of computing and storing the inner problem iterations.
+
+As discussed in the previous section, bilevel optimization is non-convex, and thus we cannot guarantee that Algorithm 1 will return a global optimum. In particular, because our regularization function $\mathcal{Q}(\mathcal{X};\Theta)$ is a neural network, the learning objective is non-convex. However, the inner problem (Equation 5) in our formulation has a convex (least-squares) objective. Franceschi et al. (2018) have shown that this property implies convergence. We also find that our algorithm often converges to a good estimate of the parameters in our experiments.
+
+# 4 EXPERIMENTS
+
+We evaluate our approaches for the flow estimation problem using two real datasets and a representative set of baselines and metrics. Due to space limitations, we provide an extended version of this section, with more details on datasets and experimental settings as well as additional results, in the Appendix.
+
+# 4.1 DATASETS
+
+This section summarizes the datasets used in our evaluation. We normalize flow values to $[0,1]$ and map discrete features to real vector dimensions using one-hot encoding.
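These two preprocessing steps can be sketched as follows (a minimal NumPy version; the paper does not specify its exact implementation):

```python
import numpy as np

def minmax_normalize(flows):
    """Scale flow values to [0, 1], as done for both datasets."""
    lo, hi = flows.min(), flows.max()
    return (flows - lo) / (hi - lo)

def one_hot(values):
    """Map a discrete feature to real vector dimensions via one-hot encoding."""
    cats = sorted(set(values))
    idx = {c: i for i, c in enumerate(cats)}
    out = np.zeros((len(values), len(cats)))
    for row, v in enumerate(values):
        out[row, idx[v]] = 1.0
    return out
```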
+
+Traffic: Vertices represent locations and directed edges represent road segments between two locations in Los Angeles County, CA. Flows are daily average vehicle counts measured by sensors placed along highways in the year 2018. We assign each sensor to an edge in the graph based on proximity and other sensor attributes. Our road network covers the Los Angeles County area, with 5,749 vertices and 7,498 edges, of which 2,879 edges (38%) have sensors. The following features were mapped to an 18-dimensional vector: lat-long coordinates, number of lanes, max-speed, highway type (motorway, motorway link, trunk, etc.), in-degree, out-degree, and centrality (PageRank). The in-degree and centrality of an edge are computed based on its source vertex. Similarly, the out-degree of an edge is the out-degree of its target vertex.
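The per-edge topology features described above can be sketched in plain Python (centrality/PageRank is omitted for brevity; all names are illustrative):

```python
from collections import defaultdict

def edge_topology_features(edges):
    """For each directed edge (u, v): in-degree of its source u and
    out-degree of its target v, as described in the text."""
    indeg, outdeg = defaultdict(int), defaultdict(int)
    for u, v in edges:
        indeg[v] += 1
        outdeg[u] += 1
    return [(indeg[u], outdeg[v]) for u, v in edges]
```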
+
+Power: Vertices represent buses in Europe, undirected edges are power transmission lines, and edge flows measure the total active power (in MW) being transmitted through the lines. The dataset is obtained from PyPSA-Eur (Horsch et al., 2018; Brown et al., 2017), an optimization model of the European power transmission system, which generates realistic power flows based on solutions of optimal linear power flow problems with historical production and consumption data. Default values were applied for the PyPSA-Eur settings. The resulting graph has 2,048 vertices, 2,729 edges, and 14-dimensional feature vectors capturing resistance, reactance, length, number of parallel lines, nominal power, edge degree, etc. Please see the Appendix for more details.
+
+# 4.2 EXPERIMENTAL SETTINGS
+
+Evaluation metrics: We apply Pearson's correlation (CORR), Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE) to compare ground-truth and predicted flows. These metrics are formally defined in the Appendix.
+
+Baselines: Divergence minimization (Div) (Jia et al., 2019) maximizes flow conservation using a single regularization parameter $\lambda$ , which we optimize using line search in a validation set of flows. Multi-Layer Perceptron (MLP) is a 2-layer neural network with ReLU activations for all layers that learns to predict flows based on edge features. Graph Convolutional Network (GCN) is a 2-layer graph neural network, also with ReLU activations and Chebyshev convolutions of degree 2, that learns to predict the flows using both edge features and the topology but disregarding flow conservation (Kipf & Welling, 2016; Defferrard et al., 2016). We also consider two hybrid baselines. MLP-Div applies the predictions from MLP as priors to Div. Similarly, predictions from GCN are used as priors for GCN-Div. For both hybrid models, we also optimize the parameter $\lambda$ .
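As a sketch, the forward pass of the MLP baseline might look as follows (hidden size and weights are placeholder assumptions; the paper does not report them here):

```python
import numpy as np

def mlp_predict(X, W1, b1, W2, b2):
    """2-layer MLP with ReLU activations on every layer, mapping edge
    features X (one row per edge) to a non-negative flow prediction."""
    h = np.maximum(X @ W1 + b1, 0.0)      # hidden layer
    return np.maximum(h @ W2 + b2, 0.0)   # output layer, one flow per edge
```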
+
+Our approaches: We consider three variations of Algorithm 1. However, one important modification is that we perform the reverse iterations for each fold—i.e., folds are treated as batches in SGD. Bil-MLP and Bil-GCN apply our reverse-mode differentiation approach using an MLP and a GCN as a regularizer. Moreover, both approaches use zero as the prior $\mathbf{x}^{(0)}$ . Bil-GCN-Prior applies the GCN predictions as flow priors. Architectures of the neural nets are the same as the baselines.
+
+# 4.3 FLOW ESTIMATION ACCURACY
+
+Table 1 compares our methods and the baselines in terms of several metrics using the Traffic and Power datasets. Values of CORR achieved by MLP and GCN for Traffic are missing because they were undefined: these models generated predictions with zero variance for at least one of the train-test folds. All methods suffer from high MAPE errors for Power, which is due to an over-estimation of small flows. Bil-GCN achieves the best results on both datasets in terms of all metrics, with $6\%$ and $18\%$ lower RMSE than the best baseline for Traffic and Power, respectively. However, notice that Bil-MLP and Bil-GCN achieve very similar performance for Power, and Bil-GCN-Prior does not outperform our other methods. We also show scatter plots with the true vs. predicted flows for some of the best approaches in Figure 2. Traffic proved to be the more challenging dataset, which can be explained, in part, by training data sparsity: only $38\%$ of its edges are labeled.
+
+Figure 2: Scatter plots with true (x) and predicted (y) flows for two approaches on each dataset: (a) GCN, Traffic; (b) Bil-GCN, Traffic; (c) Div, Power; (d) Bil-GCN, Power. The results are consistent with Table 1 and show that our methods are more accurate than the baselines.
+| | Traffic | | | | Power | | | |
| Method | RMSE | MAE | MAPE | CORR | RMSE | MAE | MAPE | CORR |
| Div | 0.071 | 0.041 | 1.23 | 0.76 | 0.034 | 0.015 | 1419.2 | 0.93 |
| MLP | 0.083 | 0.055 | 1.13 | - | 0.069 | 0.043 | 8334.5 | 0.61 |
| GCN | 0.066 | 0.040 | 0.94 | - | 0.064 | 0.043 | 5622.3 | 0.64 |
| MLP-Div | 0.066 | 0.041 | 1.51 | 0.81 | 0.033 | 0.015 | 1593.5 | 0.93 |
| GCN-Div | 0.071 | 0.048 | 1.69 | 0.81 | 0.033 | 0.015 | 1795.2 | 0.93 |
| Bil-MLP | 0.069 | 0.038 | 1.05 | 0.79 | 0.027 | 0.011 | 758.0 | 0.95 |
| Bil-GCN | 0.062 | 0.034 | 0.86 | 0.82 | 0.027 | 0.011 | 788.5 | 0.95 |
| Bil-GCN-Prior | 0.062 | 0.035 | 0.91 | 0.82 | 0.027 | 0.011 | 691.5 | 0.95 |
+
+Table 1: Average flow estimation accuracy for the baselines (Div, MLP and GCN) and our methods (Bil-MLP, Bil-GCN and Bil-GCN-Prior) using the Traffic and Power datasets. RMSE, MAE and MAPE are errors (the lower the better) and CORR is a correlation (the higher the better). Values of correlation for MLP and GCN using Traffic were undefined. Bil-GCN (ours) outperforms the best baseline for all the metrics, with up to $20\%$ lower RMSE than Div using Power.
+
+# 4.4 ANALYSIS OF REGULARIZERS
+
+Figure 3 illustrates the regularization function learned by Bil-MLP. We focus on Bil-MLP because it can be analyzed independently of the topology. Figures 3a-3c show scatter plots where the x and y axes represent the value of the regularizer and features, respectively. For Power, Bil-MLP captures the effect of resistance over flows (Fig. 3a). However, mostly high values of resistance are affected, which is why few points are visible and also explains the good results for Div. We did not find a significant correlation for other features, with the exception of reactance, which is related to resistance. For Traffic, the model learns how the number of lanes constrains the flow at a road segment (Fig. 3b). Results for speed limit are more surprising: $45\mathrm{mph}$ roads are less regularized (Fig. 3c). This is evidence that regularization mostly affects traffic bottlenecks in highways, which have few lanes but a $65\mathrm{mph}$ speed limit. To further investigate this result, we also show the regularizers over the Traffic topology in Figure 3d. High regularization overlaps with well-known congested areas in Los Angeles, CA (e.g., Highway 5, Southeast). These results are strong evidence that our methods are able to learn the physics of flows in road traffic and power networks.
+
+Figure 3: Edge regularizer learned by Bil-MLP vs. feature values for (a) resistance (Power), (b) number of lanes (Traffic), and (c) speed limit (Traffic); and (d) visualization of regularizers on the Traffic topology. Our model is able to learn the effect of the resistance for Power. In Traffic, a higher number of lanes is correlated with less regularization, and lower-speed roads (45 mph) are less regularized. The regularization is also correlated with congested areas in Los Angeles, CA.
+
+# 5 RELATED WORK
+
+Flow graphs are ubiquitous in engineering, biomedical, and social sciences. Two important properties of flow graphs are that their state space is defined by a graph topology and that their dynamics are governed by the physics (or logic) of the problem of interest. We refer to Bressan et al. (2014) for a unified characterization of the mathematical treatment of flow graphs. Notice that these studies do not address the flow inference problem, and their applications to real data are limited (Herrera et al., 2010; Work et al., 2010). Moreover, we focus on long-term flows (e.g., daily vehicle traffic flows) and not on the dynamics. This simplifies the equations of our model to the conservation law.
+
+Flow inference via divergence minimization was originally proposed by Jia et al. (2019). However, their work does not consider edge features and instead applies a single regularization parameter to the norm of the flow vector $\mathbf{f}$ in Equation 2. Our work leverages relevant edge features to learn the interplay between flow conservation and local predictions (priors). Thus, we generalize the formulation from Jia et al. (2019) to the case of a learnable regularization function $\mathcal{Q}(\mathcal{X};\Theta)$. Our experiments show that the proposed approach achieves superior results on two datasets.
+
+Flow optimization problems, such as min-cost flow, max-flow and multi-commodity flow, have a long history in computer science (Ahuja et al., 1988; Ford Jr & Fulkerson, 2015). These problems impose flow conservation as a hard constraint, requiring full knowledge of source and sink vertices and noiseless flow observations. Our approach relaxes these requirements by minimizing the flow divergence (see Equation 2). Moreover, our problem does not assume edge capacities and costs.
+
+The relationship between flow estimation and inverse problems is of particular interest due to the role played by regularization (Engl et al., 1996) in the solution of ill-posed problems. Recent work on inverse problems has also focused on learning to regularize based on data and even learning the forward operator as well—see Arridge et al. (2019) for a review. The use of the expression "learning the physics" is also popular in the context of the universal differential equation framework, which enables the incorporation of domain-knowledge from scientific models to machine learning (Raissi et al., 2019; Long et al., 2018; Rackauckas et al., 2020).
+
+Bilevel optimization in machine learning has been popularized due to its applications in hyperparameter optimization (Bengio, 2000; Larsen et al., 1996). In the last decade, deep learning has motivated novel approaches able to optimize millions of hyperparameters using gradient-based schemes (Maclaurin et al., 2015; Lorraine et al., 2020; Pedregosa, 2016). Our flow estimation algorithm is based on reverse-mode differentiation, which is a scalable approach for bilevel optimization (Franceschi et al., 2017; Domke, 2012; Maclaurin et al., 2015). Another application of bilevel optimization closely related to ours is meta-learning (Franceschi et al., 2018; Grefenstette et al., 2019).
+
+Our problem is also related to semi-supervised learning on graphs (Zhu et al., 2003; Belkin et al., 2006; Zhou et al., 2004), which is the inference of vertex labels given partial observations. These approaches can be applied to flow estimation via the line graph transformation (Jia et al., 2019). The duality between a recent approach for predicting vertex labels (Hallac et al., 2015) and min-cost flows was shown in Jung (2020). However, the same relation does not hold for flow estimation.
+
+Graph neural network models, which generalize deep learning to graph data, have been shown to outperform traditional semi-supervised learning methods in many tasks (Kipf & Welling, 2016; Hamilton et al., 2017; Velicković et al., 2018). These models have also been applied for traffic forecasting (Li et al., 2017; Yu et al., 2018; Yao et al., 2019). Different from our approach, traditional GNNs do not conserve flows. We show that our models outperform GNNs at flow prediction. Moreover, we also apply GNNs as a regularization function in our model.
+
+# 6 CONCLUSIONS
+
+We have introduced an approach for flow estimation on graphs by combining a conservation law and edge features. Our model learns the physics of flows from data by combining bilevel optimization and deep learning. Experiments using traffic and power networks have shown that the proposed model outperforms a set of baselines and learns interpretable physical properties of flow graphs.
+
+While we have focused on learning a diagonal regularization matrix, we want to apply our framework to the case of a full matrix. We are also interested in combining different edge measurements in order to learn more complex physical laws, such as those described by the fundamental diagram in the LWR model (Lighthill & Whitham, 1955; Daganzo, 1994; 1995; Garavello & Piccoli, 2006).
+
+# ACKNOWLEDGEMENTS
+
+Research partially funded by the grants NSF IIS #1817046 and DTRA #HDTRA1-19-1-0017.
+
+# REFERENCES
+
+Ravindra K Ahuja, Thomas L Magnanti, and James B Orlin. Network flows. 1988.
+Simon Arridge, Peter Maass, Ozan Oktem, and Carola-Bibiane Schonlieb. Solving inverse problems using data-driven models. Acta Numerica, 28:1-174, 2019.
+Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7(Nov):2399-2434, 2006.
+Yoshua Bengio. Gradient-based optimization of hyperparameters. Neural computation, 12(8):1889-1900, 2000.
+Léon Bottou and Olivier Bousquet. The tradeoffs of large scale learning. In Advances in neural information processing systems, pp. 161-168, 2008.
+Alberto Bressan, Sunčica Čanic, Mauro Garavello, Michael Herty, and Benedetto Piccoli. Flows on networks: recent results and perspectives. EMS Surveys in Mathematical Sciences, 1(1):47-111, 2014.
+Tom Brown, Jonas Hörsch, and David Schlachtberger. Pypsa: Python for power system analysis. arXiv preprint arXiv:1707.09913, 2017.
+Giuseppe Maria Coclite, Mauro Garavello, and Benedetto Piccoli. Traffic flow on a road network. SIAM Journal on Mathematical Analysis, 36(6):1862-1886, 2005.
+Benoit Colson, Patrice Marcotte, and Gilles Savard. An overview of bilevel optimization. Annals of operations research, 153(1):235-256, 2007.
+George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303-314, 1989.
+Carlos F Daganzo. The cell transmission model: A dynamic representation of highway traffic consistent with the hydrodynamic theory. Transportation Research Part B: Methodological, 28(4): 269-287, 1994.
+Carlos F Daganzo. The cell transmission model, part ii: network traffic. Transportation Research Part B: Methodological, 29(2):79-93, 1995.
+Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pp. 3844-3852, 2016.
+Justin Domke. Generic methods for optimization-based modeling. In Artificial Intelligence and Statistics, pp. 318-326, 2012.
+Florian Dörfler, John W Simpson-Porco, and Francesco Bullo. Electrical networks and algebraic graph theory: Models, properties, and applications. Proceedings of the IEEE, 106(5):977-1005, 2018.
+Heinz Werner Engl, Martin Hanke, and Andreas Neubauer. Regularization of inverse problems, volume 375. Springer Science & Business Media, 1996.
+Lester Randolph Ford Jr and Delbert Ray Fulkerson. *Flows in networks*. Princeton university press, 2015.
+L Franceschi, M Donini, P Frasconi, and M Pontil. Forward and reverse gradient-based hyperparameter optimization. In ICML, volume 70, pp. 1165-1173. JMLR, 2017.
+L Franceschi, P Frasconi, S Salzo, R Grazzi, and M Pontil. Bilevel programming for hyperparameter optimization and meta-learning. In ICML, volume 80, pp. 1563-1572. PMLR (Proceedings of Machine Learning Research), 2018.
+
+M. Garavello and B. Piccoli. Traffic Flow on Networks: Conservation Laws Model. AIMS series on applied mathematics. American Institute of Mathematical Sciences, 2006.
+Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, and Soumith Chintala. Generalized inner loop meta-learning. arXiv preprint arXiv:1910.01727, 2019.
+David Hallac, Jure Leskovec, and Stephen Boyd. Network lasso: Clustering and optimization in large graphs. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pp. 387-396, 2015.
+Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pp. 1024-1034, 2017.
+David K Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129-150, 2011.
+Jozef Hanc, Slavomir Tuleja, and Martina Hancova. Symmetries and conservation laws: Consequences of noether's theorem. American Journal of Physics, 72(4):428-435, 2004.
+Juan C Herrera, Daniel B Work, Ryan Herring, Xuegang Jeff Ban, Quinn Jacobson, and Alexandre M Bayen. Evaluation of traffic data obtained via GPS-enabled mobile phones: The mobile century field experiment. *Transportation Research Part C: Emerging Technologies*, 18(4):568-583, 2010.
+Jonas Hörsch, Fabian Hofmann, David Schlachtberger, and Tom Brown. Pypsa-eur: An open optimisation model of the european transmission system. Energy Strategy Reviews, 22:207-215, 2018.
+Junteng Jia, Michael T. Schaub, Santiago Segarra, and Austin R. Benson. Graph-based semi-supervised and active learning for edge flows. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 761-771, 2019.
+Alexander Jung. On the duality between network flows and network lasso. IEEE Signal Processing Letters, 2020.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
+Jan Larsen, Lars Kai Hansen, Claus Svarer, and M Ohlsson. Design and regularization of neural networks: the optimal use of a validation set. In Neural Networks for Signal Processing VI. Proceedings of the 1996 IEEE Signal Processing Society Workshop, pp. 62-71. IEEE, 1996.
+Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. arXiv preprint arXiv:1707.01926, 2017.
+Michael James Lighthill and Gerald Beresford Whitham. On kinematic waves ii. a theory of traffic flow on long crowded roads. Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences, 229(1178):317-345, 1955.
+Zichao Long, Yiping Lu, Xianzhong Ma, and Bin Dong. Pde-net: Learning pdes from data. In 35th International Conference on Machine Learning, ICML 2018, pp. 5067-5078. International Machine Learning Society (IMLS), 2018.
+Jonathan Lorraine, Paul Vicol, and David Duvenaud. Optimizing millions of hyperparameters by implicit differentiation. In International Conference on Artificial Intelligence and Statistics, pp. 1540-1552. PMLR, 2020.
+Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning, pp. 2113-2122, 2015.
+
+Fabian Pedregosa. Hyperparameter optimization with approximate gradient. In Proceedings of the 33rd International Conference on International Conference on Machine Learning-Volume 48, pp. 737-746, 2016.
+Christopher Rackauckas, Yingbo Ma, Julius Martensen, Collin Warner, Kirill Zubov, Rohit Supekar, Dominic Skinner, and Ali Ramadhan. Universal differential equations for scientific machine learning. arXiv preprint arXiv:2001.04385, 2020.
+Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686-707, 2019.
+Paul I Richards. Shock waves on the highway. Operations research, 4(1):42-51, 1956.
+Albert Tarantola. Inverse problem theory and methods for model parameter estimation, volume 89. SIAM, 2005.
+Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.
+Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, et al. Deep graph library: Towards efficient and scalable deep learning on graphs. arXiv preprint arXiv:1909.01315, 2019.
+Daniel B Work, Sébastien Blandin, Olli-Pekka Tossavainen, Benedetto Piccoli, and Alexandre M Bayen. A traffic model for velocity data assimilation. Applied Mathematics Research eXpress, 2010(1):1-35, 2010.
+Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2018.
+Huaxiu Yao, Xianfeng Tang, Hua Wei, Guanjie Zheng, and Zhenhui Li. Revisiting spatial-temporal similarity: A deep learning framework for traffic prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 5668-5675, 2019.
+Bing Yu, Haoteng Yin, and Zhanxing Zhu. Spatio-temporal graph convolutional networks: a deep learning framework for traffic forecasting. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pp. 3634-3640, 2018.
+Dengyong Zhou, Olivier Bousquet, Thomas N Lal, Jason Weston, and Bernhard Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems, pp. 321-328, 2004.
+Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine learning (ICML-03), pp. 912-919, 2003.
+
+| Symbol | Meaning |
| G | Flow graph |
| V | Set of vertices in G |
| n | Size of V |
| E | Set of edges in G |
| m | Size of E |
| E'⊆ E | Set of observed edges |
| m' | Size of E' |
| X∈Rm×d | Edge feature matrix |
| X[e]∈Rd | Features of edge e |
| f∈Rm | Complete flow vector |
| fe∈R | Flow for edge e |
| ˆf∈Rm' | Observed flow vector |
| ˆfe∈R | Observed flow for edge e |
| B | Incidence matrix of G |
| Φ(f, X; f(0); Θ)∈R+ | Regularization function |
| f(0)∈Rm | Flow prior |
| Θ | Regularization parameters |
| Ω | Domain of f |
| x∈Rm-m' | Estimated vector of missing flows |
| ˆx∈Rm-m' | True vector of missing flows |
| x(0)∈Rm-m' | Prior for missing flows |
| H∈Rm×(m-m') | Map from x to f |
| f̃∈Rm | Vector with observed flows or 0 otherwise |
| Q(X; Θ)∈R(m-m')×(m-m') | Regularization function (diagonal matrix) |
| K | Number of folds in cross-validation |
| T | Number of outer iterations for Algorithm 1 |
| J | Number of inner iterations for Algorithm 1 |
| α | Outer learning rate for Algorithm 1 |
| β | Inner learning rate for Algorithm 1 |
| Ψ(x, Θ) | Outer objective |
| ΥΘ(x) | Inner objective |
| Γj(xk,j-1, Θi-1) | One step of SGD |
| ←Θ | Estimate of ∇ΘΨ(x, Θi-1) |
| Hk | Matrix H for fold k |
| Qk | Matrix Q for fold k |
| f̃k | Vector f̃ for fold k |
+
+Table 2: Table of the main symbols used in this paper.
+
+# A TABLE OF SYMBOLS
+
+Table 2 lists the main symbols used in our paper.
+
+# B BILEVEL OPTIMIZATION WITH GRAPH NEURAL NETWORKS
+
+This section is an extension of Section 3.3. Here, we consider the case where $\mathcal{Q}(\mathcal{X};\Theta)$ is a GNN:
+
+$$
+\mathcal{Q}(\mathcal{X}; \Theta) = \operatorname{diag}\left(GNN(\mathcal{X}, \Theta, \mathcal{G})\right) \tag{8}
+$$
+
+For instance, we apply a 2-layer spectral Graph Convolutional Network (GCN) with Chebyshev convolutions (Defferrard et al., 2016; Kipf & Welling, 2016; Hammond et al., 2011):
+
+$$
+\mathcal{Q}(\mathcal{X}; \Theta) = \operatorname{diag}\left(\mathrm{ReLU}\left(\sum_{z'=1}^{Z'} T_{z'}(\widetilde{L})\, \mathrm{ReLU}\left(\sum_{z=1}^{Z} T_{z}(\widetilde{L})\, \mathcal{X} W_{z}^{(1)}\right) W_{z'}^{(2)}\right)\right) \tag{9}
+$$
+
+where $\widetilde{L} = (2/\lambda_{max})L - I$, $L$ is the normalized Laplacian of the undirected version of the line graph $\mathcal{G}'$ of $\mathcal{G}$, $\lambda_{max}$ is the largest eigenvalue of $L$, $T_z(\widetilde{L})$ is the Chebyshev polynomial of $\widetilde{L}$ with order $z$, and $W_z^{(i)}$ is the matrix of learnable weights for the $z$-th order polynomial at layer $i$. In a line graph, each vertex represents an edge of the undirected version of $\mathcal{G}$ and two vertices are connected if their corresponding edges in $\mathcal{G}$ are adjacent. Moreover, $L = I - D^{-1/2}AD^{-1/2}$, where $A$ and $D$ are the adjacency and degree matrices of $\mathcal{G}'$. Chebyshev polynomials are defined recursively, with $T_z(y) = 2yT_{z-1}(y) - T_{z-2}(y)$, $T_0(y) = 1$, and $T_1(y) = y$.
+
+In our experiments, we compare GCN against MLP regularization functions. We have also applied the more popular non-spectral graph convolutional operator (Kipf & Welling, 2016) but preliminary results have shown that the Chebyshev operator achieves better performance in flow estimation.
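The rescaled Laplacian and the Chebyshev recursion above can be sketched as follows (a dense NumPy illustration, not the paper's implementation):

```python
import numpy as np

def cheb_polynomials(L, lam_max, Z):
    """Chebyshev polynomials T_1..T_Z of the rescaled Laplacian
    L~ = (2/lambda_max) L - I, via T_z(y) = 2 y T_{z-1}(y) - T_{z-2}(y)
    with T_0 = I and T_1 = L~."""
    n = L.shape[0]
    Lt = (2.0 / lam_max) * L - np.eye(n)
    Ts = [np.eye(n), Lt]                  # T_0, T_1
    for _ in range(2, Z + 1):
        Ts.append(2.0 * Lt @ Ts[-1] - Ts[-2])
    return Ts[1:Z + 1]
```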
+
+# C EXTENDED EXPERIMENTAL SECTION
+
+This section is an extension of Section 4.
+
+# C.1 MORE DETAILS ON DATASETS
+
+Traffic: Flow data was collected from Caltrans (the California Department of Transportation) PeMS (Performance Measurement System).1 Sensors are placed at major highways in the state. We use sensor geo-locations and other attributes to approximately match them to a compressed version of the road network extracted from Openstreetmap.2 The compression merges any sequence of segments without a branch, as these extra edges would not affect the flow estimation results. We emphasize that this dataset is not of as high quality as Power, due to possible sensor malfunction and matching of sensors to the wrong road segments. This explains why flow estimation is more challenging for Traffic. Figure 4 is a visualization of our traffic dataset with geographically (lat-long) located vertices and colors indicating light versus heavy traffic (compared to the average). The road segments in the graph (approximately) cover the LA County area. We show the map (from Openstreetmap) of the area covered by our road network in Figure 5.
+
+Power: We now provide more details on how we build the power dataset. PyPSA (Python for Power System Analysis) is a toolbox for the simulation of power systems (Brown et al., 2017). We applied the European transmission system (PyPSA-Eur), which covers the ENTSO-E area (Horsch et al., 2018), to generate a single network snapshot. Besides the PyPSA-Eur original set of edges, which we will refer to as line edges, we have added a set of bus edges. These extra edges allow us to represent power generation and consumption as edge flows. For the line edges, we cover the following PyPSA attributes (with their respective PyPSA identifiers): reactance (x), resistance (r), capacity (s_nom), whether the capacity s_nom can be extended (s_nom_extendable), the capital cost of extending s_nom (capital_cost), the length of the line (length), the number of parallel lines (num_parallel), and the optimized capacity (s_nom_opt). For bus edges, the only attribute is the control strategy (PQ, PV, or Slack). Notice that we create a single vector representation for both line and bus edges by adding an extra indicator position (line or bus). Moreover, categorical attributes (e.g., the control strategy) were represented using one-hot encoding. Figure 6 is a visualization of our power dataset with geographically (lat-long) located vertices and colors indicating high versus low power (compared to the average).
+
+# C.2 EVALUATION METRICS
+
+We apply the following evaluation metrics for flow estimation. Let $\mathbf{f}_{true}$ and $\mathbf{f}_{pred}$ be $m'$ -dimensional vectors with true and predicted values for missing flows associated to edges in $\mathcal{E} \setminus \mathcal{E}'$ .
+
+Correlation (Corr):
+
+$$
+\mathrm{cov}\left(\mathbf{f}_{pred}, \mathbf{f}_{true}\right) / \left(\sigma\left(\mathbf{f}_{pred}\right) \cdot \sigma\left(\mathbf{f}_{true}\right)\right)
+$$
+
+where $cov$ is the covariance and $\sigma$ is the standard deviation.
+
+
+
+
+Figure 4: Visualization of our traffic network with geo-located vertices. Edges in grey have missing flows, edges in red have traffic above the average and edges in blue have traffic below the average. Better seen in color. See Figure 5 for map of the area.
+Figure 5: Road map covered by the road network shown in Figure 4 (from Openstreetmap)
+
+Mean Absolute Percentage Error (MAPE):
+
+$$
+\frac{1}{m^{\prime}} \sum_{e \in \mathcal{E} \backslash \mathcal{E}^{\prime}} \left| \frac{(\mathbf{f}_{true})_{e} - (\mathbf{f}_{pred})_{e}}{(\mathbf{f}_{true})_{e}} \right|
+$$
+
+
+Figure 6: Visualization of our power network with geo-located vertices. Edges in red have power above the average and edges in blue have power below the average. Better seen in color.
+
+Mean Absolute Error (MAE):
+
+$$
+\frac {1}{m ^ {\prime}} \sum_ {e \in \mathcal {E} \backslash \mathcal {E} ^ {\prime}} | (\mathbf {f} _ {t r u e}) _ {e} - (\mathbf {f} _ {p r e d}) _ {e} |
+$$
+
+Root Mean Squared Error (RMSE):
+
+$$
+\sqrt {\frac {1}{m ^ {\prime}} \sum_ {e \in \mathcal {E} \backslash \mathcal {E} ^ {\prime}} [ (\mathbf {f} _ {t r u e}) _ {e} - (\mathbf {f} _ {p r e d}) _ {e} ] ^ {2}}
+$$
+
+Divergence (Div):
+
+$$
+\sum_ {v} (\sum_ {u} \mathbf {f} _ {(u, v)} - \sum_ {u} \mathbf {f} _ {(v, u)}) ^ {2}
+$$
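For concreteness, the five metrics above can be sketched in NumPy. This is an illustrative sketch, not the authors' code; in particular, `divergence` assumes a toy representation of the flows as a dict from directed edges $(u, v)$ to flow values.

```python
import numpy as np

def corr(f_pred, f_true):
    # Pearson correlation: cov / (std_pred * std_true), with matching ddof
    return np.cov(f_pred, f_true)[0, 1] / (np.std(f_pred, ddof=1) * np.std(f_true, ddof=1))

def mape(f_pred, f_true):
    # mean absolute percentage error over the missing flows
    return np.mean(np.abs((f_true - f_pred) / f_true))

def mae(f_pred, f_true):
    return np.mean(np.abs(f_true - f_pred))

def rmse(f_pred, f_true):
    return np.sqrt(np.mean((f_true - f_pred) ** 2))

def divergence(flows):
    # flows: dict mapping directed edge (u, v) -> flow value;
    # sums the squared net imbalance (inflow - outflow) at each vertex
    nodes = {u for u, v in flows} | {v for u, v in flows}
    div = 0.0
    for w in nodes:
        inflow = sum(f for (u, v), f in flows.items() if v == w)
        outflow = sum(f for (u, v), f in flows.items() if u == w)
        div += (inflow - outflow) ** 2
    return div
```

A perfectly conserved flow (e.g., a single cycle carrying constant flow) has zero divergence, which is the property Min-Div optimizes directly.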
+
+# C.3 MORE EXPERIMENTAL SETTINGS
+
+Train/test splits: We report results of a 10-fold cross-validation based on the set of labeled flows. Moreover, we use $10\%$ of training flows for validation.
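A sketch of this split procedure, assuming a flat index set of labeled flows; the fold construction here is illustrative, not the authors' exact code:

```python
import numpy as np

def ten_fold_splits(n_labeled, seed=0):
    # 10-fold CV over the labeled flows; 10% of each training fold
    # is held out for validation
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_labeled)
    folds = np.array_split(idx, 10)
    for k in range(10):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(10) if j != k])
        n_val = len(train) // 10
        yield train[n_val:], train[:n_val], test
```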
+
+Implementation: We have implemented Algorithm 1 using PyTorch, CUDA, and Higher (Grefenstette et al., 2019), a meta-learning framework that greatly facilitates the implementation of bilevel optimization algorithms by implicitly performing the reverse iterations for a list of optimization algorithms, including SGD. Moreover, our GCN implementation is based on the Deep Graph Library (DGL) (Wang et al., 2019).
+
+Hardware: We ran our experiments on a single machine with 4 NVIDIA GeForce RTX 2080 GPUs (each with 8 GB of memory) and 32 Intel Xeon CPUs (2.10 GHz), with 128 GB of RAM.
+
+Hyperparameter settings: We selected the hyperparameters based on RMSE for each method using grid search, with the learning rate over $\{10^0, 10^{-1}, 10^{-2}, 10^{-3}\}$ and the number of nodes in the hidden layer over $\{4, 8, 16\}$. The total number of iterations was set to 3000 for Min-Div and 5000 for MLP and GCN, all with early stopping on convergence after 10 iterations. For our methods (both based on Algorithm 1), we set $T = 10$, $J = 300$, $\alpha = 10^{-2}$, $\beta = 10^{-2}$ and $K = 10$ in all experiments.
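The grid search described above can be sketched as follows; `evaluate` is a hypothetical callback returning the validation RMSE for a configuration, standing in for a full training run.

```python
from itertools import product

def grid_search(evaluate, lrs=(1e0, 1e-1, 1e-2, 1e-3), hidden_sizes=(4, 8, 16)):
    # evaluate(lr, hidden) -> validation RMSE; pick the config minimizing it
    best = min(product(lrs, hidden_sizes), key=lambda cfg: evaluate(*cfg))
    return {"lr": best[0], "hidden": best[1]}
```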
+
+# C.4 DIVERGENCE RESULTS
+
+Although the main goal of flow estimation is to minimize the flow prediction loss, we also evaluate how our methods and the baselines perform in terms of divergence (or flow conservation) in Table 3. As expected, MLP and GCN do not conserve the flows. However, interestingly, our methods (Bil-MLP and Bil-GCN) achieve higher flow conservation than Min-Div. This is due to the regularization parameter $\lambda$ , which is tuned based on a set of validation flows.
+
+| Method | Traffic | Power |
+| --- | --- | --- |
+| Min-Div | 2.94 | 2.45 |
+| MLP | 5.69 | 2.77 |
+| GCN | 5.71 | 2.80 |
+| Bil-MLP | 2.81 | 2.43 |
+| Bil-GCN | 2.83 | 2.43 |
+| Bil-GCN-Prior | 2.43 | 2.43 |
+
+Table 3: Divergence results.
+
+# D TRUE VS. PREDICTED FLOWS
+
+Figure 7 shows scatter plots with the true vs. predicted flows that are missing from Figure 2.
+
+
+Figure 7: Scatter plots with true (x-axis) and predicted (y-axis) flows for the remaining methods (beyond the ones shown in Figure 2): (a) MLP, Traffic; (b) Min-Div, Traffic; (c) Bil-MLP, Traffic; (d) Bil-GCN-Prior, Traffic; (e) MLP, Power; (f) GCN, Power; (g) Bil-MLP, Power; (h) Bil-GCN-Prior, Power.
+
+# D.1 VISUALIZATION OF REGULARIZER FOR POWER
+
+Figure 8 shows the regularizers over the Power network topology. As discussed in Section 4.4, the regularizer mostly affects a few top-resistance edges. For the remaining edges, the regularizer values are small. Notice that these high-resistance edges are associated with lines transmitting small amounts of power, as shown in Figure 6, and have a large impact on the overall flow estimation accuracy.
+
+
+Figure 8: Visualization of regularizers on the Power network topology. We highlight edges with large values of regularizer. Better seen in color.
+
+| Method | Traffic (Training) | Traffic (Test) | Power (Training) | Power (Test) |
+| --- | --- | --- | --- | --- |
+| Min-Div | 424.4 | 0.09 | 364.2 | 0.01 |
+| MLP | 21.95 | 0.10 | 12.32 | 0.01 |
+| GCN | 2.43 | 0.09 | 0.77 | 0.01 |
+| Bil-MLP | 1860.2 | 0.08 | 369.7 | 0.01 |
+| Bil-GCN | 1870.1 | 0.09 | 346.7 | 0.01 |
+| Bil-MLP-Prior | 1886.1 | 0.01 | 334.1 | 0.01 |
+
+Table 4: Average training and test times (in seconds) for our methods and the baselines.
+
+# D.2 RUNNING TIME
+
+Table 4 shows the average running times (over the 10-fold cross-validation) of our methods and the baselines for the Traffic and Power datasets. We show both training and test times. The results show that our reverse-mode differentiation algorithm adds significant training-time overhead for Traffic, taking up to 4 times longer than Min-Div to finish. As described in Section 3.4, this is mainly due to the cost of computing and storing the inner-problem iterations. On the other hand, all the methods are efficient at test time. GCN converged quickly (due to early stopping) for both datasets. However, it achieved poor results for Power, as shown in Table 1, which is a sign of underfitting or overfitting. Notice that the reported results are the best in terms of RMSE.
\ No newline at end of file
diff --git a/combiningphysicsandmachinelearningfornetworkflowestimation/images.zip b/combiningphysicsandmachinelearningfornetworkflowestimation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..57d442d9b2e32ec700018ae9d97e5f1ba5037006
--- /dev/null
+++ b/combiningphysicsandmachinelearningfornetworkflowestimation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63f97ae5db0017c671267b32bcd4839545c416c355559db07fd0fb6f335db79f
+size 825691
diff --git a/combiningphysicsandmachinelearningfornetworkflowestimation/layout.json b/combiningphysicsandmachinelearningfornetworkflowestimation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..355361edf76b964aa454add736ebd38a5819b257
--- /dev/null
+++ b/combiningphysicsandmachinelearningfornetworkflowestimation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:779a6bb5333c1e65662cab6d44cac248d3cafe5899cedb8fdd18ec7c259a81fe
+size 664357
diff --git a/communicationinmultiagentreinforcementlearningintentionsharing/430994de-c7d4-4837-8818-0c79b3572f32_content_list.json b/communicationinmultiagentreinforcementlearningintentionsharing/430994de-c7d4-4837-8818-0c79b3572f32_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..61b52776e7dd2ebc319e784c03cbe9c93bf45903
--- /dev/null
+++ b/communicationinmultiagentreinforcementlearningintentionsharing/430994de-c7d4-4837-8818-0c79b3572f32_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9db590c4aa78bbac126119d5a12380ec28515de056bc8ef9e2781b5d1ffadae3
+size 74935
diff --git a/communicationinmultiagentreinforcementlearningintentionsharing/430994de-c7d4-4837-8818-0c79b3572f32_model.json b/communicationinmultiagentreinforcementlearningintentionsharing/430994de-c7d4-4837-8818-0c79b3572f32_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5dfe533a9dde84a3d4fd3b1699f9786ba879b88d
--- /dev/null
+++ b/communicationinmultiagentreinforcementlearningintentionsharing/430994de-c7d4-4837-8818-0c79b3572f32_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c7f053f30011f9d1f7bf9423b6b7ab3ff129e1eae1908c5ac3a62ee11dd383ad
+size 88568
diff --git a/communicationinmultiagentreinforcementlearningintentionsharing/430994de-c7d4-4837-8818-0c79b3572f32_origin.pdf b/communicationinmultiagentreinforcementlearningintentionsharing/430994de-c7d4-4837-8818-0c79b3572f32_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7e5cb82def913ac43eff83bc3908317c231bffde
--- /dev/null
+++ b/communicationinmultiagentreinforcementlearningintentionsharing/430994de-c7d4-4837-8818-0c79b3572f32_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:723690d97ff74224a522900d9a16bc7b1ca5b0002c3480dab81ea1c2237ab23f
+size 1780010
diff --git a/communicationinmultiagentreinforcementlearningintentionsharing/full.md b/communicationinmultiagentreinforcementlearningintentionsharing/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6261ed865bf4491ecaf96f3b26971d318f3a7f9e
--- /dev/null
+++ b/communicationinmultiagentreinforcementlearningintentionsharing/full.md
@@ -0,0 +1,292 @@
+# COMMUNICATION IN MULTI-AGENT REINFORCEMENT LEARNING: INTENTION SHARING
+
+Woojun Kim, Jongeui Park, Youngchul Sung*
+
+School of Electrical Engineering, KAIST
+
+Daejeon, South Korea
+
+{woojun.kim, jongeui.park, ycsung}@kaist.ac.kr
+
+# ABSTRACT
+
+Communication is one of the core components for learning coordinated behavior in multi-agent systems. In this paper, we propose a new communication scheme named Intention Sharing (IS) for multi-agent reinforcement learning in order to enhance the coordination among agents. In the proposed IS scheme, each agent generates an imagined trajectory by modeling the environment dynamics and other agents' actions. The imagined trajectory is a simulated future trajectory of each agent based on the learned model of the environment dynamics and other agents, and represents each agent's future action plan. Each agent compresses this imagined trajectory to generate its intention message for communication by applying an attention mechanism to learn the relative importance of the components in the imagined trajectory based on the received messages from other agents. Numerical results show that the proposed IS scheme significantly outperforms other communication schemes in multi-agent reinforcement learning.
+
+# 1 INTRODUCTION
+
+Reinforcement learning (RL) has achieved remarkable success in various complex control problems such as robotics and games (Gu et al. (2017); Mnih et al. (2013); Silver et al. (2017)). Multi-agent reinforcement learning (MARL) extends RL to multi-agent systems, which model many practical real-world problems such as connected cars and smart cities (Roscia et al. (2013)). There exist several distinct problems in MARL inherent to the nature of multi-agent learning (Gupta et al. (2017); Lowe et al. (2017)). One such problem is how to learn coordinated behavior among multiple agents, and various approaches to tackling this problem have been proposed (Jaques et al. (2018); Pesce & Montana (2019); Kim et al. (2020)). One promising approach to learning coordinated behavior is learning a communication protocol among multiple agents (Foerster et al. (2016); Sukhbaatar et al. (2016); Jiang & Lu (2018); Das et al. (2019)). A recent line of research on communication for MARL adopts end-to-end training based on differentiable communication channels (Foerster et al. (2016); Jiang & Lu (2018); Das et al. (2019)). That is, a message-generation network is defined at each agent and connected to other agents' policy or critic networks through communication channels. The message-generation network is then trained by using the gradient of other agents' policy or critic losses. Typically, the message-generation network is conditioned on the current observation or the hidden state of a recurrent network with observations as input. Thus, the trained message encodes the past and current observation information so as to minimize other agents' policy or critic loss. It has been shown that, due to this capability of sharing observation information, such communication schemes perform well in partially observable environments as compared to communication-free MARL algorithms such as independent learning, which is widely used in MARL.
+
+In this paper, we consider the following further question for communication in MARL:
+
+"How to harness the benefit of communication beyond sharing partial observation."
+
+We propose intention of each agent as the content of message to address the above question. Sharing intention using communication has been used in natural multi-agent systems like human society.
+
+For example, drivers use signal lights to inform other drivers of their intentions. A car driver may slow down if a driver in his or her left lane turns the right signal light on. In this case, the signal light encodes the driver's intention, which indicates the driver's future behavior, not the current or past observation such as the field of view. By sharing intentions using signal lights, drivers coordinate their driving with each other. In this paper, we formalize and propose a new communication scheme for MARL named Intention Sharing (IS) in order to go beyond existing observation-sharing schemes for communication in MARL. The proposed IS scheme allows each agent to share its intention with other agents in the form of an encoded imagined trajectory. That is, each agent generates an imagined trajectory by modeling the environment dynamics and other agents' actions. Then, each agent learns the relative importance of the components in the imagined trajectory based on the received messages from other agents by using an attention model. The output of the attention model is an encoded imagined trajectory capturing the intention of the agent and is used as the communication message. We evaluate the proposed IS scheme in several multi-agent environments requiring coordination among agents. Numerical results show that the proposed IS scheme significantly outperforms other existing communication schemes for MARL, including state-of-the-art algorithms such as ATOC and TarMAC.
+
+# 2 RELATED WORKS
+
+Under the asymmetry in learning resources between the training and execution phases, the framework of centralized training and decentralized execution (CTDE), which assumes the availability of all system information in the training phase and distributed policies in the execution phase, has been adopted in most recent MARL research (Lowe et al. (2017); Foerster et al. (2018); Iqbal & Sha (2018); Kim et al. (2020)). Under the framework of CTDE, learning a communication protocol has been considered to enhance performance in the decentralized execution phase for various multi-agent tasks (Foerster et al. (2016); Jiang & Lu (2018); Das et al. (2019)). For this purpose, Foerster et al. (2016) proposed Differentiable Inter-Agent Learning (DIAL). DIAL trains a message-generation network by connecting it to other agents' Q-networks and allowing gradient flow through communication channels in the training phase. Then, in the execution phase the messages are generated and passed to other agents through communication channels. Jiang & Lu (2018) proposed an attentional communication model named ATOC to learn when to communicate and how to combine information received from other agents through communication based on an attention mechanism. Das et al. (2019) proposed Targeted Multi-Agent Communication (TarMAC) to learn the message-generation network in order to produce different messages for different agents based on a signature-based attention model. The message-generation networks in the aforementioned algorithms are conditioned on the current observation or a hidden state of an LSTM. In partially observable environments, such messages, which encode past and current observations, are useful but do not capture any future information. In our approach, we use not only the current information but also future information to generate messages, and the weight between the current and future information is adaptively learned according to the environment. This yields further performance enhancement, as we will see in Section 5.
+
+In our approach, the encoded imagined trajectory capturing the intention of each agent is used as the communication message in MARL. Imagined trajectory was used in other problems too. Racanière et al. (2017) used imagined trajectory to augment it into the policy and critic for combining model-based and model-free approaches in single-agent RL. It is shown that arbitrary imagined trajectory (rolled-out trajectory by using a random policy or own policy) is useful for single-agent RL in terms of performance and data efficiency. Strouse et al. (2018) introduced information-regularizer to share or hide agent's intention to other agents for a multi-goal MARL setting in which some agents know the goal and other agents do not know the goal. By maximizing (or minimizing) the mutual information between the goal and action, an agent knowing the goal learns to share (or hide) its intention to other agents not knowing the goal in cooperative (or competitive) tasks. They showed that sharing intention is effective in the cooperative case.
+
+In addition to our approach, Theory of Mind (ToM) and Opponent Modeling (OM) use the notion of intention. Rabinowitz et al. (2018) proposed the Theory of Mind network (ToM-net) to predict other agents' behaviors by using meta-learning. Raileanu et al. (2018) proposed Self Other-Modeling (SOM) to infer other agents' goal in an online manner. Both ToM and OM take advantage of predicting other agents' behaviors capturing the intention. One difference between our approach and
+
+the aforementioned two methods is that we use communication to share the intention instead of inference. That is, the agents in our approach let other agents know their intention directly through communication, whereas the agents in ToM and OM must figure out other agents' intentions by themselves. Furthermore, the messages in our approach include future information obtained by rolling out the policy, whereas ToM and OM predict only the current or just the next time-step information.
+
+# 3 SYSTEM MODEL
+
+We consider a partially observable $N$ -agent Markov game (Littman (1994)) and assume that communication among agents is available. At time step $t$ , Agent $i$ observes its own observation $o_{t}^{i}$ , which is a part of the global environment state $s_t$ , and selects action $a_{t}^{i} \in \mathcal{A}^{i}$ and message $m_{t}^{i} \in \mathcal{M}^{i}$ based on its own observation $o_{t}^{i}$ and its own previous time step message $m_{t-1}^{i}$ plus the received messages from other agents, i.e., $m_{t-1} = (m_{t-1}^{1}, \dots, m_{t-1}^{N})$ . We assume that the message $m_{t}^{i}$ of Agent $i$ is sent to all other agents and available at other agents at the next time step, i.e., time step $t+1$ . The joint action $a_{t} = (a_{t}^{1}, \dots, a_{t}^{N})$ yields the next environment state $s_{t+1}$ and rewards $\{r_{t}^{i}\}_{i=1}^{N}$ according to the transition probability $\mathcal{T}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0,1]$ and the reward function $R^{i}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ , respectively, where $\mathcal{S}$ and $\mathcal{A} = \prod_{i=1}^{N} \mathcal{A}^{i}$ are the environment state space and the joint action space, respectively. The goal of Agent $i$ is to find the policy $\pi^{i}$ that maximizes its discounted return $R_{t}^{i} = \sum_{t'=t}^{\infty} \gamma^{t'-t} r_{t'}^{i}$ . Hence, the objective function of Agent $i$ is defined as $J_{i}(\pi^{i}) = \mathbb{E}_{\pi}[R_{0}^{i}]$ , where $\pi = (\pi^{1}, \dots, \pi^{N})$ and $\gamma \in [0,1]$ are the joint policy and the discounting factor, respectively.
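The discounted return $R_t^i$ above can be computed for a finite reward sequence with the usual one-pass backward recursion $R_t = r_t + \gamma R_{t+1}$; a minimal sketch:

```python
def discounted_return(rewards, gamma):
    # R_t = sum_{t' >= t} gamma^(t' - t) * r_{t'}, computed backwards in one pass
    R = 0.0
    returns = []
    for r in reversed(rewards):
        R = r + gamma * R
        returns.append(R)
    return returns[::-1]  # returns[t] == R_t
```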
+
+# 4 THE PROPOSED INTENTION SHARING SCHEME
+
+The key idea behind the IS scheme is that multiple agents communicate with other agents by sending their implicit future plans, which carry their intention. The received messages capturing the intention of other agents enable the agent to coordinate its action with those of other agents. We now describe the architecture of the proposed IS scheme. At time step $t$ , Agent $i$ selects an action $a_{t}^{i} \sim \pi^{i}(\cdot | o_{t}^{i}, m_{t-1})$ and a message $m_{t}^{i} = MGN^{i}(o_{t}^{i}, m_{t-1}, \pi^{i})$ based on its own observation $o_{t}^{i}$ and received messages $m_{t-1}$ , where $MGN^{i}$ is the message-generation network (MGN) of Agent $i$ . The MGN consists of two components: an imagined trajectory generation module (ITGM) and an attention module (AM). Each agent generates an imagined trajectory by using the ITGM and learns the importance of each imagined step in the imagined trajectory by using the AM. The output of the AM is an encoded imagined trajectory reflecting the importance of the imagined steps and is used as the communication message. The overall architecture of the proposed IS scheme is shown in Fig. 1. In the following, we describe the details of each module.
+
+# 4.1 IMAGINED TRAJECTORY GENERATION MODULE (ITGM)
+
+The role of ITGM is to produce the next imagined step. ITGM takes the received messages, observation, and action as input and yields the predicted next observation and predicted action as output. By stacking ITGMs, we generate an imagined trajectory, as shown in Fig. 1. For Agent $i$ at time step $t$ , we define an $H$ -length imagined trajectory as
+
+$$
+\tau^ {i} = \left(\tau_ {t} ^ {i}, \hat {\tau} _ {t + 1} ^ {i}, \dots , \hat {\tau} _ {t + H - 1} ^ {i}\right), \tag {1}
+$$
+
+where $\hat{\tau}_{t+k}^i = (\hat{o}_{t+k}^i, \hat{a}_{t+k}^i)$ is the imagined step at time step $t + k$ . Note that $\tau_t^i = (o_t^i, a_t^i)$ contains the true values of the observation and action, whereas the imagined steps other than $\tau_t^i$ are predicted values.
+
+ITGM consists of a roll-out policy and two predictors: Other agents' action predictor $f_{a}^{i}(o_{t}^{i})$ (we will call this predictor simply action predictor) and observation predictor $f_{o}^{i}(o_{t}^{i},a_{t}^{i},a_{t}^{-i})$ . First, we model the action predictor which takes the observation as input and produces other agents' predicted actions. The output of the action predictor is given by
+
+$$
+f_{a}^{i}\left(o_{t}^{i}\right) = \left(\hat{a}_{t}^{1}, \dots, \hat{a}_{t}^{i-1}, \hat{a}_{t}^{i+1}, \dots, \hat{a}_{t}^{N}\right) =: \hat{a}_{t}^{-i} \tag{2}
+$$
+
+Note that the action predictor can be trained by the previously proposed opponent modeling method (Rabinowitz et al. (2018); Raileanu et al. (2018)) and can take the received messages as input. Next,
+
+
+Figure 1: Overall architecture of the IS scheme from the perspective of Agent $i$
+
+we model the observation predictor $f_{o}^{i}(o_{t}^{i},a_{t}^{i},\hat{a}_{t}^{-i})$ , which is conditioned on the observation $o_{t}^{i}$ , own action $a_{t}^{i}$ , and the output of the action predictor $\hat{a}_t^{-i}$ . Here, we adopt a dynamics function that predicts the difference between the next observation and the current observation, i.e., $o_{t + 1}^i - o_t^i$ instead of the next observation $o_{t + 1}^i$ itself, as proposed in (Nagabandi et al. (2018)), in order to reduce model bias in the early stage of learning. Hence, the next observation can be written as
+
+$$
+\hat {o} _ {t + 1} ^ {i} = o _ {t} ^ {i} + f _ {o} ^ {i} \left(o _ {t} ^ {i}, a _ {t} ^ {i}, \hat {a} _ {t} ^ {- i}\right). \tag {3}
+$$
+
+By injecting the predicted next observation and the received messages into the roll-out policy in ITGM, we obtain the predicted next action $\hat{a}_{t + 1}^{i} = \pi^{i}(\hat{o}_{t + 1}^{i},m_{t - 1})$ . Here, we use the current policy as the roll-out policy. Combining $\hat{o}_{t + 1}^{i}$ and $\hat{a}_{t + 1}^{i}$ , we obtain the next imagined step at time step $t + 1$ , $\hat{\tau}_{t + 1}^{i} = (\hat{o}_{t + 1}^{i},\hat{a}_{t + 1}^{i})$ . In order to produce an $H$ -length imagined trajectory, we inject the output of ITGM and the received messages $m_{t - 1}$ into the input of ITGM recursively. Note that we use the received messages at time step $t$ , $m_{t - 1}$ , in every recursion of ITGM. $^{1}$
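Stacking ITGMs as described above amounts to the following roll-out loop; `policy`, `f_a`, and `f_o` are hypothetical stand-ins for the learned networks, and observations are reduced to scalars purely for illustration.

```python
def imagined_trajectory(o_t, a_t, policy, f_a, f_o, messages, H):
    """Roll out an H-length imagined trajectory (Eqs. (1)-(3)).

    policy(o, m) -> own next action; f_a(o) -> predicted other-agent actions;
    f_o(o, a, a_others) -> predicted observation *difference* (Eq. (3)).
    All three callables stand in for the learned networks.
    """
    traj = [(o_t, a_t)]  # the first step holds the true observation and action
    o, a = o_t, a_t
    for _ in range(H - 1):
        a_others = f_a(o)            # Eq. (2): other agents' predicted actions
        o = o + f_o(o, a, a_others)  # Eq. (3): delta-dynamics prediction
        a = policy(o, messages)      # roll-out with the current policy
        traj.append((o, a))
    return traj
```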
+
+# 4.2 ATTENTION MODULE (AM)
+
+Instead of the naive approach that uses the imagined trajectory $\left[\tau_{t},\dots ,\tau_{t + H - 1}\right]$ directly as the message, we apply an attention mechanism in order to learn the relative importance of the imagined steps and encode the imagined trajectory according to this relative importance. We adopt the scaled dot-product attention proposed in (Vaswani et al. (2017)) as our AM. Our AM consists of three components: query, keys, and values. The output of the AM is the weighted sum of the values, where the weight of each value is determined by the dot product of the query and the corresponding key. In our model, the query consists of the received messages, and the keys and values consist of the imagined trajectory. For Agent $i$ at time step $t$ , the query, keys and values are defined as
+
+$$
+q _ {t} ^ {i} = W _ {Q} ^ {i} m _ {t - 1} = W _ {Q} ^ {i} \left[ m _ {t - 1} ^ {1} \| m _ {t - 1} ^ {2} \| \dots \| m _ {t - 1} ^ {N - 1} \| m _ {t - 1} ^ {N} \right] \in \mathbb {R} ^ {d _ {k}} \tag {4}
+$$
+
+$$
+k _ {t} ^ {i} = \left[ W _ {K} ^ {i} \tau_ {t}, \dots , \underbrace {W _ {K} ^ {i} \tau_ {t + h - 1}} _ {=: k _ {t} ^ {i, h}}, \dots , W _ {K} ^ {i} \tau_ {t + H - 1} \right] \in \mathbb {R} ^ {H \times d _ {k}} \tag {5}
+$$
+
+$$
+v _ {t} ^ {i} = \left[ W _ {V} ^ {i} \tau_ {t}, \dots , \underbrace {W _ {V} ^ {i} \tau_ {t + h - 1}} _ {=: v _ {t} ^ {i, h}}, \dots , W _ {V} ^ {i} \tau_ {t + H - 1} \right] \in \mathbb {R} ^ {H \times d _ {m}}, \tag {6}
+$$
+
+where $W_{Q}^{i}\in \mathbb{R}^{d_{k}\times Nd_{m}}$ , $W_{K}^{i}\in \mathbb{R}^{d_{k}\times d_{\tau}}$ and $W_{V}^{i}\in \mathbb{R}^{d_{m}\times d_{\tau}}$ are learnable parameters and operation $\|$ denotes the concatenation of vectors. The output $m_t^i$ of the attention model, which is used for message, is the weighted sum of the values:
+
+$$
+m _ {t} ^ {i} = \sum_ {h = 1} ^ {H} \alpha_ {h} ^ {i} v _ {t} ^ {i, h}, \tag {7}
+$$
+
+where the weight vector $\alpha^i = (\alpha_1^i,\dots ,\alpha_H^i)$ is computed as
+
+$$
+\alpha^{i} = \operatorname{softmax} \left[ \frac{q_{t}^{iT} k_{t}^{i,1}}{\sqrt{d_{k}}}, \dots, \underbrace{\frac{q_{t}^{iT} k_{t}^{i,h}}{\sqrt{d_{k}}}}_{=: \alpha_{h}^{i}}, \dots, \frac{q_{t}^{iT} k_{t}^{i,H}}{\sqrt{d_{k}}} \right]. \tag{8}
+$$
+
+The weight of each value is computed by the dot product of the corresponding key and query. Since the projections of the imagined trajectory and the received messages are used for key and query, respectively, the weight can be interpreted as the relative importance of imagined step given the received messages. Note that $W_{Q}$ , $W_{K}$ and $W_{V}$ are updated through the gradients from the other agents.
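A minimal NumPy sketch of the AM defined by Eqs. (4)-(8), for a single agent with arbitrary (untrained) projection matrices; in the actual scheme these matrices are the learnable parameters $W_Q^i$, $W_K^i$, $W_V^i$.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_message(m_prev, tau, W_Q, W_K, W_V):
    """Attention module of Eqs. (4)-(8).

    m_prev: concatenated received messages, shape (N * d_m,)
    tau:    imagined trajectory, one row per imagined step, shape (H, d_tau)
    """
    d_k = W_Q.shape[0]
    q = W_Q @ m_prev                       # query,  Eq. (4), shape (d_k,)
    K = tau @ W_K.T                        # keys,   Eq. (5), shape (H, d_k)
    V = tau @ W_V.T                        # values, Eq. (6), shape (H, d_m)
    alpha = softmax(K @ q / np.sqrt(d_k))  # weights over imagined steps, Eq. (8)
    return alpha @ V                       # message m_t^i, Eq. (7), shape (d_m,)
```

Note that the output message has the same dimension $d_m$ regardless of the horizon $H$, which is what allows it to be fed back as a query at the next time step.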
+
+# 4.3 TRAINING
+
+We implement the proposed IS scheme on top of MADDPG (Lowe et al. (2017)), but it can be applied to other MARL algorithms. MADDPG is a well-known MARL algorithm and is briefly explained in Appendix A. In order to handle continuous state-action spaces, the actor, critic, observation predictor, and action predictor are parameterized by deep neural networks. For Agent $i$ , let $\theta_{\mu}^{i}, \theta_{Q}^{i}, \theta_{o}^{i}$ , and $\theta_{a}^{i}$ be the deep neural network parameters of the actor, critic, observation predictor, and action predictor, respectively. Let $W^{i} = (W_{Q}^{i}, W_{K}^{i}, W_{V}^{i})$ be the trainable parameters in the attention module of Agent $i$ . The centralized critic for Agent $i$ , $Q^{i}$ , is updated to minimize the following loss:
+
+$$
+L_{Q}\left(\theta_{Q}^{i}\right) = \mathbb{E}_{x, a, r^{i}, x^{\prime}}\left[\left(y^{i} - Q^{i}(x, a)\right)^{2}\right], \quad y^{i} = r^{i} + \gamma Q^{i-}\left(x^{\prime}, a^{\prime}\right)\Big|_{a^{\prime j} = \mu^{j-}\left(o^{\prime j}, m\right)}, \tag{9}
+$$
+
+where $Q^{i-}$ and $\mu^{i-}$ are the target Q-function and the target policy of Agent $i$ , parameterized by $\theta_{Q}^{i-}$ and $\theta_{\mu}^{i-}$ , respectively. The policy is updated using the following policy gradient:
+
+$$
+\nabla_ {\theta_ {\mu} ^ {i}} J \left(\theta_ {\mu} ^ {i}\right) = \mathbb {E} _ {x, a} \left[ \nabla_ {\theta_ {\mu} ^ {i}} \mu^ {i} \left(o ^ {i}, m\right) \nabla_ {a ^ {i}} Q ^ {i} \left(x, a\right) \mid_ {a ^ {i} = \mu^ {i} \left(o ^ {i}, m\right)} \right] \tag {10}
+$$
+
+Since the MGN is connected to the agent's own policy and other agents' policies, the attention module parameters $W^{i}$ are trained by gradient flow from all agents. The gradient of Agent $i$ 's attention module parameters is given by $\nabla_{W^i}J(W^i) =$
+
+$$
+\frac{1}{N} \sum_{j=1}^{N} \mathbb{E}_{\bar{o}^{i}, \bar{m}, x, a}\left[\nabla_{W^{i}} MGN(\tilde{m}^{i} | \bar{o}^{i}, \bar{m}) \nabla_{\tilde{m}^{i}} \mu^{j}\left(o^{j}, \tilde{m}^{i}, \tilde{m}^{-i}\right) \nabla_{a^{j}} Q^{j}(x, a)\Big|_{a^{j} = \mu^{j}\left(o^{j}, m\right)}\right], \tag{11}
+$$
+
+where $\overline{o}^i$ and $\overline{m}$ are the previous observation and received messages, respectively. The gradient of the attention module parameters is obtained by applying the chain rule to the policy gradient.
+
+Both the action predictor and the observation predictor are trained based on supervised learning and the loss functions for agent $i$ are given by
+
+$$
+L \left(\theta_ {a} ^ {i}\right) = \mathbb {E} _ {o ^ {i}, a} \left[ \left(f _ {\theta_ {a} ^ {i}} ^ {i} \left(o ^ {i}\right) - a ^ {- i}\right) ^ {2} \right] \tag {12}
+$$
+
+$$
+L \left(\theta_ {o} ^ {i}\right) = \mathbb {E} _ {o ^ {i}, a, o ^ {\prime i}} \left[ \left(\left(o ^ {\prime i} - o ^ {i}\right) - f _ {\theta_ {o} ^ {i}} ^ {i} \left(o ^ {i}, a ^ {i}, \hat {a} ^ {- i}\right)\right) ^ {2} \right]. \tag {13}
+$$
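The two supervised losses (Eqs. (12) and (13)) can be sketched as follows; note that the observation predictor is regressed onto the observation difference $o'^i - o^i$, matching Eq. (3). The predictor arguments here are hypothetical callables standing in for the parameterized networks.

```python
import numpy as np

def action_predictor_loss(f_a, obs, a_others):
    # Eq. (12): squared error between predicted and actual other-agent actions
    return np.mean((f_a(obs) - a_others) ** 2)

def observation_predictor_loss(f_o, obs, obs_next, a_own, a_others_hat):
    # Eq. (13): the predictor targets the observation *difference*, not obs_next
    return np.mean(((obs_next - obs) - f_o(obs, a_own, a_others_hat)) ** 2)
```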
+
+# 5 EXPERIMENT
+
+In order to evaluate the proposed algorithm and compare it with other communication schemes fairly, we implemented the existing baselines on top of the same MADDPG used for the proposed scheme.
+
+The considered baselines are as follows. 1) MADDPG (Lowe et al. (2017)): from this baseline we can assess the gain of introducing communication. 2) DIAL (Foerster et al. (2016)): we modified DIAL, which is based on Q-learning, to our setting by connecting the message-generation network to other agents' policies and allowing gradient flow through the communication channel. 3) TarMAC (Das et al. (2019)): we adopted the key concept of TarMAC, in which each agent sends targeted messages using a signature-based attention model. 4) Comm-OA: each agent's message consists of its own observation and action. 5) ATOC (Jiang & Lu (2018)): an attentional communication model which learns when communication is needed and how to combine the information of agents. We considered three multi-agent environments: predator-prey, cooperative navigation, and traffic junction, and we slightly modified the conventional environments to require more coordination among agents.
+
+
+(a) PP
+
+
+(b) CN
+Figure 2: Considered environments: (a) predator-prey (PP), (b) cooperative-navigation (CN), and (c) traffic-junction (TJ)
+
+
+(c) TJ
+
+# 5.1 ENVIRONMENTS
+
+Predator-prey (PP) The predator-prey environment is a standard task in multi-agent systems. We used a PP environment that consists of $N$ predators and $M$ fixed preys in a continuous state-action domain. We control the actions of the predators and the goal is to capture as many preys as possible in a given time. Each agent observes the positions of the predators and preys. When $C$ predators catch a prey simultaneously, the prey is captured and all predators get a shared reward $R_{1}$ . Whenever all the preys are captured, they are respawned and the shared reward value $R_{1}$ , initialized to one, increases by one in order to accelerate the capture speed within the given time. We simulated three cases: $(N = 2, C = 1)$ , $(N = 3, C = 1)$ , and $(N = 4, C = 2)$ , all with $M = 9$ preys, where the fixed positions of the preys are shown in Fig.2(a). In the cases of $(N = 2, C = 1)$ and $(N = 3, C = 1)$ , the initial positions of all predators are the same and randomly determined. Thus, the predators should learn not only how to capture preys but also how to spread out. In the case of $(N = 4, C = 2)$ , the initial positions of all predators are randomly determined independently. Thus, the predators should learn to capture preys in groups of two.
+
+Cooperative-navigation (CN) The goal of cooperative navigation, introduced in Lowe et al. (2017), is for $N$ agents to cover $L$ landmarks while avoiding collisions among the agents. We modified the original environment so that collisions occur more easily: we set $L = N$, increased the size of each agent, and assigned each agent a specific landmark to cover (i.e., each agent should cover the landmark of the same color in Fig. 2(b)). Each agent observes the positions of the agents and landmarks. The agents receive a shared reward $R_{1}$ based on the sum of the distances between each agent and its corresponding landmark at each time step, and a success reward $N^{\prime} \times R_{2}$, where $N^{\prime}$ is the number of covered landmarks. Agents that collide with other agents receive a negative reward $R_{3}$. We simulated the environment with $N = L = 3$, $R_{1} = 1 / 3$, $R_{2} = 1$ and $R_{3} = -5$.
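A per-step reward computation consistent with the description above might look as follows. The text does not spell out the sign convention of the distance term or the cover radius, so we assume the common convention that the summed distance enters as a penalty scaled by $R_1$, and that a landmark counts as covered within a small radius; both are assumptions of this sketch.

```python
import math

def cn_rewards(agent_pos, landmark_pos, colliding,
               r1=1/3, r2=1.0, r3=-5.0, cover_radius=0.1):
    """Per-step rewards for cooperative navigation (illustrative sketch).

    agent_pos / landmark_pos: lists of (x, y); agent i must cover landmark i.
    colliding: list of booleans, True if agent i collided this step.
    """
    dists = [math.dist(a, l) for a, l in zip(agent_pos, landmark_pos)]
    n_covered = sum(d < cover_radius for d in dists)      # N'
    shared = -r1 * sum(dists) + r2 * n_covered            # sign of R1 term assumed
    # shared reward plus individual collision penalty R3
    return [shared + (r3 if c else 0.0) for c in colliding]
```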
+
+Traffic-junction (TJ) We modified the traffic junction introduced in Sukhbaatar et al. (2016) to a continuous state-action domain. At the beginning of an episode, each agent is randomly located at a predefined initial position and assigned one of three routes: left, right, or straight, as seen in Fig. 2(c). The observation of each agent consists of the positions of all agents (without route information of other agents) and two one-hot vectors that encode the initial position and assigned route of the agent. The action of each agent is a real value in $(0,1)$, which indicates the distance to travel along the assigned route from the current position. The goal is to reach the destination as fast as possible while avoiding collisions with other agents. To achieve this goal, we designed a reward with three components. Each agent receives a success reward $R_{1}$ if it arrives at the destination without any collision with
+
+
+Figure 3: Performance for MADDPG (blue), DIAL (green), TarMAC (red), Comm-OA (purple), ATOC (cyan), and the proposed IS method (black): (a) PP $(N = 2)$, (b) PP $(N = 3)$, (c) PP $(N = 4)$, (d) CN $(N = 3)$, (e) TJ $(N = 3)$, and (f) TJ $(N = 4)$.
+
+other agents, a collision penalty $R_{2}$ if its position overlaps with that of another agent, and a time penalty $R_{3}$ to avoid traffic jams. When an agent arrives at the destination, it is assigned a new initial position and route. An episode ends when $T$ time steps elapse. We set $R_{1} = 20$, $R_{2} = -10$, and $R_{3} = -0.01\tau$, where $\tau$ is the total number of time steps after the agent is initialized.
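Putting the three components together, the per-agent, per-step reward can be sketched as below. For simplicity this sketch grants $R_1$ when the agent arrives without a collision at that step; whether a collision earlier in the trip also forfeits $R_1$ is not fully specified in the text, so treat that detail as an assumption.

```python
def tj_reward(arrived, collided, steps_since_init,
              r1=20.0, r2=-10.0, r3_rate=-0.01):
    """Per-step reward for one traffic-junction agent (illustrative sketch).

    arrived: agent reached its destination this step.
    collided: agent's position overlaps another agent's this step.
    steps_since_init: tau, time steps since the agent was (re)initialized.
    """
    reward = r3_rate * steps_since_init  # time penalty R3 = -0.01 * tau
    if collided:
        reward += r2                     # collision penalty
    elif arrived:
        reward += r1                     # success reward (collision-free arrival)
    return reward
```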
+
+# 5.2 RESULTS
+
+Fig. 3 shows the performance of the proposed IS scheme and the considered baselines on the PP, CN, and TJ environments. Figs. 3(a)-(d) show the learning curves of the algorithms on PP and CN, and Figs. 3(e)-(f) show the average return of the deterministic policy over 100 episodes, evaluated every 250000 time steps. All results are averaged over 10 different seeds. It is seen that Comm-OA performs similarly to MADDPG in the considered environments; since the received messages consist of other agents' observations and actions from the previous time step only, they add little beyond what MADDPG already uses. Unlike Comm-OA, DIAL, TarMAC, and ATOC outperform MADDPG in the considered environments except PP with $N = 4$, and this performance gain comes from learning a communication protocol. In PP with $N = 4$, four agents need to coordinate to spread out in groups of two to capture preys. Under this complicated coordination requirement, simply learning a communication protocol based on past and current information did not yield a benefit from communication. In contrast, the proposed IS scheme, which shares intention with other agents, achieved the required coordination even in this complicated environment.
+
+# 5.3 ANALYSIS
+
+Imagined trajectory The proposed IS scheme uses the encoded imagined trajectory as the message content. Each agent rolls out an imagined trajectory based on its own policy and its trained models, including the action predictor and the observation predictor. Since access to other agents' policies is not available, the true trajectory and the imagined trajectory can mismatch. In particular, the mismatch is large at the beginning of an episode because each agent has not yet received any messages from other agents (in this case, we inject a zero vector into the policy in place of the received messages). We expect the mismatch to gradually decrease as the episode progresses, and this can be interpreted as the procedure of coordination among agents. Fig. 5 shows the positions of all agents and each agent's imagined trajectory over time steps in one episode for predator-prey with $N = 3$ predators after the end of training, where the initial positions of the agents ($t = 0$) are at the bottom right of the map. Note that each agent estimates the future positions of other agents as well as its
+
+
+Figure 4: Performance of the proposed method with different lengths of the imagined trajectory $H$ and without the attention module: (a) PP $(N = 4)$ and (b) TJ $(N = 3)$.
+
+own future position due to the assumption of full observability. The first, second, and third rows of Fig. 5 show the imagined trajectories of all agents at Agent 1 (red), Agent 2 (green), and Agent 3 (blue), respectively. Note that the imagined trajectory of each agent represents its future plan in the environment. As seen in Fig. 5, at $t = 0$ the intention of both Agent 1 and Agent 3 is to move to the left to catch preys. At $t = 1$, all agents receive the messages from the other agents. It is observed that Agent 3 changes its future plan to catch preys around the center, while Agent 1 maintains its future plan. This procedure shows that coordination between Agent 1 and Agent 3 starts to occur. It is seen that as time goes on, each agent roughly predicts the other agents' future actions.
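The rollout procedure described above, one agent imagining $H$ steps using its policy, action predictor, and observation predictor under a fixed received message, can be sketched as follows. All callables here are stand-ins for the paper's trained networks; the scalar observations in the usage example are purely illustrative.

```python
def imagined_rollout(obs, msg, policy, action_predictor, obs_predictor, H):
    """Roll out an H-step imagined trajectory for one agent (sketch).

    policy(o, m)                  -> agent's own action
    action_predictor(o)           -> predicted actions of the other agents
    obs_predictor(o, a, a_others) -> predicted next observation
    Returns a list of H (observation, action) imagined steps.
    """
    traj = []
    a = policy(obs, msg)              # first step uses the true observation
    traj.append((obs, a))
    for _ in range(H - 1):
        a_others = action_predictor(obs)       # imagine other agents' actions
        obs = obs_predictor(obs, a, a_others)  # imagine the next observation
        a = policy(obs, msg)                   # own action, message held fixed
        traj.append((obs, a))
    return traj
```

With toy dynamics `obs' = obs + a` and policy `a = 0.5 * obs`, three imagined steps give observations 1.0, 1.5, 2.25.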
+
+We conducted experiments to examine the impact of the length of the imagined trajectory $H$. Fig. 4 shows the performance of the proposed method for different values of $H$. It is seen that the training speed is reduced for $H = 7$ as compared with $H = 3$ or $H = 5$. However, the final performance outperforms the baseline for all considered values of $H$.
+
+Attention In the proposed IS scheme, the imagined trajectory is encoded by the attention module to capture the importance of each component of the imagined trajectory. Recall that the message of Agent $i$ is expressed as $m_t^i = \sum_{h=1}^{H} \alpha_h^i v_t^{i,h}$, as seen in (7), where $\alpha_h^i$ denotes the importance of $v_t^{i,h}$, the encoded $h$-th imagined step. Note that previously proposed communication schemes are the special case corresponding to $\alpha^i = (1,0,\dots,0)$. In Fig. 5, the brightness of each circle is proportional to the attention weight. At time step $t = K$, where $K = 37$, $\alpha_2^1$, which corresponds to Agent 1 moving toward the prey in the bottom middle, is the highest. In addition, $\alpha_4^3$, which corresponds to Agent 3 moving toward the prey in the left middle, is the highest. Hence, an agent tends to send future information when it is near a prey. A similar tendency in the attention weights is also observed at time steps $t = K + 1$ and $t = K + 2$.
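The weighted-sum message $m_t^i = \sum_h \alpha_h^i v_t^{i,h}$ can be sketched with a minimal dot-product attention. In the paper the query is derived from the received messages and the AM has its own learned parameters $W^i$; here we use a plain dot-product score as a simplification.

```python
import numpy as np

def attention_message(encoded_steps, query):
    """Encode an imagined trajectory into a message m = sum_h alpha_h * v_h.

    encoded_steps: (H, d) array of encoded imagined steps v_h.
    query: (d,) query vector (derived from received messages in the paper).
    Returns (alpha, message): softmax weights (H,) and the message (d,).
    """
    scores = encoded_steps @ query                 # (H,) attention scores
    scores = scores - scores.max()                 # for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax weights
    return alpha, alpha @ encoded_steps            # weighted sum of steps
```

Setting `alpha` to `(1, 0, ..., 0)` recovers the current-information-only baseline mentioned above, and `(1/H, ..., 1/H)` recovers the averaging-layer ablation.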
+
+As aforementioned, the aim of the IS scheme is to communicate with other agents based on each agent's own future plan. How far into the future the important information lies depends on the environment and the task. In order to analyze this tendency in the importance of future plans, we averaged the attention weights over the trajectories on the fully observable PP environment with 3 agents and on a partially observable PP environment with 3 agents, in which each agent knows the locations of other agents only within a certain range. The results are summarized in Table 1. It is observed that the current information (time $k$) and the farthest future information (time $k + 4$) are mainly used as the message content in the fully observable case, whereas the current information and the information closest to the present (times $k$ and $k + 1$) are mainly used in the partially observable environment. This is because sharing observation information is more critical in the partially observable case than in the fully observable case. A key aspect of the proposed IS scheme is that it adaptively selects the most important steps as the message content depending on the environment by using the attention module.
+
+We conducted an ablation study for the attention module, and the result is shown in Fig. 4. We compared the proposed IS scheme with and without the attention module. We replaced the attention
+
+
+Figure 5: Imagined trajectories and attention weights of each agent on PP $(N = 3)$: 1st row - Agent 1 (red), 2nd row - Agent 2 (green), and 3rd row - Agent 3 (blue). Black squares, the crossed circle, and the other circles denote the preys, the current position, and the estimated future positions, respectively. The brightness of a circle is proportional to the attention weight. $K = 37$.
+
+module with an averaging layer, which is the special case corresponding to $\alpha^i = \left(\frac{1}{H},\dots ,\frac{1}{H}\right)$. Fig. 4 shows that the proposed IS scheme with the attention module yields better performance than the one without it, which demonstrates the necessity of the attention module. In the PP environment with 4 agents, the imagined trajectory alone without the attention module improves the training speed, while the final performance is similar to that of MADDPG. In the TJ environment with 3 agents, the imagined trajectory alone without the attention module improves both the final performance and the training speed.
+
+| Imagined steps | k | k+1 | k+2 | k+3 | k+4 |
+| --- | --- | --- | --- | --- | --- |
+| Fully observable PP (N=3) | 0.33 | 0.18 | 0.15 | 0.14 | 0.20 |
+| Partially observable PP (N=3) | 0.32 | 0.22 | 0.17 | 0.15 | 0.14 |
+
+Table 1: Averaged attention weight over the trajectory at time step $k$
+
+# 6 CONCLUSION
+
+In this paper, we proposed the IS scheme, a new communication protocol for MARL based on sharing intention among multiple agents. The message-generation network in the proposed IS scheme consists of the ITGM, which produces predicted future trajectories, and the AM, which learns the importance of the imagined steps based on the received messages. The message in the proposed scheme is an encoded imagined trajectory capturing the agent's intention, so the communication message includes future information as well as current information, with weights that are adaptively determined depending on the environment. We studied examples of imagined trajectories and attention weights and observed that the proposed IS scheme generates meaningful ones. Numerical results show that the proposed IS scheme outperforms other communication algorithms, including state-of-the-art algorithms. Furthermore, we expect that combining the key idea of the proposed IS scheme with other communication algorithms such as ATOC and TarMAC would yield even better performance.
+
+# 7 ACKNOWLEDGMENTS
+
+This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2017R1E1A1A03070788).
+
+# REFERENCES
+
+Abhishek Das, Théophile Gervet, Joshua Romoff, Dhruv Batra, Devi Parikh, Mike Rabbat, and Joelle Pineau. TarMAC: Targeted multi-agent communication. In International Conference on Machine Learning, pp. 1538-1546, 2019.
+Jakob Foerster, Ioannis Alexandros Assael, Nando De Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. In Advances in neural information processing systems, pp. 2137-2145, 2016.
+Jakob N Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon Whiteson. Counterfactual multi-agent policy gradients. In Thirty-second AAAI conference on artificial intelligence, 2018.
+Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In 2017 IEEE international conference on robotics and automation (ICRA), pp. 3389-3396. IEEE, 2017.
+Jayesh K Gupta, Maxim Egorov, and Mykel Kochenderfer. Cooperative multi-agent control using deep reinforcement learning. In International Conference on Autonomous Agents and Multiagent Systems, pp. 66-83. Springer, 2017.
+Shariq Iqbal and Fei Sha. Actor-attention-critic for multi-agent reinforcement learning. arXiv preprint arXiv:1810.02912, 2018.
+Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Caglar Gulcehre, Pedro A Ortega, DJ Strouse, Joel Z Leibo, and Nando De Freitas. Social influence as intrinsic motivation for multi-agent deep reinforcement learning. arXiv preprint arXiv:1810.08647, 2018.
+Jiechuan Jiang and Zongqing Lu. Learning attentional communication for multi-agent cooperation. In Advances in neural information processing systems, pp. 7254-7264, 2018.
+Woojun Kim, Whiyoung Jung, Myungsik Cho, and Youngchul Sung. A maximum mutual information framework for multi-agent reinforcement learning. arXiv preprint arXiv:2006.02732, 2020.
+Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Machine learning proceedings 1994, pp. 157-163. Elsevier, 1994.
+Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. In Advances in Neural Information Processing Systems, pp. 6379–6390, 2017.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
+Anusha Nagabandi, Gregory Kahn, Ronald S Fearing, and Sergey Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 7559-7566. IEEE, 2018.
+Emanuele Pesce and Giovanni Montana. Improving coordination in small-scale multi-agent deep reinforcement learning through memory-driven communication. arXiv preprint arXiv:1901.03887, 2019.
+Neil C Rabinowitz, Frank Perbet, H Francis Song, Chiyuan Zhang, SM Eslami, and Matthew Botvinick. Machine theory of mind. arXiv preprint arXiv:1802.07740, 2018.
+Sebastien Racanière, Théophane Weber, David Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomenech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, et al. Imagination-augmented agents for deep reinforcement learning. In Advances in neural information processing systems, pp. 5690-5701, 2017.
+Roberta Raileanu, Emily Denton, Arthur Szlam, and Rob Fergus. Modeling others using oneself in multi-agent reinforcement learning. arXiv preprint arXiv:1802.09640, 2018.
+
+Mariacristina Roscia, Michela Longo, and George Cristian Lazaroiu. Smart city by multi-agent systems. In 2013 International Conference on Renewable Energy Research and Applications (ICRERA), pp. 371-376. IEEE, 2013.
+David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354-359, 2017.
+DJ Strouse, Max Kleiman-Weiner, Josh Tenenbaum, Matt Botvinick, and David J Schwab. Learning to share and hide intentions using information regularization. In Advances in Neural Information Processing Systems, pp. 10249–10259, 2018.
+Sainbayar Sukhbaatar, Rob Fergus, et al. Learning multiagent communication with backpropagation. In Advances in neural information processing systems, pp. 2244-2252, 2016.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
+
+# A MULTI-AGENT DEEP DETERMINISTIC POLICY GRADIENT (MADDPG)
+
+MADDPG is an extended version of DDPG for multi-agent systems under the framework of CTDE (Lowe et al. (2017)). Each agent $i$ has a deterministic policy $a_{t}^{i} = \mu_{\theta_{\mu}}^{i}(o_{t}^{i})$ conditioned on its own observation $o_{t}^{i}$ and a centralized critic $Q_{\theta_Q}^i (x,a) = \mathbb{E}[R_t^i |x_t = x,a_t = a]$ conditioned on the joint action $a_{t}$ and state information $x_{t}$. Here, $x_{t}$ can be the state $s_t$ or the set of observations $(o_t^1,\dots ,o_t^N)$. The centralized critic is trained by minimizing the following loss:
+
+$$
+L_{Q}\left(\theta_{Q}\right) = \mathbb{E}_{x, a, r^{i}, x^{\prime}} \left[ \left(y^{i} - Q_{\theta_{Q}}^{i}(x, a)\right)^{2} \right], \quad y^{i} = r^{i} + \gamma Q_{\theta_{Q}^{-}}^{i}\left(x^{\prime}, a^{\prime}\right) \Big|_{a^{j\prime} = \mu^{j-}(o^{j\prime})}, \tag{14}
+$$
+
+where $\theta_{Q}^{-}$ is the parameter of target Q-function and $\mu^{i - }$ is the target policy of Agent $i$ . The policy is trained by Deterministic Policy Gradient (DPG), and the gradient of the objective with respect to the policy parameter $\theta_{\mu^i}$ is given by
+
+$$
+\nabla_ {\theta_ {\mu^ {i}}} J \left(\mu^ {i}\right) = \mathbb {E} _ {x, a} \left[ \nabla_ {\theta_ {\mu^ {i}}} \mu^ {i} \left(o ^ {i}\right) \nabla_ {a ^ {i}} Q _ {\theta_ {Q}} ^ {i} (x, a) \mid_ {a ^ {i} = \mu^ {i} \left(o ^ {i}\right)} \right]. \tag {15}
+$$
+
+# B TRAINING DETAILS AND HYPERPARAMETERS
+
+Table 2: Hyperparameters of all algorithms
+
+| | MADDPG | TARMAC | DIAL | ATOC | IS |
+| --- | --- | --- | --- | --- | --- |
+| REPLAY BUFFER SIZE | $2 \times 10^5$ | $2 \times 10^5$ | $2 \times 10^5$ | $2 \times 10^5$ | $2 \times 10^5$ |
+| DISCOUNT FACTOR | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
+| MINI-BATCH SIZE | 128 | 128 | 128 | 128 | 128 |
+| OPTIMIZER | ADAM | ADAM | ADAM | ADAM | ADAM |
+| LEARNING RATE | 0.0005 | 0.0005 | 0.0005 | 0.0005 | 0.0005 |
+| NUMBER OF HIDDEN LAYERS (ALL NETWORKS) | 2 | 2 | 2 | 2 | 2 |
+| NUMBER OF HIDDEN UNITS PER LAYER | 128 | 128 | 128 | 128 | 128 |
+| ACTIVATION FUNCTION FOR HIDDEN LAYER | RELU | RELU | RELU | RELU | RELU |
+| MESSAGE DIMENSION ON PP | - | 5 | 5 | 5 | 5 |
+| MESSAGE DIMENSION ON CN | - | 3 | 3 | 3 | 3 |
+| MESSAGE DIMENSION ON TJ | - | 3 | 3 | 3 | 3 |
+| ATTENTION DIMENSION ON PP | - | 5 | - | 5 | 5 |
+| ATTENTION DIMENSION ON CN | - | 3 | - | 3 | 3 |
+| ATTENTION DIMENSION ON TJ | - | 3 | - | 3 | 3 |
+| IMAGINED TRAJECTORY LENGTH H | - | - | - | - | 5 |
+
+# C ADDITIONAL ABLATION STUDY
+
+We conducted an additional experiment to examine whether the performance improvement is gained from sharing intention or merely from having a prediction of the future. We compared the proposed IS scheme with MADDPG-p, in which each agent does not communicate but uses its own imagined trajectory as an additional input. Fig. 6 shows that the proposed IS scheme outperforms MADDPG-p. Thus, sharing intention, which is the core idea of this paper, is more important than merely having a prediction of the future.
+
+
+Figure 6: Performance for MADDPG (blue), MADDPG-p (orange), and the proposed IS method (black): (a) PP $(N = 2)$ and (b) PP $(N = 3)$.
+
+# D PSEUDO CODE
+
+Algorithm 1 Intention Sharing (IS) Communication Scheme
+Initialize parameters $\theta_{\mu}^{i},\theta_{Q}^{i},\theta_{\mu}^{i - },\theta_{Q}^{i - },\theta_{o}^{i},\theta_{a}^{i},W^{i},\forall i\in \{1,\dots ,N\}$
+for episode $= 1,2,\dots$ do
+&nbsp;&nbsp; Initialize state $s_1$ and messages $m_0 = (\overrightarrow{0},\dots ,\overrightarrow{0})$; each agent observes $o_1^i$
+&nbsp;&nbsp; for $t \leq T$ and $s_t\neq$ terminal do
+&nbsp;&nbsp;&nbsp;&nbsp; Each agent receives the messages $m_{t - 1} = (m_{t - 1}^{1},\dots ,m_{t - 1}^{N})$
+&nbsp;&nbsp;&nbsp;&nbsp; Each agent $i$ selects action $a_{t}^{i}\sim \pi^{i}(\cdot |o_{t}^{i},m_{t - 1})$
+&nbsp;&nbsp;&nbsp;&nbsp; Execute $a_{t}$; each agent $i$ receives $r_t$ and $o_{t + 1}^i$
+&nbsp;&nbsp;&nbsp;&nbsp; for $h = 1,2,\dots ,H$ do
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Predict other agents' actions $\hat{a}_{t + h - 1}^{-i}$ from the action predictor $f_{a}^{i}$
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Generate $\hat{o}_{t + h}^{i}$ from the observation predictor $f_{o}^{i}(o_{t + h - 1}^{i},\hat{a}_{t + h - 1}^{i},\hat{a}_{t + h - 1}^{-i})$
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Generate $\hat{a}_{t + h}^{i}\sim \pi^{i}(\cdot |\hat{o}_{t + h}^{i},m_{t - 1})$
+&nbsp;&nbsp;&nbsp;&nbsp; end for
+&nbsp;&nbsp;&nbsp;&nbsp; Each agent generates the message $m_t^i$ by injecting $\tau^i = (\tau_t^i,\hat{\tau}_{t + 1}^i,\dots ,\hat{\tau}_{t + H - 1}^i)$ into the Attention Module (AM)
+&nbsp;&nbsp;&nbsp;&nbsp; Store transitions in $D$
+&nbsp;&nbsp; end for
+&nbsp;&nbsp; for each gradient step do
+&nbsp;&nbsp;&nbsp;&nbsp; Update $\theta_Q^i$ and $(\theta_o^i,\theta_a^i)$ by minimizing the losses (9) and (12)
+&nbsp;&nbsp;&nbsp;&nbsp; Update $\theta_{\mu}^{i}$ and $W^{i}$ based on the gradients (10) and (11)
+&nbsp;&nbsp; end for
+&nbsp;&nbsp; Update $\theta_{\mu}^{i - },\theta_{Q}^{i - }$ using the moving average method
+end for
\ No newline at end of file
diff --git a/communicationinmultiagentreinforcementlearningintentionsharing/images.zip b/communicationinmultiagentreinforcementlearningintentionsharing/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0e67f8c926385bffbd290d1fc4adec56c12878d0
--- /dev/null
+++ b/communicationinmultiagentreinforcementlearningintentionsharing/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3bbcb10e3449bdd63958ee004d8a74bd6a52e49d668eb5d6e612bf1ecb7328ba
+size 468356
diff --git a/communicationinmultiagentreinforcementlearningintentionsharing/layout.json b/communicationinmultiagentreinforcementlearningintentionsharing/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..1a2b014eb8cfb8a95a72bdf561723b309bfdf0d5
--- /dev/null
+++ b/communicationinmultiagentreinforcementlearningintentionsharing/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b64bfb5b50b47a385cfcb5127c4d0aa3bfc1ce01504aa0db043ed07cac992163
+size 457909
diff --git a/compofacompoundonceforallnetworksforfastermultiplatformdeployment/347398fe-6f33-4332-8fae-4adfa9262732_content_list.json b/compofacompoundonceforallnetworksforfastermultiplatformdeployment/347398fe-6f33-4332-8fae-4adfa9262732_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9889342ccb6e65dfdfb4abc14b798575fead8885
--- /dev/null
+++ b/compofacompoundonceforallnetworksforfastermultiplatformdeployment/347398fe-6f33-4332-8fae-4adfa9262732_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c57ef56170406a6aec5a55401f92473335aa9126e7c2bfcc7792ce276bbe58d
+size 70930
diff --git a/compofacompoundonceforallnetworksforfastermultiplatformdeployment/347398fe-6f33-4332-8fae-4adfa9262732_model.json b/compofacompoundonceforallnetworksforfastermultiplatformdeployment/347398fe-6f33-4332-8fae-4adfa9262732_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..3eb81b22a2bd31df4575345f39bb82a5e486c3b6
--- /dev/null
+++ b/compofacompoundonceforallnetworksforfastermultiplatformdeployment/347398fe-6f33-4332-8fae-4adfa9262732_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97c9ecb09b9d912c4e31320a7d225b831fc190a0d744b7ea312119c69f90c6c1
+size 84193
diff --git a/compofacompoundonceforallnetworksforfastermultiplatformdeployment/347398fe-6f33-4332-8fae-4adfa9262732_origin.pdf b/compofacompoundonceforallnetworksforfastermultiplatformdeployment/347398fe-6f33-4332-8fae-4adfa9262732_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8f0537a56626f62c4b7999912cd201096c2af3ea
--- /dev/null
+++ b/compofacompoundonceforallnetworksforfastermultiplatformdeployment/347398fe-6f33-4332-8fae-4adfa9262732_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3aaddc686d46300332d03ba874348a5bccd4ea835a6b19c95243b237cbfe5961
+size 974392
diff --git a/compofacompoundonceforallnetworksforfastermultiplatformdeployment/full.md b/compofacompoundonceforallnetworksforfastermultiplatformdeployment/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e258932191d928d27d89618422f6432a3c9ec200
--- /dev/null
+++ b/compofacompoundonceforallnetworksforfastermultiplatformdeployment/full.md
@@ -0,0 +1,264 @@
+# COMPOFA: COMPOUND ONCE-FOR-ALL NETWORKS FOR FASTER MULTI-PLATFORM DEPLOYMENT
+
+Manas Sahni, Shreya Varshini, Alind Khare, Alexey Tumanov
+
+Georgia Institute of Technology
+
+{sahnimanas, shreyavarshini, alindkhare, atumanov}@gatech.edu
+
+# ABSTRACT
+
+The emergence of CNNs in mainstream deployment has necessitated methods to design and train efficient architectures tailored to maximize the accuracy under diverse hardware & latency constraints. To scale these resource-intensive tasks with an increasing number of deployment targets, Once-For-All (OFA) proposed an approach to jointly train several models at once with a constant training cost. However, this cost remains as high as 40-50 GPU days and also suffers from a combinatorial explosion of sub-optimal model configurations. We seek to reduce this search space – and hence the training budget – by constraining search to models close to the accuracy-latency Pareto frontier. We incorporate insights of compound relationships between model dimensions to build CompOFA, a design space smaller by several orders of magnitude. Through experiments on ImageNet, we demonstrate that even with simple heuristics we can achieve a 2x reduction in training time1 and 216x speedup in model search/extraction time compared to the state of the art, without loss of Pareto optimality! We also show that this smaller design space is dense enough to support equally accurate models for a similar diversity of hardware and latency targets, while also reducing the complexity of the training and subsequent extraction algorithms.2
+
+# 1 INTRODUCTION
+
+CNNs are emerging in mainstream deployment across diverse hardware platforms, latency requirements, and/or workload characteristics. The available processing power, memory, and latency requirements may vary vastly across deployment platforms – say, from server-grade GPUs to low-power embedded devices, cycles of high or low workload, etc.
+
+Since model accuracies tend to increase with computational budget, it becomes vital to build models tailored to each deployment scenario, maximizing accuracy constrained by the desired model inference latency. These efficient models lie close to the Pareto-frontier of the accuracy-latency trade-off. Building such models (either manually or by searching) and then training them are resource-intensive tasks – requiring massive computational resources, expertise in both ML and underlying systems, time, dollar cost, and $CO_2$ emissions every time they are performed. Repeating such intensive processes for each deployment target is prohibitively expensive w.r.t. multiple metrics of cost and this does not scale.
+
+Once-For-All (OFA) (Cai et al., 2020) proposed to address this challenge by decoupling the search and training phases through a novel progressive shrinking algorithm. OFA builds a family of $10^{19}$ models of varying depth, width, kernel size, and image resolution. These models are jointly trained in a single shot via sharing of their intersecting weights. Once trained, search techniques can extract specialized sub-networks that meet specific deployment targets – a task that can then be independently repeated on the same trained family.
+
+This massive search space leads to a training cost that remains prohibitively expensive. Though the cost can be amortized over a number of deployment targets, it is still significant, reaching 1200 GPU hours for OFA. The search space arises from training every possible model combination, and many
+
+
+Figure 1: (a): Conventional methods require expensive designing & training per deployment platform, which is infeasible to scale. (b): OFA co-trains a family of subnetworks of a teacher supernet. However, combinatorial explosion of depth (D) and width (W) compels progressive, phased training requiring 1200 GPU hours. (c): CompOFA exploits the insight of compound couplings between D & W to vastly simplify the search space while maintaining Pareto optimality. The smaller space can be trained in half the time without phases, and gives equally performant and diverse model families
+
+of these models may lie well below the accuracy-latency Pareto frontier, as represented in Figure 1(b). This exhaustive approach misses opportunities for any accuracy- or latency-guided exploration in such a vast space, thus suffering a clear inefficiency. These sub-optimal models not only go unutilized but also add training interference, which necessitates a longer phased training to stabilize their simultaneous optimization. Finally, searching & extracting an optimal model from this space can only be done via indirect estimators of their accuracy and latency, as all model combinations cannot be enumerated.
+
+On the other hand, we argue that such a large search space is unnecessary for two reasons. First, common practices as well as empirical studies (Tan & Le, 2019; Radosavovic et al., 2020) have shown that model dimensions such as depth, width, and resolution are not orthogonal - models that follow some compound couplings between these dimensions produce a better accuracy-latency trade-off than those with unconstrained settings. Informally, increasing model capacity along one dimension (say, depth) is helped by an accompanying increase along another dimension (say, width). Secondly, a much coarser latency granularity (order of $1\mathrm{ms}$ ) is sufficient for practical systems deployment.
+
+In this work we propose CompOFA - a model design space leveraging compound couplings between model dimensions, and demonstrate the following:
+
+1. Utilizing the insight of compound coupling, we show that simple, easy-to-implement heuristics can capture models close to the Pareto frontier (depicted in Figure 1(c)). This enables us to reduce OFA's $10^{19}$ models to just 243 in CompOFA, and still train a model family with an equally good accuracy-latency tradeoff.
+2. We show that this tractable design space directly reduces interference in training, which allows us to reduce training duration and cost by $2\mathbf{x}$ .
+3. Once trained, CompOFA's simplicity lends itself to easier model extraction that is 216x faster.
+4. Despite the size reduction, we show that the latency granularity is sufficient to cover the same range and diversity of hardware targets as OFA.
+5. Finally, the generality of CompOFA's insights is validated by training it on another base architecture, achieving similar gains.
+
+# 2 RELATED WORK
+
+Efficient neural network design has been an active area of research due to the high computational complexity of CNNs. NAS is increasingly used to guide or replace previously manual design pro
+
+cesses. Early NAS techniques (Zoph et al., 2018; Zoph & Le, 2016) sampled several architectures and trained them from scratch every time, making them extremely compute-hungry. The technique of weight-sharing has emerged as one way to address these inefficiencies (Berman et al., 2020; Bender et al., 2018; Brock et al., 2017; Guo et al., 2019; Liu et al., 2018; Pham et al., 2018). These methods slice candidate sub-networks from a larger super-network, thereby sharing weights between them. Simultaneously, latency-guided NAS methods (Tan et al., 2019; Cai et al., 2018; Berman et al., 2020; Stamoulis et al., 2019; Wu et al., 2019) have sought to incorporate model complexity into their search to find efficient models for a given target latency on a given hardware target.
+
+Nevertheless, these methods yield a single model per run – both search and training must be repeated for new deployment targets. With compute costs reaching as high as $O(10^{4})$ GPU hours, this linear scaling is infeasible for an ever-growing need for multi-platform, multi-latency deployment. Once-For-All (OFA) (Cai et al., 2020) proposed to reduce this cost by using weight-sharing for a large model family that collectively supports diverse range of latencies. They perform one-time training of $O(10^{19})$ sub-networks, which can then be independently searched to support a given deployment target later, thus amortizing the training cost. However, this cost is still prohibitively expensive, reaching 1200 GPU hours. The unnecessarily large search space complicates both training and searching, stemming from an uninhibited combinatorial explosion of model configurations.
+
+On the other hand, empirical studies on neural network design spaces (Tan & Le, 2019; Radosavovic et al., 2020) have recently shown that model dimensions (e.g., depth, width, resolution) are not independent – underlying relations between them can be used to obtain an optimal accuracy-latency trade-off. In other words, the number of degrees of freedom in the architecture search space can be reduced without loss of Pareto optimality. This insight forms the basis of our work to constrain the design space of OFA networks while maintaining the same quality (w.r.t. accuracy) as well as diversity of models with reduced train and search costs.
+
+Dynamic Neural Networks with weight-sharing across models of varying latencies have been explored before (Yu et al., 2018; Yu & Huang, 2019), but with far fewer models (e.g. 4 in Yu et al. (2018)) and fewer dimensions (e.g. only width), which makes for much sparser and narrower support for diverse latency targets. We explore a middle ground between these works and Cai et al. (2020) to build design spaces that are tractable yet sufficiently diverse to support many latency targets while varying in multiple model dimensions.
+
+Yu et al. (2020) proposed replacing OFA's multi-stage training by a single-stage one, challenging the usual practice of progressive training in one-shot NAS. However, its training cost remains high at over 2300 TPU hours for $O(10^{12})$ models. Contemporary to our work, Wang et al. (2020) point out a similar wasted computation on sub-optimal models in one-shot NAS, and use attention mechanisms to push the Pareto front. Our approach instead focuses on achieving the same Pareto frontier as OFA with a smaller budget. We also emphasize architectural insights to show that an intractably large cardinality is not necessary in the first place – simple heuristics identify good models while enabling a host of other cost savings.
+
+# 3 MOTIVATION
+
+# 3.1 DESIGN SPACE PARAMETRIZATION
+
+Consider a network architecture $\mathcal{N}$ composed of $m$ micro-architectural blocks, $B_{1}, B_{2}, \ldots, B_{m}$ . Each block is parametrized by its depth, per-layer width, and per-layer kernel size as $B_{i}(d_{i}, W_{i}, K_{i})$ . Here $d_{i}$ denotes the number of layers (depth) in the block, and $W_{i}$ and $K_{i}$ are lists denoting the width & kernel sizes of each of these $d_{i}$ layers.
+
+Once-For-All (Cai et al., 2020) builds a family of networks $\mathcal{N}_1, \mathcal{N}_2, \ldots$ of varying accuracies and latencies. The weights of common layers and channels of a block $B_i$ are shared across all the networks. The "block" used in OFA is the Inverted Residual from MobileNetV3 (Howard et al., 2019) and hence "width" here refers to the channel expansion ratio. $d_i, w_{ij}, k_{ij}$ are sampled independently from sets $D = [2,3,4], W = [3,4,6], K = [3,5,7]$ respectively.
+
+In OFA, each of these dimensions is treated as orthogonal. Hence, the resulting number of possible networks is enormous, with $O(10^{19})$ models for $m = 5$ blocks (Cai et al., 2020).
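The $O(10^{19})$ figure can be verified with a back-of-the-envelope count: per block, pick a depth $d \in D$, then an independent (width, kernel) pair for each of the $d$ layers. A minimal sketch:

```python
# Back-of-the-envelope count of OFA's search space (Section 3.1 parameters).
D, W, K = [2, 3, 4], [3, 4, 6], [3, 5, 7]

# Per block: choose a depth d, then an independent (width, kernel) pair per layer.
configs_per_block = sum((len(W) * len(K)) ** d for d in D)   # 9^2 + 9^3 + 9^4
total = configs_per_block ** 5                                # m = 5 blocks

print(configs_per_block)   # 7371
print(f"{total:.1e}")      # 2.2e+19 -> the O(10^19) figure (resolution fixed)
```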
+
+
+Figure 2: Accuracy and latency heatmaps for varying uniform depth and expansion ratios and fixed kernel size=5, as measured in MobileNetV3 architecture. Latency is measured in ms on NVIDIA RTX 2080 Ti GPU with BS=64.
+
+
+
+# 3.2 COMPOUND RELATION OF MODEL DIMENSIONS
+
+The combinatorial explosion from just 3 independent model dimensions of $D, W, K$ yields a model family with an enormous number of models. While the aim for jointly training "every possible model" in this large design space is to support a diverse range of hardware platforms, we note the following concerns that arise from this:
+
+# 1. Model dimensions are not orthogonal
+
+Scaling model dimensions like depth, width, and resolution to higher FLOP regimes is a common practice and Tan & Le (2019) systematically showed that compound relations exist between these dimensions for achieving an optimal accuracy-latency trade-off. In particular, increasing model dimensions in accordance with a compound scaling rule gives better results than doing so independently. Radosavovic et al. (2020) elevated this concept to the model population level and found that a quantized linear relation between model depth and width yielded design spaces with a better concentration of well-performing models. These works solidified the common practice of jointly increasing model depth and width. Yet, all these other sub-optimal models are still included in the search space when all dimensions are sampled independently.
+
+# 2. Large model families complicate training & searching
+
+The interference between OFA's sub-networks complicates their simultaneous optimization and necessitates techniques like progressive shrinking (Cai et al., 2020), increasing training time. Further, extracting models for a desired target cannot rely on simple enumeration and instead requires predicting model accuracy/latency. This in turn requires training accuracy predictors specific to the trained model family and collecting latency look-up tables (Cai et al., 2018). With a more tractable cardinality, we could rely on more standard, faster approaches during training and achieve "off-the-shelf" usability during search.
+
+# 3. Hardware latency differences below a certain granularity are noisy
+
+Finally, the difference between unique architectures' accuracy or latency needs to be distinguishable during search. The original OFA design space covers a FLOP range of 120-560 MFLOPs, and within this range there exist $O(10^{19})$ architecture choices. Even if these models were uniformly distributed into buckets of 1 FLOP, we would still have $O(10^{11})$ models at each FLOP target. Note that these figures do not account for variable resolution, which would further multiply the number of choices at each FLOP target. On any hardware, inference time differences below a certain threshold are indistinguishable from noise. This threshold varies with application context and hardware (e.g. it can be up to 1-5 ms in ML serving scenarios). Irrespective of the threshold, having this many models with such fine-grained differences in compute requirements puts them well below the minimum resolution at which they can be meaningfully compared. This motivates us to consider that a design space sparser by several orders of magnitude might still support the same density and range of deployment targets.
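The per-bucket estimate is plain arithmetic; as a quick check (taking the $O(10^{19})$ cardinality as roughly $2.2 \times 10^{19}$):

```python
# Spreading the design space uniformly over the covered FLOP range still
# leaves an enormous number of architectures in every 1-FLOP bucket.
n_models = 2.2e19                  # approximate cardinality of the OFA space
flop_range = (560 - 120) * 1e6     # 120-560 MFLOPs, expressed in FLOPs

models_per_flop = n_models / flop_range
print(f"{models_per_flop:.0e}")    # 5e+10 models per 1-FLOP bucket
```

about $5 \times 10^{10}$, which the paper rounds to $O(10^{11})$.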
+
+Figure 2 shows heatmaps comparing the accuracy and latency of OFA sub-networks with uniform depth and width for a fixed kernel size. A closer look at these heatmaps reveals a monotonically increasing gradient along the diagonal of each two-dimensional heatmap. That is, when configurations are increased along both the depth and width dimensions together, we obtain a class of sub-networks with a better accuracy-latency trade-off than those obtained by changing a single dimension, which points to a coefficient governing the relationship between a model's depth and width configurations. This suggests that by quantitatively conforming to this coupling between model dimensions, we can reduce the network design space without sacrificing accuracy. It further implies that OFA trains many more models than required; we additionally show that OFA's model distribution suffers from high variance (Appendix A.1). This leads us toward a solution that leverages these insights to prune the redundant and ineffective sub-networks otherwise generated by the unconstrained OFA design space.
+
+# 4 COMPOUND OFA
+
+# 4.1 COUPLING HEURISTIC
+
+Motivated by the above observations, we constrain the network design space using a simple heuristic: depth and width dimensions should increase or decrease together, not independently. Specifically, in each block, whenever we sample the $i^{\text{th}}$ largest depth $d_i \in D$ , we correspondingly sample the $i^{\text{th}}$ largest width $w_i \in W$ for all $d_i$ layers in the block. For instance, with $D = [2,3,4]$ and $W = [3,4,6]$ , each block can have either two layers of channel expansion ratio 3, or three layers of ratio 4, or four layers of ratio 6. Once this modified search space is defined by the heuristic, the method of extracting a given model configuration remains the same - we slice out a sub-network of the specified dimensions from the largest network, thus sharing common weights between all sub-networks.
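The heuristic can be sketched in a few lines (helper names are ours, not from the OFA codebase): each block has a single degree of freedom, and choosing it fixes both depth and per-layer width.

```python
import random

D = [2, 3, 4]   # depths, ascending
W = [3, 4, 6]   # expansion ratios, ascending

def sample_compofa_block(rng):
    """One block under the coupling heuristic: the i-th largest depth
    is always paired with the i-th largest width, for every layer."""
    i = rng.randrange(len(D))                  # single degree of freedom
    return (D[i], (W[i],) * D[i])              # (depth, per-layer widths)

def sample_ofa_block(rng):
    """Unconstrained OFA-style sampling: depth and per-layer widths independent."""
    d = rng.choice(D)
    return (d, tuple(rng.choice(W) for _ in range(d)))

rng = random.Random(0)
shapes = {sample_compofa_block(rng) for _ in range(1000)}
print(sorted(shapes))   # [(2, (3, 3)), (3, (4, 4, 4)), (4, (6, 6, 6, 6))]
```

CompOFA blocks only ever take one of three shapes, whereas the unconstrained sampler produces thousands of distinct block configurations.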
+
+This significantly reduces the degrees of freedom in the design space. An additional consequence of this is that all layers within a block now have the same width, further reducing the architecture-count. Nevertheless, we will show in Section 5 that the design space is still diverse and dense enough to provide models for different deployment requirements.
+
+We fix the kernel size by default, but for fair comparison we also show a variant with an elastic kernel. With a fixed kernel, we create a simplified search space with kernel sizes fixed to either 3 or 5 in each block, matching the original setting used in MobileNetV3. In this design space, therefore, depth (or width) alone fully specifies a block's configuration. For 5 blocks, this yields a family of $3^5 = 243$ models. In the elastic-kernel design space, we expand the kernel size as done in OFA, sampled from $K = [3,5,7]$ per layer. This results in a family of $(3^{2} + 3^{3} + 3^{4})^{5}\approx 10^{10}$ models. Unless otherwise specified, we use "CompOFA" to refer to the fixed-kernel design space.
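Both counts follow directly from the definitions ($m = 5$ blocks):

```python
# Cardinality of the two CompOFA variants over m = 5 blocks.
D = [2, 3, 4]    # per-block depth choices (each coupled with a width)
K = [3, 5, 7]    # per-layer kernel choices (elastic-kernel variant only)

fixed_kernel = len(D) ** 5                          # one coupled choice per block
elastic_kernel = sum(len(K) ** d for d in D) ** 5   # (3^2 + 3^3 + 3^4)^5

print(fixed_kernel)               # 243
print(f"{elastic_kernel:.1e}")    # 2.2e+10
```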
+
+As we show in the following section, the source of our training speedup is the reduction in cardinality of the search space. The input resolution to the model affects the model inference time but does not affect the number of unique trainable architectures. Hence, we keep the resolution elastic, which allows using one architecture to support multiple latencies for "free", without increasing our search space cardinality (and soon, training budget).
+
+# 4.2 TRAINING SPEEDUP
+
+Cai et al. (2020) proposed a "progressive shrinking" approach to train the OFA sub-networks in a phased manner, starting with the largest model and then progressively including smaller sub-networks in training. At each batch, a different sub-network is sampled for forward/backward pass. If the expected time to complete one epoch in phase $p$ is $\mathbb{E}(t_p)$ and the training is run for $e_p$ number of epochs, then the total time to train a model family becomes $T_{family} \propto \sum_{p \in Phases} e_p \times \mathbb{E}(t_p)$ .
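As a concrete instance of this cost model, the per-phase numbers from Table 1a reproduce OFA's total budget (assuming the 6-GPU setup described in Section 5.1):

```python
# T_family = sum_p epochs_p * E[t_p], checked against Table 1a's per-phase
# wall-clock times; GPU hours follow from the 6-GPU setup.
phases = {                      # phase: (epochs, wall-clock hours)
    "teacher":         (180, 28 + 45/60),
    "elastic_kernel":  (125, 26 + 51/60),
    "elastic_depth_1": ( 25,  7 + 46/60),
    "elastic_depth_2": (125, 38 + 32/60),
    "elastic_width_1": ( 25, 10 +  6/60),
    "elastic_width_2": (125, 51 +  3/60),
}

epochs = sum(e for e, _ in phases.values())
total_wall = sum(t for _, t in phases.values())
total_gpu_hours = total_wall * 6           # wall time x number of GPUs

print(epochs, f"{total_wall:.0f}", f"{total_gpu_hours:.0f}")  # 605 163 978
```

These match the table's totals of 605 epochs, 163h 03m wall time, and 978h 18m GPU hours; dropping phases, as CompOFA does, directly removes their $e_p \times \mathbb{E}(t_p)$ terms from the sum.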
+
+Our goal is to reduce $T_{family}$ while achieving the same level of accuracy-latency trade-off and supporting the same range of deployment scenarios. Since we wish to train models of similar latency targets, we try to keep $\mathbb{E}(t_p)$ unchanged and target a reduction in the number of phases to reduce $T_{family}$ .
+
+While the number of models does not explicitly factor into $T_{family}$, note that it implicitly affects the count & duration of training phases – progressive shrinking was required in the first place due to interference between a large number of models. The maximum number of simultaneously trainable
+
+Table 1: Training schedule, duration, and GPU hour comparisons for OFA and CompOFA. CompOFA reduces the training time of OFA by $50\%$ with a fixed kernel space. The columns $K, D, W$ represent the sets of possible model dimensions (as in Section 3.1). While CompOFA allows all sub-networks to be trained after the teacher network, OFA progresses to the full search space with multiple phases of similar duration. See Appendix A.3 for CompOFA with Elastic Kernel.
+(a) Once-For-All (Cai et al., 2020) (Elastic Kernel)
+
+| Phase | K | D | W | Nsample | Epochs | Wall Time | GPU Hours |
| Teacher | 7 | 4 | 6 | 1 | 180 | 28h 45m | 172h 30m |
| Elastic Kernel | 3, 5, 7 | 4 | 6 | 1 | 125 | 26h 51m | 161h 06m |
| Elastic Depth-1 | 3, 5, 7 | 3, 4 | 6 | 2 | 25 | 7h 46m | 46h 36m |
| Elastic Depth-2 | 3, 5, 7 | 2, 3, 4 | 6 | 2 | 125 | 38h 32m | 231h 12m |
| Elastic Width-1 | 3, 5, 7 | 2, 3, 4 | 4, 6 | 4 | 25 | 10h 06m | 60h 36m |
| Elastic Width-2 | 3, 5, 7 | 2, 3, 4 | 3, 4, 6 | 4 | 125 | 51h 03m | 306h 18m |
| Total | | | | | 605 | 163h 03m | 978h 18m |
+
+(b) CompOFA (Fixed Kernel)
+
+| Phase | K | D | W | Nsample | Epochs | Wall Time | GPU Hours |
| Teacher | 7 | 4 | 6 | 1 | 180 | 28h 45m | 172h 30m |
| Compound | - | 2,3,4 | 3,4,6 | 4 | 25 | 8h 43m | 52h 18m |
| Compound | - | 2,3,4 | 3,4,6 | 4 | 125 | 44h 47m | 268h 42m |
| Total | | | | | 330 | 82h 15m | 493h 30m |
+
+models in CompOFA's search space is smaller by 17 orders of magnitude. With this significantly smaller family size, we find that the interference between these models is reduced to the extent that we are now able to remove progressive shrinking altogether, and instead achieve the same accuracy with all trainable sub-networks included in training (see Section 5).
+
+In Section 5 we show that this allows us to reduce the training time by $50\%$ . Additionally, we show an ablation to confirm that this reduction in phases is only possible due to the reduced number of models in CompOFA, and not in the original large design space of OFA.
+
+# 5 EXPERIMENTS
+
+# 5.1 TRAINING SETUP
+
+We train the full-sized CompOFA network ( $D = [4], W = [6]$ ) on ImageNet (Deng et al., 2009) using the same base architecture of MobileNetV3 as OFA. We use a batch size of 1536 and a learning rate of 1.95 to train on 6 NVIDIA V100 GPUs. All other training hyperparameters are kept the same as OFA for accurate comparison.
+
+Next, for CompOFA with fixed kernel sizes, we unlock all permissible model configurations $(D,W = [(2,3),(3,4),(4,6)])$ in one stage, as opposed to progressively adding more configurations. We train this for 25 epochs with an initial learning rate of 0.06, followed by 120 epochs with an initial learning rate of 0.18 to obtain our final network. We sample $N_{\text{sample}} = 4$ models in each batch and aggregate their gradients before each optimizer step. The full network serves as a teacher model for knowledge distillation (Hinton et al., 2015). The comparison of the possible configurations and epoch durations for each training phase is summarized in Table 1.
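The per-batch procedure can be illustrated with a deliberately simplified, self-contained toy (our own stand-in, not the OFA implementation): sub-networks are prefixes of a shared weight vector, $N_{\text{sample}} = 4$ of them are drawn per batch, and their gradients are averaged before a single optimizer step on the shared weights.

```python
import random

# Toy weight-shared family: a sub-network of depth d trains the first d of the
# shared weights. Loss per sub-network: squared error of their sum vs. a target.
shared_w = [0.5, 0.5, 0.5, 0.5]
DEPTHS, N_SAMPLE, LR, TARGET = [2, 3, 4], 4, 0.1, 1.0

def grads(w, d):
    err = sum(w[:d]) - TARGET               # d(loss)/dw_i = 2*err for i < d
    return [2 * err if i < d else 0.0 for i in range(len(w))]

rng = random.Random(0)
for _ in range(300):                        # training loop
    acc = [0.0] * len(shared_w)
    for _ in range(N_SAMPLE):               # sample N_sample sub-networks per batch
        g = grads(shared_w, rng.choice(DEPTHS))
        acc = [a + gi for a, gi in zip(acc, g)]
    shared_w = [w - LR * a / N_SAMPLE       # one optimizer step on shared weights
                for w, a in zip(shared_w, acc)]

residuals = [sum(shared_w[:d]) - TARGET for d in DEPTHS]
print([round(r, 3) for r in residuals])     # every sampled depth fits the target
```

In this convex toy the depths stop conflicting because deeper sub-networks can drive their extra layers to zero; in the real, non-convex setting such interference is exactly what progressive shrinking mitigates and what CompOFA's smaller family reduces.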
+
+# 5.2 TRAINING TIME AND COST
+
+By using just one stage of training after the teacher model (in the default fixed kernel setting), instead of multiple stages of the same duration, we are able to reduce the training cost of CompOFA by $50\%$ . Our reproduction of OFA's training scheme on 6 GPUs takes 978 GPU hours while CompOFA-Fixed Kernel finishes in 493 GPU hours, as shown in Table 1. Table 2 shows the translation of this time reduction in terms of dollar cost and $CO_2$ emissions. Note that the total duration of an epoch with the same design space configuration is comparable, as we do not target a speedup by training smaller
+
+Table 2: Comparing OFA and CompOFA on training monetary cost, $CO_2$ emission and average search duration. Monetary cost is based on hourly price of 1 NVIDIA V100 on Google Cloud. $CO_2$ emission estimation is based on Strubell et al. (2019). Search time is reported for an average over latency targets, without the use of latency estimators
+
+| Method | Cost | CO2 emission | Avg. Search Time |
| OFA | $2.4k | 277 lbs | 4.5 hours |
| CompOFA (Elastic Kernel) | $1.7k | 196 lbs | 2.25 hours |
| CompOFA (Fixed Kernel) | $1.2k | 138 lbs | 75 seconds |
+
+
+Figure 3: CompOFA networks consistently achieve comparable or higher ImageNet accuracies for similar latency and FLOP constraints on CPU, GPU, and mobile platforms within 1-5ms granularities.
+
+models. Instead, the speedup stems largely from halving the number of epochs, by way of reducing the phases for training. This reduction is in turn possible due to a smaller, constrained design space.
+
+# 5.3 ACCURACY-LATENCY TRADE-OFF
+
+After training CompOFA networks, we carried out an evolutionary search (Real et al., 2019) to retrieve specialized sub-networks for diverse hardware deployment scenarios, as done in OFA. The evolutionary search fetches trained sub-networks that maximize accuracy subject to a target efficiency constraint (latency or FLOPs). Similar to OFA, we use a 3-layer accuracy predictor common across all platforms. For latency, we use a lookup-table based latency estimator for Samsung Note 10 CPU provided by Cai et al. (2020). For other hardware platforms – namely NVIDIA GeForce RTX 2080 Ti GPU and the Intel(R) Xeon(R) Gold 6226 CPU – we measured actual latency with a batch size of 64 and 1 respectively. The estimated accuracies of the best models returned by this search were then verified on the actual ImageNet validation set.
+
+Figure 3 reports the accuracy-latency trade-offs of CompOFA and OFA networks. We observe that the best CompOFA models for the evaluated latency targets are at least as accurate as OFA's, despite the significantly smaller family size and training budget. This result validates the intuition behind our simple heuristic, which creates a smaller design space without losing the Pareto-optimality or density of the generated models.
+
+
+Figure 4: CDF comparisons of 50 models randomly sampled in latency buckets of 5ms each. CompOFA has a higher fraction of its population at or better than a given classification error.
+
+Figure 5: Random sampling of 20 models in each latency bucket of 5ms for CompOFA & OFA actual latency (left) and bucketed latency (right). CompOFA yields a higher average accuracy, i.e. as a population it has a higher concentration of accurate models. The shaded regions in the left plot show the $95\%$ confidence interval of the average accuracy.
+
+
+
+Figure 6: Left and Center: Comparing accuracies with and without progressive shrinking (PS). OFA suffers up to a $3.7\%$ accuracy drop when progressive shrinking is removed. For CompOFA, no such accuracy drop is seen when compared to the longer training with progressive shrinking.
+
+Right: CDF of model configurations common to OFA & CompOFA. CompOFA does not lose accuracy despite its smaller cardinality and smaller training budget.
+
+# 5.4 DESIGN SPACE COMPARISON
+
+Apart from individual models, we further evaluate CompOFA at the population level through statistical sampling of the design space, as introduced by Radosavovic et al. (2020; 2019). Using the Samsung Note 10 as sample hardware, we divide the supported range of latencies into buckets of 5ms each. For each latency bucket, we randomly sample 50 models from OFA and CompOFA. Figure 4 plots the cumulative distribution function (CDF) of classification error for each of these latency buckets. In each bucket, the CDF depicts the fraction of models at or below a given classification error on the x-axis. CompOFA's CDFs lie above those of OFA, showing that CompOFA has a higher fraction of accurate models. Next, in Figure 5, we randomly sample 20 models in each bucket for each search space, and show that the average accuracy of a randomly picked model from CompOFA is higher than that from OFA. The takeaway of both evaluations is that CompOFA yields a better concentration of accurate models and fewer sub-optimal models – i.e. a more accurate overall model population. Note that this is different from the best accuracy per latency target, where CompOFA matches OFA with half the training budget.
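The per-bucket comparison reduces to a simple empirical-CDF computation; a sketch with made-up error values (illustrative only, not measurements from the paper):

```python
# Empirical CDF of classification error: the fraction of sampled models
# whose error is at or below a given value.
def empirical_cdf(errors, e):
    return sum(1 for x in errors if x <= e) / len(errors)

# Hypothetical top-1 errors (%) sampled from one 5ms latency bucket:
ofa_errors     = [24.1, 24.8, 25.5, 26.2, 27.0]
compofa_errors = [23.9, 24.2, 24.6, 25.1, 25.4]

# A curve lying above another means a larger fraction of the population is at
# or below any given error, i.e. a better concentration of accurate models.
print(empirical_cdf(compofa_errors, 25.0), empirical_cdf(ofa_errors, 25.0))  # 0.6 0.4
```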
+
+# 5.5 EFFECT OF NUMBER OF PHASES
+
+Cai et al. (2020) showed that OFA networks suffer a top-1 accuracy drop of up to $3.7\%$ when progressive shrinking is removed, showing its role in resolving interference. With $O(10^{19})$ models competing for the simultaneous optimization of their weights, a phased approach is needed to stabilize training by gradually introducing model configurations. In CompOFA, our primary means of reducing training time is reducing the number of training phases. We repeat a
+
+similar ablation to compare the effect of progressive shrinking in OFA & CompOFA. For both design spaces, we train with & without progressive shrinking and compare the accuracies of common subnetworks. Figure 6 shows CompOFA achieves the same accuracy with and without progressive shrinking. We attribute this to lower interference between sub-networks in a design space smaller by 17 orders of magnitude.
+
+Next, we build a CDF of model configurations in CompOFA and then extract the same sub-networks from OFA. Figure 6 shows that CompOFA still maintains a slightly higher accuracy despite comparing the same models in both spaces. Thus, models discarded in the process of constraining the design space do not add to the accuracy of OFA or CompOFA – the retained models can achieve the same (if not better) accuracy independent of any potential collaboration from these discarded models.
+
+# 5.6 GENERALIZING TO OTHER ARCHITECTURES
+
+The guiding principle behind our heuristic – namely, the optimality of coupling depth & width over the unconstrained search space – is expected to apply to other architectures, datasets, and tasks. We demonstrate similar savings in training budget with our heuristic applied to a different base architecture.
+
+We train OFA and CompOFA with the base architecture changed from MobileNetV3 to Proxyless-NAS (Cai et al., 2018). We keep the same depths and widths ( $D = [2,3,4], W = [3,4,6]$ ) and build the search spaces of OFA & CompOFA using the cartesian product or the heuristic, respectively. In CompOFA, we fix the kernel size to 3 in all layers. Both networks use a width multiplier of 1.3.
+
+The training hyperparameters, schedule, and search spaces are identical to those described in Section 5.1 and Table 1. Hence, CompOFA again requires half the training epochs – and therefore half the training time, cost, and $CO_2$ emissions. Appendix A.2 shows CDFs of models common to both OFA-ProxylessNAS & CompOFA-ProxylessNAS, showing that the CompOFA has the same or marginally higher accuracies for the same model configurations despite half the training budget.
+
+# 5.7 SEARCH TIME
+
+The intractable cardinality of OFA necessitated the use of latency estimators as a proxy for real latency measurement. With a significantly smaller design space, this need is removed: avoiding the time and effort of building latency tables improves the "off-the-shelf" practical usability of CompOFA. We introduce a simple memoization technique during the evolutionary search that caches the measured latency of each model architecture to avoid re-measurement, which is only practical with the smaller search space. Table 2 reports the average run time of the NAS algorithm for a single latency target with this optimization added to both CompOFA and OFA, reducing the search time to just 75 seconds for CompOFA Fixed-Kernel. We also show in Appendix A.4 that the evolutionary search converges in fewer iterations for CompOFA.
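The memoization is a plain cache keyed on the architecture configuration; a minimal sketch (helper names are illustrative, and `fake_measure` stands in for a real hardware measurement):

```python
# Cache hardware latency measurements so each unique architecture
# encountered during evolutionary search is measured only once.
def make_cached_latency(measure_fn):
    cache = {}
    def latency(arch):                  # arch must be hashable, e.g. a tuple
        if arch not in cache:
            cache[arch] = measure_fn(arch)
        return cache[arch]
    return latency, cache

calls = []
def fake_measure(arch):                 # stand-in for an on-device measurement
    calls.append(arch)
    return 10.0 + sum(arch)             # dummy latency in ms

latency, cache = make_cached_latency(fake_measure)
pop = [(2, 3), (3, 4), (2, 3), (4, 6), (3, 4)]   # repeats across generations
lats = [latency(a) for a in pop]
print(len(calls), len(cache))           # 3 3 -> only 3 unique measurements
```

Because evolutionary search revisits many architectures across generations, and the small CompOFA space guarantees heavy revisiting, the cache amortizes measurement cost quickly.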
+
+# 6 CONCLUSION
+
+To conclude, we have built upon Once-For-All (OFA) networks to propose CompOFA - a design space for OFA networks using compound couplings between model dimensions, which speeds up the process of one-shot training and neural architecture search with hardware latency constraints.
+
+We show that intractably large architectural search spaces are unnecessary for both accuracy and diversity of models. By leveraging the insight of compound couplings, we introduced a simple heuristic that vastly shrinks the search space without losing Pareto optimality. This smaller search space is nonetheless sufficiently dense to support the same diversity and range of deployment targets.
+
+This smaller cardinality reduces interference in weight-shared training, which allows CompOFA to reach the same accuracy in half the training budget. Once trained, CompOFA's tractability lends itself to easier model extraction – 216x faster, and without the time or effort of building latency estimators. This improves the ease of "off-the-shelf" usability of our method in real-world settings. Finally, we apply our heuristic to another base architecture and show that CompOFA continues to uncover similar gains.
+
+# REFERENCES
+
+Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In International Conference on Machine Learning, pp. 550-559, 2018.
+Maxim Berman, Leonid Pishchulin, Ning Xu, Matthew B Blaschko, and Gérard Medioni. Aows: Adaptive and optimal network width search with latency constraints. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11217-11226, 2020.
+Andrew Brock, Theodore Lim, James M Ritchie, and Nick Weston. Smash: one-shot model architecture search through hypernetworks. arXiv preprint arXiv:1708.05344, 2017.
+Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. arXiv preprint arXiv:1812.00332, 2018.
+Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once for all: Train one network and specialize it for efficient deployment. In International Conference on Learning Representations, 2020. URL https://arxiv.org/pdf/1908.09791.pdf.
+J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
+Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. arXiv preprint arXiv:1904.00420, 2019.
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
+Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1314-1324, 2019.
+Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
+Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268, 2018.
+Ilija Radosavovic, Justin Johnson, Saining Xie, Wan-Yen Lo, and Piotr Dollár. On network design spaces for visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1882-1890, 2019.
+Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollár. Designing network design spaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10428-10436, 2020.
+Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In Proceedings of the aaai conference on artificial intelligence, volume 33, pp. 4780-4789, 2019.
+Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, and Diana Marculescu. Single-path nas: Designing hardware-efficient convnets in less than 4 hours. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 481-497. Springer, 2019.
+Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in nlp. arXiv preprint arXiv:1906.02243, 2019.
+Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pp. 6105-6114, 2019.
+
+Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2820-2828, 2019.
+Dilin Wang, Meng Li, Chengyue Gong, and Vikas Chandra. Attentivenas: Improving neural architecture search via attentive sampling. arXiv preprint arXiv:2011.09011, 2020.
+Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10734-10742, 2019.
+Jiahui Yu and Thomas S Huang. Universally slimmable networks and improved training techniques. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1803-1811, 2019.
+Jiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, and Thomas Huang. Slimmable neural networks. arXiv preprint arXiv:1812.08928, 2018.
+Jiahui Yu, Pengchong Jin, Hanxiao Liu, Gabriel Bender, Pieter-Jan Kindermans, Mingxing Tan, Thomas Huang, Xiaodan Song, Ruoming Pang, and Quoc Le. Bignas: Scaling up neural architecture search with big single-stage models. arXiv preprint arXiv:2003.11142, 2020.
+Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
+Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8697-8710, 2018.
+
+# A APPENDIX
+
+# A.1 EXTRA MODELS IN OFA
+
+Figure 8 stratifies the supported latency range into buckets of 5ms each and uniformly samples 100 OFA sub-networks per bucket. The box plot reveals considerable variance between the maximum and minimum accuracies of models within the same latency bucket. This variance is mainly caused by OFA's enormous search space, which contains redundant models that do not improve the overall accuracy of the model distribution for a given latency bucket.
+
+# A.2 COMPOFA-PROXYLESSNAS
+
+
+Figure 7: Cumulative distribution function of accuracies of model configurations common to OFA & CompOFA with the base architecture changed from MobileNetV3 to ProxylessNAS. Despite the change in architecture, the same heuristic allows CompOFA to train to the same or marginally higher accuracies with half the training budget.
+
+# A.3 TRAINING SCHEDULE FOR COMPOFA-ELASTIC KERNEL
+
+Table 3 shows the training schedule for CompOFA with Elastic Kernel. Compared to OFA, the training budget is reduced by $31\%$ .
+Table 3: CompOFA (Elastic Kernel)
+
+| Phase | K | D | W | Nsample | Epochs | Wall Time | GPU Hours |
| Teacher | 7 | 4 | 6 | 1 | 180 | 28h 45m | 172h 30m |
| Elastic Kernel | 3,5,7 | 4 | 6 | 1 | 125 | 26h 51m | 161h 06m |
| Compound | 3,5,7 | 2,3,4 | 3,4,6 | 4 | 25 | 9h 21m | 56h 06m |
| Compound | 3,5,7 | 2,3,4 | 3,4,6 | 4 | 125 | 48h 01m | 288h 06m |
| Total | | | | | 455 | 112h 58m | 677h 48m |
+
+# A.4 FASTER CONVERGENCE OF NAS
+
+An evolutionary algorithm is considered to have converged after a certain number of iterations $(N)$ beyond which the fitness value of the population $(P)$ does not improve significantly. OFA runs NAS with a setting of $N = 500$ iterations for a population of size $|P| = 100$. Figure 9 demonstrates that the search time of NAS can be further reduced by lowering the number of iterations to $N = 300$ and $N = 50$ for CompOFA Elastic-Kernel and CompOFA Fixed-Kernel respectively, without losing Pareto-optimality. Coupled with the removal of the latency predictor, these CompOFA-specific optimizations effectively reduce the search time and improve the direct usability of CompOFA.
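A hedged sketch of such an iteration cap with a convergence-based early stop (toy fitness and hypothetical names, loosely following the aging-evolution loop of Real et al. (2019), not the paper's actual search code):

```python
import random

def evolutionary_search(fitness, init_pop, n_iters, patience, rng):
    """Toy aging-evolution loop that stops once fitness plateaus."""
    pop = list(init_pop)
    best, stale = max(pop, key=fitness), 0
    for it in range(n_iters):
        parent = max(rng.sample(pop, 10), key=fitness)       # tournament selection
        child = tuple(max(1, g + rng.choice([-1, 0, 1])) for g in parent)  # mutate
        pop.append(child)
        pop.pop(0)                                           # age out the oldest
        if fitness(child) > fitness(best):
            best, stale = child, 0
        else:
            stale += 1
        if stale >= patience:                                # fitness has plateaued
            return best, it + 1
    return best, n_iters

rng = random.Random(0)
fitness = lambda arch: -sum((g - 4) ** 2 for g in arch)      # toy: optimum all-4s
init = [tuple(rng.randint(1, 6) for _ in range(5)) for _ in range(100)]
best, iters = evolutionary_search(fitness, init, n_iters=500, patience=50, rng=rng)
print(best, iters)
```

With a small search space like CompOFA's, the plateau is reached early, which is why a much smaller $N$ suffices without sacrificing the Pareto front.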
+
+
+Figure 8: Distribution of accuracies of the randomly sampled models from OFA
+
+
+Figure 9: ImageNet accuracies and latencies on Samsung Note10 with reduced NAS iterations for CompOFA.
\ No newline at end of file
diff --git a/compofacompoundonceforallnetworksforfastermultiplatformdeployment/images.zip b/compofacompoundonceforallnetworksforfastermultiplatformdeployment/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..92efeb5dd66cf5f16e0837ba612dfc636d52edc1
--- /dev/null
+++ b/compofacompoundonceforallnetworksforfastermultiplatformdeployment/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0346cb09b28472737f4bcb7c531824bc231a80a6a78b81a6d8c42ce911738f81
+size 428814
diff --git a/compofacompoundonceforallnetworksforfastermultiplatformdeployment/layout.json b/compofacompoundonceforallnetworksforfastermultiplatformdeployment/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e8b6472a511c5e3cae1920d34f9c2d965bc797ba
--- /dev/null
+++ b/compofacompoundonceforallnetworksforfastermultiplatformdeployment/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87564a313b740c13e4b12de56ffb0c95d4af97c4a34681eb7900b2d8bdf28c67
+size 338035
diff --git a/computationalseparationbetweenconvolutionalandfullyconnectednetworks/59e57418-c965-4bd5-baff-aca1008c42d6_content_list.json b/computationalseparationbetweenconvolutionalandfullyconnectednetworks/59e57418-c965-4bd5-baff-aca1008c42d6_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1fd4fd0b49ffe395e8db3edf59c6d566afb0a791
--- /dev/null
+++ b/computationalseparationbetweenconvolutionalandfullyconnectednetworks/59e57418-c965-4bd5-baff-aca1008c42d6_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6acf56cfd1b6884a57dc1c1bd337447de40fa151a330aa059eccc98c902828fb
+size 76097
diff --git a/computationalseparationbetweenconvolutionalandfullyconnectednetworks/59e57418-c965-4bd5-baff-aca1008c42d6_model.json b/computationalseparationbetweenconvolutionalandfullyconnectednetworks/59e57418-c965-4bd5-baff-aca1008c42d6_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f29403e3485c81ba00e2c45f09e41d7d411d941d
--- /dev/null
+++ b/computationalseparationbetweenconvolutionalandfullyconnectednetworks/59e57418-c965-4bd5-baff-aca1008c42d6_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99f88b0df594c44fde437e487647e1b803fe0ff884aa0159e624f09bfb5c5ab1
+size 92716
diff --git a/computationalseparationbetweenconvolutionalandfullyconnectednetworks/59e57418-c965-4bd5-baff-aca1008c42d6_origin.pdf b/computationalseparationbetweenconvolutionalandfullyconnectednetworks/59e57418-c965-4bd5-baff-aca1008c42d6_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..37f8be1cc4efee7e66c32a2344e14e449868babd
--- /dev/null
+++ b/computationalseparationbetweenconvolutionalandfullyconnectednetworks/59e57418-c965-4bd5-baff-aca1008c42d6_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea90328249a711712e18836815100f2f52f9eefcd478ecca73c56ea7af9e16bf
+size 619272
diff --git a/computationalseparationbetweenconvolutionalandfullyconnectednetworks/full.md b/computationalseparationbetweenconvolutionalandfullyconnectednetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..af663db1ab874ab741954b951eea8b2d19d4ad2c
--- /dev/null
+++ b/computationalseparationbetweenconvolutionalandfullyconnectednetworks/full.md
@@ -0,0 +1,342 @@
+# COMPUTATIONAL SEPARATION BETWEEN CONVOLUTIONAL AND FULLY-CONNECTED NETWORKS
+
+Eran Malach
+
+School of Computer Science
+
+Hebrew University
+
+Jerusalem, Israel
+
+eran.malach@mail.huji.ac.il
+
+Shai Shalev-Shwartz
+
+School of Computer Science
+
+Hebrew University
+
+Jerusalem, Israel
+
+shais@cs.huji.ac.il
+
+# ABSTRACT
+
+Convolutional neural networks (CNN) exhibit unmatched performance in a multitude of computer vision tasks. However, the advantage of using convolutional networks over fully-connected networks is not understood from a theoretical perspective. In this work, we show how convolutional networks can leverage locality in the data, and thus achieve a computational advantage over fully-connected networks. Specifically, we show a class of problems that can be efficiently solved using convolutional networks trained with gradient-descent, but at the same time is hard to learn using a polynomial-size fully-connected network.
+
+# 1 INTRODUCTION
+
+Convolutional neural networks (LeCun et al., 1998; Krizhevsky et al., 2012) achieve state-of-the-art performance on every possible task in computer vision. However, while the empirical success of convolutional networks is indisputable, the advantage of using them is not well understood from a theoretical perspective. Specifically, we consider the following fundamental question:
+
+Why do convolutional networks (CNNs) perform better than fully-connected networks (FCNs)?
+
+Clearly, when considering expressive power, FCNs have a big advantage. Since convolution is a linear operation, any CNN can be expressed using a FCN, whereas FCNs can express a strictly larger family of functions. So, any advantage of CNNs due to expressivity can be leveraged by FCNs as well. Therefore, expressive power does not explain the superiority of CNNs over FCNs.
+
+There are several possible explanations to the superiority of CNNs over FCNs: parameter efficiency (and hence lower sample complexity), weight sharing, and locality prior. The main result of this paper is arguing that locality is a key factor by proving a computational separation between CNNs and FCNs based on locality. But, before that, let's discuss the other possible explanations.
+
+First, we observe that CNNs seem to be much more efficient in utilizing their parameters. A FCN needs to use a greater number of parameters compared to an equivalent CNN: each neuron of a CNN is limited to a small receptive field, and moreover, many of the parameters of the CNN are shared. From classical results in learning theory, using a large number of parameters may result in inferior generalization. So, can the advantage of CNNs be explained simply by counting parameters?
+
+
+Figure 1: Comparison between CNN and FCN of various depths (2/4/6) and widths, trained for 125 epochs with RMSprop optimizer.
+
+To answer this question, we observe the performance of CNN- and FCN-based architectures of various widths and depths trained on the CIFAR-10 dataset. For each architecture, we observe the final test accuracy versus the number of trainable parameters. The results are shown in Figure 1. As can be seen, CNNs have a clear advantage over FCNs, regardless of the number of parameters used. As is often observed, a large number of parameters does not hurt the performance of neural networks, and so parameter efficiency cannot explain the advantage of CNNs. This is in line with various theoretical works on optimization of neural networks, which show that over-parameterization is beneficial for convergence of gradient-descent (e.g., Du et al. (2018); Soltanolkotabi et al. (2018); Li & Liang (2018)).
+
+The superiority of CNNs can be also attributed to the extensive weight sharing between the different convolutional filters. Indeed, it has been previously shown that weight sharing is important for the optimization of neural networks (Shalev-Shwartz et al., 2017b). Moreover, the translation-invariant nature of CNNs, which relies on weight sharing, is often observed to be beneficial in various signal processing tasks (Kauderer-Abrams, 2017; Kayhan & Gemert, 2020). So, how much does the weight sharing contribute to the superiority of CNNs over FCNs?
+
+To understand the effect of weight sharing on the behavior of CNNs, it is useful to study locally-connected network (LCN) architectures, which are similar to CNNs, but have no weight sharing between the kernels of the network. While CNNs are far more popular in practice (also due to the fact that they are much more efficient in terms of model size), LCNs have also been used in different contexts (e.g., Bruna et al. (2013); Chen et al. (2015); Liu et al. (2020)). It has been recently observed that in some cases, the performance of LCNs is on par with CNNs (Neyshabur, 2020). So, even if weight sharing explains some of the advantage of CNNs, it clearly doesn't tell the whole story.
+
+Finally, a key property of CNN architectures is their strong utilization of locality in the data. Each neuron in a CNN is limited to a local receptive field of the input, hence encoding a strong locality bias. In this work we demonstrate how CNNs can leverage the local structure of the input, giving them a clear advantage in terms of computational complexity. Our results hint that locality is the principal property that explains the advantage of using CNNs.
+
+Our main result is a computational separation result between CNNs and FCNs. To show this result, we introduce a family of functions that have a very strong local structure, which we call $k$ -patterns. A $k$ -pattern is a function that is determined by $k$ consecutive bits of the input. We show that for inputs of $n$ bits, when the target function is a $(\log n)$ -pattern, training a CNN of polynomial size with gradient-descent achieves small error in polynomial time. However, gradient-descent will fail to learn $(\log n)$ -patterns, when training a FCN of polynomial-size.
+
+# 1.1 RELATED WORK
+
+It has been empirically observed that CNN architectures perform much better than FCNs on computer vision tasks, such as digit recognition and image classification (e.g., Urban et al. (2017); Driss et al. (2017)). While some works have applied various techniques to improve the performance of FCNs (Lin et al. (2015); Fernando et al. (2016); Neyshabur (2020)), there is still a gap between performance of CNNs and FCNs, where the former give very good performance "out-of-the-box". The focus of this work is to understand, from a theoretical perspective, why CNNs give superior performance when trained on input with strong local structure.
+
+Various theoretical works show the advantage of architectures that leverage local and hierarchical structure. The work of Poggio et al. (2015) shows the advantage of using deep hierarchical models over wide and shallow functions. These results are extended in Poggio et al. (2017), showing an exponential gap between deep and shallow networks, when approximating locally compositional functions. The works of Mossel (2016); Malach & Shalev-Shwartz (2018) study learnability of deep hierarchical models. The work of Cohen et al. (2017) analyzes the expressive efficiency of convolutional networks via hierarchical tensor decomposition. While all these works show that CNNs are indeed powerful due to their hierarchical nature and their efficiency in utilizing local structure, they do not explain why these models are superior to fully-connected models.
+
+There are a few works that provide a theoretical analysis of CNN optimization. The works of Brutzkus & Globerson (2017); Du et al. (2018) show that gradient-descent can learn a shallow CNN with a single filter, under various distributional assumptions. The work of Zhang et al. (2017)
+
+
+Figure 2: Example of a $k$ -pattern with $k = 5$ .
+
+shows learnability of a convex relaxation of convolutional networks. While these works focus on computational properties of learning CNNs, as we do in this work, they do not compare CNNs to FCNs, but focus only on the behavior of CNNs. The works of Cohen & Shashua (2016); Novak et al. (2018) study the implicit bias of simplified CNN models. However, these results are focused on generalization properties of CNNs, and not on computational efficiency of the optimization.
+
+# 2 DEFINITIONS AND NOTATIONS
+
+Let $\mathcal{X} = \{\pm 1\}^n$ be our instance space, and let $\mathcal{Y} = \{\pm 1\}$ be the label space. Throughout the paper, we focus on learning a binary classification problem using the hinge-loss: $\ell (\hat{y},y) = \max \{1 - y\hat{y},0\}$ . Given some distribution $\mathcal{D}$ over $\mathcal{X}$ , some target function $f:\mathcal{X}\to \mathcal{Y}$ and some hypothesis $h:\mathcal{X}\rightarrow \mathcal{Y}$ , we define the loss of $h$ with respect to $f$ on the distribution $\mathcal{D}$ by:
+
+$$
+L _ {f, \mathcal {D}} (h) = \underset {\mathbf {x} \sim \mathcal {D}} {\mathbb {E}} \left[ \ell (h (\mathbf {x}), f (\mathbf {x})) \right]
+$$
+
+The goal of a supervised learning algorithm is, given access to examples sampled from $\mathcal{D}$ and labeled by $f$ , to find a hypothesis $h$ that minimizes $L_{f,\mathcal{D}}(h)$ . We focus on the gradient-descent (GD) algorithm: given some parametric hypothesis class $\mathcal{H} = \{h_{\mathbf{w}} : \mathbf{w} \in \mathbb{R}^q\}$ , gradient-descent starts with some (randomly initialized) hypothesis $h_{\mathbf{w}^{(0)}}$ and, for some learning rate $\eta > 0$ , updates:
+
+$$
+\mathbf {w} ^ {(t)} = \mathbf {w} ^ {(t - 1)} - \eta \nabla_ {\mathbf {w}} L _ {f, \mathcal {D}} \left(h _ {\mathbf {w} ^ {(t - 1)}}\right)
+$$
+
+We compare the behavior of gradient-descent, when learning two possible neural network architectures: a convolutional network (CNN) and a fully-connected network (FCN).
+
+Definition 1. A convolutional network $h_{\mathbf{u},W,\mathbf{b}}$ is defined as follows:
+
+$$
+h _ {\mathbf {u}, W, b} (\mathbf {x}) = \sum_ {j = 1} ^ {n - k} \left\langle \mathbf {u} ^ {(j)}, \sigma (W \mathbf {x} _ {j \dots j + k - 1} + \mathbf {b}) \right\rangle
+$$
+
+for activation function $\sigma$, with kernel $W \in \mathbb{R}^{q \times k}$, bias $\mathbf{b} \in \mathbb{R}^q$ and readout layer $\mathbf{u}^{(1)}, \ldots, \mathbf{u}^{(n-k)} \in \mathbb{R}^q$. Note that this is a standard depth-2 CNN with kernel size $k$, stride 1 and $q$ filters.
+
+Definition 2. A fully-connected network $h_{\mathbf{u},\mathbf{w},\mathbf{b}}$ is defined as follows:
+
+$$
+h _ {\mathbf {u}, \mathbf {w}, \mathbf {b}} (\mathbf {x}) = \sum_ {i = 1} ^ {q} u _ {i} \sigma \left(\left\langle \mathbf {w} ^ {(i)}, \mathbf {x} \right\rangle + b _ {i}\right)
+$$
+
+for activation function $\sigma$ , first layer $\mathbf{w}^{(1)}, \ldots, \mathbf{w}^{(q)} \in \mathbb{R}^n$ , bias $\mathbf{b} \in \mathbb{R}^q$ and second layer $\mathbf{u} \in \mathbb{R}^q$ .
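The two forward passes of Definitions 1 and 2 can be written as short numpy sketches. This is an illustration only; ReLU is used for $\sigma$ here as an assumption (the definitions leave the activation generic), and all sizes are arbitrary:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def cnn_forward(x, U, W, b):
    """Depth-2 CNN of Definition 1: a single shared kernel W (q x k)
    slides over x with stride 1; U[j] is the readout vector of window j."""
    q, k = W.shape
    n = x.shape[0]
    return sum(U[j] @ relu(W @ x[j:j + k] + b) for j in range(n - k))

def fcn_forward(x, u, Ws, b):
    """Depth-2 FCN of Definition 2: each of the q neurons sees the
    whole input through its own weight vector Ws[i] (n,)."""
    return u @ relu(Ws @ x + b)

rng = np.random.default_rng(0)
n, k, q = 16, 4, 8
x = rng.choice([-1.0, 1.0], size=n)
cnn_out = cnn_forward(x, rng.normal(size=(n - k, q)),
                      rng.normal(size=(q, k)), rng.normal(size=q))
fcn_out = fcn_forward(x, rng.normal(size=q),
                      rng.normal(size=(q, n)), rng.normal(size=q))
print(cnn_out, fcn_out)
```

Note the parameter counts: the CNN's hidden layer has $qk$ shared weights, while each FCN neuron carries a full $n$-dimensional weight vector.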
+
+We demonstrate the advantage of CNNs over FCNs by observing a problem that can be learned using CNNs, but is hard to learn using FCNs. We call this problem the $k$ -pattern problem:
+
+Definition 3. A function $f: \mathcal{X} \to \mathcal{Y}$ is a $k$ -pattern, if for some $g: \{\pm 1\}^k \to \mathcal{Y}$ and index $j^*$ :
+
+$$
+f (\mathbf {x}) = g \left(x _ {j ^ {*} \dots j ^ {*} + k - 1}\right)
+$$
+
+Namely, a $k$ -pattern is a function that depends only on a small pattern of consecutive bits of the input. The $k$ -pattern problem is the problem of learning $k$ -patterns: for some $k$ -pattern $f$ and some distribution $\mathcal{D}$ over $\mathcal{X}$ , given access to $\mathcal{D}$ labeled by $f$ , find a hypothesis $h$ with $L_{f,\mathcal{D}}(h) \leq \epsilon$ . We note that a similar problem has been studied in Golovnev et al. (2017), providing results on PAC learnability of a related target class.
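A $k$-pattern is easy to realize as a data-generating process. In the sketch below (the lookup table $g$, the window position `j_star`, and all sizes are arbitrary illustrative choices), the label depends only on $k$ consecutive bits:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 3
j_star = 7                                  # start of the pattern window
table = rng.choice([-1, 1], size=2 ** k)    # an arbitrary g: {+-1}^k -> {+-1}

def k_pattern(x):
    """f(x) = g(x_{j*..j*+k-1}): the label depends only on k consecutive bits."""
    window = x[j_star:j_star + k]
    idx = int("".join("1" if b > 0 else "0" for b in window), 2)
    return table[idx]

X = rng.choice([-1, 1], size=(5, n))
labels = [k_pattern(x) for x in X]
print(labels)
```

Flipping any bit outside the window leaves the label unchanged, which is exactly the locality the paper's separation result exploits.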
+
+# 3 CNNS EFFICIENTLY LEARN $(\log n)$ -PATTERNS
+
+The main result in this section shows that gradient-descent can learn $k$ -patterns when training convolutional networks for poly $(2^k, n)$ iterations, and when the network has poly $(2^k, n)$ neurons:
+
+Theorem 4. Assume we uniformly initialize $W^{(0)} \sim \{\pm 1 / k\}^{q \times k}$, $b_{i} = 1 / k - 1$ and $\mathbf{u}^{(0,j)} = 0$ for every $j$. Assume the activation $\sigma$ satisfies $|\sigma| \leq c$, $|\sigma'| \leq 1$, for some constant $c$. Fix some $\delta > 0$, some $k$ -pattern $f$ and some distribution $\mathcal{D}$ over $\mathcal{X}$. Then, if $q > 2^{k + 3}\log (2^k /\delta)$, with probability at least $1 - \delta$ over the initialization, when training a convolutional network $h_{\mathbf{u},W,\mathbf{b}}$ using gradient descent with $\eta = \frac{\sqrt{n}}{\sqrt{q}T}$ we have:
+
+$$
+\frac {1}{T} \sum_ {t = 1} ^ {T} L _ {f, \mathcal {D}} (h _ {\mathbf {u} ^ {(t)}, W ^ {(t)}, b}) \leq \frac {2 c n ^ {2} k ^ {2} 2 ^ {k}}{q} + \frac {2 (2 ^ {k} k) ^ {2}}{\sqrt {q n}} + \frac {c ^ {2} n ^ {1 . 5} \sqrt {q}}{T}
+$$
+
+Before we prove the theorem, observe that the above immediately implies that when $k = O(\log n)$ , gradient-descent can efficiently learn to solve the $k$ -pattern problem, when training a CNN:
+
+Corollary 5. Let $k = O(\log n)$ . Then, running $GD$ on a CNN with $q = O(\epsilon^{-2}n^{3}\log^{2}n)$ neurons for $T = O(\epsilon^{-2}n^{3}\log n)$ iterations, using a sample $S\sim \mathcal{D}$ of size $O(\epsilon^{-2}nkq\log (nkq / \delta))$ , learns the $k$ -pattern problem up to accuracy $\epsilon$ w.p. $\geq 1 - \delta$ .
+
+Proof. Sample $S \sim \mathcal{D}$, and let $\widehat{\mathcal{D}}$ be the uniform distribution over $S$. Then, from Theorem 4 and the choice of $q$ and $T$ there exists $t \in [T]$ with $L_{f,\widehat{\mathcal{D}}} (h_{\mathbf{u}^{(t)}, W^{(t)}, b}) \leq \epsilon / 2$, i.e., GD finds a hypothesis with train loss at most $\epsilon / 2$. Now, using the fact that the VC dimension of depth-2 ReLU networks with $W$ weights is $O(W \log W)$ (see Bartlett et al. (2019)), we can bound the generalization gap by $\epsilon / 2$.
+
+To prove Theorem 4, we show that, for a large enough CNN, the $k$ -pattern problem becomes linearly separable, after applying the first layer of the randomly initialized CNN:
+
+Lemma 6. Assume we uniformly initialize $W \sim \{\pm 1 / k\}^{q \times k}$ and $b_{i} = 1 / k - 1$ . Fix some $\delta > 0$ . Then if $q > 2^{k + 3}\log (2^k /\delta)$ , w.p. $\geq 1 - \delta$ over the choice of $W$ , for every $k$ -pattern $f$ there exist $\mathbf{u}^{*(1)}, \ldots, \mathbf{u}^{*(n - k)} \in \mathbb{R}^q$ with $\| \mathbf{u}^{*(j^*)}\| \leq \frac{2^{k + 1}k}{\sqrt{q}}$ and $\| \mathbf{u}^{*(j)}\| = 0$ for $j \neq j^*$ , s.t. $h_{\mathbf{u}^*,W,b} = f(\mathbf{x})$ .
+
+Proof. Fix some $\mathbf{z} \in \{\pm 1\}^k$, then for every $\mathbf{w}^{(i)} \sim \{\pm 1/k\}^k$, we have: $\mathbb{P}\left[\mathrm{sign}(\mathbf{w}^{(i)}) = \mathbf{z}\right] = 2^{-k}$. Denote by $J_{\mathbf{z}} \subseteq [q]$ the subset of indexes $i$ satisfying $\mathrm{sign}(\mathbf{w}^{(i)}) = \mathbf{z}$, and note that $\mathbb{E}_W|J_{\mathbf{z}}| = q2^{-k}$. From the Chernoff bound:
+
+$$
+\mathbb {P} \left[ | J _ {\mathbf {z}} | \leq q 2 ^ {- k} / 2 \right] \leq e ^ {- q 2 ^ {- k} / 8} \leq \delta 2 ^ {- k}
+$$
+
+by choosing $q > 2^{k + 3}\log (2^k /\delta)$ . So, using the union bound, w.p. at least $1 - \delta$ , for every $\mathbf{z} \in \{\pm 1\}^k$ we have $|J_{\mathbf{z}}| \geq q2^{-k - 1}$ . By the choice of $b_i$ we have $\sigma (\langle \mathbf{w}^{(i)},\mathbf{z}\rangle +b_i) = (1 / k)\mathbf{1}\{\mathrm{sign}\mathbf{w}^{(i)} = \mathbf{z}\}$ .
+
+Now, fix some $k$ -pattern $f$ , where $f(\mathbf{x}) = g(\mathbf{x}_{j^*,\ldots,j^* +k - 1})$ . For every $i \in J_{\mathbf{z}}$ we choose $\mathbf{u}_i^{*(j^*)} = \frac{k}{|J_{\mathbf{z}}|} g(\mathbf{z})$ and $\mathbf{u}^{*(j)} = 0$ for every $j \neq j^*$ . Therefore, we get:
+
+$$
+\begin{array}{l} h_{\mathbf{u}^{*},W,b}(\mathbf{x}) = \sum_{j = 1}^{n - k}\Big\langle \mathbf{u}^{*(j)},\sigma (W\mathbf{x}_{j\dots j + k - 1} + \mathbf{b})\Big\rangle = \sum_{\substack{\mathbf{z}\in \{\pm 1\}^{k}\\ i\in J_{\mathbf{z}}}}\mathbf{u}_{i}^{*(j^{*})}\sigma \left(\left\langle \mathbf{w}^{(i)},\mathbf{x}_{j^{*}\dots j^{*} + k - 1}\right\rangle +b_{i}\right) \\ = \sum_ {\mathbf {z} \in \{\pm 1 \} ^ {k}} \mathbf {1} \left\{\mathbf {z} = \mathbf {x} _ {j ^ {*} \dots j ^ {*} + k - 1} \right\} g (\mathbf {z}) = g \left(x _ {j ^ {*} \dots j ^ {*} + k - 1}\right) = f (\mathbf {x}) \\ \end{array}
+$$
+
+Note that by definition of $\mathbf{u}^{*(j^*)}$ we have $\left\| \mathbf{u}^{*(j^*)}\right\| ^2 = \sum_{\mathbf{z}\in \{\pm 1\}^k}\sum_{i\in J_{\mathbf{z}}}\frac{k^2}{|J_{\mathbf{z}}|^2}\leq 4\frac{(2^k k)^2}{q}$.
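The construction in this proof can be verified numerically. The sketch below assumes $\sigma$ is the ReLU (so that the indicator identity above holds exactly); $n$, $k$, $q$, `j_star` and the seed are arbitrary illustrative choices:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
n, k, q = 12, 3, 512
j_star = 4
W = rng.choice([-1.0 / k, 1.0 / k], size=(q, k))    # uniform {+-1/k} init
b = 1.0 / k - 1.0                                   # b_i = 1/k - 1

# With this choice, relu(<w_i, z> + b) = (1/k) * 1{sign(w_i) = z}:
# the hidden layer one-hot encodes the window's sign pattern.
g = rng.choice([-1.0, 1.0], size=2 ** k)            # arbitrary pattern labels

def pattern_index(z):
    return int("".join("1" if v > 0 else "0" for v in z), 2)

signs = np.sign(W)
U = np.zeros((n - k, q))
for idx in range(2 ** k):
    z = np.array([1.0 if c == "1" else -1.0 for c in format(idx, f"0{k}b")])
    J = np.where((signs == z).all(axis=1))[0]       # rows with sign(w_i) = z
    assert len(J) > 0                               # holds w.h.p. for large q
    U[j_star, J] = k * g[idx] / len(J)              # u*_i = k g(z) / |J_z|

def h(x):
    return sum(U[j] @ relu(W @ x[j:j + k] + b) for j in range(n - k))

x = rng.choice([-1.0, 1.0], size=n)
f_x = g[pattern_index(x[j_star:j_star + k])]
print(np.isclose(h(x), f_x))                        # True: exact interpolation
```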
+
+Comment 7. Admittedly, the initialization assumed above is non-standard, but is favorable for the analysis. A similar result can be shown for more natural initialization (e.g., normal distribution), using known results from random features analysis (for example, Bresler & Nagaraj (2020)).
+
+From Lemma 6 and known results on learning linear classifiers with gradient-descent, solving the $k$ -pattern problem can be achieved by optimizing the second layer of a randomly initialized CNN. However, since in gradient-descent we optimize both layers of the network, we need a more refined analysis to show that full gradient-descent learns to solve the problem. We follow the scheme introduced in Daniely (2017), adapting it to our setting.
+
+We start by showing that the first layer of the network does not deviate from the initialization during the training:
+
+Lemma 8. We have $\left\| \mathbf{u}^{(T,j)}\right\| \leq \eta T\sqrt{q}$ for all $j\in [n - k]$ , and $\left\| W^{(0)} - W^{(T)}\right\| \leq c\eta^2 T^2 n\sqrt{qk}$ .
+
+We can now bound the difference in the loss when the weights of the first layer change during the training process:
+
+Lemma 9. For every $\mathbf{u}^*$ we have:
+
+$$
+\left| L _ {f, \mathcal {D}} \left(h _ {\mathbf {u} ^ {*}, W ^ {(T)}, b}\right) - L _ {f, \mathcal {D}} \left(h _ {\mathbf {u} ^ {*}, W ^ {(0)}, b}\right) \right| \leq c \eta^ {2} T ^ {2} n k \sqrt {q} \sum_ {j = 1} ^ {n - k} \left\| \mathbf {u} ^ {* (j)} \right\|
+$$
+
+The proofs of Lemma 8 and Lemma 9 are shown in the appendix.
+
+Finally, we use the following result on the convergence of online gradient-descent to show that gradient-descent converges to a good solution. The proof of the Theorem is given in Shalev-Shwartz et al. (2011), with an adaptation to a similar setting in Daniely & Malach (2020).
+
+Theorem 10. (Online Gradient Descent) Fix some $\eta$ , and let $f_{1},\ldots,f_{T}$ be some sequence of convex functions. Fix some $\theta_{1}$ , and update $\theta_{t+1} = \theta_{t} - \eta\nabla f_{t}(\theta_{t})$ . Then for every $\theta^{*}$ the following holds:
+
+$$
+\frac {1}{T} \sum_ {t = 1} ^ {T} f _ {t} (\theta_ {t}) \leq \frac {1}{T} \sum_ {t = 1} ^ {T} f _ {t} (\theta^ {*}) + \frac {1}{2 \eta T} \| \theta^ {*} \| ^ {2} + \| \theta_ {1} \| \frac {1}{T} \sum_ {t = 1} ^ {T} \| \nabla f _ {t} (\theta_ {t}) \| + \eta \frac {1}{T} \sum_ {t = 1} ^ {T} \| \nabla f _ {t} (\theta_ {t}) \| ^ {2}
+$$
+
+Proof of Theorem 4. From Lemma 6, with probability at least $1 - \delta$ over the initialization, there exist $\mathbf{u}^{*(1)},\ldots ,\mathbf{u}^{*(n - k)}\in \mathbb{R}^q$ with $\| \mathbf{u}^{*(j^*)}\| \leq \frac{2^{k + 1}k}{\sqrt{q}}$ and $\| \mathbf{u}^{*(j)}\| = 0$ for $j \neq j^*$ such that $h_{\mathbf{u}^{*},W^{(0)},b}(\mathbf{x}) = f(\mathbf{x})$, and so $L_{f,\mathcal{D}}(h_{\mathbf{u}^{*},W^{(0)},b}) = 0$. Using Theorem 10, since $L_{f,\mathcal{D}}(h_{\mathbf{u},W,b})$ is convex with respect to $\mathbf{u}$, we have:
+
+$$
+\begin{array}{l} \frac {1}{T} \sum_ {t = 1} ^ {T} L _ {f, \mathcal {D}} (h _ {\mathbf {u} ^ {(t)}, W ^ {(t)}, b}) \\ \leq \frac {1}{T} \sum_ {t = 1} ^ {T} L _ {f, \mathcal {D}} (h _ {\mathbf {u} ^ {*}, W ^ {(t)}, b}) + \frac {1}{2 \eta T} \sum_ {j = 1} ^ {n - k} \left\| \mathbf {u} ^ {* (j)} \right\| ^ {2} + \eta \frac {1}{T} \sum_ {t = 1} ^ {T} \left\| \frac {\partial}{\partial \mathbf {u}} L _ {f, \mathcal {D}} (h _ {\mathbf {u} ^ {(t)}, W ^ {(t)}, b}) \right\| ^ {2} \\ \leq \frac {1}{T} \sum_ {t = 1} ^ {T} L _ {f, \mathcal {D}} \left(h _ {\mathbf {u} ^ {*}, W ^ {(t)}, b}\right) + \frac {2 \left(2 ^ {k} k\right) ^ {2}}{q \eta T} + c ^ {2} \eta n q = (*) \\ \end{array}
+$$
+
+Using Lemma 9 we have:
+
+$$
+\begin{array}{l} (*) \leq \frac {1}{T} \sum_ {t = 1} ^ {T} L _ {f, \mathcal {D}} \left(h _ {\mathbf {u} ^ {*}, W ^ {(0)}, b}\right) + c \eta^ {2} T ^ {2} n k \sqrt {q} \sum_ {j = 1} ^ {n - k} \left\| \mathbf {u} ^ {* (j)} \right\| + \frac {2 \left(2 ^ {k} k\right) ^ {2}}{q \eta T} + c ^ {2} \eta n q \\ \leq 2 c \eta^ {2} T ^ {2} n k ^ {2} 2 ^ {k} + \frac {2 (2 ^ {k} k) ^ {2}}{q \eta T} + c ^ {2} \eta n q \\ \end{array}
+$$
+
+Now, choosing $\eta = \frac{\sqrt{n}}{\sqrt{q}T}$ we get the required bound.
+
+
+
+# 3.1 ANALYSIS OF LOCALLY-CONNECTED NETWORKS
+
+The above result shows that polynomial-size CNNs can learn $(\log n)$ -patterns in polynomial time. As discussed in the introduction, the success of CNNs can be attributed to either the weight sharing
+
+or the locality-bias of the architecture. While weight sharing may contribute to the success of CNNs in some cases, we note that it gives no benefit when learning $k$ -patterns. Indeed, we can show a similar positive result for locally-connected networks (LCN), which have no weight sharing.
+
+Observe the following definition of a LCN with one hidden-layer:
+
+Definition 11. A locally-connected network $h_{\mathbf{u}, \mathbf{W}, \mathbf{b}}$ is defined as follows:
+
+$$
+h _ {\mathbf {u}, \mathbf {W}, \mathbf {b}} (\mathbf {x}) = \sum_ {j = 1} ^ {n - k} \left\langle \mathbf {u} ^ {(j)}, \sigma (W ^ {(j)} \mathbf {x} _ {j \dots j + k - 1} + \mathbf {b} ^ {(j)}) \right\rangle
+$$
+
+for some activation function $\sigma$, with per-window kernels $W^{(1)}, \ldots, W^{(n-k)} \in \mathbb{R}^{q \times k}$, biases $\mathbf{b}^{(1)}, \ldots, \mathbf{b}^{(n-k)} \in \mathbb{R}^q$ and readout layer $\mathbf{u}^{(1)}, \ldots, \mathbf{u}^{(n-k)} \in \mathbb{R}^q$.
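The LCN forward pass of Definition 11 can be sketched in numpy; it mirrors the CNN of Definition 1 except that every window has its own kernel and bias (ReLU for $\sigma$ and all sizes are illustrative assumptions):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def lcn_forward(x, U, Ws, Bs):
    """LCN of Definition 11: like the CNN of Definition 1 but with an
    unshared kernel Ws[j] (q x k) and bias Bs[j] (q,) per window j."""
    n = x.shape[0]
    k = Ws[0].shape[1]
    return sum(U[j] @ relu(Ws[j] @ x[j:j + k] + Bs[j])
               for j in range(n - k))

rng = np.random.default_rng(0)
n, k, q = 12, 3, 6
x = rng.choice([-1.0, 1.0], size=n)
out = lcn_forward(x,
                  rng.normal(size=(n - k, q)),
                  rng.normal(size=(n - k, q, k)),
                  rng.normal(size=(n - k, q)))
print(out)
```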
+
+Note that the only difference from Definition 1 is the fact that the weights of the first layer are not shared. It is easy to verify that Theorem 4 can be modified in order to show a similar positive result for LCN architectures. Specifically, we note that in Lemma 6, which is the core of the Theorem, we do not use the fact that the weights in the first layer are shared. So, LCNs are "as good as" CNNs for solving the $k$ -pattern problem. This of course does not resolve the question of comparing LCN and CNN architectures, which we leave for future work.
+
+# 4 LEARNING $(\log n)$ -PATTERNS WITH FCN
+
+In the previous section we showed that patterns of size $\log n$ are efficiently learnable, when using CNNs trained with gradient-descent. In this section we show that, in contrast, gradient-descent fails to learn $(\log n)$ -patterns using fully-connected networks, unless the size of the network is superpolynomial (namely, unless the network is of size $n^{\Omega (\log n)}$ ). For this, we will show an instance of the $k$ -pattern problem that is hard for fully connected networks.
+
+We take $\mathcal{D}$ to be the uniform distribution over $\mathcal{X}$ , and let $f(\mathbf{x}) = \prod_{i\in I}x_i$ , where $I$ is some set of $k$ consecutive bits. Specifically, we take $I = \{1,\dots ,k\}$ , although the same proof holds for any choice of $I$ . In this case, we show that the initial gradient of the network is very small, when a fully-connected network is initialized from a permutation invariant distribution.
+
+Theorem 12. Assume $|\sigma| \leq c, |\sigma'| \leq 1$ . Let $\mathcal{W}$ be some permutation invariant distribution over $\mathbb{R}^n$ , and assume we initialize $\mathbf{w}^{(1)}, \ldots, \mathbf{w}^{(q)} \sim \mathcal{W}$ and initialize $\mathbf{u}$ such that $|u_i| \leq 1$ and for all $\mathbf{x}$ we have $h_{\mathbf{u},\mathbf{w}}(\mathbf{x}) \in [-1,1]$ . Then, the following holds:
+
+$$
+\underset {\mathbf {w} \sim \mathcal {W}} {\mathbb {E}} \left\| \frac {\partial}{\partial W} L _ {f, \mathcal {D}} (h _ {\mathbf {u}, \mathbf {w}, \mathbf {b}}) \right\| _ {2} ^ {2} \leq q n \cdot \min \left\{ \binom {n - 1}{k} ^ {- 1}, \binom {n - 1}{k - 1} ^ {- 1} \right\}
+$$
+
+$$
+\underset {\mathbf {w} \sim \mathcal {W}} {\mathbb {E}} \left\| \frac {\partial}{\partial \mathbf {u}} L _ {f, \mathcal {D}} (h _ {\mathbf {u}, \mathbf {w}, \mathbf {b}}) \right\| _ {2} ^ {2} \leq c ^ {2} q \binom {n}{k} ^ {- 1}
+$$
+
+From the above result, if $k = \Omega(\log n)$ then the average norm of initial gradient is $qn^{-\Omega(\log n)}$ . Therefore, unless $q = n^{\Omega(\log n)}$ , we get that with overwhelming probability over the randomness of the initialization, the gradient is extremely small. In fact, if we run GD on a finite-precision machine, the true population gradient is effectively zero. A formal argument relating such bound on the gradient norm to the failure of gradient-based algorithms has been shown in various previous works (e.g. Shamir (2018); Abbe & Sandon (2018); Malach & Shalev-Shwartz (2020)).
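The vanishing gradient predicted by Theorem 12 is easy to observe numerically. The sketch below estimates the initial first-layer gradient norm of a depth-2 FCN for the consecutive $k$-parity target versus an easy single-bit target, under a Gaussian (permutation-invariant) initialization with a sigmoid activation and zero bias; all of these concrete choices are illustrative assumptions consistent with the theorem's conditions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, q, m = 20, 8, 16, 100_000
X = rng.choice([-1.0, 1.0], size=(m, n))         # x ~ uniform over {+-1}^n

Ws = rng.normal(size=(q, n)) / np.sqrt(n)        # permutation-invariant init
u = rng.choice([-1.0, 1.0], size=q) / q          # |u_i| <= 1, bounded output
s = 1.0 / (1.0 + np.exp(-(X @ Ws.T)))            # sigmoid activations (m, q)
sig_prime = s * (1.0 - s)                        # |sigma'| <= 1/4 <= 1

def grad_norm(f_vals):
    """Monte-Carlo estimate of the norm of the first-layer gradient:
    entry (i, j) estimates E[x_j * u_i * sigma'(<w_i, x>) * f(x)]."""
    coef = u * sig_prime * f_vals[:, None]       # (m, q)
    return np.linalg.norm(coef.T @ X / m)

parity = np.prod(X[:, :k], axis=1)               # hard: parity of k consecutive bits
single = X[:, 0]                                 # easy: a single bit
g_parity, g_single = grad_norm(parity), grad_norm(single)
print(g_parity, g_single)                        # parity gradient is far smaller
```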
+
+The key for proving Theorem 12 is the following observation: since the first layer of the FCN is initialized from a symmetric distribution, if learning some function that relies on $k$ bits of the input is hard, then learning any function that relies on $k$ bits is hard. Using Fourier analysis (e.g., Blum et al. (1994); Kearns (1998); Shalev-Shwartz et al. (2017a)), we can show that learning $k$ -parities (functions of the form $\mathbf{x} \mapsto \prod_{i \in I} x_i$ ) using gradient-descent is hard. Since learning an arbitrary $k$ -parity is hard, any specific $k$ -parity, and in particular a parity of $k$ consecutive bits, is also hard. That is, since the first layer is initialized symmetrically, training a FCN on the original input is equivalent to training a FCN on an input where all the input bits are randomly permuted. So, for a FCN, learning a function that depends on consecutive bits is just as hard as learning a function that depends on arbitrary bits (a task that is known to be hard).
+
+Proof of Theorem 12. Denote $\chi_{I'} = \prod_{i \in I'} x_i$, so $f(\mathbf{x}) = \chi_I(\mathbf{x})$ with $I = \{1, \ldots, k\}$. We begin by calculating the gradient w.r.t. $\mathbf{w}_j^{(i)}$:
+
+$$
+\frac {\partial}{\partial \mathbf {w} _ {j} ^ {(i)}} L _ {f, \mathcal {D}} (h _ {\mathbf {u}, \mathbf {w}, \mathbf {b}}) = \underset {\mathbf {x} \sim \mathcal {D}} {\mathbb {E}} \left[ \frac {\partial}{\partial \mathbf {w} _ {j} ^ {(i)}} \ell (h _ {\mathbf {u}, \mathbf {w}, \mathbf {b}} (\mathbf {x}), f (\mathbf {x})) \right] = - \underset {\mathbf {x} \sim \mathcal {D}} {\mathbb {E}} \left[ x _ {j} u _ {i} \sigma^ {\prime} \left(\left< \mathbf {w} ^ {(i)}, \mathbf {x} \right> + b _ {i}\right) \chi_ {I} (\mathbf {x}) \right]
+$$
+
+Fix some permutation $\pi : [n] \to [n]$. For some vector $\mathbf{x} \in \mathbb{R}^n$ we denote $\pi(\mathbf{x}) = (x_{\pi(1)}, \ldots, x_{\pi(n)})$, and for some subset $I \subseteq [n]$ we denote $\pi(I) = \cup_{j \in I} \{\pi(j)\}$. Notice that for all $\mathbf{x}, \mathbf{z} \in \mathbb{R}^n$ we have: $\chi_I(\pi(\mathbf{x})) = \chi_{\pi(I)}(\mathbf{x})$ and $\langle \pi(\mathbf{x}), \mathbf{z} \rangle = \langle \mathbf{x}, \pi^{-1}(\mathbf{z}) \rangle$. Denote $\pi(h_{\mathbf{u}, \mathbf{w}, \mathbf{b}})(\mathbf{x}) = \sum_{i=1}^{q} u_i \sigma(\langle \pi(\mathbf{w}^{(i)}), \mathbf{x} \rangle + b_i)$. Denote by $\pi(\mathcal{D})$ the distribution of $\pi(\mathbf{x})$ where $\mathbf{x} \sim \mathcal{D}$. Notice that since $\mathcal{D}$ is the uniform distribution, we have $\pi(\mathcal{D}) = \mathcal{D}$. From all the above, for every permutation $\pi$ with $\pi(j) = j$ we have:
+
+$$
+\begin{array}{l} - \frac {\partial}{\partial \mathbf {w} _ {j} ^ {(i)}} L _ {\chi_ {\pi (I)}, \mathcal {D}} \left(h _ {\mathbf {u}, \mathbf {w}, \mathbf {b}}\right) = \underset {\mathbf {x} \sim \mathcal {D}} {\mathbb {E}} \left[ x _ {j} u _ {i} \sigma^ {\prime} \left(\left\langle \mathbf {w} ^ {(i)}, \mathbf {x} \right\rangle + b _ {i}\right) \chi_ {\pi (I)} (\mathbf {x}) \right] \\ = \underset {\mathbf {x} \sim \pi (\mathcal {D})} {\mathbb {E}} \left[ x _ {j} u _ {i} \sigma^ {\prime} \left(\left\langle \mathbf {w} ^ {(i)}, \pi^ {- 1} (\mathbf {x}) \right\rangle + b _ {i}\right) \chi_ {I} (\mathbf {x}) \right] \\ = \underset {\mathbf {x} \sim \mathcal {D}} {\mathbb {E}} \left[ x _ {j} u _ {i} \sigma^ {\prime} \left(\left\langle \pi \left(\mathbf {w} ^ {(i)}\right), \mathbf {x} \right\rangle + b _ {i}\right) \chi_ {I} (\mathbf {x}) \right] = - \frac {\partial}{\partial \mathbf {w} _ {j} ^ {(i)}} L _ {\chi_ {I}, \mathcal {D}} (\pi (h _ {\mathbf {u}, \mathbf {w}, \mathbf {b}})) \\ \end{array}
+$$
+
+Fix some $I \subseteq [n]$ with $|I| = k$ and $j \in [n]$ . Now, let $S_{j}$ be a set of permutations satisfying:
+
+1. For all $\pi_1, \pi_2 \in S_j$ with $\pi_1 \neq \pi_2$ we have $\pi_1(I) \neq \pi_2(I)$ .
+2. For all $\pi \in S_j$ we have $\pi(j) = j$ .
+
+Note that if $j \notin I$ then the maximal size of such an $S_{j}$ is $\binom{n-1}{k}$ , and if $j \in I$ then the maximal size is $\binom{n-1}{k-1}$ . Denote $g_{j}(\mathbf{x}) = x_{j}u_{i}\sigma'(\langle \mathbf{w}^{(i)},\mathbf{x}\rangle +b_{i})$ . We denote the inner product $\langle \psi ,\phi \rangle_{\mathcal{D}} = \mathbb{E}_{\mathbf{x}\sim \mathcal{D}}[\psi (\mathbf{x})\phi (\mathbf{x})]$ and the induced norm $\| \psi \|_{\mathcal{D}} = \sqrt{\langle\psi,\psi\rangle_{\mathcal{D}}}$ . Since $\{\chi_{I'}\}_{I'\subseteq [n]}$ is an orthonormal basis w.r.t. $\langle \cdot ,\cdot \rangle_{\mathcal{D}}$ , Parseval's equality gives:
+
+$$
+\begin{aligned} \sum_{\pi \in S_j} \left(\frac{\partial}{\partial \mathbf{w}_j^{(i)}} L_{\chi_I, \mathcal{D}}(\pi(h_{\mathbf{u}, \mathbf{w}, \mathbf{b}}))\right)^2 &= \sum_{\pi \in S_j} \left(\frac{\partial}{\partial \mathbf{w}_j^{(i)}} L_{\chi_{\pi(I)}, \mathcal{D}}(h_{\mathbf{u}, \mathbf{w}, \mathbf{b}})\right)^2 \\ &= \sum_{\pi \in S_j} \left\langle g_j, \chi_{\pi(I)} \right\rangle_{\mathcal{D}}^2 \leq \sum_{I' \subseteq [n]} \left\langle g_j, \chi_{I'} \right\rangle_{\mathcal{D}}^2 = \| g_j \|_{\mathcal{D}}^2 \leq 1 \end{aligned}
+$$
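
As a sanity check, both the orthonormality of the characters under the uniform distribution on $\{\pm 1\}^n$ and the resulting Parseval bound can be verified by brute force on a small cube. This sketch (with an arbitrary bounded test function $g$) is illustrative only:

```python
import itertools

n = 4
cube = list(itertools.product([-1, 1], repeat=n))  # support of D

def chi(I, x):
    # parity character: chi_I(x) = prod_{i in I} x_i
    out = 1
    for i in I:
        out *= x[i]
    return out

def inner(f, g):
    # <f, g>_D = E_{x ~ D}[f(x) g(x)] under the uniform distribution D
    return sum(f(x) * g(x) for x in cube) / len(cube)

subsets = [s for r in range(n + 1) for s in itertools.combinations(range(n), r)]

# {chi_I} is an orthonormal basis w.r.t. <.,.>_D
for A in subsets:
    for B in subsets:
        expected = 1.0 if A == B else 0.0
        assert abs(inner(lambda x: chi(A, x), lambda x: chi(B, x)) - expected) < 1e-9

# Parseval: the squared Fourier coefficients of a bounded g sum to ||g||_D^2
g = lambda x: max(0.3 * x[0] + 0.2 * x[1] * x[2], 0.0)
coeffs = sum(inner(g, lambda x, A=A: chi(A, x)) ** 2 for A in subsets)
assert abs(coeffs - inner(g, g)) < 1e-9
```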
+
+So, from the above we get that, taking $S_{j}$ of maximal size:
+
+$$
+\underset{\pi \sim S_j}{\mathbb{E}} \left(\frac{\partial}{\partial \mathbf{w}_j^{(i)}} L_{\chi_I, \mathcal{D}}(\pi(h_{\mathbf{u}, \mathbf{w}, \mathbf{b}}))\right)^2 \leq |S_j|^{-1} \leq \min\left\{\binom{n-1}{k}, \binom{n-1}{k-1}\right\}^{-1}
+$$
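
The maximal sizes of $S_j$ used in this bound can be confirmed by exhaustive enumeration for small $n$ (an illustrative check, not part of the argument):

```python
from itertools import permutations
from math import comb

def max_S_j_size(n, I, j):
    # count the distinct images pi(I) over all permutations pi of [n] with pi(j) = j
    images = set()
    for pi in permutations(range(n)):
        if pi[j] == j:
            images.add(frozenset(pi[i] for i in I))
    return len(images)

n, I = 6, {0, 1}                      # k = |I| = 2
print(max_S_j_size(n, I, j=5))        # j not in I: C(n-1, k)   = C(5, 2) = 10
print(max_S_j_size(n, I, j=0))        # j in I:     C(n-1, k-1) = C(5, 1) = 5
```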
+
+Now, for some permutation invariant distribution of weights $\mathcal{W}$ we have:
+
+$$
+\underset{\mathbf{w} \sim \mathcal{W}}{\mathbb{E}} \left(\frac{\partial}{\partial \mathbf{w}_j^{(i)}} L_{\chi_I, \mathcal{D}}\left(h_{\mathbf{u}, \mathbf{w}, \mathbf{b}}\right)\right)^2 = \underset{\mathbf{w} \sim \mathcal{W}}{\mathbb{E}}\, \underset{\pi \sim S_j}{\mathbb{E}} \left(\frac{\partial}{\partial \mathbf{w}_j^{(i)}} L_{\chi_I, \mathcal{D}}\left(\pi\left(h_{\mathbf{u}, \mathbf{w}, \mathbf{b}}\right)\right)\right)^2 \leq |S_j|^{-1}
+$$
+
+Summing over all neurons we get:
+
+$$
+\underset{\mathbf{w} \sim \mathcal{W}}{\mathbb{E}} \left\| \frac{\partial}{\partial W} L_{\chi_I, \mathcal{D}}(h_{\mathbf{u}, \mathbf{w}, \mathbf{b}}) \right\|_2^2 \leq qn \cdot \min\left\{\binom{n-1}{k}, \binom{n-1}{k-1}\right\}^{-1}
+$$
+
+We can use a similar argument to bound the gradient of $\mathbf{u}$ . We leave the details to the appendix.
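
To get a sense of the magnitude of this bound for $k = \log n$, we can evaluate it numerically. In the snippet below the width $q$ is a hypothetical value chosen for illustration, not a quantity taken from the analysis:

```python
from math import comb, log2

q = 1024  # hypothetical number of neurons, for illustration only
for n in (64, 256, 1024):
    k = int(log2(n))
    bound = q * n / min(comb(n - 1, k), comb(n - 1, k - 1))
    print(f"n={n:5d}  k={k:2d}  bound={bound:.3e}")
```

The bound shrinks super-polynomially in $n$, which is what makes the gradient signal vanish for the fully connected network.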
+
+Figure 3: Top: Performance of different architectures on size- $n$ MNIST sequences, where the label is determined by the parity of the central 3 digits. Bottom: MNIST sequences of varying length.
+
+# 5 NEURAL ARCHITECTURE SEARCH
+
+So far, we showed that while the $(\log n)$ -pattern problem can be solved efficiently using a CNN, this problem is hard for an FCN to solve. Since the CNN architecture is designed for processing consecutive patterns of the input, it can easily find the pattern that determines the label. The FCN, however, disregards the order of the input bits, and so it cannot exploit the fact that the bits which determine the label are consecutive. In other words, the FCN architecture needs to learn the order of the bits, while the CNN already encodes this order in the architecture.
+
+So, an FCN fails to recover the $k$ -pattern since it does not assume anything about the order of the input bits. But is it possible to recover the order of the bits prior to training the network? Can we apply some algorithm that searches for an optimal architecture to solve the $k$ -pattern problem? This motivation stands behind the thriving research field of neural architecture search algorithms (see Elsken et al. (2018) for a survey).
+
+Unfortunately, we claim that if the order of the bits is not known to the learner, no architecture search algorithm can help in solving the $k$ -pattern problem. To see this, it is enough to observe that when the order of the bits is unknown, the $k$ -pattern problem is equivalent to the $k$ -Junta problem: learning a function that depends on an arbitrary (not necessarily consecutive) set of $k$ bits of the input. Learning $k$ -Juntas is a well-studied problem in the learning theory literature (e.g., Mossel et al. (2003)). The best known algorithm for solving the $(\log n)$ -Junta problem runs in time $n^{O(\log n)}$ , and no poly-time algorithm is known for this problem. Moreover, if we consider statistical-query algorithms (a wide family of algorithms that only have access to estimates of query functions over the distribution, e.g., Blum et al. (2003)), then existing lower bounds show that the $(\log n)$ -Junta problem cannot be solved in polynomial time (Blum et al., 1994).
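
For intuition, the $n^{O(k)}$ enumeration underlying junta learning can be sketched as a brute-force consistency search over size- $k$ coordinate subsets. This toy sketch is far simpler than the algorithms cited above, and is only meant to illustrate why the running time is polynomial solely for constant $k$ :

```python
from itertools import combinations, product

def learn_junta(samples, n, k):
    """Brute-force k-junta learner: for every size-k subset S of coordinates,
    check whether the labels are a consistent function of x restricted to S.
    Time O(n^k * |samples|): polynomial only when k is a constant."""
    for S in combinations(range(n), k):
        table, consistent = {}, True
        for x, y in samples:
            key = tuple(x[i] for i in S)
            if table.setdefault(key, y) != y:
                consistent = False
                break
        if consistent:
            return S, table
    return None

# toy usage: the label is the parity of coordinates 1 and 4
samples = [(x, x[1] ^ x[4]) for x in product([0, 1], repeat=6)]
S, _ = learn_junta(samples, n=6, k=2)
print(S)  # -> (1, 4)
```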
+
+# 6 EXPERIMENTS
+
+In the previous sections we presented a simple learning problem that can be solved using CNNs and LCNs, but is hard to solve using FCNs. In this problem, the label is determined by a few consecutive bits of the input. In this section we show some experiments that validate our theoretical results. In these experiments, the input to the network is a sequence of $n$ MNIST digits, where each digit is scaled and cropped to a size of $24 \times 8$ . We then train three different network architectures: FCN, CNN and LCN. The CNN and LCN architectures have kernels of size $24 \times 24$ , so that 3 MNIST digits fit in a single kernel. In all the architectures we use a single hidden layer with 1024 neurons and ReLU activation. The networks are trained with the AdaDelta optimizer for 30 epochs.
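
The data construction can be sketched as follows. The digit crops are assumed to be $24 \times 8$ arrays already prepared as described above; the function name and interface are ours, not the authors' code:

```python
import numpy as np

def make_example(rng, digit_images, digit_labels, n):
    """Build one length-n sequence example: concatenate n digit crops
    (each 24x8) into a 24 x 8n image; the label is the parity of the sum
    of the 3 consecutive digits in the middle of the sequence."""
    idx = rng.integers(len(digit_labels), size=n)
    x = np.concatenate([digit_images[i] for i in idx], axis=1)
    mid = n // 2
    y = int(digit_labels[idx[mid - 1]] + digit_labels[idx[mid]] + digit_labels[idx[mid + 1]]) % 2
    return x, y

# toy usage with blank stand-in "digit" crops
rng = np.random.default_rng(0)
crops, labels = np.zeros((10, 24, 8)), np.arange(10)
x, y = make_example(rng, crops, labels, n=7)
print(x.shape)  # -> (24, 56)
```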
+
+
+Figure 4: $n$ -sequence MNIST with non-consecutive parity.
+
+In the first experiment, the label of the example is set to be the parity of the sum of the 3 consecutive digits located in the middle of the sequence. So, as in our theoretical analysis, the label is determined by a small area of consecutive bits of the input. Figure 3 shows the results of this experiment. As can clearly be seen, the CNN and LCN architectures achieve good performance regardless of the choice of $n$ , whereas the performance of the FCN architecture degrades critically for larger $n$ , reaching only chance-level performance when $n = 19$ . We also observe that LCN has a clear advantage over CNN in this task. As noted, our primary focus is on demonstrating the superiority of locality-based architectures, such as CNN and LCN, and we leave the comparison between the two to future work.
+
+Our second experiment is very similar to the first, but instead of taking the label to be the parity of 3 consecutive digits, we calculate the label based on 3 digits that are far apart. Namely, we take the parity of the first, middle and last digits of the sequence. The results of this experiment are shown in Figure 4. As can be seen, for small $n$ , FCN performs much better than CNN and LCN. This demonstrates that when we break the local structure, the advantage of CNN and LCN disappears, and using FCN becomes a better choice. However, for large $n$ , all architectures perform poorly.
+
+Acknowledgements: This research is supported by the European Research Council (TheoryDL project). We thank Tomaso Poggio for raising the main question tackled in this paper and for valuable discussions and comments.
+
+# REFERENCES
+
+Emmanuel Abbe and Colin Sandon. Provable limitations of deep learning. arXiv preprint arXiv:1812.06369, 2018.
+Peter L Bartlett, Nick Harvey, Christopher Liaw, and Abbas Mehrabian. Nearly-tight vc-dimension and pseudodimension bounds for piecewise linear neural networks. J. Mach. Learn. Res., 20(63):1-17, 2019.
+Avrim Blum, Merrick Furst, Jeffrey Jackson, Michael Kearns, Yishay Mansour, and Steven Rudich. Weakly learning dnf and characterizing statistical query learning using fourier analysis. In Proceedings of the twenty-sixth annual ACM symposium on Theory of computing, pp. 253-262, 1994.
+Avrim Blum, Adam Kalai, and Hal Wasserman. Noise-tolerant learning, the parity problem, and the statistical query model. Journal of the ACM (JACM), 50(4):506-519, 2003.
+Guy Bresler and Dheeraj Nagaraj. A corrective view of neural networks: Representation, memorization and learning. arXiv preprint arXiv:2002.00274, 2020.
+Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
+Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with gaussian inputs. arXiv preprint arXiv:1702.07966, 2017.
+Yu-hsin Chen, Ignacio Lopez-Moreno, Tara N Sainath, Mirkó Visontai, Raziel Alvarez, and Carolina Parada. Locally-connected and convolutional neural networks for small footprint speaker recognition. In Sixteenth Annual Conference of the International Speech Communication Association, 2015.
+Nadav Cohen and Amnon Shashua. Inductive bias of deep convolutional networks through pooling geometry. arXiv preprint arXiv:1605.06743, 2016.
+Nadav Cohen, Or Sharir, Yoav Levine, Ronen Tamari, David Yakira, and Amnon Shashua. Analysis and design of convolutional networks via hierarchical tensor decompositions. arXiv preprint arXiv:1705.02302, 2017.
+Amit Daniely. Sgd learns the conjugate kernel class of the network. In Advances in Neural Information Processing Systems, pp. 2422-2430, 2017.
+Amit Daniely and Eran Malach. Learning parities with neural networks. arXiv preprint arXiv:2002.07400, 2020.
+S Ben Driss, Mahmoud Soua, Rostom Kachouri, and Mohamed Akil. A comparison study between mlp and convolutional neural network models for character recognition. In Real-Time Image and Video Processing 2017, volume 10223, pp. 1022306. International Society for Optics and Photonics, 2017.
+Simon Du, Jason Lee, Yuandong Tian, Aarti Singh, and Barnabas Poczos. Gradient descent learns one-hidden-layer cnn: Don't be afraid of spurious local minima. In International Conference on Machine Learning, pp. 1339-1348, 2018.
+Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. arXiv preprint arXiv:1808.05377, 2018.
+Chrisantha Fernando, Dylan Banarse, Malcolm Reynolds, Frederic Besse, David Pfau, Max Jaderberg, Marc Lanctot, and Daan Wierstra. Convolution by evolution: Differentiable pattern producing networks. In Proceedings of the Genetic and Evolutionary Computation Conference 2016, pp. 109-116, 2016.
+Alexander Golovnev, Mika Göös, Daniel Reichman, and Igor Shinkar. String matching: Communication, circuits, and learning. arXiv preprint arXiv:1709.02034, 2017.
+Eric Kauderer-Abrams. Quantifying translation-invariance in convolutional neural networks. arXiv preprint arXiv:1801.01450, 2017.
+
+Osman Semih Kayhan and Jan C van Gemert. On translation invariance in cnns: Convolutional layers can exploit absolute spatial location. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14274-14285, 2020.
+Michael Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM (JACM), 45(6):983-1006, 1998.
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.
+Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
+Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. In Advances in Neural Information Processing Systems, pp. 8157-8166, 2018.
+Zhouhan Lin, Roland Memisevic, and Kishore Konda. How far can we go without convolution: Improving fully-connected networks. arXiv preprint arXiv:1511.02580, 2015.
+Wen Liu, Hong Chen, Zhongliang Deng, Xinyu Zheng, Xiao Fu, and Qianqian Cheng. Lc-dnn: Local connection based deep neural network for indoor localization with csi. IEEE Access, 8: 108720-108730, 2020.
+Eran Malach and Shai Shalev-Shwartz. A provably correct algorithm for deep learning that actually works. arXiv preprint arXiv:1803.09522, 2018.
+Eran Malach and Shai Shalev-Shwartz. When hardness of approximation meets hardness of learning. arXiv preprint arXiv:2008.08059, 2020.
+Elchanan Mossel. Deep learning and hierarchical generative models. arXiv preprint arXiv:1612.09057, 2016.
+Elchanan Mossel, Ryan O'Donnell, and Rocco P Servedio. Learning juntas. In Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, pp. 206-212, 2003.
+Behnam Neyshabur. Towards learning convolutions from scratch. arXiv preprint arXiv:2007.13657, 2020.
+Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A Abolafia, Jeffrey Pennington, and Jascha Sohl-Dickstein. Bayesian deep convolutional networks with many channels are gaussian processes. arXiv preprint arXiv:1810.05148, 2018.
+Tomaso Poggio, Fabio Anselmi, and Lorenzo Rosasco. I-theory on depth vs width: hierarchical function composition. Technical report, Center for Brains, Minds and Machines (CBMM), 2015.
+Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, and Qianli Liao. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: a review. International Journal of Automation and Computing, 14(5):503-519, 2017.
+Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah. Failures of gradient-based deep learning. arXiv preprint arXiv:1703.07950, 2017a.
+Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah. Weight sharing is crucial to successful optimization. arXiv preprint arXiv:1706.00687, 2017b.
+Shai Shalev-Shwartz et al. Online learning and online convex optimization. Foundations and trends in Machine Learning, 4(2):107-194, 2011.
+Ohad Shamir. Distribution-specific hardness of learning neural networks. The Journal of Machine Learning Research, 19(1):1135-1163, 2018.
+
+Mahdi Soltanolkotabi, Adel Javanmard, and Jason D Lee. Theoretical insights into the optimization landscape of over-parameterized shallow neural networks. IEEE Transactions on Information Theory, 65(2):742-769, 2018.
+Gregor Urban, Krzysztof J Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, Rich Caruana, Abdelrahman Mohamed, Matthai Philipose, and Matt Richardson. Do deep convolutional nets really need to be deep and convolutional? International Conference on Learning Representations, 2017.
+Yuchen Zhang, Percy Liang, and Martin J Wainwright. Convexified convolutional neural networks. In International Conference on Machine Learning, pp. 4044-4053. PMLR, 2017.
\ No newline at end of file
diff --git a/computationalseparationbetweenconvolutionalandfullyconnectednetworks/images.zip b/computationalseparationbetweenconvolutionalandfullyconnectednetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..79df3dd38fe2a7d505b6438efdb595764eea3109
--- /dev/null
+++ b/computationalseparationbetweenconvolutionalandfullyconnectednetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e3e15d9072589e6f146e05c5f749b3343b3cb410fc2bf71426657c9272deb01b
+size 312754
diff --git a/computationalseparationbetweenconvolutionalandfullyconnectednetworks/layout.json b/computationalseparationbetweenconvolutionalandfullyconnectednetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2387abbee402a5dfdcd31da2d17a3fd9b6b7b7e1
--- /dev/null
+++ b/computationalseparationbetweenconvolutionalandfullyconnectednetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f27ac3920c71da4c231953df2087c15418038689e7bb354d7a3dedadd3ab0eac
+size 520850
diff --git a/conceptlearnersforfewshotlearning/04772f39-6cc3-4328-8858-9783ca9801eb_content_list.json b/conceptlearnersforfewshotlearning/04772f39-6cc3-4328-8858-9783ca9801eb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c5267040e9f30abd8a6ca2187f0399e66aff38b0
--- /dev/null
+++ b/conceptlearnersforfewshotlearning/04772f39-6cc3-4328-8858-9783ca9801eb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ae03951fd658a3d8ba2756a1e2a0b44c2faf54c35ab7875011151d776e02af5
+size 103313
diff --git a/conceptlearnersforfewshotlearning/04772f39-6cc3-4328-8858-9783ca9801eb_model.json b/conceptlearnersforfewshotlearning/04772f39-6cc3-4328-8858-9783ca9801eb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6ef9d73fa50b85845c8ba354fda4c5e6bca1b003
--- /dev/null
+++ b/conceptlearnersforfewshotlearning/04772f39-6cc3-4328-8858-9783ca9801eb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c7ea301234e6f65e887c064a3c00b0394ed7f4763e81afb07e979dd691ac1062
+size 126199
diff --git a/conceptlearnersforfewshotlearning/04772f39-6cc3-4328-8858-9783ca9801eb_origin.pdf b/conceptlearnersforfewshotlearning/04772f39-6cc3-4328-8858-9783ca9801eb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0c6de003ff56017152abb540a7189d837f9350f8
--- /dev/null
+++ b/conceptlearnersforfewshotlearning/04772f39-6cc3-4328-8858-9783ca9801eb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b17140ccb497a506b5ca1034409eecf35e82829b1b9bfcd6cd3dd3449cafb16
+size 2519215
diff --git a/conceptlearnersforfewshotlearning/full.md b/conceptlearnersforfewshotlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f2ec63c99e477dd2d2e24efcc57bac1f61b1d6ce
--- /dev/null
+++ b/conceptlearnersforfewshotlearning/full.md
@@ -0,0 +1,401 @@
+# CONCEPT LEARNERS FOR FEW-SHOT LEARNING
+
+Kaidi Cao*, Maria Brbic*, Jure Leskovec
+
+Department of Computer Science
+
+Stanford University
+
+{kaidicao, mbrbic,jure}@cs.stanford.edu
+
+# ABSTRACT
+
+Developing algorithms that are able to generalize to a novel task given only a few labeled examples represents a fundamental challenge in closing the gap between machine- and human-level performance. The core of human cognition lies in the structured, reusable concepts that help us to rapidly adapt to new tasks and provide reasoning behind our decisions. However, existing meta-learning methods learn complex representations across prior labeled tasks without imposing any structure on the learned representations. Here we propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions. Instead of learning a joint unstructured metric space, COMET learns mappings of high-level concepts into semi-structured metric spaces, and effectively combines the outputs of independent concept learners. We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation on a novel dataset from a biological domain developed in our work. COMET significantly outperforms strong meta-learning baselines, achieving $6 - 15\%$ relative improvement on the most challenging 1-shot learning tasks, while unlike existing methods providing interpretations behind the model's predictions.
+
+# 1 INTRODUCTION
+
+Deep learning has reached human-level performance on domains with the abundance of large-scale labeled training data. However, learning on tasks with a small number of annotated examples is still an open challenge. Due to the lack of training data, models often overfit or are too simplistic to provide good generalization. On the contrary, humans can learn new tasks very quickly by drawing upon prior knowledge and experience. This ability to rapidly learn and adapt to new environments is a hallmark of human intelligence.
+
+Few-shot learning (Miller et al., 2000; Fei-Fei et al., 2006; Koch et al., 2015) aims at addressing this fundamental challenge by designing algorithms that are able to generalize to new tasks given only a few labeled training examples. Meta-learning (Schmidhuber, 1987; Bengio et al., 1992) has recently made major advances in the field by explicitly optimizing the model's ability to generalize, or learning how to learn, from many related tasks (Snell et al., 2017; Vinyals et al., 2016; Ravi & Larochelle, 2017; Finn et al., 2017). Motivated by the way humans effectively use prior knowledge, meta-learning algorithms acquire prior knowledge over previous tasks so that new tasks can be efficiently learned from a small amount of data. However, recent works (Chen et al., 2019b; Raghu et al., 2020) show that simple baseline methods perform comparably to existing meta-learning methods, opening the question about which components are crucial for rapid adaptation and generalization.
+
+Here, we argue that there is an important missing piece in this puzzle. Human knowledge is structured in the form of reusable concepts. For instance, when we learn to recognize new bird species we are already equipped with the critical concepts, such as wing, beak, and feather. We then focus on these specific concepts and combine them to identify a new species. While learning to recognize new species is challenging in the complex bird space, it becomes remarkably simpler once the reasoning is structured into familiar concepts. Moreover, such a structured way of cognition gives us the ability to provide reasoning behind our decisions, such as "ravens have thicker beaks than crows, with more of a curve to the end".
+
+
+Figure 1: Along each concept dimension, COMET learns concept embeddings using independent concept learners and compares them to concept prototypes. COMET then effectively aggregates information across concept dimensions, assigning concept importance scores to each dimension.
+
+We argue that this lack of structure is limiting the generalization ability of the current meta-learners. The importance of compositionality for few-shot learning was emphasized in (Lake et al., 2011; 2015), where hand-designed features of strokes were combined using Bayesian program learning.
+
+Motivated by the structured form of human cognition, we propose COMET, a meta-learning method that discovers generalizable representations along human-interpretable concept dimensions. COMET learns a unique metric space for each concept dimension using concept-specific embedding functions, named concept learners, that are parameterized by deep neural networks. Along each high-level dimension, COMET defines concept prototypes that reflect class-level differences in the metric space of the underlying concept. To obtain final predictions, COMET effectively aggregates information from diverse concept learners and concept prototypes. Three key aspects lead to a strong generalization ability of our approach: (i) semi-structured representation learning, (ii) concept-specific metric spaces described with concept prototypes, and (iii) ensembling of many models. The latter assures that the combination of diverse and accurate concept learners improves the generalization ability of the base learner (Hansen & Salamon, 1990; Dvornik et al., 2019). Remarkably, the high-level universe of concepts that are used to guide our algorithm can be discovered in a fully unsupervised way, or we can use external knowledge bases to define concepts. In particular, we can get a large universe of noisy, incomplete and redundant concepts and COMET learns which subsets of those are important by assigning local and global concept importance scores. Unlike existing methods (Snell et al., 2017; Vinyals et al., 2016; Sung et al., 2018; Gidaris & Komodakis, 2018), COMET's predictions are interpretable—an advantage especially important in the few-shot learning setting, where predictions are based only on a handful of labeled examples making it hard to trust the model. As such, COMET is the first domain-agnostic interpretable meta-learning approach.
+
+We demonstrate the effectiveness of our approach on tasks from extremely diverse domains, including fine-grained image classification in computer vision, document classification in natural language processing, and cell type annotation in biology. In the biological domain, we conduct the first systematic comparison of meta-learning algorithms. We develop a new meta-learning dataset and define a novel benchmark task to characterize single-cell transcriptome of all mouse organs (Consortium, 2018; 2020). Additionally, we consider the scenario in which concepts are not given in advance, and test COMET's performance with automatically extracted visual concepts. Our experimental results show that on all domains COMET significantly improves generalization ability, achieving $6 - 15\%$ relative improvement over state-of-the-art methods in the most challenging 1-shot task. Furthermore, we demonstrate the ability of COMET to provide interpretations behind the model's predictions, and support our claim with quantitative and qualitative evaluations of the generated explanations.
+
+# 2 PROPOSED METHOD
+
+Problem formulation. In few-shot classification, we assume that we are given a labeled training set $\mathcal{D}_{tr}$ , an unlabeled query set $\mathcal{D}_{qr}$ , and a support set $\mathcal{S}$ consisting of a few labeled data points that share their label space with the query set. The label spaces of the training and query sets are disjoint, i.e., $\{Y_{tr}\} \cap \{Y_{qr}\} = \emptyset$ , where $\{Y_{tr}\}$ denotes the label space of the training set and $\{Y_{qr}\}$ the label space of the query set. Each labeled data point $(\mathbf{x},y)$ consists of a $D$ -dimensional feature vector $\mathbf{x}\in \mathbb{R}^{D}$ and a class label $y\in \{1,\dots ,K\}$ . Given a training set of previously labeled tasks $\mathcal{D}_{tr}$ and the support set $\mathcal{S}$ of a few labeled data points on a novel task, the goal is to train a model that can generalize to the novel task and label the query set $\mathcal{D}_{qr}$ .
+
+# 2.1 PRELIMINARIES
+
+Episodic training. To achieve successful generalization to a new task, training of meta-learning methods is usually performed using sampled mini-batches called episodes (Vinyals et al., 2016). Each episode is formed by first sampling classes from the training set, and then sampling data points labeled with these classes. The sampled data points are divided into disjoint sets of: (i) a support set consisting of a few labeled data points, and (ii) a query set consisting of data points whose labels are used to calculate a prediction error. Given the sampled support set, the model minimizes the loss on the sampled query set in each episode. The key idea behind this meta-learning training scheme is to improve generalization of the model by trying to mimic the low-data regime encountered during testing. Episodes with balanced training sets are usually referred to as "N-way, k-shot" episodes where $N$ indicates number of classes per episode ("way"), and $k$ indicates number of support points (labeled training examples) per class ("shot").
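
A minimal episode sampler following this scheme might look as follows. This is a sketch with our own names, assuming `dataset` maps each class label to its list of examples:

```python
import random

def sample_episode(dataset, n_way, k_shot, n_query):
    """Sample one 'N-way, k-shot' episode: n_way classes, each with k_shot
    support examples and n_query query examples."""
    classes = random.sample(sorted(dataset), n_way)
    support, query = [], []
    for c in classes:
        points = random.sample(dataset[c], k_shot + n_query)
        support += [(x, c) for x in points[:k_shot]]
        query += [(x, c) for x in points[k_shot:]]
    return support, query

# toy usage: 10 classes with 20 examples each, sampled as a 5-way 1-shot episode
random.seed(0)
data = {c: list(range(20)) for c in range(10)}
support, query = sample_episode(data, n_way=5, k_shot=1, n_query=3)
print(len(support), len(query))  # -> 5 15
```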
+
+Prototypical networks. Our work is inspired by prototypical networks (Snell et al., 2017), a simple but highly effective metric-based meta-learning method. Prototypical networks learn a non-linear embedding function $f_{\theta} : \mathbb{R}^{D} \to \mathbb{R}^{M}$ parameterized by a convolutional neural network. The main idea is to learn a function $f_{\theta}$ such that in the $M$ -dimensional embedding space data points cluster around a single prototype representation $\mathbf{p}_k \in \mathbb{R}^M$ for each class $k$ . Class prototype $\mathbf{p}_k$ is computed as the mean vector of the support set labeled with the class $k$ :
+
+$$
+\mathbf{p}_k = \frac{1}{|\mathcal{S}_k|} \sum_{(\mathbf{x}_i, y_i) \in \mathcal{S}_k} f_{\boldsymbol{\theta}}\left(\mathbf{x}_i\right), \tag{1}
+$$
+
+where $\mathcal{S}_{k}$ denotes the subset of the support set $\mathcal{S}$ belonging to the class $k$ . Given a query data point $\mathbf{x}_q$ , prototypical networks output a distribution over classes using the softmax function:
+
+$$
+p_{\boldsymbol{\theta}}(y = k \mid \mathbf{x}_q) = \frac{\exp\left(-d\left(f_{\boldsymbol{\theta}}\left(\mathbf{x}_q\right), \mathbf{p}_k\right)\right)}{\sum_{k'} \exp\left(-d\left(f_{\boldsymbol{\theta}}\left(\mathbf{x}_q\right), \mathbf{p}_{k'}\right)\right)}, \tag{2}
+$$
+
+where $d:\mathbb{R}^M \times \mathbb{R}^M \to [0, \infty)$ denotes the distance function. The query data point $\mathbf{x}_q$ is assigned to the class with the minimal distance between the class prototype and the embedded query point.
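
Equations (1) and (2) can be sketched in a few lines of NumPy, assuming the embeddings have already been computed by $f_{\theta}$ (the helper names below are ours):

```python
import numpy as np

def class_prototypes(support_emb, support_y, n_classes):
    # Eq. (1): p_k is the mean of the embedded support points of class k
    return np.stack([support_emb[support_y == k].mean(axis=0)
                     for k in range(n_classes)])

def predict_proba(query_emb, prototypes):
    # Eq. (2): softmax over negative squared Euclidean distances
    d = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    logits = -d
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

# toy usage: two well-separated classes in a 2-d embedding space
emb = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
y = np.array([0, 0, 1, 1])
p = predict_proba(np.array([[0.2, 0.3]]), class_prototypes(emb, y, 2))
print(p.argmax(axis=1))  # -> [0]
```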
+
+# 2.2 META-LEARNING VIA CONCEPT LEARNERS
+
+Our main assumption is that input dimensions can be separated into subsets of related dimensions corresponding to high-level, human-interpretable concepts that guide the training. Such sets of potentially overlapping, noisy and incomplete human-interpretable dimensions exist in many real-world scenarios. For instance, in computer vision concepts can be assigned to image segments; in natural language processing to semantically related words; whereas in biology we can use external knowledge bases and ontologies. In many problems, concepts are already available as a prior domain knowledge (Ashburner et al., 2000; Murzin et al., 1995; Wah et al., 2011; Mo et al., 2019; Miller et al., 2000), or can be automatically generated using existing techniques (Blei et al., 2003; Zhang et al., 2018; Jakab et al., 2018). Intuitively, concepts can be seen as part-based representations of the input and reflect the way humans reason about the world. Importantly, we do not assume these concepts are clean or complete. On the contrary, we show that even if there are thousands of concepts, which are noisy, incomplete, overlapping, or redundant, they still provide useful guidance to the meta-learning algorithm.
+
+Formally, let $\mathcal{C} = \{\mathbf{c}^{(j)}\}_{j=1}^{N}$ denote a set of $N$ concepts given/extracted as prior knowledge, where each concept $\mathbf{c}^{(j)} \in \{0,1\}^D$ is a binary vector such that $c_i^{(j)} = 1$ if the $i$ -th dimension should be used to describe the $j$ -th concept, and $D$ denotes the dimensionality of the input. We do not impose any constraints on $\mathcal{C}$ , meaning that the concepts can be disjoint or overlap. Instead of learning a single mapping function $f_\theta: \mathbb{R}^D \to \mathbb{R}^M$ across all dimensions, COMET separates the original space into subspaces of predefined concepts and learns an individual embedding function $f_\theta^{(j)}: \mathbb{R}^D \to \mathbb{R}^M$ for each concept $j$ (Figure 1). Concept embedding functions $f_{\theta}^{(j)}$ , named concept learners, are non-linear functions parametrized by a deep neural network. Each concept learner $j$ produces its own concept prototypes $\mathbf{p}_k^{(j)}$ for class $k$ , computed as the average of concept embeddings of data points in the support set:
+
+$$
+\mathbf{p}_k^{(j)} = \frac{1}{|\mathcal{S}_k|} \sum_{(\mathbf{x}_i, y_i) \in \mathcal{S}_k} f_{\boldsymbol{\theta}}^{(j)}\left(\mathbf{x}_i \circ \mathbf{c}^{(j)}\right), \tag{3}
+$$
+
+where $\circ$ denotes Hadamard product. As a result, each class $k$ is represented with a set of $N$ concept prototypes $\{\mathbf{p}_k^{(j)}\}_{j=1}^N$ .
+
+Given a query data point $\mathbf{x}_q$ , we compute its concept embeddings and estimate their distances to the concept prototypes of each class. We then aggregate the information across all concepts by taking sum over distances between concept embeddings and concept prototypes. Specifically, for each concept embedding $f_{\boldsymbol{\theta}}^{(j)}(\mathbf{x}_q \circ \mathbf{c}^{(j)})$ we compute its distance to concept prototype $\mathbf{p}_k^{(j)}$ of a given class $k$ , and sum distances across all concepts to obtain a distribution over support classes. The probability of assigning query point $\mathbf{x}_q$ to $k$ -th class is then given by:
+
+$$
+p _ {\boldsymbol {\theta}} (y = k | \mathbf {x} _ {q}) = \frac {\exp \left(- \sum_ {j} d \left(f _ {\boldsymbol {\theta}} ^ {(j)} \left(\mathbf {x} _ {q} \circ \mathbf {c} ^ {(j)}\right) , \mathbf {p} _ {k} ^ {(j)}\right)\right)}{\sum_ {k ^ {\prime}} \exp \left(- \sum_ {j} d \left(f _ {\boldsymbol {\theta}} ^ {(j)} \left(\mathbf {x} _ {q} \circ \mathbf {c} ^ {(j)}\right) , \mathbf {p} _ {k ^ {\prime}} ^ {(j)}\right)\right)}. \tag {4}
+$$
+
+The loss is computed as the negative log-likelihood $L_{\theta} = -\log p_{\theta}(y = k|\mathbf{x}_q)$ of the true class, and COMET is trained by minimizing this loss on the query samples of the training set in an episodic fashion (Snell et al., 2017; Vinyals et al., 2016). In equation (4), we use the Euclidean distance as the distance function. Experimentally, we find that it outperforms the cosine distance (Appendix B), which agrees with the theory and experimental findings in (Snell et al., 2017). We note that in order for the distances to be comparable, it is crucial to normalize neural network layers using batch normalization (Ioffe & Szegedy, 2015).
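+As a concrete illustration, the prototype computation in equation (3) and the classification rule in equation (4) can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation: the concept learners $f_\theta^{(j)}$ are passed in as arbitrary callables, and squared Euclidean distance is used.

```python
import numpy as np

def concept_prototypes(support_x, support_y, concepts, embed_fns):
    """Eq. (3): for every class k and concept j, average the embeddings of
    the masked support points. `concepts` is an (N, D) binary mask array,
    `embed_fns` a list of N callables standing in for the concept learners."""
    classes = np.unique(support_y)
    protos = {}  # (class k, concept j) -> prototype vector
    for k in classes:
        xk = support_x[support_y == k]
        for j, (c, f) in enumerate(zip(concepts, embed_fns)):
            protos[(k, j)] = f(xk * c).mean(axis=0)  # Hadamard mask, then mean
    return classes, protos

def classify(query_x, classes, protos, concepts, embed_fns):
    """Eq. (4): softmax over the negative sum (across concepts) of squared
    Euclidean distances between query concept embeddings and prototypes."""
    scores = []
    for k in classes:
        total = 0.0
        for j, (c, f) in enumerate(zip(concepts, embed_fns)):
            z = f(query_x * c)
            total += np.sum((z - protos[(k, j)]) ** 2)
        scores.append(-total)
    scores = np.array(scores)
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    return exp / exp.sum()
```

+With identity embedding functions and two disjoint concept masks, a query identical to one class's support points is assigned to that class with probability close to 1.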
+
+# 2.3 INTERPRETABILITY
+
+Local and global concept importance scores. In COMET, each class is represented by $N$ concept prototypes. Given a query data point $\mathbf{x}_q$, COMET assigns local concept importance scores by comparing the concept embeddings of the query to the concept prototypes. Specifically, for a concept $j$ in a class $k$, the local importance score is the inverted distance $d(f_{\theta}^{(j)}(\mathbf{x}_q \circ \mathbf{c}^{(j)}), \mathbf{p}_k^{(j)})$. A higher importance score indicates a larger contribution to classifying the query point into class $k$. Therefore, explanations for the query point $\mathbf{x}_q$ are given by the local concept importance scores, which directly provide the reasoning behind each prediction. To provide global explanations that reveal important concepts for a set of query points of interest, or for an entire class, COMET computes the average distance between a concept prototype and the concept embeddings of all query points of interest. The inverted average distance serves as the global concept importance score and can be used to rank concepts, providing insight into which concepts matter across a set of examples.
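+The two scores can be sketched as follows; a hedged NumPy illustration, assuming prototypes are stored in a dictionary keyed by (class, concept) as produced by equation (3), with `embed_fns` standing in for the concept learners:

```python
import numpy as np

def local_importance(query_x, k, protos, concepts, embed_fns):
    """Local importance of each concept j for assigning query_x to class k:
    the inverted (negated) squared distance between the query's concept
    embedding and the class's concept prototype; higher = more influential."""
    return np.array([
        -np.sum((f(query_x * c) - protos[(k, j)]) ** 2)
        for j, (c, f) in enumerate(zip(concepts, embed_fns))
    ])

def global_importance(queries_x, k, protos, concepts, embed_fns):
    """Global importance: inverted average distance over a set of query
    points, usable to rank concepts for an entire class."""
    return np.mean([local_importance(x, k, protos, concepts, embed_fns)
                    for x in queries_x], axis=0)
```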
+
+Discovering locally similar examples. Given a fixed concept $j$, COMET can be used to rank data points based on the distance of their concept embeddings to the concept prototype $\mathbf{p}_k^{(j)}$ of class $k$. By ranking data points according to their similarity to the concept of interest, COMET can find examples that locally share similar patterns within the same class, or even across different classes. For instance, COMET can reveal examples that reflect a concept prototype well, or examples that are very distant from it.
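+Under the same assumptions as before (precomputed prototypes, stand-in concept learners), this ranking reduces to sorting per-example distances; a sketch:

```python
import numpy as np

def rank_by_concept(xs, j, k, protos, concepts, embed_fns, descending=False):
    """Order data points by the distance of their concept-j embedding to the
    concept prototype of class k. Closest-first surfaces examples that best
    express the concept; `descending=True` surfaces the most distant ones."""
    c, f = concepts[j], embed_fns[j]
    dists = np.array([np.sum((f(x * c) - protos[(k, j)]) ** 2) for x in xs])
    order = np.argsort(dists)
    return order[::-1] if descending else order
```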
+
+# 3 EXPERIMENTS
+
+# 3.1 EXPERIMENTAL SETUP
+
+Datasets. We apply COMET to four datasets from three diverse domains: computer vision, natural language processing (NLP), and biology. In the computer vision domain, we consider fine-grained image classification tasks, using the bird classification dataset CUB-200-2011 (Wah et al., 2011) and the flower classification dataset Flowers-102 (Nilsback & Zisserman, 2008), referred to as CUB and Flowers
+
+hereafter. To define concepts, CUB provides part-based annotations, such as the beak, wing, and tail of a bird. Parts are annotated by pixel location and visibility in each image. A total of 15 parts/concepts is available; however, the annotations are incomplete and only a subset of the concepts is present in any given image. When a concept is not present, we rely on the prototypical concept to substitute for the missing one. Based on the part coordinates, we create a surrounding bounding box of fixed size to serve as the concept mask $\mathbf{c}^{(j)}$. On both the CUB and Flowers datasets, we also test automatic concept extraction. In the NLP domain, we apply COMET to the benchmark document classification dataset Reuters (Lewis et al., 2004), consisting of news articles. To define concepts, we use all hypernyms of a given word based on the WordNet hierarchy. On all datasets, we include a concept that captures the whole input, corresponding to a binary mask of all ones.
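+Constructing the image concept masks from part annotations can be sketched as follows; the box size is a hypothetical choice, and the 2-D mask can be flattened to match the input dimensionality $D$:

```python
import numpy as np

def part_concept_mask(cx, cy, height, width, box=16):
    """Binary concept mask c^(j): a fixed-size bounding box centred on the
    annotated part coordinates (cx, cy), clipped at the image border."""
    mask = np.zeros((height, width), dtype=np.uint8)
    half = box // 2
    y0, y1 = max(0, cy - half), min(height, cy + half)
    x0, x1 = max(0, cx - half), min(width, cx + half)
    mask[y0:y1, x0:x1] = 1
    return mask

def whole_input_mask(height, width):
    """The whole-input concept: a mask of all ones."""
    return np.ones((height, width), dtype=np.uint8)
```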
+
+In the biology domain, we introduce a new cross-organ cell type classification task (Brbic et al., 2020) together with a new dataset. We develop a novel single-cell transcriptomic dataset based on the Tabula Muris data (Consortium, 2018; 2020) that comprises 105,960 cells of 124 cell types collected across 23 organs of the mouse model organism. The features correspond to the gene expression profiles of cells. Out of the 23,341 genes, we select 2,866 genes with high standardized log dispersion given their mean. We define concepts using the Gene Ontology (Ashburner et al., 2000; Consortium, 2019), a resource that characterizes gene functional roles in a hierarchically structured vocabulary. We select Gene Ontology terms at level 3 that have at least 64 assigned genes, resulting in a total of 190 terms that define our concepts. We propose an evaluation protocol in which different organs are used for the training, validation, and test splits. A meta-learner therefore needs to learn to generalize to unseen cell types across organs. This novel dataset, along with the cross-organ evaluation splits, is publicly available at https://snap.stanford.edu/comet. To our knowledge, this is the first meta-learning dataset from the biology domain.
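+Turning Gene Ontology terms into binary concept masks over the gene feature space can be sketched as below (names such as `term_to_genes` are hypothetical; restricting to level-3 terms is assumed to have been done upstream):

```python
import numpy as np

def select_go_concepts(term_to_genes, gene_index, min_genes=64):
    """Keep GO terms with at least `min_genes` assigned genes among the
    selected features; each kept term becomes a binary mask c^(j) over
    the gene expression dimensions."""
    concepts = {}
    for term, genes in term_to_genes.items():
        idx = [gene_index[g] for g in genes if g in gene_index]
        if len(idx) >= min_genes:
            mask = np.zeros(len(gene_index), dtype=np.uint8)
            mask[idx] = 1
            concepts[term] = mask
    return concepts
```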
+
+Baselines. We compare COMET's performance to seven baselines, including FineTune/Baseline++ (Chen et al., 2019b), Matching Networks (MatchingNet) (Vinyals et al., 2016), Model Agnostic Meta-Learning (MAML) (Finn et al., 2017), Relation Networks (Sung et al., 2018), MetaOptNet (Lee et al., 2019), DeepEMD (Zhang et al., 2020) and Prototypical Networks (ProtoNet) (Snell et al., 2017). DeepEMD is only applicable to image datasets.
+
+We provide more details on evaluation and implementation in Appendix A. Code is publicly available at https://github.com/snap-stanford/comet.
+
+# 3.2 RESULTS
+
+Performance comparison. We report results on the CUB, Tabula Muris, and Reuters datasets with concepts given as prior domain knowledge in Table 1. COMET outperforms all baselines by a remarkably large margin on all datasets. Specifically, COMET achieves $9.5\%$ and $9.3\%$ average improvements over the best performing baseline in the 1-shot and 5-shot tasks across datasets. Notably, COMET improves the result of the ProtoNet baseline by $19-23\%$ in the 1-shot tasks across datasets. COMET's substantial improvements are retained with the deeper Conv-6 backbone (Appendix C). To confirm that the improvements indeed come from concept learners and not from additional weights, we compare COMET to an ensemble of prototypical networks, and further evaluate the performance of COMET with weights shared across all concepts. The results in Table 2 demonstrate that COMET achieves significantly better performance than the ensemble of ProtoNets even when the weights across concepts are shared. Of note, COMET's performance is only slightly affected by sharing weights across concepts. More experimental details are provided in Appendix D.
+
+Effect of the number of concepts. We systematically evaluate the effect of the number of concepts on COMET's performance on the CUB and Tabula Muris datasets (Figure 2). In particular, we start from ProtoNet's result, which can be seen as COMET with a single concept covering all dimensions of the input. We then gradually increase the number of concepts, training and evaluating COMET for each selected number of concepts. For the CUB dataset, we add concepts based on their visibility frequency, whereas on Tabula Muris we are not limited by concept coverage, so we select them randomly. The results demonstrate that in both domains COMET consistently improves performance as the number of concepts increases. Strikingly, by adding just the single most frequent concept, corresponding to a bird's beak, on top of the whole-image concept, we improve ProtoNet's performance on CUB by $10\%$ and $5\%$ in the 1-shot and 5-shot tasks, respectively. On Tabula Muris, with just 8 concepts COMET significantly outperforms all baselines and achieves $7\%$ and $17\%$
+
+Table 1: Results for 1-shot and 5-shot classification on the CUB, Tabula Muris, and Reuters datasets. We report the average accuracy and standard deviation over 600 randomly sampled episodes.
+
+| Method | CUB 1-shot | CUB 5-shot | Tabula Muris 1-shot | Tabula Muris 5-shot | Reuters 1-shot | Reuters 5-shot |
+| --- | --- | --- | --- | --- | --- | --- |
+| Finetune | 61.4 ± 1.0 | 80.2 ± 0.6 | 65.3 ± 1.0 | 82.1 ± 0.7 | 48.2 ± 0.7 | 64.3 ± 0.4 |
+| MatchingNet | 61.0 ± 0.9 | 75.9 ± 0.6 | 71.0 ± 0.9 | 82.4 ± 0.7 | 55.9 ± 0.6 | 70.9 ± 0.4 |
+| MAML | 52.8 ± 1.0 | 74.4 ± 0.8 | 50.4 ± 1.1 | 57.4 ± 1.1 | 45.0 ± 0.8 | 60.5 ± 0.4 |
+| RelationNet | 62.1 ± 1.0 | 78.6 ± 0.7 | 69.3 ± 1.0 | 80.1 ± 0.8 | 53.8 ± 0.7 | 68.3 ± 0.3 |
+| MetaOptNet | 62.2 ± 1.0 | 79.6 ± 0.6 | 73.6 ± 1.1 | 85.4 ± 0.9 | 62.1 ± 0.8 | 77.8 ± 0.4 |
+| DeepEMD | 64.0 ± 1.0 | 81.1 ± 0.7 | NA | NA | NA | NA |
+| ProtoNet | 57.1 ± 1.0 | 76.1 ± 0.7 | 64.5 ± 1.0 | 82.5 ± 0.7 | 58.3 ± 0.7 | 75.1 ± 0.4 |
+| COMET | 67.9 ± 0.9 | 85.3 ± 0.5 | 79.4 ± 0.9 | 91.7 ± 0.5 | 71.5 ± 0.7 | 89.8 ± 0.3 |
+
+Table 2: Comparison to the ensemble of prototypical networks and to COMET with weights shared across concepts. On the CUB dataset, weights are always shared.
+
+| Method | CUB 1-shot | CUB 5-shot | Tabula Muris 1-shot | Tabula Muris 5-shot | Reuters 1-shot | Reuters 5-shot |
+| --- | --- | --- | --- | --- | --- | --- |
+| ProtoNetEns | 64.0 ± 0.8 | 82.3 ± 0.5 | 67.2 ± 0.8 | 83.6 ± 0.5 | 62.4 ± 0.7 | 79.3 ± 0.4 |
+| COMET (shared weights) | 67.9 ± 0.9 | 85.3 ± 0.5 | 78.2 ± 1.0 | 91.0 ± 0.5 | 69.8 ± 0.8 | 88.6 ± 0.3 |
+| COMET | 67.9 ± 0.9 | 85.3 ± 0.5 | 79.4 ± 0.9 | 91.7 ± 0.5 | 71.5 ± 0.7 | 89.8 ± 0.3 |
+
+improvement over ProtoNet in the 1-shot and 5-shot tasks, respectively. To demonstrate the robustness of our method to a large set of overlapping concepts, we extend the number of concepts to 1,500 by capturing all levels of the Gene Ontology hierarchy, thereby allowing many redundant relationships. Even in this scenario, COMET slightly improves the results compared to the 190 concepts obtained from a single level. These results demonstrate that COMET outperforms other methods even when the number of concepts is small and annotations are incomplete, as well as with many overlapping and redundant concepts.
+
+
+Figure 2: The effect of the number of concepts on COMET's performance. COMET consistently improves performance as the number of concepts gradually increases.
+
+
+
+Unsupervised concept annotation. While COMET achieves remarkable results with human-validated concepts given as external knowledge, we next investigate COMET's performance with automatically inferred concepts. In addition to the CUB dataset, we consider the Flowers dataset for fine-grained image classification. To automatically extract visual concepts, we train the autoencoding framework for landmark discovery proposed in (Zhang et al., 2018). The encoding module outputs landmark coordinates that we use as part coordinates. We generate a concept mask by creating a bounding box of fixed size around the landmark coordinates. Although the extracted coordinates are often noisy and capture background (Appendix F), we find that COMET outperforms all baselines on both the CUB and Flowers fine-grained classification datasets (Table 3). This analysis shows that the benefits of our method can be expected even with noisy concepts extracted in a fully automated, unsupervised way.
+
+To test unsupervised concept annotation on the Tabula Muris and Reuters datasets, we randomly select subsets of features as concept definitions. Since COMET is interpretable and can be used to find important concepts, we use the validation set to select the concepts with the highest importance scores. Even
+
+in this case, COMET significantly outperforms all baselines, achieving only $2\%$ lower accuracy on the Tabula Muris dataset and $1\%$ lower on the Reuters dataset in both the 1-shot and 5-shot tasks compared to human-defined concepts. This additionally confirms COMET's effectiveness with automatically extracted concepts. We provide more results in Appendix E.
+
+Table 3: Results for 1-shot and 5-shot classification with automatically extracted concepts. We report the average accuracy and standard deviation over 600 randomly sampled episodes, and show the average relative improvement of COMET over the best baseline and over ProtoNet.
+
+| Method | CUB: 1-shot | CUB: 5-shot | Flowers: 1-shot | Flowers: 5-shot |
+| --- | --- | --- | --- | --- |
+| COMET (accuracy) | 64.8 ± 1.0 | 82.0 ± 0.5 | 70.4 ± 0.9 | 86.7 ± 0.6 |
+| Improvement over best baseline | 1.3% | 1.1% | 4.8% | 4.6% |
+| Improvement over ProtoNet | 13.5% | 7.8% | 6.0% | 8.1% |
+
+# 3.3 INTERPRETABILITY
+
+We analyze the reasoning part of COMET by designing case studies that aim to answer the following questions: (i) Which concepts are the most important for a given query point (i.e., local explanation)? (ii) Which concepts are the most important for a given class (i.e., global explanation)? (iii) Which examples share locally similar patterns? (iv) Which examples reflect a concept prototype well? We perform all analyses exclusively on classes from the novel task that are not seen during training.
+
+Concept importance. Given a query point, COMET ranks concepts based on their importance scores, thereby identifying the concepts most relevant to the prediction for a single query point. We demonstrate examples of local explanations in Appendix G. To quantitatively evaluate global explanations that assign concept importance scores to an entire class, we derive ground-truth explanations on the Tabula Muris dataset. Specifically, using the ground-truth labels on the test set, we obtain a set of genes that are differentially expressed for each class (i.e., cell type). We then find Gene Ontology terms that are significantly enriched (false discovery rate corrected $p$-value $< 0.1$) in the set of differentially expressed genes of a given class, and use those terms as ground-truth concepts. We consider only cell types that have at least two assigned terms. To obtain COMET's explanations, we rank the global concept importance scores for each class and report the number of relevant terms successfully retrieved among the top 20 concepts with the highest scores in the 5-shot setting (Figure 3, left). We find that COMET's importance scores agree extremely well with the ground-truth annotations, achieving an average recall@20 of 0.71 across all cell types. We further investigate global explanations on the CUB dataset by computing the frequency of the most relevant concepts across species (Figure 3, right). Beak, belly, and forehead turn out to be the most relevant features, supporting
+
+
+Figure 3: (Left) Quantitatively, on the Tabula Muris dataset COMET's global importance scores agree well with the ground-truth important Gene Ontology terms estimated using differentially expressed genes. (Right) Qualitatively, on the CUB dataset the importance scores correctly reflect the most relevant bird features.
+
+
+
+common-sense intuition. For instance, 'beak' is selected as the most relevant concept for the 'parakeet auklet', known for its nearly circular beak; 'belly' for the 'cape may warbler', known for the tiger stripes on its belly; while the 'belted kingfisher' indeed has a characteristic 'forehead' with its shaggy crest on the top of the head. This confirms that COMET correctly identifies important class-level concepts.
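+The recall@20 metric used above is straightforward; a sketch, assuming concepts have already been ranked by global importance score:

```python
def recall_at_k(ranked_concepts, relevant, k=20):
    """Fraction of ground-truth relevant concepts retrieved among the
    top-k concepts ranked by global importance score."""
    top = set(ranked_concepts[:k])
    return len(top & set(relevant)) / len(relevant)
```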
+
+Locally similar patterns. Given a fixed concept of interest, we apply COMET to sort images with respect to the distance of their concept embedding to the concept prototype (Figure 4). COMET finds images that locally resemble the prototypical image and express the concept prototype well, correctly reflecting the underlying concept of interest. By contrast, images sorted using the whole image as a concept often reflect background similarity and cannot provide intuitive explanations. Furthermore, by finding the most distant examples, COMET can aid in identifying misannotated or non-visible concepts (Appendix H), which can be particularly useful when the concepts are automatically extracted. These analyses suggest that COMET can be used to discover, sort, and visualize locally similar patterns, revealing insights into concept-based similarity across examples.
+
+
+Images ranked according to their distance to the prototypical concept
+Figure 4: The top row shows images whose beak concept embeddings are most similar to the prototypical beak. The bottom row shows images ranked according to the global concept that captures the whole image. COMET correctly reflects local similarity in the underlying concept of interest, while the global concept often reflects environmental similarity.
+
+# 4 RELATED WORK
+
+Our work draws motivation from a rich line of research on meta-learning, compositional representations, and concept-based interpretability.
+
+Meta-learning. Recent meta-learning methods fall broadly into two categories. Optimization-based methods (Finn et al., 2017; Rusu et al., 2019; Nichol & Schulman, 2018; Grant et al., 2018; Antoniou et al., 2019) aim to learn a good initialization such that the network can be fine-tuned to a target task within a few gradient steps. Metric-based methods (Snell et al., 2017; Vinyals et al., 2016; Sung et al., 2018; Gidaris & Komodakis, 2018), on the other hand, learn a metric space shared across tasks such that, in the new space, the target task can be solved with a nearest-neighbour or simple linear classifier. DeepEMD (Zhang et al., 2020) learns an optimal distance between local image representations. Prototypical networks (Snell et al., 2017) learn a metric space in which data points cluster around a prototypical representation, computed for each category as the mean of the embedded labeled examples. They remain among the most competitive few-shot learning methods (Triantafillou et al., 2019), resulting in many follow-up works (Sung et al., 2018; Oreshkin et al., 2018; Ren et al., 2018; Liu et al., 2019; Xing et al., 2019). Two recent works (Hou et al., 2019; Zhu et al.) proposed to learn local discriminative features with attention mechanisms in image classification tasks. Our work builds upon prototypical networks and extends the approach by introducing concept-based prototypes. Prototypical networks were extended to mixture prototypes in (Allen et al., 2019); however, the prototypes in that work share the same metric space. In contrast, COMET defines human-interpretable, concept-specific metric spaces where each prototype reflects class-level differences in the metric space of the corresponding concept.
+
+Compositionality. The idea of learning from a few examples using compositional representations originates from work on Bayesian probabilistic programs, in which individual strokes were combined for handwritten character recognition (Lake et al., 2011; 2015). This approach was extended in (Wong & Yuille, 2015) by replacing hand-designed features with symmetry axes as object descriptors. Although these early works effectively demonstrated that compositionality is a
+
+key ingredient for adaptation in a low-data regime, it is unclear how to extend them beyond simple visual concepts. Recent work (Tokmakov et al., 2019) revived the idea and showed that deep compositional representations generalize better in few-shot image classification. However, that approach requires category-level attribute annotations, which are impossible to obtain in domains not intuitive to humans, such as biology. Moreover, even in domains in which annotations can be collected, they require tedious manual effort. In contrast, our approach is domain-agnostic and generates human-understandable interpretations in any domain.
+
+Interpretability. There has been much progress on designing interpretable methods that estimate the importance of individual features (Selvaraju et al., 2016; Sundararajan et al., 2017; Smilkov et al., 2017; Ribeiro et al., 2016; Lundberg & Lee, 2017; Melis & Jaakkola, 2018). However, individual features are often not intuitive, and can even be misleading when interpreted by humans (Kim et al., 2018). To overcome this limitation, recent advances have focused on methods that explain predictions using high-level, human-understandable concepts (Kim et al., 2018; Ghorbani et al., 2019). TCAV (Kim et al., 2018) defines concepts based on a user-annotated set of examples in which the concept of interest appears. In contrast, high-level concepts in our work are defined by sets of related dimensions. As such, they are already available in many domains, or can be obtained in an unsupervised manner. Once defined, they are transferable across problems that share a feature space. As opposed to methods based on post-hoc analysis (Ribeiro et al., 2016; Lundberg & Lee, 2017; Melis & Jaakkola, 2018; Kim et al., 2018), COMET is designed as an inherently interpretable model and explains predictions by drawing on the reasoning process of the network itself. Closest to our work are prototype-based explanation models (Li et al., 2018; Chen et al., 2019a); however, they require a specialized convolutional architecture for feature extraction and are not applicable beyond image classification, or to the few-shot setting.
+
+# 5 CONCLUSION
+
+We introduced COMET, a novel metric-based meta-learning algorithm that learns to generalize along human-interpretable concept dimensions. We showed that COMET learns generalizable representations with incomplete, noisy, redundant, very few, or very many concept dimensions, selecting only the concepts important for classification and providing the reasoning behind its decisions. Our experimental results showed that COMET does not trade off interpretability against accuracy, and significantly outperforms existing methods on tasks from diverse domains, including a novel benchmark dataset from the biology domain developed in this work.
+
+# ACKNOWLEDGEMENTS
+
+The authors thank Yusuf Roohani, Michihiro Yasunaga and Marinka Zitnik for their helpful comments. We gratefully acknowledge the support of DARPA under Nos. N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), IIS-2030477 (RAPID); Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Chan Zuckerberg Biohub, Amazon, JPMorgan Chase, Docomo, Hitachi, JD.com, KDDI, NVIDIA, Dell, Toshiba, and UnitedHealth Group. J. L. is a Chan Zuckerberg Biohub investigator.
+
+# REFERENCES
+
+Kelsey Allen, Evan Shelhamer, Hanul Shin, and Joshua Tenenbaum. Infinite mixture prototypes for few-shot learning. In International Conference on Machine Learning, pp. 232-241, 2019.
+Antreas Antoniou, Harrison Edwards, and Amos Storkey. How to train your MAML. In International Conference on Learning Representations, 2019.
+Michael Ashburner, Catherine A Ball, Judith A Blake, David Botstein, Heather Butler, J Michael Cherry, Allan P Davis, Kara Dolinski, Selina S Dwight, Janan T Eppig, et al. Gene Ontology: tool for the unification of biology. Nature Genetics, 25(1):25-29, 2000.
+Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule. In *Preprints of the Conference Optimality in Artificial and Biological Neural Networks*, volume 2, 1992.
+David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
+Maria Brbic, Marinka Zitnik, Sheng Wang, Angela O Pisco, Russ B Altman, Spyros Darmanis, and Jure Leskovec. Mars: discovering novel cell types across heterogeneous single-cell experiments. Nature Methods, 17(12):1200-1206, 2020.
+Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. This looks like that: Deep learning for interpretable image recognition. In Advances in Neural Information Processing Systems, pp. 8928-8939, 2019a.
+Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. In International Conference on Learning Representations, 2019b.
+Gene Ontology Consortium. The Gene Ontology resource: 20 years and still GOing strong. Nucleic Acids Research, 47(D1):D330-D338, 2019.
+Tabula Muris Consortium. Single-cell transcriptomics of 20 mouse organs creates a Tabula Muris. Nature, 562(7727):367, 2018.
+Tabula Muris Consortium. A single cell transcriptomic atlas characterizes aging tissues in the mouse. Nature, 583:590-595, 2020.
+Nikita Dvornik, Cordelia Schmid, and Julien Mairal. Diversity with cooperation: Ensemble methods for few-shot classification. In IEEE International Conference on Computer Vision, pp. 3723-3731, 2019.
+Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594-611, 2006.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pp. 1126-1135, 2017.
+Amirata Ghorbani, James Wexler, James Y Zou, and Been Kim. Towards automatic concept-based explanations. In Advances in Neural Information Processing Systems, pp. 9273-9282, 2019.
+Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 4367-4375, 2018.
+Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting gradient-based meta-learning as hierarchical Bayes. In International Conference on Learning Representations, 2018.
+Lars Kai Hansen and Peter Salamon. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(10):993-1001, 1990.
+Ruiming Hou, Hong Chang, MA Bingpeng, Shiguang Shan, and Xilin Chen. Cross attention network for few-shot classification. In Advances in Neural Information Processing Systems, pp. 4003-4014, 2019.
+
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448-456, 2015.
+Tomas Jakab, Ankush Gupta, Hakan Bilen, and Andrea Vedaldi. Unsupervised learning of object landmarks through conditional image generation. In Advances in Neural Information Processing Systems, pp. 4016-4027, 2018.
+Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning, pp. 2668-2677, 2018.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2, 2015.
+Brenden Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 33, 2011.
+Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
+Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10657-10665, 2019.
+David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5(Apr):361-397, 2004.
+Oscar Li, Hao Liu, Chaofan Chen, and Cynthia Rudin. Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
+Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang. Learning to propagate for graph meta-learning. In Advances in Neural Information Processing Systems, pp. 1037-1048, 2019.
+Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pp. 4765-4774, 2017.
+David Alvarez Melis and Tommi Jaakkola. Towards robust interpretability with self-explaining neural networks. In Advances in Neural Information Processing Systems, pp. 7775-7784, 2018.
+Erik G Miller, Nicholas E Matsakis, and Paul A Viola. Learning from one example through shared densities on transforms. In IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pp. 464-471, 2000.
+Kaichun Mo, Shilin Zhu, Angel X Chang, Li Yi, Subarna Tripathi, Leonidas J Guibas, and Hao Su. PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 909-918, 2019.
+Alexey G Murzin, Steven E Brenner, Tim Hubbard, and Cyrus Chothia. SCOP: a structural classification of proteins database for the investigation of sequences and structures. Journal of Molecular Biology, 247(4):536-540, 1995.
+Alex Nichol and John Schulman. Reptile: A scalable metalearning algorithm. arXiv preprint arXiv:1803.02999, 2:2, 2018.
+Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722-729. IEEE, 2008.
+
+Boris Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. TADAM: Task dependent adaptive metric for improved few-shot learning. In Advances in Neural Information Processing Systems, pp. 721-731, 2018.
+Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. Rapid learning or feature reuse? Towards understanding the effectiveness of MAML. In International Conference on Learning Representations, 2020.
+Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations, 2017.
+Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B Tenenbaum, Hugo Larochelle, and Richard S Zemel. Meta-learning for semi-supervised few-shot classification. arXiv preprint arXiv:1803.00676, 2018.
+Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016.
+Andrei A Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In International Conference on Learning Representations, 2019.
+Jürgen Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta... hook. PhD thesis, Technische Universität München, 1987.
+Ramprasaath R Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, and Dhruv Batra. Grad-CAM: Why did you say that? arXiv preprint arXiv:1611.07450, 2016.
+Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. SmoothGrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825, 2017.
+Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077-4087, 2017.
+Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pp. 3319-3328, 2017.
+Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1199-1208, 2018.
+Pavel Tokmakov, Yu-Xiong Wang, and Martial Hebert. Learning compositional representations for few-shot recognition. In IEEE International Conference on Computer Vision, pp. 6372-6381, 2019.
+Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, et al. Meta-dataset: A dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096, 2019.
+Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pp. 3630-3638, 2016.
+Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset. 2011.
+Alex Wong and Alan L Yuille. One shot learning via compositions of meaningful patches. In IEEE International Conference on Computer Vision, pp. 1197-1205, 2015.
+Chen Xing, Negar Rostamzadeh, Boris Oreshkin, and Pedro O Pinheiro. Adaptive cross-modal few-shot learning. In Advances in Neural Information Processing Systems, pp. 4848-4858, 2019.
+Chi Zhang, Yujun Cai, Guosheng Lin, and Chunhua Shen. DeepEMD: Few-shot image classification with differentiable earth mover's distance and structured classifiers. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 12203-12213, 2020.
+Yuting Zhang, Yijie Guo, Yixin Jin, Yijun Luo, Zhiyuan He, and Honglak Lee. Unsupervised discovery of object landmarks as structural representations. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2694-2703, 2018.
+Yaohui Zhu, Chenlong Liu, and Shuqiang Jiang. Multi-attention meta learning for few-shot fine-grained image recognition. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20. International Joint Conferences on Artificial Intelligence Organization, 2020.
+
+# A EXPERIMENTAL SETUP
+
+Evaluation. We test all methods on the most widely used setting, 5-way classification. In each episode, we randomly sample 5 classes and, for each class, $k$ examples that form the support set of the $k$-shot classification task. We construct the query set from 16 examples, where each unlabeled sample in the query set belongs to one of the classes in the support set. We choose the best model according to the validation accuracy, and then evaluate it on the test set with novel classes. We report the mean accuracy over 600 randomly sampled episodes in the fine-tuning or meta-testing stage.
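As a concrete illustration, the episode construction above can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' code; it assumes the 16 query examples are drawn per sampled class, which is the common protocol, and all names are our own:

```python
import numpy as np

def sample_episode(labels, n_way=5, k_shot=5, n_query=16, rng=None):
    """Build one n_way-way k_shot-shot episode: a support set with k_shot
    labeled examples per sampled class, and a query set whose unlabeled
    examples all belong to the sampled classes."""
    if rng is None:
        rng = np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        support.append(idx[:k_shot])                 # labeled support examples
        query.append(idx[k_shot:k_shot + n_query])   # held-out query examples
    return classes, np.concatenate(support), np.concatenate(query)

# Toy usage: 20 base classes with 30 examples each.
labels = np.repeat(np.arange(20), 30)
classes, s_idx, q_idx = sample_episode(labels, rng=np.random.default_rng(0))
```

At meta-test time the same routine would be applied to the novel-class split, repeated 600 times to produce the reported mean accuracy.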
+
+On the CUB dataset, we followed the evaluation protocol of Chen et al. (2019b) and split the dataset into 100 base, 50 validation, and 50 test classes using exactly the same split. On Tabula Muris, we use 15 organs for training, 4 organs for validation, and 4 organs for testing, resulting in 59 base, 47 validation, and 37 test classes corresponding to cell types. The 102 classes of the Flowers dataset are split into 52 training, 25 validation, and 25 test classes. For the Reuters dataset, we leave out 5 classes for validation and 5 for testing.
+
+Implementation details. On the CUB dataset, we use the widely adopted four-layer convolutional backbone Conv-4 with an input size of $84 \times 84$ (Snell et al., 2017). We perform standard data augmentation, including random cropping, rotation, horizontal flipping and color jittering. We use the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate of $10^{-3}$ and weight decay 0. We train the 5-shot tasks for 40,000 episodes and 1-shot tasks for 60,000 episodes (Chen et al., 2019b). To speed up training of COMET, we share the network parameters between concept learners. In particular, we first forward the entire image $\mathbf{x}_i$ through the convolutional network to get a spatial feature embedding $f_{\theta}(\mathbf{x}_i)$, and then obtain the $j$-th concept embedding as $f_{\theta}(\mathbf{x}_i) \circ \mathbf{c}^{(j)}$. Since convolutional filters only operate on pixels locally, in practice we get similar performance whether we apply the mask at the beginning or at the end, while significantly reducing training time. In case a part is not annotated (i.e., not visible), we use the prototypical concept corresponding to the whole image to replace the missing concept. For the Tabula Muris dataset, we use a simple backbone network containing two fully-connected layers with batch normalization, ReLU activation and dropout. We use the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate of $10^{-3}$ and weight decay 0. We train the network for 1,000 episodes. For MAML, RelationNet, MatchingNet, FineTune and ProtoNet, we use the implementations from (Chen et al., 2019b). For MetaOptNet and DeepEMD we use the implementations from the respective papers.
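The masking step described above can be sketched as follows. This is an illustrative NumPy sketch rather than the authors' code; in particular, mean-pooling each masked feature map over its active locations is our own assumption about how the concept embedding is aggregated:

```python
import numpy as np

def concept_embeddings(feat, masks):
    """feat: (C, H, W) spatial feature map from the shared backbone.
    masks: (J, H, W) binary concept masks c^(j) at the feature resolution.
    Returns (J, C): each concept embedding is f(x) masked by c^(j) and
    mean-pooled over the mask's active locations."""
    masked = feat[None, :, :, :] * masks[:, None, :, :]   # f(x) masked by c^(j)
    area = masks.reshape(len(masks), -1).sum(axis=1).clip(min=1)
    return masked.reshape(len(masks), feat.shape[0], -1).sum(axis=2) / area[:, None]

feat = np.random.default_rng(0).standard_normal((64, 21, 21))  # toy C=64 feature map
masks = np.zeros((2, 21, 21))
masks[0, :10, :10] = 1   # e.g. a part-based concept region
masks[1] = 1             # whole-image concept
embs = concept_embeddings(feat, masks)   # shape (2, 64)
```

Because the mask is applied to the backbone output, the image is forwarded only once and all concept learners share the same parameters, which is what makes this formulation fast.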
+
+# B ABLATION STUDY ON DISTANCE FUNCTION
+
+We compare the effect of the distance metric on COMET's performance. We find that Euclidean distance consistently outperforms cosine distance on the fine-grained image classification and cell type annotation tasks.
+
+Table 4: The effect of distance metric on COMET's performance.
+
+| Distance | CUB 1-shot | CUB 5-shot | Tabula Muris 1-shot | Tabula Muris 5-shot |
+| --- | --- | --- | --- | --- |
+| Cosine | 65.7 ± 1.0 | 82.2 ± 0.6 | 77.1 ± 0.9 | 90.1 ± 0.6 |
+| Euclidean | 67.9 ± 0.9 | 85.3 ± 0.5 | 79.4 ± 0.9 | 91.7 ± 0.5 |
+
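For reference, the two metrics compared in Table 4 can be sketched as follows. This is an illustrative NumPy sketch; using the squared Euclidean distance, as is common for prototypical networks, is our assumption about the exact variant:

```python
import numpy as np

def scores(queries, prototypes, metric="euclidean"):
    """Per-class scores for each query (higher is better)."""
    if metric == "euclidean":
        # negative squared Euclidean distance to each prototype
        return -((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return q @ p.T   # cosine similarity

protos = np.array([[1.0, 0.0], [0.0, 1.0]])      # toy class prototypes
queries = np.array([[0.9, 0.1], [0.2, 1.1]])     # toy query embeddings
pred_euc = scores(queries, protos, "euclidean").argmax(axis=1)
pred_cos = scores(queries, protos, "cosine").argmax(axis=1)
```

On this toy data both metrics agree; the ablation shows they differ in practice, with Euclidean distance performing better.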
+# C ABLATION STUDY ON BACKBONE NETWORK
+
+We compare the performance of COMET to baseline methods using the deeper Conv-6 backbone instead of the Conv-4 backbone on the CUB dataset. We use part-based annotations to define concepts. The results are reported in Table 5. COMET outperforms all baselines even with the deeper backbone. Additionally, by adding just the single most frequent concept, corresponding to a bird's beak, on top of the whole-image concept, COMET improves ProtoNet's performance by $3.8\%$ on the 1-shot task and $2.2\%$ on the 5-shot task.
+
+Table 5: Performance using the Conv-6 backbone on the CUB dataset. We report average accuracy and standard deviation over 600 randomly sampled episodes.
+
+| Method | CUB 1-shot | CUB 5-shot |
+| --- | --- | --- |
+| Finetune | 66.0 ± 0.9 | 82.0 ± 0.6 |
+| MatchingNet | 66.5 ± 0.9 | 77.9 ± 0.7 |
+| MAML | 66.3 ± 1.1 | 78.8 ± 0.7 |
+| RelationNet | 64.4 ± 0.9 | 80.2 ± 0.6 |
+| MetaOptNet | 65.5 ± 1.2 | 83.0 ± 0.8 |
+| DeepEMD | 66.8 ± 0.9 | 83.8 ± 0.7 |
+| ProtoNet | 66.4 ± 1.0 | 82.0 ± 0.6 |
+| COMET-1 concept | 68.9 ± 0.9 | 83.8 ± 0.6 |
+| COMET | 72.2 ± 0.9 | 87.6 ± 0.5 |
+
+# D ABLATION STUDY ON ENSEMBLE METHODS
+
+We compare COMET to an ensemble of prototypical networks. We train the ProtoNets in parallel and combine their outputs by majority voting, as typically done in ensemble models. In particular, given a query point $\mathbf{x}_q$ and prototypes $\{\mathbf{p}_k^{(j)}\}_{k}$, each ProtoNet model $j$ in the ensemble outputs the probability distribution:
+
+$$
+p_{\boldsymbol{\theta}}^{(j)}(y = k \mid \mathbf{x}_q) = \frac{\exp\left(-d\left(f_{\boldsymbol{\theta}}^{(j)}(\mathbf{x}_q), \mathbf{p}_k^{(j)}\right)\right)}{\sum_{k'} \exp\left(-d\left(f_{\boldsymbol{\theta}}^{(j)}(\mathbf{x}_q), \mathbf{p}_{k'}^{(j)}\right)\right)}. \tag{5}
+$$
+
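A minimal NumPy sketch of Eq. 5 and the majority vote follows. This is illustrative only; taking the distance $d$ to be the squared Euclidean distance is an assumption, and all embeddings are toy values:

```python
import numpy as np

def protonet_probs(query_emb, prototypes):
    """Eq. 5: softmax over negative distances d between one query
    embedding and the class prototypes of a single ProtoNet."""
    logits = -((prototypes - query_emb) ** 2).sum(axis=1)  # d = squared Euclidean
    e = np.exp(logits - logits.max())                      # numerically stable softmax
    return e / e.sum()

def ensemble_predict(query_embs, prototype_sets):
    """Majority vote over the argmax class of each ensemble member j."""
    votes = [protonet_probs(q, p).argmax() for q, p in zip(query_embs, prototype_sets)]
    return int(np.bincount(votes).argmax())

protos = np.array([[0.0, 0.0], [5.0, 5.0]])   # prototypes of classes 0 and 1
# Embeddings of the same query under three independently trained ProtoNets:
query_embs = [np.array([0.1, 0.0]), np.array([0.3, 0.2]), np.array([4.9, 5.0])]
pred = ensemble_predict(query_embs, [protos] * 3)   # two of three members vote class 0
```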
+On the CUB dataset, we use 5 ProtoNets. We use a smaller number than the number of concepts because training an ensemble of a larger number of ProtoNets on CUB results in memory issues due to the unshared weights. On the Tabula Muris and Reuters datasets we use the same number of ProtoNets as the number of concepts, that is, 190 on Tabula Muris and 126 on Reuters.
+
+# E UNSUPERVISED CONCEPT ANNOTATION: ADDITIONAL RESULTS
+
+We evaluate COMET and baseline methods on the Flowers dataset for fine-grained image classification. We automatically extract concepts using the unsupervised landmark discovery approach of Zhang et al. (2018). Results in Table 6 show that COMET outperforms all baselines by a large margin.
+
+Table 6: Results on 1-shot and 5-shot classification on the Flowers dataset. We report average accuracy and standard deviation over 600 randomly sampled episodes.
+
+| Method | Flowers 1-shot | Flowers 5-shot |
+| --- | --- | --- |
+| Finetune | 65.4 ± 0.9 | 81.9 ± 0.7 |
+| MatchingNet | 66.0 ± 0.9 | 82.0 ± 0.8 |
+| MAML | 63.2 ± 1.1 | 76.6 ± 0.8 |
+| RelationNet | 66.4 ± 0.9 | 80.8 ± 0.6 |
+| MetaOptNet | 64.8 ± 1.0 | 81.3 ± 0.7 |
+| DeepEMD | 67.2 ± 0.9 | 82.9 ± 0.7 |
+| ProtoNet | 64.4 ± 1.0 | 80.2 ± 0.8 |
+| COMET | 70.4 ± 0.9 | 86.7 ± 0.6 |
+
+On the Tabula Muris and Reuters datasets, we test COMET without any prior knowledge by defining concepts using selected random masks. In particular, we randomly sample subsets of features as candidate concepts and then use the validation set to select the concepts with the highest importance scores as defined by COMET. We use the same number of concepts as in the experiments with prior knowledge. Results are reported in Table 7.
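The sample-then-select procedure can be sketched as follows. This is a hedged NumPy sketch, not the authors' code: the mask density `frac` and the toy importance score are our own placeholders, standing in for COMET's validation-set importance scores:

```python
import numpy as np

def select_random_mask_concepts(n_features, n_candidates, n_concepts,
                                importance_fn, frac=0.2, seed=0):
    """Sample random feature subsets as candidate concepts, then keep the
    n_concepts with the highest validation importance score."""
    rng = np.random.default_rng(seed)
    masks = rng.random((n_candidates, n_features)) < frac  # random binary masks
    importance = np.array([importance_fn(m) for m in masks])
    top = np.argsort(importance)[::-1][:n_concepts]        # highest scores first
    return masks[top]

# Toy importance score: prefer masks that cover the first 10 features.
chosen = select_random_mask_concepts(
    n_features=100, n_candidates=50, n_concepts=5,
    importance_fn=lambda m: m[:10].sum())
```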
+
+# F UNSUPERVISED CONCEPT ANNOTATION: LANDMARKS EXAMPLES
+
+To assess the performance of COMET using automatically extracted visual concepts on the CUB dataset, we apply the autoencoding framework for landmark discovery proposed in (Zhang et al.,
+
+Table 7: Results on 1-shot and 5-shot classification on the Tabula Muris and Reuters datasets with selected random masks as concepts and with human-defined concepts. We report average accuracy and standard deviation over 600 randomly sampled episodes.
+
+| Method | Tabula Muris 1-shot | Tabula Muris 5-shot | Reuters 1-shot | Reuters 5-shot |
+| --- | --- | --- | --- | --- |
+| with selected random masks | 77.2 ± 1.0 | 89.8 ± 0.5 | 70.1 ± 0.9 | 89.0 ± 0.4 |
+| with prior knowledge | 79.4 ± 0.9 | 91.7 ± 0.5 | 71.5 ± 0.7 | 89.8 ± 0.3 |
+
+2018). We use the default parameters and implementation provided by the authors, and set the number of landmarks, i.e., concepts, to 30. The encoding module outputs the coordinates of the estimated landmarks, and we create each concept mask as a bounding box around the discovered landmark. Examples of extracted landmarks for 20 images from the CUB dataset are visualized in Figure 5.
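The bounding-box mask construction can be sketched as follows; an illustrative NumPy sketch, where the box half-width of 16 pixels is our own placeholder since the box size is not stated above:

```python
import numpy as np

def landmark_to_mask(cx, cy, height, width, half=16):
    """Binary concept mask: a bounding box of half-width `half` centered
    on a discovered landmark coordinate (cx, cy), clipped to the image."""
    mask = np.zeros((height, width), dtype=bool)
    y0, y1 = max(0, cy - half), min(height, cy + half)
    x0, x1 = max(0, cx - half), min(width, cx + half)
    mask[y0:y1, x0:x1] = True
    return mask

m = landmark_to_mask(10, 10, 84, 84)   # box clipped at the image border
```

The resulting masks play the same role as the human part annotations in the supervised setting.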
+
+Figure 5: Examples of automatically extracted landmarks using (Zhang et al., 2018) on the CUB dataset.
+
+# G INTERPRETABILITY: LOCAL EXPLANATIONS
+
+Here, we demonstrate COMET's local explanations on the CUB dataset. Given a single query data point, COMET assigns a local importance score to each concept based on the distance between the concept embedding of the query data point and the prototypical concept. We then rank concepts according to their local concept importance scores. Figure 6 shows examples of ranked concepts. The importance scores assigned by COMET visually reflect the most relevant bird features well.
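This ranking procedure can be sketched as follows; an illustrative NumPy sketch in which the concept names, embeddings and prototypes are toy values:

```python
import numpy as np

def rank_concepts(query_concept_embs, class_prototypes, names):
    """Rank a query's concepts by local importance: a concept whose
    embedding is closer to the class's prototypical concept ranks higher."""
    d = np.linalg.norm(query_concept_embs - class_prototypes, axis=1)
    return [names[i] for i in np.argsort(d)]   # smallest distance first

names = ["beak", "crown", "tail"]
query = np.array([[0.0, 0.1], [2.0, 2.0], [0.5, 0.5]])   # toy concept embeddings
protos = np.zeros((3, 2))                                # toy concept prototypes
ranked = rank_concepts(query, protos, names)
```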
+
+# H INTERPRETABILITY: LOCAL SIMILARITY
+
+Given a fixed concept of interest, we apply COMET to sort images by the distance of their concept embedding to the concept prototype. Figure 7 shows examples of chipping sparrow images with the belly concept embedding most similar to the prototypical belly, and images with the belly
+
+
+Figure 6: Examples of COMET's local explanations on the CUB dataset. Concepts are ranked according to the highest local concept importance scores. Qualitatively, local importance scores correctly reflect the most relevant bird features. For three 1-shot support set and query data point pairs, the top five ranked concepts are: (1) beak, forehead, crown, throat, left leg; (2) breast, belly, crown, nape, forehead; (3) tail, left leg, right leg, nape, breast.
+
+concept embedding most distant to the prototypical belly. The most similar images indeed have a clearly visible belly that reflects the prototypical belly well. In contrast, the most distant images have only a small part of the belly visible, indicating that COMET can be used to detect misannotated or non-visible concepts.
+
+Figure 7: Images ranked according to the distance of their belly concept embedding to the belly concept prototype. Most similar images (top) and most distant images (bottom). Images closest to the prototype have clearly visible belly part that visually looks like prototypical belly of a chipping sparrow, whereas most distant images do not have belly part clearly visible.
+
\ No newline at end of file
diff --git a/conceptlearnersforfewshotlearning/images.zip b/conceptlearnersforfewshotlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..98794b6219ab95a77d54ea86a8c36e71ccac1f73
--- /dev/null
+++ b/conceptlearnersforfewshotlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7006de950b9e36ddc57b86f5675fed96f7b37aa83ec8df93c49469e720bc8391
+size 703971
diff --git a/conceptlearnersforfewshotlearning/layout.json b/conceptlearnersforfewshotlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..1ffba6137a95dba174d853bff0e0291273eb6f05
--- /dev/null
+++ b/conceptlearnersforfewshotlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2a3776296e03c54c5974d5e66691e1d157d00575b99999491e28a67ea353a35
+size 515832
diff --git a/conditionalgenerativemodelingvialearningthelatentspace/76643a5c-0ba2-4722-b6d1-a54c2f86d442_content_list.json b/conditionalgenerativemodelingvialearningthelatentspace/76643a5c-0ba2-4722-b6d1-a54c2f86d442_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b457a4f3d92b0cf7836d0e8421cb599d76c25721
--- /dev/null
+++ b/conditionalgenerativemodelingvialearningthelatentspace/76643a5c-0ba2-4722-b6d1-a54c2f86d442_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:300a4c16edc2fae6f2ca6b2d780b2458da6ed6f46e8febb38858929ceb2ef44b
+size 89979
diff --git a/conditionalgenerativemodelingvialearningthelatentspace/76643a5c-0ba2-4722-b6d1-a54c2f86d442_model.json b/conditionalgenerativemodelingvialearningthelatentspace/76643a5c-0ba2-4722-b6d1-a54c2f86d442_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ef2362d87940966d4a518636b090dc752257ad48
--- /dev/null
+++ b/conditionalgenerativemodelingvialearningthelatentspace/76643a5c-0ba2-4722-b6d1-a54c2f86d442_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07d73d689e507981e4c43e87cc411d0d2b34b785c640448557627ae23653340e
+size 114923
diff --git a/conditionalgenerativemodelingvialearningthelatentspace/76643a5c-0ba2-4722-b6d1-a54c2f86d442_origin.pdf b/conditionalgenerativemodelingvialearningthelatentspace/76643a5c-0ba2-4722-b6d1-a54c2f86d442_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3ba6adbdf97cca974f2ea908d513d8b7bef9dce2
--- /dev/null
+++ b/conditionalgenerativemodelingvialearningthelatentspace/76643a5c-0ba2-4722-b6d1-a54c2f86d442_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66c6234271bb40ea1dde7dfab1aabaa82d371cb6775e9ea40994895270dc8b24
+size 7895423
diff --git a/conditionalgenerativemodelingvialearningthelatentspace/full.md b/conditionalgenerativemodelingvialearningthelatentspace/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0338edc55ffa09c1ba5f2e66a178ee9909a4fa84
--- /dev/null
+++ b/conditionalgenerativemodelingvialearningthelatentspace/full.md
@@ -0,0 +1,367 @@
+# CONDITIONAL GENERATIVE MODELING VIA LEARNING THE LATENT SPACE
+
+Sameera Ramasinghe†\*, Kanchana Ranasinghe, Salman Khan‡,
+
+Nick Barnes† and Stephen Gould†
+
+†Australian National University, \*Data61 (CSIRO), ‡Mohamed bin Zayed University of AI
+
+sameera.ramasinghe@anu.edu.au
+
+# ABSTRACT
+
+Although deep learning has achieved appealing results on several machine learning tasks, most of the models are deterministic at inference, limiting their application to single-modal settings. We propose a novel general-purpose framework for conditional generation in multimodal spaces that uses latent variables to model generalizable learning patterns while minimizing a family of regression cost functions. At inference, the latent variables are optimized to find solutions corresponding to multiple output modes. Compared to existing generative solutions, our approach demonstrates faster and more stable convergence, and can learn better representations for downstream tasks. Importantly, it provides a simple generic model that can perform better than highly engineered pipelines tailored using domain expertise on a variety of tasks, while generating diverse outputs. Code available at https://github.com/samgregoost/cGML.
+
+# 1 INTRODUCTION
+
+Conditional generative models provide a natural mechanism to jointly learn a data distribution and optimize predictions. In contrast, discriminative models improve predictions by modeling the label distribution. Learning to model the data distribution allows generating novel samples and is considered a preferred way to understand the real world. Existing conditional generative models have generally been explored in single-modal settings, where a one-to-one mapping between input and output domains exists (Nalisnick et al., 2019; Fetaya et al., 2020). Here, we investigate continuous multimodal (CMM) spaces for generative modeling, where one-to-many mappings exist between input and output domains. This is critical since many real world situations are inherently multimodal, e.g., humans can imagine several completions for a given occluded image. In a discrete setting, this problem becomes relatively easy to tackle using techniques such as maximum-likelihood-estimation, since the output can be predicted as a vector (Zhang et al., 2016), which is not possible in continuous domains. One way to model CMM spaces is by using variational inference, e.g., variational autoencoders (VAE) (Kingma & Welling, 2013). However, the approximated posterior distribution of VAEs is often restricted to the Gaussian family, which hinders their ability to model more complex distributions. As a solution, Maaløe et al. (2016) suggested using auxiliary variables to improve the variational distribution. To this end, the latent variables are hierarchically correlated through injected auxiliary variables, which can produce non-Gaussian distributions. A related work by Rezende & Mohamed (2015) proposed Normalizing Flows, which can hierarchically generate more complex probability distributions by applying a series of bijective mappings to an original simpler distribution. Recently, Chang et al. (2019) proposed a model where a separate variable can be used to vary the impact of different loss components at inference, which allows diverse outputs. For a more detailed discussion of these methods see App. 1.
+
+In addition to the aforesaid methods, in order to model CMM spaces, a prominent approach in the literature is to use a combination of reconstruction and adversarial losses (Isola et al., 2017; Zhang et al., 2016; Pathak et al., 2016). However, this entails key shortcomings. 1) The goals of adversarial and reconstruction losses are contradictory (Sec. 4), hence model engineering and numerous regularizers are required to support convergence (Lee et al., 2019; Mao et al., 2019),
+
+thereby resulting in less-generic models tailored for specific applications (Zeng et al., 2019; Vitoria et al., 2020). 2) The adversarial loss based models are notorious for difficult convergence due to the challenge of finding the Nash equilibrium of a non-convex min-max game in high dimensions (Barnett, 2018; Chu et al., 2020; Kodali et al., 2017). 3) The convergence is heavily dependent on the architecture, hence such models show a lack of scalability (Thanh-Tung et al., 2019; Arora & Zhang, 2017). 4) The promise of assisting downstream tasks remains challenging, with a large gap in performance between the generative modelling approaches and their discriminative counterparts (Grathwohl et al., 2020; Jing & Tian, 2020).
+
+In this work, we propose a general-purpose framework—Conditional Generation by Modeling the Latent Space (cGML)—for modeling CMM spaces using a set of domain-agnostic regression cost functions instead of the adversarial loss. This improves both the stability and eliminates the incompatibility between the adversarial and reconstruction losses, allowing more precise outputs while maintaining diversity. The underlying notion is to learn the 'behaviour of the latent variables' in minimizing these cost functions while converging to an optimum mode during the training phase, and mimicking the same at inference. Despite being a novel direction, the proposed framework showcases promising attributes by: (a) achieving state-of-the-art results on a diverse set of tasks using a generic model, implying generalizability, (b) rapid convergence to optimal modes despite architectural changes, (c) learning useful features for downstream tasks, and (d) producing diverse outputs via traversal through multiple output modes at inference.
+
+# 2 PROPOSED METHODOLOGY
+
+We define a family of cost functions $\{E_{i,j} = d(y_{i,j}^g,\mathcal{G}(x_j,w))\}$, where $x_{j}\sim \chi$ is the input, $y_{i,j}^{g}\sim \Upsilon$ is the $i^{th}$ ground-truth mode for $x_{j}$, $\mathcal{G}$ is a generator function with weights $w$, and $d(\cdot ,\cdot)$ is a distance function. Note that the number of cost functions $E_{(\cdot ,j)}$ for a given $x_{j}$ can vary over $\chi$. Our aim here is to come up with a generator function $\mathcal{G}(x_j,w)$ that can minimize each $E_{i,j},\forall i$ as $\mathcal{G}(x_j,w)\to y_{i,j}^g$. However, since $\mathcal{G}$ is a deterministic function ($x$ and $w$ are both fixed at inference), it can only produce a single output. Therefore, we introduce a latent vector $z$ to the generator function, which can be used to converge $\bar{y}_{i,j} = \mathcal{G}(x_j,w,z_{i,j})$ towards a ground truth $y_{i,j}^{g}$ at inference, and possibly, to multiple solutions. Formally, the family of cost functions now becomes $\{\hat{E}_{i,j} = d(y_{i,j}^g,\mathcal{G}(x_j,w,z_{i,j}))\},\forall z_{i,j}\sim \zeta$. Then, our training objective can be defined as finding a set of optimal $z_{i}^{*}\in \zeta$ and $w^{*}\in \omega$ by minimizing $\mathbb{E}_{i\sim I}[\hat{E}_{i,j}]$, where $I$ is the number of possible solutions for $x_{j}$. Note that $w^{*}$ is fixed for all $i$ and a different $z_{i}^{*}$ exists for each $i$. Considering all the training samples $x_{j}\sim \chi$, our training objective becomes,
+
+$$
+\{z _ {i, j} ^ {*} \}, w ^ {*} = \underset {z _ {i, j} \in \zeta , w \in \omega} {\arg \min } \mathbb {E} _ {i \in I, j \in J} [ \hat {E} _ {i, j} ]. \tag {1}
+$$
+
+Eq. 1 can be optimized via Algorithm 1 (proof in App. 2.2). Intuitively, the goal of Eq. 1 is to obtain a family of optimal latent codes $\{z_{i,j}^{*}\}$ , each causing a global minima in the corresponding $\hat{E}_{i,j}$ as $y_{i,j}^{g} = \mathcal{G}(x_{j},w,z_{i,j}^{*})$ . Consequently, at inference, we can optimize $\bar{y}_{i,j}$ to converge to an optimal mode in the output space by varying $z$ . Therefore, we predict an estimated $\bar{z}_{i,j}$ at inference,
+
+$$
+\bar {z} _ {i, j} \approx \underset {z} {\arg \min } \, \hat {E} _ {i, j}, \tag {2}
+$$
+
+for each $y_{i,j}^{g}$ , which in turn can be used to obtain the prediction $\mathcal{G}(x_j, w, \bar{z}_{i,j}) \approx y_{i,j}^g$ . In other words, for a selected $x_j$ , let $\bar{y}_{i,j}^t$ be the initial estimate for $\bar{y}_{i,j}$ . At inference, $z$ can traverse gradually towards an optimum point $y_{i,j}^{g}$ in the space, forcing $\bar{y}_{i,j}^{t + n} \to y_{i,j}^{g}$ , in finite steps $(n)$ .
+
+However, still a critical problem exists: Eq. 2 depends on $y_{i,j}^{g}$ , which is not available at inference. As a remedy, we enforce Lipschitz constraints on $\mathcal{G}$ over $(x_{j},z_{i,j})$ , which bounds the gradient norm as,
+
+$$
+\frac {\left\| \mathcal {G} \left(x _ {j} , w ^ {*} , z _ {i , j} ^ {*}\right) - \mathcal {G} \left(x _ {j} , w ^ {*} , z _ {0}\right) \right\|}{\left\| z _ {i , j} ^ {*} - z _ {0} \right\|} \leq \int \left\| \nabla_ {z} \mathcal {G} \left(x _ {j}, w ^ {*}, \gamma (t)\right) \right\| d t \leq C, \tag {3}
+$$
+
+where $z_0 \sim \zeta$ is an arbitrary random initialization, $C$ is a constant, and $\gamma(\cdot)$ is a straight path from $z_0$ to $z_{i,j}^*$ (proof in App. 2.1). Intuitively, Eq. 3 implies that the gradients $\nabla_z\mathcal{G}(x_j,w^*,z_0)$ along the path $\gamma(\cdot)$ do not tend to vanish or explode; hence, finding the path to the optimal $z_{i,j}^*$ in the space $\zeta$ becomes a fairly straightforward regression problem. Moreover, enforcing the Lipschitz constraint
+
+
+Figure 1: Training and inference process. Refer to Algorithm 1 for the training process. At inference, $z$ is iteratively updated using the predictions of $\mathcal{Z}$ and fed to $\mathcal{G}$ to obtain increasingly fine-tuned outputs (see Sec. 3).
+
+encourages meaningful structuring of the latent space: suppose $z_{1,j}^{*}$ and $z_{2,j}^{*}$ are two optimal codes corresponding to two ground truth modes for a particular input. Since $\| z_{2,j}^{*} - z_{1,j}^{*} \|$ is lower bounded by $\frac{\|\mathcal{G}(x_j, w^*, z_{2,j}^*) - \mathcal{G}(x_j, w^*, z_{1,j}^*)\|}{L}$ , where $L$ is the Lipschitz constant, the minimum distance between the two latent codes is proportional to the difference between the corresponding ground truth modes. In practice, we observed that this encourages the optimum latent codes to be placed sparsely (visual illustration in App. 2), which helps a network to learn distinctive paths towards different modes.
+
+# 2.1 CONVERGENCE AT INFERENCE
+
+We formulate finding the convergence path of $z$ at inference as a regression problem, i.e., $z_{t+1} = r(z_t, x_j)$. We implement $r(\cdot)$ as a recurrent neural network (RNN). The series of predicted values $\{z_{(t+k)} : k = 1, 2,.., N\}$ can be modeled as a first-order Markov chain, requiring no memory for the RNN. We observe that enforcing Lipschitz continuity on $\mathcal{G}$ over $z$ leads to smooth trajectories even in high dimensional settings; hence, memorizing more than one step of history is redundant. However, $z_{t+1}$ is not a state variable, i.e., the existence of multiple modes for the output prediction $\bar{y}$ leads to multiple possible solutions for $z_{t+1}$. In contrast, $\mathbb{E}[z_{t+1}]$ is a state variable w.r.t. the state $(z_t, x)$, which can be used as an approximation to reach the optimal $z^*$ at inference. Therefore, instead of directly learning $r(\cdot)$, we learn a simplified version $r'(z_t, x) = \mathbb{E}[z_{t+1}]$. Intuitively, the whole process can be understood as observing the behavior of $z$ on a smooth surface during the training stage, and predicting the movement at inference. A key aspect of $r'(z_t, x)$ is that the model is capable of converging to multiple possible optimum modes at inference based on the initial position of $z$.
+
+# 2.2 MOMENTUM AS A SUPPLEMENTARY AID
+
+Based on Sec. 2.1, $z$ can now traverse to an optimal position $z^{*}$ during inference. However, there can exist rare symmetrical positions in $\zeta$ where $\mathbb{E}[z_{t + 1}] - z_t\approx 0$ although far away from $\{z^{*}\}$, forcing $z_{t + 1}\approx z_t$. Simply put, this can occur if updates towards $z_{t + 1}$ have been observed in many non-orthogonal directions whose vector sum is approximately zero. This can fool the system into falsely identifying convergence points, forming phantom optimum point distributions amongst the true distribution (see Fig. 3). To avoid such behavior, we learn the expected momentum $\mathbb{E}[\rho (z_t,x_j)] = \alpha \mathbb{E}[|z_{t + 1} - z_t|_{x_j}]$ at each $(z_{t},x_{j})$ during the training phase, where $\alpha$ is an empirically chosen scalar. In practice, $\mathbb{E}[\rho (z_t,x_j)]\to 0$ as $z_{t + 1},z_{t}\rightarrow \{z^{*}\}$. Thus, to avoid phantom distributions, we improve the $z$ update as,
+
+$$
+z _ {t + 1} = z _ {t} + \mathbb {E} [ \rho (z _ {t}, x _ {j}) ] \left[ \frac {r ^ {\prime} \left(z _ {t} , x _ {j}\right) - z _ {t}}{\left\| r ^ {\prime} \left(z _ {t} , x _ {j}\right) - z _ {t} \right\|} \right]. \tag {4}
+$$
+
+Since both $\mathbb{E}[\rho (z_t,x_j)]$ and $r^\prime (z_t,x_j)$ are functions of $(z_{t},x_{j})$, we jointly learn these two functions using a single network $\mathcal{Z}(z_t,x_j)$. Note that the coefficient $\mathbb{E}[\rho (z_t,x_j)]$ serves two practical purposes: 1) it slows down the movement of $z$ near true distributions, 2) it pushes $z$ out of phantom distributions.
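The update in Eq. 4 can be sketched as follows. This is an illustrative NumPy sketch: the prediction of $\mathcal{Z}$ is stubbed with a toy value, and the small `eps` added for numerical stability is our own addition:

```python
import numpy as np

def update_z(z_t, z_next_pred, rho, eps=1e-8):
    """Eq. 4: take a step of length E[rho] from z_t along the unit
    direction towards the predicted r'(z_t, x) = z_next_pred."""
    direction = z_next_pred - z_t
    return z_t + rho * direction / (np.linalg.norm(direction) + eps)

# One inference step: Z predicts the next latent code and the momentum.
z = np.zeros(4)
z = update_z(z, z_next_pred=np.array([2.0, 0.0, 0.0, 0.0]), rho=0.5)
```

At inference this step is simply iterated a pre-defined number of times; near a true mode the predicted $\mathbb{E}[\rho]$ shrinks, so the updates slow down and $z$ settles.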
+
+# 3 OVERALL DESIGN
+
+The proposed model consists of three major blocks as shown in Fig. 1: an encoder $\mathcal{H}$, a generator $\mathcal{G}$ and $\mathcal{Z}$. The detailed architecture diagram for $128 \times 128$ inputs is shown in Fig. 2. Note that in the derivations in Sec. 2, we used $x$ instead of $h = \mathcal{H}(x)$, as $h$ is a high-level representation of $x$. The training process is illustrated in Algorithm 1. At each optimization step $z_{t+1} = z_t - \beta \nabla_{z_t}[\hat{E}_{i,j}]$, $\mathcal{Z}$ is trained separately to approximate $(z_{t+1}, \rho)$. At inference, $x$ is fed to $\mathcal{H}$, and then $\mathcal{Z}$ optimizes the output $\bar{y}$ by updating $z$ for a pre-defined number of iterations of Eq. 4. For $\hat{E}(\cdot, \cdot)$, we use the $L_1$ loss. Furthermore, it is important to limit the search space for $z_{t+1}$ to improve the performance of $\mathcal{Z}$. To this end, we
+
+
+Figure 2: Overall architecture for $128 \times 128$ inputs.
+
+Algorithm 1: Training algorithm
+
+    sample inputs {x_1, x_2, ..., x_J} ∈ χ; sample outputs {y_1, y_2, ..., y_J} ∈ Υ
+    for k epochs do
+        for x in χ do
+            for l steps do
+                update z = {z_1, z_2, ..., z_J} using ∇_z Ê        ▷ freeze H, G, Z and update z
+            update Z using ∇_w L_1[(z_{t+1}, ρ), Z(z_t, H(x))]     ▷ freeze H, G, z and update Z
+            update G, H using ∇_w Ê                                ▷ freeze Z, z and update H, G
+
+sample $z$ from the surface of the $n$-dimensional sphere $(\mathbb{S}^n)$. Moreover, to ensure faster convergence of the model, we force Lipschitz continuity on both $\mathcal{Z}$ and $\mathcal{G}$ (App. 2.4). For hyper-parameters and training details, see App. 3.1.
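The exact Lipschitz enforcement is described in App. 2.4 and is not reproduced here; as a hedged illustration only, one standard way to bound the Lipschitz constant of a dense layer is spectral normalization via power iteration:

```python
import numpy as np

def spectral_normalize(W, n_iters=30, seed=0):
    """Rescale W by an estimate of its largest singular value (power
    iteration), so the induced linear map is approximately 1-Lipschitz."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v          # estimated largest singular value
    return W / sigma

W = np.array([[3.0, 0.0], [0.0, 1.0]])
W_sn = spectral_normalize(W)   # largest singular value rescaled to ~1
```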
+
+# 4 MOTIVATION
+
+Here, we explain the drawbacks of conditional GAN methods and illustrate our idea via a toy example.
+
+Incompatibility of adversarial and reconstruction losses: cGANs use a combination of adversarial and reconstruction losses. We note that this combination is suboptimal for modeling CMM spaces.
+
+Remark: Consider a generator $G(x,z)$ and a discriminator $D(x,z)$ , where $x$ and $z$ are the input and the noise vector, respectively. Then, consider an arbitrary input $x_{j}$ and the corresponding set of ground-truths $\{y_{i,j}^{g}\}$ , $i = 1,2,\dots N$ . Further, let us define the optimal generator $G^{*}(x_{j},z) = \hat{y},\hat{y}\in \{y_{i,j}^{g}\}$ , $L_{GAN} = \mathbb{E}_i[\log D(y_{i,j}^g)] + \mathbb{E}_z[\log (1 - D(G(x_j,z)))]$ and $L_{\ell} = \mathbb{E}_{i,z}[|y_{i,j}^{g} - G(x_{j},z)|]$ . Then, $G^{*}\neq \hat{G}^{*}$ where $\hat{G}^{*} = \arg \min_{G}\max_{D}L_{GAN} + \lambda L_{\ell},\forall \lambda \neq 0$ . (Proof in App. 2.3).
+
+Generalizability: The incompatibility of the above-mentioned loss functions demands domain-specific design choices from models that target high realism in CMM settings. This hinders generalizability across different tasks (Vitoria et al., 2020; Zeng et al., 2019). We further argue that due to this discrepancy, cGANs learn sub-optimal features that are less useful for downstream tasks (Sec. 5.3).
+
+Convergence and the sensitivity to the architecture: The difficulty of converging GANs to the Nash equilibrium of a non-convex min-max game in high-dimensional spaces is well explored (Barnett, 2018; Chu et al., 2020; Kodali et al., 2017). Goodfellow et al. (2014b) underlines that if the discriminator has enough capacity and is optimal at every step of the GAN algorithm, then the generated distribution converges to the real distribution; however, this cannot be guaranteed in a practical scenario. In fact, Arora et al. (2018) confirmed that the adversarial objective can easily approach an equilibrium even if the generated distribution has very low support, and further, that the number of training samples required to avoid mode collapse can be in the order of $\exp(d)$ (where $d$ is the data dimension).
+
+
+
+
+
+
+
+
+
+
+Figure 3: Toy Example: Plots generated for each dimension of the CMM space $\Upsilon$ . (a) Ground-truth distributions. (b) Model outputs for $L_{1}$ loss. (c) Output when trained with the proposed objective (without $\rho$ correction). Note the phantom distribution identified by the model. (d) $\mathbb{E}[\rho]$ as a heatmap on $(x, y)$ . $\mathbb{E}[\rho]$ is lower near the true distribution and higher otherwise. (e) Model outputs after $\rho$ correction.
+
+Multimodality: The ability to generate diverse outputs, i.e., convergence to multiple modes in the output space, is an important requirement. Despite the noise input, cGANs generally lack the ability to generate diverse outputs (Lee et al., 2019). Pathak et al. (2016) and Iizuka et al. (2016) even state that better results are obtained when the noise is completely removed. Further, variants of cGAN that target diversity often face a trade-off between realism and diversity (He et al., 2018), as they have to compromise between the reconstruction and adversarial losses.
+
+A toy example: Here, we experiment with the formulations in Sec. 2. Consider a 3D CMM space $y = \pm 4(x,x^2,x^3)$ . Then, we construct three-layer multi-layer perceptrons (MLPs) to represent each of the functions $\mathcal{H}$ , $\mathcal{G}$ , and $\mathcal{Z}$ , and compare the proposed method against the $L_{1}$ loss. Figure 3 illustrates the results. As expected, the $L_{1}$ loss generates the line $y = 0$ , and is inadequate to model the multimodal space. As explained in Sec. 2.2, without momentum correction, the network is fooled by a phantom distribution where $\mathbb{E}[z_{t + 1}] \approx 0$ at training time. However, the push of momentum removes the phantom distribution and refines the output to closely resemble the ground-truth distribution. As implied in Sec. 2.2, the momentum is minimized near the true distribution and maximized otherwise.
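The collapse in Fig. 3(b) is easy to reproduce without the MLP: gradient descent on a unimodal prediction under a symmetric two-mode target lands on the midpoint $y = 0$. A minimal numpy sketch (squared loss is used for a clean fixed point; the $L_1$ loss likewise admits every point between the modes):

```python
import numpy as np

# Toy CMM space: for each x, both y = +4(x, x^2, x^3) and y = -4(x, x^2, x^3)
# are valid outputs. A unimodal regressor trained on both collapses between them.
x = 0.8
branch = 4 * np.array([x, x**2, x**3])
modes = np.stack([branch, -branch])        # the two valid outputs for this x

pred = modes[0].copy()                     # start exactly on one mode
for _ in range(2000):
    grad = 2 * (pred - modes).mean(axis=0) # gradient of the mean squared error
    pred -= 0.05 * grad
# pred has been dragged to the midpoint y = 0, which lies on neither branch
```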
+
+# 5 EXPERIMENTS AND DISCUSSIONS
+
+The distribution of natural images lies on a high-dimensional manifold, making the task of modelling it extremely challenging. Moreover, conditional image generation poses an additional challenge with its constrained multimodal output space (a single input may correspond to multiple outputs, while not all of them are available for training). In this section, we experiment on several such tasks. For a fair comparison with a similar capacity GAN, we use the encoder and decoder architectures used in Pathak et al. (2016) for $\mathcal{H}$ and $\mathcal{G}$ , respectively. We make two minor modifications: the channel-wise fully connected (FC) layers are removed and U-Net style skip connections are added (see App. 3.1). We train the existing models for a maximum of 200 epochs where pretrained weights are not provided, and demonstrate the generalizability of our theoretical framework in diverse practical settings by using a generic network for all the comparisons. Models used for comparisons are denoted as follows: PN (Zeng et al., 2019), CA (Yu et al., 2018b), DSGAN (Yang et al., 2019), CIC (Zhang et al., 2016), RFR (Li et al., 2020), Chroma (Vitoria et al., 2020), P2P (Isola et al., 2017), Iizuka (Iizuka et al., 2016), CE (Pathak et al., 2016), CRN (Chen & Koltun, 2017a), and B-GAN (Zhu et al., 2017b).
+
+# 5.1 CORRUPTED IMAGE RECOVERY
+
+We design this task as image completion, i.e., given a masked image as input, our goal is to recover the masked area. Interestingly, we observed that the MNIST dataset, in its original form, does not have a multimodal behaviour, i.e., a fraction of the input image only maps to a single output. Therefore, we modify the training data as follows: first, we overlap the top half of an input image with the top half of another randomly sampled image. We carry out this corruption for $20\%$ of the training data. Corrupted samples are not fixed across epochs. Then, we apply a random-sized mask to the top half, and ask the network to predict the missing pixels. We choose two competitive baselines here: our network with the $L_{1}$ loss and CE. Fig. 4 illustrates the predictions. As shown, our model converges to the most probable non-corrupted mode without any ambiguity, while the other baselines give sub-optimal results. In the next experiment, we add a small white box to the top part of the ground-truth images at
+
+| Method | User study (STL) | User study (ImageNet) | Turing test (ImageNet) |
+| --- | --- | --- | --- |
+| Iizuka et al. | 21.89 | 32.28 | - |
+| Chroma | 32.40 | 31.67 | - |
+| Ours | 45.71 | 36.05 | 31.66 |
+
+Table 1: Colorization: Psychophysical study and Turing test results. All performances are in %.
+
+| Method | LPIP ↓ (STL) | PieAPP ↓ (STL) | SSIM ↑ (STL) | PSNR ↑ (STL) | LPIP ↓ (ImageNet) | PieAPP ↓ (ImageNet) | SSIM ↑ (ImageNet) | PSNR ↑ (ImageNet) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Iizuka et al. | 0.18 | 2.37 | 0.81 | 24.30 | 0.17 | 2.47 | 0.87 | 18.43 |
+| P2P | 1.21 | 2.69 | 0.73 | 17.80 | 2.01 | 2.80 | 0.87 | 18.43 |
+| CIC | 0.18 | 2.81 | 0.71 | 22.04 | 0.19 | 2.56 | 0.71 | 19.11 |
+| Chroma | 0.16 | 2.06 | 0.91 | 25.57 | 0.16 | 2.13 | 0.90 | 23.33 |
+| Ours | 0.12 | 1.47 | 0.95 | 27.03 | 0.16 | 2.04 | 0.92 | 24.51 |
+| Ours (w/o ρ) | 0.16 | 1.90 | 0.89 | 25.02 | 0.20 | 2.11 | 0.88 | 23.21 |
+
+Table 2: Colorization: Quantitative analysis of our method against the state-of-the-art. Ours performs better on a variety of metrics.
+
+| Method | LPIP ↓ (10%) | PieAPP ↓ (10%) | PSNR ↑ (10%) | SSIM ↑ (10%) | LPIP ↓ (15%) | PieAPP ↓ (15%) | PSNR ↑ (15%) | SSIM ↑ (15%) | LPIP ↓ (25%) | PieAPP ↓ (25%) | PSNR ↑ (25%) | SSIM ↑ (25%) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| DSGAN | 0.101 | 1.577 | 20.13 | 0.67 | 0.189 | 2.970 | 18.45 | 0.55 | 0.213 | 3.54 | 16.44 | 0.49 |
+| PN | 0.045 | 0.639 | 27.11 | 0.88 | 0.084 | 0.680 | 20.50 | 0.71 | 0.147 | 0.764 | 19.41 | 0.63 |
+| CE | 0.092 | 1.134 | 22.34 | 0.71 | 0.134 | 2.134 | 19.11 | 0.63 | 0.189 | 2.717 | 17.44 | 0.51 |
+| P2P | 0.074 | 0.942 | 22.33 | 0.79 | 0.101 | 1.971 | 19.34 | 0.70 | 0.185 | 2.378 | 17.81 | 0.57 |
+| CA | 0.048 | 0.731 | 26.45 | 0.83 | 0.091 | 0.933 | 20.12 | 0.72 | 0.166 | 0.822 | 21.43 | 0.72 |
+| RFR | 0.051 | 0.743 | 29.31 | 0.85 | 0.097 | 1.033 | 19.22 | 0.70 | 0.171 | 1.127 | 18.42 | 0.61 |
+| Ours (w/o ρ) | 0.053 | 0.799 | 27.77 | 0.83 | 0.085 | 0.844 | 23.22 | 0.76 | 0.141 | 0.812 | 22.31 | 0.74 |
+| Ours | 0.051 | 0.727 | 27.83 | 0.89 | 0.080 | 0.740 | 26.43 | 0.80 | 0.129 | 0.760 | 24.16 | 0.77 |
+
+Table 3: Image completion: Quantitative analysis of our method against state-of-the-art on a variety of metrics.
+
+different rates. At inference, our model was able to converge to both modes (Fig. 5), depending on the initial position of $z$ , as the probability of the alternate mode reaches 0.3.
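The corruption procedure of Sec. 5.1 can be sketched in numpy as below (the exact mask-size distribution is an assumption; the paper only specifies random-sized masks over the top half):

```python
import numpy as np

def corrupt_batch(images, frac=0.2, mask_max=10, seed=0):
    """Overlap the top half of a `frac` fraction of images with the top half of
    another randomly sampled image, then zero out a random-sized box in the top
    half of every image. Returns (masked_input, target)."""
    rng = np.random.default_rng(seed)
    n, h, w = images.shape
    target = images.copy()
    idx = rng.choice(n, size=int(frac * n), replace=False)
    partners = rng.integers(0, n, size=idx.size)
    target[idx, : h // 2] = images[partners, : h // 2]   # top-half overlap
    masked = target.copy()
    for i in range(n):
        mh, mw = rng.integers(2, mask_max, size=2)       # assumed size range
        r = rng.integers(0, h // 2 - mh + 1)             # mask stays in top half
        c = rng.integers(0, w - mw + 1)
        masked[i, r : r + mh, c : c + mw] = 0.0
    return masked, target
```

Resampling `idx` and `partners` every epoch matches the statement that corrupted samples are not fixed across epochs.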
+
+# 5.2 AUTOMATIC IMAGE COLORIZATION
+
+Deep models have tackled this problem using semantic priors (Iizuka et al., 2016; Vitoria et al., 2020), adversarial and $L_{1}$ losses (Isola et al., 2017; Zhu et al., 2017a; Lee et al., 2019), or by conversion to a discrete form through binning of color values (Zhang et al., 2016). Although these methods provide compelling results, several inherent limitations exist: (a) use of semantic priors results in complex models, (b) adversarial loss suffers from drawbacks (see Sec. 4), and (c) discretization reduces the precision. In contrast, we achieve better results using a simpler model.
+
+The input and the output of the network are $l$ and $(a, b)$ planes respectively (LAB color space). However, since the color distributions of $a$ and $b$ spaces are highly imbalanced over a natural dataset (Zhang et al., 2016), we add another constraint to the cost function $E$ to push the predicted $a$ and $b$ colors towards a uniform distribution: $E = \| a_{gt} - a\| + \| b_{gt} - b\| + \lambda (loss_{kl,a} + loss_{kl,b})$ , where $loss_{kl,\cdot} = \mathrm{KL}(\cdot ||u(0,1))$ . Here, $\mathrm{KL}(\cdot ||\cdot)$ is the KL divergence and $u(0,1)$ is a uniform distribution (see App. 3.3). Fig. 7 and Table 2 depict our qualitative and quantitative results, respectively. We demonstrate the superior performance of our method against four metrics: LPIP, PieAPP, SSIM and PSNR (App. 3.2). Fig. 10 depicts examples of multimodality captured by our model (more examples in App. 3.4). Fig. 6 shows colorization behaviour as the $z$ converges during inference.
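The color-balance term can be illustrated as follows. This is a numpy sketch for inspection only: the hard histogram here is non-differentiable (training would need a soft histogram), and the bin count and $\lambda$ are assumptions, not the paper's settings.

```python
import numpy as np

def kl_to_uniform(values, bins=32, lo=-1.0, hi=1.0, eps=1e-8):
    """KL(p || u): empirical histogram of predicted color values vs. uniform."""
    counts, _ = np.histogram(values, bins=bins, range=(lo, hi))
    p = counts / counts.sum() + eps
    return float(np.sum(p * np.log(p * bins)))   # log(p / (1/bins))

def colorization_cost(a_gt, a, b_gt, b, lam=0.1):
    """E = ||a_gt - a|| + ||b_gt - b|| + lam * (KL_a + KL_b)."""
    rec = np.abs(a_gt - a).mean() + np.abs(b_gt - b).mean()
    return rec + lam * (kl_to_uniform(a) + kl_to_uniform(b))
```

A concentrated color prediction incurs a larger KL term than a balanced one, which is the push toward a uniform color distribution described above.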
+
+User study: We also conduct two user studies to further validate the quality of generated samples (Table 1). a) In the PSYCHOPHYSICAL STUDY, we present volunteers with batches of 3 images, each generated with a different method. A batch is displayed for 5 secs, and the user has to pick the most realistic image. After 5 secs, the next image batch is displayed. b) We conduct a TURING TEST to validate our output quality against the ground-truth, following the setting proposed by Zhang et al. (2016). The volunteers are presented with a series of paired images (ground-truth and our output). The images are visible for 1 sec, and then the user has unlimited time to pick the real image.
+
+
+Figure 4: Performance with $20\%$ corrupted data. Our model demonstrates better convergence compared to $L_{1}$ loss and a similar capacity GAN (Pathak et al., 2016).
+Figure 5: With $>30\%$
+
+
+
+
+Figure 6: The prediction quality increases as $z$ traverses to an optimal position at inference.
+
+
+Figure 7: Qualitative comparison against the state-of-the-art on ImageNet (left 5 columns) and STL (right 5 columns) datasets. Our model generally produces more vibrant and balanced color distributions.
+
+
+Figure 8: Image completion on Celeb-HQ (left) and Facade (right) datasets. We used fixed center masks and random irregular masks (Liu et al., 2018) for Celeb-HQ and Facades datasets, respectively.
+
+
+Figure 9: Qualitative comparison for image completion with $25\%$ missing data (models trained with random sized square masks).
+
+
+Figure 10: Multiple colorization modes predicted by our model for a single input. (Best viewed in color).
+Figure 11: Multi-modality of our predictions on Celeb-HQ dataset. (Best viewed with zoom)
+
+
+Figure 12: Translation from hand-bag sketches to images.
+
+
+Figure 13: Translation from shoe sketches to images.
+
+
+Figure 14: Map to aerial image translation. From left: GT, Input and Output. Also see App. 5.2.
+
+
+Figure 15: Diversity: Quantitative comparisons.
+Figure 16: Translation from facial landmarks to faces.
+
+
+Figure 17: Translation from surface-normals to pet faces.
+
+
+
+# 5.3 IMAGE COMPLETION
+
+In this case, we show that our generic model outperforms a similar capacity GAN (CE) as well as task-specific GANs. In contrast to task-specific models, we do not use any domain-specific modifications to make our outputs perceptually pleasing. We observe that with random irregular and fixed-sized masks, all the models perform well, and we were not able to visually observe a considerable difference (Fig. 8, see App. 3.11 for more results). Therefore, we presented the models with a more challenging task: train with random-sized square-shaped masks and evaluate the performance against masks of varying sizes. Fig. 9 illustrates qualitative results of the models with $25\%$ masked data. As evident, our model recovers details more accurately compared to the state-of-the-art. Notably, all models produce comparable results when trained with a fixed-sized center mask, but find the varying-mask setting more challenging. Table 3 includes a quantitative comparison. Observe that in the case of smaller-sized masks, PN performs slightly better than ours, but worse otherwise. We also evaluate the learned features of the models against a downstream classification task (Table 5). First, we train all the models on Facades (Tyleček & Šára, 2013) against random masks, then apply the trained models on CIFAR10 (Krizhevsky et al., 2009) to extract bottleneck features, and finally pass them through an FC layer for classification (App. 3.7). We compare PN and ours against an oracle (AlexNet features pre-trained on ImageNet) and show that our model performs closer to the oracle.
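The downstream protocol (frozen bottleneck features, then a single FC layer) can be sketched with a least-squares linear probe. The features below are synthetic stand-ins for bottleneck activations, and the closed-form probe is a lightweight substitute for training the FC layer with SGD:

```python
import numpy as np

def linear_probe(f_train, y_train, f_test, y_test, n_classes):
    """Fit a linear classifier on frozen features via least squares and
    report test accuracy (stand-in for the FC layer of App. 3.7)."""
    targets = np.eye(n_classes)[y_train]              # one-hot labels
    W, *_ = np.linalg.lstsq(f_train, targets, rcond=None)
    acc = ((f_test @ W).argmax(axis=1) == y_test).mean()
    return float(acc)

# Synthetic "bottleneck features": one Gaussian cluster per class (assumption).
rng = np.random.default_rng(0)
centers = 3 * rng.normal(size=(10, 64))
labels = rng.integers(0, 10, size=2000)
feats = centers[labels] + rng.normal(size=(2000, 64))
```

The better the pretext task shapes the bottleneck, the more linearly separable these features are, which is what the accuracies in Table 5 compare.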
+
+
+Figure 18: Convergence on image completion (Paris view). Our model exhibits rapid and stable convergence compared to state-of-the-art (PN, CE, P2P, CA).
+
+| Method | M10 | M40 |
+| --- | --- | --- |
+| Sharma et al. (2016) | 80.5% | 75.5% |
+| Han et al. (2019) | 92.2% | 90.2% |
+| Achlioptas et al. (2017) | 95.3% | 85.7% |
+| Yang et al. (2018) | 94.4% | 88.4% |
+| Sauder & Sievers (2019) | 94.5% | 90.6% |
+| Ramasinghe et al. (2019c) | 93.1% | - |
+| Khan et al. (2019) | 92.2% | - |
+| Ours | 92.4% | 90.9% |
+
+Table 4: Downstream 3D object classification results on ModelNet10 and ModelNet40 using features learned in an unsupervised manner. All results in % accuracy.
+
+| Method | Pretext | Acc. (%) |
+| --- | --- | --- |
+| ResNet* | ImageNet Cls. | 74.2 |
+| PN | Im. Completion | 40.3 |
+| Ours | Im. Completion | 62.5 |
+
+Table 5: Comparison on downstream task (CIFAR10cls). (*) denotes the oracle case.
+
+| Method | M10 | M40 |
+| --- | --- | --- |
+| CE | 10.3 | 4.6 |
+| cVAE | 8.7 | 4.2 |
+| Ours | 84.2 | 79.4 |
+
+Table 6: Reconstruction mAP of 3D spectral denoising.
+
+| Model | CE | PN | Chroma | CIC | P2P | Iizuka et al. | RFR | Ours |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| FLOPS ($\times 10^{9}$) | 0.634 | 0.946 | 1.275 | 52.839 | 0.732 | 14.082 | 25.64 | 0.638 |
+
+Table 7: Model complexity comparison.
+
+# 5.3.1 DIVERSITY AND OTHER COMPELLING ATTRIBUTES
+
+We also experiment on a diverse set of image translation tasks to demonstrate our generalizability. Fig. 12, 13, 14, 16 and 17 illustrate the qualitative results of sketch-to-handbag, sketch-to-shoes, map-to-aerial, landmarks-to-faces and surface-normals-to-pets tasks. Fig. 10, 11, 12, 13, 16 and 17 show the ability of our model to converge to multiple modes, depending on the $z$ initialization. Fig. 15 demonstrates the quantitative comparison against other models. See App. 3.4 for further details on experiments. Another appealing feature of our model is its strong convergence properties irrespective of the architecture, and hence its scalability to different input sizes. Fig. 19 shows examples from image completion and colorization for varying input sizes. We add layers to the architecture to be trained on increasingly high-resolution inputs, where our model was able to converge to optimal modes at each scale (App. 3.8). Fig. 18 demonstrates our faster and more stable convergence. Table 7 compares the number of FLOPS required by the models for a batch size of 10.
+
+# 5.4 DENOISING OF 3D OBJECTS IN SPECTRAL SPACE
+
+Spectral moments of 3D objects provide a compact representation and help build light-weight networks (Ramasinghe et al., 2020; 2019b; Cohen et al., 2018; Esteves et al., 2018). However, spectral information of 3D objects has not been used before for self-supervised learning, a key reason being the difficulty of learning representations in the spectral domain due to the complex structure and unbounded spectral coefficients. Here, we present an efficient pretext task that is conducted in the spectral domain: denoising 3D spectral maps. We use two types of spectral spaces: spherical harmonics and Zernike polynomials (App. 4). We first convert the 3D point clouds to spherical harmonic coefficients, arrange the values as a 2D map, and mask or add noise to a portion of the map (App. 3.12). The goal is to recover the original spectral map. Fig. 20 and Table 6 depict our qualitative and quantitative results. We perform favorably against other methods. To evaluate the learned features, we use Zernike polynomials, as they are more discriminative compared to spherical harmonics (Ramasinghe et al., 2019a). We first train the network on the 55k ShapeNet objects by denoising spectral maps, and then apply the trained network on ModelNet10 & 40. The features are then extracted from the bottleneck (similar to Sec. 5.3) and fed to an FC classifier (Table 4). We achieve state-of-the-art results on ModelNet40 with a simple pretext task.
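The pretext-pair construction (coefficients arranged as a 2D map, then a portion masked) can be sketched as below. Computing the actual spherical-harmonic coefficients from point clouds follows App. 4 and is omitted here, so `coeffs` is a placeholder dictionary and the mask fraction is an assumption:

```python
import numpy as np

def spectral_map(coeffs, l_max):
    """Arrange coefficients c_{l,m} into a (l_max+1) x (2*l_max+1) map,
    with column l_max holding m = 0 (an assumed layout)."""
    M = np.zeros((l_max + 1, 2 * l_max + 1))
    for (l, m), c in coeffs.items():
        M[l, m + l_max] = c
    return M

def make_pretext_pair(M, mask_frac=0.25, seed=0):
    """Zero a contiguous block of the map: (noisy input, clean target)."""
    rng = np.random.default_rng(seed)
    h, w = M.shape
    mh = max(1, int(mask_frac * h))
    mw = max(1, int(mask_frac * w))
    r = rng.integers(0, h - mh + 1)
    c = rng.integers(0, w - mw + 1)
    X = M.copy()
    X[r : r + mh, c : c + mw] = 0.0
    return X, M
```

The denoising network is then trained to map `X` back to `M`, exactly as in ordinary image completion but on the coefficient grid.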
+
+
+
+
+Figure 19: Scalability: we subsequently add layers to the architecture to be trained on increasingly high-resolution inputs (input, 32×32, 64×64, 128×128, 256×256).
+
+Figure 20: Qualitative comparison of 3D spectral denoising (GT, CE, Ours). The results are converted to the spatial domain for a clear visualization.
+
+# 6 CONCLUSION
+
+Conditional generation in multimodal domains is a challenging task due to its ill-posed nature. In this paper, we propose a novel generative framework that minimizes a family of cost functions during training. Further, it observes the convergence patterns of latent variables during training and applies this knowledge to traverse to multiple output modes during inference. Despite using a simple and generic architecture, we show impressive results on a diverse set of tasks. The proposed approach demonstrates faster convergence, scalability, generalizability, diversity, and superior representation learning capability for downstream tasks.
+
+# REFERENCES
+
+Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas. Representation learning and adversarial generation of 3d point clouds. arXiv preprint arXiv:1707.02392, 2017. 8
+Sanjeev Arora and Yi Zhang. Do gans actually learn the distribution? an empirical study. arXiv preprint arXiv:1706.08224, 2017. 2
+Sanjeev Arora, Andrej Risteski, and Yi Zhang. Do GANs learn the distribution? some theory and empirics. In International Conference on Learning Representations, 2018. 4
+Aayush Bansal, Xinlei Chen, Bryan Russell, Abhinav Gupta, and Deva Ramanan. Pixelnet: Representation of the pixels, by the pixels, and for the pixels. arXiv preprint arXiv:1702.06506, 2017a. 21
+Aayush Bansal, Yaser Sheikh, and Deva Ramanan. Pixelnn: Example-based image synthesis. arXiv preprint arXiv:1708.05349, 2017b. 21
+Jianmin Bao, Dong Chen, Fang Wen, Houqiang Li, and Gang Hua. Cvae-gan: Fine-grained image generation through asymmetric training. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017. 15
+Samuel A Barnett. Convergence problems with generative adversarial networks (gans). arXiv preprint arXiv:1806.11382, 2018. 2, 4
+Simyung Chang, SeongUk Park, John Yang, and Nojun Kwak. Sym-parameterized dynamic inference for mixed-domain image translation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4803-4811, 2019. 1, 16
+Qifeng Chen and Vladlen Koltun. Photographic image synthesis with cascaded refinement networks. In Proceedings of the IEEE international conference on computer vision, pp. 1511-1520, 2017a. 5
+Qifeng Chen and Vladlen Koltun. Photographic image synthesis with cascaded refinement networks. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017b. 15
+Casey Chu, Kentaro Minami, and Kenji Fukumizu. Smoothness and stability in gans. arXiv preprint arXiv:2002.04185, 2020. 2, 4
+Taco S Cohen, Mario Geiger, Jonas Kohler, and Max Welling. Spherical cnns. arXiv preprint arXiv:1801.10130, 2018. 9
+Aditya Deshpande, Jiajun Lu, Mao-Chuang Yeh, Min Jin Chong, and David Forsyth. Learning diverse image colorization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. 15
+James R Driscoll and Dennis M Healy. Computing fourier transforms and convolutions on the 2-sphere. Advances in applied mathematics, 15(2):202-250, 1994. 28
+Yilun Du and Igor Mordatch. Implicit generation and modeling with energy based models. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 3608-3618. Curran Associates, Inc., 2019. 15
+Carlos Esteves, Christine Allen-Blanchette, Ameesh Makadia, and Kostas Daniilidis. Learning so (3) equivariant representations with spherical cnns. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 52–68, 2018. 9
+Ethan Fetaya, Jörn-Henrik Jacobsen, Will Grathwohl, and Richard Zemel. Understanding the limitations of conditional generative models. In International Conference on Learning Representations, 2020. 1
+Arnab Ghosh, Viveka Kulharia, Vinay P. Namboodiri, Philip H.S. Torr, and Puneet K. Dokania. Multi-agent diverse generative adversarial networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 15
+
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pp. 2672-2680. 2014a. 15
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014b. 4, 17
+Will Grathwohl, Kuan-Chieh Wang, Joern-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. Your classifier is secretly an energy based model and you should treat it like one. In International Conference on Learning Representations, 2020. 2
+Zhizhong Han, Mingyang Shang, Yu-Shen Liu, and Matthias Zwicker. View inter-prediction gan: Unsupervised representation learning for 3d shapes by learning global shape memories to support local view predictions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 8376-8384, 2019. 8
+Yang He, Bernt Schiele, and Mario Fritz. Diverse conditional image generation by stochastic regression with latent drop-out codes. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 406-421, 2018. 5
+Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In The European Conference on Computer Vision (ECCV), September 2018. 15
+Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Let there be color! joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification. ACM Transactions on Graphics (ToG), 35(4):1-11, 2016. 5, 6, 15, 28
+Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125-1134, 2017. 1, 5, 6, 15, 21
+Longlong Jing and Yingli Tian. Self-supervised visual feature learning with deep neural networks: A survey. IEEE transactions on pattern analysis and machine intelligence, 2020. 2
+Salman H Khan, Yulan Guo, Munawar Hayat, and Nick Barnes. Unsupervised primitive discovery for improved 3d generative modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9739-9748, 2019. 8
+Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. 1
+Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. CoRR, abs/1312.6114, 2014. 15
+Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. On convergence and stability of gans. arXiv preprint arXiv:1705.07215, 2017. 2, 4
+Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Cifar-10 (canadian institute for advanced research). 2009. URL http://www.cs.toronto.edu/~kriz/cifar.html. 8
+Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Diverse image-to-image translation via disentangled representations. In The European Conference on Computer Vision (ECCV), September 2018. 15
+Soochan Lee, Junsoo Ha, and Gunhee Kim. Harmonizing maximum likelihood with GANs for multimodal conditional generation. In International Conference on Learning Representations, 2019. 1, 5, 6, 15
+Jingyuan Li, Ning Wang, Lefei Zhang, Bo Du, and Dacheng Tao. Recurrent feature reasoning for image inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7760-7768, 2020. 5
+
+Guilin Liu, Fitsum A Reda, Kevin J Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 85-100, 2018. 7
+Antonio Loquercio, Mattia Segù, and Davide Scaramuzza. A general framework for uncertainty estimation in deep learning. arXiv preprint arXiv:1907.06890, 2019. 19
+Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016. 1, 15
+Lars Maaløe, Marco Fraccaro, Valentin Lievin, and Ole Winther. Biva: A very deep hierarchy of latent variables for generative modeling. In Advances in neural information processing systems, pp. 6548-6558, 2019. 15
+Qi Mao, Hsin-Ying Lee, Hung-Yu Tseng, Siwei Ma, and Ming-Hsuan Yang. Mode seeking generative adversarial networks for diverse image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1429-1437, 2019. 1
+Michaël Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. CoRR, abs/1511.05440, 2015. 15
+Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. ArXiv, abs/1411.1784, 2014. 15
+Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don't know? International Conference on Learning Representations, 2019. 1
+Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Cats and dogs. In IEEE Conference on Computer Vision and Pattern Recognition, 2012. 21
+Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei Efros. Context encoders: Feature learning by inpainting. 2016. 1, 5, 6, 15
+Nathanael Perraudin, Michael Defferrard, Tomasz Kacprzak, and Raphael Sgier. Deepsphere: Efficient spherical convolutional neural network with healpix sampling for cosmological applications. *Astronomy and Computing*, 27:130-146, 2019. 31
+Ekta Prashnani, Hong Cai, Yasamin Mostofi, and Pradeep Sen. Pieapp: Perceptual image-error assessment through pairwise preference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1808-1817, 2018. 21, 23
+Sameera Ramasinghe, Salman Khan, and Nick Barnes. Volumetric convolution: Automatic representation learning in unit ball. arXiv preprint arXiv:1901.00616, 2019a. 9, 31
+Sameera Ramasinghe, Salman Khan, Nick Barnes, and Stephen Gould. Representation learning on unit ball with 3d roto-translational equivariance. International Journal of Computer Vision, pp. 1-23, 2019b. 9
+Sameera Ramasinghe, Salman Khan, Nick Barnes, and Stephen Gould. Spectral-gans for high-resolution 3d point-cloud generation. arXiv preprint arXiv:1912.01800, 2019c. 8, 31
+Sameera Ramasinghe, Salman Khan, Nick Barnes, and Stephen Gould. Blended convolution and synthesis for efficient discrimination of 3d shapes. In The IEEE Winter Conference on Applications of Computer Vision, pp. 21-31, 2020. 9
+Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015. 1, 15
+Herbert E. Robbins. A stochastic approximation method. Annals of Mathematical Statistics, 22: 400-407, 1951. 17
+Min-cheol Sagong, Yong-goo Shin, Seung-wook Kim, Seung Park, and Sung-jea Ko. Pepsi : Fast image inpainting with parallel decoding network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 15
+
+Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in neural information processing systems, pp. 2234-2242, 2016. 23
+Jonathan Sauder and Bjarne Sievers. Self-supervised deep learning on point clouds by reconstructing space. In Advances in Neural Information Processing Systems, pp. 12942-12952, 2019. 8
+Abhishek Sharma, Oliver Grau, and Mario Fritz. Vconv-dae: Deep volumetric shape learning without object labels. In European Conference on Computer Vision, pp. 236-250. Springer, 2016. 8
+Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems 28. 2015. 15
+Hoang Thanh-Tung, Truyen Tran, and Svetha Venkatesh. Improving generalization and stability of generative adversarial networks. arXiv preprint arXiv:1902.03984, 2019. 2
+Radim Tyleček and Radim Šára. Spatial pattern templates for recognition of objects with regular structure. In Proc. GCPR, Saarbrucken, Germany, 2013. 8
+Patricia Vitoria, Lara Raad, and Coloma Ballester. Chromagan: Adversarial picture colorization with semantic class distribution. In The IEEE Winter Conference on Applications of Computer Vision, pp. 2445-2454, 2020. 2, 4, 5, 6, 15, 28
+Yi Wang, Xin Tao, Xiaojuan Qi, Xiaoyong Shen, and Jiaya Jia. Image inpainting via generative multi-column convolutional neural networks. In Advances in Neural Information Processing Systems 31. 2018. 15
+Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, volume 2, pp. 1398-1402. IEEE, 2003. 21
+Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 21
+You Xie, Erik Franz, Mengyu Chu, and Nils Thuerey. tempoGAN: A Temporally Coherent, Volumetric GAN for Super-resolution Fluid Flow. ACM Transactions on Graphics (TOG), 37(4):95, 2018. 15
+Dingdong Yang, Seunghoon Hong, Yunseok Jang, Tianchen Zhao, and Honglak Lee. Diversity-sensitive conditional generative adversarial networks. arXiv preprint arXiv:1901.09024, 2019. 5
+Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian. Foldingnet: Point cloud auto-encoder via deep grid deformation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 206-215, 2018. 8
+Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S. Huang. Generative image inpainting with contextual attention. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018a. 15
+Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Generative image inpainting with contextual attention. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5505-5514, 2018b. 5
+Yanhong Zeng, Jianlong Fu, Hongyang Chao, and Baining Guo. Learning pyramid-context encoder network for high-quality image inpainting. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1486-1494, 2019. 2, 4, 5, 15, 23
+Lin Zhang, Lei Zhang, Xuanqin Mou, and David Zhang. Fsim: A feature similarity index for image quality assessment. IEEE transactions on Image Processing, 20(8):2378-2386, 2011. 21
+
+Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European conference on computer vision, pp. 649-666. Springer, 2016. 1, 5, 6, 15, 21, 28
+Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018. 21, 23
+Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. 21
+Xian Zhang, Xin Wang, Bin Kong, Youbing Yin, Qi Song, Siwei Lyu, Jiancheng Lv, Canghong Shi, and Xiaojie Li. Domain embedded multi-model generative adversarial networks for image-based face inpainting. ArXiv, abs/2002.02909, 2020. 15
+Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In Proceedings of European Conference on Computer Vision (ECCV), 2016. 15
+Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. In Advances in Neural Information Processing Systems 30. 2017a. 6, 15
+Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. In Advances in neural information processing systems, pp. 465-476, 2017b. 5
\ No newline at end of file
diff --git a/conditionalgenerativemodelingvialearningthelatentspace/images.zip b/conditionalgenerativemodelingvialearningthelatentspace/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1f615d0c241737db6e146177cbd813b91d808d90
--- /dev/null
+++ b/conditionalgenerativemodelingvialearningthelatentspace/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a827a0c16dce2f09036e8ca3c15dbd489c145c5c311a8fafb3bcea43578d2fd0
+size 788907
diff --git a/conditionalgenerativemodelingvialearningthelatentspace/layout.json b/conditionalgenerativemodelingvialearningthelatentspace/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2a26a43878edd6ae6990bf7e37e144d7ddfe6c87
--- /dev/null
+++ b/conditionalgenerativemodelingvialearningthelatentspace/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba658c734f5bd874447639ec509d606afd1ba70b3a502a574286d675c3865845
+size 602679
diff --git a/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/bd547187-1620-477b-89cf-4d7eba79901a_content_list.json b/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/bd547187-1620-477b-89cf-4d7eba79901a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ff7a6ce13726f9ac35d9857f2f4a9804d7fcd6fe
--- /dev/null
+++ b/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/bd547187-1620-477b-89cf-4d7eba79901a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f2c0b45c0b158b679cb2561f75602f10caf96d6287bfe793196e4579db8e712
+size 133175
diff --git a/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/bd547187-1620-477b-89cf-4d7eba79901a_model.json b/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/bd547187-1620-477b-89cf-4d7eba79901a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5f2ae1a8695632734b975cfe44cc12fecd6db0e7
--- /dev/null
+++ b/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/bd547187-1620-477b-89cf-4d7eba79901a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:863c57d3345b2b6950982242be150ab684b95157823eb0fc9494ef3ebfe239fa
+size 162063
diff --git a/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/bd547187-1620-477b-89cf-4d7eba79901a_origin.pdf b/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/bd547187-1620-477b-89cf-4d7eba79901a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5ad72a92928c3658bd51a637e1b1856ffae04ed1
--- /dev/null
+++ b/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/bd547187-1620-477b-89cf-4d7eba79901a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36dad8e3805f320e53916cf8d10153426c714a432a5bd100e96b16a658fa2b3f
+size 1055217
diff --git a/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/full.md b/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6fd12013de0ce771d34998cb9cfa805839973ea6
--- /dev/null
+++ b/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/full.md
@@ -0,0 +1,441 @@
+# CONDITIONALLY ADAPTIVE MULTI-TASK LEARNING: IMPROVING TRANSFER LEARNING IN NLP USING FEWER PARAMETERS & LESS DATA
+
+Jonathan Pilault $^{1*}$ , Amine Elhattami $^{1*}$ , Christopher Pal $^{1,2,3}$
+
+$^1$Polytechnique Montreal & Mila, $^2$Element AI, $^3$Canada CIFAR AI Chair {jonathan.pilault, amine.elhattami, christopher.pal}@polymtl.ca
+
+# ABSTRACT
+
+Multi-Task Learning (MTL) networks have emerged as a promising method for transferring learned knowledge across different tasks. However, MTL must deal with challenges such as: overfitting to low resource tasks, catastrophic forgetting, and negative task transfer, or learning interference. Often, in Natural Language Processing (NLP), a separate model per task is needed to obtain the best performance. However, many fine-tuning approaches are both parameter inefficient, i.e., potentially involving one new model per task, and highly susceptible to losing knowledge acquired during pretraining. We propose a novel Transformer based Adapter consisting of a new conditional attention mechanism as well as a set of task-conditioned modules that facilitate weight sharing. Through this construction, we achieve more efficient parameter sharing and mitigate forgetting by keeping half of the weights of a pretrained model fixed. We also use a new multi-task data sampling strategy to mitigate the negative effects of data imbalance across tasks. Using this approach, we are able to surpass single task fine-tuning methods while being parameter and data efficient (using around $66\%$ of the data for weight updates). Compared to other BERT Large methods on GLUE, our 8-task model surpasses other Adapter methods by $2.8\%$ and our 24-task model outperforms by $0.7 - 1.0\%$ models that use MTL and single task fine-tuning. We show that a larger variant of our single multi-task model approach performs competitively across 26 NLP tasks and yields state-of-the-art results on a number of test and development sets. Our code is publicly available at https://github.com/CAMTL/CA-MTL.
+
+# 1 INTRODUCTION
+
+The introduction of deep, contextualized Masked Language Models (MLM) $^{1}$ trained on massive amounts of unlabeled data has led to significant advances across many different Natural Language Processing (NLP) tasks (Peters et al., 2018; Liu et al., 2019a). Much of these recent advances can be attributed to the now well-known BERT approach (Devlin et al., 2018). Substantial improvements over previous state-of-the-art results on the GLUE benchmark (Wang et al., 2018) have been obtained by multiple groups using BERT models with task specific fine-tuning. The "BERT-variant + fine-tuning" formula has continued to improve over time with newer work constantly pushing the state-of-the-art forward on the GLUE benchmark. The use of a single neural architecture for multiple NLP tasks has shown promise long before the current wave of BERT inspired methods (Collobert & Weston, 2008) and recent work has argued that autoregressive language models (ARLMs) trained on large-scale datasets – such as the GPT family of models (Radford et al., 2018), are in practice multi-task learners (Brown et al., 2020). However, even with MLMs and ARLMs trained for multi-tasking, single task fine-tuning is usually also employed to achieve state-of-the-art performance on specific tasks of interest. Typically this fine-tuning process may entail: creating a task-specific fine-tuned model (Devlin et al., 2018), training specialized model components for task-specific predictions (Houlsby et al., 2019) or fine-tuning a single multi-task architecture (Liu et al., 2019b).
+
+Single-task fine-tuning of all pretrained model parameters may have other issues. Recent analyses of such MLMs have shed light on the linguistic knowledge that is captured in their hidden states and attention maps (Clark et al., 2019b; Tenney et al., 2019a; Merchant et al., 2020). In particular, BERT's middle Transformer (Vaswani et al., 2017) layers are typically the most transferable to a downstream task (Liu et al., 2019a). The model proxies the steps of the traditional NLP pipeline in a localizable way (Tenney et al., 2019a), with basic syntactic information appearing earlier in the network and high-level semantic information appearing in higher layers. Since pretraining is usually done on large-scale datasets, it may be useful, for a variety of downstream tasks, to conserve that knowledge. However, single task fine-tuning
+
+
+Figure 1: CA-MTL base architecture with our uncertainty-based sampling algorithm. Each task has its own decoder. The input embedding layer and the lower Transformer layers are frozen. The upper Transformer layer and Conditional Alignment module are modulated with the task embedding.
+
+causes catastrophic forgetting of the knowledge learned during MLM (Howard & Ruder, 2018). To preserve knowledge, freezing part of a pretrained network and using Adapters for new tasks have shown promising results (Houlsby et al., 2019).
+
+Inspired by the human ability to transfer learned knowledge from one task to another new task, Multi-Task Learning (MTL) in a general sense (Caruana, 1997; Rajpurkar et al., 2016b; Ruder, 2017) has been applied in many fields outside of NLP. Caruana (1993) showed that a model trained in a multi-task manner can take advantage of the inductive transfer between tasks, achieving a better generalization performance. MTL has the advantage of computational/storage efficiency (Zhang & Yang, 2017), but training models in a multi-task setting is a balancing act; particularly with datasets that have different: (a) dataset sizes, (b) task difficulty levels, and (c) different types of loss functions. In practice, learning multiple tasks at once is challenging since negative transfer (Wang et al., 2019a), task interference (Wu et al., 2020; Yu et al., 2020) and catastrophic forgetting (Serrà et al., 2018) can lead to worse data efficiency, training stability and generalization compared to single task fine-tuning.
+
+Using Conditionally Adaptive Learning, we seek to improve pretraining knowledge retention and multi-task inductive knowledge transfer. Our contributions are the following:
+
+- A new task conditioned Transformer that adapts and modulates pretrained weights (Section 2.1).
+- A novel way to prioritize tasks with an uncertainty based multi-task data sampling method that helps balance the sampling of tasks to avoid catastrophic forgetting (Section 2.2).
+
+Our Conditionally Adaptive Multi-Task Learning (CA-MTL) approach is illustrated in Figure 1. To the best of our knowledge, our work is the first to explore the use of a latent representation of tasks to modularize and adapt pretrained architectures. Further, we believe our work is also the first to examine uncertainty sampling for large-scale multi-task learning in NLP. We show the efficacy of CA-MTL by: (a) testing on 26 different tasks and (b) presenting state-of-the-art results on a number of test sets as well as superior performance against both single-task and MTL baselines. Moreover, we further demonstrate that our method has advantages over (c) other adapter networks, and (d) other MTL sampling methods. Finally, we provide ablations and separate analysis of the MT-Uncertainty Sampling technique in section 4.1 and of each component of the adapter in 4.2.
+
+# 2 METHODOLOGY
+
+This section is organized according to the two main MTL problems that we will tackle: (1) How to modularize a pretrained network with latent task representations? (2) How to balance different tasks in MTL? We define each task as: $\mathfrak{T}_i\triangleq \{p_i(\mathbf{y}_i|\mathbf{x}_i,\mathbf{z}_i),\mathcal{L}_i,\tilde{p}_i(\mathbf{x}_i)\}$ , where $\mathbf{z}_i$ is task $i$ 's learnable shallow embedding, $\mathcal{L}_i$ is the task loss, and $\tilde{p}_i(\mathbf{x}_i)$ is the empirical distribution of the training data pair $\{\mathbf{x}_i,\mathbf{y}_i\}$ , for $i\in \{1,\dots ,T\}$ and $T$ the number of supervised tasks. The MTL objective is:
+
+$$
+\min_{\phi(\mathbf{z}),\theta_1,\dots,\theta_T} \sum_{i=1}^{T} \mathcal{L}_i\left(f_{\phi(\mathbf{z}_i),\theta_i}(\mathbf{x}_i), \mathbf{y}_i\right) \tag{1}
+$$
+
+where $f$ is the predictor function (includes encoder model and decoder heads), $\phi(\mathbf{z})$ are learnable generated weights conditioned on $\mathbf{z}$ , and $\theta_i$ are task-specific parameters for the output decoder heads. $\mathbf{z}$ is constructed using an embedding lookup table.
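The objective in equation 1 is simply a sum of per-task losses, each computed through a shared conditioned encoder and a task-specific decoder head. The toy sketch below illustrates this structure only; the scalar "encoder", the identity heads, and all names are our own illustrative stand-ins, not the paper's implementation:

```python
def mtl_objective(batches, encoder, heads, task_embeddings, loss_fns):
    """Sum of per-task losses L_i(f_{phi(z_i), theta_i}(x_i), y_i), as in eq. 1."""
    total = 0.0
    for i, (x, y) in batches.items():
        h = encoder(x, task_embeddings[i])    # shared encoder f, conditioned on z_i
        total += loss_fns[i](heads[i](h), y)  # task-specific decoder head theta_i
    return total

# Toy stand-ins: a scalar "encoder" modulated by a scalar "task embedding",
# identity decoder heads, and squared-error losses.
encoder = lambda x, z: x * z
heads = {0: lambda h: h, 1: lambda h: h}
task_embeddings = {0: 1.0, 1: 2.0}
loss_fns = {0: lambda p, y: (p - y) ** 2, 1: lambda p, y: (p - y) ** 2}
batches = {0: (2.0, 4.0), 1: (3.0, 3.0)}

loss = mtl_objective(batches, encoder, heads, task_embeddings, loss_fns)
```

With these stand-ins, task 0 contributes $(2\cdot 1 - 4)^2 = 4$ and task 1 contributes $(3\cdot 2 - 3)^2 = 9$.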
+
+# 2.1 TASK CONDITIONED TRANSFORMER
+
+Our task conditioned Transformer architecture is based on one simple concept. We either add conditional layers or modulate existing pretrained weights using a task representation by extending Feature Wise Linear Modulation (Perez et al., 2018) functions in several ways depending on the Transformer layer. We define our framework below.
+
+Definition 1 (Conditional Weight Transformations). Given a neural network weight matrix $\mathbf{W}$, we compute transformations of the form $\phi(\mathbf{W}|\mathbf{z}_i) = \gamma_i(\mathbf{z}_i)\mathbf{W} + \beta_i(\mathbf{z}_i)$, where $\gamma_i$ and $\beta_i$ are learned functions that transform the weights based on a learned vector embedding $\mathbf{z}_i$ for task $i$.
+
+Definition 2 (Conditionally Adaptive Learning). In our setting, Conditionally Adaptive Learning is the process of learning a set of $\phi$ transformations for the conditionally adaptive modules presented below, along with a set of task embedding vectors $\mathbf{z}_i$ for $T$ tasks, using a multi-task loss (see equation 1).
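A minimal numpy sketch of Definition 1 (all shapes and the scalar-scale choice are our own illustrative assumptions, not the paper's implementation): $\gamma_i$ and $\beta_i$ are learned linear maps of the task embedding $\mathbf{z}_i$ that scale and shift a shared weight matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d_task, d_in, d_out = 8, 4, 4       # task-embedding and weight dimensions (illustrative)

W = rng.normal(size=(d_in, d_out))  # a shared (pretrained) weight matrix
G = rng.normal(size=(d_task, 1))    # projection for gamma_i(z) = z @ G (a scalar scale here)
B = rng.normal(size=(d_task, 1))    # projection for beta_i(z)  = z @ B (a scalar shift)

def phi(W, z):
    """Definition 1: phi(W | z_i) = gamma_i(z_i) * W + beta_i(z_i)."""
    return (z @ G) * W + (z @ B)

z_task = rng.normal(size=(d_task,))  # learnable task embedding z_i
W_task = phi(W, z_task)              # task-conditioned version of the shared weights
```

Because only `G`, `B`, and the task embeddings are trained per task, all tasks share the underlying `W`; a new task only requires a new `z`.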
+
+In the subsections that follow: We introduce a new Transformer Attention Module using block-diagonal Conditional Attention that allows the original query-key based attention to account for task-specific biases (section 2.1.1). We propose a new Conditional Alignment method that aligns the data of diverse tasks and that performs better than its unconditioned and higher capacity predecessor (section 2.1.2). We adapt layer normalization statistics to specific tasks using a new Conditional Layer Normalization module (section 2.1.3). We add a Conditional Bottleneck that facilitates weight sharing and task-specific information flow from lower layers (section 2.1.4). In our experiments we provide an ablation study of these components (Table 1) examining performance in terms of GLUE scores.
+
+# 2.1.1 CONDITIONAL ATTENTION
+
+Given $d$ , the input dimensions, the query $\mathbf{Q}$ , the key $\mathbf{K}$ and the value $\mathbf{V}$ as defined in Vaswani et al. (2017), we redefine the attention operation:
+
+$$
+\operatorname{Attention}(\mathbf{Q}, \mathbf{K}, \mathbf{V}, \mathbf{z}_i) = \operatorname{softmax}\left[M(\mathbf{z}_i) + \frac{\mathbf{Q}\mathbf{K}^T}{\sqrt{d}}\right]\mathbf{V}
+$$
+
+$$
+M(\mathbf{z}_i) = \bigoplus_{n=1}^{N} A_n'(\mathbf{z}_i), \quad A_n'(\mathbf{z}_i) = A_n\,\gamma_i(\mathbf{z}_i) + \beta_i(\mathbf{z}_i)
+$$
+
+
+Figure 2: Conditional Attention Module
+
+where $\bigoplus$ is the direct sum operator (see section A.6), $N$ is the number of block matrices $A_n \in \mathbb{R}^{(L/N) \times (L/N)}$ along the diagonal of the attention matrix, $L$ is the input sequence length, and $M(\mathbf{z}_i) = \mathrm{diag}(A_1', \dots, A_N')$ is a block diagonal conditional matrix. Note that $A_n$ is constructed using $L/N$ trainable and randomly initialized $L/N$ dimensional vectors. While the original attention matrix depends on the hidden states $h$, $M(\mathbf{z}_i)$ is a learnable weight matrix that only depends on the task embedding $\mathbf{z}_i \in \mathbb{R}^d$. $\gamma_i, \beta_i: \mathbb{R}^d \mapsto \mathbb{R}^{L^2/N^2}$ are Feature Wise Linear Modulation (Perez et al., 2018) functions. We also experimented with full-block Conditional Attention $\in \mathbb{R}^{L \times L}$. Not only did it have $N^2$ more parameters than the block-diagonal variant, but it also performed significantly worse on the GLUE development set (see FBA variant in Table 10). It is possible that GLUE tasks derive a certain benefit from the localized attention that is a consequence of $M(\mathbf{z}_i)$. With $M(\mathbf{z}_i)$, each element in a sequence can only attend to other elements in its subsequence of length $L/N$. In our experiments we used $N = d/L$. The full Conditional Attention mechanism used in our experiments is illustrated in Figure 2.
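The block construction above can be sketched in a few lines of numpy. This is only one plausible reading of the formulas (we interpret $A_n\,\gamma_i(\mathbf{z}_i)$ as elementwise FiLM-style modulation, and all shapes are illustrative), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, d = 8, 2, 16        # sequence length, number of diagonal blocks, model dim (illustrative)
blk = L // N              # each block A_n is (L/N) x (L/N)

A = rng.normal(size=(N, blk, blk))           # trainable blocks A_n
Wg = rng.normal(size=(d, blk * blk)) * 0.1   # gamma_i : R^d -> R^{L^2/N^2}
Wb = rng.normal(size=(d, blk * blk)) * 0.1   # beta_i  : R^d -> R^{L^2/N^2}

def conditional_mask(z):
    """M(z_i) = diag(A'_1, ..., A'_N), with A'_n = A_n * gamma_i(z) + beta_i(z)."""
    gamma = (z @ Wg).reshape(blk, blk)
    beta = (z @ Wb).reshape(blk, blk)
    M = np.zeros((L, L))
    for n in range(N):
        s = n * blk
        M[s:s + blk, s:s + blk] = A[n] * gamma + beta   # place modulated block on the diagonal
    return M

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def conditional_attention(Q, K, V, z):
    """softmax[M(z_i) + QK^T / sqrt(d)] V, the redefined attention above."""
    return softmax(conditional_mask(z) + (Q @ K.T) / np.sqrt(d)) @ V

z = rng.normal(size=(d,))
Q, K, V = (rng.normal(size=(L, d)) for _ in range(3))
out = conditional_attention(Q, K, V, z)
```

Note that only the additive bias $M(\mathbf{z}_i)$ is block-diagonal; the query-key term remains dense.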
+
+# 2.1.2 CONDITIONAL ALIGNMENT
+
+Wu et al. (2020) showed that in MTL, having $T$ separate alignment modules $R_1, \dots, R_T$ increases BERTLARGE avg. scores on five GLUE tasks (CoLA, MRPC, QNLI, RTE, SST-2) by $2.35\%$. Inspired by this work, we found that adding a task conditioned alignment layer between the input embedding layer and the first BERT Transformer layer improved multi-task model performance. However, instead of having a separate alignment matrix $R_i$ for each of the $T$ tasks, one alignment matrix $\hat{R}$ is generated as a function of the task embedding $\mathbf{z}_i$. As in Wu et al. (2020), we tested this module on the same five GLUE tasks and with BERTLARGE. Enabling task conditioned weight sharing across covariance alignment modules allows us to outperform BERTLARGE by $3.61\%$, which is $1.26\%$ higher than having $T$ separate alignment matrices. Inserting $\hat{R}$ into BERT yields the following encoder function $\hat{f}$:
+
+$$
+\hat{f} = \sum_{i=1}^{T} g_{\theta_i}\left(E(\mathbf{x}_i)\,\hat{R}(\mathbf{z}_i)\,B\right), \quad \hat{R}(\mathbf{z}_i) = R\,\gamma_i(\mathbf{z}_i) + \beta_i(\mathbf{z}_i) \tag{2}
+$$
+
+where $\mathbf{x}_i\in \mathbb{R}^d$ is the layer input, $g_{\theta_i}$ is the decoder head function for task $i$ with weights $\theta_{i}$ , $E$ the frozen BERT embedding layer, $B$ the BERT Transformer layers and $R$ the linear weight matrix of a single task conditioned alignment matrix. $\gamma_{i},\beta_{i}:\mathbb{R}^{d}\mapsto \mathbb{R}^{d}$ are Feature Wise Linear Modulation functions.
+
+# 2.1.3 CONDITIONAL LAYER NORMALIZATION (CLN)
+
+We extend the Conditional Batch Normalization idea from de Vries et al. (2017) to Layer Normalization (Ba et al., 2016). For task $\mathcal{T}_i$ , $i \in \{1, \dots, T\}$ :
+
+$$
+\mathbf{h}_i = \frac{1}{\sigma} \odot (\mathbf{a}_i - \mu) * \hat{\gamma}_i(\mathbf{z}_i) + \beta_i(\mathbf{z}_i), \quad \hat{\gamma}_i(\mathbf{z}_i) = \boldsymbol{\gamma}'\,\gamma_i(\mathbf{z}_i) + \boldsymbol{\beta}' \tag{3}
+$$
+
+where $\mathbf{h}_i$ is the CLN output vector, $\mathbf{a}_i$ are the preceding layer activations associated with task $i$, and $\mu$ and $\sigma$ are the mean and standard deviation of the summed inputs within each layer, as defined in Ba et al. (2016). Conditional Layer Normalization is initialized with BERT's Layer Normalization affine transformation weight and bias $\boldsymbol{\gamma}'$ and $\boldsymbol{\beta}'$ from the original formulation: $\mathbf{h} = \frac{1}{\sigma} \odot (\mathbf{a} - \mu) * \boldsymbol{\gamma}' + \boldsymbol{\beta}'$. During training, the weight and bias functions $\gamma_i(\cdot)$ and $\beta_i(\cdot)$ are always trained, while the original Layer Normalization weights may be kept fixed. This module was added to account for task specific rescaling of individual training cases. Layer Normalization normalizes the inputs across features; the conditioning introduced in equation 3 allows us to modulate the normalization's output based on a task's latent representation.
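Equation 3 can be sketched as follows (a minimal numpy version; the projection matrices, initialization scales, and hidden size are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 16                       # hidden size (illustrative)

gamma_p = np.ones(d)         # pretrained LayerNorm weight gamma' (e.g., from BERT, kept fixed)
beta_p = np.zeros(d)         # pretrained LayerNorm bias beta'
Wg = rng.normal(size=(d, d)) * 0.01   # trained FiLM projection for gamma_i(z)
Wb = rng.normal(size=(d, d)) * 0.01   # trained FiLM projection for beta_i(z)

def conditional_layer_norm(a, z, eps=1e-5):
    """Eq. 3: h = (a - mu)/sigma * gamma_hat(z) + beta_i(z), gamma_hat = gamma' * gamma_i(z) + beta'."""
    mu = a.mean(axis=-1, keepdims=True)
    sigma = a.std(axis=-1, keepdims=True)
    gamma_hat = gamma_p * (z @ Wg) + beta_p   # modulated affine weight
    return (a - mu) / (sigma + eps) * gamma_hat + (z @ Wb)

a = rng.normal(size=(3, d))  # a batch of activations for task i
z = rng.normal(size=(d,))    # task embedding z_i
h = conditional_layer_norm(a, z)
```

The normalization statistics are task-agnostic; only the affine rescaling depends on $\mathbf{z}_i$.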
+
+# 2.1.4 CONDITIONAL BOTTLENECK
+
+We created a task conditioned two layer feed-forward bottleneck (CFF up/down in Figure 3). The conditional bottleneck layer follows the same transformation as in equation 2. The module in Figure 3a is added to the top most Transformer layers of CA-MTLBASE and uses a CLN. For CA-MTLLARGE, this module is the main building block of the skip connection added alongside all Transformer layers, seen in Figure 3b. The connection at layer $j$ takes in the matrix sum of the Transformer layer output at $j$ and the previous connection's output at $j - 1$. The Conditional Bottleneck allows lower layer information to flow upwards depending on the task. Our intuition for introducing this component is related to recent studies (Tenney et al., 2019a) that showed that the "most important layers for a given task appear at specific positions". As with the other modules described so far, each task adaptation is created from the weights of a single shared adapter that is modulated by the task embedding.
+
+Figure 3: a) Conditional Bottleneck for CA-MTLBASE. b) Conditional Bottleneck for CA-MTLLARGE.
+
+# 2.2 MULTI-TASK UNCERTAINTY SAMPLING
+
+MT-Uncertainty Sampling is a task selection strategy inspired by Active Learning techniques. Our Algorithm 1 is outlined in the Appendix, Section A.2. As in Active Learning, our algorithm first evaluates model uncertainty. MT-Uncertainty Sampling uses Shannon Entropy, an uncertainty measure, to choose training examples, first doing a forward pass through the model on $b \times T$ input samples. For an output classification prediction with $C_i$ possible classes and probabilities $(p_{i,1}, \dots, p_{i,C_i})$, the Shannon Entropy $H_i$ for task $\mathfrak{T}_i$, $i \in \{1, \dots, T\}$, and our uncertainty measure $\mathcal{U}(\mathbf{x}_i)$ are given by:
+
+$$
+H_i = H_i\left(f_{\phi(\mathbf{z}_i),\theta_i}(\mathbf{x})\right) = -\sum_{c=1}^{C_i} p_c \log p_c, \quad \mathcal{U}(\mathbf{x}_i) = \frac{H_i\left(f_{\phi(\mathbf{z}_i),\theta_i}(\mathbf{x})\right)}{\hat{H} \times H_i'} \tag{4}
+$$
+
+$$
+\hat{H} = \max_{i \in \{1,\dots,T\}} \bar{H}_i = \max_i \left[\frac{1}{b} \sum_{\mathbf{x} \in \mathbf{x}_i} H_i\right], \quad H_i' = -\sum_{c=1}^{C_i} \frac{1}{C_i} \log\left[\frac{1}{C_i}\right] \tag{5}
+$$
+
+where $\bar{H}_i$ is the average Shannon Entropy across the $b$ samples of task $i$, $H_i'$ is the Shannon entropy of choosing classes uniformly at random, and $\hat{H}$ is the maximum of each task's average entropy over $b$ samples. $H_i'$ is a normalizing factor that accounts for the differing number of prediction classes (without the normalizing factor $H_i'$, tasks with binary classification, $C_i = 2$, were rarely chosen). Further, to limit high entropy outliers and to favor tasks with the highest uncertainty, we normalize by $\hat{H}$. The measure in eq. 4 allows Algorithm 1 to choose $b$ samples from the $b \times T$ candidates to train the model.
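The selection step of eqs. 4-5 can be sketched as below. This is a simplified sketch of the scoring and top-$b$ selection only (the full Algorithm 1 is in Section A.2), and the greedy top-$b$ choice is our own simplification:

```python
import numpy as np

def shannon_entropy(probs):
    """H = -sum_c p_c log p_c for one predicted class distribution."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def select_batch(task_probs, b):
    """Pick the b most uncertain samples out of b*T candidates (eqs. 4-5).

    task_probs: dict task_name -> array (b, C_i) of predicted class probabilities.
    Returns a list of (task_name, sample_index) pairs.
    """
    # Per-sample entropies, and the max over tasks of the mean entropy (H_hat, eq. 5).
    H = {t: np.array([shannon_entropy(p) for p in probs]) for t, probs in task_probs.items()}
    H_hat = max(h.mean() for h in H.values())
    scored = []
    for t, probs in task_probs.items():
        H_prime = np.log(probs.shape[1])          # entropy of the uniform distribution over C_i classes
        for j, h in enumerate(H[t]):
            scored.append((h / (H_hat * H_prime), t, j))   # U(x_i) in eq. 4
    scored.sort(reverse=True)                     # most uncertain first
    return [(t, j) for _, t, j in scored[:b]]
```

With a near-uniform (uncertain) binary task and a confident three-way task, the uncertain task's samples dominate the selected batch, which is the behavior that protects low resource tasks.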
+
+# 3 RELATED WORK
+
+Multi-Tasking in NLP. To take advantage of the potential positive transfer of knowledge from one task to another, several works have proposed carefully choosing which tasks to train as an intermediate step in NLP before single task fine-tuning (Bingel & Søgaard, 2017; Kerinec et al., 2018; Wang et al., 2019a; Standley et al., 2019; Pruksachatkun et al., 2020; Phang et al., 2018). The intermediate tasks are not required to perform well and are not typically evaluated jointly. In this work, all tasks are trained jointly and all tasks are evaluated from a single model. In Natural Language Understanding (NLU), it is still the case that, to get the best task performance, one often needs a separate model per task (Clark et al., 2019c; McCann et al., 2018). At scale, multilingual NMT systems (Aharoni et al., 2019) have also found that MTL model performance degrades as the number of tasks increases. We notice a similar trend in NLU with our baseline MTL model. Recently, approaches in MTL have tackled the problem by designing task specific decoders on top of a shared model (Liu et al., 2019b) or by distilling multiple single-task models into one (Clark et al., 2019c). Nonetheless, such MTL approaches still involve single task fine-tuning. In this paper, we show that it is possible to achieve high performance in NLU without single task fine-tuning.
+
+Adapters. Adapters are trainable modules that are attached in specific locations of a pretrained network. They provide another promising avenue to limit the number of parameters needed when confronted with a large number of tasks. This approach is useful with pretrained MLM models that have rich linguistic information (Tenney et al., 2019b; Clark et al., 2019b; Liu et al., 2019a; Tenney et al., 2019a). Recently, Houlsby et al. (2019) added an adapter to a pretrained BERT model by fine-tuning the layer norms and adding feed forward bottlenecks in every Transformer layer. However, such methods adapt each task individually during the fine-tuning process. Unlike prior work, our method harnesses the vectorized representations of tasks to modularize a single pretrained model across all tasks. Stickland et al. (2019) and Tay et al. (2020) also mix both MTL and adapters with BERT and T5 encoder-decoder (Raffel et al., 2019) respectively by creating local task modules that are controlled by a global task agnostic module. The main drawback is that a new set of non-shared parameters must be added when a new task is introduced. CA-MTL shares all parameters and is able to re-modulate existing weights with a new task embedding vector.
+
+Active Learning, Task Selection and Sampling. Our sampling technique is similar to the ones found in several active learning algorithms (Chen et al., 2006) that are based on Shannon entropy estimations. Reichart et al. (2008) and Ikhwantri et al. (2018) examined Multi-Task Active Learning (MTAL), a technique that chooses one informative sample for $T$ different learners (or models), one per task. Instead, we choose samples from $T$ tasks for one model. Moreover, our algorithm weights each sample by the corresponding task score, and the Shannon entropy is normalized to account for the various losses (see equation 5). Also, our algorithm is used in a large scale MTL setup ($\gg 2$ tasks). Recently, Glover & Hokamp (2019) explored task selection in MTL using learning policies based on counterfactual estimations (Charles et al., 2013). However, such a method considers only fixed stochastic parameterized policies, while our method adapts its selection criterion based on model uncertainty throughout the training process.
+
+# 4 EXPERIMENTS AND RESULTS
+
+We show that the adapter of Section 2 achieves parameter efficient transfer for 26 NLP tasks. Our implementation of CA-MTL is based on HuggingFace (Wolf et al., 2019). Hyperparameters and our experimental set-up are outlined in Section A.5. To preserve the weights of the pretrained model, CA-MTL's bottom half Transformer layers are frozen in all experiments (except in section 4.4). We also tested different layer freezing configurations and found that freezing half the layers worked best on average (see Section A.8).
+
+# 4.1 MULTI-TASK UNCERTAINTY SAMPLING
+
+Our MT-Uncertainty sampling strategy, from section 2.2, is compared to 3 other task selection schemes: (a) Counterfactual, (b) Task size, and (c) Random. We used BERTBASE (no adapters) for 200k iterations with the same hyperparameters as in Glover & Hokamp (2019). For more information on Counterfactual task selection, we invite the reader to consult the full explanation in Glover & Hokamp (2019). For $T$ tasks and dataset $D_i$ for task $i \in \{1, \dots, T\}$, we restate the definitions of Random ($\pi_{rand}$) and Task size ($\pi_{|task|}$) sampling:
+
+$$
+\pi_{rand} = 1/T, \quad \pi_{|task|} = |D_i| \left[\sum_{i=1}^{T} |D_i|\right]^{-1} \tag{6}
+$$
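The two baseline policies of eq. 6 amount to a few lines of code (the example dataset sizes below are approximate GLUE training-set sizes, used only for illustration):

```python
import numpy as np

def pi_rand(dataset_sizes):
    """Uniform task sampling: each of the T tasks with probability 1/T (eq. 6)."""
    T = len(dataset_sizes)
    return np.full(T, 1.0 / T)

def pi_task_size(dataset_sizes):
    """Size-proportional sampling: |D_i| / sum_j |D_j| (eq. 6)."""
    sizes = np.asarray(dataset_sizes, dtype=float)
    return sizes / sizes.sum()

sizes = [8551, 392702, 2490]   # e.g., CoLA, MNLI, RTE (approximate training-set sizes)
p = pi_task_size(sizes)
```

Under $\pi_{|task|}$ a small task like RTE is sampled two orders of magnitude less often than MNLI, which is exactly the imbalance MT-Uncertainty is designed to counteract.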
+
+
+Figure 4: MT-Uncertainty vs. other task sampling strategies: median dev set scores on 8 GLUE tasks using BERTBASE. Data for the Counterfactual and Task size policy $\pi_{|task|}$ (eq. 6) is from Glover & Hokamp (2019).
+
+In Figure 4, we see that MT-Uncertainty converges faster, reaching the $80\%$ average GLUE score line before the other task sampling methods. Further, MT-Uncertainty's maximum score over 200k iterations is 82.2, which is $1.7\%$ higher than Counterfactual sampling. The datasets in the GLUE benchmark offer a wide range of dataset sizes, which is useful for testing how MT-Uncertainty manages a jointly trained low resource task (CoLA) and high resource task (MNLI). Figure 5 shows how catastrophic forgetting is curtailed by sampling tasks before performance drops. With $\pi_{rand}$, all of CoLA's training data has been sampled by iteration 500, at which point the larger MNLI dataset overtakes the learning process and CoLA's dev set performance starts to diminish. On the other hand, with MT-Uncertainty sampling, CoLA is sampled whenever its Shannon entropy is higher than MNLI's. The model first assesses uncertain samples using Shannon Entropy, then decides what data to train on. This process allows lower resource tasks to keep performance steady. We provide evidence in Figure 8 of Section A.2 that MT-Uncertainty is able to manage task difficulty by choosing the most difficult tasks first.
+
+Figure 5: CoLA/MNLI Dev set scores and Entropy for $\pi_{rand}$ (left) and MT-Uncertainty (right).
+
+# 4.2 ABLATION AND MODULE ANALYSIS
+
+In Table 1, we present the results of an ablation study to determine which elements of CA-MTLBERT-BASE had the largest positive gain on average GLUE scores, starting from an MTL BERTBASE baseline trained using random task sampling ($\pi_{rand}$). Apart from the Conditional Adapter, each module, as well as MT-Uncertainty, lifts overall performance and reduces variance across tasks. Please note that we also included
+
+Table 1: Model ablation study on the GLUE dev set. All models have the bottom half layers frozen.
+
+| Model changes | Avg GLUE | Task σ GLUE | % data used |
+| --- | --- | --- | --- |
+| BERTBASE MTL (πrand) | 80.61 | 14.41 | 100 |
+| + Conditional Attention | 82.41 | 10.67 | 100 |
+| + Conditional Adapter | 82.90 | 11.27 | 100 |
+| + CA and CLN | 83.12 | 10.91 | 100 |
+| + MT-Uncertainty (CA-MTLBERT-BASE) | 84.03 | 10.02 | 66.3 |
+
+$^{\mathrm{a}}$ CA=Conditional Alignment, CLN=Conditional Layer Normalization, Task $\sigma =$ scores standard deviation across tasks.
+
+accuracy/F1 scores for QQP and MRPC and Pearson/Spearman correlations for STS-B to calculate the score standard deviation, Task $\sigma$. Intuitively, when negative task transfer occurs between two tasks, either (1) task interference is bidirectional and both scores are impacted, or (2) interference is unidirectional and only one score is impacted. We calculate Task $\sigma$ to characterize changes in the dynamic range of performance across multiple tasks, assessing the degree to which performance improvements are distributed across all tasks or only subsets of tasks. As we can see from Table 1, Conditional Attention, Conditional Alignment, Conditional Layer Normalization, and MT-Uncertainty all play roles in reducing Task $\sigma$ and increasing performance across tasks. This provides partial evidence of CA-MTL's ability to mitigate negative task transfer.
+
+Figure 6: Task performance vs. avg. covariance similarity scores (eq. 7) for MTL and CA-MTL.
+
+We show that Conditional Alignment can learn to capture covariate distribution differences with task embeddings co-learned from the other adapter components of CA-MTL. In Figure 6, we arrive at similar conclusions as Wu et al. (2020), who proved that negative task transfer is reduced when task covariances are aligned. The authors provided a "covariance similarity score" to gauge covariance alignment. For tasks $i$ and $j$ with $m_i$ and $m_j$ data samples respectively, and given $d$ dimensional inputs to the first Transformer layer $X_i \in \mathbb{R}^{m_i \times d}$ and $X_j \in \mathbb{R}^{m_j \times d}$, the steps to calculate the covariance similarity score between tasks $i$ and $j$ are: (a) take the covariance matrix $X_i^{\top} X_i$; (b) find its best rank-$r_i$ approximation $U_{i,r_i} D_{i,r_i} U_{i,r_i}^{\top}$, where $r_i$ is chosen to contain $99\%$ of the singular values; (c) apply steps (a) and (b) to $X_j$, and compute the covariance similarity score $CovSim_{i,j}$:
+
+$$
+CovSim_{i,j} := \frac{\left\| \left(U_{i,r_i} D_{i,r_i}^{1/2}\right)^{\top} U_{j,r_j} D_{j,r_j}^{1/2} \right\|_F}{\left\| U_{i,r_i} D_{i,r_i}^{1/2} \right\|_F \cdot \left\| U_{j,r_j} D_{j,r_j}^{1/2} \right\|_F}, \qquad CovSim_{i} = \frac{1}{T-1} \sum_{j \neq i} CovSim_{i,j} \tag{7}
+$$
+
+Since we are training models with $T$ tasks, we take the average covariance similarity score $CovSim_{i}$ between task $i$ and all other tasks. We measure $CovSim_{i}$ using equation 7 between 9 single-task models trained on individual GLUE tasks. For each task in Figure 6, we measure the similarity score on the MTL-trained $\mathrm{BERT}_{\mathrm{BASE}}$ baseline, e.g., CoLA (MTL), or on the CA-MTL$_{\text{BERT-BASE}}$ model, e.g., MNLI (CA-MTL). Our score improvement measure is the $\%$ difference between a single-task model and MTL or CA-MTL on the particular task. We find that covariance similarity increases for 9 tasks and that performance increases for 7 out of 9 tasks. These measurements confirm that Conditional Alignment is able to align task covariances, thereby helping alleviate task interference.
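Steps (a)–(c) and eq. 7 can be sketched in a few lines of NumPy. This is an illustrative re-implementation (not the authors' code), assuming the inputs are the first-layer features $X_i$:

```python
import numpy as np

def cov_sim(X_i, X_j, energy=0.99):
    """Covariance similarity score CovSim_{i,j} (eq. 7), an illustrative sketch.

    X_i, X_j: (m, d) arrays of inputs to the first Transformer layer.
    """
    def low_rank_factor(X):
        # (a) Covariance matrix X^T X; for a symmetric PSD matrix the SVD
        # coincides with the eigendecomposition X^T X = U D U^T.
        U, D, _ = np.linalg.svd(X.T @ X)
        # (b) Smallest rank r whose singular values hold `energy` of the mass.
        r = int(np.searchsorted(np.cumsum(D) / D.sum(), energy)) + 1
        return U[:, :r] * np.sqrt(D[:r])  # U_r D_r^{1/2}

    A, B = low_rank_factor(X_i), low_rank_factor(X_j)
    # (c) Normalized Frobenius inner product of the two low-rank factors.
    return np.linalg.norm(A.T @ B) / (np.linalg.norm(A) * np.linalg.norm(B))
```

The per-task average $CovSim_i$ is then simply the mean of `cov_sim` between task $i$ and each of the other $T-1$ tasks.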
+
+# 4.3 JOINTLY TRAINING ON 8 TASKS: GLUE
+
+In Table 2, we evaluate the performance of CA-MTL on GLUE against single-task fine-tuned models, MTL, and other BERT-based adapters. As in Houlsby et al. (2019), $\mathrm{MNLI_m}$ and $\mathrm{MNLI_{mm}}$ are treated as separate tasks. Our results indicate that CA-MTL outperforms both the BASE adapter,
+
+Table 2: Adapters with layer freezing vs. ST/MT on GLUE test set. F1 scores are reported for QQP/MRPC, Spearman's correlation for STS-B, accuracy on the matched/mismatched sets for MNLI, Matthew's correlation for CoLA, and accuracy for other tasks. * Individual scores not available. ST=Single Task, MTL=Multitask, g.e.= greater or equal to. Results from: ${}^{1}$ Devlin et al. (2018), ${}^{2}$ Stickland et al. (2019), ${}^{3}$ Houlsby et al. (2019).
+
+| Method | Type | Total params | Trained params/task | # tasks g.e. ST | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | GLUE Avg |
| Base Models — Test Server Results |
| BERTBASE1 | ST | 9.0× | 100% | — | 52.1 | 84.6/83.4 | 88.9 | 90.5 | 71.2 | 66.4 | 93.5 | 85.8 | 79.6 |
| BERTBASE2 | MTL | 1.0× | 11.1% | 2 | 51.2 | 84.0/83.4 | 86.7 | 89.3 | 70.8 | 76.6 | 93.4 | 83.6 | 79.9 |
| PALs+Anneal Samp.2 | MTL | 1.13× | 12.5% | 4 | 51.2 | 84.3/83.5 | 88.7 | 90.0 | 71.5 | 76.0 | 92.6 | 85.8 | 80.4 |
| CA-MTLBERT-BASE (ours) | MTL | 1.12× | 5.6% | 5 | 53.1 | 85.9/85.8 | 88.6 | 90.5 | 69.2 | 76.4 | 93.2 | 85.3 | 80.9 |
| Large Models — Test Server Results |
| BERTLARGE1 | ST | 9.0× | 100% | — | 60.5 | 86.7/85.9 | 89.3 | 92.7 | 72.1 | 70.1 | 94.9 | 86.5 | 82.1 |
| Adapters-2563 | ST | 1.3× | 3.6% | 3 | 59.5 | 84.9/85.1 | 89.5 | 90.7 | 71.8 | 71.5 | 94.0 | 86.9 | 80.0 |
| CA-MTLBERT-LARGE (ours) | MTL | 1.12× | 5.6% | 3 | 59.5 | 85.9/85.4 | 89.3 | 92.6 | 71.4 | 79.0 | 94.7 | 87.7 | 82.8 |
+
+PALs+Anneal Sampling (Stickland et al., 2019), and the LARGE adapter, Adapters-256 (Houlsby et al., 2019). Against single-task (ST) models, CA-MTL is $1.3\%$ higher than $\mathrm{BERT}_{\mathrm{BASE}}$, with equal or greater performance on 5 out of 9 tasks, and $0.7\%$ higher than $\mathrm{BERT}_{\mathrm{LARGE}}$, with equal or greater performance on 3 out of 9 tasks. ST models, however, need 9 separate models, or close to $9\times$ more parameters, to cover all 9 tasks. We note that CA-MTL$_{\text{BERT-LARGE}}$'s average score is driven by strong RTE scores. While RTE benefits from MTL, this behavior may also be a side effect of layer freezing. In Table 10, we see that CA-MTL gains over ST on more and more tasks as we gradually unfreeze layers.
+
+# 4.4 TRANSFER TO NEW TASKS
+
+In Table 3 we examine the ability of our method to quickly adapt to new tasks. We performed domain adaptation on the SciTail (Khot et al., 2018) and SNLI (Bowman et al., 2015) datasets, using a CA-MTL$_{\text{BERT-BASE}}$ model trained on GLUE and a new linear decoder head. We
+
+Table 3: Domain adaptation results on dev. sets for BASE models. ${}^{1}$ Liu et al. (2019b), ${}^{2}$ Jiang et al. (2020)
+
+| Method | SciTail 0.1% | SciTail 1% | SciTail 10% | SciTail 100% | SNLI 0.1% | SNLI 1% | SNLI 10% | SNLI 100% |
| BERTBASE1 | 51.2 | 82.2 | 90.5 | 94.3 | 52.5 | 78.1 | 86.7 | 91.0 |
| MT-DNN1 | 81.9 | 88.3 | 91.1 | 95.7 | 81.9 | 88.3 | 91.1 | 95.7 |
| MT-DNNSMART2 | 82.3 | 88.6 | 91.3 | 96.1 | 82.7 | 86.0 | 88.7 | 91.6 |
| CA-MTLBERT | 83.2 | 88.7 | 91.4 | 95.6 | 82.8 | 86.2 | 88.0 | 91.5 |
+
+tested several pretrained and randomly initialized task embeddings in a zero-shot setting. The complete set of experiments with all task embeddings can be found in the Appendix, Section A.4. We then selected the best task embedding for our results in Table 3. The STS-B and MRPC MTL-trained task embeddings performed best on SciTail and SNLI respectively. CA-MTL$_{\text{BERT-BASE}}$ adapts faster than MT-DNN$_{\text{SMART}}$ (Jiang et al., 2020), as evidenced by higher performance in the low-resource regimes (0.1% and 1% of the data). When trained on the complete dataset, CA-MTL$_{\text{BERT-BASE}}$ is on par with MT-DNN$_{\text{SMART}}$. Unlike MT-DNN$_{\text{SMART}}$, however, we do not add context from a semantic similarity model: MT-DNN$_{\text{SMART}}$ is built off HNN (He et al., 2019). Nonetheless, with a larger model, CA-MTL surpasses MT-DNN$_{\text{SMART}}$ on the full SNLI and SciTail datasets in Table 6.
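The adaptation recipe above (reuse the trained multi-task encoder, plug in a selected task embedding, and train a fresh linear decoder head) can be sketched as follows. The encoder interface and names here are hypothetical, for illustration only, not the authors' released code:

```python
import torch
import torch.nn as nn

class NewTaskHead(nn.Module):
    """Sketch: adapt a trained multi-task encoder to a new task.

    `encoder` is assumed to map (input_ids, task_embedding) to pooled
    features of size `hidden_size`; this interface is an assumption.
    """
    def __init__(self, encoder, task_embedding, hidden_size, num_labels):
        super().__init__()
        self.encoder = encoder                # trained CA-MTL-style encoder
        self.task_embedding = task_embedding  # e.g., the MTL-trained STS-B embedding
        self.head = nn.Linear(hidden_size, num_labels)  # new linear decoder head

    def forward(self, input_ids):
        features = self.encoder(input_ids, self.task_embedding)
        return self.head(features)
```

In a zero-shot setting, several candidate task embeddings would be scored this way and the best one kept before any fine-tuning on the new task's data.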
+
+# 4.5 JOINTLY TRAINING ON 24 TASKS: GLUE/SUPER-GLUE, MRQA AND WNUT2017
+
+Effects of Scaling Task Count. In Figure 7 we continue to test whether CA-MTL mitigates task interference by measuring average GLUE scores while progressively adding 9 GLUE tasks, 8 SuperGLUE tasks (Wang et al., 2019b), and 6 MRQA tasks (Fisch et al., 2019). Tasks are described in Appendix Section A.9. The results show that adding 23 tasks drops the performance of our baseline MTL BERT$_{\text{BASE}}$ ($\pi_{rand}$): MTL BERT increases by $4.3\%$ when adding MRQA but, with 23 tasks, model performance drops by $1.8\%$. The opposite is true when CA-MTL modules are integrated into the model: CA-MTL continues to show gains with a large number of tasks and surpasses the baseline MTL model by close to $4\%$ when trained on 23 tasks.
+
+24-task CA-MTL. We jointly trained large MTL baselines and CA-MTL models on GLUE/Super-GLUE/MRQA and Named Entity Recognition (NER) WNUT2017 (Derczynski et al., 2017). Since some dev. set scores are not provided and since RoBERTa results were reported with a median score over 5 random seeds, we ran our own single seed ST/MTL baselines (marked "ReImp") for a fair comparison. The dev. set numbers reported in Liu et al. (2019c) are displayed with our baselines in Table 9. Results are presented in Table 4.
+
+We notice in Table 4 that, even for large models, CA-MTL provides large average gains in performance over both ST and MTL models. For the BERT-based models, CA-MTL provides a $2.3\%$ gain
+
+
+Figure 7: Effects of adding more datasets on avg. GLUE scores. Experiments were conducted over 3 epochs. When 23 tasks are trained jointly, the performance of CA-MTL$_{\text{BERT-BASE}}$ continues to improve.
+
+Table 4: 24-task CA-MTL vs. ST and vs. 24-task MTL with frozen layers on GLUE, SuperGLUE, MRQA and NER development sets. ST=Single Task, MTL=Multitask, g.e.= greater or equal to. Details in section A.5.
+
+| Model | GLUE | SuperGLUE | MRQA | NER | Avg | # tasks g.e. ST | Total params |
| BERT-LARGE models |
| \(ST_{ReImp}\) | 84.5 | 68.9 | 79.7 | 54.1 | 76.8 | — | 24× |
| \(MTL_{ReImp}\) | 83.2 | 72.1 | 77.8 | 42.2 | 76.4 | 9/24 | 1× |
| CA-MTL | 86.6 | 74.1 | 79.5 | 49.0 | 79.1 | 17/24 | 1.12× |
| RoBERTa-LARGE models |
| \(ST_{ReImp}\) | 88.2 | 76.5 | 83.6 | 57.8 | 81.9 | — | 24× |
| \(MTL_{ReImp}\) | 86.0 | 78.6 | 80.7 | 49.3 | 80.7 | 7/24 | 1× |
| CA-MTL | 89.4 | 80.0 | 82.4 | 55.2 | 83.1 | 15/24 | 1.12× |
+
+over ST and higher scores on 17 out of 24 tasks. For RoBERTa-based models, CA-MTL provides a $1.2\%$ gain over ST and higher scores on 15 out of 24 tasks. We remind the reader that this is achieved with a single model. It is interesting to note that, even when trained with 16 other tasks, the MTL baseline performs better than the ST baseline on SuperGLUE, where most tasks have a small number of samples. Also, we used NER to test whether we could still outperform the ST baseline on a token-level task that is significantly different from the other tasks. While CA-MTL does not beat the ST baseline on NER, it performs significantly better than the MTL baseline; moreover, CA-MTL had not yet overfit on this particular task and may have closed the gap with the ST baseline given more training cycles.
+
+Comparisons with other methods. In Table 5, CA-MTL$_{\text{BERT}}$ is compared to other large BERT-based methods that use either MTL + ST, such as MT-DNN (Liu et al., 2019b), intermediate tasks + ST, such as STILTs (Phang et al., 2018), or MTL model distillation + ST, such as BAM! (Clark et al., 2019c). Our method scores higher than MT-DNN on 5 of 9 tasks and by $1.0\%$ on average. Against STILTs, CA-MTL realizes a $0.7\%$ average score gain, surpassing its scores on 6 of 9 tasks. We also show
+
+Table 5: Our 24-task CA-MTL vs. other large models on GLUE. F1 is reported for QQP/MRPC, Spearman's corr. for STS-B, Matthew's corr. for CoLA and accuracy for other tasks. *Split not available. **Uses intermediate task fine-tuning + ST.
+
+| Model | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | Avg |
| BERT-LARGE based models on Dev set |
| MT-DNN | 63.5 | 87.1/86.7 | 91.0 | 92.9 | 89.2 | 83.4 | 94.3 | 90.6 | 85.6 |
| STILTs** | 62.1 | 86.1* | 92.3 | 90.5 | 88.5 | 83.4 | 93.2 | 90.8 | 85.9 |
| BAM! | 61.8 | 87.0* | - | 92.5 | - | 82.8 | 93.6 | 89.7 | - |
| 24-task CA-MTL | 63.8 | 86.3/86.0 | 92.9 | 93.4 | 88.1 | 84.5 | 94.5 | 90.3 | 86.6 |
| RoBERTa-LARGE based models on Test set |
| RoBERTa** with Ensemble | 67.8 | 91.0/90.8 | 91.6 | 95.4 | 74.0 | 87.9 | 97.5 | 92.5 | 87.3 |
| 24-task CA-MTL | 62.2 | 89.0/88.4 | 92.0 | 94.7 | 72.3 | 86.2 | 96.3 | 89.8 | 85.7 |
+
+that CA-MTL$_{\text{RoBERTa}}$ is within only $1.6\%$ of a RoBERTa ensemble that uses 5 to 7 models per task as well as intermediate tasks. Using our 24-task CA-MTL large RoBERTa-based model, we report NER F1 scores on the WNUT2017 test set in Table 6a. We compare our result with RoBERTa$_{\text{LARGE}}$ and XLM-R$_{\text{LARGE}}$ (Nguyen et al., 2020), the current state-of-the-art (SOTA). Our model outperforms XLM-R$_{\text{LARGE}}$ by $1.6\%$, reaching a new state-of-the-art. Using domain adaptation as described in Section 4.4, we report results on the SciTail test set in Table 6b and the SNLI test set in Table 6c. For SciTail, our model matches the current SOTA$^2$, ALUM (Liu et al., 2020), a large RoBERTa-based model that additionally uses the SMART (Jiang et al., 2020) fine-tuning method. For SNLI, our model outperforms SemBERT, the current SOTA$^3$.
+
+Table 6: CA-MTL test performance vs. SOTA.
+
+| (a) WNUT2017 | F1 |
| RoBERTaLARGE | 56.9 |
| XLM-RLARGE | 57.1 |
| CA-MTLRoBERTa (ours) | 58.0 |
+
+| (b) SciTail | % Acc |
| MT-DNN | 94.1 |
| ALUMRoBERTa | 96.3 |
| ALUMRoBERTa-SMART | 96.8 |
| CA-MTLRoBERTa (ours) | 96.8 |
+
+| (c) SNLI | % Acc |
| MT-DNN | 91.6 |
| MT-DNNSMART | 91.7 |
| SemBERT | 91.9 |
| CA-MTLRoBERTa (ours) | 92.1 |
+
+# 5 CONCLUSION
+
+We believe that our experiments have demonstrated the potential of task-conditioned adaptive learning within a single model that performs multiple tasks. In a large-scale 24-task NLP experiment, CA-MTL outperforms fully tuned single-task models by $2.3\%$ for BERT Large and by $1.2\%$ for RoBERTa Large using 1.12 times the number of parameters, while the single-task fine-tuning approach requires 24 separately tuned models, or 24 times the number of parameters. While a vanilla BERT MTL model sees its performance drop as the number of tasks increases, CA-MTL scores continue to climb. Performance gains are not driven by a single task, as is often the case in MTL. Each CA-MTL module that adapts a Transformer model is able to reduce performance variance between tasks, increase average scores, and align task covariances. This evidence shows that CA-MTL is able to mitigate task interference and promote more efficient parameter sharing. We showed that MT-Uncertainty is able to avoid degrading the performance of low-resource tasks: tasks are sampled whenever the model sees entropy increase, helping avoid catastrophic forgetting. Overall, CA-MTL offers a promising avenue to dynamically adapt and modularize knowledge embedded in large monolithic pretrained models. Extending such ideas will be an objective of future work.
+
+# ACKNOWLEDGMENTS
+
+This research was supported by the Canada CIFAR AI Chairs Program, NSERC and PROMPT. Experiments in this article were conducted with Compute Canada and MILA computational infrastructure and we thank them for their support. We would like to thank Colin Raffel, Sandeep Subramanian, and Nicolas Gontier for their useful feedback and the anonymous reviewers for helpful comments, discussions and suggestions.
+
+# REFERENCES
+
+Roee Aharoni, Melvin Johnson, and Orhan Firat. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 3874-3884, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1388. URL https://www.aclweb.org/anthology/N19-1388.
+Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. URL http://arxiv.org/abs/1607.06450.
+Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pp. 41-48, 2009.
+Joachim Bingel and Anders Søgaard. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pp. 164-169, Valencia, Spain, April 2017. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/E17-2026.
+Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2015.
+Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv, pp. arXiv-2005, 2020.
+Rich Caruana. Multitask learning. Mach. Learn., 28(1):41-75, July 1997. ISSN 0885-6125. doi: 10.1023/A:1007379606734. URL https://doi.org/10.1023/A:1007379606734.
+Richard Caruana. Multitask learning: A knowledge-based source of inductive bias. In Proceedings of the Tenth International Conference on Machine Learning, pp. 41-48. Morgan Kaufmann, 1993.
+Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pp. 1-14, Vancouver, Canada, August 2017. Association for Computational Linguistics. doi: 10.18653/v1/S17-2001. URL https://www.aclweb.org/anthology/S17-2001.
+Denis Charles, Max Chickering, and Patrice Simard. Counterfactual reasoning and learning systems: The example of computational advertising. Journal of Machine Learning Research, 14:3207-3260, November 2013.
+Jinying Chen, Andrew Schein, Lyle Ungar, and Martha Palmer. An empirical study of the behavior of active learning for word sense disambiguation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pp. 120-127, New York City, USA, June 2006. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/N06-1016.
+Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. CoRR, abs/1711.02257, 2017. URL http://arxiv.org/abs/1711.02257.
+
+Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2924–2936, Minneapolis, Minnesota, June 2019a. Association for Computational Linguistics. doi: 10.18653/v1/N19-1300. URL https://www.aclweb.org/anthology/N19-1300.
+Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. What does BERT look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 276-286, Florence, Italy, August 2019b. Association for Computational Linguistics. doi: 10.18653/v1/W19-4828. URL https://www.aclweb.org/anthology/W19-4828.
+Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, and Quoc V. Le. Bam! born-again multi-task networks for natural language understanding. CoRR, abs/1907.04829, 2019c. URL http://arxiv.org/abs/1907.04829.
+Edward Collins, Nikolai Rozanov, and Bingbing Zhang. Evolutionary data measures: Understanding the difficulty of text classification tasks. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pp. 380-391, Brussels, Belgium, October 2018. Association for Computational Linguistics. doi: 10.18653/v1/K18-1037. URL https://www.aclweb.org/anthology/K18-1037.
+Ronan Collobert and Jason Weston. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML, pp. 160-167, 2008. URL https://doi.org/10.1145/1390156.1390177.
+Marie-Catherine de Marneffe, Mandy Simons, and Judith Tonhauser. The commitmentbank: Investigating projection in naturally occurring discourse. Proceedings of Sinn und Bedeutung, 23(2):107-124, Jul. 2019. doi: 10.18148/sub/2019.v23i2.601. URL https://ojs.ub.uni-konstanz.de/sub/index.php/sub/article/view/601.
+Harm de Vries, Florian Strub, Jeremy Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C Courville. Modulating early visual processing by language. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 6594-6604. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7237-modulating-early-visual-processing-by-language.pdf.
+Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. Results of the WNUT2017 shared task on novel and emerging entity recognition. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pp. 140-147, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/W17-4418. URL https://www.aclweb.org/anthology/W17-4418.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.
+William B. Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005. URL https://www.aclweb.org/anthology/I05-5002.
+Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Güney, Volkan Cirik, and Kyunghyun Cho. Searchqa: A new q&a dataset augmented with context from a search engine. CoRR, abs/1704.05179, 2017. URL http://arxiv.org/abs/1704.05179.
+Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 1-13, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-5801. URL https://www.aclweb.org/anthology/D19-5801.
+
+John Glover and Chris Hokamp. Task selection policies for multitask learning. CoRR, 2019. URL http://arxiv.org/abs/1907.06214.
+Andrew Gordon, Zornitsa Kozareva, and Melissa Roemmele. SemEval-2012 task 7: Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pp. 394–398, Montréal, Canada, 7–8 June 2012. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/S12-1052.
+Michelle Guo, Albert Haque, De-An Huang, Serena Yeung, and Li Fei-Fei. Dynamic task prioritization for multitask learning. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
+Pengcheng He, Xiaodong Liu, Weizhu Chen, and Jianfeng Gao. A hybrid neural network model for commonsense reasoning. In Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pp. 13-21, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-6002. URL https://www.aclweb.org/anthology/D19-6002.
+Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. CoRR, abs/1902.00751, 2019. URL http://arxiv.org/abs/1902.00751.
+Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 328-339, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1031. URL https://www.aclweb.org/anthology/P18-1031.
+Fariz Ikhwantri, Samuel Louvan, Kemal Kurniawan, Bagas Abisena, Valdi Rachman, Alfan Farizki Wicaksono, and Rahmad Mahendra. Multi-task active learning for neural semantic role labeling on low resource conversational corpus. In Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP, pp. 43-50, 2018.
+Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 2177-2190, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.197. URL https://www.aclweb.org/anthology/2020.acl-main.197.
+Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601-1611, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1147. URL https://www.aclweb.org/anthology/P17-1147.
+Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. Spanbert: Improving pre-training by representing and predicting spans. CoRR, abs/1907.10529, 2019. URL http://arxiv.org/abs/1907.10529.
+Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. CoRR, abs/1705.07115, 2017. URL http://arxiv.org/abs/1705.07115.
+Emma Kerinec, Chloe Braud, and Anders Søgaard. When does deep multi-task learning work for loosely related document classification tasks? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 1-8, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-5401. URL https://www.aclweb.org/anthology/W18-5401.
+
+Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 252-262, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1023. URL https://www.aclweb.org/anthology/N18-1023.
+Tushar Khot, A. Sabharwal, and Peter Clark. Scitail: A textual entailment dataset from science question answering. In AAAI, 2018.
+Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2015.
+Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics, 2019.
+Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668, 2020.
+Hector J. Levesque. The winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning. AAAI, 2011. URL http://dblp.uni-trier.de/db/conf/aaaiss/aaaiss2011-6.html#Levesque11.
+Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. Linguistic knowledge and transferability of contextual representations. CoRR, abs/1903.08855, 2019a. URL http://arxiv.org/abs/1903.08855.
+Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. CoRR, abs/1901.11504, 2019b. URL http://arxiv.org/abs/1901.11504.
+Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. Adversarial training for large neural language models, 2020.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692, 2019c. URL http://arxiv.org/abs/1907.11692.
+Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730, 2018.
+Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. What happens to bert embeddings during fine-tuning? arXiv preprint arXiv:2004.14448, 2020.
+Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. BERTweet: A pre-trained language model for English tweets. arXiv preprint arXiv:2005.10200, 2020.
+Hao Peng, Roy Schwartz, Dianqi Li, and Noah A. Smith. A mixture of h - 1 heads is better than h heads. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 6566-6577, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.587. URL https://www.aclweb.org/anthology/2020.acl-main.587.
+Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. Film: Visual reasoning with a general conditioning layer. In AAAI, 2018.
+Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. Dissecting contextual word embeddings: Architecture and representation. CoRR, abs/1808.08949, 2018. URL http://arxiv.org/abs/1808.08949.
+
+Jason Phang, Thibault Févry, and Samuel R. Bowman. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. CoRR, abs/1811.01088, 2018. URL http://arxiv.org/abs/1811.01088.
+Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. Collecting diverse natural language inference problems for sentence representation evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 67-81, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1007. URL https://www.aclweb.org/anthology/D18-1007.
+Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R Bowman. Intermediate-task transfer learning with pretrained models for natural language understanding: When and why does it work? arXiv preprint arXiv:2005.00628, 2020.
+Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2019.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383-2392, Austin, Texas, November 2016a. Association for Computational Linguistics. doi: 10.18653/v1/D16-1264. URL https://www.aclweb.org/anthology/D16-1264.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383-2392, Austin, Texas, November 2016b. Association for Computational Linguistics. doi: 10.18653/v1/D16-1264.
+Roi Reichart, Katrin Tomanek, Udo Hahn, and Ari Rappoport. Multi-task active learning for linguistic annotations. In Proceedings of ACL-08: HLT, pp. 861-869, 2008.
+Sebastian Ruder. An overview of multi-task learning in deep neural networks. ArXiv, abs/1706.05098, 2017.
+Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. CoRR, abs/1810.04650, 2018. URL http://arxiv.org/abs/1810.04650.
+Joan Serrà, Didac Suris, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. In ICML, pp. 4555-4564, 2018. URL http://proceedings.mlr.press/v80/serra18a.html.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631-1642, Seattle, Washington, USA, October 2013. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/D13-1170.
+Trevor Standley, Amir Roshan Zamir, Dawn Chen, Leonidas J. Guibas, Jitendra Malik, and Silvio Savarese. Which tasks should be learned together in multi-task learning? CoRR, abs/1905.07553, 2019. URL http://arxiv.org/abs/1905.07553.
+Asa Cooper Stickland and Iain Murray. BERT and PALs: Projected attention layers for efficient adaptation in multi-task learning. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 5986-5995, Long Beach, California, USA, 09-15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/stickland19a.html.
+
+Yi Tay, Zhe Zhao, Dara Bahri, Donald Metzler, and Da-Cheng Juan. Hypergrid: Efficient multi-task transformers with grid-wise decomposable hyper projections. arXiv preprint arXiv:2007.05891, 2020.
+Ian Tenney, Dipanjan Das, and Ellie Pavlick. BERT rediscovers the classical NLP pipeline. CoRR, abs/1905.05950, 2019a. URL http://arxiv.org/abs/1905.05950.
+Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. What do you learn from context? probing for sentence structure in contextualized word representations. CoRR, abs/1905.06316, 2019b. URL http://arxiv.org/abs/1905.06316.
+Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pp. 191-200, Vancouver, Canada, August 2017. Association for Computational Linguistics. doi: 10.18653/v1/W17-2623. URL https://www.aclweb.org/anthology/W17-2623.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017. URL http://arxiv.org/abs/1706.03762.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353-355, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-5446. URL https://www.aclweb.org/anthology/W18-5446.
+Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, and Samuel R. Bowman. Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2019a.
+Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. CoRR, abs/1905.00537, 2019b. URL http://arxiv.org/abs/1905.00537.
+Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. Neural network acceptability judgments. CoRR, abs/1805.12471, 2018. URL http://arxiv.org/abs/1805.12471.
+Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112-1122, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1101. URL https://www.aclweb.org/anthology/N18-1101.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. Huggingface's transformers: State-of-the-art natural language processing. CoRR, abs/1910.03771, 2019. URL http://arxiv.org/abs/1910.03771.
+Sen Wu, Hongyang R. Zhang, and Christopher Ré. Understanding and improving information transfer in multi-task learning. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SylzhkBtDB.
+Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2369-2380, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1259. URL https://www.aclweb.org/anthology/D18-1259.
+
+Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. Gradient surgery for multi-task learning. arXiv preprint arXiv:2001.06782, 2020.
+Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. Record: Bridging the gap between human and machine commonsense reading comprehension. CoRR, abs/1810.12885, 2018. URL http://arxiv.org/abs/1810.12885.
+Yu Zhang and Qiang Yang. A survey on multi-task learning. CoRR, abs/1707.08114, 2017. URL http://arxiv.org/abs/1707.08114.
+
+# A APPENDIX
+
+# A.1 SUMMARY OF ACRONYMS
+
+Dataset acronyms and their descriptions can be found below in section A.9.
+
+Table 7: List of acronyms used in this paper.
+
+| Acronym | Description |
+| --- | --- |
+| ARLM | Autoregressive Language Models |
+| CA-MTL | Conditional Adaptive Multi-Task Learning: our architecture |
+| CFF | Conditional Feed-Forward: a feed-forward layer modulated by a conditioning vector |
+| CLN | Conditional Layer Normalization in section 2.1.3 |
+| EDM | Evolutionary Data Measures (Collins et al., 2018): a task difficulty estimate |
+| GLUE | General Language Understanding Evaluation (Wang et al., 2018): a benchmark with multiple datasets |
+| QA | Question Answering |
+| MT | Multi-Task |
+| MTAL | Multi-Task Active Learning: finding the most informative instance for multiple learners (or models) |
+| MLM | Masked Language Model: BERT (Devlin et al., 2018) is an example of an MLM |
+| MTL | Multi-Task Learning: "learning tasks in parallel while using a shared representation" (Caruana, 1997) |
+| MRQA | Machine Reading for Question Answering (Fisch et al., 2019): a benchmark with multiple datasets |
+| NER | Named Entity Recognition |
+| NLP | Natural Language Processing |
+| SOTA | State of the art |
+| ST | Single-Task fine-tuning: all weights are typically updated |
+| ST-A | ST with Adapter modules: one adapter per task is trained and pretrained weights are optionally updated |
+
+# A.2 UNCERTAINTY SAMPLING: ALGORITHM AND ADDITIONAL RESULTS
+
+```
+Algorithm 1: Multi-Task Uncertainty Sampling
+Input: Training data $D_t$ for task $t \in [1, \dots, T]$; batch size $b$; $C_t$ possible output classes for task $t$; $f \coloneqq f_{\phi(\mathbf{z}_i), \theta_i}$ our model with weights $\phi, \theta_i$
+Output: $\mathcal{B}'$, a multi-task batch of size $b$
+$\mathcal{B} \gets \emptyset$
+for $t \gets 1$ to $T$ do
+    Sample $\mathbf{x}_t \coloneqq \{x_{t,1}, \ldots, x_{t,b}\} \overset{\text{i.i.d.}}{\sim} D_t$
+    for $i \gets 1$ to $b$ do
+        $\mathcal{H}_{t,i} \gets -\sum_{c=1}^{C_t} p_c(f(x_{t,i})) \log p_c(f(x_{t,i}))$    (entropy of each sample)
+    end
+    Compute $\bar{\mathcal{H}}_t \gets \frac{1}{b} \sum_{i=1}^{b} \mathcal{H}_{t,i}$    (average entropy for task $t$)
+    Compute $H_t' \gets -\sum_{c=1}^{C_t} \frac{1}{C_t} \log \frac{1}{C_t}$    (max entropy, uniform distribution)
+    $\mathcal{B} \gets \mathcal{B} \cup \mathbf{x}_t$ and $D_t \gets D_t \setminus \mathbf{x}_t$
+    if $D_t = \emptyset$ then
+        Reload $D_t$
+    end
+    for $i \gets 1$ to $b$ do
+        Compute $\mathcal{U}_{t,i} \gets \mathcal{H}_{t,i} / H_t'$    (uncertainty normalized by max entropy)
+    end
+end
+Compute $\hat{\mathcal{H}} \gets \max_{t \in \{1, \dots, T\}} \bar{\mathcal{H}}_t$    (highest average task entropy)
+Update $\mathcal{U}_{t,i} \gets \mathcal{U}_{t,i} / \hat{\mathcal{H}}$    (normalize each sample's uncertainty measure)
+$\mathcal{B}' \gets \mathrm{top\_b}(\{\mathcal{U}_{t,i} \mid t \in [1, \dots, T], i \in [1, \dots, b]\})$    ($b$ samples with highest uncertainty)
+Return: With $\mathcal{B}'$, solve Eq. 1 with gradient descent; return the updated model $f$
+```
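
The selection loop of Algorithm 1 can be sketched in a few lines of NumPy. This is a simplified illustration rather than the training code: `prob_fn` is a hypothetical hook returning the model's class probabilities for a task, the datasets are plain lists, and the data-reloading step is omitted.

```python
import numpy as np

def mt_uncertainty_batch(prob_fn, datasets, num_classes, b):
    """Select a multi-task batch of the b most uncertain samples.

    prob_fn(t, x) returns the model's class probabilities for task t;
    datasets[t] holds candidate inputs for task t; num_classes[t] is C_t.
    """
    per_task, max_avg = [], 0.0
    for t, data in enumerate(datasets):
        xs = data[:b]                                   # draw b candidates for task t
        ents = np.array([-(p * np.log(p)).sum()
                         for p in (np.asarray(prob_fn(t, x)) for x in xs)])
        h_max = np.log(num_classes[t])                  # entropy of a uniform distribution
        per_task.append((xs, ents / h_max))             # U_{t,i} = H_{t,i} / H'_t
        max_avg = max(max_avg, ents.mean())             # highest average task entropy
    pool, scores = [], []
    for xs, u in per_task:
        pool.extend(xs)
        scores.extend((u / max_avg).tolist())           # final normalization
    top = np.argsort(scores)[::-1][:b]                  # b samples with highest uncertainty
    return [pool[i] for i in top]
```

For instance, with two tasks where the model is uniform on one and confident on the other, the returned batch is drawn from the uncertain task.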
+
+An advantage of our MT-Uncertainty Sampling approach is its ability to manage task difficulty, as highlighted in Figure 8. In this experiment, we estimated task difficulty with the Evolutionary Data Measures (EDM) proposed by Collins et al. (2018). The difficulty estimate relies on multiple dataset statistics, such as data size, class diversity, class balance, and class interference. Interestingly, the estimated task difficulty correlates with the first time each task is selected. Treating QNLI as an outlier, we notice that peaks in the data occur whenever tasks are first selected by MT-Uncertainty sampling, in the order 1. MNLI, 2. CoLA, 3. RTE, 4. QQP, 5. MRPC, 6. SST-2, which is the order from highest to lowest estimated task difficulty under EDM. In contrast to Curriculum Learning (Bengio et al., 2009), MT-Uncertainty dynamically prioritizes the most difficult tasks. As also observed in MTL vision work (Guo et al., 2018), this prioritization of more difficult tasks may explain MT-Uncertainty's improved performance over other task selection methods. In MTL, tasks are typically balanced during training by heuristically weighting each task's loss; MT-Uncertainty instead prioritizes by task difficulty directly.
+
+
+Figure 8: Task composition of MT-Uncertainty sampling and estimated task difficulty using EDM: number of training samples per task at each iteration, for a batch size of 32. The occurrence of first peaks and the estimated difficulty follow the same order, from highest to lowest: MNLI > CoLA > RTE > QQP = MRPC > SST-2.
+
+While the EDM difficulty measure has been shown to correlate well with model performance, it lacks precision. As reported in Collins et al. (2018), the average score achieved on the Yahoo Answers dataset is $69.9\%$ with a difficulty of 4.51, while the average score achieved on Yelp Full is $56.8\%$ ($13.1\%$ lower than Yahoo Answers) with a difficulty of 4.42. The authors note that "This indicates that the difficulty measure in its current incarnation may be more effective at assigning a class of difficulty to datasets, rather than a regression-like value".
+
+# A.3 OTHER RELATED WORK
+
+Multi-Tasking in NLP and other fields. MTL weight-sharing algorithms such as Mixture-of-Experts (MoE) have found success in NLP (Lepikhin et al., 2020). CA-MTL can complement MoE, since the Transformer's multi-headed attention can itself be seen as a form of MoE (Peng et al., 2020). In vision, MTL can also be improved with optimization-based (Sener & Koltun, 2018) or gradient-based approaches (Chen et al., 2017; Yu et al., 2020).
+
+Active Learning, Task Selection and Sampling. Ikhwantri et al. (2018) examined multi-task active learning for neural semantic role labeling in a low-resource setting, using entity recognition as the sole auxiliary task. They used uncertainty sampling for active learning and found that $12\%$ less data could be used compared to passive learning. Reichart et al. (2008) examined different active learning techniques for the two-task annotation scenario, focusing on named entity recognition and syntactic parse tree annotations. In contrast, we examine the larger-scale data regime, the modularization of a multi-task neural architecture, and the many-task $(\gg 2)$ setting, among other differences. Beyond MTAL (Reichart et al., 2008; Ikhwantri et al., 2018), Kendall et al. (2017) leveraged model uncertainty to balance MTL losses, but not to select tasks as is proposed here.
+
+# A.4 ZERO-SHOT RESULTS ON SCITAIL AND SNLI
+
+Before testing models on domain adaptation in section 4.4, we ran zero-shot evaluations on the development sets of SciTail and SNLI. Table 8 outlines the 8-task CA-MTL_BERT-BASE model's zero-shot transfer abilities when pretrained on GLUE with our MTL approach. We expanded the task embedding layer to accommodate an extra task and explored various embedding initializations. We found that reusing the STS-B and MRPC task embeddings worked best for SciTail and SNLI, respectively.
+
+Table 8: CA-MTL is flexible and extensible to new tasks; however, it is sensitive to the new task's embedding. We tested which task embeddings worked best on SciTail and SNLI by checking performance in a zero-shot setting, i.e., using $0\%$ of the new task's data.
+
+| Initialization of new task embedding layer | SciTail (0% of data) | SNLI (0% of data) |
+| --- | --- | --- |
+| CoLA's embeddings | 43.0 | 34.0 |
+| MNLI's embeddings | 24.2 | 33.0 |
+| MRPC's embeddings | 34.5 | 45.5 |
+| STS-B's embeddings | 46.9 | 33.2 |
+| SST-2's embeddings | 25.8 | 34.2 |
+| QQP's embeddings | 31.7 | 37.3 |
+| QNLI's embeddings | 32.0 | 38.0 |
+| RTE's embeddings | 32.3 | 40.6 |
+| WNLI's embeddings | 29.0 | 30.4 |
+| Average | 28.7 | 37.7 |
+| Random initialization | 46.8 | 34.0 |
+| Xavier initialization | 29.8 | 37.6 |
+
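The embedding reuse compared in Table 8 amounts to seeding a new row of the task-embedding matrix from an existing task's row. A minimal NumPy sketch, where `extend_task_embeddings` and its arguments are hypothetical names for illustration:

```python
import numpy as np

def extend_task_embeddings(emb, init_from=None, rng=None):
    """Append one row to a (num_tasks, dim) task-embedding matrix.

    init_from: index of an existing task whose embedding seeds the new
    task (e.g. STS-B for SciTail), or None for random initialization.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    if init_from is not None:
        new_row = emb[init_from].copy()   # reuse an existing task embedding
    else:
        new_row = rng.normal(scale=0.02, size=emb.shape[1])  # random init
    return np.vstack([emb, new_row])
```
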
+# A.5 MORE EXPERIMENTAL DETAILS
+
+We used a batch size of 32 and a seed of 12 in all experiments. We used Adam (Kingma & Ba, 2015) as the optimizer with a learning rate of 2e-5, and applied learning rate decay with warm-up over the first $10\%$ of the training steps. Unless otherwise specified, we trained for 5 epochs with a sequence length of 128. Additional details are outlined in section . Our data preprocessing and linear decoder heads are the same as in Devlin et al. (2018). We used the same dropout rate of 0.1 in all layers. To run our experiments, we used either four NVIDIA P100 GPUs for base models or four NVIDIA V100 GPUs for larger ones. We did not perform hyperparameter search, and we did not use model ensembles or task-specific tricks (Devlin et al., 2018; Liu et al., 2019b; Clark et al., 2019c). All models have either 12 Transformer layers (BASE) or 24 Transformer layers (LARGE). Apart from CA-MTL, models trained with multi-task learning (BERT or RoBERTa without adapters) used random task sampling. For Table 1 and Figure 7, all BERT-based models have half their layers frozen (untrained) for a fair comparison of ablation results. For the 24-task MTL and CA-MTL models in Tables 4 and 5, we increased the input sequence length to 256 and trained for 8 epochs.
+
+# A.6 THE DIRECT SUM OPERATOR
+
+In section 2.1.1, we used the direct sum operator $\oplus$ . This operation creates a block-diagonal matrix. The direct sum of a matrix $A \in \mathbb{R}^{m \times n}$ and a matrix $B \in \mathbb{R}^{p \times q}$ results in a matrix of size $(m + p) \times (n + q)$ , defined as:
+
+$$
+\mathbf {A} \oplus \mathbf {B} = \left[ \begin{array}{c c} \mathbf {A} & \mathbf {0} \\ \mathbf {0} & \mathbf {B} \end{array} \right] = \left[ \begin{array}{c c c c c c} a _ {1 1} & \dots & a _ {1 n} & 0 & \dots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a _ {m 1} & \dots & a _ {m n} & 0 & \dots & 0 \\ 0 & \dots & 0 & b _ {1 1} & \dots & b _ {1 q} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \dots & 0 & b _ {p 1} & \dots & b _ {p q} \end{array} \right]
+$$
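
The definition above is straightforward to implement (for two arguments it matches `scipy.linalg.block_diag`); a small NumPy sketch:

```python
import numpy as np

def direct_sum(a, b):
    """Direct sum of two matrices: the block-diagonal matrix [[A, 0], [0, B]]."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    m, n = a.shape
    p, q = b.shape
    out = np.zeros((m + p, n + q), dtype=np.result_type(a, b))
    out[:m, :n] = a   # A in the top-left block
    out[m:, n:] = b   # B in the bottom-right block
    return out
```
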
+
+# A.7 BASELINES AND OTHER EXPERIMENTAL RESULTS
+
+In this section, we present our baseline results for BERT, RoBERTa, and CA-MTL, as well as other models. The single-task (ST) results that we ran ourselves surpass other papers' reported scores in Table 9. Liu et al. (2019c) report random-seed median scores for RoBERTa. Our RoBERTa ST baseline matches or surpasses the original paper's scores 4 out of 7 times on the development set where scores are comparable (QQP F1 and STS-B Spearman correlation are not reported).
+
+Table 9: F1 scores are reported for QQP/MRPC, Spearman's correlation for STS-B, accuracy on the matched/mismatched sets for MNLI, Matthews correlation for CoLA, and accuracy for the other tasks. ST = Single-Task, MTL = Multi-Task. *QNLI v1 (we report v2). **F1 score or Spearman's correlation is not reported. ***Unknown random seeds. Results from: ${}^{1}$ Stickland et al. (2019), ${}^{2}$ Liu et al. (2019b), ${}^{3}$ Phang et al. (2018), ${}^{4}$ Liu et al. (2019c).
+
+| Method | Total params | Trained params/task | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | Avg |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| *Base Models — Dev set Results* | | | | | | | | | | | |
+| PALs + Anneal Samp.$^1$ | 1.13× | 12.5% | - | - | - | - | - | - | - | - | 81.70 |
+| 8-task CA-MTL-BERT-BASE (ours) | 1.12× | 5.6% | 60.9 | 82.7/83.1 | 88.9 | 90.7 | 90.3 | 79.1 | 91.9 | 88.8 | 84.03 |
+| *BERT-LARGE Models — Dev set Results* | | | | | | | | | | | |
+| ST BERT-LARGE$^2$ | 9× | 100% | 60.5 | 86.7/85.9 | 89.3 | 92.7* | 89.3 | 70.1 | 94.9 | 86.5 | 84.0 |
+| ST BERT-LARGE$^3$ | 9× | 100% | 62.1 | 86.2/86.2 | 92.3 | 89.4 | 88.5 | 70.0 | 92.5 | 90.1 | 84.1 |
+| ST BERT-LARGE (ours) | 9× | 100% | 63.6 | 86.5/86.0 | 91.4 | 91.0 | 88.5 | 70.2 | 94.7 | 88.2 | 84.5 |
+| 24-task CA-MTL-BERT-LARGE (ours) | 1.12× | 5.6% | 63.8 | 86.3/86.0 | 92.9 | 93.4 | 88.1 | 84.5 | 94.5 | 90.3 | 86.6 |
+| *RoBERTa-LARGE Models — Dev set Results* | | | | | | | | | | | |
+| RoBERTa-LARGE$^4$ (median of 5 runs)*** | 9× | 100% | 68.0 | 90.2 | 90.9 | 94.7 | ** | 86.6 | 96.4 | ** | - |
+| ST RoBERTa-LARGE (ours) | 9× | 100% | 68.3 | 89.2/88.9 | 92.6 | 94.8 | 84.6 | 87.0 | 96.4 | 91.7 | 88.2 |
+| 24-task CA-MTL-RoBERTa-LARGE (ours) | 1.12× | 5.6% | 69.7 | 89.4/89.3 | 93.9 | 94.9 | 88.8 | 91.0 | 96.2 | 91.0 | 89.4 |
+
+# A.8 RESULTS ON LAYER FREEZING AND FULL BLOCK ATTENTION
+
+All experiments in this section were run for only 5 epochs, exclusively on the GLUE dataset, with the large BERT-based 8-task CA-MTL model. Results in Table 10 reveal that performance tends to decrease as we freeze more layers. However, since we wanted to preserve as much pretrained knowledge as possible, we chose to keep at least $50\%$ of layers frozen. While this slightly lowered our performance on the 9 GLUE tasks, we believe that keeping as many of the original pretrained weights as possible is beneficial when increasing the total number of tasks in MTL to 24 or more. We did not, however, explore this hypothesis further.
+
+Table 10: 8-task CA-MTL-BERT-LARGE (see section 4.3) for various layer freezing configurations. F1 scores are reported for QQP/MRPC, Spearman's correlation for STS-B, accuracy on the matched/mismatched sets for MNLI, Matthews correlation for CoLA, and accuracy for the other tasks. FBA = Full Block Attention.
+
+| Method | % frozen layers | # tasks ≥ ST | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | Avg |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| *LARGE Models — Dev set Results* | | | | | | | | | | | |
+| ST BERT-LARGE (ours) | 0% | — | 63.6 | 86.5/86.0 | 91.4 | 91.0 | 88.5 | 70.2 | 93.1 | 88.2 | 84.3 |
+| CA-MTL | 0% | 7 | 60.2 | 86.2/86.0 | 92.0 | 91.5 | 88.7 | 76.3 | 93.3 | 89.5 | 84.9 |
+| CA-MTL | 25% | 6 | 63.7 | 86.1/85.8 | 89.1 | 91.2 | 88.6 | 79.7 | 92.9 | 88.5 | 85.1 |
+| CA-MTL | 50% | 3 | 63.2 | 85.5/85.5 | 91.8 | 90.9 | 88.3 | 81.4 | 93.0 | 90.1 | 85.5 |
+| CA-MTL FBA | 50% | 0 | 60.2 | 81.7/81.1 | 88.0 | 85.8 | 85.7 | 78.7 | 88.6 | 87.1 | 81.8 |
+
+# A.9 DATASET DESCRIPTION
+
+The datasets used for the domain adaptation experiments were SciTail and SNLI. We jointly trained a CA-MTL-RoBERTa-LARGE model on 9 GLUE tasks, 8 SuperGLUE tasks, 6 MRQA tasks, and on WNUT2017 (Derczynski et al., 2017).
+
+All GLUE tasks are binary classification, except STS-B (regression) and MNLI (three classes). We used the same GLUE data preprocessing as in Devlin et al. (2018).
+
+Table 11: GLUE (Wang et al., 2018) dataset description.
+References: $^{1}$ Warstadt et al. (2018), $^{2}$ Socher et al. (2013), $^{3}$ Dolan & Brockett (2005), $^{4}$ Cer et al. (2017), $^{5}$ Williams et al. (2018), $^{6}$ Wang et al. (2018), $^{7}$ Levesque (2011)
+
+| Acronym | Corpus | \|Train\| | Task | Domain |
+| --- | --- | --- | --- | --- |
+| CoLA$^1$ | Corpus of Linguistic Acceptability | 8.5K | acceptability | miscellaneous |
+| SST-2$^2$ | Stanford Sentiment Treebank | 67K | sentiment detection | movie reviews |
+| MRPC$^3$ | Microsoft Research Paraphrase Corpus | 3.7K | paraphrase detection | news |
+| STS-B$^4$ | Semantic Textual Similarity Benchmark | 7K | textual similarity | miscellaneous |
+| QQP | Quora Question Pairs | 364K | paraphrase detection | online QA |
+| MNLI$^5$ | Multi-Genre NLI | 393K | inference | miscellaneous |
+| RTE$^6$ | Recognizing Textual Entailment | 2.5K | inference/entailment | news, Wikipedia |
+| WNLI$^7$ | Winograd NLI | 634 | coreference | fiction books |
+
+Table 12: Super-GLUE (Wang et al., 2019b) dataset description. References: ${}^{1}$ Clark et al. (2019a), ${}^{2}$ de Marneffe et al. (2019), ${}^{3}$ Gordon et al. (2012), ${}^{4}$ Khashabi et al. (2018), ${}^{5}$ Zhang et al. (2018), ${}^{6}$ Wang et al. (2019b), ${}^{7}$ Poliak et al. (2018), ${}^{8}$ Levesque (2011)
+
+| Acronym | Corpus | \|Train\| | Task | Domain |
+| --- | --- | --- | --- | --- |
+| BoolQ$^1$ | Boolean Questions | 9.4K | boolean QA | Google queries, Wikipedia |
+| CB$^2$ | CommitmentBank | 250 | entailment | miscellaneous |
+| COPA$^3$ | Choice of Plausible Alternatives | 400 | causal reasoning | blogs, encyclopedia |
+| MultiRC$^4$ | Multi-Sentence Reading Comprehension | 5.1K | reading comprehension | miscellaneous |
+| ReCoRD$^5$ | Reading Comprehension and Commonsense Reasoning | 101K | cloze-style QA | news |
+| RTE$^6$ | Recognizing Textual Entailment | 2.5K | inference | news, Wikipedia |
+| WiC$^7$ | Word-in-Context | 6K | word sense disambiguation | WordNet, VerbNet |
+| WSC$^8$ | Winograd Schema Challenge | 554 | coreference resolution | fiction books |
+
+Table 13: MRQA (Fisch et al., 2019) dataset description. References: ${}^{1}$ Rajpurkar et al. (2016a), ${}^{2}$ Trischler et al. (2017), ${}^{3}$ Joshi et al. (2017), ${}^{4}$ Dunn et al. (2017), ${}^{5}$ Yang et al. (2018), ${}^{6}$ Kwiatkowski et al. (2019)
+
+| Acronym | Corpus | \|Train\| | Task | Domain |
+| --- | --- | --- | --- | --- |
+| SQuAD$^1$ | Stanford QA Dataset | 86.6K | crowdsourced questions | Wikipedia |
+| NewsQA$^2$ | NewsQA | 74.2K | crowdsourced questions | news |
+| TriviaQA$^3$ | TriviaQA | 61.7K | trivia QA | web snippets |
+| SearchQA$^4$ | SearchQA | 117.4K | Jeopardy QA | web snippets |
+| HotpotQA$^5$ | HotpotQA | 72.9K | crowdsourced questions | Wikipedia |
+| Natural Questions$^6$ | Natural Questions | 104.7K | search logs | Wikipedia |
+
+SuperGLUE has a more diverse task format than GLUE, which is mostly limited to sentence and sentence-pair classification. We follow the same preprocessing procedure as in Wang et al. (2019b). All SuperGLUE tasks are binary classification tasks, except CB (three classes); WiC and WSC are span-based classification tasks. We used the same modified MRQA dataset and preprocessing steps as Joshi et al. (2019). All MRQA tasks are span prediction tasks that seek to identify the start and end tokens of an answer span in the input text.
+
+Table 14: SNLI$^1$ (Bowman et al., 2015) and SciTail$^2$ (Khot et al., 2018) dataset descriptions.
+
+| Acronym | Corpus | \|Train\| | Task | Domain |
+| --- | --- | --- | --- | --- |
+| SNLI$^1$ | Stanford Natural Language Inference | 550.2K | inference | human-written English sentence pairs |
+| SciTail$^2$ | Science and Entailment | 23.5K | entailment | science question answering |
+
+SNLI is a natural language inference task with three target labels: Entailment, Contradiction, and Neutral (irrelevant). SciTail is a textual entailment dataset: its hypotheses are created from multiple-choice science exams, and the answer candidates (premises) are extracted from the web using information retrieval tools. SciTail is a binary true/false classification task that seeks to predict whether the premise entails the hypothesis. The two datasets are used only for domain adaptation in this study (see section A.4 for the details of our approach).
\ No newline at end of file
diff --git a/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/images.zip b/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..665188b22c47566f793512b4543c9b844aa2733b
--- /dev/null
+++ b/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da4ce6cca934a41003c37bd2b40c3e0c0980af73ea4c0e74a2ad0f57dbfb08e2
+size 946091
diff --git a/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/layout.json b/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..cf7267ee06e9e8a18c5747bb33345c2b433406a1
--- /dev/null
+++ b/conditionallyadaptivemultitasklearningimprovingtransferlearninginnlpusingfewerparameterslessdata/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f8a0e58f51221590783c934fa3c201720f686dc1ed08bae2be3adedea60b8e9
+size 708539
diff --git a/conditionalnegativesamplingforcontrastivelearningofvisualrepresentations/8c0a39c0-4938-40a6-b40e-aae2e57b7927_content_list.json b/conditionalnegativesamplingforcontrastivelearningofvisualrepresentations/8c0a39c0-4938-40a6-b40e-aae2e57b7927_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..515a00f317aab36d6c5714f7f294d402bcb1ecc8
--- /dev/null
+++ b/conditionalnegativesamplingforcontrastivelearningofvisualrepresentations/8c0a39c0-4938-40a6-b40e-aae2e57b7927_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:383bfc522b1c0a2c64ea6de58a54edccb65650b2a617733f59a8f678e98812ad
+size 84514
diff --git a/conditionalnegativesamplingforcontrastivelearningofvisualrepresentations/8c0a39c0-4938-40a6-b40e-aae2e57b7927_model.json b/conditionalnegativesamplingforcontrastivelearningofvisualrepresentations/8c0a39c0-4938-40a6-b40e-aae2e57b7927_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6d261bc87fe0dff95ca38b33efdd3c202804f4fd
--- /dev/null
+++ b/conditionalnegativesamplingforcontrastivelearningofvisualrepresentations/8c0a39c0-4938-40a6-b40e-aae2e57b7927_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:44340f7af584a250da28499822e82471fccb3ac3053b1bdc902afca28cea250d
+size 103717
diff --git a/conditionalnegativesamplingforcontrastivelearningofvisualrepresentations/8c0a39c0-4938-40a6-b40e-aae2e57b7927_origin.pdf b/conditionalnegativesamplingforcontrastivelearningofvisualrepresentations/8c0a39c0-4938-40a6-b40e-aae2e57b7927_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..97b22a94650181d855e8586888e5fb21a186261d
--- /dev/null
+++ b/conditionalnegativesamplingforcontrastivelearningofvisualrepresentations/8c0a39c0-4938-40a6-b40e-aae2e57b7927_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67e6bf41320c6a1f282537147a0e377c7f2f1ef6986f8e78cbf0647baeb6be42
+size 625404
diff --git a/conditionalnegativesamplingforcontrastivelearningofvisualrepresentations/full.md b/conditionalnegativesamplingforcontrastivelearningofvisualrepresentations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5d9e3e3c0fbd0627e6f5640d746831fa51070ada
--- /dev/null
+++ b/conditionalnegativesamplingforcontrastivelearningofvisualrepresentations/full.md
@@ -0,0 +1,306 @@
+# CONDITIONAL NEGATIVE SAMPLING FOR CONTRASTIVE LEARNING OF VISUAL REPRESENTATIONS
+
+Mike Wu $^{1}$ , Milan Mosse $^{1,3}$ , Chengxu Zhuang $^{2}$ , Daniel Yamins $^{1,2}$ , Noah Goodman $^{1,2}$
+
+Department of Computer Science$^1$, Psychology$^2$, and Philosophy$^3$
+
+Stanford University
+
+{wumike, chengxuz, mmosse19, yamins, ngoodman}@stanford.edu
+
+# ABSTRACT
+
+Recent methods for learning unsupervised visual representations, dubbed contrastive learning, optimize the noise-contrastive estimation (NCE) bound on mutual information between two transformations of an image. NCE typically uses randomly sampled negative examples to normalize the objective, but this may include many uninformative examples that are either too easy or too hard to discriminate. Taking inspiration from metric learning, we show that choosing semi-hard negatives can yield stronger contrastive representations. To do this, we introduce a family of mutual information estimators that sample negatives conditionally – in a “ring” around each positive. We prove that these estimators remain lower-bounds of mutual information, with higher bias but lower variance than NCE. Experimentally, we find that our approach, applied on top of existing models (IR, CMC, and MoCo), improves accuracy by $2 - 5\%$ absolute points in each case, as measured by linear evaluation on four standard image benchmarks. Moreover, we find continued benefits when transferring features to a variety of new image distributions from the Meta-Dataset collection and to a variety of downstream tasks such as object detection, instance segmentation, and key-point detection.
+
+# 1 INTRODUCTION
+
+Supervised learning has given rise to human-level performance in several visual tasks (Russakovsky et al., 2015; He et al., 2017), relying heavily on large image datasets paired with semantic annotations. These annotations vary in difficulty and cost, spanning from simple class labels to more granular descriptions like bounding boxes and key-points. As it is impractical to scale high-quality annotations, this reliance on supervision poses a barrier to widespread adoption. While supervised pretraining is still the dominant approach in computer vision, recent studies using unsupervised "contrastive" objectives have achieved remarkable results in the last two years, closing the gap to supervised baselines (Wu et al., 2018; Oord et al., 2018; Hjelm et al., 2018; Zhuang et al., 2019; Henaff et al., 2019; Misra & Maaten, 2020; He et al., 2019; Chen et al., 2020a,b; Grill et al., 2020).
+
+Many contrastive algorithms are estimators of mutual information (Oord et al., 2018; Hjelm et al., 2018; Bachman et al., 2019), capturing the intuition that a good low-dimensional "representation" is one that linearizes the useful information embedded within a high-dimensional data point. In vision, these estimators maximize the similarity of encodings for two augmentations of the same image. This is trivial (e.g., assigning all image pairs maximum similarity) unless the similarity function is normalized, which is typically done by comparing an image to "negative examples" to which a model must assign low similarity. We hypothesize that how we choose these negatives greatly impacts representation quality: with harder negatives, the encoder is encouraged to capture more granular information that may improve performance on downstream tasks. While research in contrastive learning has explored architectures, augmentations, and pretext tasks, little attention has been given to the negative sampling procedure. Meanwhile, there is a rich body of work in deep metric learning showing that semi-hard negative mining improves the efficacy of triplet losses. Inspired by this, we hope to bring harder negative sampling to modern contrastive learning.
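
To make the metric-learning notion concrete, semi-hard mining keeps negatives that score below the positive but within a margin of it. The sketch below is a generic heuristic in the spirit of triplet-loss semi-hard mining, not the conditional "ring" sampler introduced in this paper; the function name is ours.

```python
import numpy as np

def semi_hard_negatives(anchor, positive, negatives, margin=0.2):
    """Indices of semi-hard negatives for one anchor.

    Inputs are embedding vectors (rows for `negatives`); similarity is
    the dot product. Keep negatives less similar than the positive, but
    within `margin` of it: hard enough to be informative, not pathological.
    """
    pos_sim = float(anchor @ positive)
    neg_sims = negatives @ anchor
    mask = (neg_sims < pos_sim) & (neg_sims > pos_sim - margin)
    return np.flatnonzero(mask)
```

Negatives far below the positive are "too easy" and excluded, as are negatives scoring above the positive, which tend to be false positives.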
+
+Naively choosing difficult negatives may yield an objective that no longer bounds mutual information, removing a theoretical connection that is core to contrastive learning and has been shown to be important for downstream performance (Tian et al., 2020). In this paper, we present a new estimator of mutual information based on the popular noise-contrastive estimator (NCE) that supports sampling negatives from conditional distributions. We summarize our contributions below:
+
+1. We prove that our Conditional-NCE (CNCE) objective lower-bounds mutual information. Further, we show that although CNCE is a looser bound than NCE, it has lower variance. This motivates its value for representation learning.
+2. We use CNCE to generalize contrastive algorithms that utilize a memory structure, like IR, CMC, and MoCo, to sample semi-hard negatives in just a few lines of code and with minimal compute overhead.
+3. We find that the naive strategy of sampling hard negatives throughout training can be detrimental. We then show that slowly introducing harder negatives yields good performance.
+4. On four image classification benchmarks, we find improvements of $2 - 5\%$ absolute points. We also find consistent improvements (1) when transferring features to new image datasets and (2) in object detection, instance segmentation, and key-point detection.
+
+# 2 BACKGROUND
+
+We focus on exemplar-based contrastive objectives, where examples are compared to one another to learn a representation. Many of these objectives (Hjelm et al., 2018; Wu et al., 2018; Bachman et al., 2019; Tian et al., 2019; Chen et al., 2020a) are equivalent to NCE (Oord et al., 2018; Poole et al., 2019), a popular lower bound on the mutual information, denoted by $\mathcal{I}$ , between two random variables. This connection is well-known and stated in several works (Chen et al., 2020a; Tschannen et al., 2019; Tian et al., 2020; Wu et al., 2020). To review, recall:
+
+$$
+\mathcal{I}(X;Y) \geq \mathcal{I}_{\mathrm{NCE}}(X;Y) = \mathbf{E}_{x_i, y_i \sim p(x,y)} \, \mathbf{E}_{y_{1:k} \sim p(y)} \left[ \log \frac{e^{f_{\theta}(x_i, y_i)}}{\frac{1}{k+1} \sum_{j \in \{i, 1:k\}} e^{f_{\theta}(x_i, y_j)}} \right] \tag{1}
+$$
+
+where $x, y$ are realizations of two random variables, $X$ and $Y$ , and $f_{\theta}: X \times Y \to \mathbf{R}$ is a similarity function. We call $y_{1:k} = \{y_1, \ldots, y_k\}$ negative examples, being other realizations of $Y$ .
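To make Eq. 1 concrete, here is a small numerical sketch of the quantity inside the NCE bound for a single positive pair; the toy similarity functions below are our own illustrations, not from the paper.

```python
import numpy as np

def nce_estimate(f, x, y_pos, y_negs):
    """One Monte Carlo term of the I_NCE bound (Eq. 1): the log-ratio of the
    positive pair's score to the average score over all k+1 candidates.
    f plays the role of f_theta; y_negs are k draws from the marginal p(y)."""
    k = len(y_negs)
    sims = np.array([f(x, y_pos)] + [f(x, yj) for yj in y_negs])
    # log e^{f(x, y_pos)} - log( (1/(k+1)) * sum_j e^{f(x, y_j)} )
    return sims[0] - (np.log(np.sum(np.exp(sims))) - np.log(k + 1))

# an uninformative (constant) similarity yields an estimate of exactly 0
print(nce_estimate(lambda x, y: 0.0, 0.0, 0.0, [1.0, 2.0, 3.0]))  # -> 0.0
```

Note that the estimate is capped at $\log(k+1)$, which is one way to see why $k$ must be large to estimate a large mutual information.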
+
+Suppose the two random variables in Eq. 1 are both transformations of a common random variable $X$ . Let $\mathcal{T}$ be a family of transformations where each member $t$ is a composition of cropping, color jittering, gaussian blurring, among others (Wu et al., 2018; Bachman et al., 2019; Chen et al., 2020a). We call a transformed input $t(x)$ a "view" of $x$ . Let $p(t)$ denote a distribution over $\mathcal{T}$ , a common choice being uniform. Next, introduce an encoder $g_{\theta}: X \to \mathbf{S}^{n-1}$ that maps an example to an $L_2$ -normalized representation. Suppose we have a dataset $\mathcal{D} = \{x_i\}_{i=1}^n$ of $n$ values for $X$ sampled from a distribution $p(x)$ . Then, the contrastive objective for the $i$ -th example is:
+
+$$
+\mathcal{L}(x_i) = \mathbf{E}_{t, t', t_{1:k} \sim p(t)} \, \mathbf{E}_{x_{1:k} \sim p(x)} \left[ \log \frac{e^{g_{\theta}(t(x_i))^T g_{\theta}(t'(x_i)) / \tau}}{\frac{1}{k+1} \sum_{j \in \{i, 1:k\}} e^{g_{\theta}(t(x_i))^T g_{\theta}(t_j(x_j)) / \tau}} \right] \tag{2}
+$$
+
+where $\tau$ is a temperature. The equivalence of Eq. 2 to NCE is immediate given $f_{\theta}(x,y) = g_{\theta}(x)^{T}g_{\theta}(y) / \tau$ . Maximizing Eq. 2 chooses an embedding that pulls two views of the same example together while pushing two views of distinct examples apart. A drawback to this framework is that the number of negatives $k$ must be large to faithfully approximate the true partition function. In practice, $k$ is limited by memory. Recent innovations have focused on tackling this challenge:
+
+Instance Discrimination (Wu et al., 2018), or IR, introduces a memory bank $M$ of $n$ entries to cache embeddings of each example throughout training. Since each example is observed once per epoch, the memory bank stores in its $i$ -th entry the embedding of the view of the $i$ -th example observed in the previous epoch. Representations stored in the memory bank are removed from the automatic differentiation tape, but in return, we can choose a large $k$ by querying $M$ . A follow-up work, Contrastive Multiview Coding (Tian et al., 2019), or CMC, decomposes an image into two color modalities. Then, CMC sums two IR losses where the memory banks for each modality are swapped.
+
+Momentum Contrast (He et al., 2019), or MoCo, observed that the representations stored in the memory bank grow stale, since possibly thousands of optimization steps pass before updating an entry again. So, MoCo makes two important changes. First, it replaces the memory bank with a first-in first-out (FIFO) queue of size $k$ . During each minibatch, representations are cached into the queue while the most stale ones are removed. Second, MoCo introduces a second (momentum) encoder $g_{\theta'}'$ as a copy of $g_{\theta}$ . The primary encoder $g_{\theta}$ is used to embed one view of $x_{i}$ whereas the momentum encoder is used to embed the other. Again, gradients are not propagated to $g_{\theta'}'$ .
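The two MoCo ingredients above, the FIFO queue of negatives and the momentum-updated encoder, can be sketched in a few lines. This is a toy illustration with names of our choosing, not the reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class NegativeQueue:
    """Toy FIFO queue of k cached embeddings used as negatives."""
    def __init__(self, k, dim):
        self.queue = rng.standard_normal((k, dim))

    def enqueue_dequeue(self, batch_emb):
        # newest minibatch embeddings in, most stale entries out
        self.queue = np.concatenate([self.queue[len(batch_emb):], batch_emb])

def momentum_update(theta_k, theta_q, m=0.999):
    """Momentum encoder weights track a slow moving average of the primary
    encoder's weights; no gradients ever flow into theta_k."""
    return m * theta_k + (1.0 - m) * theta_q

q = NegativeQueue(k=8, dim=4)
q.enqueue_dequeue(np.ones((2, 4)))  # queue size stays fixed at k
```

The large momentum (e.g. 0.999) is what keeps the queued embeddings approximately consistent with one another despite being computed at different training steps.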
+
+In this work, we focus on contrastive algorithms that utilize a memory structure that we repurpose in Sec. 4 to efficiently sample hard negatives from. In Sec. 7, we briefly discuss generalizations to contrastive algorithms that do not use a memory structure.
+
+# 3 CONDITIONAL NOISE CONTRASTIVE ESTIMATION
+
+In NCE, the negative examples are sampled i.i.d. from the marginal distribution, $p(y)$ . Indeed, the existing proof that NCE lower bounds mutual information (Poole et al., 2019) assumes this to be true. However, choosing negatives in this manner may not be the best choice for learning a good representation. For instance, prior work in metric learning has shown the effectiveness of semi-hard negative mining in optimizing triplet losses (Wu et al., 2017; Yuan et al., 2017; Schroff et al., 2015). We similarly wish to exploit choosing semi-hard negatives in NCE conditional on the current example but to do so in a manner that preserves the lower bound on mutual information.
+
+In presenting the theory, we assume two random variables $X$ and $Y$ , deriving a general bound; we will return to the contrastive learning setting in Sec. 4. To begin, in Eq. 1 suppose we sample negatives from a distribution $q(y|x)$ conditional on a value $x \sim p(x)$ rather than the marginal $p(y)$ , which is independent of $X$ . Ideally, we would like to freely choose $q(y|x)$ to be any distribution but not all choices preserve a bound on mutual information. This does not, however, imply that we can only sample negatives from $p(y)$ (Poole et al., 2019; Oord et al., 2018). One of our contributions is to formally define a family of conditional distributions $\mathcal{Q}$ such that for any $q(y|x) \in \mathcal{Q}$ , drawing negative examples from $q$ defines an estimator that lower bounds $\mathcal{I}(X;Y)$ . We call this new bound the Conditional Noise Contrastive Estimator, or CNCE. We first prove CNCE to be a bound:
+
+Theorem 3.1. (The Conditional NCE bound) Define $d$ -dimensional random variables $X$ and $Y$ by a joint distribution $p(x,y)$ and let $Y_{1}, \ldots, Y_{k}$ be i.i.d. copies of $Y$ with the marginal distribution $p(y)$ . Fix any function $f: X \times Y \to \mathbf{R}$ , any realization $x$ of $X$ , and let $c = \mathbf{E}_{y \sim p(y)}[e^{f(x,y)}]$ , the expected exponentiated similarity. Pick a set $B \subset \mathbf{R}$ strictly lower-bounded by $c$ . Assume the pulled-back set $S_{B} = \{y \mid e^{f(x,y)} \in B\}$ has non-zero probability (i.e. $p(S_{B}) > 0$ ). For $A_{1}, \ldots, A_{k}$ in the Borel $\sigma$ -algebra over $\mathbf{R}^d$ , define $A = A_{1} \times \ldots \times A_{k}$ and let
+
+$$
+q\left( (Y_1, \dots, Y_k) \in A \mid X = x \right) = \prod_{j=1}^{k} p\left( A_j \mid S_B \right).
+$$
+
+Let
+$$
+\mathcal{I}_{\mathrm{CNCE}}(X;Y) = \mathbf{E}_{x, y \sim p(x,y)} \, \mathbf{E}_{y_1, \dots, y_k \sim q(y_1, \dots, y_k \mid x)} \left[ \log \frac{e^{f(x,y)}}{\frac{1}{k} \sum_{j=1}^{k} e^{f(x,y_j)}} \right].
+$$
+Then $\mathcal{I}_{\mathrm{CNCE}} \leq \mathcal{I}_{\mathrm{NCE}}$ .
+
+Proof. To show $\mathcal{I}_{\mathrm{CNCE}} \leq \mathcal{I}_{\mathrm{NCE}}$ , we show $\mathbf{E}_p[\log \sum_{j=1}^k e^{f(x,y_j)}] < \mathbf{E}_q[\log \sum_{j=1}^k e^{f(x,y_j)}]$ . To see this, apply Jensen's to the left-hand side of $\log \mathbf{E}_p[\sum_{j=1}^k e^{f(x,y_j)}] < \log \sum_{j=1}^k e^{f(x,y_j)}$ , which holds if $y_j \in S_B$ for $j = 1, \ldots, k$ , and then take the expectation $\mathbf{E}_q$ of both sides. The last inequality holds by monotonicity of log, linearity of expectation, and the fact that $\mathbf{E}_p[e^{f(x,y_j)}] \leq e^{f(x,y_j)}$ .
+
+Theorem Intuition. For intuition: although using arbitrary negative distributions in NCE does not bound mutual information, we have found a restricted class of distributions $\mathcal{Q}$ where every member $q(y|x)$ "subsets the support" of the distribution $p(y)$ . That is, given some fixed value $x$ , we have defined $q(y|x)$ to constrain the support of $p(y)$ to a set $S_B$ whose members are "close" to $x$ as measured by the similarity function $f$ . For every element $y \in S_B$ , the distribution $q(y|x)$ wants to assign it the same probability as under $p(y)$ . However, as $q(y|x)$ is not defined outside of $S_B$ , we must renormalize it to sum to one (hence $p(A_j|S_B) = \frac{p(A_j \cap S_B)}{p(S_B)}$ ). Intuitively, $q(y|x)$ cannot change $p(y)$ too much: it must redistribute mass proportionally. The primary distinction, then, is the smaller
+
+Figure 1: Visual illustration of Ring Discrimination across three settings: (a) IR, CMC, MoCo; (b) Ring; (c) Annealed Ring. Black: view of example $x_{i}$ ; gray: second view of $x_{i}$ ; red: negative samples; gray area: distribution $q(x|t(x_i))$ . In subfigure (c), the negative samples are annealed to be closer to $t(x_{i})$ through training. In other words, the support of $q$ shrinks.
+
+support of $q(y|x)$ , which forces samples from it to be harder for $f$ to distinguish from $x$ . Thm. 3.1 shows that substituting $q(y|x)$ for $p(y)$ in NCE still bounds mutual information.
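In code, sampling from such a $q(y|x)$ amounts to rejecting candidates outside $S_B$ and renormalizing. The sketch below, our own toy version over a finite candidate pool, illustrates the "subset the support" construction from the Theorem Intuition.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cnce_negatives(sims, lo, hi, k):
    """Draw k negatives from q(y|x). `sims` holds f(x, y_j) for a finite pool
    of candidates y_j (stand-ins for draws from p(y)). The support S_B keeps
    only candidates whose exponentiated similarity lies in B = [lo, hi]; the
    probability mass is then redistributed proportionally, i.e. for a uniform
    pool we simply sample uniformly from the survivors."""
    scores = np.exp(sims)
    support = np.flatnonzero((scores >= lo) & (scores <= hi))  # indices in S_B
    assert support.size > 0, "B must be chosen so that p(S_B) > 0"
    return rng.choice(support, size=k, replace=True)
```

With `lo` above the mean exponentiated similarity $c$, every sampled negative is harder than an average draw from $p(y)$, which is exactly the condition Thm. 3.1 requires.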
+
+Theorem Example 3.1. We give a concrete example for the choice of $B$ that will be used in Sec. 4. For any realization $x$ , suppose we define two similarity thresholds $\omega_{\ell}, \omega_{u} \in \mathbf{R}$ where $c < \omega_{\ell} < \omega_{u}$ . Then, choose $B = [\omega_{\ell}, \omega_{u}]$ . In this case, the set $S_{B}$ , which defines the support of the distribution $q(y|x)$ , contains values of $y$ that are neither "too close" to $x$ nor "too far". In contrastive learning, we might pick these similarity thresholds to vary the difficulty of negative samples.
+
+Interestingly, Thm. 3.1 states that CNCE is looser than NCE, which raises the question: when is a looser bound useful? In reply, we show that while CNCE is a more biased estimator than NCE, in return it has lower variance. Intuitively, because $q(y|x)$ is the result of restricting $p(y)$ to a smaller support, samples from $q(y|x)$ have less opportunity to deviate, hence lower variance. Formally:
+
+Theorem 3.2. (Bias and Variance Tradeoff) Pick any $x, y \sim p(x, y)$ . Fix the distribution $q(y_{1:k} | x)$ as stated in Theorem 3.1. Define a new random variable $Z(y_{1:k}) = \log \left(\frac{e^{f(x, y)}}{\frac{1}{k} \sum_{j=1}^{k} e^{f(x, y_j)}}\right)$ representing the normalized similarity. By Theorem 3.1 the expressions $\mathbf{E}_{p(y_{1:k})}[Z]$ and $\mathbf{E}_{q(y_{1:k} | x)}[Z]$ are estimators for $\mathcal{I}(X; Y)$ . Suppose that the set $S_B$ is chosen to ensure $\operatorname{Var}_{q(y_{1:k} | x)}[Z] \leq \operatorname{Var}_{\tilde{q}(y_{1:k} | x)}[Z]$ , where $\tilde{q}(A) = p(A|$ complement of $S_B)$ . That is, we assume the variance of the normalized similarity when using $y_{1:k} \in S_B$ is smaller than when using $y_{1:k} \notin S_B$ . Then $\operatorname{Bias}_{p(y_{1:k})}(Z) \leq \operatorname{Bias}_{q(y_{1:k} | x)}(Z)$ and $\operatorname{Var}_{p(y_{1:k})}(Z) \geq \operatorname{Var}_{q(y_{1:k} | x)}(Z)$ .
+
+The proof can be found in Sec. A.2. Thm. 3.2 provides one answer to our question of looseness. In stochastic optimization, a lower variance objective may lead to better local optima. For representation learning, using CNCE to sample more difficult negatives may (1) encourage the representation to distinguish fine-grained features useful in transfer tasks, and (2) provide less noisy gradients.
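The bias/variance tradeoff of Thm. 3.2 can be checked with a quick Monte Carlo simulation. The toy setup below is our own, not from the paper: take $f(x,y) = y$ with $y \sim \mathcal{N}(0,1)$, fix the positive score, and compare $Z$ when negatives come from $p$ versus from $p$ restricted to a support $S_B$.

```python
import numpy as np

rng = np.random.default_rng(0)

def z_samples(draw_negs, n_trials=4000, k=16, f_pos=0.0):
    """Z = log( e^{f(x,y)} / ((1/k) sum_j e^{f(x,y_j)}) ) with x, y fixed, so
    Z varies only through the k negative scores returned by draw_negs."""
    zs = np.empty(n_trials)
    for i in range(n_trials):
        zs[i] = f_pos - np.log(np.mean(np.exp(draw_negs(k))))
    return zs

p_negs = lambda k: rng.standard_normal(k)      # negatives from the marginal p

def q_negs(k):                                 # p restricted to S_B = [1, 2]
    out = []
    while len(out) < k:                        # rejection-sample the truncation
        y = rng.standard_normal()
        if 1.0 <= y <= 2.0:
            out.append(y)
    return np.array(out)

z_p, z_q = z_samples(p_negs), z_samples(q_negs)
# CNCE's estimate is more biased (its mean is pulled down by the harder
# negatives) but has visibly lower variance than the NCE estimate
```

Because the restricted negatives have bounded exponentiated scores, the log-mean-exp term fluctuates far less from trial to trial, which is the variance reduction the theorem formalizes.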
+
+# 4 RING DISCRIMINATION
+
+We have shown CNCE to be a new bound on mutual information that uses hard negative samples. Now we wish to apply CNCE to contrastive learning, where the two random variables are again transformations of a single variable $X$ . In this setting, for a fixed $x_{i} \sim p(x)$ , the CNCE distribution is written as $q(x|t(x_i))$ for some transform $t \in \mathcal{T}$ . Samples $x \sim q(x|t(x_i))$ will be such that the exponentiated distance, $\exp \{g_{\theta}(t(x_i))^T g_{\theta}(t'(x))\}$ , is at least a minimum value $c$ . As in Example 3.1, we will choose $B = [\omega_{\ell}, \omega_{u}]$ , a closed interval in $\mathbf{R}$ defined by two thresholds.
+
+Picking thresholds. We pick the thresholds conditioned on the $i$ -th example in the dataset, hence each example has a different set $B$ . We first describe how to pick the upper threshold $\omega_{u}$ . Given the $i$ -th example $x_{i}$ , we pick a number $u \in [0,100]$ representing an upper "percentile". We consider each example $x$ in the dataset to be in the support $S_{B}$ if and only if the (exponentiated) distance between the embeddings of $x_{i}$ and $x$ , or $\exp \{g_{\theta}(t(x_i))^T g_{\theta}(t'(x))\}$ , is below the $u$ -th percentile over all $x \in \mathcal{D}$ . Call this maximum distance $\omega_{u}$ . In other words, we construct $q(x|t(x_i))$ such that we ignore examples from the dataset whose embedding dot product with the embedding of $x_{i}$ is above $\omega_{u}$ . (Note that $u = 100$ recovers NCE.) For a small enough choice of $u$ , the upper similarity threshold $\omega_{u}$ will be greater than $c$ (defined in Thm. 3.1 as the expected distance with respect to $p(x)$ ), and the samples from $q(x|t(x_i))$ will be harder negatives to discriminate from $x_{i}$ .
+
+In picking the lower threshold $\omega_{\ell}$ , one could choose it to be 0, so that $B = [0,\omega_u]$ . However, picking the closest examples to $t(x_i)$ as its negative examples may be inappropriate, as these examples might be better suited as positive views rather than negatives (Zhuang et al., 2019; Xie et al., 2020). As an extreme case, if the same image is included in the dataset twice, we would not like to select it as a negative example for itself. Furthermore, choosing negatives "too close" to the current instance may result in representations that pick up on fine-grained details only, ignoring larger semantic concepts. This suggests removing from $q(x|t(x_i))$ examples we consider "too close" to $x_i$ . To do this, we pick a lower percentile $0 \leq \ell < u$ . For each example $x \in \mathcal{D}$ , we say it is in $S_B$ if $\exp\{g_\theta(t(x_i))^T g_\theta(t'(x))\}$ is below $\omega_u$ and also above the $\ell$ -th percentile of all distances with respect to $\mathcal{D}$ . Call this minimum distance $\omega_\ell$ . Fig. 2 visualizes this whole procedure.
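The percentile bookkeeping above can be sketched as follows (a toy NumPy version with names of our own choosing): candidates are ranked from closest to farthest, as in Alg. 1, and the ranks between the $\ell$-th and $u$-th percentile form the ring.

```python
import numpy as np

def ring_support(dps, l=1.0, u=10.0):
    """Map the percentile pair (l, u) to the set of valid negatives S_B.
    `dps` holds the dot products between the current view's embedding and
    every candidate's embedding; candidates are ranked from closest (largest
    dot product) to farthest, and we keep the ranks falling between the l-th
    and u-th percentile, which implicitly defines omega_l and omega_u."""
    order = np.argsort(-dps)          # closest candidates first
    lo = int(l / 100 * len(dps))      # rank corresponding to omega_l
    hi = int(u / 100 * len(dps))      # rank corresponding to omega_u
    return order[lo:hi]               # indices forming the "ring"
```

Setting `l=0, u=100` keeps every candidate, recovering plain NCE sampling.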
+
+Figure 2: Defining the CNCE distribution $q(x|t(x_i))$ in five steps: (1) pick two percentiles; (2) compute distances; (3) sort distances; (4) compute thresholds; (5) define distribution $q$ . By choosing a lower and upper percentile $\ell$ and $u$ , we implicitly define similarity thresholds $\omega_{\ell}$ and $\omega_{u}$ to construct a support of valid negative examples, $S_B$ , which in turn, defines the distribution $q(x|t(x_i))$ .
+
+# Algorithm 1: MoCoRing
+
+# g_q, g_k: encoder networks
+# m: momentum; t: temperature
+# u: ring upper percentile
+# l: ring lower percentile
+tx1=aug(x) # two random augmentations
+tx2=aug(x)
+emb1=norm(g_q(tx1))
+emb2=norm(g_k(tx2)).detach()
+dps=sum(emb1*emb2)/t # dot products
+# sort from closest to farthest
+all_dps=sort(emb1@queue.T/t)
+# find indices of thresholds
+ix_l=l*len(queue)
+ix_u=u*len(queue)
+ring_dps=all_dps[:,ix_l:ix_u]
+# nonparametric softmax
+loss=-dps+logsumexp(ring_dps)
+loss.backward()
+step(g_q.params)
+# moco updates
+g_k.params=m*g_k.params+(1-m)*g_q.params
+enqueue(queue,emb2); dequeue(queue)
+# threshold updates
+anneal(w_l); anneal(w_u)
+
+Ring Discrimination. Having defined $\omega_{\ell}$ and $\omega_{u}$ , we have a practical method of choosing $B$ , and thus $S_B$ , to define $q(x|t(x_i))$ for the $i$ -th example. Intuitively, we construct a conditional distribution over negative examples that are (1) not too easy, since their representations are fairly similar to that of $x_i$ , and (2) not too hard, since we remove the "closest" instances to $x_i$ from $S_B$ . We call this algorithm Ring Discrimination, or Ring, inspired by the shape of the negative set (see Fig. 1).
+
+Ring can be easily added to popular contrastive algorithms. For IR and CMC, this amounts to simply sampling entries in the memory bank that fall within the $\ell$ -th to $u$ -th percentile of all distances to the current example view (in representation space). Similarly, for MoCo, we sample from a subset of the queue (chosen to be in the $\ell$ -th to $u$ -th percentile), preserving the FIFO ordering. In our experiments, we refer to these as IRing, CMCRing, and MoCoRing, respectively. Alg. 1 shows PyTorch-like pseudocode for MoCoRing. One of the strengths of this approach is its simplicity: the algorithm requires only a few lines of code on top of existing implementations.
+
+Annealing Policy. Naively using hard negatives can collapse to a poor representation, especially if we choose the upper threshold, $\omega_{u}$ , to be very small early in training. At the start of training, the encoder $g_{\theta}$ is randomly initialized and cannot guarantee that elements in the $\ell$ -th to $u$ -th percentile are properly calibrated: if the representations are near random, choosing negatives that are close in embedding distance may detrimentally exclude those examples that are "actually" close. This could lock in poor local minima. To avoid this, we propose an annealing policy that reduces $\omega_{u}$ (and thus the size of the support $S_{B}$ ) throughout training. Early in training we choose $\omega_{u}$ to be large. Over many epochs, we slowly decrease $\omega_{u}$ toward $\omega_{\ell}$ , thereby selecting more difficult negatives. We explored several annealing policies and found a linear schedule to be simple and well-performing (see Sec. G). In our experiments, annealing is shown to be crucial: being too aggressive with negatives early in training produced representations that performed poorly on downstream tasks.
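A linear schedule for the upper percentile might look like the following sketch (our own; the schedules actually explored are in Sec. G). It starts at $u = 100$, which recovers NCE, and shrinks toward the final value.

```python
def anneal_upper_percentile(epoch, total_epochs, u_start=100.0, u_final=10.0):
    """Linearly interpolate the upper percentile from u_start down to u_final,
    shrinking the support S_B (and hence hardening negatives) over training."""
    frac = min(epoch / total_epochs, 1.0)
    return u_start + frac * (u_final - u_start)
```

Early epochs thus see near-uniform negatives while late epochs see only the ring of hard ones.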
+
+# 5 EXPERIMENTS
+
+We explore our method applied to IR, CMC, and MoCo in four commonly used visual datasets. As in prior work (Wu et al., 2018; Zhuang et al., 2019; He et al., 2019; Misra & Maaten, 2020; Henaff et al., 2019; Kolesnikov et al., 2019; Donahue & Simonyan, 2019; Bachman et al., 2019; Tian et al., 2019; Chen et al., 2020a), we evaluate each method by linear classification on frozen embeddings. That is, we optimize a contrastive objective on a pretraining dataset to learn a representation; then, using a transfer dataset, we fit logistic regression on the representations only. A better representation would contain more "object-centric" information, thereby achieving a higher classification score.
+
+Training Details. We pick the upper percentile $u = 10$ and the lower percentile $\ell = 1$ , although we anneal $u$ starting from 100. We resize input images to be 256 by 256 pixels, and normalize them using the dataset mean and standard deviation. The temperature $\tau$ is set to 0.07. We use a composition of a 224 by 224-pixel random crop, random color jittering, random horizontal flip, and random grayscale conversion as our augmentation family $\mathcal{T}$ . We use a ResNet-18 encoder with an output dimension of 128. For CMC, we use two ResNet-18 encoders, doubling the number of parameters. For linear classification, we treat the pre-pool output (size $512 \times 7 \times 7$ ) after the last convolutional layer as the input to the logistic regression. Note that this setup is equivalent to using a linear projection head (Chen et al., 2020a,b). In pretraining, we use SGD with learning rate 0.03, momentum 0.9, and weight decay 1e-4 for 300 epochs with batch size 256 (128 for CMC). We drop the learning rate twice by a factor of 10, on epochs 200 and 250. In transfer, we use SGD with learning rate 0.01, momentum 0.9, and no weight decay for 100 epochs without dropping the learning rate. These hyperparameters were taken from Wu et al. (2018) and used throughout Table 1 for a consistent comparison. We found normalizing hyperparameters to be important for a fair comparison, as many competing algorithms use different hyperparameters. For a state-of-the-art comparison, see Table 5.
+
+| Model | (a) CIFAR10 | (b) CIFAR100 | (c) STL10 | (d) ImageNet |
| IR | 81.2 | 60.4 | 61.4 | 43.2 |
| IRing | 83.9 (+2.7) | 62.3 (+1.9) | 64.3 (+2.9) | 48.4 (+5.2) |
| CMC* | 85.6 | 56.0 | 63.8 | 48.2 |
| CMCRing* | 87.6 (+2.0) | 56.0 (+0.0) | 66.4 (+2.6) | 50.4 (+2.2) |
| MoCo | 83.1 | 59.1 | 63.8 | 52.8 |
| MoCoRing | 86.1 (+3.0) | 61.5 (+2.4) | 65.2 (+1.4) | 54.6 (+1.8) |
| LA | 83.9 | 61.4 | 63.0 | 48.0 |
+
+The results for CIFAR10, CIFAR100, STL10, and ImageNet are in Table 1. Overall, IR, CMC, and MoCo all benefit from using more difficult negatives, as shown by $2 - 5\%$ absolute points of improvement across the four datasets. While we find different contrastive objectives to perform best in each dataset, the improvements from Ring are consistent: the Ring variant outperforms the base for every model and every dataset. We also include as a baseline Local Aggregation, or LA (Zhuang et al., 2019), a popular contrastive algorithm (see Sec. H) that implicitly uses hard negatives without annealing. We find our methods outperform LA by up to $4\%$ absolute.
+
+Table 1: Comparison of contrastive algorithms on four image domains. Superscript (*) indicates models that use twice as many parameters as others e.g. CMC has "L" and "ab" encoders.
+
+| Model | Linear Eval. |
| IR | 81.2 |
| IRing | 83.9 |
| IRing (No Anneal) | 81.4 |
| IRing (ℓ = 0) | 82.1 |
+
+(a) CIFAR10
+
+| Model | Linear Eval. |
| IR | 43.2 |
| IRing | 48.4 |
| IRing (No Anneal) | 41.3 |
| IRing (ℓ = 0) | 47.3 |
+
+(b) ImageNet
+
+Table 2: Lesioning the effects of annealing and choice of $\ell$ .
+
+Ablations: Annealing and Lower Boundary. Having found good performance with Ring Discrimination, we want to assess the importance of the individual components that comprise Ring. We focus on the annealing policy and the exclusion of very close negatives from $S_B$ . Concretely, we measure the transfer accuracy of (1) IRing without annealing and (2) IRing with a lower percentile $\ell = 0$ , thereby excluding no close negatives; that is, $S_B$ contains all examples in the dataset with representation similarity less than $\omega_u$ (a "ball" instead of a "ring"). Table 2 compares these ablations to IR and full IRing on CIFAR10 and ImageNet classification transfer. We observe that both ablations result in worse transfer accuracy, with proper annealing being especially important to prevent convergence to bad minima. We also find that even with $\ell = 0$ , IRing outperforms IR, suggesting that removing negatives that are "too close" and those that are "too far" both contribute to the improved representation quality.
+
+Transferring Features. Thus far we have only evaluated on examples from the training distribution. As the goal of unsupervised learning is to capture general representations, we are also interested in their performance on new, unseen distributions. To gauge this, we use the same linear classification paradigm on a suite of image datasets from the "Meta-Dataset" collection (Triantafillou et al., 2019) that have been used before in the contrastive literature (Chen et al., 2020a). All representations were trained on CIFAR10. For each transfer dataset, we compute the mean and variance from a training split to normalize input images, which we found important for generalization to new visual domains.
+
+| Model | Aircraft | CUBirds | DTD | Fungi | MNIST | FashionMNIST | TrafficSign | VGGFlower | MSCOCO |
| IR | 40.9 | 17.9 | 39.2 | 2.7 | 96.9 | 91.7 | 97.1 | 68.1 | 52.4 |
| IRing | 40.6 (-0.3) | 17.9 (+0.0) | 39.5 (+0.3) | 3.4 (+0.7) | 97.8 (+0.9) | 91.6 (-0.1) | 98.8 (+1.7) | 68.5 (+0.4) | 52.5 (+0.1) |
| MoCo | 41.5 | 18.0 | 39.7 | 3.1 | 96.9 | 90.9 | 97.3 | 64.5 | 52.0 |
| MoCoRing | 41.6(+0.1) | 18.6 (+0.6) | 39.5 (-0.2) | 3.6 (+0.5) | 97.9 (+1.0) | 91.3 (+0.4) | 99.3 (+2.0) | 69.1 (+4.6) | 52.6 (+0.6) |
| CMC | 40.1 | 15.8 | 38.3 | 4.3 | 97.5 | 91.5 | 94.6 | 67.1 | 51.4 |
| CMCRing | 40.8 (+0.7) | 16.8 (+1.0) | 40.6 (+2.3) | 4.2 (-0.1) | 97.9 (+0.4) | 92.1 (+0.6) | 97.1 (+2.5) | 69.1 (+2.0) | 52.1 (+0.7) |
| LA | 41.3 | 17.8 | 39.0 | 2.3 | 97.2 | 92.3 | 98.2 | 66.9 | 52.3 |
+
+We find in Table 3 that the Ring models are competitive with their non-Ring analogues, with increases in transfer accuracy of 0.5 to $2\%$ absolute. Most notable are the TrafficSign and VGGFlower datasets, on which Ring models surpass the others by a larger margin. We also observe that IRing largely outperforms LA. This suggests the features learned with more difficult negatives are not only useful for the training distribution but may also be transferable to many visual datasets.
+
+Table 3: Transferring CIFAR10 embeddings to various image distributions.
+
+More Downstream Tasks. Object classification is a popular transfer task, but we want our learned representations to capture holistic knowledge about the contents of an image. We must thus evaluate performance on transfer tasks, such as detection and segmentation, that require different kinds of visual information. We study four additional downstream tasks: object detection on COCO (Lin et al., 2014) and Pascal VOC'07 (Everingham et al., 2010), instance segmentation on COCO, and keypoint detection on COCO. In all cases, we employ embeddings trained on ImageNet with a ResNet-18 encoder. We base these experiments on those found in He et al. (2019) with the same hyperparameters. However, we use a smaller backbone (ResNet-18 versus ResNet-50) and we freeze its parameters instead of finetuning them. We adapt code from Detectron2 (Wu et al., 2019).
+
+ | COCO: Object Detection | COCO: Inst. Segmentation | COCO: Keypoint Detection | VOC: Object Detection |
| Arch. | Mask R-CNN, R18-FPN, 1x schedule | Keypoint R-CNN, R18-FPN | Faster R-CNN, R18-C4 |
| Model | APbb | APbb50 | APbb75 | APmk | APmk50 | APmk75 | APkp | APkp50 | APkp75 | APbb | APbb50 | APbb75 |
| IR | 8.6 | 19.0 | 6.6 | 8.5 | 17.4 | 7.4 | 34.6 | 63.0 | 32.9 | 5.5 | 14.5 | 3.3 |
| IRing | 10.9 | 22.9 | 8.7 | 11.0 | 20.9 | 9.6 | 37.2 | 66.1 | 35.7 | 7.6 | 20.3 | 4.4 |
| MoCo | 6.0 | 14.3 | 4.0 | 10.8 | 21.4 | 9.7 | 37.6 | 66.5 | 36.9 | 7.3 | 17.9 | 4.1 |
| MoCoRing | 9.4 | 20.3 | 7.6 | 12.0 | 22.9 | 10.8 | 38.7 | 67.7 | 37.9 | 8.0 | 22.1 | 4.8 |
| LA | 10.2 | 22.0 | 8.1 | 10.0 | 20.3 | 9.0 | 36.3 | 65.3 | 35.1 | 7.6 | 20.0 | 4.3 |
+
+Table 4: Evaluation of ImageNet representations using four visual transfer tasks.
+
+We find IRing outperforms IR by around 2.3 points in COCO object detection, 2.5 points in COCO Instance Segmentation, 2.6 points in COCO keypoint detection, and 2.1 points in VOC object detection. Similarly, MoCoRing finds consistent improvements of 1-3 points over MoCo on the four tasks. Future work can investigate orthogonal directions of using larger encoders (e.g. ResNet-50) and finetuning ResNet parameters for these individual tasks.
+
+# 6 RELATED WORK
+
+Several of the ideas in Ring Discrimination relate to existing work. Below, we explore these connections, and at the same time, place our work in a fast-paced and growing field.
+
+Hard negative mining. While it has not been deeply explored in modern contrastive learning, negative mining has a rich line of research in the metric learning community. Deep metric learning utilizes triplet objectives of the form $\mathcal{L}_{\mathrm{triplet}} = d(g_{\theta}(x_i),g_{\theta}(x_+)) - d(g_{\theta}(x_i),g_{\theta}(x_-)) + \alpha$ where $d$ is a distance function (e.g. L2 distance), $x_{+}$ and $x_{-}$ are a positive and negative example, respectively, relative to $x_{i}$ , the current instance, and $\alpha \in \mathbf{R}^{+}$ is a margin. In this context, several approaches pick semi-hard negatives: Schroff et al. (2015) treats the furthest (in $\mathrm{L}_2$ distance) example in the same minibatch as $x_{i}$ as its negative, whereas Oh Song et al. (2016) weights each example in the minibatch by its distance to $g_{\theta}(x_i)$ , yielding a continuous version of Schroff et al. (2015). More sophisticated negative sampling strategies developed over time. In Wu et al. (2017), the authors pick negatives from a fixed normal distribution that is shown to approximate $\mathrm{L}_2$ -normalized embeddings in high dimensions; weighting by this distribution samples more diverse negatives. Similarly, HDC (Yuan et al., 2017) simultaneously optimizes a triplet loss using many levels of "hardness" in negatives, again improving diversity. Although triplet objectives paved the way for modern NCE-based objectives, the focus on negative mining has largely been overlooked. Ring Discrimination, being inspired by the deep metric learning literature, reminds us that negative sampling is still an effective way of learning stronger representations in the new NCE framework. As such, an important contribution was to do so while retaining the theoretical properties of NCE, namely its relation to mutual information. This, to the best of our knowledge, is novel, as negative mining in the metric learning literature was not characterized in terms of information theory.
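For reference, the triplet objective discussed above can be written in a few lines. Our sketch uses squared L2 distance and the conventional hinge at zero (both are our choices for illustration):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Margin-based triplet loss: push the anchor-negative distance at least
    alpha beyond the anchor-positive distance; hinged at zero as is standard."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + alpha, 0.0)
```

The mining strategies above differ only in how `negative` is chosen for each anchor, which is exactly the degree of freedom CNCE formalizes for NCE-style objectives.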
+
+That being said, there are some cases of negative mining in the contrastive literature. In CPC (Oord et al., 2018), the authors explore using negatives from the same speaker versus from mixed speakers in audio applications, the former of which can be interpreted as being more difficult. A recent paper, InterCLR (Xie et al., 2020), also finds that using "semi-hard negatives" is beneficial to contrastive learning, whereas negatives that are too difficult or too easy produce worse representations. Where InterCLR uses a margin-based approach to sample negatives, we explore a wider family of negative distributions and show that annealing offers a simple solution to choosing between easy and hard negatives. Further, as InterCLR's negative sampling procedure is a special case of CNCE, we provide theory grounding these approaches in information theory. Finally, a separate line of work in contrastive learning explores using neighboring examples (in embedding space) as "positive" views of the instance (Zhuang et al., 2019; Xie et al., 2020; Asano et al., 2019; Caron et al., 2020; Li et al., 2020). That is, it finds a set $\{x_{j}\}$ such that we consider $x_{j} = t(x_{i})$ for the current instance $x_{i}$ . While this does not deal with negatives explicitly, it shares similarities with our approach by employing other examples in the contrastive objective to learn better representations. In the Appendix, we discuss how one of these algorithms, LA (Zhuang et al., 2019), implicitly uses hard negatives, and expand the Ring family with ideas inspired by it.
+
+Contrastive learning. We focused primarily on comparing Ring Discrimination to three recent, high-performing contrastive algorithms, but the field contains much more. The basic idea of learning representations to be invariant under a family of transformations is an old one, having been explored with self-organizing maps (Becker & Hinton, 1992) and dimensionality reduction (Hadsell et al., 2006). Before IR, the idea of instance discrimination was studied (Dosovitskiy et al., 2014; Wang & Gupta, 2015) among many pretext objectives such as position prediction (Doersch et al., 2015), color prediction (Zhang et al., 2016), multi-task objectives (Doersch & Zisserman, 2017), rotation prediction (Gidaris et al., 2018; Chen et al., 2019), and many others (Pathak et al., 2017). As we have mentioned, one of the primary challenges to instance discrimination is making such a large softmax objective tractable. Moving from a parametric (Dosovitskiy et al., 2014) to a nonparametric softmax reduced issues with vanishing gradients, shifting the challenge to efficient negative sampling. The memory bank approach (Wu et al., 2018) is a simple and memory-efficient solution, quickly adopted by the research community (Zhuang et al., 2019; Tian et al., 2019; He et al., 2019; Chen et al., 2020b; Misra & Maaten, 2020). With enough computational resources, it is now also possible to use examples in a large minibatch as negatives of one another (Ye et al., 2019; Ji et al., 2019; Chen et al., 2020a). In our work, we focus on hard negative mining in the context of a memory bank or queue due to its computational efficiency. However, the same principles should be applicable to batch-based methods (e.g. SimCLR): assuming a large enough batch size, for each example, we only use a subset of the minibatch as negatives, as in Ring. Finally, more recent work (Grill et al., 2020) removes negatives altogether, but is speculated to implicitly use negative samples via batch normalization (Ioffe & Szegedy, 2015); we leave a more thorough understanding of negatives in this setting to future work.
+
+# 7 DISCUSSION
+
+Computational cost of Ring. To measure the cost of CNCE, we compare the per-epoch cost of training MoCo/IR versus MoCoRing/IRing on four image datasets. Table 5a reports the average
+
+| Model | CIFAR10 (sec.) | ImageNet (min.) |
+| --- | --- | --- |
+| IR | 136.0 ± 4 | 43.9 ± 1 |
+| IRing | 141.1 ± 5 (1.1x) | 51.0 ± 1 (1.2x) |
+| MoCo | 318.4 ± 16 | 61.1 ± 1 |
+| MoCoRing | 383.4 ± 12 (1.2x) | 64.9 ± 1 (1.1x) |
+
+(a) Average Epoch Cost
+
+| Transfer Task | MoCo | MoCoRing |
+| --- | --- | --- |
+| LibriSpeech Spk. ID (Panayotov et al., 2015) | 95.5 | 96.6 (+1.1) |
+| AudioMNIST (Becker et al., 2018) | 87.4 | 91.3 (+3.9) |
+| Google Commands (Warden, 2018) | 38.5 | 41.4 (+2.9) |
+| Fluent Actions (Lugosch et al., 2019) | 36.5 | 36.8 (+0.3) |
+| Fluent Objects (Lugosch et al., 2019) | 41.9 | 44.1 (+2.2) |
+| Fluent Locations (Lugosch et al., 2019) | 60.9 | 63.9 (+3.0) |
+
+(c) Speech Extension
+
+| Dataset | Arch. | MoCo-v2 | MoCoRing-v2 |
+| --- | --- | --- | --- |
+| CIFAR10 | ResNet-18 | 90.1 | 91.9 (+1.8) |
+| CIFAR10 | ResNet-50 | 92.4 | 94.1 (+1.6) |
+| CIFAR100 | ResNet-18 | 65.1 | 67.3 (+2.2) |
+| STL10 | ResNet-18 | 74.8 | 76.7 (+1.9) |
+(b) Comparison with SOTA
+
+| Dataset | SimCLR | SimCLRing |
+| --- | --- | --- |
+| CIFAR10 | 88.9 | 89.3 (+0.4) |
+| CIFAR100 | 63.5 | 64.1 (+0.6) |
+| STL10 | 71.2 | 72.1 (+0.9) |
+
+(d) SimCLRing Extension
+
+Table 5: Average epoch cost of Ring models (a), comparison with the state of the art (b), and generalizations of Ring to a new modality (c) and a batch-based algorithm (d).
+
+cost over 200 epochs. We observe that Ring models cost no more than 1.5 times as much as standard contrastive algorithms, amounting to a difference of 3 to 7 minutes per epoch on ImageNet and 10 to 60 seconds per epoch on the three other datasets. In the context of deep learning, we do not find these cost increases substantial. In particular, since (1) the memory structure in IR and MoCo allows us to store and reuse embeddings and (2) gradients are not propagated through the memory structure, the additional compute of Ring amounts to one matrix multiplication, which is cheap on modern hardware. We used a single Titan X GPU with 8 CPU workers, and PyTorch Lightning (Falcon et al., 2019).
+
+Comparison with the state-of-the-art. Unlike the experiments in Sec. 5, we now choose the optimal hyperparameters for MoCo-v2 (Chen et al., 2020b) separately for CIFAR10, CIFAR100, and STL10. Table 5b compares MoCo-v2 and its CNCE equivalent, MoCoRing-v2, under linear evaluation. We observe improvements comparable to those in Table 1 even with optimal hyperparameters. Notably, the gains generalize to ResNet-50 encoders. Refer to Sec. F for hyperparameter choices.
+
+Generalization to other modalities. Thus far, we have focused on visual representation learning, although the same ideas apply to other domains. To exemplify the generality of CNCE, we apply MoCoRing to learning speech representations. Table 5c reports linear evaluation on six transfer tasks, ranging from speaker identification to speech recognition to intent prediction. We find gains of up to 4 percentage points across 4 datasets and 6 transfer tasks, with an average of 2.2 absolute percentage points. See Sec. E for experimental details.
+
+Batch-based negative sampling. In Ring, we assumed a memory structure that stores embeddings, which led to an efficient procedure for mining semi-hard negatives. However, another flavor of contrastive algorithms removes the memory structure entirely, using the examples in the minibatch as negatives of one another. Here, we motivate a possible extension of Ring to SimCLR, and leave a more careful study to future work. In SimCLR, we are given a minibatch $M$ of examples. To sample hard negatives, as before, pick $\ell$ and $u$ as lower and upper percentiles. For every example $x_{i}$ in the minibatch, only consider as negatives the subset $\{x \in M : \exp\{g_{\theta}(t(x_i))^{\top} g_{\theta}(t'(x))\}$ falls between the $\ell$-th and $u$-th percentiles over $M\}$. This can be efficiently implemented as a matrix operation with an element-wise mask. Thus, we ignore gradient signal from examples too far from or too close to $x_{i}$ in representation space. As before, we anneal $u$ from 100 to 10 and set $\ell = 1$. Table 5d reports consistent but moderate gains over SimCLR, showing promise but room for improvement in future research.
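The element-wise masking described above can be sketched in a few lines. The function below is a hypothetical NumPy rendering (the function name and defaults are ours, and cosine similarity stands in for $\exp\{g_{\theta}(\cdot)^{\top} g_{\theta}(\cdot)\}$, which preserves the percentile ordering since $\exp$ is monotone):

```python
import numpy as np

def ring_negative_mask(z, lower_pct=1.0, upper_pct=90.0):
    """Per-example boolean mask over a minibatch of embeddings z (N, D):
    keep only candidates whose similarity to the anchor falls between the
    lower and upper percentiles of that anchor's row (a sketch of the
    SimCLRing idea; names and defaults are illustrative)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T                         # (N, N) pairwise similarities
    np.fill_diagonal(sim, -np.inf)        # an example is not its own negative

    # Row-wise percentile thresholds, computed over off-diagonal entries only.
    n = len(z)
    off = sim[~np.eye(n, dtype=bool)].reshape(n, n - 1)
    lo = np.percentile(off, lower_pct, axis=1, keepdims=True)
    hi = np.percentile(off, upper_pct, axis=1, keepdims=True)

    # Too-easy and too-hard candidates are masked out and get no gradient.
    return (sim >= lo) & (sim <= hi)
```

Annealing $u$ then simply corresponds to lowering `upper_pct` over the course of training.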
+
+# 8 CONCLUDING REMARKS
+
+To conclude, we presented a family of mutual information estimators that approximate the partition function using samples from a class of conditional distributions. We proved several theoretical statements about this family, showing a bound on mutual information and a tradeoff between bias and variance. Then, we applied these estimators as objectives in contrastive representation learning. In doing so, we found that our representations consistently outperform existing approaches across a spectrum of contrastive objectives, data distributions, and transfer tasks. Overall, we hope our work encourages more exploration of negative sampling amid the recent growth of contrastive learning.
+
+# ACKNOWLEDGMENTS
+
+This research was supported by the Office of Naval Research grant ONR MURI N00014-16-1-2007. MW is supported by the Stanford Interdisciplinary Graduate Fellowship as the Karr Family Fellow.
+
+# REFERENCES
+
+Yuki Markus Asano, Christian Rupprecht, and Andrea Vedaldi. Self-labelling via simultaneous clustering and representation learning. arXiv preprint arXiv:1911.05371, 2019.
+Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems, pp. 15535-15545, 2019.
+Sören Becker, Marcel Ackermann, Sebastian Lapuschkin, Klaus-Robert Müller, and Wojciech Samek. Interpreting and explaining deep neural networks for classification of audio signals. arXiv preprint arXiv:1807.03418, 2018.
+Suzanna Becker and Geoffrey E Hinton. Self-organizing neural network that discovers surfaces in random-dot stereograms. Nature, 355(6356):161-163, 1992.
+Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. arXiv preprint arXiv:2006.09882, 2020.
+Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, and Neil Houlsby. Self-supervised gans via auxiliary rotation loss. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12154-12163, 2019.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020a.
+Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020b.
+Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2051-2060, 2017.
+Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422-1430, 2015.
+Jeff Donahue and Karen Simonyan. Large scale adversarial representation learning. In Advances in Neural Information Processing Systems, pp. 10542-10552, 2019.
+Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in neural information processing systems, pp. 766-774, 2014.
+Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The PASCAL visual object classes (VOC) challenge. International journal of computer vision, 88(2): 303-338, 2010.
+William Falcon et al. PyTorch Lightning. GitHub, https://github.com/williamFalcon/pytorch-lightning, 2019.
+Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018.
+Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. arXiv preprint arXiv:2006.07733, 2020.
+
+Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pp. 1735-1742. IEEE, 2006.
+Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 2961-2969, 2017.
+Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722, 2019.
+Olivier J Henaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, SM Eslami, and Aaron van den Oord. Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272, 2019.
+R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018.
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
+Xu Ji, João F Henriques, and Andrea Vedaldi. Invariant information clustering for unsupervised image classification and segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9865-9874, 2019.
+Alexander Kolesnikov, Xiaohua Zhai, and Lucas Beyer. Revisiting self-supervised visual representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1920-1929, 2019.
+Junnan Li, Pan Zhou, Caiming Xiong, Richard Socher, and Steven CH Hoi. Prototypical contrastive learning of unsupervised representations. arXiv preprint arXiv:2005.04966, 2020.
+Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pp. 740-755. Springer, 2014.
+Loren Lugosch, Mirco Ravanelli, Patrick Ignoto, Vikrant Singh Tomar, and Yoshua Bengio. Speech model pre-training for end-to-end spoken language understanding. arXiv preprint arXiv:1904.03670, 2019.
+Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6707-6717, 2020.
+Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4004-4012, 2016.
+Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
+Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206-5210. IEEE, 2015.
+Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, and Bharath Hariharan. Learning features by watching objects move. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2701-2710, 2017.
+Ben Poole, Sherjil Ozair, Aaron van den Oord, Alexander A Alemi, and George Tucker. On variational bounds of mutual information. arXiv preprint arXiv:1905.06922, 2019.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015.
+
+Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 815-823, 2015.
+Alex Tamkin, Mike Wu, and Noah Goodman. Viewmaker networks: Learning views for unsupervised representation learning. arXiv preprint arXiv:2010.07432, 2020.
+Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.
+Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. What makes for good views for contrastive learning. arXiv preprint arXiv:2005.10243, 2020.
+Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, et al. Meta-dataset: A dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096, 2019.
+Michael Tschannen, Josip Djolonga, Paul K Rubenstein, Sylvain Gelly, and Mario Lucic. On mutual information maximization for representation learning. arXiv preprint arXiv:1907.13625, 2019.
+Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In Proceedings of the IEEE international conference on computer vision, pp. 2794-2802, 2015.
+Pete Warden. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv preprint arXiv:1804.03209, 2018.
+Chao-Yuan Wu, R Manmatha, Alexander J Smola, and Philipp Krahenbuhl. Sampling matters in deep embedding learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2840-2848, 2017.
+Mike Wu, Chengxu Zhuang, Milan Mosse, Daniel Yamins, and Noah Goodman. On mutual information in contrastive learning for visual representations. arXiv preprint arXiv:2005.13149, 2020.
+Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github.com/facebookresearch/detectron2, 2019.
+Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via nonparametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3733-3742, 2018.
+Jiahao Xie, Xiaohang Zhan, Ziwei Liu, Yew Soon Ong, and Chen Change Loy. Delving into interimage invariance for unsupervised visual representations. arXiv preprint arXiv:2008.11702, 2020.
+Mang Ye, Xu Zhang, Pong C Yuen, and Shih-Fu Chang. Unsupervised embedding learning via invariant and spreading instance feature. In Proceedings of the IEEE Conference on computer vision and pattern recognition, pp. 6210-6219, 2019.
+Yuhui Yuan, Kuiyuan Yang, and Chao Zhang. Hard-aware deeply cascaded embedding. In Proceedings of the IEEE international conference on computer vision, pp. 814-823, 2017.
+Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European conference on computer vision, pp. 649-666. Springer, 2016.
+Chengxu Zhuang, Alex Lin Zhai, and Daniel Yamins. Local aggregation for unsupervised learning of visual embeddings. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6002-6012, 2019.
\ No newline at end of file
diff --git a/conformationguidedmolecularrepresentationwithhamiltonianneuralnetworks/full.md b/conformationguidedmolecularrepresentationwithhamiltonianneuralnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e2d4ca06e83f36ae4bdef2532ba95272219a91ab
--- /dev/null
+++ b/conformationguidedmolecularrepresentationwithhamiltonianneuralnetworks/full.md
@@ -0,0 +1,341 @@
+# CONFORMATION-GUIDED MOLECULAR REPRESENTATION WITH HAMILTONIAN NEURAL NETWORKS
+
+Ziyao Li $^{1}$ , Shuwen Yang $^{2*}$ , Guojie Song $^{2\dagger}$ , Lingsheng Cai $^{2}$
+
+1Center for Data Science, Peking University, Beijing, China
+$^{2}$ Key Laboratory of Machine Perception and Intelligence (MOE), Peking University, Beijing, China
+
+{leeeezy,swyang,gjsong,cailingsheng}@pku.edu.cn
+
+# ABSTRACT
+
+Well-designed molecular representations (fingerprints) are vital to combining medicinal chemistry and deep learning. Whereas incorporating the 3D geometry of molecules (i.e. conformations) in their representations seems beneficial, current 3D algorithms are still in their infancy. In this paper, we propose a novel molecular representation algorithm which preserves 3D conformations of molecules with a Molecular Hamiltonian Network (HamNet). In HamNet, implicit positions and momenta of atoms in a molecule interact in the Hamiltonian Engine following the discretized Hamiltonian equations. These implicit coordinates are supervised with real conformations via translation- and rotation-invariant losses, and are further used as inputs to the Fingerprint Generator, a message-passing neural network. Experiments show that the Hamiltonian Engine can well preserve molecular conformations, and that the fingerprints generated by HamNet achieve state-of-the-art performance on MoleculeNet, a standard molecular machine learning benchmark.
+
+# 1 INTRODUCTION
+
+The past several years have seen a prevalence of the intersection between medicinal chemistry and deep learning. Remarkable progress has been made in various applications on small molecules, ranging from generation (Jin et al., 2018; You et al., 2018) and property prediction (Gilmer et al., 2017; Cho & Choi, 2019; Klicpera et al., 2020) to protein-ligand interaction analysis (Lim et al., 2019; Wang et al., 2020), yet all these tasks rely on well-designed numerical representations, or fingerprints, of molecules. These fingerprints encode molecular structures and serve as the inputs to downstream tasks. Early work on molecular fingerprints (Morgan, 1965; Rogers & Hahn, 2010) started by encoding the two-dimensional (2D) structures of molecules, i.e. the chemical bonds between atoms, often stored as atom-bond graphs. More recently, a trend of incorporating molecular geometry into the representations has arisen (Axen et al., 2017; Cho & Choi, 2019).
+
+Molecular geometry refers to the conformation (the three-dimensional (3D) coordinates of atoms) of a molecule, which contains chemical information of wide interest such as bond lengths and angles, and is thus vital for determining the physical, chemical, and biomedical properties of the molecule. Whereas incorporating the 3D geometry of molecules indeed seems beneficial, 3D fingerprints, especially in combination with deep learning, are still in their infancy. The use of 3D fingerprints is limited by pragmatic considerations including i) calculation costs, ii) translational and rotational invariances, and iii) the availability of conformations, especially for generated ligand candidates in drug discovery tasks. Furthermore, compared with current 3D algorithms, mature 2D fingerprints (Rogers & Hahn, 2010; Gilmer et al., 2017; Xiong et al., 2020) are generally more popular, with equivalent or even better performance in practice. For example, as a 2D approach, Attentive Fingerprints (Attentive FP) (Xiong et al., 2020) have become the de facto state-of-the-art approach.
+
+To push the boundaries of leveraging 3D geometry in molecular fingerprints, we propose HamNet (Molecular Hamiltonian Networks). HamNet simulates the process of molecular dynamics (MD) to model the conformations of small molecules, based on which final fingerprints are calculated similarly to (Xiong et al., 2020). To address the potential lack of labeled conformations, HamNet does not regard molecular conformations as always-available inputs. Instead, a Hamiltonian Engine is designed to reconstruct known conformations and generalize to unknown ones. Encoded from atom features, implicit positions and momenta of atoms interact in the engine following the discretized Hamiltonian equations with learnable energy and dissipation functions. Final positions are supervised with real conformations, and further used as inputs to a Message-Passing Neural Network (MPNN) (Gilmer et al., 2017) to generate the fingerprints. Novel loss functions with translational and rotational invariances are proposed to supervise the Hamiltonian Engine, and the architecture of the Fingerprint Generator is designed to better incorporate the output quantities from the engine.
+
+We show via conformation-reconstruction experiments that the proposed Hamiltonian Engine predicts molecular conformations better than conventional geometric approaches as well as common neural structures (MPNNs). We also evaluate HamNet on several datasets with different targets collected in a standard molecular machine learning benchmark, MoleculeNet (Wu et al., 2017), all following the same experimental setups. HamNet demonstrates state-of-the-art performance, outperforming baselines including both 2D and 3D approaches.
+
+# 2 PRELIMINARIES
+
+Notations. Given a molecule with $n$ atoms, we use $v_{i}$ to denote the features of atom $i$, and $e_{ij}$ those of the chemical bond between $i$ and $j$ (if it exists). Bold upper-case letters denote matrices, and bold lower-case letters, vectors. All vectors in this paper are column vectors, and $\cdot^{\top}$ stands for the transpose operation. We use $\oplus$ for the concatenation of vectors. The position and momentum of atom $i$ are denoted as $q_{i}$ and $p_{i}$, and the set of all atom positions in a molecule is denoted as $Q = (q_{1},\dots ,q_{n})^{\top}$. $\mathcal{N}(v)$ refers to the neighborhood of node $v$ in some graph.
+
+Graph Convolutional Networks (GCNs). Given an attributed graph $G = (V, \mathbf{A}, \mathbf{X})$ , where $V = \{v_{1}, \dots, v_{n}\}$ is the set of vertices, $\mathbf{A} \in \mathbb{R}^{n \times n}$ the (weighted) adjacency matrix, and $\mathbf{X} \in \mathbb{R}^{n \times d}$ the attribute matrix, GCNs (Kipf & Welling, 2017) calculate the hidden states of graph nodes as
+
+$$
+\mathbf{GCN}^{(L)}(\boldsymbol{X}) \equiv \boldsymbol{H}^{(L)}, \quad \boldsymbol{H}^{(l+1)} = \sigma\left(\hat{\boldsymbol{A}} \boldsymbol{H}^{(l)} \boldsymbol{W}^{(l)}\right), \quad \boldsymbol{H}^{(0)} = \boldsymbol{X}, \quad l = 0, 1, \dots, L-1. \tag{1}
+$$
+
+Here, $\pmb{H} = (h_{v_1},\dots ,h_{v_n})^\top$ are hidden representations of nodes, $\hat{A} = D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ is the normalized adjacency matrix, $\pmb{D}$ with $D_{ii} = \sum_{j}\pmb{A}_{ij}$ is the diagonal matrix of node degrees, and the $\pmb{W}^{(l)}$ are network parameters.
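To ground Equation 1, here is a hypothetical NumPy sketch of the GCN forward pass (the function name and the choice of ReLU for $\sigma$ are ours; note that the popular Kipf & Welling variant normalizes $A + I$ rather than $A$ as written here):

```python
import numpy as np

def gcn_forward(A, X, weights):
    """Stacked GCN layers from Equation 1:
    H^{l+1} = sigma(A_hat H^l W^l) with A_hat = D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt   # symmetric normalization
    H = X
    for W in weights:                     # one W per layer
        H = np.maximum(A_hat @ H @ W, 0.0)  # sigma = ReLU (illustrative)
    return H
```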
+
+Message-Passing Neural Networks (MPNNs). MPNN (Gilmer et al., 2017) introduced a general framework for Graph Neural Networks (GNNs). In the $t$-th layer of a typical MPNN, messages $(\boldsymbol{m}^t)$ are generated between two connected nodes $(i,j)$ based on the hidden representations of both nodes $(\boldsymbol{h}^t)$ and of the edge in-between. After that, nodes receive the messages and update their own hidden representations. A readout function is then defined over the final node representations $(\boldsymbol{h}^T)$ to derive graph-level representations. In formula, the calculation follows
+
+$$
+\boldsymbol{m}_{v}^{t+1} = \sum_{w \in \mathcal{N}(v)} M_{t}\left(\boldsymbol{h}_{v}^{t}, \boldsymbol{h}_{w}^{t}, \boldsymbol{e}_{v,w}\right), \quad \boldsymbol{h}_{v}^{t+1} = U_{t}\left(\boldsymbol{h}_{v}^{t}, \boldsymbol{m}_{v}^{t+1}\right), \quad \hat{\boldsymbol{y}} = R\left(\left\{\boldsymbol{h}_{v}^{T} \mid v \in V\right\}\right), \tag{2}
+$$
+
+where $M_{t}, U_{t}, R$ are the message, update and readout functions.
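A minimal skeleton of Equation 2 might look as follows, with the message, update, and readout functions supplied as callables; all names here are illustrative, not from the paper:

```python
import numpy as np

def mpnn_forward(nodes, edges, h, steps, M, U, R):
    """Skeleton of Equation 2. `edges[v]` lists (neighbor, edge_feature)
    pairs for node v; M, U, R are the message, update, and readout
    callables; h maps nodes to their hidden vectors."""
    for _ in range(steps):
        # Aggregate messages from each node's neighborhood, then update.
        msgs = {v: sum(M(h[v], h[w], e) for (w, e) in edges[v]) for v in nodes}
        h = {v: U(h[v], msgs[v]) for v in nodes}
    return R(h)  # graph-level readout over final node representations
```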
+
+Hamiltonian Equations. The Hamiltonian equations depict Newton's laws of motion in the form of first-order differential equations. Considering a system of $n$ particles with positions $(q_1, \dots, q_n)$ and momenta $(p_1, \dots, p_n)$, the dynamics of the system follow
+
+$$
+\dot{\boldsymbol{q}}_{i} \equiv \frac{\mathrm{d}\boldsymbol{q}_{i}}{\mathrm{d}t} = \frac{\partial \mathcal{H}}{\partial \boldsymbol{p}_{i}}, \quad \dot{\boldsymbol{p}}_{i} \equiv \frac{\mathrm{d}\boldsymbol{p}_{i}}{\mathrm{d}t} = -\frac{\partial \mathcal{H}}{\partial \boldsymbol{q}_{i}}, \tag{3}
+$$
+
+where $\mathcal{H}$ is the Hamiltonian of the system and equals the total system energy. Generally, the Hamiltonian is composed of the kinetic energies of all particles and the potential energy, as
+
+$$
+\mathcal{H} = \sum_{i=1}^{n} \mathcal{T}_{i} + \mathcal{U}. \tag{4}
+$$
+
+
+Figure 1: The overall structure of HamNet.
+
+Meanwhile, if dissipation exists in the system, the Hamiltonian Equations shall be adapted as
+
+$$
+\dot{\boldsymbol{q}}_{i} = \frac{\partial \mathcal{H}}{\partial \boldsymbol{p}_{i}}, \quad \dot{\boldsymbol{p}}_{i} = -\left(\frac{\partial \mathcal{H}}{\partial \boldsymbol{q}_{i}} + \frac{\partial \Phi}{\partial \dot{\boldsymbol{q}}_{i}}\right) = -\left(\frac{\partial \mathcal{H}}{\partial \boldsymbol{q}_{i}} + m_{i} \frac{\partial \Phi}{\partial \boldsymbol{p}_{i}}\right), \tag{5}
+$$
+
+where $m_{i}$ is the mass of the particle and $\Phi$ is the dissipation function, which describes how the system's energy is dissipated into the outer environment.
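To make Equations 3-5 concrete, the hypothetical snippet below integrates a damped 1-D harmonic oscillator ($\mathcal{H} = p^2/2m + kq^2/2$, Rayleigh dissipation $\Phi = c\dot{q}^2/2$, so $\partial\Phi/\partial\dot{q} = c\,p/m$) with a semi-implicit Euler scheme; all constants and names are illustrative:

```python
import numpy as np

def simulate(q, p, m=1.0, k=1.0, c=0.1, eta=0.01, steps=1000):
    """Semi-implicit Euler integration of Equations 3/5 for a damped
    1-D harmonic oscillator; returns final state and energy history."""
    energies = []
    for _ in range(steps):
        p = p - eta * (k * q + (c / m) * p)  # dp/dt = -(dH/dq + dPhi/dqdot)
        q = q + eta * p / m                  # dq/dt = dH/dp
        energies.append(p * p / (2 * m) + k * q * q / 2)
    return q, p, energies
```

With $c > 0$ the tracked energy decays, while with $c = 0$ it stays approximately constant, which matches the role the dissipation function plays in Equation 5.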
+
+# 3 METHOD: HAMNET
+
+Figure 1 shows the overall architecture of HamNet. HamNet consists of two modules: i) a Hamiltonian Engine, where molecular conformations are reconstructed; and ii) a Fingerprint Generator, where final fingerprints are generated from atom & bond features and outputs from the Hamiltonian Engine.
+
+# 3.1 HAMILTONIAN ENGINE
+
+Discretized Hamiltonian Equations. The Hamiltonian Engine is designed to simulate the physical interactions between atoms in a molecule. To correctly incorporate the laws of motion, we discretize the Hamiltonian equations with dissipation (Equation 5) and model the energy and dissipation with learnable functions. At the $t$-th step $(t = 0,1,\dots ,T)$ in the engine, for atom $i$,
+
+$$
+\boldsymbol{q}_{i}^{(t+1)} = \boldsymbol{q}_{i}^{(t)} + \eta \frac{\partial \mathcal{H}^{(t)}}{\partial \boldsymbol{p}_{i}^{(t)}}, \quad \boldsymbol{p}_{i}^{(t+1)} = \boldsymbol{p}_{i}^{(t)} - \eta \left(\frac{\partial \mathcal{H}^{(t)}}{\partial \boldsymbol{q}_{i}^{(t)}} + m_{i} \frac{\partial \Phi^{(t)}}{\partial \boldsymbol{p}_{i}^{(t)}}\right). \tag{6}
+$$
+
+Here, $\eta$ is a step-size hyperparameter which controls the granularity of the discretization, $m_{i}$ is the (normalized) mass of atom $i$, and $\mathcal{H}^{(t)}, \Phi^{(t)}$ are the learnable Hamiltonian and dissipation functions of $\pmb{q}^{(t)}$ and $\pmb{p}^{(t)}$ (superscripts omitted below):
+
+$$
+\mathcal{H} = \sum_{i=1}^{n} T_{i}\left(\boldsymbol{p}_{i}\right) + U\left(\boldsymbol{q}_{1}, \dots, \boldsymbol{q}_{n}\right), \quad \Phi = \sum_{i=1}^{n} \phi\left(\boldsymbol{p}_{i}\right). \tag{7}
+$$
+
+It should be noted that, to improve the expressive power of the network, we extend the concept of positions and momenta to implicit ones in a generalized $d_{f}$-dimensional space, i.e. $\pmb{q},\pmb{p} \in \mathbb{R}^{d_f}, d_f > 3$. We will discuss how to supervise these implicit quantities, and will show the influence of the dimensionality in the experimental results. Nonetheless, we parameterize the energy and dissipation functions in a physically inspired manner: for the atom-wise kinetic energy, we generalize the definition of kinetic energy $(T = \frac{p^2}{2m})$ as a quadratic form of the implicit momenta; for the atom-wise dissipation, we adapt the Rayleigh dissipation function $(\Phi = \frac{1}{2}\sum_{i,j=1}^{n}c_{ij}\dot{\mathbf{q}}_i\dot{\mathbf{q}}_j)$ to the generalized space. In formula, we have
+
+$$
+T_{i}\left(\boldsymbol{p}_{i}\right) = \frac{\boldsymbol{p}_{i}^{\top} \boldsymbol{W}_{T}^{\top} \boldsymbol{W}_{T} \boldsymbol{p}_{i}}{2 m_{i}}, \quad \phi_{i}\left(\boldsymbol{p}_{i}\right) = \frac{\boldsymbol{p}_{i}^{\top} \boldsymbol{W}_{\phi}^{\top} \boldsymbol{W}_{\phi} \boldsymbol{p}_{i}}{2 m_{i}^{2}}. \tag{8}
+$$
+
+For the potential energy, we simplify the Lennard-Jones potential $(U(r) = \epsilon (r^{-12} - r^{-6}))$ with parameterized distances $r_{ij}$ in the generalized space, that is,
+
+$$
+U = \sum_{i \neq j} u_{ij}, \quad u_{ij} = r_{ij}^{-4} - r_{ij}^{-2}, \quad r_{ij}^{2} \equiv r(\boldsymbol{q}_{i}, \boldsymbol{q}_{j}) = \left(\boldsymbol{q}_{i} - \boldsymbol{q}_{j}\right)^{\top} \boldsymbol{W}_{U}^{\top} \boldsymbol{W}_{U} \left(\boldsymbol{q}_{i} - \boldsymbol{q}_{j}\right). \tag{9}
+$$
+
+In both Equation 8 and Equation 9, the $\boldsymbol{W}$ matrices are network parameters. For the masses of the atoms, we empirically normalize the relative atomic mass $M_{i}$ of atom $i$ with $m_{i} = M_{i} / 50$ . One should also note that the parameters are shared across all steps of the engine, and thus the number of parameters in the engine depends only on $d_{f}$ and is independent of the depth $T$ .
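As a concrete illustration of Equations 7-9, the sketch below evaluates the parameterized kinetic, dissipation, and potential terms with NumPy. The weights ($\boldsymbol{W}_T$, $\boldsymbol{W}_\phi$, $\boldsymbol{W}_U$) are random stand-ins for learned parameters, and the sizes are hypothetical; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_f = 5, 8                        # atoms, generalized dimensionality (illustrative)

# random stand-ins for the learned parameter matrices W_T, W_phi, W_U
W_T = rng.normal(size=(d_f, d_f))
W_phi = rng.normal(size=(d_f, d_f))
W_U = rng.normal(size=(d_f, d_f))

q = rng.normal(size=(n, d_f))        # implicit positions
p = rng.normal(size=(n, d_f))        # implicit momenta
m = rng.uniform(1, 2, size=n)        # normalized atomic masses (M_i / 50)

def kinetic(p, m):
    """Eq. 8: T_i = p_i^T W_T^T W_T p_i / (2 m_i), for all atoms at once."""
    Wp = p @ W_T.T                   # rows are W_T p_i
    return np.einsum("id,id->i", Wp, Wp) / (2 * m)

def dissipation(p, m):
    """Eq. 8: phi_i = p_i^T W_phi^T W_phi p_i / (2 m_i^2)."""
    Wp = p @ W_phi.T
    return np.einsum("id,id->i", Wp, Wp) / (2 * m ** 2)

def potential(q):
    """Eq. 9: U = sum_{i != j} (r_ij^-4 - r_ij^-2) with the learned metric W_U^T W_U."""
    diff = q[:, None, :] - q[None, :, :]        # (n, n, d_f) pairwise differences
    Wd = diff @ W_U.T
    r2 = np.einsum("ijd,ijd->ij", Wd, Wd)       # squared generalized distances
    np.fill_diagonal(r2, 1.0)                   # avoid division by zero on the diagonal
    u = r2 ** -2 - r2 ** -1
    np.fill_diagonal(u, 0.0)                    # i != j only
    return u.sum()

H = kinetic(p, m).sum() + potential(q)          # Hamiltonian, Eq. 7
Phi = dissipation(p, m).sum()                   # dissipation, Eq. 7
```

Because both quadratic forms use a Gram matrix $\boldsymbol{W}^\top\boldsymbol{W}$, the kinetic and dissipation terms are nonnegative by construction.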
+
+Initial positions and momenta. Graph-based neural networks are used to initialize these spatial quantities. Molecules are essentially graphs whose bonds can be of different types (single, double, triple, and aromatic). Instead of using channel-wise GCNs that assign different parameters to different edge types (Schlichtkrull et al., 2018), we calculate a bond-strength adjacency for each molecule: the strength of a bond depends on the atom-bond-atom tuple, i.e.
+
+$$
+\boldsymbol{A}_{ij} = \operatorname{sigmoid}\left(\mathbf{MLP}\left(\boldsymbol{v}_{i} \oplus \boldsymbol{e}_{ij} \oplus \boldsymbol{v}_{j}\right)\right) \text{ if the bond exists}, \quad \boldsymbol{A}_{ij} = 0 \text{ otherwise}. \tag{10}
+$$
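A minimal NumPy sketch of the bond-strength adjacency in Equation 10. The one-hidden-layer MLP, the feature dimensions (39/10, taken from Section 4.1), and the toy chain molecule are all hypothetical stand-ins, not the authors' network.

```python
import numpy as np

rng = np.random.default_rng(1)
dv, de, dh = 39, 10, 16              # atom/bond feature dims, hidden size (assumed)

# hypothetical one-hidden-layer MLP standing in for the MLP of Eq. 10
W1 = rng.normal(size=(2 * dv + de, dh)) * 0.1
w2 = rng.normal(size=dh) * 0.1

def bond_strength(v_i, e_ij, v_j):
    """sigmoid(MLP(v_i ⊕ e_ij ⊕ v_j)) in (0, 1)."""
    h = np.tanh(np.concatenate([v_i, e_ij, v_j]) @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ w2)))

n = 4
V = rng.normal(size=(n, dv))                    # atom features
bonds = [(0, 1), (1, 2), (2, 3)]                # a toy 4-atom chain
E = {b: rng.normal(size=de) for b in bonds}     # bond features

A = np.zeros((n, n))
for (i, j), e in E.items():
    # one strength per bond, written symmetrically; A_ij = 0 for non-bonded pairs
    A[i, j] = A[j, i] = bond_strength(V[i], e, V[j])
```

Writing the same scalar to both `A[i, j]` and `A[j, i]` keeps the adjacency symmetric even though the concatenation in the MLP input is order-dependent.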
+
+With the adjacency so defined, we first encode the atoms with vanilla GCNs (Equation 1) and concatenate all hidden layers following the DenseNet scheme (Huang et al., 2017). Sufficiently deep GCNs do capture entire molecular structures; however, atoms with identical chemical environments cannot be distinguished, for example, the carbon atoms in a benzene ring. This may not be a problem in conventional MPNNs, but atoms with coinciding positions are unacceptable in physics, and hence in the Hamiltonian Engine. Therefore, we apply an LSTM over the GCN outputs to generate unique positions for the atoms. The order in which the atoms enter the LSTM follows the SMILES representation (Weininger, 1988) of the molecule, which lists the atoms in a specific topological order. Formally, the initial positions and momenta of atoms are calculated as
+
+$$
+\tilde{\boldsymbol{q}}_{i} = \bigoplus_{l=0}^{L} \boldsymbol{f}_{i}^{(l)}, \quad \boldsymbol{q}_{i}^{(0)} = \mathbf{LSTM}_{s_{i}}\left(\tilde{\boldsymbol{q}}_{s_{1}}, \tilde{\boldsymbol{q}}_{s_{2}}, \dots, \tilde{\boldsymbol{q}}_{s_{n}}\right) \tag{11}
+$$
+
+$$
+\tilde{\boldsymbol{p}}_{i} = \bigoplus_{l=0}^{L} \boldsymbol{g}_{i}^{(l)}, \quad \boldsymbol{p}_{i}^{(0)} = \mathbf{LSTM}_{s_{i}}\left(\tilde{\boldsymbol{p}}_{s_{1}}, \tilde{\boldsymbol{p}}_{s_{2}}, \dots, \tilde{\boldsymbol{p}}_{s_{n}}\right) \tag{12}
+$$
+
+where $\pmb{f}_i^{(l)}, \pmb{g}_i^{(l)}$ are the hidden representations of atom $i$ in the $l$ -th GCN layer, and $s_k$ is the atom at the $k$ -th position in the SMILES. As unique orders are assigned to the atoms, their initial positions are also unique.
+
+Conformation preserving. After the dynamical process in the Hamiltonian Engine, positions in the generalized $\mathbb{R}^{d_f}$ space are linearly transformed into real 3D space, that is,
+
+$$
+\hat{\boldsymbol{Q}} = \boldsymbol{Q} \boldsymbol{W}_{\text{trans}}, \quad \hat{\boldsymbol{Q}} \in \mathbb{R}^{n \times 3}, \quad \boldsymbol{Q} = \left(\boldsymbol{q}_{1}, \dots, \boldsymbol{q}_{n}\right) \in \mathbb{R}^{n \times d_{f}}. \tag{13}
+$$
+
+Considering translational and rotational invariances, we do not require that $\hat{Q}$ approximate the labeled 3D coordinates of atoms. Instead, three translation- and rotation-invariant metrics are proposed to supervise $\hat{Q}$ :
+
+I) Kabsch-RMSD (K-RMSD). We use the Kabsch algorithm (Kabsch, 1976) to rotate and align the approximated atom positions $(\hat{Q})$ to the real ones $(Q^R)$ , and then calculate the root-mean-square deviation (RMSD) of the two conformations using the atom masses as weights:
+
+$$
+\hat{\boldsymbol{Q}}^{K} = \mathbf{Kabsch}(\hat{\boldsymbol{Q}}; \boldsymbol{Q}^{R}), \quad L_{k-rmsd}(\hat{\boldsymbol{Q}}, \boldsymbol{Q}^{R}) = \sqrt{\frac{\sum_{i=1}^{n} m_{i} \left\| \hat{\boldsymbol{q}}_{i}^{K} - \boldsymbol{q}_{i}^{R} \right\|_{2}^{2}}{\sum_{i=1}^{n} m_{i}}}. \tag{14}
+$$
+
+One should note that the alignment $\mathbf{Kabsch}(\cdot ;\cdot)$ is computed via SVD and is thus differentiable.
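A sketch of the SVD-based alignment and the mass-weighted K-RMSD of Equation 14, using NumPy. The way the atom masses enter the alignment (as weights in the centroid and covariance) is an assumption about the weighted variant; the paper only states that masses weight the final RMSD.

```python
import numpy as np

def kabsch_align(Qhat, Qref, m):
    """Rotate and translate Qhat (n x 3) onto Qref (n x 3), mass-weighted."""
    w = m / m.sum()
    mu_hat = (w[:, None] * Qhat).sum(0)
    mu_ref = (w[:, None] * Qref).sum(0)
    A, B = Qhat - mu_hat, Qref - mu_ref
    H = (w[:, None] * A).T @ B                   # 3x3 weighted covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # exclude improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # optimal rotation
    return A @ R.T + mu_ref

def k_rmsd(Qhat, Qref, m):
    """Eq. 14: mass-weighted RMSD after Kabsch alignment."""
    QK = kabsch_align(Qhat, Qref, m)
    return np.sqrt((m * ((QK - Qref) ** 2).sum(1)).sum() / m.sum())
```

Every step (centering, matrix products, SVD) is differentiable almost everywhere, which is why the loss can be backpropagated through the alignment.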
+
+II) Distance Loss. Pairwise distances between atoms are vital quantities in describing molecular conformations and enjoy the desired invariances. Therefore, we propose the metric $L_{dist}$ as
+
+$$
+L_{dist}^{2}(\hat{\boldsymbol{Q}}, \boldsymbol{Q}^{R}) = \frac{1}{n^{2}} \sum_{i,j=1}^{n} \left(\left\| \hat{\boldsymbol{q}}_{i} - \hat{\boldsymbol{q}}_{j} \right\|_{2}^{2} - \left\| \boldsymbol{q}_{i}^{R} - \boldsymbol{q}_{j}^{R} \right\|_{2}^{2}\right)^{2} \tag{15}
+$$
+
+III) ADJ- $k$ Loss. A drawback of the naive distance loss is that distances between far-apart atoms are over-emphasized, leading to deformed local structures. Therefore, we further propose the ADJ- $k$ loss, where only distances between atoms connected within $k$ hops are preserved, under weights calculated from the hop distances. Denoting the normalized, simple adjacency matrix as $\tilde{\mathbf{A}}$ ,1 the ADJ- $k$ loss is defined as
+
+$$
+L_{adj-k}^{2}(\hat{\boldsymbol{Q}}, \boldsymbol{Q}^{R}) = \frac{1}{n} \sum_{i,j=1}^{n} \tilde{\boldsymbol{A}}_{ij}^{k} \left(\left\| \hat{\boldsymbol{q}}_{i} - \hat{\boldsymbol{q}}_{j} \right\|_{2}^{2} - \left\| \boldsymbol{q}_{i}^{R} - \boldsymbol{q}_{j}^{R} \right\|_{2}^{2}\right)^{2} \tag{16}
+$$
+
+In implementation, we use a linear combination of $K$ -RMSD and ADJ-3 losses to supervise the engine, i.e. $L_{HE} = L_{k - rmsd} + \lambda L_{adj - 3}$ , where $\lambda$ is a hyperparameter.
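The two distance-based losses (Equations 15 and 16) can be sketched as follows; the toy chain molecule, its symmetrically normalized adjacency, and the choice $k=3$ are illustrative assumptions. Since both losses depend only on pairwise distances, they are invariant to rigid motions of either conformation.

```python
import numpy as np

def pdist2(Q):
    """Squared pairwise distances between rows of Q."""
    d = Q[:, None, :] - Q[None, :, :]
    return (d ** 2).sum(-1)

def dist_loss(Qhat, Qref):
    """Eq. 15: RMS difference of all squared pairwise distances."""
    n = len(Qhat)
    return np.sqrt(((pdist2(Qhat) - pdist2(Qref)) ** 2).sum() / n ** 2)

def adj_k_loss(Qhat, Qref, A_norm, k=3):
    """Eq. 16: distance differences weighted by the k-th power of the
    normalized adjacency, emphasizing k-hop-local structure."""
    n = len(Qhat)
    Ak = np.linalg.matrix_power(A_norm, k)       # hop-distance weights
    return np.sqrt((Ak * (pdist2(Qhat) - pdist2(Qref)) ** 2).sum() / n)

# toy 4-atom chain: symmetrically normalized adjacency with self-loops
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
```

In training, these would be combined with the K-RMSD term as $L_{HE} = L_{k-rmsd} + \lambda L_{adj-3}$ for some hyperparameter $\lambda$.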
+
+# 3.2 FINGERPRINT GENERATOR
+
+After the dynamical process in the Hamiltonian Engine, the molecular fingerprints are generated from its outputs together with the atom and bond features. The architecture of the Fingerprint Generator can be seen as an MPNN (Gilmer et al., 2017) instance. As in Attentive FP (Xiong et al., 2020), messages are generated with Graph Attention (GAT) layers (Velickovic et al., 2018), and hidden representations of nodes are updated with Gated Recurrent Units (GRU) (Cho et al., 2014). Nonetheless, the architecture of HamNet is further adapted to be conformation-aware: we modify the calculation of messages and attention energies to incorporate relative positions and momenta. Formally, the atom-level calculation in the Fingerprint Generator follows
+
+$$
+\boldsymbol{h}_{i}^{0} = \mathbf{MLP}(\boldsymbol{v}_{i}), \quad \boldsymbol{f}_{ij} = \mathbf{MLP}(\boldsymbol{e}_{ij}), \quad \boldsymbol{r}_{ij} = \left(\boldsymbol{q}_{i} \oplus \boldsymbol{p}_{i}\right) - \left(\boldsymbol{q}_{j} \oplus \boldsymbol{p}_{j}\right); \tag{17}
+$$
+
+$$
+\epsilon_{ij}^{l} = \left(\boldsymbol{w}_{\epsilon}^{l}\right)^{\top}\left(\boldsymbol{f}_{ij} \oplus \boldsymbol{r}_{ij}\right), \quad \alpha_{ij}^{l} = \operatorname{softmax}\left(\left\{\epsilon_{ij}^{l} \mid j \in \mathcal{N}(i)\right\}\right), \tag{18}
+$$
+
+$$
+\boldsymbol{m}_{i}^{l+1} = \sum_{j \in \mathcal{N}(i)} \alpha_{ij}^{l} \boldsymbol{W}_{M}^{l}\left[\boldsymbol{h}_{i}^{l} \oplus \boldsymbol{r}_{ij} \oplus \boldsymbol{h}_{j}^{l}\right], \quad \boldsymbol{h}_{i}^{l+1} = \mathbf{GRU}\left(\boldsymbol{h}_{i}^{l}, \boldsymbol{m}_{i}^{l+1}\right), \quad l = 0, 1, \dots, L-1. \tag{19}
+$$
+
+The atom representations in the last layer $(\boldsymbol{h}_i^L)$ then serve as inputs to a global attentive readout function. A virtual meta node $(g)$ is established and connected to all atoms in order to conduct $M$ layers of attentive pooling. As in the atom-level calculation, the positions and momenta of atoms are incorporated:
+
+$$
+\boldsymbol{h}_{g}^{0} = \frac{1}{n} \sum_{i} \boldsymbol{h}_{i}^{L}; \quad \eta_{i}^{m} = \left[\boldsymbol{h}_{g}^{m} \oplus \boldsymbol{q}_{i} \oplus \boldsymbol{p}_{i} \oplus \boldsymbol{h}_{i}^{L}\right], \quad \beta_{i}^{m} = \operatorname{softmax}\left(\left\{\eta_{i}^{m} \mid i \in V\right\}\right), \tag{20}
+$$
+
+$$
+\boldsymbol{s}_{g}^{m} = \sum_{i} \beta_{i}^{m} \boldsymbol{W}_{s}^{m}\left[\boldsymbol{q}_{i} \oplus \boldsymbol{p}_{i} \oplus \boldsymbol{h}_{i}^{L}\right], \quad \boldsymbol{h}_{g}^{m+1} = \mathbf{GRU}\left(\boldsymbol{h}_{g}^{m}, \boldsymbol{s}_{g}^{m}\right), \quad m = 0, 1, \dots, M-1. \tag{21}
+$$
+
+Here, $\boldsymbol{s}_g$ is the global message, and $V$ is the set of all atoms. The final output, $\boldsymbol{h}_g^M$ , is the desired fingerprint of the target molecule. As with existing neural molecular fingerprints (Duvenaud et al., 2015; Xiong et al., 2020), the generated fingerprints are then supervised with molecular properties through regression and classification tasks.
+
+# 3.3 DISCUSSION
+
+From a physical perspective, the Hamiltonian Engine is essentially inspired by and highly related to Molecular Dynamics (MD). The potential energy function in the engine can be regarded as a generalized yet simplified molecular force field. Force fields are widely used tools in molecular simulation, where potential energies are modeled as a family of functions of conformations whose parameters are calculated with quantum chemistry or determined by experiments. After a force field is established, conformations are optimized by minimizing the potential energy. Similarly, the dissipation we introduce in the Hamiltonian Engine serves as an implicit optimization of the potential energy, as the system energy is continuously dissipated through $\Phi$ during the dynamical process. As a result, after sufficiently many steps, the molecular conformations always converge to a local minimum of the potential energy, and the momenta of atoms converge to 0. From a deep learning perspective, the Hamiltonian Engine can be seen as a pair of dual, residual MPNNs operating on fully-connected graphs: at each step, messages calculated from $q$ influence $p$ and vice versa. Under this view, the most essential difference introduced by the physical laws is that the messages passed from $q$ to $p$ are symmetric between any two given atoms, following Newton's third law and forming a conservative field (the potential force field). More specifically, the $q$ -messages sent between a pair of atoms (i.e. the forces, $\partial \mathcal{H} / \partial q$ ) are implicitly guaranteed to be symmetric, yet the actual $p$ -updates (i.e. the accelerations, $\dot{p} / m$ ) may not be: they also depend on properties of the receiver (the atom mass) and the outer environment (the dissipation).
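The symmetry claim above can be checked numerically: because the potential of Equation 9 depends only on coordinate differences, the pairwise forces cancel and the net force on the whole system vanishes. The sketch below verifies this with central finite differences; the random $\boldsymbol{W}_U$ and the dimensions are stand-ins for the learned parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d_f = 4, 6                        # atoms, generalized dimensionality (illustrative)
W_U = rng.normal(size=(d_f, d_f))    # stand-in for the learned metric parameters
q = rng.normal(size=(n, d_f))

def potential(q):
    """Eq. 9 potential with the learned metric W_U^T W_U."""
    diff = q[:, None, :] - q[None, :, :]
    Wd = diff @ W_U.T
    r2 = np.einsum("ijd,ijd->ij", Wd, Wd)
    np.fill_diagonal(r2, 1.0)
    u = r2 ** -2 - r2 ** -1
    np.fill_diagonal(u, 0.0)
    return u.sum()

def forces(q, eps=1e-6):
    """Numerical forces F_i = -dU/dq_i via central finite differences."""
    F = np.zeros_like(q)
    for i in range(n):
        for d in range(d_f):
            qp, qm = q.copy(), q.copy()
            qp[i, d] += eps
            qm[i, d] -= eps
            F[i, d] = -(potential(qp) - potential(qm)) / (2 * eps)
    return F
```

Summing the forces over all atoms should give (numerically) zero in every generalized coordinate, which is exactly the pairwise cancellation discussed above.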
+
+# 4 RESULTS
+
+# 4.1 EXPERIMENTAL SETUP
+
+Datasets. Five molecular datasets are used to evaluate HamNet, including a Quantum Mechanics dataset (QM9) and four biomedical datasets, namely Tox21, Lipop, FreeSolv, and ESOL. QM9 (Ramakrishnan et al., 2014) contains calculated conformations and 12 quantitative quantum-chemical properties of $133\mathrm{k}$ molecules; Tox21 contains 12 binary toxicological indices of 7,831 molecules; Lipop contains quantitative lipophilicity of 4,200 molecules; FreeSolv contains quantitative hydration free energies of 642 molecules; and ESOL contains quantitative solubilities of 1,128 molecules. All datasets are included in MoleculeNet, and the same metrics $^2$ , data split ratios $^3$ , and multi-task scheme (for QM9 and Tox21) $^4$ are used in our paper.
+
+Featurization and Implementation. We use the same featurization as Attentive FP (Xiong et al., 2020). In total, 39-dimensional atom features (including atom types, atom degrees, indicators of aromaticity and chirality, etc.) and 10-dimensional bond features (including bond types, an indicator of conjugation, etc.) are derived from the molecules. One may refer to the Appendix for more details of the featurization. As a default setup, we use a 20-step $(T = 20)$ Hamiltonian Engine with $d_{f} = 32$ , and $L = 2$ , $M = 2$ with 200-dimensional hidden representations $(\dim(\boldsymbol{h}_i) = \dim(\boldsymbol{h}_g) = 200)$ in the Fingerprint Generator. To train HamNet, we first train the Hamiltonian Engine with known conformations $^5$ and use its outputs to train the Fingerprint Generator, with mean-squared-error losses for regression tasks evaluated by the RMSE metric, mean-absolute-error losses for those evaluated by the MAE metric, and cross-entropy losses for classification tasks. Other implementation details, including the choices of hyperparameters on different datasets and the training setup, are available in the Appendix.
+
+# 4.2 CONFORMATION PREDICTION
+
+We evaluate the ability of the Hamiltonian Engine to predict molecular conformations on QM9, where known conformations of molecules are available, and report the Kabsch-RMSD ( $L_{k - rmsd}$ ) and distance ( $L_{dist}$ ) losses. Two baselines are compared against the Hamiltonian Engine (Ham. Eng.): i) an MPNN with the exact architecture proposed in (Gilmer et al., 2017), supervised in the
+
+2We use MAE for QM9, ROC for Tox21, and RMSE for Lipop, FreeSolv & ESOL.
+3Data are randomly split to $8:1:1$ as training, validation and test sets.
+4Models are trained to simultaneously preserve all targets after standard normalization, and averaged performances are reported.
+5On datasets without known conformations, we generate labeled conformations with RDKit to train the engine.
+
+Table 1: Quantitative results of conformation prediction on QM9.
+
+| METRIC | Kabsch-RMSD (Å) | Distance Loss ($10^{-2}$ Å) |
+| --- | --- | --- |
+| MPNN | 1.708 | 8.620 |
+| RDKit | 1.649 | 7.519 |
+| Ham. Eng. (w/o LSTM) | 2.039 | 10.871 |
+| Ham. Eng. (w/o dyn.) | 1.442 | 5.519 |
+| Ham. Eng. (w/o Φ) | 1.389 | 5.227 |
+| Ham. Eng. (w/o ADJ-3) | 1.084 | 7.746 |
+| Ham. Eng. (as proposed) | 1.384 | 5.186 |
+
+
+Figure 2: Visualized conformations at different steps (Step 0, 3, 6, 9, 12, 15, and 20) of the Hamiltonian Engine for three example molecules (panels a-c), shown alongside the real conformations (Real Conf.).
+
+same way as introduced in Section 3.1; ii) a Distance Geometry (Blaney & Dixon, 2007) method tuned with the Universal Force Field (UFF), implemented in the RDKit package (referred to as $RDKit$ ). We also conduct an ablation analysis of the Hamiltonian Engine by testing: i) an engine with the LSTM removed ( $w/o LSTM$ ); ii) an engine with no Hamiltonian dynamics, i.e. $T = 0$ ( $w/o dyn$ ); iii) an engine without the dissipation function ( $w/o \Phi$ ); and iv) an engine trained without the ADJ-3 loss ( $w/o ADJ-3$ ).
+
+Table 1 shows the two losses on the test sets. Our approach outperforms the Distance Geometry baseline (RDKit) by $16\%$ , while the MPNN does not. The ablation analysis also verifies the effectiveness of the individual components: improvements on both metrics are observed when the LSTM, the Hamiltonian dynamics, and the dissipation function are present. Although training the Hamiltonian Engine with the Kabsch-RMSD alone leads to better performance on that very metric, the distance losses of such models are unacceptably large. This indicates that although atoms tend to appear closer to their labeled locations, the relative structures inside the molecules are compromised, a particularly undesirable result in molecular science. One may refer to Figure 2 for a more intuitive understanding of the dynamics in the Hamiltonian Engine: atoms in the initial conformations $(\hat{Q}^{(0)})$ tend to gather around the molecular centers, and the repulsive forces derived from the potential energy stretch the molecules into the correct conformations $(Q^R)$ . As dissipation exists in the system, the conformations converge to the real ones after sufficiently many steps.
+
+# 4.3 MOLECULAR PROPERTY PREDICTION
+
+We compare HamNet with five baselines on the molecular property prediction tasks. i) MoleculeNet (Wu et al., 2017) tested a collection of molecular representation approaches available at the time, for which we present the best performances achieved. ii) 3DGCN (Cho & Choi, 2019) augmented conventional GCN-based methods with input bond directions. iii) DimeNet (Klicpera et al., 2020) proposed directional message passing, in which messages instead of atoms are embedded. iv) Attentive FP (Xiong et al., 2020) proposed an MPNN instance with local and global attentive layers. v) CMPNN (Song et al., 2020) strengthened the message interactions between nodes and edges through a communicative kernel. Two HamNet variants are also tested: i) a HamNet without known conformations, where all $q, p$ -related components in the Fingerprint Generator are removed;
+
+Table 2: Quantitative results on various datasets of the baselines, HamNet, and its variants. Baselines using 3D conformations of the test molecules are marked in italics. For the different metrics, $\uparrow$ indicates that higher is better, and $\downarrow$ the opposite. We directly take reported performances from the corresponding references and leave unreported entries blank ("-").
+
+| DATASET (METRIC) | QM9 (Multi-MAE↓) | Tox21 (Multi-ROC↑) | Lipop (RMSE↓) | FreeSolv (RMSE↓) | ESOL (RMSE↓) |
+| --- | --- | --- | --- | --- | --- |
+| MoleculeNet (2017) | 2.350 | 0.829 | 0.655 | 1.150 | 0.580 |
+| 3DGCN (2019) | - | - | - | 0.824±0.014 | 0.558±0.069 |
+| DimeNet (2020) | 1.920 | - | - | - | - |
+| Attentive FP (2020) | 1.292 | 0.857 | 0.578 | 0.736 | 0.505 |
+| CMPNN (2020) | - | 0.856±0.006 | - | 0.808±0.129 | 0.547±0.011 |
+| HamNet (w/o conf.) | 1.237±0.030 | 0.868±0.012 | 0.572±0.011 | 0.840±0.023 | 0.547±0.015 |
+| HamNet (real conf.) | 1.199±0.017 | 0.864±0.006 | 0.566±0.015 | 0.811±0.048 | 0.584±0.012 |
+| HamNet (ours) | 1.194±0.038 | 0.875±0.006 | 0.557±0.014 | 0.731±0.024 | 0.504±0.016 |
+
+
+Figure 3: Effects on conformation prediction of hyperparameters in the Hamiltonian Engine. Distance losses and running times are plotted versus (a) the engine depth $T$ ; (b) the dimensionality $d_f$ of $\pmb{q}, \pmb{p}$ ; and (c) the step size $\eta$ of discretization.
+
+ii) a HamNet with real conformations, where the Hamiltonian Engine is removed and all $q, p$ s are transformed from real coordinates, via MLPs, to the corresponding dimensionality. Five replicas of each HamNet model are trained, with means and standard deviations reported.
+
+Table 2 shows the quantitative results of the fingerprints. As HamNet and all chosen baselines follow the same evaluation scheme proposed in MoleculeNet, we directly present the reported performances and leave unreported ones blank. We also italicize baselines that leverage the true 3D conformations of the test set. HamNet significantly outperforms all baselines on all datasets. Moreover, HamNet with the Hamiltonian Engine still performs better than the variant using real conformations. We believe the reason is that, as the Hamiltonian Engine is trained with translation- and rotation-invariant losses, the generalized space enjoys more robustness than directly using the real coordinates of atoms.
+
+# 4.4 PARAMETER ANALYSIS
+
+We conduct a further analysis of the effects of several important hyperparameters on the conformation prediction performance of the Hamiltonian Engine, including the engine depth $(T)$ , the dimensionality of the generalized space $(d_f)$ , and the step size $(\eta)$ . Figure 3 plots the distance losses and/or the running times. i) As the depth increases, the running time increases linearly, and the test loss gradually decreases until convergence; empirically, 25-30 steps suffice. ii) The running time of the engine is hardly influenced by the dimensionality $d_f$ , while the performance enjoys a significant improvement from increasing the dimensionality when $d_f \leq 32$ . iii) An appropriate choice of the step size is crucial. For example, with a fixed engine depth $T = 20$ , an ideal choice of the step size would be $\eta \in [0.025, 0.050]$ .
+
+# 5 DISCUSSION & FUTURE WORK
+
+In this paper, we proposed a novel molecular representation approach, the Molecular Hamiltonian Network (HamNet). Instead of directly using conformations as inputs, HamNet learns to predict real conformations with a physically inspired module, the Hamiltonian Engine. Novel loss functions with translational and rotational invariances are proposed to train the engine. Final representations of molecules are generated with a Fingerprint Generator, whose architecture is based on MPNNs and considerably modified to incorporate the generated implicit conformations. We discussed the relationship between physics and deep learning inside the Hamiltonian Engine from both perspectives, and we believe that the physics-based model enjoys better interpretability than general MPNNs. We further demonstrated experimentally that the proposed Hamiltonian Engine is able to learn molecular conformations, and that HamNet achieves state-of-the-art performance on molecular property prediction tasks in a standard benchmark (MoleculeNet).
+
+It should be noted that a recent trend of incorporating machine learning, especially deep learning, into modeling molecular potentials has emerged (Chmiela et al., 2017; 2018; Zhang et al., 2018). Another field related to HamNet is neural physics engines (Sanchez-Gonzalez et al., 2018; 2019; Greydanus et al., 2019), which learn to conduct simulations that conform to physical laws. The design of the Hamiltonian Engine in our paper is highly motivated by these works, while the ultimate goal of HamNet is to derive well-designed molecular representations rather than to accurately model molecular dynamics. Also, instead of using structural optimization to derive stabilized conformations after a force field (the potentials in HamNet) is established, HamNet uses the Hamiltonian Engine to make the whole process differentiable.
+
+For future work, a promising direction would be to elaborate the learnable potential, kinetic, and dissipation functions used in the Hamiltonian Engine. Applying HamNet to more straightforward applications, such as virtual screening and protein-ligand binding prediction, would also be useful. In addition, an interesting modification of HamNet would be to replace the current discretization of the Hamiltonian equations with Neural ODEs (Chen et al., 2018; Sanchez-Gonzalez et al., 2019), which may yield a finer-grained simulation of the molecular dynamics.
+
+# ACKNOWLEDGMENTS
+
+This work was supported by the National Natural Science Foundation of China (Grant No. 61876006). We would also like to thank Dr. Chenbo Wang for his help in the physics theories of this paper.
+
+# REFERENCES
+
+S. D. Axen, X. P. Huang, E. L. Cceres, L. Gendelev, B. L. Roth, and M. J. Keiser. A simple representation of three-dimensional molecular structure. Journal of medicinal chemistry, 60(17): 7393-7409, 2017.
+Jeffrey M. Blaney and J. Scott Dixon. Distance Geometry in Molecular Modeling, pp. 299-335. John Wiley & Sons, Ltd, 2007.
+Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018 (NeurIPS 2018), pp. 6572-6583, 2018.
+Stefan Chmiela, Alexandre Tkatchenko, Huziel E. Sauceda, Igor Poltavsky, Kristof T. Schütt, and Klaus-Robert Müller. Machine learning of accurate energy-conserving molecular force fields. Science advances, 3(5), 2017.
+Stefan Chmiela, Huziel E. Sauceda, Klaus-Robert Müller, and Alexandre Tkatchenko. Towards exact molecular dynamics simulations with machine-learned force fields. Nature communications, 9(1), 2018.
+H. Cho and I. S. Choi. Enhanced deep-learning prediction of molecular properties via augmentation of bond topology. *ChemMedChem*, 14(17):1604–1609, 2019.
+
+Kyunghyun Cho, Bart van Merrienboer, Caglar Gulçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pp. 1724-1734, 2014.
+David Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael Gomez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015 (NeurIPS 2015), pp. 2224-2232, 2015.
+Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), pp. 1263-1272, 2017.
+Samuel Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian neural networks. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019 (NeurIPS 2019), pp. 15353-15363, 2019.
+Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 2261-2269, 2017.
+Wengong Jin, Regina Barzilay, and Tommi S. Jaakkola. Junction tree variational autoencoder for molecular graph generation. In Proceedings of the 35th International Conference on Machine Learning, (ICML 2018), pp. 2328-2337, 2018.
+Wolfgang Kabsch. A solution for the best rotation to relate two sets of vectors. Acta Crystallographica A, 32:922-923, 1976.
+Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017), 2017.
+Johannes Klicpera, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. In 8th International Conference on Learning Representations (ICLR 2020), 2020.
+J. Lim, S. Ryu, K. Park, Y. J. Choe, J. Ham, and W. Y. Kim. Predicting drug-target interaction using a novel graph neural network with 3d structure-embedded graph representation. Journal of chemical information and modeling, 59(9):3981-3988, 2019.
+H. L. Morgan. The generation of a unique machine description for chemical structures-a technique developed at chemical abstracts service. Journal of chemical documentation, 5(2):63-112, 1965.
+Raghunathan Ramakrishnan, Pavlo O. Dral, Matthias Rupp, and O. Anatole von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data, 1(140022), 2014.
+D. Rogers and M. Hahn. Extended-connectivity fingerprints. Journal of chemical information and modeling, 50(5):742-754, 2010.
+Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin A. Riedmiller, Raia Hadsell, and Peter W. Battaglia. Graph networks as learnable physics engines for inference and control. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018), pp. 4467-4476, 2018.
+Alvaro Sanchez-Gonzalez, Victor Bapst, Kyle Cranmer, and Peter W. Battaglia. Hamiltonian graph networks with ODE integrators. CoRR, abs/1909.12790, 2019.
+Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In Proceedings of the 15th Extended Semantic Web Conference (ESWC 2018), pp. 593-607, 2018.
+
+Ying Song, Shuangjia Zheng, Zhangming Niu, Zhang-Hua Fu, Yutong Lu, and Yuedong Yang. Communicative representation learning on attributed molecular graphs. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, (IJCAI 2020), pp. 2831-2838, 2020.
+Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018), 2018.
+Y. Wang, J. Hu, J. Lai, Y. Li, H. Jin, L. Zhang, L. R. Zhang, and Z. M. Liu. Tf3p: Three-dimensional force fields fingerprint learned by deep capsular network. Journal of chemical information and modeling, 60(6):2754-2765, 2020.
+D. Weininger. SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. Journal of chemical information and computer sciences, 28(1):31-36, 1988.
+Z. Wu, B. Ramsundar, E. N. Feinberg, J. Gomes, C. Geniesse, A. S. Pappu, K. Leswing, and V. Pande. Moleculenet: a benchmark for molecular machine learning. Chemical science, 9 (2):513-530, 2017.
+Z. Xiong, D. Wang, X. Liu, F. Zhong, X. Wan, X. Li, Z. Li, X. Luo, K. Chen, H. Jiang, and M. Zheng. Pushing the boundaries of molecular representation for drug discovery with the graph attention mechanism. Journal of medicinal chemistry, 63(16):8749-8760, 2020.
+Jiaxuan You, Bowen Liu, Zhitao Ying, Vijay S. Pande, and Jure Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018 (NeurIPS 2018), pp. 6412-6422, 2018.
+Linfeng Zhang, Jiequn Han, Han Wang, Roberto Car, and E. Weinan. Deep potential molecular dynamics: A scalable model with the accuracy of quantum mechanics. Physical review letters, 120(14), 2018.
\ No newline at end of file
diff --git a/conformationguidedmolecularrepresentationwithhamiltonianneuralnetworks/images.zip b/conformationguidedmolecularrepresentationwithhamiltonianneuralnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..eb2d2dea20bdfe9874c65c7f1ec6cbc1a9354481
--- /dev/null
+++ b/conformationguidedmolecularrepresentationwithhamiltonianneuralnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2fb697bdb0a1b78ab2a57450792a99e412b20effcaeb0d5104f40fda2d668782
+size 454231
diff --git a/conformationguidedmolecularrepresentationwithhamiltonianneuralnetworks/layout.json b/conformationguidedmolecularrepresentationwithhamiltonianneuralnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..df950879c34729949b48f030d2164f9965233079
--- /dev/null
+++ b/conformationguidedmolecularrepresentationwithhamiltonianneuralnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e341aae6a9b15f2dbc6c1a70e192a05fdb8bebebe703d44cd4c9b0e72198dd14
+size 434974
diff --git a/cprclassifierprojectionregularizationforcontinuallearning/d098824d-89c4-4450-8e7d-5efb4f5126d5_content_list.json b/cprclassifierprojectionregularizationforcontinuallearning/d098824d-89c4-4450-8e7d-5efb4f5126d5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5cdd59bcf316c34b757f2126cf0f61ab06ca73ae
--- /dev/null
+++ b/cprclassifierprojectionregularizationforcontinuallearning/d098824d-89c4-4450-8e7d-5efb4f5126d5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55c585793ee509a8ffd5bf4262cf8cf8e9dce3606433a2930f06528f6a41749a
+size 74740
diff --git a/cprclassifierprojectionregularizationforcontinuallearning/d098824d-89c4-4450-8e7d-5efb4f5126d5_model.json b/cprclassifierprojectionregularizationforcontinuallearning/d098824d-89c4-4450-8e7d-5efb4f5126d5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2e072578ba3ba8caa26dbf1890c4f3285bbbc47b
--- /dev/null
+++ b/cprclassifierprojectionregularizationforcontinuallearning/d098824d-89c4-4450-8e7d-5efb4f5126d5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8782b04b7a0dd64eac9930dc0a67e42fa0f5d44d59c790959fd0089a91c625a2
+size 91929
diff --git a/cprclassifierprojectionregularizationforcontinuallearning/d098824d-89c4-4450-8e7d-5efb4f5126d5_origin.pdf b/cprclassifierprojectionregularizationforcontinuallearning/d098824d-89c4-4450-8e7d-5efb4f5126d5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..39d8e548423317ff211de268ca23d25d694380c7
--- /dev/null
+++ b/cprclassifierprojectionregularizationforcontinuallearning/d098824d-89c4-4450-8e7d-5efb4f5126d5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a7f218fe28688ae3de7e5651a74a13b7a032215c8073e5339b9be344e8250f2
+size 1167150
diff --git a/cprclassifierprojectionregularizationforcontinuallearning/full.md b/cprclassifierprojectionregularizationforcontinuallearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..955638e1ff28b8030b53636b19f902c41b67d4be
--- /dev/null
+++ b/cprclassifierprojectionregularizationforcontinuallearning/full.md
@@ -0,0 +1,259 @@
+# CPR: CLASSIFIER-PROJECTION REGULARIZATION FOR CONTINUAL LEARNING
+
+Sungmin Cha $^{1}$ , Hsiang Hsu $^{2}$ , Taebaek Hwang $^{1}$ , Flavio P. Calmon $^{2}$ , and Taesup Moon $^{3*}$
+
+$^{1}$ Sungkyunkwan University $^{2}$ Harvard University $^{3}$ Seoul National University csm9493@skku.edu, hsianghsu@g.harvard.edu, gxq9106@gmail.com, fcalmon@g.harvard.edu, tsmoon@snu.ac.kr
+
+# ABSTRACT
+
+We propose classifier-projection regularization (CPR), a general yet simple patch that can be applied to existing regularization-based continual learning methods. Inspired by both recent results on neural networks with wide local minima and information theory, CPR adds an additional regularization term that maximizes the entropy of a classifier's output probability. We demonstrate that this additional term can be interpreted as a projection of the conditional probability given by a classifier's output to the uniform distribution. By applying the Pythagorean theorem for KL divergence, we then prove that this projection may (in theory) improve the performance of continual learning methods. In our extensive experimental results, we apply CPR to several state-of-the-art regularization-based continual learning methods and benchmark performance on popular image recognition datasets. Our results demonstrate that CPR indeed promotes wide local minima and significantly improves both accuracy and plasticity while simultaneously mitigating the catastrophic forgetting of baseline continual learning methods. The code and scripts for this work are available at https://github.com/csm9493/CPR_CL.
+
+# 1 INTRODUCTION
+
+Catastrophic forgetting (McCloskey & Cohen, 1989) is a central challenge in continual learning (CL): when training a model on a new task, there may be a loss of performance (e.g., decrease in accuracy) when applying the updated model to previous tasks. At the heart of catastrophic forgetting is the stability-plasticity dilemma (Carpenter & Grossberg, 1987; Mermillod et al., 2013), where a model exhibits high stability on previously trained tasks, but suffers from low plasticity for the integration of new knowledge (and vice-versa). Attempts to overcome this challenge in neural network-based CL can be grouped into three main strategies: regularization methods (Li & Hoiem, 2017; Kirkpatrick et al., 2017; Zenke et al., 2017; Nguyen et al., 2018; Ahn et al., 2019; Aljundi et al., 2019), memory replay (Lopez-Paz & Ranzato, 2017; Shin et al., 2017; Rebuffi et al., 2017; Kemker & Kanan, 2018), and dynamic network architecture (Rusu et al., 2016; Yoon et al., 2018; Golkar et al., 2019). In particular, regularization methods that control model weights have the longest history, owing to their simplicity and efficiency in controlling the trade-off for a fixed model capacity.
+
+In parallel, several recent methods seek to improve the generalization of neural network models trained on a single task by promoting wide local minima (Keskar et al., 2017; Chaudhari et al., 2019; Pereyra et al., 2017; Zhang et al., 2018). Broadly speaking, these efforts have experimentally shown that models trained with wide local minima-promoting regularizers achieve better generalization and higher accuracy (Keskar et al., 2017; Pereyra et al., 2017; Chaudhari et al., 2019; Zhang et al., 2018), and can be more robust to weight perturbations (Zhang et al., 2018) when compared to usual training methods. Despite the promising results, methods that promote wide local minima have yet to be applied to CL.
+
+In this paper, we make a novel connection between wide local minima in neural networks and regularization-based CL methods. The typical regularization-based CL aims to preserve important weight parameters used in past tasks by penalizing large deviations when learning new tasks. As
+
+
+Figure 1: In typical regularization-based CL (top), when the low-error ellipsoid around a local minimum is sharp and narrow, the space of candidate model parameters that perform well on all tasks (i.e., the intersection of the ellipsoids for each task) quickly becomes very small as learning continues; thus, an inevitable trade-off between stability and plasticity occurs. In contrast, when wide local minima exist for each task (bottom), it is more likely that the ellipsoids will significantly overlap even as learning continues, hence finding a well-performing model for all tasks becomes more feasible.
+
+shown in the top of Fig. 1, a popular geometric intuition (as first given in EWC (Kirkpatrick et al., 2017)) for such CL methods is to consider the (uncertainty) ellipsoid of parameters around the local minima. When learning new tasks, parameter updates are selected in order to not significantly hinder model performance on past tasks. Our intuition is that promoting a wide local minima—which conceptually stands for local minima having a flat, rounded uncertainty ellipsoid—can be particularly beneficial for regularization-based CL methods by facilitating diverse update directions for the new tasks (i.e., improves plasticity) while not hurting the past tasks (i.e., retains stability). As shown in the bottom of Fig. 1, when the ellipsoid containing the parameters with low-error is wider, i.e., when the wide local minima exist, there is more flexibility in finding a parameter that performs well for all tasks after learning a sequence of new tasks. We provide further details in Section 2.1.
+
+Based on the above intuition, we propose a general, yet simple patch that can be applied to existing regularization-based CL methods, dubbed Classifier-Projection Regularization (CPR). Our method implements an additional regularization term that promotes wide local minima by maximizing the entropy of the classifier's output distribution. Furthermore, from a theory standpoint, we observe that our CPR term can be further interpreted in terms of information projection (I-projection) formulations (Cover & Thomas, 2012; Murphy, 2012; Csiszár & Matus, 2003; Walsh & Regalia, 2010; Amari et al., 2001; Csiszár & Shields, 2004) found in information theory. Namely, we argue that applying CPR corresponds to projecting a classifier's output onto a Kullback-Leibler (KL) divergence ball of finite radius centered around the uniform distribution. By applying the Pythagorean theorem for KL divergence, we then prove that this projection may (in theory) improve the performance of continual learning methods.
+
+Through extensive experiments on several benchmark datasets, we demonstrate that applying CPR can significantly improve the performance of the state-of-the-art regularization-based CL: using our simple patch improves both the stability and plasticity and, hence, achieves better average accuracy almost uniformly across the tested algorithms and datasets—confirming our intuition of wide local minima in Fig. 1. Furthermore, we use a feature map visualization that compares methods trained with and without CPR to further corroborate the effectiveness of our method.
+
+# 2 CPR: CLASSIFIER-PROJECTION REGULARIZATION FOR WIDE LOCAL MINIMUM
+
+In this section, we elaborate in detail the core motivation outlined in Fig. 1, then formalize CPR as the combination of two regularization terms: one stemming from prior regularization-based CL methods, and the other that promotes a wide local minima. Moreover, we provide an information-geometric interpretation (Csiszár, 1984; Cover & Thomas, 2012; Murphy, 2012) for the observed gain in performance when applying CPR to CL.
+
+We consider continual learning of $T$ classification tasks, where each task contains $N$ training sample-label pairs $\{(\mathbf{x}_n^t,y_n^t)\}_{n = 1}^N$, $t\in [1,\dots ,T]$, with $\mathbf{x}_n^t\in \mathbb{R}^d$, and each task has $M_{t}$ classes, i.e., $y_{n}^{t}\in [1,\dots ,M_{t}]$. Note that task boundaries are given at evaluation time; i.e., we consider a task-aware setting. We denote by $f_{\theta}:\mathbb{R}^{d}\to \Delta_{M}$ a neural network-based classification model with a softmax output layer, parameterized by $\pmb{\theta}$.
+
+# 2.1 MOTIVATION: INTRODUCING WIDE LOCAL MINIMA IN CONTINUAL LEARNING
+
+Considering the setting of typical regularization-based CL (top of Fig. 1), we denote $\theta_{i}^{*}$ as the parameters that achieve a local minimum for a specific task $i$, and $\hat{\theta}_i$ as those obtained with regularization terms. Assuming that $\theta_1^*$ is learnt, when learning task 2, an appropriate regularization updates the parameters from $\theta_{1}^{*}$ to $\hat{\theta}_{2}$ instead of $\theta_{2}^{*}$, since $\hat{\theta}_{2}$ achieves low error on both tasks 1 and 2. However, when the low-error regimes (ellipsoids in Fig. 1) are narrow, it is often infeasible to obtain a parameter that performs well on all three tasks. This situation results in the trade-off between stability and plasticity in regularization-based CL (Carpenter & Grossberg, 1987). Namely, a stronger regularization strength (pulling towards past tasks) brings more stability $(\hat{\theta}_3^1)$, and hence less forgetting on past tasks. In contrast, a weaker regularization strength (pulling towards new tasks) leads to more plasticity, so that the updated parameter $\hat{\theta}_{3}^{2}$ performs better on recent tasks at the cost of compromising the performance on past tasks.
+
+A key problem in the previous setting is that the parameter regimes that achieve low error for each task are often narrow and do not overlap with each other. Therefore, a straightforward solution is to enlarge the low-error regimes such that they have non-empty intersections with higher chance. This observation motivates us to consider wide local minima for each task in CL (bottom of Fig. 1). With wide local minima for each task, a regularization-based CL method can more easily find a parameter, $\hat{\theta}_{3}$, that is close to the local minima of every task, i.e., $\{\pmb{\theta}_i^*\}_{i=1}^3$. Moreover, it suggests that once we promote wide local minima of neural networks during continual learning, both stability and plasticity could potentially be improved, resulting in simultaneously higher accuracy for all tasks, which is later verified in our experiments (see Sec. 3). In the next section, we introduce the formulation of wide local minima in CL.
+
+# 2.2 CLASSIFIER PROJECTION REGULARIZATION FOR CONTINUAL LEARNING
+
+Regularization-based continual learning Typical regularization-based CL methods attach a regularization term that penalizes the deviation of important parameters learned from past tasks in order to mitigate catastrophic forgetting. The general loss form for these methods when learning task $t$ is
+
+$$
+L_{\mathrm{CL}}^{t}(\boldsymbol{\theta}) = L_{\mathrm{CE}}^{t}(\boldsymbol{\theta}) + \lambda \sum_{i} \Omega_{i}^{t-1} \left(\theta_{i} - \theta_{i}^{t-1}\right)^{2}, \tag{1}
+$$
+
+where $L_{\mathrm{CE}}^t (\pmb {\theta})$ is the ordinary cross-entropy loss function for task $t$ , $\lambda$ is the dimensionless regularization strength, $\Omega^{t - 1} = \{\Omega_i^{t - 1}\}$ is the set of estimates of the weight importance, and $\{\theta_i^{t - 1}\}$ is the parameter learned until task $t - 1$ . A variety of previous work, e.g., EWC (Kirkpatrick et al., 2017), SI (Zenke et al., 2017), MAS (Aljundi et al., 2018), and RWalk (Chaudhry et al., 2018), proposed different ways of calculating $\Omega^{t - 1}$ to measure weight importance.
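The penalty in Eq. (1) is easy to state concretely. The following minimal numpy sketch (function name and calling convention are ours, not the paper's) computes the weighted quadratic penalty for a flattened parameter vector; the cross-entropy part is omitted:

```python
import numpy as np

def quadratic_cl_penalty(theta, theta_prev, omega, lam):
    """lam * sum_i omega_i * (theta_i - theta_prev_i)^2, the penalty of Eq. (1)."""
    theta = np.asarray(theta, dtype=float)
    theta_prev = np.asarray(theta_prev, dtype=float)
    omega = np.asarray(omega, dtype=float)
    return float(lam * np.sum(omega * (theta - theta_prev) ** 2))

# Deviating on an important weight (omega = 1.0) is penalized;
# deviating on an unimportant one (omega = 0.0) is free.
assert quadratic_cl_penalty([1.0, 5.0], [0.0, 2.0], [1.0, 0.0], lam=0.5) == 0.5
```

With $\Omega^{t-1}$ estimated as in EWC, SI, or MAS, parameters marked important are anchored to their previous values, while unimportant ones remain free to move toward the new task.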
+
+Single-task wide local minima Several recent schemes have been proposed (Pereyra et al., 2017; Szegedy et al., 2016; Zhang et al., 2018) to promote wide local minima of a neural network for solving a single task. These approaches can be unified by the following common loss form
+
+$$
+L_{\mathrm{WLM}}(\boldsymbol{\theta}) = L_{\mathrm{CE}}(\boldsymbol{\theta}) + \frac{\beta}{N} \sum_{n=1}^{N} D_{\mathrm{KL}}\left(f_{\boldsymbol{\theta}}(\mathbf{x}_{n}) \,\|\, g\right), \tag{2}
+$$
+
+where $g$ is some probability distribution in $\Delta_M$ that regularizes the classifier output $f_{\theta}$, $\beta$ is a trade-off parameter, and $D_{\mathrm{KL}}(\cdot \| \cdot)$ is the KL divergence (Cover & Thomas, 2012). Note that, for example, when $g$ is the uniform distribution $P_U$ in $\Delta_M$, the regularization term corresponds to the entropy maximization proposed in Pereyra et al. (2017), and when $g$ is another classifier's output $f_{\theta'}$, Eq. (2) becomes equivalent to the loss function in Zhang et al. (2018).
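For $g = P_U$, the KL term in Eq. (2) reduces to $\log M - H(f_{\theta}(\mathbf{x}))$, so minimizing it is exactly entropy maximization. A small sketch (our own naming) verifying this identity at the two extremes:

```python
import numpy as np

def kl_to_uniform(p):
    """D_KL(p || P_U) over M classes, which equals log M - H(p);
    minimizing it therefore maximizes the entropy of p."""
    p = np.asarray(p, dtype=float)
    M = p.size
    mask = p > 0  # 0 * log 0 = 0 by the usual convention
    return float(np.log(M) + np.sum(p[mask] * np.log(p[mask])))

M = 4
assert abs(kl_to_uniform(np.full(M, 1.0 / M))) < 1e-12               # uniform: divergence 0
assert abs(kl_to_uniform([1.0, 0.0, 0.0, 0.0]) - np.log(M)) < 1e-12  # one-hot: log M
```

The uniform distribution attains the minimum (zero) and a one-hot output attains the maximum ($\log M$), matching the worst-case bound discussed in Section 2.3.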
+
+CPR: Achieving wide local minima in continual learning Combining the above two regularization terms, we propose the CPR as the following loss form for learning task $t$ :
+
+$$
+L_{\mathrm{CPR}}^{t}(\boldsymbol{\theta}) = L_{\mathrm{CE}}^{t}(\boldsymbol{\theta}) + \frac{\beta}{N} \sum_{n=1}^{N} D_{\mathrm{KL}}\left(f_{\boldsymbol{\theta}}(\mathbf{x}_{n}^{t}) \,\|\, P_{U}\right) + \lambda \sum_{i} \Omega_{i}^{t-1} \left(\theta_{i} - \theta_{i}^{t-1}\right)^{2}, \tag{3}
+$$
+
+where $\lambda$ and $\beta$ are the regularization parameters. The first regularization term promotes wide local minima while learning task $t$ by using $P_U$ as the regularizing distribution $g$ in (2), and the second term is from the typical regularization-based CL. Note that this formulation is oblivious to the specific choice of $\Omega^{t-1}$ and, hence, it can be applied to any state-of-the-art regularization-based CL method. In our experiments, we show that the simple addition of the KL term can significantly boost the performance of several representative state-of-the-art methods, confirming our intuition on wide local minima for CL given in Section 2.1 and Fig. 1. Furthermore, we show in the next section that the KL term can be geometrically interpreted in terms of information projections (Csiszár, 1984; Cover & Thomas, 2012; Murphy, 2012), providing an additional argument (besides promoting wide local minima) for the benefit of using CPR in continual learning.
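Putting the pieces together, a single-sample version of the loss in Eq. (3) can be sketched as follows. This is an illustrative numpy implementation under our own naming, not the paper's training code (which is at the linked repository):

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())   # numerically stabilized softmax
    return e / e.sum()

def cpr_loss(logits, label, theta, theta_prev, omega, beta, lam):
    """Single-sample version of Eq. (3): cross-entropy
    + beta * D_KL(f_theta || P_U) + lam * quadratic CL penalty."""
    p = softmax(logits)
    M = p.size
    ce = -np.log(p[label])
    kl_to_uniform = np.log(M) + np.sum(p * np.log(p))  # = log M - H(p)
    theta = np.asarray(theta, dtype=float)
    theta_prev = np.asarray(theta_prev, dtype=float)
    omega = np.asarray(omega, dtype=float)
    penalty = np.sum(omega * (theta - theta_prev) ** 2)
    return float(ce + beta * kl_to_uniform + lam * penalty)

# Uniform logits with unchanged parameters: the KL term and the penalty
# both vanish, leaving only the cross-entropy -log(1/3).
loss = cpr_loss([0.0, 0.0, 0.0], 0, [1.0], [1.0], [1.0], beta=0.5, lam=1.0)
assert abs(loss - np.log(3)) < 1e-12
```

In practice the three terms are averaged over a minibatch and $\beta$, $\lambda$ are tuned per method, as described in Section 3.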
+
+# 2.3 INTERPRETATION BY INFORMATION PROJECTION
+
+Minimizing the KL divergence terms in (2) and (3) can be expressed as the optimization $\min_{Q\in \mathcal{Q}}D_{\mathrm{KL}}(Q\| P)$ , where $P$ is a given distribution and $\mathcal{Q}$ is a convex set of distributions in the probability simplex $\Delta_m\triangleq \{\mathbf{p}\in [0,1]^m |\sum_{i = 1}^m\mathbf{p}_i = 1\}$ . In other words, the optimizer $P^{*}$ is a distribution in $\mathcal{Q}$ that is "closest" to $P$ , where the distance is measured by the KL divergence, and is termed the information projection (also called I-projection), i.e.,
+
+$$
+P^{*} = \underset{Q \in \mathcal{Q}}{\arg\min}\, D_{\mathrm{KL}}(Q \| P). \tag{4}
+$$
+
+The quantity $P^*$ has several operational interpretations in information theory (e.g., in universal source coding (Cover & Thomas, 2012)). In addition, the information projection enables a "geometric" interpretation of the KL divergence, where $D_{\mathrm{KL}}(Q \| P)$ behaves as the squared Euclidean distance and $(Q, P^*, P)$ form a "right triangle". The following lemma from information geometry (Cover & Thomas, 2012) resembles the Pythagorean theorem, an inequality not satisfied in general by the KL divergence.
+
+Lemma 1. Suppose $\exists P^{*}\in \mathcal{Q}$ such that $D_{\mathsf{KL}}(P^{*}\| P) = \min_{Q\in \mathcal{Q}}D_{\mathsf{KL}}(Q\| P)$ , then
+
+$$
+D_{\mathrm{KL}}(Q \| P) \geq D_{\mathrm{KL}}(Q \| P^{*}) + D_{\mathrm{KL}}(P^{*} \| P), \quad \forall Q \in \mathcal{Q}. \tag{5}
+$$
+
+In this sense, when minimizing the KL divergence terms in (2) and (3), we project the classifier output $f_{\theta}(\cdot)$ towards a given distribution (e.g., the uniform distribution). Since $f_{\theta}(\cdot)$ can be viewed as a conditional probability distribution $Q_{Y|X}$, where $Y$ is the class label and $X$ is the input, we consider an extension of the information projection to conditional distributions. We call this extension the classifier projection: it seeks a classifier output $P_{Y|X}^{*}$ in a set $\mathcal{C}$ that is closest (as measured by the expected KL divergence) to a given conditional distribution $P_{Y|X}$. Formally, given a convex set $\mathcal{C}$ of conditional distributions, the classifier projection is defined as
+
+$$
+P_{Y \mid X}^{*} = \underset{Q_{Y \mid X} \in \mathcal{C}}{\arg\min}\, \mathbb{E}_{P_{X}}\left[D_{\mathrm{KL}}\left(Q_{Y \mid X}(\cdot \mid X) \,\|\, P_{Y \mid X}(\cdot \mid X)\right)\right]. \tag{6}
+$$
+
+The KL divergence terms in (2) and (3) are exactly the empirical version of the objective in (6) by taking $Q_{Y|X}(\cdot |X) = f_{\pmb{\theta}}(\cdot)$ and $P_{Y|X}(\cdot |X) = g$ or $P_U$ .
+
+Now, we explain why classifier projection can help overcome catastrophic forgetting. Let the classifier after training task 1 be $P_{Y|X}^{1*} \in \mathcal{C} \subset \Delta_m$, and suppose we have two classifiers for task 2: $P_{Y|X}^2 \notin \mathcal{C}$, and $P_{Y|X}^{2*}$ obtained by projecting $P_{Y|X}^2$ onto $\mathcal{C}$ using (6). The triplet $(P_{Y|X}^{1*}, P_{Y|X}^2, P_{Y|X}^{2*})$ forms a right triangle, and $P_{Y|X}^{1*}$ and $P_{Y|X}^{2*}$ are closer to each other in KL divergence (Lemma 1). Therefore, when evaluating on task 1, the projected task-2 classifier is closer to the task-1 classifier in terms of cross-entropy (obtained from the KL divergence), guaranteeing a smaller change in training loss and accuracy. The following proposition formally summarizes this argument.
+
+Proposition 1. For any classifier $P_{Y|X}^{t - 1*} \in \mathcal{C}$ for task $t - 1$ with data distribution $P_X^{t - 1}$ , and let any classifier for task $t$ be $P_{Y|X}^t \notin \mathcal{C}$ and $P_{Y|X}^{t*}$ be the projected classifier by (6), then
+
+$$
+\mathbb{E}_{P_{Y \mid X}^{t-1*} P_{X}^{t-1}}\left[-\log P_{Y \mid X}^{t} P_{X}^{t-1}\right] \geq \mathbb{E}_{P_{Y \mid X}^{t-1*} P_{X}^{t-1}}\left[-\log P_{Y \mid X}^{t*} P_{X}^{t-1}\right]. \tag{7}
+$$
+
+To implement CPR, we need to pre-define the set of possible classifiers $\mathcal{C}$ for projection. An intuitive best choice is the set of classifiers that perform well on all tasks (obtainable by training on all tasks simultaneously). However, in the CL setting, such classifiers are not available; therefore, we take the set of possible classifiers $\mathcal{C}$ to be a KL divergence ball centered at the uniform distribution $P_U$, i.e.,
+
+$$
+\mathcal{C}\left(P_{U}, \epsilon\right) \triangleq \left\{Q_{Y|X} \in \Delta_{M} \mid \mathbb{E}_{X}\left[D_{\mathrm{KL}}\left(Q_{Y|X} \,\|\, P_{U}\right)\right] \leq \epsilon\right\}.
+$$
+
+We select $P_U$ since it is the centroid of $\Delta_M$ and, hence, the worst-case divergence between any distribution and $P_U$ is at most $\log M$ . From the vantage point of classifier projection, the CPR regularization term in (3) can be viewed as the Lagrange dual of the constraint $Q_{Y|X} \in \mathcal{C}(P_U, \epsilon)$ —the term that projects the classifier of individual tasks towards the uniform distribution in order to minimize changes when training sequential tasks (See Fig. 2).
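To make this concrete, the following sketch (our own construction, not the paper's implementation) approximately moves a softmax output into the ball $\mathcal{C}(P_U, \epsilon)$ by mixing it with the uniform distribution and bisecting on the mixing weight. This mixing is a simple surrogate for the exact I-projection, but it already exhibits the property used above: the result lies in the ball and is closer to $P_U$ than the original output:

```python
import numpy as np

def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def project_to_kl_ball(q, eps, iters=60):
    """Move q into C(P_U, eps) = {p : D_KL(p||P_U) <= eps} by mixing with the
    uniform distribution and bisecting on the mixing weight alpha.
    Illustrative surrogate for the exact I-projection; the divergence along
    the mixture path is monotone in alpha, so bisection is valid."""
    q = np.asarray(q, dtype=float)
    u = np.full(q.size, 1.0 / q.size)
    if kl(q, u) <= eps:
        return q
    lo, hi = 0.0, 1.0  # alpha = 1 gives u itself, which is always feasible
    for _ in range(iters):
        mid = (lo + hi) / 2
        if kl((1 - mid) * q + mid * u, u) <= eps:
            hi = mid
        else:
            lo = mid
    return (1 - hi) * q + hi * u

q = np.array([0.90, 0.05, 0.03, 0.02])   # a confident softmax output
u = np.full(4, 0.25)
p_star = project_to_kl_ball(q, eps=0.1)
assert kl(p_star, u) <= 0.1 + 1e-6       # p* lies in the ball
assert kl(p_star, u) < kl(q, u)          # and is closer to uniform than q was
```
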
+
+
+Figure 2: CPR can be understood as a projection onto a finite radius ball around $P_U$ .
+
+# 3 EXPERIMENTAL RESULTS
+
+We apply CPR to five regularization-based supervised CL methods: EWC (Kirkpatrick et al., 2017), SI (Zenke et al., 2017), MAS (Aljundi et al., 2018), RWalk (Chaudhry et al., 2018), and AGS-CL (Jung et al., 2020), and further analyze CPR via ablation studies and feature map visualizations.
+
+# 3.1 DATA AND EVALUATION METRICS
+
+We select CIFAR-100, CIFAR-10/100 (Krizhevsky et al., 2009), Omniglot (Lake et al., 2015), and CUB200 (Welinder et al., 2010) as benchmark datasets. Note that we ignore the permuted-MNIST dataset (LeCun et al., 1998) since most state-of-the-art algorithms can already achieve near-perfect accuracy on it. CIFAR-100 is divided into 10 tasks, each with 10 classes. CIFAR-10/100 additionally uses CIFAR-10 for pre-training before learning tasks from CIFAR-100. Omniglot has 50 tasks, where each task is a binary-image classification problem on a given alphabet. For these datasets, we used a simple feed-forward convolutional neural network (CNN) architecture. For the more challenging CUB200 dataset, which has 10 tasks with 20 classes each, we used a pre-trained ResNet-18 (He et al., 2016) as the initial model. Training details, model architectures, hyperparameter tuning, and source code are available in the Supplementary Material (SM).
+
+For evaluation, we first let $a_{k,j} \in [0,1]$ be the $j$ -th task accuracy after training the $k$ -th task ( $j \leq k$ ). Then, we used the following three metrics to measure the continual learning performance:
+
+- Average Accuracy (A) is the average accuracy $A_{k}$ on the first $k$ tasks after training the $k$ -th task, i.e., $A_{k} = \frac{1}{k}\sum_{j=1}^{k}a_{k,j}$ . While being a natural metric, Average Accuracy fails to explicitly measure the plasticity and stability of a CL method.
+- Forgetting Measure (F) evaluates stability. Namely, we define the forgetting measure $f_{k}^{j}$ of the $j$ -th task after training $k$ -th task as $f_{k}^{j} = \max_{l \in \{j, \dots, k - 1\}} a_{l,j} - a_{k,j}, \forall j < k$ , and the average forgetting measure $F_{k}$ of a CL method as $F_{k} = \frac{1}{k - 1} \sum_{j = 1}^{k - 1} f_{k}^{j}$ .
+- Intransigence Measure (I) measures plasticity. Let $a_{j}^{\star}$ be the accuracy of a model trained by fine-tuning for the $j$-th task without applying any regularization (in other words, only the task-specific loss is used). The intransigence measure $I_{s,k}$ is then defined as $I_{s,k} = \frac{1}{k - s + 1}\sum_{j=s}^{k}i_{j}$, where $i_{j} = a_{j}^{\star} - a_{j,j}$.
+
+The $F$ and $I$ metrics were originally proposed in (Chaudhry et al., 2018), and we slightly modified their definitions for our usage. Note that a low $F_{k}$ and a low $I_{1,k}$ imply high stability (low forgetting) and high plasticity (good forward transfer) of a CL method, respectively.
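The three metrics above can be computed directly from the accuracy matrix $a_{k,j}$. A small reference sketch, using 1-indexed dictionaries as our own convention:

```python
import numpy as np

# a[k][j]: accuracy on task j after training task k (1-indexed, j <= k).

def average_accuracy(a, k):
    """A_k = (1/k) * sum_{j=1..k} a_{k,j}."""
    return float(np.mean([a[k][j] for j in range(1, k + 1)]))

def forgetting(a, k):
    """F_k = (1/(k-1)) * sum_{j<k} ( max_{l in [j, k-1]} a_{l,j} - a_{k,j} )."""
    return float(np.mean([max(a[l][j] for l in range(j, k)) - a[k][j]
                          for j in range(1, k)]))

def intransigence(a, a_star, s, k):
    """I_{s,k} = (1/(k-s+1)) * sum_{j=s..k} (a*_j - a_{j,j}),
    with a*_j the fine-tuning accuracy on task j."""
    return float(np.mean([a_star[j] - a[j][j] for j in range(s, k + 1)]))

# Toy 2-task example: task 1 drops from 0.90 to 0.80 after learning task 2.
a = {1: {1: 0.90}, 2: {1: 0.80, 2: 0.85}}
a_star = {1: 0.92, 2: 0.88}
assert abs(average_accuracy(a, 2) - 0.825) < 1e-9
assert abs(forgetting(a, 2) - 0.10) < 1e-9
assert abs(intransigence(a, a_star, 1, 2) - 0.025) < 1e-9
```
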
+
+
+Figure 3: Verifying the regularization for wide local minima. (a) Selecting regularization strength for CPR; (b) adding Gaussian noise; (c) analysis using PyHessian.
+
+# 3.2 QUANTIFYING THE ROLE OF WIDE LOCAL MINIMA REGULARIZATION
+
+We first demonstrate the effect of applying CPR with a varying trade-off parameter $\beta$ in (3), taking EWC (Kirkpatrick et al., 2017) trained on CIFAR-100 as a running example. Fig. 3(a) shows how the aforementioned metrics vary as $\beta$ changes over $[0.1, \dots, 1]$. First, we observe that $A_{10}$ steadily increases with $\beta$. Moreover, we can break down the gain in terms of $I_{1,10}$ and $F_{10}$: $I_{1,10}$ monotonically decreases as $\beta$ increases, while $F_{10}$ does not show the same monotonicity, although it also generally decreases with $\beta$. This suggests that enlarged wide local minima are indeed helpful for improving both plasticity and stability. In the subsequent experiments, we selected $\beta$ using validation sets by considering all three metrics; among the $\beta$'s that achieve sufficiently high $A_{10}$, we chose one that reduces $F_{10}$ more than $I_{1,10}$, since improving stability turns out to be more challenging. (In fact, in some experiments, simply maximizing $A_{10}$ would choose a $\beta$ that yields the lowest $I_{1,10}$ but an even higher $F_{10}$ than the case without CPR.) For comparison, we also provide experiments in the SM using the Deep Mutual Learning (Zhang et al., 2018) and Label Smoothing (Szegedy et al., 2016) regularizers for achieving wide local minima; their performance was slightly worse than CPR's.
+
+With the best $\beta$ in hand, Fig. 3(b) experimentally verifies whether using CPR indeed widens the local minima. Following the methodology in Zhang et al. (2018), we perturb the network parameters of EWC and EWC+CPR after learning the final task by adding Gaussian noise with increasing $\sigma$, then measure the increase in test loss for each task (for CIFAR-100). From the figure, we clearly observe that EWC+CPR shows a smoother increase in test loss than EWC (without CPR) on each task. This result empirically confirms that CPR indeed promotes wide local minima for each task in CL settings and validates our initial intuition given in Sec. 2.1. In the SM, we repeat the same experiment with MAS (Aljundi et al., 2018).
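The perturbation test of Fig. 3(b) can be sketched generically: add zero-mean Gaussian noise of increasing $\sigma$ to the trained parameters and record the average rise in loss; a wide minimum shows a slower rise. Here `loss` is a stand-in quadratic surface of our own, not the paper's network loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Stand-in loss surface; in the paper this would be a task's test loss
    # evaluated at the (perturbed) network parameters.
    return float(np.sum(theta ** 2))

def perturbation_curve(theta, sigmas, trials=20):
    # Average increase in loss when adding N(0, sigma^2) noise to the
    # parameters, as in the wide-minima check of Fig. 3(b):
    # flatter minima show a slower increase.
    base = loss(theta)
    return [float(np.mean([loss(theta + rng.normal(0.0, s, theta.shape)) - base
                           for _ in range(trials)]))
            for s in sigmas]

curve = perturbation_curve(np.zeros(10), sigmas=[0.0, 0.1, 0.2])
assert curve[0] == 0.0      # no noise, no change in loss
assert curve[1] < curve[2]  # larger perturbations hurt more on average
```

Comparing two trained models, the one whose curve rises more slowly sits in a wider minimum, which is exactly the comparison made between EWC and EWC+CPR.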
+
+To plot the loss landscape of each model directly, we used PyHessian (Yao et al., 2019), a framework that plots the loss landscape of neural networks by perturbing the model parameters along the first and second Hessian eigenvectors. As an example, Figure 3(c) plots and compares the loss landscapes on the training data of the 2nd task of CIFAR-100 for the networks trained with EWC and EWC+CPR, respectively. It clearly shows that, by applying CPR, the loss landscape becomes much wider than that of vanilla EWC. We consistently observe this trend across all tasks, which are visualized in the SM, and we believe this confirms our intuition for adding the CPR term.
+
+# 3.3 COMPARISON WITH STATE-OF-THE-ART
+
+Next, we apply CPR to the state-of-the-art regularization-based CL methods on the benchmark datasets and measure the performance improvement with the three metrics in Section 3.1. For the regularization strengths, we first select the best $\lambda$ without CPR, then choose $\beta$ according to the procedure in Section 3.2. The results in Table 1 are averaged over 10 repeated experiments with different random initializations and task sequences using the chosen $(\lambda, \beta)$. The hyperparameters are reported in the SM.
+
+CIFAR-100 and CIFAR-10/100 In Table 1 and Fig. 4(a), we observe that CPR consistently improves all regularization-based methods for all tested datasets in terms of increasing $A_{10}$ and decreasing $I_{1,10}$, and it also consistently decreases $F_{10}$. Additionally, we find that for CIFAR-10/100, the ordering of the CIFAR-10 and CIFAR-100 tasks affects the performance of CPR; namely, in the SM, we show that when the CIFAR-10 tasks are placed somewhere other than at the beginning, the gain due to CPR becomes much larger.
+
+Table 1: Experimental results on CL benchmark dataset with and without CPR.
+
+| Dataset | Method | A10 w/o CPR | A10 w/ CPR | A10 diff (W-W/o) | F10 w/o CPR | F10 w/ CPR | F10 diff (W-W/o) | I1,10 w/o CPR | I1,10 w/ CPR | I1,10 diff (W-W/o) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| CIFAR100 (T=10) | EWC | 0.6002 | 0.6328 | +0.0326 (+5.2%) | 0.0312 | 0.0285 | -0.0027 (-8.7%) | 0.1419 | 0.1117 | -0.0302 (-21.3%) |
+| | SI | 0.6141 | 0.6476 | +0.0336 (+5.5%) | 0.1106 | 0.0999 | -0.0107 (-9.7%) | 0.0566 | 0.0327 | -0.0239 (-42.2%) |
+| | MAS | 0.6172 | 0.6442 | +0.0270 (+4.4%) | 0.0416 | 0.0460 | -0.0011 (-2.6%) | 0.1155 | 0.0778 | -0.0257 (-22.2%) |
+| | Rwalk | 0.5784 | 0.6366 | +0.0581 (+10.0%) | 0.0937 | 0.0769 | -0.0169 (-18.0%) | 0.1074 | 0.0644 | -0.0430 (-40.0%) |
+| | AGS-CL | 0.6369 | 0.6615 | +0.0246 (+3.9%) | 0.0259 | 0.0247 | -0.0012 (-4.63%) | 0.1100 | 0.0865 | -0.0235 (-24.4%) |
+| CIFAR10/100 (T=11) | EWC | 0.6950 | 0.7055 | +0.0105 (+1.5%) | 0.0228 | 0.0181 | -0.0048 (-21.1%) | 0.1121 | 0.1058 | -0.0062 (-5.5%) |
+| | SI | 0.7127 | 0.7186 | +0.0059 (+0.8%) | 0.0459 | 0.0408 | -0.0051 (-11.1%) | 0.0733 | 0.0721 | -0.0012 (-1.6%) |
+| | MAS | 0.7239 | 0.7257 | +0.0017 (+0.2%) | 0.0479 | 0.0476 | -0.0003 (-0.6%) | 0.0603 | 0.0588 | -0.0015 (-2.5%) |
+| | Rwalk | 0.6934 | 0.7046 | +0.0112 (+1.6%) | 0.0738 | 0.0707 | -0.0031 (-4.2%) | 0.0672 | 0.0589 | -0.0084 (-12.5%) |
+| | AGS-CL | 0.7580 | 0.7613 | +0.0032 (+0.4%) | 0.0009 | 0.0009 | 0 | 0.0731 | 0.0697 | -0.0034 (-4.7%) |
+| Omniglot (T=50) | EWC | 0.6632 | 0.8387 | +0.1755 (+26.5%) | 0.2096 | 0.0321 | -0.1776 (-84.7%) | -0.0227 | -0.0239 | -0.0012 (-5.3%) |
+| | SI | 0.8478 | 0.8621 | +0.0143 (+1.7%) | 0.0247 | 0.0167 | -0.0079 (-32.0%) | -0.0258 | -0.0282 | -0.0065 (-25.3%) |
+| | MAS | 0.8401 | 0.8679 | +0.0278 (+3.3%) | 0.0316 | 0.0101 | -0.0215 (-68.0%) | -0.0247 | -0.0314 | -0.0067 (-27.1%) |
+| | Rwalk | 0.8056 | 0.8497 | +0.0440 (+5.5%) | 0.0644 | 0.0264 | -0.0380 (-59.0%) | -0.0226 | -0.0294 | -0.0068 (-30.1%) |
+| | AGS-CL | 0.8553 | 0.8805 | +0.0253 (+3.0%) | 0 | 0 | 0 | 0.0323 | 0.0046 | -0.0277 (-85.8%) |
+| CUB200 (T=10) | EWC | 0.5746 | 0.6098 | +0.0348 (+6.1%) | 0.0811 | 0.0807 | -0.0004 (-0.5%) | 0.1011 | 0.0667 | -0.0345 (-34.1%) |
+| | SI | 0.6047 | 0.6232 | +0.0157 (+2.6%) | 0.0549 | 0.0474 | -0.0075 (-13.7%) | 0.0918 | 0.0827 | -0.0091 (-9.9%) |
+| | MAS | 0.5842 | 0.6123 | +0.0281 (+4.8%) | 0.1188 | 0.1030 | -0.0158 (-13.3%) | 0.0575 | 0.0436 | -0.0139 (-24.2%) |
+| | Rwalk | 0.6078 | 0.6324 | +0.0247 (+4.1%) | 0.0811 | 0.0601 | -0.0210 (-25.9%) | 0.0679 | 0.0621 | -0.0058 (-8.5%) |
+| | AGS-CL | 0.5403 | 0.5623 | +0.0220 (+4.07%) | 0.0750 | 0.0692 | -0.0058 (-7.7%) | 0.1408 | 0.1241 | -0.0167 (-11.7%) |
+
+
+Figure 4: Experimental results on CL benchmark datasets. (a) CIFAR-100; (b) Omniglot; (c) CUB200.
+
+Omniglot This dataset is well-suited to evaluating CL with long task sequences (50 tasks). In Table 1, it is clear that CPR considerably increases both plasticity and stability in long task sequences. In particular, CPR significantly decreases $F_{10}$ for EWC and leads to a large improvement in $A_{10}$. Interestingly, unlike on the previous datasets, $I_{1,10}$ is negative, implying that past tasks help in learning new tasks on the Omniglot dataset; when applying CPR, the gains in $I_{1,10}$ are even larger. Furthermore, Fig. 4(b) indicates that applying CPR leads to less variation in the $A_{t}$ curves.
+
+CUB200 The results in Table 1 and Fig. 4(c) show that CPR is also effective when using a pre-trained ResNet model for all methods and metrics.
+
+# 3.4 ABLATION STUDY
+
+
+Figure 5: Ablation studies on CL with wide local minima. (a) $f_{10}^{t}$; (b) $I_{t+1,10}$; (c) $A_{10}$ and $a_{t,10}$.
+
+We study the ablation of CPR on the regularization-based methods using CIFAR-100 with the best $(\lambda, \beta)$ found previously, and report results averaged over 5 random initializations and task sequences in Fig. 5. The ablation is performed in two cases: (i) using CPR only at task $t$, denoted as EWC + CPR (only $t$-th task), and (ii) using CPR except at task $t$, denoted as EWC + CPR (w/o $t$-th task). Fig. 5(a) shows $f_{10}^{t}$, the amount of forgetting for task $t$ after learning task 10, and Fig. 5(b) shows $I_{t+1,10}$, the gap with respect to fine-tuning after task $t$. In Fig. 5(a), we observe that CPR helps to decrease $f_{10}^{t}$ for each task whenever it is used (except for task 3), whereas $f_{10}^{t}$ of EWC + CPR (w/o $t$-th task) shows a more random tendency. On average, EWC + CPR reduces forgetting on all tasks, demonstrating the effectiveness of applying CPR throughout. Notably, in Fig. 5(b), $I_{t+1,10}$ of EWC + CPR (only $t$-th task) is lower than that of EWC + CPR (w/o $t$-th task) only when $t = 1$; this indicates that CPR is most beneficial in terms of plasticity when it is applied as early as possible in the learning sequence. EWC + CPR again achieves the lowest (i.e., most favorable) $I_{t+1,10}$. Fig. 5(c) provides further evidence that applying CPR for $t = 1$ yields better accuracy. Moreover, the accuracy of EWC + CPR (w/o $t$-th task) gets closer to that of the full EWC + CPR as $t$ grows, consistent with the decreasing difference in $I_{t+1,10}$ between the two in Fig. 5(b); EWC + CPR still gives the best $A_{10}$ and the best individual $a_{t,10}$ accuracies. We emphasize that a model converging to wide local minima from the first task onward considerably helps the training of future tasks as well, i.e., a significant increase in plasticity can be achieved. Motivated by this finding, we also conducted an experiment in which the model must learn unscheduled additional tasks with CPR; the encouraging results are reported in the SM.
+
+In the SM, we also visualize how the learned representations change with the additional CPR term. Using UMAP (McInnes et al., 2018), we show that the learned representations drift less as learning continues with CPR, which again indirectly corroborates the existence of wide local minima. Due to space constraints, we defer the detailed explanation and visualizations to the SM.
+
+# 3.5 APPLYING CPR TO CONTINUAL REINFORCEMENT LEARNING
+
+To show the effectiveness of CPR in various domains, we applied CPR to EWC (Kirkpatrick et al., 2017), MAS (Aljundi et al., 2018) and AGS-CL (Jung et al., 2020) in continual RL. We followed exactly the same experimental settings as AGS-CL and therefore experimented on 8 different Atari (Brockman et al., 2016) tasks, i.e., {StarGunner - Boxing - VideoPinball - CrazyClimber - Gopher - Robotank - DemonAttack - NameThisGame}. We used PPO (Schulman et al., 2017) for reinforcement learning and applied CPR to PPO simply by adding the KL divergence term of Eq. 2. We trained each method with three different seeds and report the averaged results. More detailed experimental settings are given in the SM.
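The CPR term mentioned above is a KL divergence that pulls the classifier (or policy) output toward the uniform distribution. Below is a minimal numpy sketch of that penalty added to a generic task loss; `beta` and the function names are illustrative assumptions, and all PPO-specific machinery is omitted:

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    z = z - z.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cpr_term(logits):
    # KL divergence between the classifier output p and the uniform
    # distribution u over C classes: D_KL(p || u) = log C - H(p).
    p = softmax(logits)
    c = p.size
    return float(np.sum(p * np.log(p * c)))

def total_loss(task_loss, logits, beta=0.5):
    # Hypothetical composition: task loss plus the CPR penalty, weighted
    # by beta (the single extra hyperparameter CPR introduces).
    return task_loss + beta * cpr_term(logits)

print(round(cpr_term([0.0, 0.0, 0.0, 0.0]), 6))  # uniform output -> 0.0
```

A confident (peaked) output gives a penalty close to the maximum value $\log C$, so minimizing this term pushes the softmax toward uniformity, which is exactly the wide-minima-promoting effect CPR exploits.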
+
+
+Figure 6: Accumulated normalized reward
+
+Figure 6 shows the experimental results; fine-tuning denotes continual learning without regularization. The x-axis is the training step and the y-axis is the normalized accumulated reward, i.e., the sum of each task's reward normalized by the corresponding reward of fine-tuning. We observe that CPR increases the accumulated reward of each method. From further analysis, we found that training with the best hyperparameters for each method already does not suffer from catastrophic forgetting on previous tasks, but the methods differ in their ability to learn a new task well. Indeed, CPR increases the average reward per task by $27\%$ (EWC), $6\%$ (MAS) and $5\%$ (AGS-CL), and we believe this is why CPR improves the accumulated reward of each method.
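The y-axis quantity described above can be sketched in a few lines; the function name and the numbers are illustrative only, not results from the paper:

```python
def normalized_accumulated_reward(task_rewards, finetune_rewards):
    # Sum of per-task rewards, each normalized by the fine-tuning reward
    # on the same task (the quantity on the y-axis of Figure 6).
    return sum(r / f for r, f in zip(task_rewards, finetune_rewards))

# Two hypothetical tasks: half the fine-tuning reward on the first,
# equal to fine-tuning on the second.
print(normalized_accumulated_reward([50.0, 100.0], [100.0, 100.0]))  # 1.5
```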
+
+# 4 RELATED WORK
+
+Several methods have been recently proposed to reduce catastrophic forgetting (see Parisi et al. (2019) for a survey). In this paper, we mainly focus on the regularization-based CL methods (Li & Hoiem, 2017; Kirkpatrick et al., 2017; Aljundi et al., 2018; Chaudhry et al., 2018; Zenke et al., 2017; Nguyen et al., 2018; Ahn et al., 2019; Aljundi et al., 2019; Jung et al., 2020). Broadly speaking, the motivation behind regularization-based CL is to measure the importance of model parameters on previous tasks; this measure is then used in a regularization term to overcome catastrophic forgetting when training on new tasks. Consequently, the main research focus of regularization-based CL has been devising metrics for quantifying weight importance on previous tasks (e.g., Kirkpatrick et al. (2017); Aljundi et al. (2018); Chaudhry et al. (2018); Zenke et al. (2017); Nguyen et al. (2018); Ahn et al. (2019); Jung et al. (2020)). In contrast, here we focus on developing a general method for augmenting regularization-based CL instead of proposing (yet another) new metric for measuring weight importance.
+
+The work that shares a philosophy similar to ours is Aljundi et al. (2019), which encourages sparse representations for each task by adding an extra regularizer to regularization-based CL methods. Note that the motivation of Aljundi et al. (2019), namely imposing sparsity of neuron activations, is considerably different from ours, which promotes wide local minima. Moreover, whereas Aljundi et al. (2019) focuses on average accuracy, we carefully evaluate the advantage of the added CPR regularization in terms of increasing both the plasticity and the stability of CL, in addition to accuracy.
+
+Several papers have recently proposed methods that promote wide local minima in neural networks in order to improve single-task generalization, including using small mini-batch sizes (Keskar et al., 2017), regularizing the output of the softmax layer (Szegedy et al., 2016; Pereyra et al., 2017), using an optimizer that constructs a local-entropy-based objective function (Chaudhari et al., 2019), and distilling knowledge from other models (Zhang et al., 2018). We build upon this prior work and investigate the role of wide local minima in CL.
+
+Mirzadeh et al. (2020) recently proposed a point of view similar to ours on the advantages of wide local minima in continual learning. However, our work differs from theirs in the following aspects. First, the core motivations are different. Mirzadeh et al. (2020) begin by defining a metric for forgetting and approximating it with a second-order Taylor expansion. They mainly focus on stability and argue that forgetting decreases if the model converges to a wide local minimum during continual learning. In contrast, as shown in Fig. 1, our paper starts from a geometric intuition, from which we further argue that if the model achieves a wide local minimum for each task, not only stability but also plasticity improves; in other words, the stability-plasticity trade-off in continual learning can be improved on both sides simultaneously. Second, the proposed methods for converging to a wide local minimum differ. Mirzadeh et al. (2020) control three training elements: learning rate, mini-batch size and dropout. We instead use classifier projection as a regularizer that promotes wide local minima, so our method requires only one additional hyperparameter and its complexity is much lower. Third, while Mirzadeh et al. (2020) only empirically analyzed forgetting in CL, we propose a more principled theoretical interpretation of CPR in terms of information projection. Fourth, unlike Mirzadeh et al. (2020), who considered only a single-epoch setting on a limited set of benchmarks, we conduct extensive experiments in the multiple-epoch setting and in diverse scenarios: four classification benchmarks (CIFAR-100, CIFAR-10/100, Omniglot, CUB-200) and continual reinforcement learning on 8 Atari tasks. Finally, the experimental analyses differ: using UMAP, we visualize how a model's feature maps change with and without CPR, and through an ablation study we examine how plasticity and stability improve after applying CPR.
+
+# 5 CONCLUDING REMARK
+
+We proposed a simple classifier-projection regularization (CPR) which can be combined with any regularization-based continual learning (CL) method. Through extensive experiments in supervised and reinforcement learning, we demonstrated that, by converging to a wide local minimum at each task, CPR can significantly increase both the plasticity and the stability of CL. These encouraging results indicate that regularizers promoting wide local minima play a critical role in successful CL. As a theoretical interpretation, we argued that the additional term in CPR can be understood as a projection of the conditional distribution given by the classifier's output onto a ball centered at the uniform distribution.
+
+# ACKNOWLEDGMENT
+
+This work was supported in part by NRF Mid-Career Research Program [NRF-2021R1A2C2007884] and IITP grant [No.2019-0-01396, Development of framework for analyzing, detecting, mitigating of bias in AI model and training data], funded by the Korean government (MSIT).
+
+# REFERENCES
+
+Hongjoon Ahn, Sungmin Cha, Donggyu Lee, and Taesup Moon. Uncertainty-based continual learning with adaptive regularization. In Advances in Neural Information Processing Systems (NeurIPS), pp. 4394-4404, 2019.
+Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 139-154, 2018.
+Rahaf Aljundi, Marcus Rohrbach, and Tinne Tuytelaars. Selfless sequential learning. In International Conference on Learning Representations (ICLR), 2019.
+Shun-ichi Amari, S Ikeda, and H Shimokawa. Information geometry of α-projection in mean field approximation. Advanced Mean Field Methods, pp. 241-258, 2001.
+Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv:1606.01540, 2016.
+Gail A Carpenter and Stephen Grossberg. Art 2: Self-organization of stable category recognition codes for analog input patterns. Applied Optics, 26(23):4919-4930, 1987.
+Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, and Riccardo Zecchina. Entropy-sgd: Biasing gradient descent into wide valleys. Journal of Statistical Mechanics: Theory and Experiment, 2019(12):124018, 2019.
+Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip HS Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 532-547, 2018.
+Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.
+Imre Csiszár. Sanov property, generalized I-projection and a conditional limit theorem. The Annals of Probability, pp. 768-793, 1984.
+Imre Csiszár and Frantisek Matus. Information projections revisited. IEEE Transactions on Information Theory, 49(6):1474-1490, 2003.
+Imre Csiszár and Paul C Shields. Information theory and statistics: A tutorial. Now Publishers Inc, 2004.
+Siavash Golkar, Michael Kagan, and Kyunghyun Cho. Continual learning via neural pruning. arXiv preprint arXiv:1903.04476, 2019.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 770-778, 2016.
+Sangwon Jung, Hongjoon Ahn, Sungmin Cha, and Taesup Moon. Continual learning with node-importance based adaptive group sparse regularization. In Advances in Neural Information Processing Systems (NeurIPS), volume 33, pp. 3647-3658. Curran Associates, Inc., 2020.
+Ronald Kemker and Christopher Kanan. Fearnet: Brain-inspired model for incremental learning. In International Conference on Learning Representations (ICLR), 2018.
+Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations (ICLR), 2017.
+James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521-3526, 2017. ISSN 0027-8424. doi: 10.1073/pnas.1611835114.
+
+Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
+Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
+Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
+Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12):2935-2947, 2017.
+David Lopez-Paz and Marc Aurelio Ranzato. Gradient episodic memory for continual learning. In Advances in Neural Information Processing System (NeurIPS), pp. 6467-6476. 2017.
+Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pp. 109-165. Elsevier, 1989.
+Leland McInnes, John Healy, and James Melville. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426, 2018.
+Martial Mermillod, Aurélia Bugaiska, and Patrick Bonin. The stability-plasticity dilemma: Investigating the continuum from catastrophic forgetting to age-limited learning effects. Frontiers in Psychology, 4:504, 2013.
+Seyed Iman Mirzadeh, Mehrdad Farajtabar, Razvan Pascanu, and Hassan Ghasemzadeh. Understanding the role of training regimes in continual learning. In Advances in Neural Information Processing Systems (NeurIPS), volume 33, pp. 7308-7320. Curran Associates, Inc., 2020.
+Kevin P Murphy. Machine learning: a probabilistic perspective. MIT press, 2012.
+Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. Variational continual learning. In International Conference on Learning Representations (ICLR), 2018.
+German I Parisi, Ronald Kemker, Jose L Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. Neural Networks, 113:54-71, 2019.
+Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017.
+Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (CVPR), pp. 2001-2010, 2017.
+Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
+John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In Advances in Neural Information Processing System (NeurIPS), pp. 2990-2999. 2017.
+Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 2818-2826, 2016.
+John MacLaren Walsh and Phillip A Regalia. Belief propagation, dykstra's algorithm, and iterated information projections. IEEE Transactions on Information Theory, 56(8):4114-4128, 2010.
+Peter Welinder, Steve Branson, Takeshi Mita, Catherine Wah, Florian Schroff, Serge Belongie, and Pietro Perona. Caltech-ucsd birds 200. 2010.
+
+Zhewei Yao, Amir Gholami, Kurt Keutzer, and Michael Mahoney. PyHessian: Neural networks through the lens of the Hessian. arXiv preprint arXiv:1912.07145, 2019.
+Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. In International Conference on Learning Representations (ICLR), 2018.
+Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In International Conference on Machine Learning (ICML), pp. 3987-3995, 2017.
+Ying Zhang, Tao Xiang, Timothy M Hospedales, and Huchuan Lu. Deep mutual learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4320-4328, 2018.
\ No newline at end of file
diff --git a/cprclassifierprojectionregularizationforcontinuallearning/images.zip b/cprclassifierprojectionregularizationforcontinuallearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7f1782d2e800d147db5e05fe07384a44dcdd47c6
--- /dev/null
+++ b/cprclassifierprojectionregularizationforcontinuallearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0023c2ff80904e7a1c2863fab61f62212e54b2462f1ea369904989ef7540936
+size 445963
diff --git a/cprclassifierprojectionregularizationforcontinuallearning/layout.json b/cprclassifierprojectionregularizationforcontinuallearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6e6f964db0c9e951d792ec95828549ec3ad7f60d
--- /dev/null
+++ b/cprclassifierprojectionregularizationforcontinuallearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e56cee63f39516ac3571ed193eb5f0d63a6e24bf04ea0fad1a72e7d36d9f5520
+size 444478
diff --git a/ctnetchanneltensorizationnetworkforvideoclassification/defa873a-dabd-4cbf-91cb-23fd869c9eea_content_list.json b/ctnetchanneltensorizationnetworkforvideoclassification/defa873a-dabd-4cbf-91cb-23fd869c9eea_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2de721f34e0cdbe1e327792d638af62de5f631a9
--- /dev/null
+++ b/ctnetchanneltensorizationnetworkforvideoclassification/defa873a-dabd-4cbf-91cb-23fd869c9eea_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a5d75aa95d4b0d5bd008023eea1388e3b030aeb2741de94abc7f2b72e2e77555
+size 87066
diff --git a/ctnetchanneltensorizationnetworkforvideoclassification/defa873a-dabd-4cbf-91cb-23fd869c9eea_model.json b/ctnetchanneltensorizationnetworkforvideoclassification/defa873a-dabd-4cbf-91cb-23fd869c9eea_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..65118897b5f390ae2feb037ed72221e91a2297ae
--- /dev/null
+++ b/ctnetchanneltensorizationnetworkforvideoclassification/defa873a-dabd-4cbf-91cb-23fd869c9eea_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5534b0aa39afc32db3e5aeaf9adfd44e361af6ef1cc8440c57a707b442c0cf46
+size 100489
diff --git a/ctnetchanneltensorizationnetworkforvideoclassification/defa873a-dabd-4cbf-91cb-23fd869c9eea_origin.pdf b/ctnetchanneltensorizationnetworkforvideoclassification/defa873a-dabd-4cbf-91cb-23fd869c9eea_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c0a02dea49ba64af5d94ee7c113fd20572a1867b
--- /dev/null
+++ b/ctnetchanneltensorizationnetworkforvideoclassification/defa873a-dabd-4cbf-91cb-23fd869c9eea_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:52e4a702bbf420cd64dbe294dbdc1c32b9c436389bcf72d70f9c7d23c93958c4
+size 7125595
diff --git a/ctnetchanneltensorizationnetworkforvideoclassification/full.md b/ctnetchanneltensorizationnetworkforvideoclassification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..36c3a44e3d1b2e273114767afce426833f062cbd
--- /dev/null
+++ b/ctnetchanneltensorizationnetworkforvideoclassification/full.md
@@ -0,0 +1,293 @@
+# CT-NET: CHANNEL TENSORIZATION NETWORK FOR VIDEO CLASSIFICATION
+
+Kunchang Li $^{12*}$ , Xianhang Li $^{14*}$ , Yali Wang $^{13*}$ , Jun Wang ${}^{4}$ & Yu Qiao $^{13\dagger}$
+
+$^{1}$ Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
+
+$^{2}$ University of Chinese Academy of Sciences
+
+$^{3}$ SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society
+
+4University of Central Florida
+
+{kc.li, yl.wang, yu.qiao}@siat.ac.cn
+
+xianhangli@knights.ucf.edu, Jun.Wang@ucf.edu
+
+# ABSTRACT
+
+3D convolution is powerful for video classification but often computationally expensive, so recent studies mainly focus on decomposing it along the spatial-temporal and/or channel dimensions. Unfortunately, most approaches fail to achieve a preferable balance between convolutional efficiency and feature-interaction sufficiency. For this reason, we propose a concise and novel Channel Tensorization Network (CT-Net), which treats the channel dimension of the input feature as a multiplication of $K$ sub-dimensions. On one hand, it naturally factorizes convolution along multiple dimensions, leading to a light computation burden. On the other hand, it can effectively enhance feature interaction between different channels, and progressively enlarge the 3D receptive field of such interaction to boost classification accuracy. Furthermore, we equip our CT-Module with a Tensor Excitation (TE) mechanism, which learns to exploit spatial, temporal and channel attention in a high-dimensional manner to improve the cooperative power of all feature dimensions in our CT-Module. Finally, we flexibly adapt ResNet as our CT-Net. Extensive experiments are conducted on several challenging video benchmarks, e.g., Kinetics-400, Something-Something V1 and V2. Our CT-Net outperforms a number of recent SOTA approaches in terms of accuracy and/or efficiency.
+
+# 1 INTRODUCTION
+
+3D convolution has been widely used to learn spatial-temporal representations for video classification (Tran et al., 2015; Carreira & Zisserman, 2017). However, over-parameterization often makes it computationally expensive and hard to train. To alleviate this difficulty, recent studies mainly focus on decomposing 3D convolution (Tran et al., 2018; 2019). One popular approach is spatial-temporal factorization (Qiu et al., 2017; Tran et al., 2018; Xie et al., 2018), which can reduce overfitting by replacing 3D convolution with a 2D spatial convolution and a 1D temporal convolution. But it still introduces unnecessary computation, since both the spatial and the temporal convolution are performed over all the feature channels. To further decrease this cost, channel separation has recently been developed by operating 3D convolution in the depth-wise manner (Tran et al., 2019). However, it inevitably loses accuracy due to the lack of feature interaction between different channels; as compensation, it has to introduce point-wise convolution to preserve interaction, at extra computational cost. So a natural question arises: how can we construct an effective 3D convolution that achieves a preferable trade-off between efficiency and accuracy for video classification?
+
+
+Figure 1: Simple illustration of channel tensorization ( $K = 2$ ). We tensorize the channel dimension of the input feature as a multiplication of $K$ sub-dimensions. Via performing spatial/temporal tensor separable convolution along each sub-dimension, we can achieve a preferable balance between convolutional efficiency and feature-interaction sufficiency. See the Introduction for further explanation.
+
+| Method | 3D Convolution (t × h × w) | Conv. Efficiency (Spatial-temporal) | Conv. Efficiency (Channel) | Interact Manner | Interact Field¹ |
+| --- | --- | --- | --- | --- | --- |
+| C3D (Tran et al., 2015) | Full: 3 × 3 × 3 | ✗ | ✗ | STC | $3^3$ |
+| R(2+1)D (Tran et al., 2018) | Full: 1 × 3 × 3; Full: 3 × 1 × 1 | ✓ | ✗ | SC; TC | $3^3$ |
+| CSN (Tran et al., 2019) | Full: 1 × 1 × 1; DW: 3 × 3 × 3 | ✗ | ✓ | C; ST | $3^3$ |
+| Our CT-Net ($C = C_1 \times \dots \times C_K$) | $C_1$: $C_1 \times \dots \times 1 \times (1 \times 3 \times 3 + 3 \times 1 \times 1)$; ...; $C_K$: $1 \times \dots \times C_K \times (1 \times 3 \times 3 + 3 \times 1 \times 1)$ | ✓ | ✓ | STC$_1$, ..., STC$_K$ | $(2K+1)^3$ |
+
+¹ Interact Field means the receptive field for feature interaction.
+
+Table 1: Two design principles to build effective video representation and efficient convolution.
+
+This paper attempts to address this question by investigating two design principles. (1) Convolutional Efficiency. As shown in Table 1, current designs of spatial-temporal convolution mainly focus on decomposition from either spatial-temporal (Tran et al., 2018) or channel dimension (Tran et al., 2019). To enhance convolutional efficiency, we consider decomposing convolution in a higher dimension with a novel representation of feature tensor. (2) Feature-Interaction Sufficiency. Table 1 clearly shows that, for current decomposition approaches (Tran et al., 2018; 2019), feature interaction only contains one or two of spatial, temporal and channel dimensions at each sub-operation. Such a partial interaction manner would reduce classification accuracy. On one hand, it decreases the discriminative power of video representation, due to the lack of joint learning on all the dimensions. On the other hand, it restricts feature interaction in a limited receptive field, which ignores rich context from a larger 3D region. Hence, to boost classification accuracy, each sub-operation should achieve feature interaction on all the dimensions, and the receptive field of such interaction should be progressively enlarged as the number of sub-operations increases.
+
+Based on these desirable principles, we design a novel and concise Channel Tensorization Module (CT-Module). Specifically, we propose to tensorize the channel dimension of the input feature as a multiplication of $K$ sub-dimensions, i.e., $C = C_1 \times C_2 \times \dots \times C_K$ . Via performing spatial/temporal separable convolution along each sub-dimension, we can effectively achieve convolutional efficiency and feature-interaction sufficiency. For better understanding, we use the case of $K = 2$ as a simple illustration in Figure 1. First, we tensorize the input channel into $C = C_1 \times C_2$ . Naturally, we separate the convolution into distinct ones along each sub-dimension; e.g., for the $1^{st}$ sub-dimension, we apply our spatial-temporal tensor separable convolution with the size $C_1 \times 1 \times t \times h \times w$ , which allows us to achieve convolutional efficiency on all the spatial, temporal and channel dimensions. After that, we sequentially perform the tensor separable convolution sub-dimension by sub-dimension. As a result, we can progressively achieve feature interaction over all the channels and enlarge the spatial-temporal receptive field. For example, after operating the $1^{st}$ tensor separable convolution on the $1^{st}$ sub-dimension, $C_1$ channels interact, and the 3D receptive field of such interaction is $3 \times 3 \times 3$ . Via further operating the $2^{nd}$ tensor separable convolution on the $2^{nd}$ sub-dimension, all $C_1 \times C_2 = C$ channels have feature interaction, and the 3D receptive field of such interaction becomes $5 \times 5 \times 5$ . This clearly satisfies our principle of feature-interaction sufficiency.
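The progressive channel interaction described above can be checked with a toy numpy computation: mixing a one-hot channel tensor first along the $C_1$ axis and then along the $C_2$ axis reaches all $C = C_1 \times C_2$ channels in two steps. The dense all-ones matrices below are an illustrative stand-in for the learned tensor separable convolutions (spatial-temporal extent ignored):

```python
import numpy as np

C1, C2 = 3, 4                   # channel tensorization: C = C1 * C2 = 12
x = np.zeros((C1, C2))
x[0, 0] = 1.0                   # impulse in a single input channel

W1 = np.ones((C1, C1))          # step 1: mix only along the C1 sub-dimension
W2 = np.ones((C2, C2))          # step 2: mix only along the C2 sub-dimension

after_step1 = W1 @ x            # only C1 channels carry the signal so far
after_step2 = after_step1 @ W2  # now all C1 * C2 channels interact

print(np.count_nonzero(after_step1), np.count_nonzero(after_step2))  # 3 12
```

Each step mixes only one sub-dimension (cheap), yet the composition spreads information to every channel, which is the feature-interaction sufficiency argument in miniature.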
+
+We summarize our contributions in the following. First, we design a novel Channel Tensorization Module (CT-Module), which can achieve convolutional efficiency and feature-interaction sufficiency via progressively performing spatial/temporal tensor separable convolution along each sub-dimension of the tensorized channel. Second, we equip the CT-Module with a distinct Tensor Excitation (TE) mechanism, which can further activate the video features of each sub-operation by spatial, temporal and channel attention in a tensor-wise manner. Subsequently, we apply the full module in a residual block and flexibly adapt 2D ResNet as our Channel Tensorization Network (CT-Net). In this way, we can gradually enhance feature interaction over a broader 3D receptive field and learn the key spatial-temporal representation with light computation. Finally, we conduct extensive experiments on a number of popular and challenging benchmarks, e.g., Kinetics (Carreira & Zisserman, 2017), Something-Something V1 and V2 (Goyal et al., 2017b). Our CT-Net outperforms the state-of-the-art methods in terms of classification accuracy and/or computation cost.
+
+# 2 RELATED WORKS
+
+2D CNN for video classification. 2D CNN is a straightforward but useful method for video classification (Karpathy et al., 2014; Simonyan & Zisserman, 2014; Wang et al., 2016; Liu et al., 2020; Jiang et al., 2019). For example, Two-stream methods (Simonyan & Zisserman, 2014) learn video representations by fusing the features from RGB and optical flow respectively. Instead of sampling a single RGB frame, TSN (Wang et al., 2016) proposes a sparse temporal sampling strategy to learn video representations. To further improve accuracy, TSM (Lin et al., 2019) proposes a zero-parameter temporal shift module to exchange information with adjacent frames. However, these methods may lack the capacity of learning spatial-temporal interaction comprehensively, which often reduces their discriminative power to recognize complex human actions.
+
+3D CNN for video classification. 3D CNN has been widely used to better capture rich spatial-temporal context (Tran et al., 2015; Carreira & Zisserman, 2017; Feichtenhofer et al., 2019; Sudhakaran et al., 2020; Feichtenhofer, 2020). However, it introduces many parameters, which leads to a difficult optimization problem and a large computational load. To resolve this issue, I3D (Carreira & Zisserman, 2017) inflates all the 2D convolution kernels pre-trained on ImageNet, which helps optimization. Other works factorize the 3D convolution kernel to reduce complexity, such as P3D (Qiu et al., 2017) and $\mathrm{R}(2 + 1)\mathrm{D}$ (Tran et al., 2018). Recently, CSN (Tran et al., 2019) operates 3D convolution in the depth-wise manner. Nevertheless, none of these methods achieves a good trade-off between accuracy and efficiency. To tackle this challenge, we propose CT-Net, which learns jointly over the spatial-temporal and channel dimensions with lower computation than previous methods.
+
+# 3 METHODS
+
+In this section, we describe our Channel Tensorization Network (CT-Net) in detail. First, we formally introduce our CT-Module in a generic manner. Second, we design a Tensor Excitation (TE) mechanism to enhance CT-Module. Finally, we flexibly adapt ResNet as our CT-Net to achieve a preferable trade-off between accuracy and efficiency for video classification.
+
+# 3.1 CHANNEL TENSORIZATION MODULE
+
+As discussed in the introduction, previous approaches fall short in either convolutional efficiency or feature-interaction sufficiency. To tackle this problem, we introduce a generic Channel Tensorization Module (CT-Module), treating the channel dimension of the input feature as a multiplication of $K$ sub-dimensions, i.e., $C = C_1 \times C_2 \times \dots \times C_K$ . Naturally, this tensor representation allows us to tensorize the kernel of the convolution $TConv()$ as a multiplication of $K$ sub-dimensions as well. To simplify the notation, the channel dimension of the output is omitted by default. The output $\mathbf{X}_{out}$ can be calculated as follows:
+
+$$
+\mathbf{X}_{out} = TConv\left(\mathbf{X}_{in}, W^{C_1 \times C_2 \times \dots \times C_K \times t \times h \times w}\right) \tag{1}
+$$
+
+where $\mathbf{X}_{in}$ and $W$ denote the tensorized input and kernel, respectively. However, such an operation requires heavy computation, so we introduce tensor separable convolution to alleviate this issue.
+
+Tensor Separable Convolution. We propose to factorize $TConv()$ along the $K$ channel sub-dimensions. Specifically, we decompose $TConv()$ into $K$ tensor separable convolutions $TSConv()$ , and apply $TSConv()$ sub-dimension by sub-dimension as follows:
+
+$$
+\mathbf{X}_k = TSConv\left(\mathbf{X}_{k-1}, W^{1 \times \dots \times C_k \times \dots \times 1 \times t \times h \times w}\right) \tag{2}
+$$
+
+where $\mathbf{X}_0 = \mathbf{X}_{in}$ and $\mathbf{X}_{out} = \mathbf{X}_K$ . On one hand, the kernel size of the $k^{th}$ $TSConv()$ is $(1\times \dots \times C_k\times \dots \times 1\times t\times h\times w)$ , so only $C_k$ channels interact in the $k^{th}$ sub-operation, which leads to convolutional efficiency. On the other hand, as we stack the $TSConv()$ operations, each convolution is performed on the output features of the previous one, so the spatial-temporal receptive field is enlarged. Besides, interactions first occur within $C_1$ channels, then within $C_1\times C_2$ channels, and so on; finally, all $C_1\times C_2\times \dots \times C_K = C$ channels progressively interact. This clearly satisfies our principle of feature-interaction sufficiency.
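A back-of-the-envelope parameter count shows the saving of the factorization in Eq. 2 relative to the full tensorized kernel of Eq. 1 (per output position, with the output-channel dimension omitted as in the text); the function names are illustrative:

```python
import math

def full_kernel_params(channel_dims, t, h, w):
    # Full tensorized kernel W^{C1 x ... x CK x t x h x w} of Eq. (1).
    return math.prod(channel_dims) * t * h * w

def separable_params(channel_dims, t, h, w):
    # K separable kernels W^{1 x ... x Ck x ... x 1 x t x h x w} of Eq. (2).
    return sum(c * t * h * w for c in channel_dims)

dims = (8, 8)  # C = 64 tensorized as C1 x C2
print(full_kernel_params(dims, 3, 3, 3), separable_params(dims, 3, 3, 3))  # 1728 432
```

The product of sub-dimensions in Eq. 1 becomes a sum in Eq. 2, so the saving grows quickly with $C$ and $K$.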
+
+Spatial-Temporal Tensor Separable Convolution. To further improve convolutional efficiency, we factorize the 3D $TSConv()$ into a 2D spatial $TSConv()$ and a 1D temporal $TSConv()$ . Thus, we obtain the output features $\mathbf{X}_k^S$ and $\mathbf{X}_k^T$ as follows:
+
+$$
+\mathbf{X}_k^S = S\text{-}TSConv\left(\mathbf{X}_{k-1},\, W^{1 \times \dots \times C_k \times \dots \times 1 \times 1 \times h \times w}\right) \tag{3}
+$$
+
+$$
+\mathbf{X}_k^T = T\text{-}TSConv\left(\mathbf{X}_{k-1},\, W^{1 \times \dots \times C_k \times \dots \times 1 \times t \times 1 \times 1}\right) \tag{4}
+$$
+
+where $S$ -TSConv() and $T$ -TSConv() denote the spatial and temporal tensor separable convolutions respectively. Finally, we aggregate the spatial and temporal convolutions. There are various ways to connect them, e.g., in parallel or in series. According to the experiments in Section 4, we adopt the parallel connection, i.e., we sum the spatial feature $\mathbf{X}_k^S$ and the temporal feature $\mathbf{X}_k^T$ :
+
+$$
+\mathbf{X}_k = \mathbf{X}_k^S + \mathbf{X}_k^T \tag{5}
+$$
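To make the parallel factorization concrete, here is a minimal single-channel numpy sketch of Equations 3-5, using fixed 3-tap averaging kernels as stand-ins for the learned $S$-TSConv and $T$-TSConv weights (an assumption for illustration only):

```python
import numpy as np

def temporal_conv3(x):
    # 1D temporal conv sketch: a fixed 3-tap average along T (a t x 1 x 1 kernel)
    xp = np.pad(x, ((1, 1), (0, 0), (0, 0)), mode="edge")
    return (xp[:-2] + xp[1:-1] + xp[2:]) / 3.0

def spatial_conv3(x):
    # 2D spatial conv sketch: a fixed 3x3 box filter per frame (a 1 x h x w kernel)
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    out = np.zeros_like(x)
    for dh in range(3):
        for dw in range(3):
            out += xp[:, dh:dh + x.shape[1], dw:dw + x.shape[2]]
    return out / 9.0

x = np.ones((8, 16, 16))                      # one channel of a (T, H, W) feature map
x_out = spatial_conv3(x) + temporal_conv3(x)  # Eq. (5): parallel sum of Eq. (3) and (4)
print(x_out.shape)                            # (8, 16, 16)
```

The serial alternative would feed the spatial output into the temporal convolution instead of summing the two branches; Section 4 compares both.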
+
+# 3.2 TENSOR EXCITATION
+
+Our CT-Module separates features along the spatial, temporal and channel dimensions. To make full use of their cooperative power for learning distinctive video features, we design a concise Tensor Excitation (TE) mechanism for each dimension. First, we utilize the TE mechanism to enhance the spatial and temporal features respectively. For the spatial feature $\mathbf{X}_k^S$ obtained by Equation 3, the corresponding spatial TE mechanism can be formulated as:
+
+$$
+\mathbf{U}_k = \mathbf{X}_k^S \otimes \operatorname{Sigmoid}\left(S\text{-}TSConv\left(T\text{-}Pool\left(\mathbf{X}_k^S\right)\right)\right) \tag{6}
+$$
+
+where $T$ -Pool() denotes global temporal pooling, i.e., $T \times 1 \times 1$ average pooling. By performing it on $\mathbf{X}_k^S$ , we obtain a feature of size $(C_1 \times C_2 \times \dots \times C_K \times 1 \times H \times W)$ , which gathers spatial contexts along the temporal dimension. Subsequently, the spatial tensor separable convolution $S$ -TSConv() and the activation function Sigmoid() are applied to generate the spatial attention heatmap. Finally, the element-wise multiplication $\otimes$ broadcasts the spatial attention along the temporal dimension. Similarly, we apply the temporal TE mechanism to the temporal feature $\mathbf{X}_k^T$ :
+
+$$
+\mathbf{V}_k = \mathbf{X}_k^T \otimes \operatorname{Sigmoid}\left(T\text{-}TSConv\left(S\text{-}Pool\left(\mathbf{X}_k^T\right)\right)\right) \tag{7}
+$$
+
+where $S$ -Pool() and $T$ -TSConv() denote global spatial pooling and the temporal tensor separable convolution respectively. Finally, after aggregating the spatial and temporal features by addition, i.e., $\mathbf{R}_k = \mathbf{U}_k + \mathbf{V}_k$ , we apply a channel-wise TE mechanism as follows:
+
+$$
+\mathbf{X}_k = \mathbf{R}_k \otimes \operatorname{Sigmoid}\left(PW\text{-}TSConv\left(S\text{-}Pool\left(\mathbf{R}_k\right)\right)\right) \tag{8}
+$$
+
+We adopt a point-wise tensor separable convolution $PW$ -TSConv() to learn the weights for aggregating distinctive channels; the rest follows the previous design. Note that all tensor separable convolutions are performed on the same sub-dimension as the preceding convolution, which is essentially different from the SE mechanism (Hu et al., 2020). Through the cooperation of the TE mechanisms along the three dimensions, the spatial-temporal features can be significantly enhanced.
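As a rough illustration of the channel-wise gate in Equation 8, here is a numpy sketch in which a random point-wise matrix stands in for the learned $PW$-TSConv weights (an assumption for illustration; the real module is a grouped point-wise convolution followed by BN):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
R = rng.random((64, 8, 14, 14))               # aggregated feature R_k, shape (C, T, H, W)
pooled = R.mean(axis=(2, 3), keepdims=True)   # S-Pool: global spatial average pooling
W_pw = rng.standard_normal((64, 64))          # stand-in point-wise (1x1x1) weights
attn = sigmoid(np.einsum("oc,cthw->othw", W_pw, pooled))  # channel attention, (C, T, 1, 1)
X = R * attn                                  # element-wise gate, broadcast over H and W
print(X.shape)                                # (64, 8, 14, 14)
```

Because the sigmoid output lies in (0, 1), the gate can only re-weight channels, never amplify them, which keeps the excitation stable.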
+
+
+Figure 2: The pipelines of CT-Blocks and the overall architecture of CT-Net. We replace one of every two ResBlocks in ResNet with our CT-Block, and the extra point-wise convolution in the last sub-dimension $(k = K)$ is omitted. More details can be found in Section 3.3.
+
+
+
+
+
+# 3.3 CHANNEL TENSORIZATION NETWORK
+
+We take ResNet as an exemplar and build our CT-Net from ResNet50 (or ResNet101). First, we design a simple CT-Block (Figure 2(a)), which adapts the $3 \times 3$ convolutional layer in the Residual Block (ResBlock) into our CT-Module; it achieves both convolutional efficiency and feature-interaction sufficiency. Second, we equip this simple CT-Block with the TE mechanism (Figure 2(b)), forming a full CT-Block that improves the cooperative power of all the feature dimensions. Besides, extra point-wise convolutions are added between different sub-operations, which enable more sufficient feature interaction. Finally, we build our novel CT-Net with CT-Blocks: as shown in Figure 2(c), we replace one of every two ResBlocks with our CT-Block in every stage. This design guarantees a better balance between efficiency and accuracy in our experiments.
+
+Discussion. In fact, popular video classification methods such as C3D, R(2+1)D and CSN (Tran et al., 2015; 2018; 2019) can be viewed as special cases of our CT-Net. We can generate their forms by adjusting three hyper-parameters: the number of sub-dimensions $(K)$ , the corresponding dimension sizes $(C_k)$ and the spatial-temporal kernel sizes $(\text{Kernel}_k)$ . To degenerate into C3D, we set $K = 1$ and $\text{Kernel}_1 = 3 \times 3 \times 3$ . When $K = 2$ , $\text{Kernel}_1 = 1 \times 3 \times 3$ , $\text{Kernel}_2 = 3 \times 1 \times 1$ and $C_1 = C_2 = C$ without channel tensorization, it becomes R(2+1)D. Unfortunately, due to the lack of channel decomposition, C3D and R(2+1)D still carry a large computational load. When $K = 2$ , $C = C_1 \times C_2 = C \times 1$ , $\text{Kernel}_1 = 1 \times 1 \times 1$ and $\text{Kernel}_2 = 3 \times 3 \times 3$ , it is equivalent to CSN. However, CSN has a limited receptive field of spatial-temporal interaction. In our CT-Net, we apply channel tensorization and perform tensor separable convolution along each sub-dimension in turn. Such a design not only preserves interaction among the spatial, temporal and channel dimensions but also progressively enlarges the receptive field of feature interaction.
+
+# 4 EXPERIMENTS AND RESULTS
+
+Datasets and implementation details. We conduct experiments on three large video benchmarks: Kinetics-400 (Carreira & Zisserman, 2017), Something-Something V1 and V2 (Goyal et al., 2017b). We choose ResNet50 and ResNet101 (He et al., 2016) pre-trained on ImageNet as the backbone, and the parameters of the CT-Module are randomly initialized. For training, we utilize the dense sampling strategy (Wang et al., 2018) for Kinetics and the sparse sampling strategy (Wang et al., 2016) for Something-Something. Random scaling and cropping are applied for data augmentation. Finally, we resize the cropped regions to $256 \times 256$ . For testing, we sample multiple clips per video (4 for Kinetics, 2 for the others) to pursue high accuracy, and average all scores for the final prediction.
+
+| Method | 3D Convolution (t × h × w) | GFLOPs | Top-1 | Top-5 |
| C3D-Module (Tran et al., 2015) | Full: 3 × 3 × 3 | 59.9 | 46.1 | 75.0 |
| R(2+1)D-Module (Tran et al., 2018) | Full: 1 × 3 × 3 + Full: 3 × 1 × 1 | 45.8 | 47.0 | 76.1 |
| CSN-Module (Tran et al., 2019) | Full: 1 × 1 × 1 + DW: 3 × 3 × 3 | 35.6 | 46.8 | 75.7 |
| Our CT-Module | C1, ..., CK: 1 × 3 × 3 + 3 × 1 × 1 | 36.3 | 47.3 | 76.2 |
+
+(a) Effectiveness of CT-Module. CT-Module outperforms the recent modules for video modeling.
+
+| Number | GFLOPs | Top-1 | Top-5 |
| 1D | 45.8 | 46.5 | 75.6 |
| 2D | 36.3 | 47.3 | 76.2 |
| 3D | 35.7 | 47.1 | 75.8 |
| 4D | 35.6 | 46.5 | 75.3 |
+
+| Type | Top-1 | Top-5 |
| coupling | 47.1 | 76.0 |
| serial | 47.2 | 76.1 |
| parallel | 47.3 | 76.2 |
+
+| C2 | GFLOPs | Top-1 | Top-5 |
| 1 | 45.9 | 47.0 | 76.1 |
| 4 | 37.6 | 46.8 | 76.0 |
| 16 | 36.4 | 47.2 | 76.0 |
| [√C] | 36.3 | 47.3 | 76.2 |
+
+(b) Number of sub-dimensions. The larger the number of sub-dimensions, the lower the GFLOPs. 2D channel tensorization achieves the best trade-off.
+(c) Connection type of spatiotemporal convolution. The parallel connection between spatial and temporal convolution is the best choice.
+
+| Number | Location | GFLOPs | Top-1 | Top-5 |
| +0 | (TSN) | 43.0 | 16.9 | 42.0 |
| +1 | stage5 | 41.9 | 42.3 | 71.1 |
| +4 | stage4-5 | 38.9 | 46.9 | 75.7 |
| +6 | stage3-5 | 37.1 | 47.2 | 76.1 |
| +7 | stage2-5 | 36.3 | 47.3 | 76.2 |
| +12 | stage2-5 | 31.5 | 45.9 | 76.1 |
+
+(d) Dimension size. $C = C_1 \times C_2$ and the best trade-off is achieved when adopting the rounded middle size $\lfloor \sqrt{C} \rfloor$ .
+
+| $C_1$ kernels (spatial + temporal) | $C_2$ kernels (spatial + temporal) | GFLOPs | Top-1 | Top-5 |
| 1 × 1 × 1 + 1 × 1 × 1 | 1 × 3 × 3 + 3 × 1 × 1 | 35.5 | 46.1 | 75.1 |
| 1 × 1 × 1 + 1 × 1 × 1 | 1 × 5 × 5 + 5 × 1 × 1 | 36.6 | 47.2 | 76.2 |
| 1 × 3 × 3 + 3 × 1 × 1 | 1 × 3 × 3 + 3 × 1 × 1 | 36.3 | 47.3 | 76.2 |
| 1 × 3 × 3 + 3 × 1 × 1 | 1 × 5 × 5 + 5 × 1 × 1 | 37.4 | 47.5 | 76.4 |
| 1 × 5 × 5 + 5 × 1 × 1 | 1 × 5 × 5 + 5 × 1 × 1 | 38.9 | 47.6 | 76.5 |
+
+(e) Number and location of CT-Blocks. Simply replacing 1 block in stage5 can bring significant performance improvement. As we replace more blocks from the bottom up, the GFLOPs continues to decrease. Replacing 7 blocks achieves the best trade-off between accuracy and GFLOPs.
+(f) Kernel sizes along different dimensions. Larger kernel sizes bring improvement at the cost of more computation.
+
+| Model | GFLOPs | Top-1 | Top-5 |
| Baseline (TSN) | 43.0 | 16.9 | 42.0 |
| +CT-Module | 36.3 | 47.3 | 76.2 |
| +CT-Module+PWConv | 37.2 | 48.0 | 76.7 |
| +CT-Module+PWConv+SE | 37.2 | 48.8 | 77.4 |
| +CT-Module+PWConv+TE | 37.3 | 50.1 | 78.8 |
+
+| Train | Test | GFLOPs | Top-1 | Top-5 |
| 224 × 224 | 224 × 224 | 28.6 | 49.1 | 77.4 |
| 224 × 224 | 256 × 256 | 37.3 | 49.7 | 77.7 |
| 256 × 256 | 256 × 256 | 37.3 | 50.1 | 78.8 |
+
+(g) Impact of different modules. CT-Module is essential for temporal modeling and TE mechanism is also beneficial.
+
+(h) Impact of different spatial resolution.
+
+Table 2: Ablation studies on Something-Something V1. All models use ResNet50 as the backbone
+
+We follow the same strategy in Non-local (Wang et al., 2018) to pre-process the frames and take 3 crops of $256 \times 256$ as input. Because some multi-clip models in Table 3 and Table 4 sample crops of $256 \times 256$ , we simply multiply the GFLOPs reported in the corresponding papers by $(256 / 224)^{2}$ for a fair comparison. When considering efficiency, we use just 1 clip per video and the final crop is scaled to $256 \times 256$ to ensure comparable GFLOPs.
+
+# 4.1 ABLATION STUDIES
+
+Table 2 shows our ablation studies on Something-Something V1, which is a challenging dataset that requires video architecture to have a robust spatial-temporal representation ability and is suitable to verify the effectiveness of our method. All models use ResNet50 as the backbone.
+
+Effectiveness of CT-Module. In Table 2a, we replace the $3 \times 3$ convolutional layer in ResNet50 with the modules of recent methods (Tran et al., 2015; 2018; 2019). Compared with CSN-Module, our module achieves a better result with similar computation, which reflects the importance of sufficient feature interaction. Besides, it is slightly better than R(2+1)D-Module with much lower computation, showing the necessity of efficient convolution. These results demonstrate the effectiveness of our CT-Module and indicate that our two design principles provide preferable guidance for designing efficient temporal-modeling modules.
+
+| Method | Backbone | #Frame | GFLOPs | V1 Top-1 | V1 Top-5 | V2 Top-1 | V2 Top-5 |
| ECOENLite (Zolfaghari et al., 2018) | Incep+3D R18 | 92 | 267 | 46.4 | - | - | - |
| NL I3D + GCN (Wang & Gupta, 2018) | 3D R50 | 32×3×2 | 1818 | 46.1 | 76.8 | - | - |
| ir-CSN (Tran et al., 2019) | 3D R101 | 32×1×10 | 738 | 48.4 | - | - | - |
| ir-CSN (Tran et al., 2019) | 3D R152 | 32×1×10 | 967 | 49.3 | - | - | - |
| CorrNet (Wang et al., 2020) | 3D R50 | 32×1×10 | 1150 | 49.3 | - | - | - |
| TSN (Wang et al., 2016) | 2D R50 | 8 | 33 | 19.7 | 46.6 | 27.8 | 57.6 |
| TSM (Lin et al., 2019) | 2D R50 | 8 | 33 | 45.6 | 74.2 | 59.1 | 85.6 |
| bLVNet-TAM (Fan et al., 2019) | bLR50 | 8×2 | 24 | 46.4 | 76.6 | 59.1 | 86.0 |
| TEINet (Liu et al., 2020) | 2D R50 | 8 | 33 | 47.4 | - | 61.3 | - |
| TEA (Li et al., 2020b) | 2D Res2Net50 | 8 | 35 | 48.9 | 78.1 | - | - |
| PEM+TDLoss (Weng et al., 2020) | 2D R50+TIM | 8 | 33 | 49.8 | - | 62.6 | - |
| PEM+TDLoss (Weng et al., 2020) | 2D R50+TIM | 8×3×2 | 259 | 50.4 | - | 63.5 | - |
| Our CT-Net | 2D R50 | 8 | 37 | 50.1 | 78.8 | 62.5 | 87.7 |
| Our CT-Net | 2D R50 | 8×3×2 | 224 | 51.7 | 80.1 | 63.9 | 88.8 |
| TSN (Wang et al., 2016) | 2D R50 | 16 | 66 | 19.9 | 47.3 | 30.0 | 60.5 |
| TSM (Lin et al., 2019) | 2D R50 | 16 | 66 | 47.2 | 77.1 | 63.4 | 88.5 |
| bLVNet-TAM (Fan et al., 2019) | bLR50 | 16×2 | 48 | 48.4 | 78.8 | 61.7 | 88.1 |
| TEINet (Liu et al., 2020) | 2D R50 | 16 | 66 | 49.9 | - | 62.1 | - |
| TEA (Li et al., 2020b) | 2D Res2Net50 | 16 | 70 | 51.9 | 80.3 | - | - |
| PEM+TDLoss (Weng et al., 2020) | 2D R50+TIM | 16 | 66 | 50.9 | - | 63.8 | - |
| PEM+TDLoss (Weng et al., 2020) | 2D R50+TIM | 16×3×2 | 517 | 52.0 | - | 65.0 | - |
| Our CT-Net | 2D R50 | 16 | 75 | 52.5 | 80.9 | 64.5 | 89.3 |
| Our CT-Net | 2D R50 | 16×3×2 | 447 | 53.4 | 81.7 | 65.9 | 90.1 |
| Our CT-NetEN | 2D (R50)×4 | 8+12+16+24 | 280 | 56.6 | 83.9 | 67.8 | 91.1 |
+
+Table 3: Comparison with the state-of-the-art on Something-Something V1&V2. Our CT-Net $_{16f}$ outperforms all the single-clip models on Something-Something and is even better than most of the multi-clip models. Moreover, our CT-Net $_{EN}$ outperforms all methods with much lower computation.
+
+Number of sub-dimensions. Increasing the number of sub-dimensions saves a lot of computation, but the corresponding accuracy first increases and then decreases, as shown in Table 2b. Compared with the 1D method, the 4D method significantly reduces GFLOPs while achieving comparable accuracy. As for the accuracy drop when $K$ is too large, we argue that the number of channels in the shallow layers is small (64/128), so a single sub-dimension contains too few channels, leading to insufficient feature interaction. Since the 2D method obtains the best trade-off, we set $K = 2$ in all the following experiments.
+
+Connection type of spatiotemporal convolution. The coupling $3 \times 3 \times 3$ convolution can be decomposed into serial or parallel spatial/temporal convolution. Table 2c reveals that factorizing the 3D kernel can boost results as expected. Besides, the parallel connection is better, thus we adopt parallel connection as the default.
+
+Dimension size. As we set $K = 2$ , it is essential to explore the impact of changing the dimension size $C_2$ . We can show that the computation is lowest when $C_1 = C_2 = \sqrt{C}$ . Since $C$ is not always a perfect square, we adopt the rounded middle size $\lfloor \sqrt{C} \rfloor$ . Table 2d shows that when $C_2 = \lfloor \sqrt{C} \rfloor$ , the model not only requires the lowest computation cost but also achieves the best performance. Hence, we naturally set $C_2 = \lfloor \sqrt{C} \rfloor$ .
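The claim that $C_1 = C_2 = \sqrt{C}$ is cheapest follows from AM-GM: the two grouped convolutions cost roughly proportional to $C \cdot (C_1 + C_2)$, and with $C_1 C_2 = C$ fixed, the sum $C_1 + C_2$ is minimized when the factors are equal. A quick check over the $C_2$ values used in Table 2d:

```python
# For K = 2 and C = C1 * C2 fixed, per-position cost scales with C1 + C2;
# sweep the C2 values from Table 2d (C1 = C / C2) and find the cheapest.
C = 256
costs = {c2: (C // c2) + c2 for c2 in (1, 4, 16, 64, 256)}
best = min(costs, key=costs.get)
print(best, costs[best])   # -> 16 32: C2 = sqrt(256) gives the smallest sum
```

This matches the table: $C_2 = \lfloor \sqrt{C} \rfloor$ gives the lowest GFLOPs among the tested settings.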
+
+Number and location of CT-Blocks. Table 2e illustrates that simply replacing 1 block in stage5 brings significant performance improvement (16.9% vs. 42.3%). As we replace more blocks, the GFLOPs continue to decrease. Moreover, the blocks in the deeper stages seem to be more beneficial to temporal modeling, since replacing the extra 3 blocks in stage2 and stage3 only improves the accuracy by 0.4% (46.9% vs. 47.3%). Since replacing 7 blocks achieves the highest accuracy, we replace 7 blocks by default.
+
+Kernel sizes along different dimensions. In Table 2f, we observe that two concatenated $3^{3}$ convolution kernels are slightly better than the combination with the same receptive field $(1^{3} + 5^{3})$ . Furthermore, larger kernel sizes bring performance improvement at the cost of more computation. This indicates that our CT-Module avoids a limited receptive field of feature interaction and progressively enlarges it along all dimensions. Considering the trade-off between accuracy and computation, we choose two concatenated $3^{3}$ convolution kernels.
+
+| Method | Backbone | #Frame | GFLOPs | Top-1 | Top-5 |
| R(2+1)D (Tran et al., 2018) | 2D R34 | 32×1×10 | 1520=152×10 | 72.0 | 91.4 |
| TSN (Wang et al., 2016) | Inception | 25×10×1 | 800=80×10 | 72.5 | 90.2 |
| I3D (Carreira & Zisserman, 2017) | Inception | 64×N/A×N/A | 108×N/A | 71.1 | 89.3 |
| TSM (Lin et al., 2019) | 2D R50 | 16×3×10 | 2580=86×30 | 74.7 | - |
| TEINet (Liu et al., 2020) | 2D R50 | 16×3×10 | 2580=86×30 | 76.2 | 92.5 |
| bLVNet-TAM (Fan et al., 2019) | bLR50 | (16×2)×3×3 | 561=62.3×9 | 72.0 | 90.6 |
| TEA (Li et al., 2020b) | 2D Res2Net50 | 16×3×10 | 2730=91×30 | 76.1 | 92.5 |
| PEM+TDLoss (Weng et al., 2020) | 2D R50+TIM | 16×3×10 | 2580=86×30 | 76.9 | 93.0 |
| CorrNet (Wang et al., 2020) | 3D R50 | 32×1×10 | 1150=115×10 | 77.2 | - |
| SlowFast (Feichtenhofer et al., 2019) | 3D R50+R50 | 36=(4+32)×3×10 | 1083=36.1×30 | 75.2 | 91.5 |
| SlowFast (Feichtenhofer et al., 2019) | 3D R50+R50 | 40=(8+32)×3×10 | 1971=65.7×30 | 76.4 | 92.2 |
| Our CT-Net | 2D R50 | 16×3×4 | 895=74.6×12 | 77.3 | 92.7 |
| X3D-XL (Feichtenhofer, 2020) | - | 16×3×10 | 1452=48.4×30 | 79.1 | 93.9 |
| SmallBigNet (Li et al., 2020a) | 2D R101 | 32×3×4 | 6552=546×12 | 77.4 | 93.3 |
| ip-CSN (Tran et al., 2019) | 3D R101 | 32×3×10 | 2490=83.0×30 | 76.8 | 92.5 |
| ip-CSN (Tran et al., 2019) | 3D R152 | 32×3×10 | 3264=108.8×30 | 77.8 | 92.8 |
| CorrNet (Wang et al., 2020) | 3D R101 | 32×3×10 | 6720=224×30 | 79.2 | - |
| SlowFast (Feichtenhofer et al., 2019) | 3D R101+R101 | 40=(8+32)×3×10 | 3180=106×30 | 77.9 | 93.2 |
| SlowFast (Feichtenhofer et al., 2019) | 3D R101+R101 | 80=(16+64)×3×10 | 6390=213×30 | 78.9 | 93.5 |
| NL I3D (Wang & Gupta, 2018) | 3D R101 | 128×3×10 | 10770=359×30 | 77.7 | 93.3 |
| Our CT-Net | 2D R101 | 16×3×4 | 1746=145.5×12 | 78.8 | 93.7 |
| Our CT-NetEN | 2D R50+R101 | (16+16)×3×4 | 2641=220.1×12 | 79.8 | 94.2 |
+
+Table 4: Comparison with the state-of-the-art on Kinetics-400. It shows that CT-Net- $R50_{16f}$ surpasses all existing lightweight models and even SlowFast- $R50_{40f}$ . When fusing different models, our model is $2.4 \times$ faster than SlowFast- $R101_{80f}$ and shows a $0.9\%$ performance gain.
+
+Impact of different modules and different spatial resolution. In Table 2g, our CT-Module significantly boosts its baseline (16.9% vs. 47.3%) and the TE mechanism further improves the accuracy by 2.1% (48.0% vs. 50.1%). The extra point-wise convolution also boosts performance, which demonstrates that it benefits sufficient feature interaction. Compared with the SE mechanism, our TE mechanism focuses more on the features in different sub-dimensions individually, thus effectively enhancing the spatial-temporal features. In our experiments, to keep GFLOPs comparable with other methods, we crop the input to $256 \times 256$ during testing. Table 2h shows that both training and testing with a larger spatial resolution bring clear performance improvement.
+
+# 4.2 COMPARISONS WITH THE STATE-OF-THE-ARTS
+
+Something-Something V1&V2. We make a comprehensive comparison in Table 3. Compared with NL I3D+GCN $_{32f}$ , our CT-Net $_{8f}$ gains a $4.0\%$ top-1 accuracy improvement with $49.1 \times$ fewer GFLOPs on Something-Something V1. Besides, our CT-Net $_{8f}$ ( $51.7\%$ ) is better than ir-CSN $_{32f}$ ( $49.3\%$ ), which adopts ResNet-152 as the backbone. Moreover, our CT-Net $_{16f}$ outperforms all the single-clip models on Something-Something V1&V2 and is even better than most of the multi-clip models, illustrating that our CT-Net captures temporal contextual information efficiently. Notably, with only 280 GFLOPs, our ensemble model CT-Net $_{EN}$ achieves $56.6\%$ ( $67.8\%$ ) top-1 accuracy on Something-Something V1 (V2), outperforming all methods.
+
+Kinetics-400. Kinetics-400 is a large-scale scene-related dataset, and lightweight 2D models are usually inferior to 3D models on it. Table 4 shows that our CT-Net- $R50_{16f}$ surpasses all existing lightweight models based on 2D backbones. Even compared with SlowFast- $R50_{40f}$ , our CT-Net- $R50_{16f}$ achieves higher accuracy (77.3% vs. 76.4%). Note that our reproduced SlowFast-R50 performs worse than reported in the paper (Feichtenhofer et al., 2019), which may result from the missing videos in Kinetics-400. As for the deeper model, compared with SlowFast- $R101_{80f}$ , our CT-Net- $R101_{16f}$ requires $3.7 \times$ fewer GFLOPs but achieves comparable results (78.8% vs. 78.9%). Besides, it achieves comparable top-1 accuracy with X3D-XL (78.8% vs. 79.1%) under similar GFLOPs. However, X3D requires extensive model searching with an expensive GPU setup, while our CT-Net can be trained in a standard manner with feasible computation. We perform score fusion over CT-Net- $R50_{16f}$ and CT-Net- $R101_{16f}$ , which mimics two-stream fusion with two temporal rates. In this setting, our model is $2.4 \times$ faster than SlowFast- $R101_{80f}$ and shows a 0.9% performance gain (79.8% vs. 78.9%) while only using 32 frames.
+
+
+Figure 3: Comparison of visualization. Videos are sampled from Something-Something V1. Compared with R(2+1)D and CSN, our CT-Net can localize the action and object better both in space and time thanks to the larger spatial-temporal receptive field.
+
+# 4.3 VISUALIZATION
+
+We use Saliency Tubes (Stergiou et al., 2019) for visualization, as they show the most discriminative features the network attends to. In Figure 3, we sample two videos from Something-Something V1, which requires complex temporal modeling. In the left example, our CT-Net focuses on a larger area around the towel, especially in the fourth and fifth frames, and thus predicts that someone is twisting it. In contrast, $\mathrm{R}(2 + 1)\mathrm{D}$ only concentrates on one side of the towel and gives the wrong judgment. The same holds in the right example. We argue that CT-Net can localize the action and object accurately thanks to its larger spatial-temporal receptive field. As for CSN, its regions of interest appear scattered, because it lacks sufficient spatial-temporal interaction and thus ignores the rich context in both space and time.
+
+# 5 CONCLUSIONS
+
+In this paper, we construct an efficient tensor separable convolution to learn discriminative video representations. We view the channel dimension of the input feature as a product of $K$ sub-dimensions and stack spatial/temporal tensor separable convolutions along each of the $K$ sub-dimensions. Moreover, the CT-Module cooperates with the Tensor Excitation mechanism to further improve performance. Extensive experiments demonstrate that our concise and novel CT-Net obtains a preferable balance between accuracy and efficiency on large-scale video datasets, and that our proposed principles provide preferable guidance for designing efficient temporal-modeling modules.
+
+# 6 ACKNOWLEDGEMENT
+
+This work is partially supported by National Natural Science Foundation of China (6187617, U1713208), the National Key Research and Development Program of China (No. 2020YFC2004800), Science and Technology Service Network Initiative of Chinese Academy of Sciences (KFJ-STS-QYZX-092), Shenzhen Institute of Artificial Intelligence and Robotics for Society.
+
+# REFERENCES
+
+João Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4724-4733, 2017.
+Quanfu Fan, Chun-Fu Chen, Hilde Kuehne, Marco Pistoia, and D. Cox. More is less: Learning efficient video representations by temporal aggregation module. In NeurIPS 2019, 2019.
+Christoph Feichtenhofer. X3d: Expanding architectures for efficient video recognition. 2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 200-210, 2020.
+
+Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 6201-6210, 2019.
+Priya Goyal, P. Dollar, Ross B. Girshick, P. Noordhuis, L. Wesolowski, Aapo Kyrola, Andrew Tulloch, Y. Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. ArXiv, abs/1706.02677, 2017a.
+Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fründ, Peter Yianilos, Moritz Mueller-Freitag, Florian Hoppe, Christian Thurau, Ingo Bax, and Roland Memisevic. The "something something" video database for learning and evaluating visual common sense. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 5843-5851, 2017b.
+Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
+Jie Hu, Li Shen, Samuel Albanie, Gang Sun, and Enhua Wu. Squeeze-and-excitation networks. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 42:2011-2023, 2020.
+Boyuan Jiang, Mengmeng Wang, Weihao Gan, Wei Wu, and Junjie Yan. Stm: Spatiotemporal and motion encoding for action recognition. 2019 IEEE International Conference on Computer Vision (ICCV), pp. 2000-2009, 2019.
+A. Karpathy, G. Toderici, Sanketh Shetty, T. Leung, R. Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725-1732, 2014.
+Hilde Kuehne, Hueihan Jhuang, E. Garrote, T. Poggio, and Thomas Serre. Hmdb: A large video database for human motion recognition. 2011 International Conference on Computer Vision, pp. 2556-2563, 2011.
+X. Li, Yali Wang, Zhipeng Zhou, and Yu Qiao. Smallbignet: Integrating core and contextual views for video classification. 2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1089-1098, 2020a.
+Yinong Li, Bin Ji, Xintian Shi, Jianguo Zhang, Bin Kang, and Limin Wang. Tea: Temporal excitation and aggregation for action recognition. ArXiv, abs/2004.01398, 2020b.
+Ji Lin, Chuang Gan, and Song Han. Tsm: Temporal shift module for efficient video understanding. 2019 IEEE International Conference on Computer Vision (ICCV), pp. 7082-7092, 2019.
+Zhaoyang Liu, D. Luo, Yabiao Wang, L. Wang, Ying Tai, Chengjie Wang, Jilin Li, Feiyue Huang, and Tong Lu. Teinet: Towards an efficient architecture for video recognition. ArXiv, abs/1911.09435, 2020.
+I. Loshchilov and F. Hutter. Sgdr: Stochastic gradient descent with warm restarts. In ICLR, 2017.
+Zhaofan Qiu, Ting Yao, and Tao Mei. Learning spatio-temporal representation with pseudo-3d residual networks. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 5534-5542, 2017.
+K. Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
+K. Soomro, A. Zamir, and M. Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. ArXiv, abs/1212.0402, 2012.
+Alexandros Stergiou, G. Kapidis, Grigorios Kalliatakis, C. Chrysoulas, R. Veltkamp, and R. Poppe. Saliency tubes: Visual explanations for spatio-temporal convolutions. 2019 IEEE International Conference on Image Processing (ICIP), pp. 1830-1834, 2019.
+Swathikiran Sudhakaran, S. Escalera, and O. Lanz. Gate-shift networks for video action recognition. 2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1099-1108, 2020.
+
+Du Tran, Lubomir D. Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. 2015 IEEE International Conference on Computer Vision (ICCV), pp. 4489-4497, 2015.
+Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar Paluri. A closer look at spatiotemporal convolutions for action recognition. 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6450-6459, 2018.
+Du Tran, Heng Wang, Lorenzo Torresani, and Matt Feiszli. Video classification with channel-separated convolutional networks. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 5551-5560, 2019.
+Heng Wang, Du Tran, L. Torresani, and Matt Feiszli. Video modeling with correlation networks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 349-358, 2020.
+L. Wang, Yuanjun Xiong, Zhe Wang, Y. Qiao, D. Lin, X. Tang, and L. Gool. Temporal segment networks: Towards good practices for deep action recognition. In ECCV, 2016.
+X. Wang and A. Gupta. Videos as space-time region graphs. In ECCV, 2018.
+Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7794-7803, 2018.
+Junwu Weng, D. Luo, Yabiao Wang, Ying Tai, Chengjie Wang, Jilin Li, Feiyue Huang, Xudong Jiang, and J. Yuan. Temporal distinct representation learning for action recognition. ArXiv, abs/2007.07626, 2020.
+Saining Xie, C. Sun, J. Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In ECCV, 2018.
+Mohammadreza Zolfaghari, K. Singh, and T. Brox. Eco: Efficient convolutional network for online video understanding. ArXiv, abs/1804.09066, 2018.
+
+# A APPENDIX
+
+# A.1 MORE TRAINING DETAILS
+
+We use SGD with momentum 0.9 and a cosine learning rate schedule (Loshchilov & Hutter, 2017) to train the entire network. The first 10 epochs are used for warm-up (Goyal et al., 2017a) to overcome early optimization difficulty. For Kinetics, the batch size, total epochs, initial learning rate, dropout and weight decay are set to 64, 110, 0.01, 0.5 and 1e-4 respectively; for Something-Something, they are set to 64, 45, 0.02, 0.3 and 5e-4 respectively.
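The schedule above can be sketched as follows. The linear warm-up shape is our assumption, since the text only states that 10 epochs are used for warm-up; the cosine part follows Loshchilov & Hutter (2017), annealing to zero over the remaining epochs.

```python
import math

def lr_at(epoch, base_lr=0.01, warmup=10, total=110):
    # Linear warm-up for the first `warmup` epochs (shape assumed),
    # then cosine annealing from base_lr down toward zero.
    if epoch < warmup:
        return base_lr * (epoch + 1) / warmup
    progress = (epoch - warmup) / (total - warmup)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

# Kinetics setting: base_lr = 0.01, 110 epochs total
print(lr_at(0), lr_at(9), lr_at(109))
```

The Something-Something setting would use `base_lr=0.02` and `total=45` with the same shape.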
+
+# A.2 TENSOR EXCITATION MECHANISM
+
+The implementation of our Tensor Excitation is shown in Figure 4. Different from the SE module, we use tensor separable convolution in the TE mechanism. Moreover, when computing the spatial attention, we squeeze the temporal dimension and perform spatial tensor separable convolution, because temporal information is insignificant for spatial attention, and vice versa. We also add Batch Normalization (BN) layers for better optimization.
+
+# A.3 RESULTS ON UCF101 AND HMDB51
+
+To verify the generalization ability of our CT-Net on smaller datasets, we conduct transfer learning experiments from Kinetics-400 to UCF101 (Soomro et al., 2012) and HMDB-51 (Kuehne et al., 2011). We test CT-Net with 16 input frames, evaluate it over three splits, and report the averaged results. As shown in Table 5, our $\mathrm{CT - Net}_{16f}$ achieves competitive performance compared with recent methods, which demonstrates its generalization ability.
+
+
+
+Figure 4: The implementation of our Tensor Excitation (TE) mechanism.
+
+| Method | Backbone | Pretrain | UCF101 | HMDB51 |
| C3D(Tran et al., 2015) | 3D VGG-11 | Sports-1M | 82.3 | 51.6 |
| I3D(Carreira & Zisserman, 2017) | 3D Inception | ImageNet+Kinetics | 95.1 | 74.3 |
| ECO(Zolfaghari et al., 2018) | Inception+3D R18 | ImageNet+Kinetics | 94.8 | 72.4 |
| TSN(Wang et al., 2016) | Inception | ImageNet+Kinetics | 91.1 | - |
| TSM(Lin et al., 2019) | 2D R50 | ImageNet+Kinetics | 94.5 | 70.7 |
| STM(Jiang et al., 2019) | 2D R50 | ImageNet+Kinetics | 96.2 | 72.2 |
| Our CT-Net | 2D R50 | ImageNet+Kinetics | 96.2 | 73.2 |
+
+Table 5: Comparison results on UCF101 and HMDB51.
+
+# A.4 MORE RESULTS ON SOMETHING-SOMETHING V1&V2
+
+Table 6 shows more results on Something-Something V1&V2. We train CT-Net with different numbers of input frames and then test these models with different sampling strategies. To evaluate the ensemble models, we average the prediction scores of the previous models. With more input frames, the corresponding accuracy becomes higher. As for the reason that $\mathrm{CT - Net}_{24f}$ is not better than $\mathrm{CT - Net}_{16f}$ , we argue that the model is difficult to optimize with too many input frames. Sampling more clips or more crops also boosts performance. Moreover, our ensemble models reach state-of-the-art top-1 accuracy of $56.6\%$ ( $68.3\%$ ) on Something-Something V1 (V2).
+
+# A.5 MORE ABLATION STUDIES ON MINI-KINETICS AND SOMETHING-SOMETHING V2
+
+To comprehensively verify the effectiveness of our module, we also conduct experiments on Mini-Kinetics and Something-Something V2, reporting multi-clip accuracy and single-clip accuracy respectively. Mini-Kinetics is a subset of Kinetics-400 covering 200 action classes, while Something-Something V2 covers the same action classes as Something-Something V1 but contains more videos. As shown in Table 7, the performance trend across the different modules is similar to that in Table 2a. Since Mini-Kinetics depends less on temporal modeling, the gap becomes smaller but still demonstrates the effectiveness of our CT-Module.
+
+# A.6 ADAPTING DIFFERENT PRE-TRAINED IMAGENET ARCHITECTURES AS CT-NET
+
+In fact, by directly replacing the $3 \times 3$ convolution with our CT-Module, we can easily adapt different pre-trained ImageNet architectures into CT-Nets. Table 8 shows that using InceptionV3 as the backbone is also effective. We believe that, with a more elaborate design, CT-Nets built on different backbones can achieve comparable performance.
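The adaptation procedure above is mechanical: walk the backbone and swap every 3×3 convolution for the CT-Module while leaving all other layers (and their pre-trained weights) untouched. A minimal sketch over a toy layer description follows; the tuple format and the `"ct_module"` tag are hypothetical stand-ins for a real framework's module-replacement API.

```python
def adapt_backbone(layers):
    """Sketch of adapting a pre-trained backbone as a CT-Net:
    every 3x3 convolution is replaced by a CT-Module placeholder,
    while all other layers are kept as-is so their pre-trained
    weights remain usable.
    layers: list of (name, kind, kernel) tuples (toy representation)."""
    adapted = []
    for name, kind, kernel in layers:
        if kind == "conv" and kernel == (3, 3):
            adapted.append((name, "ct_module", kernel))  # swap in CT-Module
        else:
            adapted.append((name, kind, kernel))         # keep untouched
    return adapted
```

In a real implementation this would recurse over, e.g., a ResNet-50 or InceptionV3 module tree and instantiate the CT-Module with the matching channel counts.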
+
+# A.7 VALIDATION PLOT
+
+In Figure 5, we plot the accuracy vs per-clip GFLOPs on Kinetics-400. It reveals that our CT-Net achieves a better trade-off than most of the existing methods on Kinetics-400.
+
+| Method | #Frame | GFLOPs | #Param | SomethingV1 Top-1 | SomethingV1 Top-5 | SomethingV2 Top-1 | SomethingV2 Top-5 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Our CT-Net | 8 | 37 | 21.0M | 50.1 | 78.8 | 62.5 | 87.7 |
+| Our CT-Net | 12 | 56 | 21.0M | 52.1 | 80.0 | 63.9 | 88.7 |
+| Our CT-Net | 16 | 75 | 21.0M | 52.5 | 80.9 | 64.5 | 89.3 |
+| Our CT-Net | 24 | 112 | 21.0M | 52.5 | 80.9 | 64.6 | 89.1 |
+| Our CT-Net | 8×1×2 | 75 | 21.0M | 51.6 | 79.7 | 63.5 | 88.5 |
+| Our CT-Net | 12×1×2 | 112 | 21.0M | 52.8 | 80.6 | 64.6 | 89.3 |
+| Our CT-Net | 16×1×2 | 151 | 21.0M | 53.2 | 81.3 | 65.2 | 89.7 |
+| Our CT-Net | 24×1×2 | 224 | 21.0M | 52.9 | 81.3 | 65.0 | 89.3 |
+| Our CT-Net | 8×3×2 | 224 | 21.0M | 51.7 | 80.1 | 63.9 | 88.8 |
+| Our CT-Net | 12×3×2 | 336 | 21.0M | 53.0 | 81.1 | 65.3 | 89.6 |
+| Our CT-Net | 16×3×2 | 447 | 21.0M | 53.4 | 81.7 | 65.9 | 90.1 |
+| Our CT-Net | 24×3×2 | 672 | 21.0M | 53.6 | 81.6 | 65.5 | 89.8 |
+| Our CT-NetEN | 8+16 | 112 | 83.8M | 54.4 | 82.0 | 66.2 | 90.4 |
+| Our CT-NetEN | (8+12+16+24)×1×1 | 280 | | 56.6 | 83.9 | 67.8 | 91.1 |
+| Our CT-NetEN | (8+12+16+24)×1×2 | 560 | | 56.6 | 84.0 | 67.8 | 91.3 |
+| Our CT-NetEN | (8+12+16+24)×3×2 | 1679 | | 56.6 | 83.9 | 68.3 | 91.3 |
+
+Table 6: More results on Something-Something V1&V2.
+
+| Method | Backbone | GFLOPs | Mini-Kinetics Top-1 | Mini-Kinetics Top-5 | SomethingV2 Top-1 | SomethingV2 Top-5 |
+| --- | --- | --- | --- | --- | --- |
+| C3D-Module (Tran et al., 2015) | 2D R50 | 59.9 | 77.5 | 93.0 | 59.1 | 85.5 |
+| R(2+1)D-Module (Tran et al., 2018) | 2D R50 | 45.8 | 77.8 | 93.2 | 60.0 | 86.0 |
+| CSN-Module (Tran et al., 2019) | 2D R50 | 35.6 | 77.6 | 93.2 | 59.5 | 86.0 |
+| Our CT-Module | 2D R50 | 36.3 | 78.0 | 93.6 | 60.3 | 86.4 |
+
+Table 7: More ablation studies on Mini-Kinetics and Something-Something V2.
+
+| Method | Backbone | GFLOPs | #Param.(M) | Top-1 | Top-5 |
+| --- | --- | --- | --- | --- | --- |
+| Baseline (TSN) | 2D ResNet-50 | 43.0 | 23.9 | 16.9 | 42.0 |
+| Our CT-Net | 2D ResNet-50 | 37.3 | 21.0 | 50.1 | 78.8 |
+| Baseline (TSN) | InceptionV3 | 45.8 | 22.1 | 18.3 | 43.9 |
+| Our CT-Net | InceptionV3 | 43.9 | 20.9 | 47.2 | 76.1 |
+
+Table 8: Adapting different pre-trained ImageNet architectures as CT-Net.
+
+
+Figure 5: Accuracy vs per-clip GFLOPs on Kinetics-400.
\ No newline at end of file
diff --git a/ctnetchanneltensorizationnetworkforvideoclassification/images.zip b/ctnetchanneltensorizationnetworkforvideoclassification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..471376b36dabb6a14b8004084e3903a8874dde15
--- /dev/null
+++ b/ctnetchanneltensorizationnetworkforvideoclassification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2be92fe1d0561458f5aab568e5dcc5332639e4bf43fd9b332351f863411581bd
+size 1023865
diff --git a/ctnetchanneltensorizationnetworkforvideoclassification/layout.json b/ctnetchanneltensorizationnetworkforvideoclassification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e3efdf2b605e6519e440ef53897eb6150cdce301
--- /dev/null
+++ b/ctnetchanneltensorizationnetworkforvideoclassification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a40bad0ed55286b2ed8f8bb3d31bbe0e44802905a3c43339bd7ba05b9ea8f88
+size 418989